As some of you may know, for some months now I've been digging into automated testing. Unit testing, Selenium testing, coverage tools, specification testing, I'm lovin' it. But the tools aside, I have come across one problem which I've both seen first hand and heard plenty about second hand: testers are not developers, nor are developers testers! You can't sit your testers in front of a blank Visual Studio solution and expect them to fly, and on the other hand your devs don't want to test; they want to be off building cool stuff! So what's the solution?
What To Do? - Build Testability Into Your Product
The first and worst hurdle I've seen has been an automation-unfriendly UI. Assuming you have a tester who can code, and groks the testing framework, the last piece of the puzzle is the product under test. If your site has no clear IDs or classes on your links, and you want your testers to assert against text that's actually embedded in images, you might have problems.
When you introduce unit tests into code, your coders accept that to get the most out of unit tests, they have to make some changes to the way they code. When you introduce automation testing into your UI, you have to prepare for the same upheaval. At a minimum you'll need to be aware of the following:
- Give your testers the same ability to request UI changes as anyone else, and give them access to a dev who can prioritise those requests, so that no time is wasted on ninja workarounds or flaky hacks.
- You WILL have to add some more IDs or classes to elements that didn't previously have them, so that they can be easily addressed via a nice predictable XPath or CSS-style selector.
- You will have to consider whether or not any nifty Ajax or Flash widgets on your site are testable, and if not what to do about them.
- Some browser pop-ups, for example security prompts, can kill automation frameworks. Make sure you can suppress these in your test environment.
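To make the first two points concrete, here's a before-and-after sketch of the kind of markup change involved; the IDs, classes and attributes are invented for illustration:

```
<!-- Hard to automate: no stable hook for a selector, and the
     "add to basket" text is baked into an image -->
<td><a href="javascript:addItem(4711)"><img src="buy.gif"></a></td>

<!-- Automation-friendly: a predictable id and class give the test
     something stable to address, e.g. the CSS-style selector
     #add-to-basket-4711 or a.add-to-basket -->
<td>
  <a id="add-to-basket-4711" class="add-to-basket"
     href="javascript:addItem(4711)">Add to basket</a>
</td>
```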
What Else? - Kill Off Selenium IDE as Quickly as Possible!
This is a pet peeve of mine - Selenium IDE is a Firefox plugin that generates basic Selenium scripts while you navigate a site, and these scripts can then be replayed later. For a basic one-off test, great. For tests that have to stand the test of time, not so great. If you're going to follow my next piece of advice, and you want to offer your testers an automation API, Selenium IDE can become a royal pain in the arse.
Once a tester has seen how easy it is to generate mediocre scripts using Selenium IDE, expect to pry it from their cold dead hands before they'll go back to hand-cranking tests, no matter how sweet the code API you give them. Your testers may not be aware, or worse may not care, that their mediocre Selenium scripts are guaranteed to become a maintenance nightmare.
And Then What? - Build an Automation API
Once you're confident that your code is testable, the next piece of the puzzle is your tester's knowledge of general programming. My manager @CarlPhillips007, who knows one helluva lot more about how people tick than I do, makes the point that devs and testers have chosen their professions for a reason: testers have seen the dev world, and they still choose to be testers. From my own perspective, try and turn me into a tester and you won't see me for dust! Therefore, if I have developers and testers with time to spend together, I'd recommend spending that time making the tester's life easy and productive, not trying to turn that tester into a dev.
How to give your tester an easy life? There are a few ways this can be done. At one end of the scale, you can solve small individual problems for your testers: for example, instead of expecting your testers to know the intricacies of XPath, have your devs create XPath builder classes at the same time as the HTML is built. Your testers can then address page content in plain ol' C#, getting the full benefit of the IDE's IntelliSense, and you've got another level of isolation between the test code and the site design.
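As a sketch of what such an XPath builder class might look like (all class and method names here are invented for illustration, not from a real library):

```csharp
using System.Text;

// Minimal XPath builder sketch: testers compose selectors in C#
// and get IntelliSense, instead of hand-writing XPath strings.
public class XPathBuilder
{
    private readonly StringBuilder _path = new StringBuilder();

    // Any descendant element, e.g. Descendant("div") -> "//div"
    public XPathBuilder Descendant(string element)
    {
        _path.Append("//").Append(element);
        return this;
    }

    public XPathBuilder WithId(string id)
    {
        _path.Append("[@id='").Append(id).Append("']");
        return this;
    }

    // contains() copes with elements carrying several classes
    public XPathBuilder WithClass(string cssClass)
    {
        _path.Append("[contains(@class,'").Append(cssClass).Append("')]");
        return this;
    }

    public override string ToString()
    {
        return _path.ToString();
    }
}
```

A tester can then write `new XPathBuilder().Descendant("div").WithId("results").Descendant("a").WithClass("product-link")` and get `//div[@id='results']//a[contains(@class,'product-link')]` without ever touching raw XPath, and if the markup changes, only the builder calls need updating.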
At the other end of the scale, you build a set of wrapper classes around your whole application that represent each of the tasks a user would perform on your site. For example, you might have a "SiteNavigation" class that contains methods such as "SelectCategory", "SelectBreadcrumbLink", "SearchFor", etc. Or you might have a "ProductBrowser" class that contains methods such as "SelectPriceRange", "SelectProductByName", "SelectProductByIndex", "GetNoOfMatchingProducts", etc.
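A rough sketch of that wrapper layer, assuming a minimal IBrowser interface standing in for whatever driver API you actually use (Selenium RC, WebDriver, ...); every class name, method name and locator below is illustrative, not a real Selenium API:

```csharp
using System.Collections.Generic;

// Stand-in for the real driver API; in practice this would wrap
// Selenium's own interface.
public interface IBrowser
{
    void Click(string locator);
    void Type(string locator, string text);
    int CountElements(string locator);
}

// Task-level wrappers: tests talk in user actions, not locators.
public class SiteNavigation
{
    private readonly IBrowser _browser;
    public SiteNavigation(IBrowser browser) { _browser = browser; }

    public void SearchFor(string term)
    {
        _browser.Type("css=#search-box", term);
        _browser.Click("css=#search-button");
    }

    public void SelectCategory(string name)
    {
        _browser.Click("css=#categories a[title='" + name + "']");
    }
}

public class ProductBrowser
{
    private readonly IBrowser _browser;
    public ProductBrowser(IBrowser browser) { _browser = browser; }

    public void SelectProductByName(string name)
    {
        _browser.Click("css=#results a.product[title='" + name + "']");
    }

    public int GetNoOfMatchingProducts()
    {
        return _browser.CountElements("css=#results a.product");
    }
}

// A throwaway fake, handy for demonstrating the API without a browser.
public class RecordingBrowser : IBrowser
{
    public readonly List<string> Calls = new List<string>();
    public void Click(string locator) { Calls.Add("click " + locator); }
    public void Type(string locator, string text) { Calls.Add("type " + locator); }
    public int CountElements(string locator) { Calls.Add("count " + locator); return 0; }
}
```

The payoff is isolation: if a locator changes, only the wrapper changes, and every test that calls SearchFor keeps working untouched.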
A Selenium Testing Case Study - The Wrong Way
A tester is brought into the business, given a copy of Visual Studio, the Selenium plugin, and access to our QA sites and our developers. Our developers spend some weeks teaching the tester the basics of how we want the automation tests written, then hand over to the tester. Some months later, the tester has produced integration tests that work well and give confidence that the application under test is working as it should. Progress seems a little slow, but at first glance things are going well.
Another few months pass, and it's decided that to speed up the automation work there needs to be much better integration between the testers and the developers, so a developer sits down with the tester to review the code.
Yowee! The tests live in a single ProductName.cs file with 8,000-odd lines of code and 50-odd class-level variables. The same class includes code to manipulate the database for certain scenarios and code to read and write test scenarios to a spreadsheet. There's a week's worth of refactoring to do, an unhappy tester watching his tests get ripped apart, and we've still got the problem of making sure we don't wind up in exactly the same position in another few months.
A Selenium Testing Case Study - The Right Way
A tester is brought into the business and given access to our QA sites and our developers. The tester spends some days with our developers working out the test API, and the developers then work for the tester to provide test APIs for each of the features the tester has identified as needing coverage. Initially progress is slow, and there's a temptation to go back to leaving all the testing to the testers. And after all, aren't we just offloading work from the testers to the developers?
But as the developers cover more and more of the application with the test API, the testers can spend more and more time experimenting with different scenarios, without having to rely so much on the availability of the developers. Eventually the work building the test API pays off, the whole app is covered, and it's now accepted that as part of delivering changes, corresponding work is done to extend the testing API so that the acceptance tests can be completed.
What About BDD Testing Tools, e.g. Gherkin, SpecFlow, etc?
Some of you reading this may think this idea of a test API, and separation between test scenarios and test implementation sounds familiar. The ultimate separation I've come across to date between integration tests and the underlying automation toolset is in the area of BDD, using tools such as SpecFlow and the Gherkin language. In fact, it's my recent experiences with these tools that have crystallised some of the above thoughts.
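For anyone who hasn't seen it, this is roughly what that separation looks like; the feature below is invented for illustration, and with SpecFlow each step gets bound to a C# method via the [Given]/[When]/[Then] attributes:

```
Feature: Product search
  In order to find what I want quickly
  As a shopper
  I want to narrow the catalogue by searching

  Scenario: Searching narrows the product list
    Given I am on the home page
    When I search for "widgets"
    Then I should see at least 1 matching product
```

The tester owns the scenarios in plain English; the step bindings underneath them are exactly the kind of automation API discussed above.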
To date I've only seen BDD used in anger on one-man projects. I've yet to see how the technique flies in a larger team: for example, does SpecFlow squeeze QA into too small a slice, or is it right that QA only cares about the test specifications, freeing them up for the profitable stuff of exploratory testing? Do you expand QA's remit to include working with the project team to create the initial acceptance specifications that everyone else works to? Hopefully there's plenty of material there for a future post!