My colleague Sarah Taraporewalla posted an interesting piece on acceptance testing. She doesn't believe in the technique. I've been thinking about acceptance tests for some months now, and I think she has a valid point (a bit too radical, but still valid).
My main problems with acceptance tests are:
- They are temporary. The acceptance criteria for a story won't necessarily hold for more than a few iterations, days, or even hours.
- They add time to the build. Adding redundant test cases to each story just to explicitly fulfill its acceptance criteria often makes builds take longer than is acceptable.
Don't get me wrong: I find acceptance tests useful.
And then the waste starts to become a real problem. In an iterative and incremental project it is very common to invalidate dozens of old acceptance criteria just by playing a single story. The developer playing that card must search the code base and decide what should be kept, what should be modified, and what makes no sense anymore. That takes time.
And the build gets longer and longer. Because we test per story, not per feature or scenario, the build time skyrockets. In recent projects I've seen builds grow by orders of magnitude due to acceptance tests alone. Every finished story adds more time to the build.
So, how can we keep the benefits of automated acceptance testing and get rid of the problems it causes?
I've asked this question of a lot of smart folks, and no one could really answer it. I'm trying an approach that's new to me, and I can't remember reading about anything similar anywhere. This technique aims at fighting the two main causes of crappy acceptance tests.
The first and probably worst cause is that developers are not used to writing tests. I don't say this as an excuse; I do think professional software developers should know about boundaries, branches and other testing concepts. The problem is that because developers don't know much about testing, they usually create inefficient tests. In a recent metrics-gathering exercise I found that about 20% of the time taken by one project's build process was spent testing exactly the same scenario over and over again. To avoid wasting effort on inefficient tests, we decided that the QA people own the acceptance tests.
The other cause is that acceptance criteria are temporary. To avoid spending time maintaining something that may no longer be valid in an hour's time, we decided that acceptance tests should be merged into the automated system tests once the story is done.
Our current process has, as Dahlia explained, something I like to call the user story sandwich. There are some issues with this approach, but it matches our acceptance test strategy seamlessly. It starts with a kick-off involving business analysts, developers, QAs and other stakeholders. This takes 15 minutes on average, and in this kick-off we revisit the acceptance criteria.
Immediately after this, the devs and QAs draft acceptance test cases. These test cases guide developers during their work, and we consider a story dev-complete when they pass. The tests written at this point are very similar to what we were using before, except that they are now generally written by a tester with the developer's support.
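To make the idea concrete, here is a minimal sketch of what such a story-level acceptance test might look like. The pricing domain and every name in it are my own illustration, not something from the project described here:

```python
# Hypothetical story: "A customer can apply a discount code at checkout."
# The function below stands in for the real application under test.

def price_with_discount(total, code):
    """Stand-in for the application; a real test would drive the app itself."""
    return total * 0.9 if code == "SAVE10" else total

# Acceptance criteria drafted together by QA and devs right after the
# kick-off; the story is considered dev-complete when these pass.
def test_valid_code_applies_ten_percent_discount():
    assert price_with_discount(100.0, "SAVE10") == 90.0

def test_unknown_code_is_ignored():
    assert price_with_discount(100.0, "NOPE") == 100.0
```

The value is less in the assertions themselves than in having them written down, and executable, before any production code exists.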
When the story reaches dev-complete we do a walkthrough, a little showcase where the devs present the working feature to business people and testers. The testers' main role at this stage is to verify that the acceptance tests pass as expected. Because devs and QAs communicate constantly during development, it is very unlikely that a story will be rejected at this point.
After the walkthrough, the acceptance tests are handed back to the QA people. To finally reach the DONE column on the story wall, a card has to pass through testing, and the first step the testers perform is converting the acceptance tests into system tests. System tests are generally organized around features rather than stories, and they are often more concise and efficient than per-story tests. They run as part of the build process, either in the normal dev build or as the second stage of the build pipeline.
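The conversion step above is where the build-time saving comes from: several per-story tests that each repeated the same slow setup collapse into one per-feature test. A minimal sketch, using a hypothetical pricing feature of my own invention:

```python
# Imagine this walks the whole checkout flow in a browser: the expensive
# part that each per-story acceptance test used to repeat.
def expensive_checkout_setup():
    return {"total": 100.0}

# Stand-in for the application under test (illustrative, not real code).
def price_with_discount(total, code):
    return total * 0.9 if code == "SAVE10" else total

# One feature-level system test runs the slow scenario once and then checks
# every criterion that the individual stories used to check separately.
def test_discount_feature():
    order = expensive_checkout_setup()  # executed once, not once per story
    assert price_with_discount(order["total"], "SAVE10") == 90.0   # story A
    assert price_with_discount(order["total"], "NOPE") == 100.0    # story B
```

The per-story tests were correct individually; the waste was structural, and merging them removes it without losing any assertion.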
This approach has been working quite well so far. Unfortunately, we only decided to adopt it towards the end of the project, so I couldn't see whether it works over a full project lifecycle.
There are some tricky bits in this approach.
The first and obvious one is that your testers should not only automate their tests but also use a tool developers can use, understand and extend. This may be a problem for some teams, but a lot of projects are already adopting tools like Twist, Selenium, Concordion and the like for acceptance tests. Even lower-level API tests can generally be written by testers with the help of a Domain-Specific Language.
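For the lower-level API case, a small fluent wrapper is often enough to make tests readable for testers and extendable by developers. This is a toy sketch of the general idea, not any real tool's API; the `ApiCheck` class, its method names and the fake routes are all assumptions:

```python
class ApiCheck:
    """Tiny fluent DSL so testers can express API checks in near-English."""

    def __init__(self):
        self._response = None

    def when_i_get(self, path):
        # A real implementation would issue an HTTP request here; this stub
        # fakes a response table so the sketch stays self-contained.
        fake_routes = {"/orders/42": {"status": 200, "body": {"id": 42}}}
        self._response = fake_routes.get(path, {"status": 404, "body": {}})
        return self

    def then_status_is(self, expected):
        assert self._response["status"] == expected, self._response
        return self

    def and_body_has(self, key, value):
        assert self._response["body"].get(key) == value, self._response
        return self

# Readable enough for a tester to write, plain enough for a dev to extend:
ApiCheck().when_i_get("/orders/42").then_status_is(200).and_body_has("id", 42)
```

Developers extend the vocabulary (new `when_` and `then_` methods) while testers compose the sentences, which keeps both groups working in the same code base.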
Another point of attention is that you should not forbid developers from writing higher-level tests. In our team discussions it was pointed out many times that, if the process works, a developer should not need to write any high-level tests besides those developed together with the tester. Although I agree with this in theory, to me it is like saying that developers shouldn't run the website they're building because the testers will do that anyway. For now we have a special source folder where developers can write their own tests. This folder currently holds several tests covering scenarios that testers didn't consider relevant but developers still want to verify.
So far we didn’t see any improvement in the build time. The main impediment is that we still have heaps of old acceptance tests, and obviously business people won’t spend time converting those. We try to convert then to the new style when they need some maintenance but I don’t believe we will be able to convert even 50% until the project is finished. We are not making the build longer, what is good, but we are not reducing its duration.
One improvement we did see is in the time cards spend in the "test" column. Because the acceptance tests were written together with the testers, there are only a few extra tests for them to add after converting them to system tests. Creating the acceptance tests right after the kick-off is also helping to clarify some obscure topics before any code is written.
Will we continue doing this? Would I recommend the approach? It is too early to say. It is working so far, but I can't claim it is the ideal solution. I'm looking for ideas.