What Do Agile Testers Look Like?

I recently spoke with a recruiter. “I don’t understand the QA market anymore. No one is hiring except for agile people. And they want people who are developers. What’s happening to QA?”

Manual testing was never quality assurance; it was testing. And manual testing is low-value, high-cost work, especially when you compare it to automated regression testing. Now, before you assume I mean there is never a time for manual testing: no, that is not what I said.

There is a time and place for exploratory testing, which tends to be more manual. There is a time and place for opportunistic testing. But the kind of system-level testing that says “Does this feature meet its acceptance criteria, and does the rest of the system still work and can we know that within a two-week timebox?” is not primarily manual testing. That’s what agile projects need. (If you’re using an incremental or iterative lifecycle, you need to know this too, just not in a two-week timebox.)
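
To make that concrete, here is a sketch of the kind of automated acceptance check a tester might write and run in continuous integration during the iteration. JUnit 5, the Checkout class, and the over-$100 discount rule are all invented for this example; they are not from any particular product.

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Sketch only: Checkout and its discount rule are hypothetical,
// standing in for whatever your product's acceptance criteria cover.
class CheckoutAcceptanceTest {

    // Minimal stand-in for the production code under test.
    static class Checkout {
        // Assumed acceptance criterion: orders over $100 get 10% off.
        long totalCents(long orderCents) {
            return orderCents > 10_000 ? orderCents * 90 / 100 : orderCents;
        }
    }

    @Test
    void ordersOverOneHundredDollarsGetTenPercentOff() {
        assertEquals(10_800, new Checkout().totalCents(12_000));
    }

    @Test
    void smallerOrdersPayFullPrice() {
        assertEquals(8_000, new Checkout().totalCents(8_000));
    }
}

Checks like these run in minutes on every build, which is what makes "does the rest of the system still work?" answerable inside the timebox rather than at the end of a manual regression pass.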

That means that agile testers have different technical skills and provide different deliverables (and value to the project) than the kinds of testers this recruiter is accustomed to.

These testers have different functional skills, solution-space domain expertise, and tool capabilities than strictly manual testers. See Four Dimensions of Technical Skill for more information. In agile teams, some of the testers don’t look much different from developers, except that their code doesn’t release. Some of the testers might be better at sitting with a product owner and saying “What does done really look like for this feature?”

Whatever their skills, these testers are multi-dimensional people, capable of much more than manual testing. They are not second-class citizens; they are valuable members of the product development team.

For you recruiters, think about what value the testers add to a project team. If you’re a tester, what skills do you need to acquire to provide more value to your project team? And, managers, does this make sense to you? Doesn’t knowing the state of the product now often provide more value than waiting days or weeks for a manual test run to finish?

6 Replies to “What Do Agile Testers Look Like?”

  1. Happy to read this post! I would add, though, that testers on agile teams are different than programmers, in that we often have a different point of view, ask different questions, and are likely to be better at exploratory testing.

    Our code may not get ‘released’, but our team’s tests are in the source code control along with the production code.

  2. Johanna, your post couldn’t have come at a better time for me. Currently I’m helping two different agile teams try to mold their QA process. One team is actively looking to hire an Agile QA tester to work with their developers as they complete stories/features. The other is trying to encourage their existing QA team (who are accustomed to a waterfall methodology) to work within the confines and needs of a two-week iteration, as well as having a member of the QA team be part of the engineering team.

    It’s the latter team that I’m finding harder to help, due to the organizational issues surrounding a separate QA team and the performance incentives they are given, which don’t necessarily help an agile engineering team. I’m trying to navigate and negotiate with members of management through this. I’ll let you know what I’m able to accomplish.

  3. I manage a test team at an agile ISV. Every tester in my team writes, tests and debugs automated tests in Java. They each perform the sysadmin and DBA tasks necessary to create and manage their own test environments. They have the build engineering skills to regularly check out, compile and build software to test from source as well as create and configure continuous integration builds to execute automated tests. They extend existing test tools and frameworks and develop new ones when required. They perform business analysis functions, influencing and correcting user stories and specifications. None of these things makes their testing better. Non-agile testers can (and do) do all of these things as well.

    What does make the testing my team does better are their detective skills, which they use to conduct thorough investigations of software so that detrimental behaviour is discovered and resolved quickly. This requires intuition, curiosity, suspicion, empathy, objectivity and subjectivity. These are skills and traits that should come up when discussing what makes a good tester in articles like this, but unfortunately rarely do. These can’t be automated in any meaningful way. To obtain value from them requires the undertaking of manual exploratory testing (I include manual here only because you stated exploratory testing “tends to be manual”, which suggests it could also be automated. I don’t believe it can be, so some examples would be useful). Again, exploratory testing is not exclusive to agile testers either.

    Given none of the skills above are exclusive to agile testers and the most valuable activity my agile test team performs is a type of manual testing, how should I make sense of your article that appears to suggest that automated testing is better than manual testing and that agile testers have different skills to non-agile testers?

    It would be helpful if you could clarify what you mean by manual, automated, exploratory and opportunistic testing, and when and where the right time and place is for each of them. Explain which parts of the system-level testing that says “Does this feature meet its acceptance criteria, and does the rest of the system still work and can we know that within a two-week timebox?” should be automated, and why. I think doing so would yield a much better insight into what agile testers really look like.

  4. Interesting post. I have been struggling with these concepts for some time and it has clarified my thinking.

    I would be interested in a good discussion of the costs and benefits of manual and automated testing. It seems to me, from my own experience and from reading Andrew’s post above, that automated testing is also a development activity, and like all development activities it has its own costs, risks and issues to contend with. For example, who ensures the testers’ scripts work correctly, and how?

    I would say that, generally, automated testing is useful for regression testing and other repeatable testing; it is almost the only way to effectively test for scalability and other performance criteria, but it is of limited use for one-off tests (e.g. the exploratory testing mentioned above).

    Where the tests themselves have to change frequently, and where the tests are quick and simple to run manually, it may be that the costs of automated tests outweigh the benefits.
