Peer Conference: How can we convince everyone to prioritize testing?

In the evening the day before DevLin 2018, a small band of merry test specialists in the Linköping area gathered for a short peer conference session on the topic: “How can we convince everyone to prioritize testing?”

“Everyone” in this case primarily means people working in disciplines other than testing, e.g. product owners, managers, programmers, requirements analysts, system engineers, and so on. Most testers have experienced, many times over, how difficult it can be to get someone else to understand the importance of correctly weighing the need for thorough testing against the demands for quicker releases, more features and faster time to market. With that background, and armed with a few examples to get us on the right track, this peer conference was ready to start.

To kick things off, James Bach gave a short presentation of the recent, yet-to-be-published work that he and Michael Bolton have done on an updated version of the Agile Testing Quadrants (from 2014), which included a couple of new additions but was also easier to explain than the previous version.

After the presentation, we initiated a K-card-facilitated discussion about a broad collection of thoughts and reactions to the presentation and the general topic for the night. These are some of the threads (definitely not all) that I’m pulling from memory:

What does it mean to do deep testing? Is there an implicit level of coverage associated with claiming that you’re doing deep testing? Opinions ranged everywhere from “deep testing has happened when you can assert that you ‘know’ you’ve found all important/significant bugs in a given area” to “deep testing can occur on a very limited set of variables for a given function or quality aspect within a larger scope of mostly shallow testing”. (These are not meant to be exact quotes. Interpretation and emphasis are mine.)

We also talked about combining multiple testing activities from multiple quadrants, e.g. testing designed to answer the question “Did we build what we think we built?” together with deep testing designed to reveal and provide “knowledge of every important bug”. While these two types of testing activities can normally be done in parallel quite successfully, we still spent a good while discussing contexts where they may not be suitable to run in parallel, and what to do instead. In a context where a lot of big changes are happening rapidly, or where there is general chaos, deep testing might not be an efficient use of resources at that point in time. Testers working in such a situation might do well to consciously move into a preparation mode and perform activities that help them test more efficiently later: testability advocacy, analysis, specification, test data generation, constructing test environments, etc. This type of movement reminded me of the dynamics of Cynefin and would, in my mind, be a movement one could make either voluntarily or involuntarily depending on the circumstances surrounding the tester.

Another extremely fascinating discussion was on the concept of critical distance and its relationship with social distance. From Bach/Bolton: “Critical Distance refers to the difference between one perspective and another. Testing benefits from diverse perspectives. Shallow testing is tractable at a close critical distance, whereas deeper or naturalistic long-form testing tends to require or create more distance from the builder’s mindset.”

In other words, you want and need a fair bit of critical distance in order to do deep testing, but in order to work well with others and build rapport with the people who built the thing you’re testing, you want a close social distance. The problem is that critical distance and social distance go hand in hand. They are more or less bungee-corded to each other, which creates an interesting trade-off. As your critical distance increases, so does your social distance, and vice versa: decrease social distance, and you risk decreasing your critical distance. On the other hand, a certain social closeness is necessary both to be able to gain information about the thing being built, and to not be seen as a socially inept weirdo who no one listens to anyway. It’s all about finding the sweet spot. (There are of course exceptions, and things that can be done to increase critical distance without negatively impacting social distance in the workplace, though perhaps not always easily.)

Getting programmers on board with testing is something that many of us have tried in the past with fairly good results, and as such it was hardly addressed head-on, as far as I can remember. Pairing, sharing knowledge of test techniques, and discussing the concept and specifics of testability and its benefits for both disciplines are examples of ways to make programmers more receptive to “testing talk”.

Finally, we spent some time discussing how to get management to understand the importance of testing. This is sometimes a difficult nut to crack. I myself find that it can be valuable and fruitful to talk to management about various ways of looking at quality (e.g. quality criteria), and about how much of the risk associated with many quality criteria will never be written down in checkable requirements but must be discovered through exploration and deep testing. It was also pointed out in the group that domain-specific examples, and examples of bugs that have recently been covered in the news, can be a good way to get their attention. A third way, which is easier said than done, is to achieve high credibility with management, which makes it more likely that they will listen when you try to raise awareness of the importance of testing.

Achieving credibility can be done either by doing a good job over time, or by doing an exceptionally thorough and excellent job on a single task with the potential for high visibility, in which case it’s worth going the extra mile in order to be able to cash in those credibility chips later on.

Like I’ve already stated, there were many more topics covered that for the moment escape my memory, but all in all, this evening was for me an awesome example of how much value can be squeezed out of only a few (~3) hours when a small peer group sits down to discuss big topics with a lot of passion. Good fun and great company too. I will definitely try to help schedule these types of sit-downs more often.

Thank you to all participants for a great evening, and a special thank you to Agnetha and Erik for co-organizing the evening together with me, to Rebecca and Morgan for providing the room, and to James Bach for joining us while in town.

Credit for the contents of this post belongs to the contributors of this peer conference: Johan Jonasson, Morgan Filipsson, Rebecca Källsten, Erik Brickarp, Agnetha Bennstam, Magnus Karlsson, Anders Elm, James Bach & Martin Gladh