Category Archives: EAST

Peer Conference: How can we convince everyone to prioritize testing?

On the evening before DevLin 2018, a small band of merry test specialists in the Linköping area gathered for a short peer conference session on the topic: “How can we convince everyone to prioritize testing?”

“Everyone” in this case primarily means people working in disciplines other than testing, e.g. product owners, managers, programmers, requirements analysts, system engineers, and so on. Most testers have experienced, many times over, the difficulty of getting someone else to understand the importance of correctly weighing the need for thorough testing against the demands for quicker releases, more features and faster time to market. With that background, and armed with a few examples to get us on the right track, this peer conference was ready to start.

To kick things off, James Bach gave a short presentation of the recent and yet-to-be-published work that he and Michael Bolton have done on an updated version of the Agile Testing Quadrants (from 2014), which added a couple of new elements and was also easier to explain than the previous version.

After the presentation, we initiated a K-card facilitated discussion about a broad collection of thoughts and reactions to that presentation and the general topic for the night. These are some of the threads (definitely not all) that I’m pulling from memory:

What does it mean to do deep testing? Is there an implicit level of coverage associated with claiming that you’re doing deep testing? Opinions ranged from “deep testing has happened when you can assert that you ‘know’ you’ve found all important/significant bugs in a given area” to “deep testing can occur on a very limited set of variables for a given function or quality aspect in a larger scope of mostly shallow testing”. (These are not meant to be exact quotes. Interpretation and emphasis are mine.)

We also talked about combining multiple testing activities from multiple quadrants, e.g. testing designed to answer the question “Did we build what we think we built?” together with deep testing designed to reveal and provide “knowledge of every important bug”. While these two types of testing activities can normally be done in parallel quite successfully, we still spent a good while discussing contexts where the two may not be suitable to run in parallel and what to do instead. In a context where a lot of big changes are happening rapidly, or where there is general chaos, deep testing might not be an efficient use of resources at that point in time. Testers working in that domain might do well to consciously move into a preparation domain and perform activities that help us test more efficiently: testability advocacy, analysis, specification, test data generation, constructing test environments, etc. This type of movement reminded me of the dynamics of Cynefin and would, in my mind, be a movement that one could make either voluntarily or involuntarily depending on the circumstances surrounding the tester.

Another extremely fascinating discussion was on the concept of critical distance and its relationship to social distance. From Bach/Bolton: “Critical Distance refers to the difference between one perspective and another. Testing benefits from diverse perspectives. Shallow testing is tractable at a close critical distance, whereas deeper or naturalistic long-form testing tends to require or create more distance from the builder’s mindset.”

In other words, you want and need a fair bit of critical distance in order to do deep testing, but in order to work well with others and build rapport with the people who built the thing you’re testing, you want a close social distance. The problem is that critical distance and social distance go hand in hand. They are more or less bungee-corded to each other, which creates an interesting trade-off. As your critical distance increases, so does your social distance, and vice versa. Decrease social distance, and you risk decreasing your critical distance. On the other hand, a certain amount of social closeness is necessary both to be able to gain information about the thing being built, and to avoid being seen as a socially inept weirdo whom no one listens to anyway. It’s all about finding the sweet spot. (And there are of course exceptions, and things that can be done to increase critical distance without negatively impacting social distance in the workplace, though maybe not always easily.)

Getting programmers on board with testing is something that many of us have tried in the past, with fairly good results, and as such it was hardly addressed head on as far as I can remember(?). Pairing, sharing knowledge of test techniques, and discussing the concept and specifics of testability and its benefits for both disciplines are examples of ways to make programmers more receptive to “testing talk”.

Finally, we spent some time discussing how to get management to understand the importance of testing. This is sometimes a difficult nut to crack. I find that it can be valuable and fruitful to talk to management about various ways to look at quality (e.g. quality criteria) and how much of the risk associated with many quality criteria will never be written down in checkable requirements, but must be discovered through exploration and deep testing. It was also pointed out in the group that domain-specific examples, and examples of bugs that have been covered lately in the news, can be a good way to get their attention. A third way, which is easier said than done, is to achieve high credibility with management, which makes it more likely that they will listen when you try to raise awareness of the importance of testing.

Credibility can be earned either by doing a good job over time, or by doing an exceptionally thorough and excellent job on a single task with the potential for high visibility, in which case it’s worth going the extra mile in order to be able to cash in those credibility chips later on.

As I’ve already stated, there were many more topics covered that for the moment escape my memory, but all in all, this evening was for me an awesome example of how much value can be squeezed out of only a few (~3) hours when a small peer group sits down to discuss big topics with a lot of passion. Good fun and great company too. I will definitely try to help schedule these types of sit-downs more often.

Thank you to all participants for a great evening, and a special thank you to Agnetha and Erik for co-organizing the evening together with me, to Rebecca and Morgan for providing the room, and to James Bach for joining us while in town.

Credit for the contents of this post belongs to the contributors of this peer conference:  Johan Jonasson, Morgan Filipsson, Rebecca Källsten, Erik Brickarp, Agnetha Bennstam, Magnus Karlsson, Anders Elm, James Bach & Martin Gladh

Trying on hats

After having missed out on a couple of EAST gatherings lately, I finally managed to make it to this month’s meetup this past Thursday (the group’s 11th meetup since its inception, for those who like to keep score). This meetup was a bit different from past ones, in a good way. Not that the other ones haven’t been good, but it’s fun to mix things up. The plan for the evening was to study, implement and evaluate Edward de Bono’s Six Thinking Hats technique in a testing situation. The Six Thinking Hats is basically a tool to help both group discussions and individual thinking by using imagined (or real) hats of different colors to force your thinking in certain directions throughout a meeting or workshop. Another cool thing at this meetup was that there were at least a handful of new faces in the room. We’re contagious, yay!

We started out by watching Julian Harty’s keynote address from STARWEST 2008, “Six Thinking Hats for Software Testers”. In this talk, Julian explains how he successfully implemented Edward de Bono’s technique when he was at Google and how it helped them get rid of limiting ideas, poor communication, and pre-set roles and responsibilities in discussions and meetings.

So what can we use these hats for? Julian suggests a few areas in his talk:

  • Improving our working relations, by helping reduce the impact of adversarial relationships and in-fighting.
  • Reviewing artifacts like documents, designs, code, test plans and so on.
  • Designing test cases, where the tool helps us to ask questions from 6 distinct viewpoints.

Julian recommends starting and ending with the Blue Hat, which is concerned with thinking about the big picture. Then continuing with the Yellow Hat, which symbolizes possibilities and optimism. The Red Hat, symbolizing passion and feelings. The White Hat, which calls for the facts and nothing but the facts (data). The Black Hat, the critiquing devil’s advocate hat, which looks out for dangers and risks. And finally, after going through all the other hats to help us understand the problem domain, we move on to the Green Hat, which lets us get creative, brainstorm and use the power of “PO”.

PO stands for provocative operation and is another one of de Bono’s useful tools that helps us get out of ruts. If you find yourself stuck in a thinking pattern, you have someone throw in a PO, in order to help people get unstuck and think along new lines.

There are five different methods for generating a PO: Reversing, Exaggerating, Distorting, Escaping and Wishful Thinking. All of them encourage you to basically “unsettle your mind”, thereby increasing the chances that you will generate a new idea (a.k.a. “movement” in the de Bono-verse). You can get a brief primer here if you’re interested in learning more, though I do recommend going straight for de Bono’s books instead. Now, we didn’t discuss PO much during the meetup, but it reminded me to go back and read up on these techniques afterwards. It would be fun to try them out in sprint planning or when breaking down larger test ideas.

So after we’d watched the video through, we proceeded to test a little mobile cloud application that had been developed by a local company here in Linköping. The idea was to try to apply the six hats way of thinking while pair testing, which was a cool idea, but it soon became clear that we needed to tour the application a bit first in order to apply the six hats. Simply going through the six hats while trying to think about a problem domain you know nothing about didn’t really work. Also, bugs galore, so there wasn’t really much need to get creative about test ideas. Still, a good exercise that primed our thinking a bit.

Afterwards we debriefed the experience in the group and I think that most of us felt that this might be a useful tool to put in our toolbox, alongside other heuristics. When doing test planning for an application that you know a bit more about, it will probably be easier to do the six hats thinking up front. With an unknown application, you tend to fall back to using other heuristics and then putting your ideas into one of the six hats categories after the fact, rather than using the hats to come up with ideas.

I also think the six hats would be very useful together with test strategy heuristics like SFDPOT, examining each product element with the help of the hats, to give your thinking extra dimensions. Same principle as you would normally use with CRUSSPIC STMPL (the quality characteristics heuristic) together with SFDPOT. Or why not try all three at the same time?
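
To make that combination a bit more concrete, here is a minimal sketch (my own illustration, not something we built at the meetup) of what crossing SFDPOT with the six hats could look like as a simple test-idea prompt generator. The hat descriptions are paraphrased from the talk, and the prompt wording is entirely invented:

```python
# Illustrative sketch only: cross the six thinking hats with the SFDPOT
# product elements to generate one open test-idea prompt per combination.
from itertools import product

HATS = {
    "Blue": "the big picture and the thinking process itself",
    "Yellow": "possibilities and optimism",
    "Red": "passion and feelings",
    "White": "facts and data",
    "Black": "dangers and risks",
    "Green": "creative, brainstormed ideas",
}

SFDPOT = ["Structure", "Function", "Data", "Platform", "Operations", "Time"]

def generate_prompts():
    """Yield one open question per (product element, hat) combination."""
    for element, (hat, focus) in product(SFDPOT, HATS.items()):
        yield f"{element} / {hat} hat: what do {focus} suggest about the product's {element.lower()}?"

if __name__ == "__main__":
    for prompt in generate_prompts():
        print(prompt)
```

Thirty-six prompts is obviously more than you would walk through in one sitting, but even skimming the grid tends to surface a dimension you had not considered.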

As usual, a very successful and rewarding EAST meetup. Sitting down with peers in a relaxed environment (outside business hours) can really do wonders to get your mind going in new directions.

For a more in-depth look at the original idea of the hats, see Edward de Bono’s books Six Thinking Hats (1985) or Lateral Thinking: A Textbook of Creativity (2009), which also describes them pretty well, if I remember correctly.

Edit: If you want to read less about the hats and more about how the meetup was actually structured (perhaps you want to start your own testing meetups?), head on over to Erik Brickarp’s blog post on this same meetup.

EAST meetup #7

Last night, EAST (the local testing community in Linköping) had its 7th “official” meetup (not counting summer pub crawls and the improvised restaurant meetup earlier this fall). A whopping 15 people opted to prolong their workday by a few hours and gather to talk about testing inside Ericsson’s facilities in Mjärdevi (hosting this time, thanks to Erik Brickarp). Here’s a short account of what went down.

The first presentation of the night was me talking about the past summer’s CAST conference and my experiences from it. The main point of the presentation was to give people who didn’t know about CAST before an idea of what makes it different from “other conferences” and why it might be worth considering attending from a professional development standpoint. CAST is the conference of the Association for Software Testing, a non-profit organization with a community made up of lots of cool people and thinking testers. That alone usually makes the conference worth attending. But naturally, I’m a bit biased.

If you want to know more about CAST, you can find some general information on the AST website, and CAST 2012 in particular has been blogged about by several people, including myself.

Second presentation was from Victoria Jonsson and Jakob Bernhard who gave their experience report from the course “The Whole Team Approach to Agile Testing” with Janet Gregory that they had attended a couple of months ago in Gothenburg.

There were a couple of broad topics covered. All had a hint of the agile testing school to them, but from the presentation and the discussions that followed, I got the impression that the “rules” had been delivered as good practices rather than best practices, with a refreshingly familiar touch of “it depends”. A couple of the main topics (as I understood them) were:

  • Test automation is mandatory for agile development
    • Gives more time for testers to do deeper manual testing and focus on what they do best (explore).
    • Releasing often is not possible without an automated regression test suite.
    • Think of automated tests as living documentation.
  • Acceptance Testing could/should drive development
    • Helps formulating the “why”.
    • [Comment from the room]: Through discussion, it also helps clarify what we mean by e.g. “log in” in a requirement like “User should be able to log in” (see the sketch after this list).
  • Push tests “lower” and “earlier”
    • Aim to support the development instead of breaking the product [at least early on, was my interpretation].
    • [Discussion in the room]: This doesn’t mean that critical thinking has to be turned off while supporting the team. Instead of breaking the product, transfer the critical thinking elsewhere, e.g. to the requirements/user stories, and analyze them critically, asking “what if” questions.
    • Unit tests should take care of task-level testing, acceptance tests handle story-level testing, and GUI tests should live at the feature level. [Personally, and this was also the reaction of some people in the room, this sounds a bit simplified. It might not be meant to be taken literally.]
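
On the “log in” point above, here is a hypothetical sketch of what such a clarification could end up looking like as executable acceptance checks. The names and the tiny in-memory `login` stand-in are invented for illustration and are not from the course or the meetup:

```python
# Hypothetical sketch: turning the vague requirement "User should be able to
# log in" into concrete, checkable examples. The in-memory `login` below is a
# stand-in for the real system under test.
import pytest

class AuthError(Exception):
    pass

_USERS = {"alice": "correct horse battery staple"}

def login(username: str, password: str) -> dict:
    """Stand-in for the real login; returns a session dict or raises AuthError."""
    if not username or _USERS.get(username) != password:
        raise AuthError("invalid username or password")
    return {"user": username, "authenticated": True}

# Each check answers a question the team would otherwise leave implicit
# when saying "the user can log in".

def test_valid_credentials_create_an_authenticated_session():
    session = login("alice", "correct horse battery staple")
    assert session["authenticated"]

def test_wrong_password_is_rejected_with_a_clear_error():
    with pytest.raises(AuthError, match="invalid username or password"):
        login("alice", "wrong password")

def test_empty_username_is_rejected():
    with pytest.raises(AuthError):
        login("", "anything")
```

The value is less in the code itself and more in the conversation needed to agree on what the three checks should say.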

There was also a discussion about test-driven development, and some suggestions of good practices came up, like for instance how testers on agile teams should start a sprint by discussing test ideas with the programmer(s), outlining the initial test plan for them. That way, the programmer(s) can use those ideas, together with their own unit tests, as checks to drive their design and potentially prevent both low- and high-level bugs in the process. In effect, this might also help the tester receive “working software” that is able to withstand more sapient exploratory testing, and the discussion process itself helps remove confusion and assumptions surrounding the requirements that might differ between team members. Yep, communication is good.

All in all, a very pleasant meetup. If you’re a tester working in the region (or if you’re willing to travel) and want to join the next meetup, drop me an e-mail or comment here on the blog and I’ll provide information and give you a heads up when the next date is scheduled.

EAST meetup #6

About 6 months ago, a few testing peers in Linköping, Sweden, started a local competence network group that we named “EAST”. The name itself doesn’t mean anything, but suggests that we reside in the south-east part of Sweden and that we welcome people from all around these parts, not just our little town.

The idea was to get people from different organizations and companies together to talk about testing and to help each other learn more about it by sharing knowledge and experiences.

So this past Monday I attended the 6th meetup with the group. We’ve gone through a couple of meetings in the early stages where we got to know each other and talked about what we all do at our respective companies in terms of testing. By the 3rd or 4th meetup, we were starting to have more prepared themes for each meetup and this time we actually had two separate themes prepared.

First, we got to listen to Hanna Germundsson, who presented a software testing thesis she’s working on, geared towards test processes and minimizing testing-related risks. We got to ask questions and also had time for some open conversations in the group about the different questions Hanna presented. There will be follow-ups to this one for sure. I haven’t seen that many software testing theses before. Very cool.

For the second part of the meetup, a couple of people talked about their experiences from the Let’s Test 2012 conference. While it’s of course fantastic to listen to people talk about this conference that I helped organize, it was even cooler to hear them describe specific take-aways and learnings from the different tutorials and sessions they attended. Check out the Let’s Test 2012 archives page for slides and other material related to the conference.

EAST will take a break over the summer, but we’ll be back in force this fall. If you live nearby and want to participate then I suggest you join the EAST LinkedIn group to stay in touch with news and announcements. Or follow @test_EAST on Twitter. Or both.