
Trying on hats

After having missed out on a couple of EAST gatherings lately, I finally managed to make it to this month’s meetup this past Thursday (the group’s 11th meetup since its inception, for those who like to keep score). This meetup was a bit different than past ones, in a good way. Not that the other ones haven’t been good, but it’s fun to mix things up. The plan for the evening was to study, implement and evaluate Edward de Bono’s Six Thinking Hats technique in a testing situation. The six thinking hats is basically a tool to help both group discussions and individual thinking by using imagined (or real) hats of different colors to force your thinking in certain directions throughout a meeting or workshop. Another cool thing at this meetup was that there were at least a handful of new faces in the room. We’re contagious, yay!

We started out by watching Julian Harty’s keynote address from STARWEST 2008, “Six Thinking Hats for Software Testers”. In this talk, Julian explains how he had successfully implemented Edward de Bono’s technique when he was at Google and how it helped them get rid of limiting ideas, poor communication, and pre-set roles and responsibilities in discussions and meetings.

So what can we use these hats for? Julian suggests a few areas in his talk:

  • Improving our working relations, by helping reduce the impact of adversarial relationships and in-fighting.
  • Reviewing artifacts like documents, designs, code, test plans and so on.
  • Designing test cases, where the tool helps us to ask questions from 6 distinct viewpoints.

Julian recommends starting and ending with the Blue Hat, which is concerned with thinking about the big picture. Then continuing with the Yellow Hat, which symbolizes possibilities and optimism. The Red Hat, symbolizing passion and feelings. The White Hat, which calls for the facts and nothing but the facts (data). The Black Hat, the critiquing devil’s advocate hat, which looks out for dangers and risks. And finally, after going through all the other hats to help us understand the problem domain, we move on to the Green Hat, which lets us get creative, brainstorm and use the power of “PO”.

PO stands for provocative operation and is another one of de Bono’s useful tools that helps us get out of ruts. If you find yourself stuck in a thinking pattern, you have someone throw in a PO, in order to help people get unstuck and think along new lines.

There are five different methods for generating a PO: Reversing, Exaggerating, Distorting, Escaping and Wishful Thinking. All of them encourage you to basically “unsettle your mind”, thereby increasing the chances that you will generate a new idea (a.k.a. “movement” in the de Bono-verse). You can get a brief primer here if you’re interested in learning more, though I do recommend going straight for de Bono’s books instead. Now, we didn’t discuss PO much during the meetup, but it reminded me to go back and read up on these techniques afterwards. It would be fun to try out in sprint planning or when breaking down larger test ideas.
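To make all of that a bit more concrete, here’s a minimal sketch in Python. It’s entirely my own paraphrasing of the recommended hat order and the five PO methods, nothing official from the talk or the books, but it turns the sequence into a simple facilitation checklist you could print out before a meeting:

# A rough sketch (my own paraphrase, not from Julian's talk or de Bono's books)
# of the recommended hat order plus the five PO methods, printed as a simple
# facilitation checklist.

HAT_SEQUENCE = [
    ("Blue",   "Big picture: what are we here to think about, and how?"),
    ("Yellow", "Possibilities and optimism: what could work well?"),
    ("Red",    "Passion and feelings: what is your gut telling you?"),
    ("White",  "Facts and data: what do we actually know?"),
    ("Black",  "Devil's advocate: what are the dangers and risks?"),
    ("Green",  "Creativity: brainstorm freely, use PO to get unstuck."),
    ("Blue",   "Big picture again: summarize and decide on next steps."),
]

PO_METHODS = ["Reversing", "Exaggerating", "Distorting", "Escaping", "Wishful Thinking"]


def print_agenda(topic: str) -> None:
    """Print a hat-by-hat agenda for a discussion about the given topic."""
    print(f"Six Thinking Hats session: {topic}\n")
    for step, (hat, focus) in enumerate(HAT_SEQUENCE, start=1):
        print(f"{step}. {hat} Hat - {focus}")
    print("\nIf the Green Hat stalls, provoke with a PO via one of:")
    print(", ".join(PO_METHODS))


if __name__ == "__main__":
    print_agenda("Test strategy for the mobile cloud app")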

So after we’d watched the video through, we proceeded to test a little mobile cloud application that had been developed by a local company here in Linköping. The idea was to try to implement the six hats way of thinking while pair testing, which was a cool idea, but it soon became clear that we needed to tour the application a bit first in order to apply the six hats. Simply going through the six hats while trying to think about a problem domain you know nothing about didn’t really work. Also, bugs galore, so there wasn’t really much need to get creative about test ideas. Still, a good exercise that primed our thinking a bit.

Afterwards we debriefed the experience in the group and I think that most of us felt that this might be a useful tool to put in our toolbox, alongside other heuristics. When doing test planning for an application that you know a bit more about, it will probably be easier to do the six hats thinking up front. With an unknown application, you tend to fall back to using other heuristics and then putting your ideas into one of the six hats categories after the fact, rather than using the hats to come up with ideas.

I also think the six hats would be very useful together with test strategy heuristics like SFDPOT, examining each product element with the help of the hats, to give your thinking extra dimensions. Same principle as you would normally use with CRUSSPIC STMPL (the quality characteristics heuristic) together with SFDPOT. Or why not try all three at the same time?
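To illustrate what I mean, here’s a rough, purely hypothetical sketch that crosses the SFDPOT product elements (as I know them from the Heuristic Test Strategy Model) with the six hats to generate prompts for a planning session. The hat questions are my own paraphrases, and a third loop over the quality characteristics could be bolted on in exactly the same way:

# A hypothetical sketch of pairing SFDPOT product elements with the six hats
# to generate test-planning prompts. SFDPOT expansion per the Heuristic Test
# Strategy Model; the hat questions are my own paraphrases.

from itertools import product

SFDPOT = ["Structure", "Function", "Data", "Platform", "Operations", "Time"]

HATS = {
    "Blue":   "How does this element fit into the big picture?",
    "Yellow": "What could go particularly well here?",
    "Red":    "What does your gut say about this element?",
    "White":  "What facts and data do we have about it?",
    "Black":  "What are the risks and failure modes?",
    "Green":  "What unusual tests could we invent for it?",
}


def generate_prompts(elements=SFDPOT, hats=HATS):
    """Yield (element, hat, prompt) triples to seed a test-planning session."""
    for element, (hat, question) in product(elements, hats.items()):
        yield element, hat, f"[{element} / {hat} Hat] {question}"


if __name__ == "__main__":
    for _, _, prompt in generate_prompts():
        print(prompt)

Thirty-six prompts is obviously more than you’d work through in one sitting, but skimming a list like that is a quick way to notice which combinations you haven’t thought about at all.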

As usual, a very successful and rewarding EAST meetup. Sitting down with peers in a relaxed environment (outside business hours) can really do wonders to get your mind going in new directions.

For a more in-depth look at the original idea of the hats, see Edward de Bono’s books Six Thinking Hats (1985), or Lateral Thinking: A Textbook of Creativity (2009), which, if I remember correctly, also describes them pretty well.

Edit: If you want to read less about the hats and more about how the meetup was actually structured (perhaps you want to start your own testing meetups?), head on over to Erik Brickarp’s blog post on this same meetup.

I’m a sucker for analogies

I love analogies. I learn a lot from them and I use them a lot myself to teach others about different things. Sure, even a good analogy is not the same as evidence of something, and if taken too far, analogies can probably do more harm than good (e.g. “The software industry is a lot like the manufacturing industry, because… <insert far-fetched similarity of choice>”). However, I find that the main value of analogies is not that they teach us “truths”, but rather that they help us think about problems from different angles, or help illustrate the thinking behind new ideas.

I came across such an analogy this morning in a mail list discussion about regression testing. One participant offered a new way of thinking about the perceived problem of keeping old regression tests updated, in this way: “Pause for a moment and ask… why should maintenance of old tests be happening at all? […] To put it another way, why ask old questions again? We don’t give spelling tests to college students […]”

I like that analogy – spelling tests to college students. If our software has matured past a certain point, then why should we go out of our way to keep checking that same old, unchanged functionality in the same way as we’ve done a hundred times before? Still, the point was not “stop asking old questions”, but rather an encouragement to examine our motivations and think about possible alternatives.

A reply in that same thread made a point that their regression tests were more like blood tests than like spelling tests. The analogy there: Just because a patient “passes” a blood test today, doesn’t mean it’s pointless for the physician to draw blood on the patient’s next visit. Even if the process of drawing blood is the same every time, the physician can choose to screen for a single problem, or multiple problems, based on symptoms or claims made by the patient. Sort of like how a tester can follow the same path through a program twice but vary the data.

So what does this teach us about testing? Again, analogies rarely teach us any hard truths, but they serve as useful stimuli and help us think from new angles. I use them as I use any other heuristic methods. So with this spelling test/blood test analogy in mind, I start to think about the test ideas I have lined up for the coming few days at work. Are most of them going to be like spelling tests and if so, can I still make a good argument for why those would be the best use of my time? Or are there a few ideas in there that could work like blood tests? If so, what qualifies them as such and can I improve their screening capability even further in some way (e.g. vary the data)?

Like I said earlier, I came across this analogy just this morning, which means I’m probably not really done thinking about it myself yet, but I thought it worth sharing nonetheless. Much like cookies, sometimes a half-baked thought is even better than the real thing. Or at least better than no cookie at all. So here it is. And with that analogy, or maybe with this one below, I bid you a good day.

XKCD: Analogies

Thinking Visually

Today I finally got around to watching Alan Richardson’s STARonline talk “Thinking Visually In Software Testing” that has recently been published on YouTube for easy access (also embedded at the bottom of this post).

I’m always interested in learning new ways of visualizing information and communicating thoughts effectively, so Thinking Visually is a topic that’s right up my alley. Alan’s talk is well worth a watch/listen, and if you don’t believe me, I took some quick notes while watching it to help you decide if it’s worth your time or not. (Hint: It is.)

Described in one sentence, the talk is about using models and diagrams to aid test planning and communication of the testing effort. It covers an explanation of what Alan means by “thinking visually”, but it also describes the opposite of thinking visually, including a very amusing part with examples of how to best “trap” your thinking and how to best document your trapped thinking so that your readers will gain no value from reading your documentation. Hilarious. Also, as you listen to Alan’s examples of trapped thinking being presented in your average test plan or report, you will probably realize that you see this kind of documentation quite often.

I do recommend that you listen from the beginning of the talk, but if you want to hear what I’m talking about right away, you can skip ahead to about 16:48 for a good laugh. That is, until you also realize that some of this stuff is something you yourself have been doing, or maybe are still doing, quite often. At least that’s what I realized. Alan suggests that we go through our own documents and read them with a sense of humor, which will help us spot these things more easily, and maybe also help us stop doing them.

But… going back to the beginning (how’s that for structure), one thing Alan said early on got me thinking about how I approach documentation and debriefings:

“I would rather see your thinking, than see what you think your thinking should look like.”

In other words, the way you present your test strategy, test ideas or test results should demonstrate that you’re putting more effort into the thought process than into the documentation process. So, focus on showing that you are thinking, and thinking about the testing task at hand, rather than presenting something that suggests your thinking was focused on “How can I fill in this template?”

“If you don’t think in the first place, if you don’t have a model about what you’re going to do, your communication will be unclear, and you won’t convince, regardless of how well you fill in the template.”

I personally would like to see more test plans focus on showing me that the tester is thinking, rather than on exactly what they are thinking. Why? Well, test plans are usually born before testing starts, at a time when we know the least we’ll ever know about the thing we’re actually going to test. So if I’m one of your stakeholders and you show me a plan that tells me exactly what you’re going to do and that you have it all figured out… then the only thing I know for sure is what your testing will not be like, because no test plan survives first contact with the product unscathed.

But, if you can show me that you are thinking actively, from many different angles, and that your thinking is open to a changing context, then I will feel more assured that you will do a good job once you get your hands on the product. I don’t want testers who can follow directions. I want testers who can think for themselves.

Ok, back to the presentation. Alan shares a few of his principles for how to approach documentation (somewhat paraphrased):

  • How little can you get away with?
  • Make the templates work for you, not the other way around
  • Put important information first. Make what’s important obvious
  • Summarize for the reader
  • Meet the reader’s needs

I’m running a bit long with this post, but it turns out that this was a very quotable talk, so I’ll leave you with a few “sound bites” I took away from it, which might trigger you to go and watch the whole 25 minutes or so yourself.

I learned that communication is not what I give, it’s what people take from what I present. So I have to help them take it in the way that I want […] to focus on what’s important.
– – –
When you create a mind map your default relationship is a parent/child decomposition, but there are other relationships in your model and you may need different visual models to draw that out.
– – –
Different tools support different styles of thinking. You get different value when you model your thought process in different tools.
– – –
Don’t think that you can get away without thinking.