Peer conference awesomeness at SWETish

I spent this past weekend at a peer conference. This one marks my 5th peer conference, but it’s been a long while since my 4th. In fact, it’s been four years since SWET4. Peer conferences are awesome though, as they let the participants really go deep and have thorough, meaningful conversations over one or more days, in a group small enough to make such discussions possible.

Ever since I moved to Linköping a few years ago, I had promised to do my part in organizing a local peer conference for the testing community in and around Linköping, and this weekend we finally got that project off the ground! We decided to call this particular conference SWETish instead of making it another increment of the regular SWET. The reasons: we wanted to keep participation first and foremost within the local communities, while the regular SWET conferences invite people from all over the country, and we also wanted to keep the theme broad and inclusive, whereas SWET has already been through a fair number of iterations (SWET7 being the latest) and we didn’t want to break their run of more specific topics in exchange for a general one that has already been covered. Maybe next time we’ll put on “SWET8” though, if nobody in the Meetups.com group beats us to it (hint hint, wink wink, nudge nudge).

So, sort of but not quite a SWET conference i.e. SWETish (with its eponymous Twitter hashtag #SWETish).

The whole thing took place at the Vadstena Abbey Hotel, which is made up of a beautiful set of buildings, some dating back to the 12th, 13th and 14th centuries. From an organizer standpoint, I can certainly recommend this venue. Nice staff, cozy environment and above average food. And a nice historic atmosphere too, of course.

When I sent out the initial invitations to this peer conference, I had my mind set on getting a total of 15 participants. That seemed like a good number to ensure that all speakers would get plenty of questions and that there would be a good mix of experiences and viewpoints, while not being so many that people couldn’t participate thoroughly or would be forced to sit quiet for too long. However, because a few people who had initially signed up couldn’t make it in the end, we became a group of “only” 10 people. Turns out that’s an excellent number! Most if not all of us agreed that the low number of participants helped create an environment where everybody got comfortable with each other really quickly, which in turn helped when discussions and questions got more critical or pointed, without destroying the mood or productivity of those conversations.

Another pleasant surprise was that we only got through (almost) three presentations + Open Season (facilitated Q&A) during the conference (1.5 days). If memory serves, the average at my past peer conferences is four, and sometimes we even start a fifth presentation and Q&A before time runs out. What I liked about us only getting through three is that it’s a testament to how talkative and inquisitive the group was, even though 5 out of 10 participants were at their first ever peer conference! I facilitated the first presentation myself, so I can tell you that in that session alone we had 11 unique discussion threads (green cards) and 48 follow-up questions (yellow cards), plus quite a few legit red cards. For those of you familiar with the K-cards facilitation system, that tells you this wasn’t a quiet group who only wanted to listen to others speak. Which is great, because that’s the very thing that makes peer conferences so fantastically rewarding.


Apart from the facilitated LAWST-style sessions, we also spent 1 hour on Lightning Talks, to make sure that everyone got to have a few minutes of “stage time” to present something of their own choosing.

The evening was spent chatting around the dinner table, in the spa and in smaller groups throughout the venue until well past midnight. And even though we’d spent a full day talking about testing, most of the conversations were still about testing! How awesome is that?

If you want to read more about what was actually said during the conference, I suggest you check out the Twitter hashtag feed, or read Erik Brickarp’s report, which goes more into the content side of things. This blog post is more about evangelizing the concept itself and providing some reflections from an organizer perspective. Maybe I should have mentioned that at the start? Oops.

A peer conference is made possible by the active participation of each and every member of the conference, and as such, credit for all resulting material, including this blog post, goes to the entire group. Namely, and in alphabetical order:

  • Agnetha Bennstam
  • Anders Elm
  • Anna Elmsjö
  • Björn Kinell
  • Erik Brickarp
  • Göran Bakken
  • Johan Jonasson
  • Kristian Randjelovic
  • Morgan Filipsson
  • Tim Jönsson

Thank you to all the participants, including the few of you who wanted to be there but couldn’t for reasons outside of your control. Next time! And thank you to my partners in crime in the organizing committee: Erik Brickarp, Anna Elmsjö and Björn Kinell.

There! You’ve now reached the end of my triennial blog post. See you in another three years! Actually, hopefully I’ll see you much sooner. The sharp dip in my blogging frequency has partly been due to the continuous deployment of new family members in recent years, which has forced me to cut back on more than one extracurricular activity.

Post below in the comments section if you have comments or questions about peer conferences, or want some help organizing one. I’d be happy to point you in the right direction!

Solving the real problem

This is a continuation of my post from back in August on the ISO/IEC/IEEE 29119 software testing standard in which I argued that the upcoming standard is a rather bad idea. This post is more about what I think we should do instead, to solve the problem the new ISO standard is failing to solve.

Another comment to the article I mentioned in my previous post asked whether it would be feasible to arrive at a better software testing standard through an open-participation wiki page where a democratic process would be applied, eventually reaching consensus and gaining widespread acceptance. My reply, in short, was: No, I don’t think that’s possible. Open and democratic processes are nice, but they don’t guarantee consensus. In fact, they rarely lead down that path.

But I’m not just being bleak and pessimistic here. There are other ways of achieving widespread acceptance of ideas while also growing our common testing toolbox!

My take on it is that we should instead (continue to) do the opposite of committee wrangling. That is, don’t worry about consensus up front. Just keep on noticing good practices that you come across yourself and share them. Codify them, create frameworks and talk about them with other testers, at conferences, on blogs, in white papers. Describe the problem and describe how it was solved, and then leave it in the hands of the global testing community to decide what to do with your ideas.

Ideas that are good will get adopted and will continue to evolve. In some cases they’ll become so universally applicable that they’ll start to “feel” almost like standards, at least for solving very specific problems. Nothing wrong with that. If it works, it works. As long as it’s still ok to question the practice and to leave bits out or add bits to it when the context requires it, and when it makes sense. And with ideas like the ones I’m talking about, experimentation is always ok, and even encouraged.

Ideas that turn out to not work, or that only work in narrow circumstances, won’t get picked up by the community at large. But they can still be a valid solution in certain niche situations. Keep on sharing those ideas too! Context matters and one day your context might require you to implement that weird idea you heard about when attending your last conference that you thought sounded ridiculous or that you’d never bother trying out.

What I don’t think works, though, is to design a catch-all solution for every testing problem at once, from scratch, and by committee. Which is why I ranted about ISO/IEC/IEEE 29119 in my previous post.

As an illustration, take a relatively small framework, like session-based test management in its original form. Imagine what would have become of it if its originators had decided to call together 20 people to come up with a consensus solution for (e.g.) adding more traceability to exploratory testing. My guess is that it would have been dead in the water before they were even able to agree on what traceability means. What happened in reality was that a basic framework was created by James and Jon Bach to solve a real-world problem; it was then presented at STARWEST in 2000 and published as articles in different software engineering magazines. It gained popularity among practitioners and is today a widely known approach that others have adopted and modified to fit their contexts where needed. Peer review at its best.

Session-based Test Management, or how to “manage testing based on sessions” as Carsten Feilberg (@Carsten_F) aptly described it once, has become one of several “standard-like” ways of approaching certain exploratory testing problems. The framework doesn’t have an authoritative organization controlling it though. If you want to measure something other than the ratio of time spent in setup, testing and bug reporting for instance, you can. Heck, you can decide to add or remove whatever you want if you think it will add value. Nobody’s going to stop you from trying. If you do, and if it turns out to add value, your peers will applaud you for adding to the craft rather than bashing you for going against the session-based testing “standard”. And if it doesn’t work, we’ll still applaud you for trying (and for showing us what not to do).
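To make that concrete, here’s a minimal sketch (my own, not part of any official SBTM tooling) of the classic setup/testing/bug-reporting time breakdown, which is exactly the kind of thing the framework leaves you free to extend, replace or throw away:

```python
from dataclasses import dataclass

@dataclass
class Session:
    """One test session, with minutes spent per activity."""
    charter: str
    testing: int        # test design and execution
    bug_reporting: int  # bug investigation and reporting
    setup: int          # session setup

def tbs_breakdown(sessions):
    """Aggregate the time ratio across sessions: the classic SBTM-style metric."""
    totals = {
        "testing": sum(s.testing for s in sessions),
        "bug_reporting": sum(s.bug_reporting for s in sessions),
        "setup": sum(s.setup for s in sessions),
    }
    grand_total = sum(totals.values()) or 1  # avoid division by zero
    return {k: round(v / grand_total, 2) for k, v in totals.items()}

sessions = [
    Session("Explore login error handling", testing=60, bug_reporting=25, setup=5),
    Session("Survey the reporting module", testing=40, bug_reporting=10, setup=40),
]
print(tbs_breakdown(sessions))  # {'testing': 0.56, 'bug_reporting': 0.19, 'setup': 0.25}
```

Want to measure something else instead? Change the dataclass and the aggregation, and nobody will come after you with a compliance checklist.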

And therein lies my point I believe. I’m all for trying to gather our combined testing knowledge. Do it, and share it! I just don’t think it should be codified by a committee and I don’t believe it’s productive to have authoritative organizations trying to control how to apply that knowledge, or deciding what gets put in and what gets left out, regardless of how well-meaning that organization may have started out. It’s impractical and it slows down progress.

Martin Jansson (@martin_jansson) suggests a somewhat extreme but interesting comparison here (in Swedish): comparing how some of us are fighting against standards in the testing world with the difficulties Galileo went through when he tried to get the concept of heliocentrism accepted into the “standard” world view of the committee known as the Catholic Church back in the 17th century. Our situation is definitely better than that of Galileo, but still, there might be one or two amusing similarities to contemplate there.

Semi-dogmatic organizations like the ISTQB are already out there, using different schemes and big dog tactics to try to dictate how testing should be done and how we as testers should think. I believe we can do without adding more ammunition, like a new ISO standard, to their arsenal.

New additions to our common testing toolbox are made every day by serious and passionate test practitioners. Anybody who calls themselves a professional tester should be expected to pick the best possible tools for their context from that toolbox, instead of looking for silver bullets by adopting an all-encompassing solution designed and controlled by a committee.

I think we need a greater number of responsible, creative individuals who are humble enough to draw on relevant experiences from other people, but who are also brave enough to take the risk of thinking for themselves and making their own decisions about what works or not for their particular problems. And then, after the project’s done, sharing their experiences with their peers. That is, I believe, how we solve the real problem of organizations and teams wanting to learn more about how to organize their testing.

Finally, and although I mentioned it in my EuroSTAR post just a few days ago, I think it’s worth pointing you once again to James Christie’s take on testing standards. He hits the nail right on the head in my opinion, and expands on many of my own concerns in a very nice way.

P.S. If you’re in an idea-sharing mood, the call for proposals for Let’s Test Oz 2014 is open until January 15th, 2014. As an added bonus, you might get a good excuse to visit Sydney! Go for it!

Report from EuroSTAR 2013

I had the opportunity to speak at EuroSTAR this year, which made the decision to go a bit easier than it normally is. After all, EuroSTAR is a pretty pricey party to attend compared to many other conferences, such as Øredev, which ran almost in parallel with EuroSTAR this year.

Anyway, this is a brief report from the conference with some of my personal take-aways, impressions and opinions about the whole thing.

People

First of all, to me, conferences are about the people you meet there. Sure, it’s good if there’s an engaging program and properly engaged speakers, but my main take-aways are usually from the hallway hangouts or late night discussions with whoever happens to be up for a chat. This year, I think the social aspect at EuroSTAR was great. I’ve been to EuroSTAR twice before, in 2008 and 2009, but this was the first one where I didn’t think the size of the conference got in the way of meeting new and old friends. It also made me a bit proud to see that the actual discussions and open seasons seemed pretty much dominated by the people in my community this year. This, I think, has to do with the way we normally interact with each other, and not with any bias in the program. On the contrary, I think the program committee had put together a very well-balanced program, with a lot of different views and testing philosophies represented.

Tutorials

The first day and a half at EuroSTAR was devoted to tutorials. I rarely attend tutorials unless I know they will be highly experiential, but this year I opted for one relevant to my current testing field, medical software: “Questioning Auditors Questioning Testing, Or How To Win Friends And Influence Auditors” with James Christie. My main take-aways were not about how to relate to auditors though, but rather about how to think about risk. James pointed out that a lot of the time, we use variations of this traditional model to assess risk:

[Image: traditional risk matrix]

The problem with that model is that it scores high impact/low probability risks and low impact/high probability risks the same. Sure, if something is likely to happen, we’d probably want to take care of that risk even if it has only “low” impact. But is that really as important as fixing something that would be catastrophic but has only a small probability of happening? Sometimes yes, sometimes no, right? Either way, the model is too simplistic. The problem lies in our (in)ability to perceive and assess risk, something I think is illustrated quite nicely in the following table.


[Table on risk perception and assessment. Source: O’Riordan, T. and Cox, P. 2001. Science, Risk, Uncertainty and Precaution. University of Cambridge.]
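Back to the scoring model itself. Here’s a tiny sketch of the arithmetic behind the complaint, assuming the common 1–5 scales (the scales are my assumption, not something from the tutorial):

```python
def risk_score(impact, probability):
    """The traditional matrix model: score = impact x probability, each on a 1-5 scale."""
    return impact * probability

# A catastrophic-but-unlikely risk and a trivial-but-likely one score identically,
# even though they probably deserve very different responses:
print(risk_score(impact=5, probability=1))  # 5
print(risk_score(impact=1, probability=5))  # 5
```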

This is an area I feel I want to dig deeper into. If you have any tips on reading material, please share in the comments.

By the way, James Christie also has a blog that I’ve started reading myself quite recently. His latest blog post is a real nugget for sure: Testing standards? Can we do better?

Keynotes & Sessions

New for EuroSTAR this year was that the conference chair (Michael Bolton) had pushed for the use of K-cards and facilitated discussions after each talk and keynote. 30-minute talks, 15 minutes of open season Q&A. Nice! I think that’s a very important improvement for EuroSTAR (though full hour slots would be even better). I mean, if you’re not given the opportunity to challenge a speaker on what he or she is saying, then what’s the point? Argument is a very important tool if we want to move our field forward, and it’s so rare that we in the global testing community get to argue face to face. We need facilitated discussions at every conference, not just a few. I’m glad to see that EuroSTAR is adopting what started at the LAWST peer workshops, and I do hope they stick with it!

All in all, I think the best sessions out of those I attended were:

Laurent Bossavit, who made a strong case against accepting unfounded claims. He brought up, for instance, the age-old “truth” that fixing a bug becomes exponentially more expensive as it escapes from the requirements phase into the design phase (and so on) of a project. Turns out the evidence for that truth is fairly poor, and it only applies to certain types of bugs.

Keith Klain, who talked about overcoming organizational biases. His 5 points to follow when attempting to change company culture: 1. Determine your value system. 2. Define principles underpinned by your values. 3. Create objectives aligned to your business. 4. Be continually self-reflective. 5. Do not accept mediocrity. Changing culture is hard, but you might want (or need) to do it anyway. If you do, keep in mind point number 6: Manage your own expectations.

Ian Rowland, who gave a very entertaining talk about the power of IT, “Impossible Thinking”. Seemingly similar to lateral thinking (Edward de Bono), Impossible Thinking challenges you to not stop thinking about something just because it appears impossible, but rather move past that limitation and think about how the impossible might become possible. The thinking style can also be used to provoke creative thinking or new solutions, like how thinking about a phone that can only call 5 different phone numbers (a ridiculous idea at first glance) provoked the creation of a mobile subscription plan that let you call 5 friends free of charge. An idea that allegedly boosted sales for that particular carrier in a way that left competitors playing catch-up for months.

Rob Lambert, who gave an experience report describing in detail his company’s journey from releasing only a couple of times per year to releasing once per week. It was a very compelling story, but unfortunately I find myself currently working in a very different context. True experience reports are always a treat though.

Then of course I had my own presentation, Test Strategy – Why Should You Care?, where I tried to expand a bit on four main points: 1. Why most strategies I’ve seen are terrible and not helpful. 2. A model for thinking about strategies in a way that might make them helpful. 3. Characteristics of good strategy. 4. Arguments for why you should care about good strategy. All in all, apart from maybe trying to pack too much into 30 minutes, I think it went ok. The room was packed too, which was nice.

I did sample quite a number of other sessions too, but it’s difficult to sum them all up with any sort of brevity, so I won’t even try. Instead, I’ll provide a few quotes from the talks I found the most rewarding:

“If it hurts, keep doing it.” – Rob Lambert (Learning/change heuristic)

“Condense all the risks of the corporation into a single metric.” – Rick Buy, Enron (anti-pattern)

“Reality isn’t binary. We don’t know everything in advance. Observe, without a hypothesis to nullify.” – Rikard Edgren

“The questions we can answer with a yes or no, are probably those that don’t matter, or matter less.” – James Christie, paraphrased

“Governance shouldn’t involve day to day operational management by full-time executives.” – James Christie

“Comply or explain” vs. “comply or be damned”. – UK vs. US approach to auditing described by James Christie

“Self-defense skill for testers: ‘citation needed’. Also: Curiosity, Skepticism, Tenacity.” – Laurent Bossavit, paraphrased, warning against accepting unsubstantiated claims

“Do not accept mediocrity.” – Keith Klain

“Culture eats strategy for breakfast.” – Keith Klain

“When you start to think about automation for any other reason than to help testing, you might be boxing yourself in.” – Iain McCowatt

“Rational thinking is good if you want rational results.” – Ian Rowland

“Thought Feeders.” – Michael Bolton proposed an improvement to the term Thought Leaders

The Good, the Bad and the Ugly

This was the best EuroSTAR to date in my experience. The program was better than ever and more diverse with a good mix of testing philosophies being represented. The facilitated discussions elevated the proceedings and prevented (most) speakers from running away from arguments, questions and contrasting ideas. I also liked that the community and social aspects of the conference appear to have been strengthened since my last EuroSTAR in 2009. The workshops, the do-over session and the community hub were all welcome additions to me. And the test lab looked as brilliant as ever, and I think it was a really neat idea to have it out in the open space it was in, rather than being locked away in a separate room. Expo space well used.


While I applaud the improvements, there are still things that bother me about some EuroSTAR fundamentals. The unreasonably large and hard-to-avoid Expo, which strangely enough is called “the true heart of the conference” in the conference pamphlet, is one such thing. Having little (at times hardly any) opportunity to sit down at a table and eat my lunch is another. Basic stuff, and I think the two are connected: seated attendees wouldn’t be spending enough time in the Expo, so eating while standing is preferred to give the vendors enough face-time with attendees. To me, this is not only annoying, I also think it’s actually a disadvantageous setup for both vendors and attendees. My advice: Have the Expo connected to the conference, but off to the side. Make it easy and fun for me to attend the Expo if I choose, but also easy to avoid. For attendees, the true heart of any conference is the conferring, and we would appreciate having a truly free choice of where and how to spend our limited time at what was otherwise a great conference this year.

Oh, and Jerry Weinberg won a luminary award for his contributions to the field over the years. If you develop software and haven’t read his books yet, you’re missing out. He’s a legend, and rightly so. Just saying.


Finally, if you haven’t had enough of EuroSTAR ramblings yet, my friend Carsten Feilberg has written a blog post of his own about his impressions of EuroSTAR that you can check out, or have a look at Kristoffer Nordström’s ditto blog post.

A solution to a non-existent problem

Recently, there was a post made to the Swedish community site “TestZonen” with some information about how the work with the new ISO/IEC/IEEE 29119 standard for software testing is progressing. That post sparked a rather intense series of comments from different members of the Swedish testing community.

My personal stance on the whole thing has in the past been that I “don’t care”. I don’t care for it and I don’t care much about it. If you’re not familiar with the standard, here’s what the organization’s website says the goal of it is:

The aim of ISO/IEC/IEEE 29119 Software Testing is to provide one definitive standard for software testing that defines vocabulary, processes, documentation, techniques and a process assessment model for software testing that can be used within any software development life cycle.

That’s a pretty tall order if you ask me. And words like “definitive” definitely set off some alarm bells in my head. I also think it’s a near impossible goal to reach, at least if by “standard” one means anything near what I interpret the word to mean. So when I say I don’t care, I partly mean that I think it’s a waste of time and won’t help the craft or the industry, so why bother? After having thought a bit more about it though, I’ve come to realize that I do care a little bit. Why? Because not only do I think it won’t help our craft, I also think it can actively hurt the software testing industry. Now, of course you can always opt out and simply not use the standard in your organization, but I don’t think it’s as easy as that for the industry as a whole.

What follows below is a not-quite-but-pretty-close-to literal translation of my comments to the article. It’s a bit long and maybe a bit of a rant, sorry about that. Look, even an introvert can go off on rants sometimes. It started off with me quoting a line from the article where the author gave some examples of what he thought might be the outcome of the release of this new standard.

One possibility is that it will show up as a requirement, in procurement negotiations or when selecting partners, that you need to be certified according to ISO/IEC/IEEE 29119.

Yes, that’s one of the risks I see with this standard. As if it’s not bad enough that some buyers of testing services are already throwing in must-have requirements that they don’t know the value (or cost) of. Here comes even more crap that a large number of test professionals must spend time and money getting certified on, or risk being disqualified when competing for contracts.

Another risk is that this will be yet another silver bullet or best practice that well-meaning but dangerously uninformed stakeholders will decide to implement rather than taking the time to understand what software development is really all about. Especially since this silver bullet has ISO and IEEE written all over it. How much will the time saved by picking this “best practice” over developing a context-appropriate approach end up costing their shareholders…?

Ok, so isn’t it then a good thing that we get a standard, so that buyers and stakeholders can feel more secure and know what they’re betting their money on? No, not in this case. Software development is much too context-dependent for that. Just because there’s a standard out there that somebody’s cooked up doesn’t mean that people will all of a sudden become more knowledgeable about software testing. Instead, this will become an opportunity for people to keep avoiding learning anything about all the choices and judgement calls that go into creating good software. Just use this standard and quality will flow forth from your projects, that’s all you need to know. I don’t see how it would ever be possible to automatically equate using this (or any other) standard with delivering valuable testing and useful information. But unfortunately I believe that’s what a lot of uninformed people will think it means.

What I think is needed instead is a continued dialogue with our stakeholders. We need to gain their understanding of our job and have them realize that creating software is a task filled with creativity, checking and exploration, and that we perform this exercise guided by sound principles and past experiences, but very rarely by standards. And we can of course talk about these principles, explain and defend them, and use them when we’re asked to estimate amounts of work, report on progress, or give a status report. But to standardize the practical application of a principle is as unthinkable to me as telling a boxer that he or she will win against any opponent as long as a certain combination of punches is used, regardless of what the opponent is doing. But teach the same boxer a few principles, like how it’s generally a good idea to keep the guard up and keep moving, and their chance of winning increases.

The principles and heuristics we use have very little to do with the kind of rigid process maps I find on the standard organization’s website though. I sincerely hope those are old drafts that have been discarded a long time ago. Judging by that material though, I think the best case scenario we’re looking at is that we’ll end up with a new alternative to RUP or some other fairly large and prescriptive process. And since this process will have ISO and IEEE in its name, it will likely be welcomed by many industries and once again they will go back to focusing on being “compliant” rather than making sure to deliver something valuable to customers. So again we’ll be back in that rigid grind where documents are being drawn up just for the heck of it, just because the process says so. Again.

Like many other comments to the article said: What’s the use of this standard? Where’s the value? Who is it really benefiting? To me, it seems like a solution to a non-existent problem!

The article asked if the standard will mean new opportunities or if it will mean that our everyday work will be less open to new ideas. I believe it will mean neither to me. Because I don’t see the opportunities, and I also don’t see that it’s even possible to do good work without being open to new ideas and having the freedom to try them out and then look back on the result and evaluate those ideas. What I see instead is yet another meaningless abbreviation that testers will get hit over the head with time and time again and that we will have to spend time and energy arguing against following, if we want to do valuable testing in our projects and organizations.

The standard is scheduled for release soon. You can choose to ignore it or you can choose to fight it. But please don’t adopt it. You’ll only make life harder for your peers and you’re unlikely to gain anything of value.

Trying on hats

After having missed out on a couple of EAST gatherings lately, I finally managed to make it to this month’s meetup this past Thursday (the group’s 11th meetup since its inception, for those who like to keep score). This meetup was a bit different than past ones, in a good way. Not that the other ones haven’t been good, but it’s fun to mix things up. The plan for the evening was to study, implement and evaluate Edward de Bono’s Six Thinking Hats technique in a testing situation. The six thinking hats is basically a tool to help both group discussions and individual thinking, by using imagined (or real) hats of different colors to force your thinking in certain directions throughout a meeting or workshop. Another cool thing at this meetup was that there were at least a handful of new faces in the room. We’re contagious, yay!

We started out by watching Julian Harty’s keynote address from STARWEST 2008, “Six Thinking Hats for Software Testers”. In this talk, Julian explains how he successfully implemented Edward de Bono’s technique when he was at Google, and how it helped them get rid of limiting ideas, poor communication, and pre-set roles and responsibilities in discussions and meetings.

So what can we use these hats for? Julian suggests a few areas in his talk:

  • Improving our working relations, by helping reduce the impact of adversarial relationships and in-fighting.
  • Reviewing artifacts like documents, designs, code, test plans and so on.
  • Designing test cases, where the tool helps us to ask questions from 6 distinct viewpoints.

Julian recommends starting and ending with the Blue Hat, which is concerned with thinking about the big picture. Then continuing with the Yellow Hat, which symbolizes possibilities and optimism. The Red Hat, symbolizing passion and feelings. The White Hat, which calls for the facts and nothing but the facts (data). The Black Hat, the critiquing devil’s advocate hat, which looks out for dangers and risks. And finally, after going through all the other hats to help us understand the problem domain, we move on to the Green Hat, which lets us get creative, brainstorm and use the power of “PO”.
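For my own notes, here is that recommended sequence in one place. A minimal sketch; the one-line focus summaries are my own shorthand, not de Bono’s definitions:

```python
# The hat sequence Julian recommends, with my own one-line summary of each focus.
HAT_SEQUENCE = [
    ("Blue", "big picture; manage the thinking process itself"),
    ("Yellow", "possibilities, benefits, optimism"),
    ("Red", "passion, feelings, intuition"),
    ("White", "facts and data, nothing else"),
    ("Black", "devil's advocate; dangers and risks"),
    ("Green", "creativity, brainstorming, PO"),
    ("Blue", "back to the big picture to wrap up"),
]

for hat, focus in HAT_SEQUENCE:
    print(f"{hat} Hat: {focus}")
```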

PO stands for provocative operation and is another one of de Bono’s useful tools that helps us get out of ruts. If you find yourself stuck in a thinking pattern, you have someone throw in a PO, in order to help people get unstuck and think along new lines.

There are five different methods for generating a PO: Reversing, Exaggerating, Distorting, Escaping and Wishful Thinking. All of them encourage you to basically “unsettle your mind”, thereby increasing the chances that you will generate a new idea (a.k.a. “movement” in the de Bono-verse). You can get a brief primer here if you’re interested in learning more, though I do recommend going straight for de Bono’s books instead. Now, we didn’t discuss PO much during the meetup, but it reminded me to go back and read up on these techniques afterwards. They would be fun to try out in sprint planning or when breaking down larger test ideas.

After we’d watched the video through, we proceeded to test a little mobile cloud application that had been developed by a local company here in Linköping. The plan was to implement the six hats way of thinking while pair testing, which was a cool idea, but it soon became clear that we needed to tour the application a bit first in order to apply the six hats. Simply going through the six hats while trying to think about a problem domain you know nothing about didn’t really work. Also, bugs galore, so there wasn’t really much need to get creative about test ideas. Still, a good exercise that primed our thinking a bit.

Afterwards we debriefed the experience in the group and I think that most of us felt that this might be a useful tool to put in our toolbox, alongside other heuristics. When doing test planning for an application that you know a bit more about, it will probably be easier to do the six hats thinking up front. With an unknown application, you tend to fall back to using other heuristics and then putting your ideas into one of the six hats categories after the fact, rather than using the hats to come up with ideas.

I also think the six hats would be very useful together with test strategy heuristics like SFDPOT, examining each product element with the help of the hats, to give your thinking extra dimensions. Same principle as you would normally use with CRUSSPIC STMPL (the quality characteristics heuristic) together with SFDPOT. Or why not try all three at the same time?
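To illustrate what I mean, here is a toy sketch of my own (the hat questions are just my shorthand): cross every SFDPOT product element with every hat and you get a grid of prompts for test idea generation.

```python
from itertools import product

# SFDPOT product elements, from James Bach's Heuristic Test Strategy Model.
ELEMENTS = ["Structure", "Function", "Data", "Platform", "Operations", "Time"]

# The six hats as thinking directions; the questions are my own shorthand.
HATS = {
    "White": "what facts do we actually have here?",
    "Black": "what could go wrong here?",
    "Yellow": "what seems to work well here?",
    "Red": "what does my gut say about this?",
    "Green": "what haven't we tried here yet?",
    "Blue": "how does this fit the big picture?",
}

def prompts():
    """Yield one thinking prompt per (product element, hat) combination."""
    for element, (hat, question) in product(ELEMENTS, HATS.items()):
        yield f"{element} / {hat} Hat: {question}"

for p in prompts():
    print(p)  # 36 prompts in total
```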

As usual, a very successful and rewarding EAST meetup. Sitting down with peers in a relaxed environment (outside business hours) can really do wonders to get your mind going in new directions.

For a more in-depth look at the original idea of the hats, see Edward de Bono’s books Six Thinking Hats (1985) or Lateral Thinking: A Textbook of Creativity (2009), which also describes them pretty well, if I remember correctly.

Edit: If you want to read less about the hats and more about how the meetup was actually structured (perhaps you want to start your own testing meetups?), head on over to Erik Brickarp’s blog post on this same meetup.

Do your best with what you get

A while back, I wrote a short response post to one of Huib Schoots’ blog posts on the difference between a mindset tentatively labeled “adaptability” and that of the context-driven tester. I submitted (as did Huib) that the thinking behind the adaptability label was very different from how context-driven testers think.

This post addresses another misunderstanding about context-driven testing that I’ve come to fear is worryingly common, namely the notion that, for one reason or another, context-driven testers are incompatible with so-called “agile testing”. If I were to address all potential sources of such a misunderstanding, this post would turn into a short book, so for now I’ll only focus on the misunderstanding that “context-driven testers work in an exclusively reactive fashion”.

I was reminded of this when I was reading Sigge Birgisson’s blog a couple of days ago. Sigge has recently published a nice series of comparative posts where he compares Agile Testing to a number of other perspectives on testing. The series is not complete yet (I think) and there are hints of a concluding post down the line, but there’s quite a lot of material to go through already. In particular, Sigge’s comparison with context-driven testing has proved thought provoking and has already spawned quite a lot of discussion, which is of course lovely to see. (Way to go, Sigge.)

One of the things discussed refers back to part of the commentary on the context-driven-testing.com website, where the guiding principles of the context-driven school are published in their original form.

At two points in the commentary following the principles, there is a statement that says: “Ultimately, context-driven testing is about doing the best we can with what we get.” To me, there’s nothing strange or controversial about that commentary, but it can evidently be misinterpreted as an implicit call to “accept the status quo” or “work only reactively, not proactively”. If that were true, then context-driven testers would indeed not work well in agile contexts. Fortunately, it is not. Of course, I didn’t write the commentary, but let me explain the interpretation I would propose makes more sense.

The “doing the best you can with what you get” commentary is meant to help illustrate one of the most basic context-driven principles: context is what drives our selection of tools (or approaches, frameworks, processes, techniques, etc.). We don’t force the context to comply with our favorite tools. If we enter a context with poorly written requirements, then no competent tester, regardless of label or affiliation, would say: “I can’t do any testing until I get proper requirements”. Instead, we’d do the best we can, given the situation we’re in. Or put another way: instead of treating every problem as a nail just because you happen to love your hammer, examine the problem and try to find a reasonable way to either (1) solve it or (2) work around it until you are able to solve it. Use the WWMD heuristic: What Would MacGyver Do?

Depending on the rest of the context, in our “poor requirements” example that could include executing a couple of survey sessions, having yet another chat with the product owner or customer representatives, talking to other developers, going out hunting for other requirement sources than what we’ve been given (requirements document ≪ requirements!) or performing some claims testing… basically whatever you need to do in order to deliver valuable information back to your team or your stakeholders. And naturally, we also work in parallel to try to improve the situation, together with our team. After all, what other alternative is there? (Apart from leaving the situation altogether and finding a new job.)

If I were to try to come to any sort of conclusion with this rambling, I would say that identifying yourself as a context-driven tester doesn’t say anything about how active a participant you will be in continuous improvement activities in the workplace, or whether you will just accept things as they are or work tirelessly to change them for the better. Neither does it say anything about your feelings about proactive testing activities. In fact, specific practices are not even something that the context-driven principles address. (Because they are principles! The value of any practice depends on its context!) Nevertheless, constantly trying to make a difference in the workplace, trying to get a testing perspective involved as early as possible in a project, and striving to make the place we work in more harmonious and effective is something I think everybody I know advocates, regardless of labels or affiliations.

All in all, while the context-driven community traditionally employs a somewhat different emphasis and/or perspective on testing than the agile community tends to, the two are not incompatible. And although it has nothing to do with context-driven principles, the last thing a context-driven tester would object to is continuous improvement or proactive thinking about testing and checking.

If you think differently, then please do challenge me: name a testing practice traditionally thought of as “agile” that a context-driven tester would reject out of hand, or even be unjustifiably suspicious of, and tell me why you think that is. If nothing else, it should make for a good discussion.

Report from SWET4

The 4th SWET (Swedish Workshop on Exploratory Testing) happened this past weekend at Kilsbergen in Örebro, Sweden. The theme for this 4th edition of the workshop was “Exploratory Testing and Models”, hosted by Rikard Edgren, Tobbe Ryber and Henrik Emilsson (thanks guys!). If you haven’t heard of SWET before, a brief way of describing it is that it’s a peer conference based on the LAWST format, where we meet to discuss the ins and outs of Exploratory Testing in order to challenge each other and increase our own understanding of the topic. SWET has many siblings around the world, and the family of peer conferences on software testing keeps on growing, which is a delightful thing to see! Peer conferences rock. There’s no better way, in my mind, to learn new things about your craft than to present an experience report and have it picked apart and challenged by your peers.

Friday (Pre-conference)
Most people arrived on the evening before and spent a couple of hours together eating dinner and chatting over a few drinks. The venue had a lovely common room with a cozy fireplace and comfy chairs so, as usual at these events, several people stayed up chatting happily well into the night without a care.

Saturday (Day 1)
The conference started off with a moment of silence for our friend and testing peer Ola Hyltén who recently passed away in a tragic car accident. Having met Ola myself for the first time at SWET2, that felt like an appropriate way of opening the conference. Then after a round of check-ins the schedule proceeded with the first experience report.

First up was Anna Elmsjö, who talked about making use of business and process models. Anna described her process of questioning the diagrams and adding questions and information to the model to keep track of things she wanted to test. Open season contained an interesting thread about requirements, where someone stated that it sounded as if Anna’s testing could be seen as a way of adding or sneaking in new requirements, or at least that someone might feel that she was. A comment on that question pointed out in turn that asking questions about the product doesn’t mean requirements are being added, but that they are being discovered, which is an important distinction to keep in mind in my opinion.

The second presentation was from Maria Kedemo, who talked about what she called model-based exploratory interviewing for hiring testers. Maria works as a test manager and has been heavily involved in recruiting for her employer during the past year. When preparing for the hiring process, Maria explained, she drew on her testing experience to see if she could identify some of her habits and skills as a tester, e.g. different ways of searching for and finding new information, and apply them to interviewing. My take-aways include some thoughts on how modeling what you already have can help you find out what you really need (not just what you want, or think you want). Also, a reaffirmation of the importance of updating your models as your understanding of what you’re modeling increases, sort of like how you would (hopefully) update a plan when reality changes.

In the last presentation of the day, Saam Koroorian talked about using the system map, which is a model of the system, to drive testing. He also described how his organization has moved from what he called activity- or artifact-driven testing to more information-driven testing. I interpreted these labels more as descriptors of how the surrounding organization views testing. Either testing is viewed as an activity that is supposed to provide arbitrary measurements based on artifacts (like test cases) to show some kind of (false) progress, i.e. bad testing, or it’s viewed as an activity that is expected to provide information, i.e. better testing (or simply “testing”).

Saam went on to talk about how his team had adopted James Bach’s low-tech testing dashboard concepts for assessing and showing coverage levels and testing effort in different areas, which led to many new green cards (new discussion threads). Among them was a thread of mine about the importance of taking the time dimension into account, and how to visualize the “freshness” and reliability of information as time passes (assuming the system changes over time). This is something I’ve recently discussed with some colleagues to solve a similar problem at a client, which I found very stimulating. Might turn that into a blog post of its own one day (when the solution is finished).

Saam also noted that since his organization was in the middle of an agile transition at the time, sneaking in new thinking and ideas in the testing domain was easier than usual; the organization was already primed for change in general. Interesting strategy. Whatever works. 🙂

Lightning Talks
Day 1 was concluded with a 60-minute round of lightning talks, which, based on the number of speakers, meant that each person got 5 minutes to run their presentation (including questions). Lots of interesting topics in rapid progression, like an example of how to use free tools to create cheap throw-away test scripts as an exploration aid (James Bach), or how to use the HTSM quality characteristics to discuss quality with customers and figure out their priorities (Sigge Birgisson). Erik Brickarp gave a Lightning Talk on visualization that he’s since turned into a blog post over at his blog. My own Lightning Talk was about helping testers break stale mental models and get out of creative ruts through mix-up testing activities (a.k.a. cross-team testing). If I’m not mistaken, all participants who weren’t already scheduled to give a presentation gave a Lightning Talk, which was nice. That way everybody got to share at least one or two of their ideas and experiences.

In the evening, the group shared a rather fantastic “Black Rock” dinner after which the discussions continued well into the wee hours of the night, despite my best efforts to get to bed at a reasonable hour for once.

Sunday (Day 2)
After check-in on day 2, the first order of business was to continue through the stack of remaining threads from Saam’s talk that we didn’t have time to get to the day before. I think this is a pretty awesome part of this conference format. Discussions continue until the topic is exhausted, even if we have to continue the following day. There’s no escape. 😉

The first (and only, as it turned out) presentation of day 2 came from James Bach, who told a story about how he had done exploratory modeling of a class 3 medical device, using its low-level design specification to come up with a basis for his subsequent test design. During open season we also got a lot more information about his overarching test strategy. It was a fascinating story that I won’t go into much detail on here, but you should ask him to tell it to you if you get a chance. You’ll get a lot of aha! moments. My biggest takeaway from that open season discussion was a reaffirmation of something I’ve known for quite some time but haven’t been able to put into words quite so succinctly: “Formal testing that’s any good is always based on informal testing”. Also worth considering: informal testing is based in play. As is learning.

Formal testing is like the opening night of a big show. It becomes a success because(/if) it’s been rehearsed. And informal testing provides that rehearsal. Skip rehearsing at your peril.

So how do you go from playing to making formal models? You practice! And according to James, a good way to practice is to start by drawing state models of various systems, like for instance this über-awesome Flash game. When you’ve modeled the game, you can start to play around with the model in order to generate a rich set of test ideas, asking “what if”-style questions like “What happens if I go from here to here?” or “I seem to be able to do this action over here, I wonder if I can do it over here as well?” and so on. What factors exist, what factors can exist, which factors matter?
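If you want to play along at home, here’s a minimal sketch of what I mean by a state model. The states and actions are invented for illustration, not taken from the game James used:

```python
# A toy state model: states mapped to {action: next_state}. All names invented.
STATE_MODEL = {
    "menu":      {"start": "playing", "quit": "exited"},
    "playing":   {"pause": "paused", "die": "game_over"},
    "paused":    {"resume": "playing", "quit": "menu"},
    "game_over": {"retry": "playing", "quit": "menu"},
    "exited":    {},
}

def transitions(model):
    """Enumerate every (state, action, next_state) triple: raw material for
    'what happens if I go from here to here?' style test ideas."""
    for state, actions in model.items():
        for action, target in actions.items():
            yield state, action, target

for state, action, target in transitions(STATE_MODEL):
    print(f"In '{state}', doing '{action}' should lead to '{target}'")
```

Once the transitions are written down, the gaps become visible too: is there really no way to quit while playing? That kind of question is where the test ideas start.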

I want to finish off with a final couple of quick take-aways from the weekend. First, a “test case” can be defined as an instance or variation of a test or test idea. With that definition you’ll be able to encompass most of the varying things and containers that people call test cases. And finally, regarding requirements: challenge the assumption that tests can be derived from the requirements. The tests aren’t in the requirements and thus can’t be derived from them. You can, however, construct tests that are relevant for testing the requirements and obtaining information about the product, usually based on risk. While on the subject, remember that, usually, requirements > requirements document.


Thank you to all the participants at SWET4: Anna Elmsjö, Simon Morley, Tobbe Ryber, Oscar Cosmo, Erik Brickarp, James Bach, Johan Jonasson, Sigge Birgisson, Maria Kedemo, Rikard Edgren, Joakim Thorsten, Martin Jansson, Saam Koroorian, Sandra Camilovic and Henrik Emilsson.

That’s it. That’s all that happened. (No, not really, but I’ll have to save some things for later posts!)

EAST meetup #7

Last night, EAST (the local testing community in Linköping) had its 7th “official” meetup (not counting summer pub crawls and the improvised restaurant meetup earlier this fall). A whopping 15 people opted to prolong their workday by a few hours and gathered to talk about testing inside Ericsson’s facilities in Mjärdevi (hosting this time, thanks to Erik Brickarp). Here’s a short account of what went down.

The first presentation of the night was me talking about this past summer’s CAST conference and my experiences from it. The main point of the presentation was to give people who didn’t know about CAST an idea of what makes it different from “other conferences” and why it might be worth attending from a professional development standpoint. CAST is the conference of the Association for Software Testing, a non-profit organization with a community made up of lots of cool people and thinking testers. That alone usually makes the conference worth attending. But, naturally, I’m a bit biased.

If you want to know more about CAST, you can find some general information on the AST website, and CAST 2012 in particular has been blogged about by several people, including myself.

The second presentation was from Victoria Jonsson and Jakob Bernhard, who gave their experience report from the course “The Whole Team Approach to Agile Testing” with Janet Gregory, which they had attended a couple of months earlier in Gothenburg.

A couple of broad topics were covered. All had a hint of the agile testing school to them, but from the presentation and the discussions that followed, I got the impression that the “rules” had been delivered as good rather than best practices, with a refreshingly familiar touch of “it depends”. A couple of the main topics (as I understood them) were:

  • Test automation is mandatory for agile development
    • Gives more time for testers to do deeper manual testing and focus on what they do best (explore).
    • Releasing often is not possible without an automated regression test suite.
    • Think of automated tests as living documentation.
  • Acceptance Testing could/should drive development
    • Helps formulating the “why”.
    • [Comment from the room]: Through discussion, it also helps with clarifying what we mean by e.g. “log in” in a requirement like “User should be able to log in”.
  • Push tests “lower” and “earlier”
    • Aim to support the development instead of breaking the product [at least early on, was my interpretation].
    • [Discussion in the room]: This doesn’t mean that critical thinking has to be turned off while supporting the team. Instead of breaking the product, transfer the critical thinking elsewhere e.g. the requirements/user stories and analyze critically, asking “what if” questions.
    • Unit tests should take care of task-level testing, acceptance tests handle story-level testing, and GUI tests should live at the feature level. [Personally, and that was also the reaction of some people in the room, this sounds a bit simplified. Might not be meant to be taken literally.]

There was also a discussion about test-driven development, and some suggestions of good practices came up, like for instance how testers on agile teams should start a sprint by discussing test ideas with the programmer(s), outlining the initial test plan for them. That way, the programmer(s) can use those ideas, together with their own unit tests, as checks to drive their design and potentially prevent both low- and high-level bugs in the process. In effect, this might also help the tester receive “working software” that is able to withstand more sapient exploratory testing, and the discussion process itself helps to remove confusion and assumptions surrounding the requirements that might differ between team members. Yep, communication is good.
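As a rough illustration of what “test ideas as checks” might look like in practice (a hypothetical sketch of mine; the function and scenario are invented, not from the course):

```python
import pytest

def log_in(username, password):
    """Stand-in for the real application code; invented for illustration."""
    if not username or not password:
        raise ValueError("username and password are required")
    return {"user": username, "authenticated": True}

# Test ideas from the sprint-start discussion, expressed as checks the
# programmer can run while designing the feature.
def test_login_succeeds_with_valid_credentials():
    assert log_in("anna", "s3cret")["authenticated"]

def test_login_rejects_missing_password():
    # A "what if" question from the discussion, turned into a concrete check.
    with pytest.raises(ValueError):
        log_in("anna", "")
```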

All in all, a very pleasant meetup. If you’re a tester working in the region (or willing to travel) and want to join the next meetup, drop me an e-mail or comment here on the blog and I’ll provide information and give you a heads-up when the next date is scheduled.

Eulogy for Ola

This past Wednesday, the sad news reached our community that our friend Ola Hyltén had passed away. I was out of the country when I first heard, and though I’m back home now, I’m still having trouble coming to terms with it. I wasn’t planning on writing anything about this at first, and several others have already expressed their feelings better than I could hope to do in blogs, comments and tweets. But I’m writing a few short lines anyway, mostly for my own sake, in the hope that it might provide some cathartic effect.

I didn’t know Ola until a couple of years ago. I had heard of him before that, but it wasn’t until SWET2 that I got to meet him in person. I found him to be a very likable guy who got along well with everybody and who always seemed to be just a second away from saying something funny or bursting out laughing himself. After SWET2, I kept running into Ola every now and then, for instance at a local pub gathering for testers down in Malmö and later at SWET3. However, it was during our time together on the Let’s Test conference committee, leading up to Let’s Test 2012, that I really got to know him. Ola always seemed to be easygoing, even when things were going against him, and it was easy, even for an introvert like myself, to slip into long and entertaining conversations with him about everything and nothing.

I considered Ola to be one of the more influential Swedish testers in recent years, and his influence is something we as a community will surely miss. We’ve lost a friend, a great conversationalist and a valued colleague, and I know it will take a lot of time before that fact truly sinks in for me.

My condolences go out to Ola’s family and close friends as they go through this difficult time.

More eulogies for Ola can be found here, here, here and here.

I’m a sucker for analogies

I love analogies. I learn a lot from them and I use them a lot myself to teach others about different things. Sure, even a good analogy is not the same as evidence of something, and if taken too far, analogies can probably do more harm than good (e.g. “The software industry is a lot like the manufacturing industry, because… <insert far-fetched similarity of choice>”). However, I find that the main value of analogies is not that they teach us “truths”, but rather that they help us think about problems from different angles, or help illustrate the thinking behind new ideas.

I came across such an analogy this morning in a mail list discussion about regression testing. One participant offered a new way of thinking about the perceived problem of keeping old regression tests updated, in this way: “Pause for a moment and ask… why should maintenance of old tests be happening at all? […] To put it another way, why ask old questions again? We don’t give spelling tests to college students […]”

I like that analogy – spelling tests to college students. If our software has matured past a certain point, then why should we go out of our way to keep checking that same old, unchanged functionality in the same way as we’ve done a hundred times before? Still, the point was not “stop asking old questions”, but rather an encouragement to examine our motivations and think about possible alternatives.

A reply in that same thread made a point that their regression tests were more like blood tests than like spelling tests. The analogy there: Just because a patient “passes” a blood test today, doesn’t mean it’s pointless for the physician to draw blood on the patient’s next visit. Even if the process of drawing blood is the same every time, the physician can choose to screen for a single problem, or multiple problems, based on symptoms or claims made by the patient. Sort of like how a tester can follow the same path through a program twice but vary the data.

So what does this teach us about testing? Again, analogies rarely teach us any hard truths, but they serve as useful stimuli and help us think from new angles. I use them as I use any other heuristic method. So with this spelling test/blood test analogy in mind, I start to think about the test ideas I have lined up for the coming few days at work. Are most of them going to be like spelling tests, and if so, can I still make a good argument for why they would be the best use of my time? Or are there a few ideas in there that could work like blood tests? If so, what qualifies them as such, and can I improve their screening capability even further in some way (e.g. vary the data)?
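To make the “vary the data” idea concrete, here’s a hypothetical sketch of mine (the function under test is invented) of a blood-test-style check: the same path through the code every time, but each run screens for a different problem by varying the sample.

```python
import pytest

def parse_amount(text):
    """Stand-in for real code under test; invented for illustration."""
    return round(float(text.replace(",", ".")), 2)

# Same procedure every run; each sample screens for a different problem.
@pytest.mark.parametrize("raw, expected", [
    ("10", 10.0),       # the plain, healthy case
    ("10.999", 11.0),   # rounding behavior
    ("10,5", 10.5),     # locale-style decimal comma
    ("0", 0.0),         # boundary value
])
def test_parse_amount_screens_varied_samples(raw, expected):
    assert parse_amount(raw) == expected
```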

Like I said earlier, I came across this analogy just this morning, which means I’m probably not really done thinking about it myself yet, but I thought it worth sharing nonetheless. Much like cookies, sometimes a half-baked thought is even better than the real thing. Or at least better than no cookie at all. So here it is. And with that analogy, or maybe with this one below, I bid you a good day.

XKCD: Analogies