
Solving the real problem

This is a continuation of my post from back in August on the ISO/IEC/IEEE 29119 software testing standard in which I argued that the upcoming standard is a rather bad idea. This post is more about what I think we should do instead, to solve the problem the new ISO standard is failing to solve.

Another comment on the article I mentioned in my previous post asked whether it would be feasible to arrive at a better software testing standard through an open-participation wiki where a democratic process is applied, eventually reaching consensus and gaining widespread acceptance. My reply, in short, was: no, I don’t think that’s possible. Open and democratic processes are nice, but they don’t guarantee consensus. In fact, they rarely lead down that path.

But I’m not just being bleak and pessimistic here. There are other ways of achieving widespread acceptance of ideas while also growing our common testing toolbox!

My take on it is that we should instead (continue to) do the opposite of committee wrangling. That is, don’t worry about consensus up front. Just keep on noticing good practices that you come across yourself and share them. Codify them, create frameworks and talk about them with other testers, at conferences, on blogs, in white papers. Describe the problem and describe how it was solved, and then leave it in the hands of the global testing community to decide what to do with your ideas.

Ideas that are good will get adopted and will continue to evolve. In some cases they’ll become so universally applicable that they’ll start to “feel” almost like standards, at least for solving very specific problems. Nothing wrong with that. If it works, it works. As long as it’s still ok to question the practice and to leave bits out or add bits to it when the context requires it, and when it makes sense. And with ideas like the ones I’m talking about, experimentation is always ok, and even encouraged.

Ideas that turn out not to work, or that only work in narrow circumstances, won’t get picked up by the community at large. But they can still be a valid solution in certain niche situations. Keep on sharing those ideas too! Context matters, and one day your context might require you to implement that weird idea you heard at your last conference, the one you thought sounded ridiculous or would never bother trying out.

What I don’t think works, though, is to design a catch-all solution for every testing problem at once, from scratch, and by committee. Which is why I ranted about ISO/IEC/IEEE 29119 in my previous post.

As an illustration, take a relatively small framework, like the session-based test management framework in its original form. Imagine what would have become of the session-based testing framework if its originators had decided to call together 20 people to come up with a consensus solution for (e.g.) adding more traceability to exploratory testing. My guess is that it would have been dead in the water before they were even able to agree on what traceability means. What happened in reality was that a basic framework was created by James and Jon Bach to solve a real-world problem and it was then presented at STARWest in 2000 and published as articles in different software engineering magazines. It gained popularity among practitioners and is today a widely known approach that others have adopted and modified to fit their contexts where needed. Peer reviewing at its best.

Session-based Test Management, or how to “manage testing based on sessions” as Carsten Feilberg (@Carsten_F) aptly described it once, has become one of several “standard-like” ways of approaching certain exploratory testing problems. The framework doesn’t have an authoritative organization controlling it though. If you want to measure something other than the ratio of time spent in setup, testing and bug reporting for instance, you can. Heck, you can decide to add or remove whatever you want if you think it will add value. Nobody’s going to stop you from trying. If you do, and if it turns out to add value, your peers will applaud you for adding to the craft rather than bashing you for going against the session-based testing “standard”. And if it doesn’t work, we’ll still applaud you for trying (and for showing us what not to do).

And therein, I believe, lies my point. I’m all for trying to gather our combined testing knowledge. Do it, and share it! I just don’t think it should be codified by a committee, and I don’t believe it’s productive to have authoritative organizations trying to control how that knowledge is applied, or deciding what gets put in and what gets left out, regardless of how well-meaning such an organization may have started out. It’s impractical and it slows down progress.

Martin Jansson (@martin_jansson) suggests a somewhat extreme, but interesting, comparison here (in Swedish): he compares how some of us are fighting against standards in the testing world with the difficulties Galileo went through when he tried to get heliocentrism accepted into the “standard” world view of the committee known as the Catholic Church back in the 17th century. Our situation is definitely better than Galileo’s, but still, there might be one or two amusing similarities to contemplate there.

As testers, semi-dogmatic organizations like the ISTQB are already out there, using different schemes and big dog tactics to try to dictate how testing should be done or how we should think. I believe we can do without adding more ammunition like a new ISO standard to their arsenal.

New tools are added to our common testing toolbox every day by serious and passionate test practitioners. Anybody who calls themselves a professional tester should be expected to pick the best possible tools for their context from that toolbox, instead of looking for silver bullets by adopting an all-encompassing solution designed and controlled by a committee.

I think we need a greater number of responsible, creative individuals who are humble enough to draw on relevant experiences from other people, but who are also brave enough to take the risk of thinking for themselves and make their own decisions about what works or not for their particular problems. And then, after the project’s done, share their experiences with their peers. That is, I believe, how we solve the real problem of organizations and teams wanting to learn more about how to organize their testing.

Finally, and although I mentioned it in my EuroSTAR post just a few days ago, I think it’s worth pointing you once again to James Christie’s take on testing standards. He hits the nail right on the head in my opinion, and expands on many of my own concerns very nicely.

P.S. If you’re in an idea-sharing mood, the call for proposals for Let’s Test Oz 2014 is open until January 15th, 2014. As an added bonus, you might get a good excuse to visit Sydney! Go for it!

Report from EuroSTAR 2013

I had the opportunity to speak at EuroSTAR this year, which made the decision to go a bit easier than it normally is. After all, EuroSTAR is a pretty pricey party to attend compared to many other conferences, such as Øredev, which ran almost in parallel with EuroSTAR this year.

Anyway, this is a brief report from the conference with some of my personal take-aways, impressions and opinions about the whole thing.

People

First of all, to me, conferences are about the people you meet there. Sure, it’s good if there’s an engaging program and properly engaged speakers, but my main take-aways are usually from the hallway hangouts or late-night discussions with whoever happens to be up for a chat. This year, I think the social aspect at EuroSTAR was great. I’ve been to EuroSTAR twice before, in 2008 and 2009, but this was the first one where I didn’t think the size of the conference got in the way of meeting new and old friends. It also made me a bit proud to see that the actual discussions and open seasons seemed pretty much dominated by the people in my community this year. I think this has to do with the way we normally interact with each other, and not with any bias in the program. On the contrary, I think the program committee had put together a very well-balanced program with a lot of different views and testing philosophies represented.

Tutorials

The first day and a half at EuroSTAR was devoted to tutorials. I rarely attend tutorials unless I know they will be highly experiential, but this year I opted for one relevant to my current testing field, medical software, namely “Questioning Auditors Questioning Testing, Or How To Win Friends And Influence Auditors” with James Christie. My main take-aways were not about how to relate to auditors, though, but rather about how to think about risk. James pointed out that a lot of the time we use variations of this traditional model to assess risk:

Risk Matrix

The problem with that model is that it scores high-impact/low-probability risks and low-impact/high-probability risks the same. Sure, if something is likely to happen we’d probably want to take care of that risk, even if it has only “low” impact. But is that really as important as fixing something that would be catastrophic but has only a small probability of happening? Sometimes yes, sometimes no, right? Either way, the model is too simplistic. The problem lies in our (in)ability to perceive and assess risk, something I think is illustrated quite nicely in the following table.


O’Riordan, T. and Cox, P. 2001. Science, Risk, Uncertainty and Precaution. University of Cambridge.
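
To make the scoring problem concrete, here’s a minimal sketch (my own illustration, not something from James’s tutorial) of the naive probability-times-impact scoring such matrices tend to imply:

# Naive risk scoring: score = probability * impact, both on a 1-5 scale.
def risk_score(probability: int, impact: int) -> int:
    return probability * impact

# A frequent but trivial annoyance...
frequent_annoyance = risk_score(probability=5, impact=1)
# ...and a rare but catastrophic failure...
rare_catastrophe = risk_score(probability=1, impact=5)

# ...end up with exactly the same score, even though the decisions they
# call for are usually very different.
print(frequent_annoyance, rare_catastrophe)  # prints: 5 5

A single multiplied score hides exactly the distinction the model ought to help us make.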

This is an area I feel I want to dig deeper into. If you have any tips on reading material, please share in the comments.

By the way, James Christie also has a blog that I’ve started reading myself quite recently. His latest blog post is a real nugget for sure: Testing standards? Can we do better?

Keynotes & Sessions

New for this year’s EuroSTAR was that the conference chair (Michael Bolton) had pushed for the use of K-cards and facilitated discussions after each talk and keynote: 30-minute talks, followed by 15 minutes of open season Q&A. Nice! I think that’s a very important improvement for EuroSTAR (though full-hour slots would be even better). I mean, if you’re not given the opportunity to challenge a speaker on what he or she is saying, then what’s the point? Argument is a very important tool if we want to move our field forward, and it’s so rare that we in the global testing community get to argue face to face. We need facilitated discussions at every conference, not just a few. I’m glad to see that EuroSTAR is adopting what started at the LAWST peer workshops, and I do hope they stick with it!

All in all, I think the best sessions out of those I attended were:

Laurent Bossavit, who made a strong case against accepting unfounded claims. He brought up, for instance, the age-old “truth” that fixing a bug becomes exponentially more expensive as it escapes from the requirements phase into the design phase (and so on) of a project. It turns out the evidence for that “truth” is fairly poor, and it only applies to certain types of bugs.

Keith Klain, who talked about overcoming organizational biases. His five points to follow when attempting to change company culture: 1. Determine your value system. 2. Define principles underpinned by your values. 3. Create objectives aligned to your business. 4. Be continually self-reflective. 5. Do not accept mediocrity. Changing culture is hard, but you might want (or need) to do it anyway. If you do, keep in mind point number 6: manage your own expectations.

Ian Rowland, who gave a very entertaining talk about the power of IT, “Impossible Thinking”. Seemingly similar to lateral thinking (Edward de Bono), Impossible Thinking challenges you not to stop thinking about something just because it appears impossible, but rather to move past that limitation and think about how the impossible might become possible. The style can also be used to provoke creative ideas or new solutions, like how thinking about a phone that can only call 5 different phone numbers (a ridiculous idea at first glance) provoked the creation of a mobile subscription plan that let you call 5 friends free of charge. An idea that allegedly boosted sales for that particular carrier in a way that left competitors playing catch-up for months.

Rob Lambert, who gave an experience report describing in detail his company’s journey from releasing only a couple of times per year to releasing once per week. It was a very compelling story, but unfortunately I find myself currently working in a very different context. True experience reports are always a treat though.

Then of course I had my own presentation, Test Strategy – Why Should You Care?, where I tried to expand a bit on four main points: 1. Why most strategies I’ve seen are terrible and not helpful. 2. A model for thinking about strategies so that they might actually become helpful. 3. Characteristics of good strategy. 4. Arguments for why you should care about good strategy. All in all, apart from maybe trying to pack too much into 30 minutes, I think it went ok. The room was packed too, which was nice.

I did sample quite a number of other sessions too, but it’s difficult to sum them all up with any sort of brevity, so I won’t even try. Instead, I’ll provide a few quotes from the talks I found the most rewarding:

“If it hurts, keep doing it.” – Rob Lambert (Learning/change heuristic)

“Condense all the risks of the corporation into a single metric.” – Rick Buy, Enron (anti-pattern)

“Reality isn’t binary. We don’t know everything in advance. Observe, without a hypothesis to nullify.” – Rikard Edgren

“The questions we can answer with a yes or no, are probably those that don’t matter, or matter less.” – James Christie, paraphrased

“Governance shouldn’t involve day to day operational management by full-time executives.” – James Christie

“Comply or explain” vs. “comply or be damned”. – UK vs. US approach to auditing described by James Christie

“Self-defense skill for testers: ‘citation needed’. Also: Curiosity, Skepticism, Tenacity.” – Laurent Bossavit, paraphrased, warning against accepting unsubstantiated claims

“Do not accept mediocrity.” – Keith Klain

“Culture eats strategy for breakfast.” – Keith Klain

“When you start to think about automation for any other reason than to help testing, you might be boxing yourself in.” – Iain McCowatt

“Rational thinking is good if you want rational results.” – Ian Rowland

“Thought Feeders.” – Michael Bolton proposed an improvement to the term Thought Leaders

The Good, the Bad and the Ugly

This was the best EuroSTAR to date in my experience. The program was better than ever and more diverse, with a good mix of testing philosophies represented. The facilitated discussions elevated the proceedings and prevented (most) speakers from running away from arguments, questions and contrasting ideas. I also liked that the community and social aspects of the conference appear to have been strengthened since my last EuroSTAR in 2009. The workshops, the do-over session and the community hub were all welcome additions to me. The test lab looked as brilliant as ever, and I think it was a really neat idea to have it out in the open space rather than locked away in a separate room. Expo space well used.


While I applaud the improvements, there are still things that bother me about some EuroSTAR fundamentals. The unreasonably large and hard-to-avoid Expo, which strangely enough is called “the true heart of the conference” in the conference pamphlet, is one such thing. Having little (or hardly any) opportunity to sit down and have my lunch at a table is another. Basic stuff, and I think the two are connected: seated attendees wouldn’t spend enough time in the Expo, so eating while standing is preferred to give the vendors more face-time with attendees. To me, this is not only annoying, I also think it’s a disadvantageous setup for both vendors and attendees. My advice: have the Expo connected to the conference, but off to the side. Make it easy and fun for me to attend the Expo if I choose, but also easy for me to avoid. For attendees, the true heart of any conference is the conferring, and we would appreciate a truly free choice of where and how to spend our limited time at what was otherwise a great conference this year.

Oh, and Jerry Weinberg won a luminary award for his contributions to the field over the years. If you develop software and haven’t read his books yet, you’re missing out. He’s a legend, and rightly so. Just saying.


Finally, if you haven’t had enough of EuroSTAR ramblings yet, my friend Carsten Feilberg has written a blog post of his own about his impressions of EuroSTAR that you can check out, or have a look at Kristoffer Nordström’s ditto blog post.