Solving the real problem

This is a continuation of my post from back in August on the ISO/IEC/IEEE 29119 software testing standard in which I argued that the upcoming standard is a rather bad idea. This post is more about what I think we should do instead, to solve the problem the new ISO standard is failing to solve.

Another comment on the article I mentioned in my previous post asked whether it would be feasible to arrive at a better software testing standard through an open-participation wiki, where a democratic process would be applied, eventually reaching consensus and gaining widespread acceptance. My reply, in short, was: no, I don’t think that’s possible. Open and democratic processes are nice, but they don’t guarantee consensus. In fact, they rarely lead down that path.

But I’m not just being bleak and pessimistic here. There are other ways of achieving widespread acceptance of ideas while also expanding our common testing toolbox!

My take on it is that we should instead (continue to) do the opposite of committee wrangling. That is, don’t worry about consensus up front. Just keep on noticing good practices that you come across yourself and share them. Codify them, create frameworks and talk about them with other testers, at conferences, on blogs, in white papers. Describe the problem and describe how it was solved, and then leave it in the hands of the global testing community to decide what to do with your ideas.

Ideas that are good will get adopted and will continue to evolve. In some cases they’ll become so universally applicable that they’ll start to “feel” almost like standards, at least for solving very specific problems. Nothing wrong with that. If it works, it works. As long as it’s still ok to question the practice and to leave bits out or add bits to it when the context requires it, and when it makes sense. And with ideas like the ones I’m talking about, experimentation is always ok, and even encouraged.

Ideas that turn out not to work, or that only work in narrow circumstances, won’t get picked up by the community at large. But they can still be a valid solution in certain niche situations. Keep sharing those ideas too! Context matters, and one day your context might call for that weird idea you heard at your last conference, the one you thought sounded ridiculous or would never bother trying.

What I don’t think works, though, is designing a catch-all solution for every testing problem at once, from scratch, and by committee. Which is why I ranted about the ISO/IEC/IEEE 29119 standard in my previous post.

As an illustration, take a relatively small framework like session-based test management in its original form. Imagine what would have become of it if its originators had decided to call together 20 people to come up with a consensus solution for, say, adding more traceability to exploratory testing. My guess is that it would have been dead in the water before they were even able to agree on what traceability means. What happened in reality was that a basic framework was created by James and Jon Bach to solve a real-world problem; it was then presented at STARWest in 2000 and published as articles in various software engineering magazines. It gained popularity among practitioners and is today a widely known approach that others have adopted and modified to fit their contexts where needed. Peer review at its best.

Session-based Test Management, or how to “manage testing based on sessions” as Carsten Feilberg (@Carsten_F) once aptly described it, has become one of several “standard-like” ways of approaching certain exploratory testing problems. The framework doesn’t have an authoritative organization controlling it, though. If you want to measure something other than the ratio of time spent on setup, testing and bug reporting, for instance, you can. Heck, you can add or remove whatever you want if you think it will add value. Nobody’s going to stop you from trying. If you do, and it turns out to add value, your peers will applaud you for adding to the craft rather than bashing you for going against the session-based testing “standard”. And if it doesn’t work, we’ll still applaud you for trying (and for showing us what not to do).
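
To make that measuring example a bit more concrete, here’s a minimal sketch of what tallying a setup/testing/bug-reporting time split might look like. The session record format and field names below are my own invention for illustration; they are not part of any official session-based test management tooling, and your own session sheets will almost certainly look different.

```python
# Minimal sketch: computing a setup/testing/bug-reporting time split
# from a list of session records. The Session fields here are invented
# for illustration; real SBTM session sheets vary by team and tool.

from dataclasses import dataclass


@dataclass
class Session:
    charter: str
    setup_minutes: int      # time spent preparing the session
    testing_minutes: int    # time spent actually testing
    bug_minutes: int        # time spent investigating and reporting bugs


def time_split(sessions):
    """Return each activity's share of total session time."""
    totals = {
        "setup": sum(s.setup_minutes for s in sessions),
        "testing": sum(s.testing_minutes for s in sessions),
        "bugs": sum(s.bug_minutes for s in sessions),
    }
    grand_total = sum(totals.values()) or 1  # avoid division by zero
    return {activity: minutes / grand_total for activity, minutes in totals.items()}


if __name__ == "__main__":
    sessions = [
        Session("Explore the checkout flow", setup_minutes=10, testing_minutes=70, bug_minutes=20),
        Session("Probe the search filters", setup_minutes=5, testing_minutes=80, bug_minutes=15),
    ]
    for activity, share in time_split(sessions).items():
        print(f"{activity}: {share:.0%}")
```

The point isn’t the code itself, but that nothing stops you from swapping these metrics for whatever your context actually needs.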

And therein lies my point, I believe. I’m all for trying to gather our combined testing knowledge. Do it, and share it! I just don’t think it should be codified by a committee, and I don’t believe it’s productive to have authoritative organizations trying to control how that knowledge is applied, or deciding what gets put in and what gets left out, regardless of how well-meaning the organization may have started out. It’s impractical and it slows down progress.

Martin Jansson (@martin_jansson) suggests a somewhat extreme but interesting comparison here (in Swedish): comparing how some of us are fighting against standards in the testing world with the difficulties Galileo went through when he tried to get heliocentrism accepted into the “standard” world view of the committee known as the Catholic Church back in the 17th century. Our situation is definitely better than Galileo’s, but still, there might be one or two amusing similarities to contemplate.

Semi-dogmatic organizations like the ISTQB are already out there, using various schemes and big-dog tactics to try to dictate how testing should be done and how we should think. I believe we, as testers, can do without handing them more ammunition in the form of a new ISO standard.

New additions to our common testing toolbox are made every day by serious and passionate test practitioners. Anybody who calls themselves a professional tester should be expected to pick the best possible tools for their context from that toolbox, instead of looking for silver bullets by adopting an all-encompassing solution designed and controlled by a committee.

I think we need a greater number of responsible, creative individuals who are humble enough to draw on relevant experiences from other people, but who are also brave enough to take the risk of thinking for themselves and making their own decisions about what does and doesn’t work for their particular problems. And then, after the project is done, sharing those experiences with their peers. That is, I believe, how we solve the real problem: helping organizations and teams that want to learn more about how to organize their testing.

Finally, although I mentioned it in my EuroSTAR post just a few days ago, I think it’s worth pointing you once again to James Christie’s take on testing standards. He hits the nail right on the head, in my opinion, and expands on many of my own concerns very nicely.

P.S. If you’re in an idea-sharing mood, the call for proposals for Let’s Test Oz 2014 is open until January 15th, 2014. As an added bonus, you might get a good excuse to visit Sydney! Go for it!
