Trying on hats

After having missed out on a couple of EAST gatherings lately, I finally managed to make it to this month’s meetup this past Thursday (the group’s 11th meetup since its inception, for those who like to keep score). This meetup was a bit different from past ones, in a good way. Not that the other ones haven’t been good, but it’s fun to mix things up. The plan for the evening was to study, implement and evaluate Edward de Bono’s Six Thinking Hats technique in a testing situation. The Six Thinking Hats technique is basically a tool to help both group discussions and individual thinking by using imagined (or real) hats of different colors to force your thinking in certain directions throughout a meeting or workshop. Another cool thing at this meetup was that there were at least a handful of new faces in the room. We’re contagious, yay!

We started out by watching Julian Harty’s keynote address from STARWEST 2008, “Six Thinking Hats for Software Testers”. In this talk, Julian explains how he successfully implemented Edward de Bono’s technique when he was at Google and how it helped them get rid of limiting ideas, poor communication, and pre-set roles and responsibilities in discussions and meetings.

So what can we use these hats for? Julian suggests a few areas in his talk:

  • Improving our working relations, by helping reduce the impact of adversarial relationships and in-fighting.
  • Reviewing artifacts like documents, designs, code, test plans and so on.
  • Designing test cases, where the tool helps us to ask questions from 6 distinct viewpoints.

Julian recommends starting and ending with the Blue Hat, which is concerned with thinking about the big picture. Then continuing with the Yellow Hat, which symbolizes possibilities and optimism. The Red Hat, symbolizing passion and feelings. The White Hat, which calls for the facts and nothing but the facts (data). The Black Hat, the critiquing devil’s advocate hat, which looks out for dangers and risks. And finally, after going through all the other hats to help us understand the problem domain, we move on to the Green Hat, which lets us get creative, brainstorm and use the power of “PO”.

PO stands for provocative operation and is another one of de Bono’s useful tools that helps us get out of ruts. If a group finds itself stuck in a thinking pattern, someone throws in a PO to help people get unstuck and think along new lines.

There are five different methods for generating a PO: Reversing, Exaggerating, Distorting, Escaping and Wishful Thinking. All of them encourage you to basically “unsettle your mind”, thereby increasing the chances that you will generate a new idea (a.k.a. “movement” in the de Bono-verse). You can get a brief primer here if you’re interested in learning more, though I do recommend going straight for de Bono’s books instead. Now, we didn’t discuss PO much during the meetup, but it reminded me to go back and read up on these techniques afterwards. It would be fun to try them out in sprint planning or when breaking down larger test ideas.

So after we’d watched the video through, we proceeded to test a little mobile cloud application that had been developed by a local company here in Linköping. The idea was to implement the six hats way of thinking while pair testing, which was a cool idea, but it soon became clear that we needed to tour the application a bit first in order to apply the six hats. Simply going through the six hats while trying to think about a problem domain you know nothing about didn’t really work. Also, bugs galore, so there wasn’t really much need to get creative about test ideas. Still, a good exercise that primed our thinking a bit.

Afterwards we debriefed the experience in the group and I think that most of us felt that this might be a useful tool to put in our toolbox, alongside other heuristics. When doing test planning for an application that you know a bit more about, it will probably be easier to do the six hats thinking up front. With an unknown application, you tend to fall back to using other heuristics and then putting your ideas into one of the six hats categories after the fact, rather than using the hats to come up with ideas.

I also think the six hats would be very useful together with test strategy heuristics like SFDPOT, examining each product element with the help of the hats, to give your thinking extra dimensions. Same principle as you would normally use with CRUSSPIC STMPL (the quality characteristics heuristic) together with SFDPOT. Or why not try all three at the same time?
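Since this is basically a cross-product of two heuristics, here’s a minimal sketch in Python of what such a prompt generator could look like. The one-line hat summaries are my own paraphrases of de Bono’s hats, not anything canonical:

```python
# A rough sketch: cross the six hats with SFDPOT to generate test prompts.
hats = {
    "Blue": "the big picture and the thinking process",
    "White": "facts and data",
    "Red": "feelings and hunches",
    "Yellow": "benefits and possibilities",
    "Black": "risks and dangers",
    "Green": "new ideas and provocations (PO)",
}
# SFDPOT product elements (Structure, Function, Data, Platform,
# Operations, Time), from James Bach's Heuristic Test Strategy Model.
sfdpot = ["Structure", "Function", "Data", "Platform", "Operations", "Time"]

# Each element/hat pair becomes one thinking prompt.
for element in sfdpot:
    for hat, focus in hats.items():
        print(f"{element} / {hat} hat: what do {focus} tell us here?")
```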

As usual, a very successful and rewarding EAST meetup. Sitting down with peers in a relaxed environment (outside business hours) can really do wonders to get your mind going in new directions.

For a more in-depth look at the original idea of the hats, see Edward de Bono’s books Six Thinking Hats (1985), or Lateral Thinking: A Textbook of Creativity (2009), which, if I remember correctly, also describes them pretty well.

Edit: If you want to read less about the hats and more about how the meetup was actually structured (perhaps you want to start your own testing meetups?), head on over to Erik Brickarp’s blog post on this same meetup.

Do your best with what you get

A while back, I wrote a short response post to one of Huib Schoots’s blog posts on the difference between a mindset tentatively labeled “adaptability” and that of the context-driven tester. I submitted (as did Huib) that the thinking behind the adaptability label was very different from how context-driven testers think.

This post is about addressing another misunderstanding about context-driven testing that I’ve come to fear is worryingly common, namely the notion that, for one reason or another, context-driven testers are incompatible with so-called “agile testing”. If I were to address all the potential sources of such a misunderstanding, this post would turn into a short book, so for now I’ll focus only on the misunderstanding that “context-driven testers work in an exclusively reactive fashion”.

I was reminded of this when I was reading Sigge Birgisson’s blog a couple of days ago. Sigge has recently published a nice series of comparative posts where he compares Agile Testing to a series of other perspectives on testing. The series is not complete yet (I think) and there are hints of a concluding post down the line, but there’s quite a lot of material to go through already. In particular, Sigge’s comparison with context-driven testing has proved thought-provoking and has already spawned quite a lot of discussion, which is of course lovely to see. (Way to go, Sigge.)

One of the things discussed refers back to part of the commentary on the context-driven-testing.com website, where the guiding principles of the context-driven school are published in their original form.

At two points in the commentary following the principles, there is a statement that says: “Ultimately, context-driven testing is about doing the best we can with what we get.” To me, there’s nothing strange or controversial about that commentary, but it can evidently be misinterpreted to include an implicit call to “accept the status quo” or “work only reactively, not proactively”. If that were true, then context-driven testers would indeed not work well in an agile context. Fortunately, this is not the case. Of course, I didn’t write the commentary, but let me explain the interpretation that I think makes more sense.

The “doing the best you can with what you get” commentary is meant to help illustrate one of the most basic context-driven principles: context is what drives our selection of tools (or approaches, frameworks, processes, techniques, etc.). We don’t force the context to comply with our favorite tools. If we enter a context with poorly written requirements, then no competent tester, regardless of label or affiliation, would say: “I can’t do any testing until I get proper requirements”. Instead, we’d do the best we can, given the situation we’re in. Or put another way, instead of treating every problem as a nail just because you happen to love your hammer, examine the problem and try to find a reasonable way to either (1) solve the problem or (2) work around the problem until you are able to solve it. Use the WWMD heuristic: What Would MacGyver Do?

Depending on the rest of the context, in our “poor requirements” example that could include executing a couple of survey sessions, having yet another chat with the product owner or customer representatives, talking to other developers, going out hunting for other requirement sources than what we’ve been given (requirements document ≪ requirements!) or performing some claims testing… Basically, whatever you need to do in order to deliver valuable information back to your team or your stakeholders. And naturally, we also work in parallel to try to improve the situation, together with our team. After all, what other alternative is there? (Apart from leaving the situation altogether and finding a new job.)

If I were to try to come to any sort of conclusion with this rambling, I would say that identifying yourself as a context-driven tester doesn’t say anything about how active a participant you will be in continuous improvement activities in the workplace, or whether you will just accept things as they are or work tirelessly to change them for the better. Neither does it say anything about your feelings about proactive testing activities. In fact, specific practices are not even something that the context-driven principles address. (Because they are principles! The value of any practice depends on its context!) Nevertheless, constantly trying to make a difference in the workplace, trying to get a testing perspective involved as early as possible in a project and striving to make the place we work in more harmonious and effective is something I think everybody I know advocates, regardless of labels or affiliations.

All in all, while the context-driven community traditionally employs a somewhat different emphasis and/or perspective on testing than the agile community tends to, they’re not incompatible. And although it has nothing to do with context-driven principles, the last thing a context-driven tester would object to is continuous improvement or proactive thinking about testing and checking.

If you think differently, then please do challenge me: name a testing practice thought of as traditionally “agile” that a context-driven tester would reject out of hand, or even be unjustifiably suspicious of, and tell me why you think that is. If nothing else, it should make for a good discussion.

Report from SWET4

The 4th SWET (Swedish Workshop on Exploratory Testing) happened this past weekend at Kilsbergen in Örebro, Sweden. The theme for this 4th edition of the workshop was “Exploratory Testing and Models”, hosted by Rikard Edgren, Tobbe Ryber and Henrik Emilsson (thanks guys!). If you haven’t heard of SWET before, a brief way of describing it would be to say that it’s a peer conference based on the LAWST format, where we meet to discuss the ins and outs of exploratory testing in order to challenge each other and increase our own understanding of the topic. SWET has many siblings around the world, and the family of peer conferences on software testing keeps on growing, which is a delightful thing to see! Peer conferences rock. In my mind, there’s no better way to learn new things about your craft than to present an experience report and have it picked apart and challenged by your peers.

Friday (Pre-conference)
Most people arrived on the evening before and spent a couple of hours together eating dinner and chatting over a few drinks. The venue had a lovely common room with a cozy fireplace and comfy chairs so, as usual at these events, several people stayed up chatting happily well into the night without a care.

Saturday (Day 1)
The conference started off with a moment of silence for our friend and testing peer Ola Hyltén, who recently passed away in a tragic car accident. Having met Ola for the first time at SWET2 myself, I felt it was an appropriate way of opening the conference. Then, after a round of check-ins, the schedule proceeded with the first experience report.

First up was Anna Elmsjö, who talked about making use of business and process models. Anna described her process of questioning the diagrams and adding questions and information to the model to keep track of things she wanted to test. Open season contained an interesting thread about requirements, where someone stated that it sounded as if Anna’s testing could be seen as a way of adding or sneaking in new requirements, or that someone might at least feel that she was. A comment on that question pointed out in turn that asking questions about the product doesn’t mean requirements are being added, but rather that they are being discovered, which is an important distinction to keep in mind in my opinion.

The second presentation was from Maria Kedemo, who talked about what she called model-based exploratory interviewing for hiring testers. Maria works as a test manager and has been heavily involved in recruiting for her employer during the past year. When preparing for the hiring process, Maria explained, she drew on her testing experiences to see if she could identify some of her habits and skills as a tester and apply them to interviewing, e.g. different ways of searching for and finding new information. My take-aways include some thoughts on how modeling what you already have can help you find out what you really need (not just what you want, or think you want). Also, a reaffirmation of the importance of updating your models as your understanding of what you’re modeling increases, sort of like how you would (hopefully) update a plan when reality changes.

In the last presentation of the day, Saam Koroorian talked about using the system map, which is a model of the system, to drive testing. He also described how his organization has moved from what he called an activity- or artifact-driven kind of testing to more information-driven testing. I interpreted these labels more as descriptors of how the surrounding organization views testing. Either it’s viewed as an activity that is supposed to provide arbitrary measurements based on artifacts (like test cases) to show some kind of (false) progress, i.e. bad testing, or it’s viewed as an activity that is expected to provide information, i.e. better testing (or simply “testing”).

Saam continued by talking about how his team had adopted James Bach’s low-tech testing dashboard concepts for assessing and showing coverage levels and testing effort in different areas, which led to many new green cards (new discussion threads). Among them was a thread of mine about the importance of taking the time dimension into account and how to visualize the “freshness” and reliability of information as time passes (assuming the system changes over time). This is something I’ve recently discussed with some other colleagues to solve a similar problem at a client, which I found very stimulating. I might turn that into a blog post of its own one day (when the solution is finished).

Saam also noted that as his organization was moving towards an agile transition at the time, sneaking in new thinking and ideas in the testing domain was easier than usual, since the organization was already primed for change in general. Interesting strategy. Whatever works. 🙂

Lightning Talks
Day 1 was concluded with a 60-minute round of lightning talks, which, based on the number of speakers, meant that each person got 5 minutes to run their presentation (including questions). Lots of interesting topics in rapid progression, like an example of how to use free tools to create cheap throw-away test scripts as an exploration aid (James Bach), or how to use the HTSM quality characteristics to discuss quality with customers and figure out their priorities (Sigge Birgisson). Erik Brickarp gave a Lightning Talk on visualization that he’s now turned into a blog post over at his blog. My own Lightning Talk was about helping testers break stale mental models and get out of creative ruts through mix-up testing activities (a.k.a. cross-team testing). If I’m not mistaken, all participants who weren’t already scheduled to give a presentation gave a Lightning Talk, which was nice. That way everybody got to share at least one or two of their ideas and experiences.

In the evening, the group shared a rather fantastic “Black Rock” dinner after which the discussions continued well into the wee hours of the night, despite my best efforts to get to bed at a reasonable hour for once.

Sunday (Day 2)
After check-in on day 2, the first order of business was to continue through the stack of remaining threads from Saam’s talk that we didn’t have time to get to the day before. I think this is a pretty awesome part of this conference format. Discussions continue until the topic is exhausted, even if we have to continue the following day. There’s no escape. 😉

The first (and only, as it turned out) presentation of day 2 came from James Bach, who told a story about how he had done exploratory modeling of a class 3 medical device, using its low-level design specification to come up with a basis for his subsequent test design. During open season we also got a lot more information about his overarching test strategy. It was a fascinating story that I won’t go into much detail on here, but you should ask him to tell it to you if you get a chance. You’ll get a lot of aha! moments. My biggest takeaway from that open season discussion was a reaffirmation of something I’ve known for quite some time, but haven’t been able to put into words quite so succinctly: “Formal testing that’s any good is always based on informal testing”. Also worth considering: informal testing is based in play. As is learning.

Formal testing is like the opening night of a big show. It becomes a success because(/if) it’s been rehearsed. And informal testing provides that rehearsal. Skip rehearsing at your peril.

So how do you go from playing into making formal models? You practice! And according to James, a good way to practice is to start by drawing state models of various systems. Like for instance this über-awesome Flash game. When you’ve modeled the game, you can start to play around with it in order to start generating a rich set of test ideas. Asking “what if”-style questions like “What happens if I go from here to here?” or “I seem to be able to do this action over here, I wonder if I can do it over here as well?” and so on. What factors exist, what factors can exist, which factors matter?
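As a rough illustration of the practice, here’s a minimal sketch in Python of a state model used as a test-idea generator. The states, actions and transitions are made up for illustration; they aren’t taken from the game above:

```python
# A made-up state model of a small game, used as a test-idea generator.
from itertools import product

# (state, action) -> next state; all names are invented for illustration.
transitions = {
    ("menu", "start"): "playing",
    ("playing", "pause"): "paused",
    ("paused", "resume"): "playing",
    ("playing", "die"): "game over",
    ("game over", "restart"): "playing",
    ("game over", "quit"): "menu",
}

states = {s for (s, _) in transitions} | set(transitions.values())
actions = {a for (_, a) in transitions}

# Every state/action pair is a potential "what if" question. The pairs the
# model does NOT cover are often the most interesting ones to try.
for state, action in sorted(product(states, actions)):
    target = transitions.get((state, action))
    if target:
        print(f'In "{state}", "{action}" should lead to "{target}".')
    else:
        print(f'In "{state}", what does "{action}" do? The model is silent.')
```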

I want to finish off with a couple of final quick take-aways from the weekend. First, a “test case” can be defined as an instance or variation of a test or test idea. Using that definition, you’ll be able to encompass many or most of the varying things and containers that people call test cases. And finally, regarding requirements… challenge the assumption that tests can be derived from the requirements. The tests aren’t in the requirements and thus can’t be derived from them. You can, however, construct tests that are relevant in order to test the requirements and obtain information about the product, usually based on risk. While on the subject, remember that, usually, requirements > requirements document.


Thank you to all the participants at SWET4: Anna Elmsjö, Simon Morley, Tobbe Ryber, Oscar Cosmo, Erik Brickarp, James Bach, Johan Jonasson, Sigge Birgisson, Maria Kedemo, Rikard Edgren, Joakim Thorsten, Martin Jansson, Saam Koroorian, Sandra Camilovic and Henrik Emilsson.

That’s it. That’s all that happened. (No, not really, but I’ll have to save some things for later posts!)

EAST meetup #7

Last night, EAST (the local testing community in Linköping) had its 7th “official” meetup (not counting summer pub crawls and the improvised restaurant meetup earlier this fall). A whopping 15 people opted to prolong their workday by a few hours and gather to talk about testing inside Ericsson’s facilities in Mjärdevi (hosting this time, thanks to Erik Brickarp). Here’s a short account of what went down.

The first presentation of the night was me talking about this past summer’s CAST conference and my experiences from it. The main point of the presentation was to give people who didn’t know about CAST before an idea of what makes CAST different from “other conferences” and why it might be worth considering attending from a professional development standpoint. CAST is the conference of the Association for Software Testing, a non-profit organization with a community made up of lots of cool people and thinking testers. That alone usually makes the conference worth attending. But naturally, I’m a bit biased.

If you want to know more about CAST, you can find some general information on the AST web and CAST 2012 in particular has been blogged about by several people, including myself.

The second presentation was from Victoria Jonsson and Jakob Bernhard, who gave their experience report from the course “The Whole Team Approach to Agile Testing” with Janet Gregory, which they had attended a couple of months earlier in Gothenburg.

There were a couple of broad topics covered. All had a hint of the agile testing school to them, but from the presentation and discussions that followed, I got the impression that the “rules” had been delivered as good rather than best practices, with a refreshingly familiar touch of “it depends”. A couple of the main topics (as I understood them) were:

  • Test automation is mandatory for agile development
    • Gives more time for testers to do deeper manual testing and focus on what they do best (explore).
    • Releasing often is not possible without an automated regression test suite.
    • Think of automated tests as living documentation.
  • Acceptance Testing could/should drive development
    • Helps formulate the “why”.
    • [Comment from the room]: Through discussion, it also helps with clarifying what we mean by e.g. “log in” in a requirement like “User should be able to log in” (see the sketch after this list).
  • Push tests “lower” and “earlier”
    • Aim to support the development instead of breaking the product [at least early on, was my interpretation].
    • [Discussion in the room]: This doesn’t mean that critical thinking has to be turned off while supporting the team. Instead of breaking the product, transfer the critical thinking elsewhere e.g. the requirements/user stories and analyze critically, asking “what if” questions.
    • Unit tests should take care of task-level testing, acceptance tests handle story-level testing and GUI tests should live at the feature level. [Personally, and that was also the reaction of some people in the room, this sounds a bit simplified. It might not be meant to be taken literally.]
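To make the “living documentation” and “log in” points concrete, here’s a minimal sketch of what such an acceptance test could look like. The LoginService class and its lockout behavior are hypothetical stand-ins I made up, not anything from the course:

```python
# A sketch of acceptance tests as "living documentation": the tests spell
# out what "log in" actually means for this (made-up) product.

class LoginService:
    """Toy stand-in for the real system under test."""
    def __init__(self):
        self.users = {"alice": "s3cret"}
        self.failed_attempts = {}

    def log_in(self, username, password):
        # Correct credentials grant access and reset the failure counter.
        if self.users.get(username) == password:
            self.failed_attempts[username] = 0
            return True
        self.failed_attempts[username] = self.failed_attempts.get(username, 0) + 1
        return False

    def is_locked(self, username):
        # Hypothetical rule: three failed attempts lock the account.
        return self.failed_attempts.get(username, 0) >= 3


def test_user_can_log_in_with_valid_credentials():
    # Given a registered user, when she logs in with the correct
    # password, then access is granted.
    service = LoginService()
    assert service.log_in("alice", "s3cret")


def test_account_locks_after_three_failed_attempts():
    # "Log in" also implies what happens when it goes wrong.
    service = LoginService()
    for _ in range(3):
        assert not service.log_in("alice", "wrong")
    assert service.is_locked("alice")
```

Run with pytest, these read almost like the requirement itself, which is what makes the “documentation” live.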

There was also a discussion about test driven development and some suggestions of good practices came up, like for instance how testers on agile teams should start a sprint by discussing test ideas with the programmer(s), outlining the initial test plan for them. That way, the programmer(s) can use those ideas, together with their own unit tests, as checks to drive their design and potentially prevent both low and high level bugs in the process. In effect, this might also help the tester receive “working software” that is able to withstand more sapient exploratory testing and the discussion process itself also helps to remove confusion and assumptions surrounding the requirements that might differ between team members. Yep, communication is good.

All in all, a very pleasant meetup. If you’re a tester working in the region (or if you’re willing to travel) and want to join the next meetup, drop me an e-mail or comment here on the blog and I’ll provide information and give you a heads-up when the next date is scheduled.

Eulogy for Ola

This past Wednesday, the sad news reached our community that our friend Ola Hyltén had passed away. I was out of the country when I first heard the news, and though I’m back home now, I’m still having trouble coming to terms with it. I wasn’t planning on writing anything about this at first, and several others have already expressed their feelings better than I could hope to do in blogs, comments and tweets. But now I’m thinking that I might write a few short lines anyway, mostly for my own sake, in the hope that it might provide some cathartic effect.

I didn’t know Ola until a couple of years ago. I had heard of him before that, but it wasn’t until SWET2 that I got to meet him in person. I found him to be a very likable guy who got along well with everybody and who always seemed to be just a second away from saying something funny or bursting out laughing himself. After SWET2, I kept running into Ola every now and then, for instance at a local pub gathering for testers down in Malmö and later at SWET3. However, it was during our time together on the Let’s Test conference committee, leading up to Let’s Test 2012, that I really got to know him. Ola always seemed to be easygoing, even when things were going against him, and it was easy, even for an introvert like myself, to slip into long and entertaining conversations with him about everything and nothing.

I considered Ola to be one of the more influential Swedish testers of recent years, and his is an influence that we as a community will surely miss. We’ve lost a friend, a great conversationalist and a valued colleague, and I know it will take a lot of time still before that fact truly sinks in for me.

My condolences go out to Ola’s family and close friends as they go through this difficult time.

More eulogies for Ola can be found here, here, here and here.

I’m a sucker for analogies

I love analogies. I learn a lot from them and I use them a lot myself to teach others about different things. Sure, even a good analogy is not the same as evidence of something, and if taken too far, analogies can probably do more harm than good (e.g. “The software industry is a lot like the manufacturing industry, because… <insert far-fetched similarity of choice>”). However, I find that the main value of analogies is not that they teach us “truths”, but rather that they help us think about problems from different angles, or help illustrate the thinking behind new ideas.

I came across such an analogy this morning in a mailing list discussion about regression testing. One participant offered a new way of thinking about the perceived problem of keeping old regression tests updated, in this way: “Pause for a moment and ask… why should maintenance of old tests be happening at all? […] To put it another way, why ask old questions again? We don’t give spelling tests to college students […]”

I like that analogy – spelling tests to college students. If our software has matured past a certain point, then why should we go out of our way to keep checking that same old, unchanged functionality in the same way as we’ve done a hundred times before? Still, the point was not “stop asking old questions”, but rather an encouragement to examine our motivations and think about possible alternatives.

A reply in that same thread made a point that their regression tests were more like blood tests than like spelling tests. The analogy there: Just because a patient “passes” a blood test today, doesn’t mean it’s pointless for the physician to draw blood on the patient’s next visit. Even if the process of drawing blood is the same every time, the physician can choose to screen for a single problem, or multiple problems, based on symptoms or claims made by the patient. Sort of like how a tester can follow the same path through a program twice but vary the data.
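To make that last point concrete, here’s a minimal sketch (in Python with pytest) of a check that walks the same path every run while the data varies what it screens for. The parse_amount function and its data are hypothetical, not from the mailing list thread:

```python
# A "blood test" style check: the code path is identical each run, but the
# data changes what we screen for.
import pytest

def parse_amount(text):
    """Toy function under test: parse an amount into whole cents."""
    value = float(text)
    if value < 0:
        raise ValueError("negative amount")
    return round(value * 100)

@pytest.mark.parametrize("text, expected", [
    ("1.00", 100),    # the familiar "spelling test" case
    ("2.5", 250),     # one decimal place
    ("1e2", 10000),   # scientific notation slips through float()
    ("007", 700),     # leading zeros
])
def test_same_path_varied_data(text, expected):
    # Same path through the parser every time; only the data varies.
    assert parse_amount(text) == expected

def test_negative_amounts_are_rejected():
    with pytest.raises(ValueError):
        parse_amount("-1")
```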

So what does this teach us about testing? Again, analogies rarely teach us any hard truths, but they serve as useful stimuli and help us think from new angles. I use them as I use any other heuristic methods. So with this spelling test/blood test analogy in mind, I start to think about the test ideas I have lined up for the coming few days at work. Are most of them going to be like spelling tests, and if so, can I still make a good argument for why those would be the best use of my time? Or are there a few ideas in there that could work like blood tests? If so, what qualifies them as such, and can I improve their screening capability even further in some way (e.g. vary the data)?

Like I said earlier, I came across this analogy just this morning, which means I’m probably not really done thinking about it myself yet, but I thought it worth sharing nonetheless. Much like cookies, sometimes a half-baked thought is even better than the real thing. Or at least better than no cookie at all. So here it is. And with that analogy, or maybe with this one below, I bid you a good day.

XKCD: Analogies

Report from CAST 2012

This year’s Conference of the Association for Software Testing (CAST) is now in the books. I’m returning home with a head full of semi-digested thoughts and impressions (as well as 273 photos in my camera and an undisclosed number of tax free items in my bag) and will briefly try to summarize a few of them here while I try to get back on Europe time.

The Trip
I’m writing this while on the train heading home on the last leg of this trip. Wow, San Jose sure is far away. Including all the trains, flights and layovers… I’d say it’s taken about 24 hours door-to-door, in each direction. That should tell you a bit about how far I and others are willing to go for solid discussions about testing (and I know there are people with even worse itineraries than that).

The Venue
I arrived at the venue a little over a day in advance in order to have some time to fight off that nasty 9-hour jet lag. Checked in to my room. Then immediately switched rooms, since the previous guest had left his stuff behind even though the hotel’s computer said that the room had been vacated. Still got my bug magnetism in working order, apparently.

CAST was held at the Holiday Inn San Jose Airport this year. The place was nice enough. Nothing spectacular, but it did the job. The hotel food was decent and the coffee sucked as badly as ever. Which I expected it would, but… there were no coffee shops within a couple of miles as far as I could tell(!) I’m strongly considering bringing my own java the next time I leave Europe. It’s either that or I’ll have to start sleeping more, which just doesn’t work for me at any testing event.

The Program
I’m not going to comment much on the program itself since I helped put it together. It just wouldn’t make sense, since I’d be too biased. I’m sure there will be a number of other CAST blog posts out there soon that go more in depth (check through my blogroll down in the right-hand sidebar, for instance). I’ll just say that I got to attend a couple of cool talks on the first day of the conference. One of them was with Paul Holland, who talked about his experiences with interviewing testers and the methods he’s been using successfully over the past 100+ interviews. Something I’m very interested in myself. I actually enjoy interviews, from both sides of the table.

The second day I got “stuck” (voluntarily) in a breakout room right after the first morning session. A breakout room is something we use at CAST when a discussion after a session runs too long and there are other speakers who need the room. Rather than stopping a good discussion, we move it to a different room and keep at it as long as it makes sense and the participants have the energy for it. Anyway, this particular breakout featured me and two or three others who wanted to continue discussing with Cem Kaner after his presentation on software metrics. We kept at it up until lunch, and after that I was kind of spent, so I opted to “help out” (a.k.a. take up space) behind the registration desk for the rest of the day. Which was fun too!

The third day was made up of a number of full day tutorials. I didn’t participate in any of them though, so again you’ll have to check other blogs (or #CAST2012 on Twitter) to catch impressions from them.

Facilitation
CAST makes use of facilitated discussions after each session or keynote. At least one third of the allotted time for any speaker is reserved for discussion. This year I volunteered to facilitate a couple of sessions. I ended up facilitating a few talks in the Emerging Topics track (short talks) as well as a double-session workshop. It was interesting, but I think I need to sign up for more actual sessions next year to really get a good feel for it (Emerging Topics didn’t have a big audience when I was there, and the workshop didn’t need much in the way of facilitation).

San Jose / San Francisco
We also had time to see a little bit of both San Jose and San Francisco on this trip, which was nice. I only got to downtown San Jose on the Sunday leading up to the conference, so naturally things were a bit quiet. I guess it’s not like that every day of the week(?)

San Francisco turned out to be an interesting place with sharp contrasts. The Mission district, Market Square and Fisherman’s Wharf all had their own personalities and some good and bad things to them. Anyway, good food, nice drinks and good company together with a few other testers can make any place a nice place.

Summary
As with CAST every year, it’s the company of thoughtful, engaged testers that makes CAST great. If you treat it like any other conference and just go to the sessions and then go back to your room without engaging with the rest of the crowd at any point during the day (or night), then I’m afraid you’ll miss out on much of the Good Stuff. Instead, partake in hallway hangouts, late night testing games, informal discussions off in a corner, test your skill in the TestLab with James Lyndsay or join one of the AST’s SIG meetings. That’s when the real fun usually comes out for me. And this year was no exception.

Going to CAST

Next week I’ll be at the Conference of the Association for Software Testing (CAST) in San Jose, CA. The first time I attended CAST in 2009, it quickly became my yearly top priority among conferences to attend. This is a “CONFERence” type conference (high emphasis on community and discussions) which usually produces a lot of blog worthy material for its attendees. I will try to write a couple of brief blog entries while at the conference, but if you want to find out what’s being discussed in “real time”, then tune in to #CAST2012 on Twitter, or check out the live webCAST.

If you’re a regular reader of this (little over a month old) blog, then you know that CAST was one of the inspirations behind the recent Let’s Test conference. CAST has always been a great experience for me, and this year’s CAST will be my 4th. So far I have gone home every time with my head filled with new ideas and interesting discussions, waiting to be processed over the following few weeks, and I think this year will be no exception.

This year’s CAST will be the first where I’m taking part in the program committee (together with Anne-Marie Charrett, Sherry Heinze and program chair Fiona Charles), and so I’ve been reading through and evaluating a wide range of great proposals for the workshops and sessions that will make up the first two days of the conference, trying to help put together a really exciting program for this year’s theme, “The Thinking Tester”.

I’ll also be facilitating a few of the workshop and session discussions this year, which will be interesting. In Sweden we’re used to going to conferences to “learn” from the speaker, where everybody takes turns asking their questions in a polite (read: timid) and orderly fashion, much like we do when queuing at the supermarket or movie theater, Swedish style. At CAST, on the other hand, it’s not uncommon for the speaker’s message to be challenged and/or questioned thoroughly. Needless to say, to get a discussion of that kind to flow effectively without derailing, good facilitation is key. Facilitation also enables other things, like making sure that more than one or two people get to talk during the Q&A, or that discussions stay on topic. I like both that kind of attitude and that format, and although I’ve already taken the stage as a speaker at CAST in the past, this will be my first time facilitating “over on that side of the pond”. So yeah, it will be an interesting experience for me for sure.

Looking forward to going there, meeting old friends, listening to interesting talks, facilitating discussions, blogging about it… Looking forward to it all!

Finally, those with a keen eye might have noticed that the headline of this blog has changed recently. The reason is simple… When I resurrected this blog last month, I just put up the first thing that came to mind as the headline. Turns out, the first thing that came to mind was the exact same headline as Shmuel Gershon uses on his (well established and well worth reading) testing blog. We can’t have that. Huib Schoots was kind enough to point this out in his most recent blog post, titled “15 test bloggers you haven’t heard about, but you should…“, where, incidentally, I’m one of the 15. Most of the other blogs on that list are real gems, by the way. One or two I haven’t heard about myself, so I’ll check them out this summer for sure.

Re: Adaptability vs Context-Driven

A couple of days ago, Huib Schoots published a very interesting blog post titled “Adaptability vs Context-Driven“, as part of an ongoing discussion between himself and Rik Marselis. This blog post represents my initial reaction to that discussion.

The long and short of it all seems to be the question of whether using a test framework like TMap, combined with being adaptable and perceptive, is similar to (or the same as) being context-driven.

To me the answer is… no. In fact, I believe TMap and the context-driven school of thought live on opposite ends of the spectrum.

Context-driven testers choose every single aspect of how to conduct their testing by looking first to the details of the specific situation, including the desires of the stakeholders who commissioned the testing. It starts with the context, not a toolbox or a ready-made, prescriptive process.

TMap and other factory methods seem to start with the toolbox and then proceed to remove whatever parts of the toolbox don’t fit the context (“picking the cherries”, as it’s referred to in Huib and Rik’s exchange). At least that’s how I’ve seen it used when it’s been used relatively well. More often than not, however, I’ve worked with (well-intentioned) factory testers who refused to remove what didn’t fit the context, and instead advocated changing the context to fit the standardized process or templates. So, context-imperial, or mildly context-aware at best. Context-driven? Not in the slightest.

When I’m faced with any testing problem, I prefer to start with the context and then build my strategy from the ground up; testing the strategy as I’m building it while making as few assumptions as possible about what will solve the problem beforehand. I value strategizing incrementally together with stakeholders over drawing up extensive, fragile test plans by using prescriptive templates that limit everybody’s thinking.

I’m not saying that “cherries” can’t be found in almost any test framework. But why would I limit myself to looking for cherries in only a single cherry tree, when there’s a whole garden of fruit trees available all around us? Or is that forbidden fruit…? (Yes, I’m looking at you, ISO/IEC 29119.)

Well, now that’s surely a can of worms for another time. To be continued.

If you haven’t already read Huib’s post that I referred to in the beginning, then I suggest you do that now.

Thank you Huib and Rik for starting this discussion and for making it public. Testers need to engage in more honest exchanges like this.

Let’s Test – in retrospect

What just happened? Was I just part of the first-ever European conference on context-driven software testing? It feels like it was only yesterday that I was still thinking “this will never happen”, but it happened, and it’s already been over a month now since it did. So maybe it’s time for a quick (sort of) retrospective? Let’s see, where do I begin…?

Almost a year ago, I did something I rarely do. I made a promise. The reason I rarely make promises is because I’m lousy at following a plan and with many if not most promises, there’s planning involved… So making a promise would force me to both make a plan and then follow it. Impossible.

And yet, almost a year ago now, I found myself at the CAST conference in Seattle, standing in front of 200+ people (and another couple of hundred people listening in via webcast, I’ve been told) and telling the audience that I and some other people from Sweden were going to put on a conference on context-driven testing in 2012, and that it would be just like CAST, only in Europe. And of course we had it all planned out and ready to be launched! Right…? Well… not… really…

At that point we didn’t have a date set, no venue contract in place, no program that we could market, no funding, no facilitators – heck, we didn’t even really have a proper project team. The people who had been discussing this up until then had only started talking about organizing a conference at the 2nd SWET workshop on exploratory testing in Sweden a couple of months earlier. In my mind, it was all still only on a “Yeah, that would be a neat thing to pull off! We should do that!” level of planning or commitment from anyone. At least as far as I was concerned. The other guys might tell you that they had made up their minds long before this, but I don’t think I had.

Anyway, since I was elected (sort of) to go ahead and announce our “plan” (sort of), I guess this is the point where I made up my mind to be a part of what we later named “Let’s Test – The Context-Driven Way”, and over the next couple of months we actually got a project team together and became more or less ready to take on what we had already promised (sort of) to do.

Fast forward a couple of months more. So now we have that committed team of 5 people in place, working from 5 different locations around the country (distributed teams, yay!). We have an awesome website, a Twitter account, a shared project Dropbox and some other boring back office stuff in place. The team members are all testers by trade, ready to create a conference that is truly “by testers, for testers”. Done. What more do we need? Turns out, a conference program is pretty high up on the “must have” list for a conference. Yeah, we should get on that…

I think this was the point where I started to realize just how much support this idea already had out there in the context-driven testing community. Scott Barber, Michael Bolton and Rob Sabourin were three of our earliest “big name” supporters who had heard our announcement at CAST, and many testers from the different European testing communities were also cheering for the idea early on, offering support. A bunch of fabulous tutorial teachers and many fantastic testing thinkers and speakers from (literally) all over the world, who we never dreamed would come all the way to Sweden, also accepted our invitations early on. Our call for papers (which I at first feared wouldn’t get many submissions, since we were a first-time conference) also yielded a superb crop of excellent proposals. So much so that it was almost impossible to pick only a limited number to put on the program.

So while I can say in retrospect that creating a conference program is no small task, it is a heck of a lot easier when you get as awesome a response and as much support from the community as we’ve gotten throughout this past year. It did not go unnoticed, folks!

After we got the program in place, I was still a bit nervous about the venue and residential conference format. Would people actually like to come to this relatively remote venue and stay there for three days and nights, while basically doing nothing else but talk about testing, or would they become bored and long for a night on the town? I had to remind myself of the reasons we decided to go down this route in the first place: CAST and SWET.

CAST is the annual “Conference of the Association for Software Testing”, which uses a facilitated discussion format developed through the LAWST workshops. People who come to CAST usually leave saying it’s been one of their best conference experiences ever, in large part due to (I believe) this format with facilitated discussions after each and every presentation. We borrowed this format for Let’s Test, and with the help of the Association for Software Testing (AST) we were able to bring in CAST head facilitator Paul Holland to offer facilitation training to a bunch of brilliant volunteers. Awesome.

SWET is the “Swedish Workshop on Exploratory Testing”, a small-scale peer workshop that also uses the LAWST-style discussion format. But what makes this sort of gathering different from most regular conferences is that the people who come to the workshop all stay at the location where the workshop is being held, for one or two consecutive days and nights. So after the workshop has concluded for the day, the discussions still don’t stop. People at SWET stay up late and continue to share and debate ideas well into the night, at times using the sunrise as their only cue to get to bed. I believe one of the main reasons for this is… because they can. They don’t have to catch a bus or a cab back to their hotel(s), and when given the opportunity to stay up late and talk shop with other people who are as turned on by software testing as they are, they take it. We wanted to see this made possible for about ten times as many people as we usually see at SWET. Hence the residential format and extensive evening program at Let’s Test, which I believe is a fairly unusual, if not unique, format for a conference of this size. At least in our neck of the woods.

In the end, I personally think we were able to offer a nice blend of these two conference models that had inspired us. People weren’t forced to enter into discussions after sessions, but they were always able and encouraged to participate, and in a structured manner (great job, all facilitators!). Also, people could choose to go to bed early and recharge their batteries after a long day of conferencing, or they could opt in for either high-energy test lab activities or a more mellow and laid-back art tour around the venue campus (to name but a couple of the well-attended evening activities) before heading for the bar. I think I managed to get to bed at around 02.00 each night, but I know that some folks stayed up talking for a couple of hours beyond that each night too.

Wrapping up this little retrospective, I’d like to say thank you to our sponsors who, among other things, helped make the evening events such a well-appreciated part of the conference experience and who all engaged actively in the conference, something we as organizers truly valued. Finally, a special shout-out to the very professional Runö venue crew and kitchen staff, who readily helped us out whenever we needed it. You made the execution of this event a total joy.

I’m very happy about how Let’s Test turned out. It exceeded my own expectations for sure. Judging by the feedback we saw on Twitter during the event, and in the blogosphere afterwards, I’d say it looks like most who attended were pretty ok with the experience as well. Check out the blog links we’ve gathered on the Let’s Test 2012 Recap page and judge for yourselves. Seriously, it’s been extremely rewarding to read through all these blog posts. Thank you for that.

Plans are already well underway for next year’s conference. We’re delighted that both James Bach and Johanna Rothman have signed on as two of our keynote speakers, and we’ll announce a call for proposals sometime after the summer. I encourage all of you who sent something in last year to do so again. Oh, and you can sign up right now for Let’s Test 2013 and catch the advantageous first responder rate. A bunch of people already have, so you’ll be in good company.

One final thing… We know a good deal about what people liked at Let’s Test 2012, but no doubt there are also a few things that we can and should improve. Let us know.

It’s been a pleasure. See you all there next year I hope!