Category Archives: Context-Driven Testing

Report from CAST 2012

This year’s Conference of the Association for Software Testing (CAST) is now in the books. I’m returning home with a head full of semi-digested thoughts and impressions (as well as 273 photos in my camera and an undisclosed number of tax-free items in my bag) and will briefly summarize a few of them here while I try to get back onto Europe time.

The Trip
I’m writing this while on the train heading home on the last leg of this trip. Wow, San Jose sure is far away. Including all the trains, flights and layovers… I’d say it’s taken about 24 hours door-to-door, in each direction. That should tell you a bit about how far I and others are willing to go for solid discussions about testing (and I know there are people with even worse itineraries than that).

The Venue
I arrived at the venue a little over a day in advance in order to have some time to fight off that nasty 9-hour jet lag. Checked into my room. Then immediately switched rooms, since the previous guest had forgotten to bring his stuff out, even though the hotel’s computer said that the room had been vacated. Still got my bug magnetism in working order, apparently.

CAST was held at the Holiday Inn San Jose Airport this year. The place was nice enough. Nothing spectacular, but it did the job. The hotel food was decent and the coffee sucked as badly as ever. Which I expected it would, but… there were no coffee shops within a couple of miles, as far as I could tell(!) I’m strongly considering bringing my own java the next time I leave Europe. It’s either that or I’ll have to start sleeping more, which just doesn’t work for me at any testing event.

The Program
I’m not going to comment much on the program itself since I helped put it together. It just wouldn’t make sense; I’d be too biased. I’m sure there will be a number of other CAST blog posts out there soon that go more in depth (check my blogroll in the right-hand sidebar, for instance). I’ll just say that I got to attend a couple of cool talks on the first day of the conference. One of them was by Paul Holland, who talked about his experiences interviewing testers and the methods he’s been using successfully over his past 100+ interviews. Something I’m very interested in myself. I actually enjoy interviews, from both sides of the table.

The second day I got “stuck” (voluntarily) in a breakout room right after the first morning session. A breakout room is something we use at CAST when the discussion after a session runs long and other speakers need the room. Rather than stopping a good discussion, we move it to a different room and keep at it as long as it makes sense and the participants have the energy for it. Anyway, this particular breakout featured me and two or three others who wanted to continue discussing with Cem Kaner after his presentation on Software Metrics. We kept at it up until lunch, and after that I was kind of spent, so I opted to “help out” (a.k.a. take up space) behind the registration desk for the rest of the day. Which was fun too!

The third day was made up of a number of full day tutorials. I didn’t participate in any of them though, so again you’ll have to check other blogs (or #CAST2012 on Twitter) to catch impressions from them.

Facilitation
CAST makes use of facilitated discussions after each session or keynote. At least one third of the allotted time for any speaker is reserved for discussions. This year I volunteered to facilitate a couple of sessions. I ended up facilitating a few talks in the Emerging Topics track (short talks) as well as a double-session workshop. It was interesting, but I think I need to sign up for more actual sessions next year to really get a good feel for it (Emerging Topics didn’t have a big audience when I was there, and the workshop didn’t need much in the way of facilitation).

San Jose / San Francisco
We also had time to see a little bit of both San Jose and San Francisco on this trip, which was nice. I only got to downtown San Jose on the Sunday leading up to the conference, so naturally things were a bit quiet. I guess it’s not like that every day of the week(?)

San Francisco turned out to be an interesting place with sharp contrasts. The Mission district, Market Square and Fisherman’s Wharf each had their own personality, with good and bad sides to them. Anyway, good food, nice drinks and good company together with a few other testers can make any place a nice place.

Summary
As every year, it’s the company of thoughtful, engaged testers that makes CAST great. If you treat it like any other conference and just go to the sessions and then back to your room without engaging with the rest of the crowd at any point during the day (or night), then I’m afraid you’ll miss out on much of the Good Stuff. Instead, partake in hallway hangouts, late-night testing games and informal discussions off in a corner, test your skill in the TestLab with James Lyndsay, or join one of the AST’s SIG meetings. That’s when the real fun usually comes out for me. And this year was no exception.

Re: Adaptability vs Context-Driven

A couple of days ago, Huib Schoots published a very interesting blog post titled “Adaptability vs Context-Driven”, as part of an ongoing discussion between himself and Rik Marselis. This blog post represents my initial reaction to that discussion.

The long and short of it all seems to be the question of whether using a test framework like TMap, combined with being adaptable and perceptive, is similar to (or the same as) being context-driven.

To me the answer is… no. In fact, I believe TMap and the context-driven school of thought live on opposite ends of the spectrum.

Context-driven testers choose every single aspect of how to conduct their testing by looking first to the details of the specific situation, including the desires of the stakeholders who commissioned the testing. It starts with the context, not a toolbox or a ready-made, prescriptive process.

TMap and other factory methods seem to start with the toolbox and then proceed to remove whatever parts of the toolbox don’t fit the context (“picking the cherries”, as it’s referred to in Huib and Rik’s exchange). At least that’s how I’ve seen it used when it’s been used relatively well. More often than not, however, I’ve worked with (well-intentioned) factory testers who refused to remove what didn’t fit the context, and instead advocated changing the context to fit the standardized process or templates. So: context-imperial, or mildly context-aware at best. Context-driven? Not in the slightest.

When I’m faced with any testing problem, I prefer to start with the context and then build my strategy from the ground up, testing the strategy as I’m building it and making as few assumptions beforehand as possible about what will solve the problem. I value strategizing incrementally together with stakeholders over drawing up extensive, fragile test plans from prescriptive templates that limit everybody’s thinking.

I’m not saying that “cherries” can’t be found in almost any test framework. But why would I limit myself to looking for cherries in only a single cherry tree, when there’s a whole garden of fruit trees available all around us? Or is that forbidden fruit…? (Yes, I’m looking at you, ISO/IEC 29119.)

Well, now that’s surely a can of worms for another time. To be continued.

If you haven’t already read Huib’s post that I referred to in the beginning, then I suggest you do that now.

Thank you Huib and Rik for starting this discussion and for making it public. Testers need to engage in more honest exchanges like this.

Let’s Test – in retrospect

What just happened? Was I just part of the first-ever European conference on context-driven software testing? It feels like it was only yesterday that I was still thinking “this will never happen”, but it happened, and it’s already been over a month now since it did. So maybe it’s time for a quick (sort of) retrospective? Let’s see, where do I begin…?

Almost a year ago, I did something I rarely do. I made a promise. The reason I rarely make promises is that I’m lousy at following a plan, and with many if not most promises, there’s planning involved… So making a promise would force me to both make a plan and then follow it. Impossible.

And yet, almost a year ago now, I found myself at the CAST conference in Seattle, standing in front of 200+ people (and another couple of hundred listening in via webcast, I’ve been told), telling the audience that I and some other people from Sweden were going to put on a conference on context-driven testing in 2012 and that it would be just like CAST, only in Europe. And of course we had it all planned out and ready to be launched! Right…? Well… not… really…

At that point we had no date set, no venue contract in place, no program that we could market, no funding, no facilitators – heck, we didn’t even really have a proper project team. The people who had been discussing this had only started talking about organizing a conference at the 2nd SWET workshop on exploratory testing in Sweden a couple of months earlier. In my mind, it was all still only at a “Yeah, that would be a neat thing to pull off! We should do that!” level of planning or commitment. At least as far as I was concerned. The other guys might tell you that they had made up their minds long before this, but I don’t think I had.

Anyway, since I was elected (sort of) to go ahead and announce our “plan” (sort of), I guess this was the point where I made up my mind to be a part of what we later named “Let’s Test – The Context-Driven Way”. Over the next couple of months we actually got a project team together and became more or less ready to take on what we had already promised (sort of) to do.

Fast forward a couple of months more. So now we have that committed team of 5 people in place, working from 5 different locations around the country (distributed teams, yay!). We have an awesome website, a Twitter account, a shared project Dropbox and some other boring back office stuff in place. The team members are all testers by trade, ready to create a conference that is truly “by testers, for testers”. Done. What more do we need? Turns out, a conference program is pretty high up on the “must have” list for a conference. Yeah, we should get on that…

I think this was the point where I started to realize just how much support this idea already had out there in the context-driven testing community. Scott Barber, Michael Bolton and Rob Sabourin were three of our earliest “big name” supporters who had heard our announcement at CAST, and many testers from the different European testing communities were also cheering for the idea early on, offering support. A bunch of fabulous tutorial teachers and many fantastic testing thinkers and speakers from (literally) all over the world, who we never dreamed would come all the way to Sweden, also accepted our invitations early on. Our call for papers (which I at first feared wouldn’t get many submissions, since we were a first-time conference) also yielded a superb crop of excellent proposals. So much so that it was almost impossible to pick only a limited number to put on the program.

So while I can say in retrospect that creating a conference program is no small task, it is a heck of a lot easier when you get the kind of awesome response and support from the community that we’ve gotten throughout this past year. It did not go unnoticed, folks!

After we got the program in place, I was still a bit nervous about the venue and residential conference format. Would people actually want to come to this relatively remote venue and stay there for three days and nights, basically doing nothing else but talking about testing, or would they become bored and long for a night on the town? I had to remind myself of the reasons we decided to go down this route in the first place: CAST and SWET.

CAST is the annual “Conference of the Association for Software Testing”, which uses a facilitated discussion format developed through the LAWST workshops. People who come to CAST usually leave saying it’s been one of their best conference experiences ever, in large part due to (I believe) this format with facilitated discussions after each and every presentation. We borrowed this format for Let’s Test, and with the help of the Association for Software Testing (AST) we were able to bring in CAST head facilitator Paul Holland to offer facilitation training to a bunch of brilliant volunteers. Awesome.

SWET is the “Swedish Workshop on Exploratory Testing”, a small-scale peer workshop that also uses the LAWST-style discussion format. But what makes this sort of gathering different from most regular conferences is that the people who come to the workshop all stay at the location where the workshop is held, for one or two consecutive days and nights. So after the workshop has concluded for the day, the discussions still don’t stop. People at SWET stay up late and continue to share and debate ideas well into the night, at times using the sunrise as their only cue to get to bed. I believe one of the main reasons for this is… because they can. They don’t have to catch a bus or a cab back to their hotel(s), and when given the opportunity to stay up late and talk shop with other people who are as turned on by software testing as they are, they take it. We wanted to make this possible for about ten times as many people as we usually see at SWET. Hence the residential format and extensive evening program at Let’s Test, which I believe is a fairly unusual, if not unique, format for a conference of this size. At least in our neck of the woods.

In the end, I personally think we were able to offer a nice blend of the two conference models that had inspired us. People weren’t forced to enter into discussions after sessions, but they were always able and encouraged to participate, and in a structured manner (great job, all facilitators!). Also, people could choose to go to bed early and recharge their batteries after a long day of conferencing, or they could opt in for either high-energy test lab activities or a more mellow, laid-back art tour around the venue campus (to name but a couple of the well-attended evening activities) before heading for the bar. I think I managed to get to bed at around 2:00 AM each night, but I know that some folks stayed up talking for a couple of hours beyond that too.

Wrapping up this little retrospective, I’d like to say thank you to our sponsors who, among other things, helped make the evening events such a well-appreciated part of the conference experience, and who all engaged actively in the conference itself, something we as organizers truly appreciated. Finally, a special shout-out to the very professional Runö venue crew and kitchen staff, who readily helped us out whenever we needed it. You made the execution of this event a total joy.

I’m very happy about how Let’s Test turned out. It exceeded my own expectations for sure. Judging by the feedback we saw on Twitter during the event, and in the blogosphere afterwards, I’d say it looks like most who attended were pretty ok with the experience as well. Check out the blog links we’ve gathered on the Let’s Test 2012 Recap page and judge for yourselves. Seriously, it’s been extremely rewarding to read through all these blog posts. Thank you for that.

Plans are already well underway for next year’s conference. We’re delighted that both James Bach and Johanna Rothman have signed on as two of our keynote speakers. We’ll announce a call for proposals sometime after the summer, and I encourage all of you who sent something in last year to do so again. Oh, and you can sign up right now for Let’s Test 2013 and catch the advantageous first responder rate. A bunch of people already have, so you’ll be in good company.

One final thing… We know a good deal about what people liked at Let’s Test 2012, but no doubt there are also a few things that we can and should improve. Let us know.

It’s been a pleasure. See you all there next year, I hope!