Category Archives: Conferences

Peer conference awesomeness at SWETish

I spent this past weekend at a peer conference. This one marks my 5th peer conference, but it’s been a long while since I was at my 4th. In fact, it’s been four years since SWET4. Peer conferences are awesome though, as they let the participants really go deep and have thorough, meaningful conversations over one or more days, in a group small enough to make such discussions possible.

Ever since I moved to Linköping a few years ago, I had promised to do my part in organizing a local peer conference for the testing community in and around Linköping, and this weekend we finally got that project off the ground! We decided to call this particular conference SWETish instead of making it another increment of the regular SWET. The reasons: we wanted to keep participation first and foremost within the local communities, whereas the regular SWET conferences invite people from all over the country, and we wanted to keep the theme broad and inclusive, whereas SWET has already been through a fair number of iterations (SWET7 being the latest) and we didn’t want to break its run of more specific topics in exchange for a general one that has already been covered. Maybe next time we’ll put on “SWET8” though, if nobody in the Meetup.com group beats us to it (hint hint, wink wink, nudge nudge).

So, sort of but not quite a SWET conference i.e. SWETish (with its eponymous Twitter hashtag #SWETish).

The whole thing took place at the Vadstena Abbey Hotel, which is made up of a beautiful set of buildings, some dating back to the 12th, 13th and 14th century. From an organizer standpoint, I can certainly recommend this venue. Nice staff, cozy environment and above average food. And a nice historic atmosphere too of course. (Click the link in the tweet below for a couple of snapshots.)

When I sent out the initial invitations to this peer conference, I had my mind set on getting a total of 15 participants, as that seemed to be a good number of people to ensure that all speakers get plenty of questions and that there would be a good mix of experiences and viewpoints, while at the same time not being so many that anybody would be forced to sit quiet for too long. However, because a few people who had initially signed up couldn’t make it in the end, we ended up as a group of “only” 10 people. Turns out that’s an excellent number! Most if not all of us agreed that the low number of participants helped create an environment where everybody got relaxed with each other really quickly, which in turn helped when discussions and questions got more critical or pointed, without destroying the mood or productivity of those conversations.

Another pleasant surprise was that we only got through (almost) three presentations + Open Season (facilitated Q&A) during the conference (1.5 days). If memory serves, the average at my past peer conferences is four, and sometimes we even start a fifth presentation and Q&A before time runs out. What I liked about us only getting through three is that it’s a testament to how talkative and inquisitive the group was, even though 5 out of 10 participants were at their first ever peer conference! I facilitated the first presentation myself, so I can tell you that in that session alone we had 11 unique discussion threads (green cards) and 48 follow-up questions (yellow cards), plus quite a few legit red cards. For those of you familiar with the K-cards facilitation system, you can tell that this wasn’t a quiet group who only wanted to listen to others speak. Which is great, because that’s the very thing that makes peer conferences so fantastically rewarding.


Apart from the facilitated LAWST-style sessions, we also spent 1 hour on Lightning Talks, to make sure that everyone got to have a few minutes of “stage time” to present something of their own choosing.

The evening was spent chatting around the dinner table, in the spa and in smaller groups throughout the venue until well past midnight. And even though we’d spent a full day talking about testing, most of the conversations were still about testing! How awesome is that?

If you want to read more about what was actually said during the conference, I suggest you check out the Twitter hashtag feed, or read Erik Brickarp’s report, which goes more into the content side of things. This blog post is more about evangelizing the concept itself and providing some reflections from an organizer’s perspective. Maybe I should have mentioned that at the start? Oops.

A peer conference is made possible by the active participation of each and every member of the conference, and as such, credit for all resulting material, including this blog post, goes to the entire group. Namely, and in alphabetical order:

  • Agnetha Bennstam
  • Anders Elm
  • Anna Elmsjö
  • Björn Kinell
  • Erik Brickarp
  • Göran Bakken
  • Johan Jonasson
  • Kristian Randjelovic
  • Morgan Filipsson
  • Tim Jönsson

Thank you to all the participants, including the few of you who wanted to be there but couldn’t for reasons outside of your control. Next time! And thank you to my partners in crime in the organizing committee: Erik Brickarp, Anna Elmsjö and Björn Kinell.

There! You’ve now reached the end of my triennial blog post. See you in another three years! Actually, hopefully I’ll see you much sooner. The sharp dip in my blogging frequency has partly been due to the continuous deployment of new family members in recent years, which has forced me to cut back on more than one extracurricular activity.

Post below in the comments section if you have comments or questions about peer conferences, or want some help organizing one. I’d be happy to point you in the right direction!

Report from EuroSTAR 2013

I had the opportunity to speak at EuroSTAR this year, which made the decision to go a bit easier than it normally is. After all, EuroSTAR is a pretty pricey party to attend compared to many other conferences, such as Øredev, which ran almost in parallel with EuroSTAR this year.

Anyway, this is a brief report from the conference with some of my personal take-aways, impressions and opinions about the whole thing.

People

First of all, to me, conferences are about the people you meet there. Sure, it’s good if there’s an engaging program and properly engaged speakers, but my main take-aways are usually from the hallway hangouts or late night discussions with whoever happens to be up for a chat. This year, I think the social aspect at EuroSTAR was great. I’ve been to EuroSTAR twice before, in 2008 and 2009, but this was the first one where I didn’t think the size of the conference got in the way of meeting new and old friends. It also made me a bit proud to see that the actual discussions and open seasons seemed pretty much dominated by the people in my community this year. This, I think, has to do with the way we normally interact with each other, and not with any bias in the program. On the contrary, I think the program committee had put together a very well-balanced program with a lot of different views and testing philosophies represented.

Tutorials

The first day and a half at EuroSTAR was devoted to tutorials. I rarely attend tutorials unless I know they will be highly experiential, but this year I opted for one with relevance to my current testing field, medical software, namely “Questioning Auditors Questioning Testing, Or How To Win Friends And Influence Auditors” with James Christie. My main take-aways were not about how to relate to auditors, though, but rather about how to think about risk. James pointed out that a lot of the time, we use variations of this traditional model to assess risk:

Risk Matrix

The problem with that model is that it scores high impact/low probability risks and low impact/high probability risks the same. Sure, if something is likely to happen we’d probably want to take care of that risk, even if its impact is “low”. But is that really as important as fixing something that would be catastrophic but has only a small probability of happening? Sometimes yes, sometimes no, right? Either way, the model is too simplistic. The problem lies in our (in)ability to perceive and assess risk, something I think is illustrated quite nicely in the following table.
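To make the flaw concrete, here’s a minimal sketch (my own illustration, not from the tutorial) of the traditional impact × probability scoring, showing how it collapses two very different risks onto the same number:

```python
def naive_risk_score(impact: int, probability: int) -> int:
    """Classic risk-matrix scoring: both axes on a 1-5 scale,
    risk = impact x probability."""
    return impact * probability

# A rare catastrophe and a frequent nuisance...
rare_catastrophe = naive_risk_score(impact=5, probability=1)
frequent_nuisance = naive_risk_score(impact=1, probability=5)

# ...get the exact same score, so the model can't tell them apart.
print(rare_catastrophe, frequent_nuisance)  # 5 5
```

Any prioritization built on that single number alone has already thrown away the distinction that matters most.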


O’Riordan, T, and Cox, P. 2001. Science, Risk, Uncertainty and Precaution. University of Cambridge.

This is an area I feel I want to dig deeper into. If you have any tips on reading material, please share in the comments.

By the way, James Christie also has a blog that I’ve started reading myself quite recently. His latest blog post is a real nugget for sure: Testing standards? Can we do better?

Keynotes & Sessions

New for this year’s EuroSTAR was that the conference chair (Michael Bolton) had pushed for the use of K-cards and facilitated discussions after each talk and keynote: 30-minute talks, 15 minutes of open season Q&A. Nice! I think that’s a very important improvement for EuroSTAR (though full hour slots would be even better). I mean, if you’re not given the opportunity to challenge a speaker on what he/she is saying, then what’s the point? Argument is a very important tool if we want to move our field forward, and it’s so rare that we in the global testing community get to argue face to face. We need facilitated discussions at every conference, not just a few. I’m glad to see that EuroSTAR is adopting what started at the LAWST peer workshops, and I do hope they stick with it!

All in all, I think the best sessions out of those I attended were:

Laurent Bossavit, who made a strong case against accepting unfounded claims. He did, for instance, bring up the age-old “truth” about how fixing a bug becomes exponentially more expensive as it escapes from the requirements phase into the design phase (and so on) of a project. Turns out, the evidence for that truth is fairly poor, and it only applies to certain types of bugs.

Keith Klain, who talked about overcoming organizational biases. His 5 points to follow when attempting to change company culture: 1. Determine your value system. 2. Define principles underpinned by your values. 3. Create objectives aligned to your business. 4. Be continually self-reflective. 5. Do not accept mediocrity. Changing culture is hard, but you might want (or need) to do it anyway. If you do, keep in mind point number 6: Manage your own expectations.

Ian Rowland, who gave a very entertaining talk about the power of IT, “Impossible Thinking”. Seemingly similar to lateral thinking (Edward de Bono), Impossible Thinking challenges you to not stop thinking about something just because it appears impossible, but rather move past that limitation and think about how the impossible might become possible. The thinking style can also be used to provoke creative thinking or new solutions, like how thinking about a phone that can only call 5 different phone numbers (a ridiculous idea at first glance) provoked the creation of a mobile subscription plan that let you call 5 friends free of charge. An idea that allegedly boosted sales for that particular carrier in a way that left competitors playing catch-up for months.

Rob Lambert gave an experience report describing in detail his company’s journey from releasing only a couple of times per year to releasing once per week. It was a very compelling story, but unfortunately I find myself currently working in a very different context. True experience reports are always a treat though.

Then of course I had my own presentation: Test Strategy – Why Should You Care?, where I tried to expand a bit on four main points: 1. Why most strategies I’ve seen are terrible and not helpful. 2. A model for thinking about strategies in a way that they might become helpful. 3. Characteristics of good strategy. 4. Arguments why you should care about good strategy. All in all, apart from maybe trying to pack too much into 30 minutes, I think it went ok. The room was packed too, which was nice.

I did sample quite a number of other sessions too, but it’s difficult to sum them all up with any sort of brevity so I won’t even try. Instead, I’ll provide a few quotes from the talks I found the most rewarding:

“If it hurts, keep doing it.” – Rob Lambert (Learning/change heuristic)

“Condense all the risks of the corporation into a single metric.” – Rick Buy, Enron (anti-pattern)

“Reality isn’t binary. We don’t know everything in advance. Observe, without a hypothesis to nullify.” – Rikard Edgren

“The questions we can answer with a yes or no, are probably those that don’t matter, or matter less.” – James Christie, paraphrased

“Governance shouldn’t involve day to day operational management by full-time executives.” – James Christie

“Comply or explain” vs. “comply or be damned”. – UK vs. US approaches to auditing described by James Christie

“Self-defense skill for testers: ‘citation needed’. Also: Curiosity, Skepticism, Tenacity.” – Laurent Bossavit, paraphrased, warning against accepting unsubstantiated claims

“Do not accept mediocrity.” – Keith Klain

“Culture eats strategy for breakfast.” – Keith Klain

“When you start to think about automation for any other reason than to help testing, you might be boxing yourself in.” – Iain McCowatt

“Rational thinking is good if you want rational results.” – Ian Rowland

“Thought Feeders.” – Michael Bolton proposed an improvement to the term Thought Leaders

The Good, the Bad and the Ugly

This was the best EuroSTAR to date in my experience. The program was better than ever and more diverse with a good mix of testing philosophies being represented. The facilitated discussions elevated the proceedings and prevented (most) speakers from running away from arguments, questions and contrasting ideas. I also liked that the community and social aspects of the conference appear to have been strengthened since my last EuroSTAR in 2009. The workshops, the do-over session and the community hub were all welcome additions to me. And the test lab looked as brilliant as ever, and I think it was a really neat idea to have it out in the open space it was in, rather than being locked away in a separate room. Expo space well used.


While I applaud the improvements, there are still things that bother me about some EuroSTAR fundamentals. The unreasonably large and hard-to-avoid Expo, which strangely enough is called “the true heart of the conference” in the conference pamphlet, is one such thing. Having hardly any opportunity to sit down and have my lunch at a table is another. Basic stuff, and I think the two are connected: seated attendees wouldn’t spend enough time in the Expo, so eating while standing is preferred to give the vendors enough face-time with attendees. To me, this is not only annoying; I also think it’s actually a disadvantageous setup for both vendors and attendees. My advice: have the Expo connected to the conference, but off to the side. Make it easy and fun for me to attend the Expo if I choose, but also easy for me to avoid it. For attendees, the true heart of any conference is the conferring, and we would appreciate a truly free choice of where and how to spend our limited time at what was otherwise a great conference this year.

Oh, and Jerry Weinberg won a luminary award for his contributions to the field over the years. If you develop software and haven’t read his books yet, you’re missing out. He’s a legend, and rightly so. Just saying.


Finally, if you haven’t had enough of EuroSTAR ramblings yet, my friend Carsten Feilberg has written a blog post of his own about his impressions at EuroSTAR that you can check out, or have a look at Kristoffer Nordström’s ditto blog post.

Report from SWET4

The 4th SWET (Swedish Workshop on Exploratory Testing) happened this past weekend at Kilsbergen in Örebro, Sweden. The theme for this 4th edition of the workshop was “Exploratory Testing and Models”, hosted by Rikard Edgren, Tobbe Ryber and Henrik Emilsson (thanks guys!). If you haven’t heard of SWET before, a brief way of describing it would be to say that it’s a peer conference based on the LAWST format, where we meet to discuss the ins and outs of Exploratory Testing in order to challenge each other and increase our own understanding of the topic. SWET has many siblings around the world and the family of peer conferences on software testing keeps on growing, which is a delightful thing to see! Peer conferences rock. To my mind, there’s no better way to learn new things about your craft than to present an experience report and have it picked apart and challenged by your peers.

Friday (Pre-conference)
Most people arrived on the evening before and spent a couple of hours together eating dinner and chatting over a few drinks. The venue had a lovely common room with a cozy fireplace and comfy chairs so, as usual at these events, several people stayed up chatting happily well into the night without a care.

Saturday (Day 1)
The conference started off with a moment of silence for our friend and testing peer Ola Hyltén who recently passed away in a tragic car accident. Having met Ola myself for the first time at SWET2, that felt like an appropriate way of opening the conference. Then after a round of check-ins the schedule proceeded with the first experience report.

First up was Anna Elmsjö, who talked about making use of business and process models. Anna described her process of questioning the diagrams and adding questions and information to the model to keep track of things she wanted to test. Open season contained an interesting thread about requirements, where someone stated that it sounded as if Anna’s testing could be seen as a way of adding or sneaking in new requirements, or that someone might feel that she was doing so. A comment on that question pointed out in turn that asking questions about the product doesn’t mean requirements are being added, but that they are being discovered, which is an important distinction to keep in mind in my opinion.

The second presentation was from Maria Kedemo, who talked about what she called model-based exploratory interviewing for hiring testers. Maria works as a test manager and has been heavily involved in recruiting for her employer during the past year. When preparing for the process of hiring, Maria explained, she drew on her testing experiences to see if she could identify some of her habits and skills as a tester and apply them to interviewing, e.g. different ways of searching for and finding new information. My take-aways include some thoughts on how modeling what you already have can help you find out what you really need (not just what you want, or think you want). Also, a reaffirmation of the importance of updating your models as your understanding of what you’re modeling increases, sort of like how you would (hopefully) update a plan when reality changes.

Last presentation of the day, Saam Koroorian talked about using the system map, which is a model of the system, to drive testing. He also described how his organization has moved from what he called an activity or artifact-driven kind of testing to more information-driven testing. I interpreted these labels more as descriptors of how the surrounding organization would view testing. Either it’s viewed as an activity that is supposed to provide arbitrary measurements based on artifacts (like test cases) to show some kind of (false) progress, i.e. bad testing, or it’s viewed as an activity that is expected to provide information, i.e. better testing (or simply “testing”).

Saam continued to talk about how his team had adopted James Bach’s low-tech testing dashboard concepts of assessing and showing coverage levels and testing effort of different areas which led to many new green cards (new discussion threads). Among them was a thread of mine that discussed the importance of taking the time dimension into account and how to visualize “freshness” and reliability of information as time passes (assuming the system changes over time). This is something I’ve recently discussed with some other colleagues to solve a similar problem at a client which I found very stimulating. Might turn that into a blog post of its own one day (when the solution is finished).

Saam also noted that as his organization was moving towards an agile transition at the time, sneaking in new thinking and ideas in the testing domain was easier than usual, since the organization was already primed for change in general. Interesting strategy. Whatever works. 🙂

Lightning Talks
Day 1 was concluded with a 60-minute round of lightning talks, which based on the number of speakers meant that each person got 5 minutes to run their presentation (including questions). Lots of interesting topics in rapid progression, like an example of how to use free tools to create cheap throw-away test scripts as an exploration aid (James Bach), or how to use the HTSM quality characteristics to discuss quality with customers and figure out their priorities (Sigge Birgisson). Erik Brickarp gave a Lightning Talk on visualization that he’s since turned into a blog post over at his blog. My own Lightning Talk was about helping testers break stale mental models and get out of creative ruts through mix-up testing activities (a.k.a. cross-team testing). If I’m not mistaken, all participants who weren’t already scheduled to give a presentation gave a Lightning Talk, which was nice. That way everybody got to share at least one or two of their ideas and experiences.

In the evening, the group shared a rather fantastic “Black Rock” dinner after which the discussions continued well into the wee hours of the night, despite my best efforts to get to bed at a reasonable hour for once.

Sunday (Day 2)
After check-in on day 2, the first order of business was to continue through the stack of remaining threads from Saam’s talk that we didn’t have time to get to the day before. I think this is a pretty awesome part of this conference format. Discussions continue until the topic is exhausted, even if we have to continue the following day. There’s no escape. 😉

The first (and only, as it turned out) presentation of day 2 came from James Bach, who told a story about how he had done exploratory modeling of a class 3 medical device through the use of its low level design specification to come up with a basis for his subsequent test design. During open season we also got a lot more information about his overarching test strategy. It was a fascinating story that I won’t go into much detail on here, but you should ask him to tell it to you if you get a chance. You’ll get a lot of aha! moments. My biggest takeaway from that open season discussion was a reaffirmation of something I’ve known for quite some time, but haven’t been able to put into words quite so succinctly: “Formal testing that’s any good is always based on informal testing”. Also worth considering: Informal testing is based in play. As is learning.

Formal testing is like the opening night of a big show. It becomes a success because(/if) it’s been rehearsed. And informal testing provides that rehearsal. Skip rehearsing at your peril.

So how do you go from playing into making formal models? You practice! And according to James, a good way to practice is to start by drawing state models of various systems. Like for instance this über-awesome Flash game. When you’ve modeled the game, you can start to play around with it in order to start generating a rich set of test ideas. Asking “what if”-style questions like “What happens if I go from here to here?” or “I seem to be able to do this action over here, I wonder if I can do it over here as well?” and so on. What factors exist, what factors can exist, which factors matter?
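As a sketch of that practice, here’s what a drawn state model might look like once written down, and one simple way to mine it for “what if” questions. The states and actions below are invented for illustration (they’re not from the actual game):

```python
from itertools import product

# A tiny state-transition model of a hypothetical game:
# state -> {action: next_state}
model = {
    "menu":      {"start": "playing"},
    "playing":   {"pause": "paused", "die": "game_over"},
    "paused":    {"resume": "playing", "quit": "menu"},
    "game_over": {"restart": "playing", "quit": "menu"},
}

# One cheap source of test ideas: try every action in every state,
# including actions the model says shouldn't apply there.
all_actions = {a for transitions in model.values() for a in transitions}
for state, action in product(model, sorted(all_actions)):
    expected = model[state].get(action)
    if expected is None:
        print(f"What happens if I try '{action}' while in '{state}'?")
    else:
        print(f"Does '{action}' in '{state}' really lead to '{expected}'?")
```

Even this toy model yields far more undefined state/action combinations than defined ones, and each undefined combination is exactly the kind of “I seem to be able to do this over here, can I do it over there too?” question worth asking of the real system.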

I want to finish off with a final couple of quick take-aways from the weekend. First, a “test case” can be defined as an instance or variation of a test or test idea. By using that definition you’ll be able to encompass many or most of the varying things and containers that people call test cases. And finally, regarding requirements… Challenge the assumption that tests can be derived from the requirements. The tests aren’t in the requirements and thus can’t be derived from them. You can however, construct tests that are relevant in order to test the requirements and obtain information about the product, usually based on risk. While on the subject remember that, usually, requirements > requirements document.


Thank you to all the participants at SWET4: Anna Elmsjö, Simon Morley, Tobbe Ryber, Oscar Cosmo, Erik Brickarp, James Bach, Johan Jonasson, Sigge Birgisson, Maria Kedemo, Rikard Edgren, Joakim Thorsten, Martin Jansson, Saam Koroorian, Sandra Camilovic and Henrik Emilsson.

That’s it. That’s all that happened. (No, not really, but I’ll have to save some things for later posts!)

Report from CAST 2012

This year’s Conference of the Association for Software Testing (CAST) is now in the books. I’m returning home with a head full of semi-digested thoughts and impressions (as well as 273 photos in my camera and an undisclosed number of tax free items in my bag) and will briefly try to summarize a few of them here while I try to get back on Europe time.

The Trip
I’m writing this while on the train heading home on the last leg of this trip. Wow, San Jose sure is far away. Including all the trains, flights and layovers… I’d say it’s taken about 24 hours door-to-door, in each direction. That should tell you a bit about how far I and others are willing to go for solid discussions about testing (and I know there are people with even worse itineraries than that).

The Venue
I arrived at the venue a little over a day in advance in order to have some time to fight off that nasty 9 hour jet lag. Checked in to my room. Then immediately switched rooms, since the previous guest had forgotten to bring his stuff out, even though the hotel’s computer said the room had been vacated. Still got my bug magnetism in working order, apparently.

CAST was held at the Holiday Inn San Jose Airport this year. The place was nice enough. Nothing spectacular, but it did the job. The hotel food was decent and the coffee sucked as badly as ever. Which I expected it would, but… there were no coffee shops within a couple of miles as far as I could tell(!) I’m strongly considering bringing my own java the next time I leave Europe. It’s either that or I’ll have to start sleeping more, which just doesn’t work for me at any testing event.

The Program
I’m not going to comment much on the program itself since I helped put it together. Just wouldn’t make sense since I’d be too biased. I’m sure there will be a number of other CAST blog posts out there soon that will go more in depth (check through my blogroll down in the right hand sidebar for instance). I’ll just say that I got to attend a couple of cool talks on the first day of the conference. One of them was with Paul Holland who talked about his experiences with interviewing testers and the methods he’s been using successfully for the past 100+ interviews. Something I’m very interested in myself. I actually enjoy interviews, from both sides of the table.

The second day I got “stuck” (voluntarily) in a breakout room right after the first morning session. A breakout room is something we use at CAST when a discussion after a session takes too long and there are other speakers who need the room. Rather than stopping a good discussion, we move it to a different room and keep at it as long as it makes sense and the participants have the energy for it. Anyway, this particular breakout featured myself and two or three others who wanted to continue discussing with Cem Kaner after his presentation on Software Metrics. We kept at it up until lunch and after that I was kind of spent, so I opted to “help out” (a.k.a take up space) behind the registration desk for the rest of the day. Which was fun too!

The third day was made up of a number of full day tutorials. I didn’t participate in any of them though, so again you’ll have to check other blogs (or #CAST2012 on Twitter) to catch impressions from them.

Facilitation
CAST makes use of facilitated discussions after each session or keynote. At least one third of the allotted time for any speaker is reserved for discussions. This year I volunteered to facilitate a couple of sessions, and ended up facilitating a few talks in the Emerging Topics track (short talks) as well as a double-session workshop. It was interesting, but I think I need to sign up for more actual sessions next year to really get a good feel for it (Emerging Topics didn’t have a big audience when I was there, and the workshop didn’t need much in the way of facilitation).

San Jose / San Francisco
We also had time to see a little bit of both San Jose and San Francisco on this trip, which was nice. I only got to downtown San Jose on the Sunday leading up to the conference, so naturally things were a bit quiet. I guess it’s not like that every day of the week(?)

San Francisco turned out to be an interesting place with sharp contrasts. The Mission district, Market Square and Fisherman’s Wharf all had their own personalities and some good and bad things to them. Anyway, good food, nice drinks and good company together with a few other testers can make any place a nice place.

Summary
As with CAST every year, it’s the company of thoughtful, engaged testers that makes CAST great. If you treat it like any other conference and just go to the sessions and then go back to your room without engaging with the rest of the crowd at any point during the day (or night), then I’m afraid you’ll miss out on much of the Good Stuff. Instead, partake in hallway hangouts, late night testing games, informal discussions off in a corner, test your skill in the TestLab with James Lyndsay or join one of the AST’s SIG meetings. That’s when the real fun usually comes out for me. And this year was no exception.

Going to CAST

Next week I’ll be at the Conference of the Association for Software Testing (CAST) in San Jose, CA. The first time I attended CAST in 2009, it quickly became my yearly top priority among conferences to attend. This is a “CONFERence” type conference (high emphasis on community and discussions) which usually produces a lot of blog worthy material for its attendees. I will try to write a couple of brief blog entries while at the conference, but if you want to find out what’s being discussed in “real time”, then tune in to #CAST2012 on Twitter, or check out the live webCAST.

If you’re a regular reader of this (little over a month old) blog, then you know that CAST was one of the inspirations behind the recent Let’s Test conference. CAST has always been a great experience for me and this year’s CAST will be my 4th. So far I have gone home every time with new ideas and interesting discussions lingering in my head, waiting to be processed over the following few weeks, and I think this year will be no exception.

This year’s CAST will be the first where I’m taking part in the program committee (together with Anne-Marie Charrett, Sherry Heinze and program chair Fiona Charles), and so I’ve been reading through and evaluating a wide range of great proposals for the workshops and sessions that will make up the first two days of the conference, trying to help put together a really exciting program for this year’s theme, “The Thinking Tester”.

I’ll also be facilitating a few of the workshop and session discussions this year, which will be interesting. In Sweden, we’re used to going to conferences to “learn” from the speaker, with everybody taking turns to ask their questions in a polite (read: timid) and orderly fashion, much like we do when queuing at the supermarket or movie theater. At CAST, on the other hand, it’s not uncommon for the speaker’s message to be challenged and/or questioned thoroughly. Needless to say, to get a discussion of that kind to flow effectively without derailing, good facilitation is key. Facilitation also enables other things, like making sure that more than one or two people get to talk during the Q&A, or that discussions stay on topic. I like both that kind of attitude and that format, and although I’ve already taken the stage as a speaker at CAST in the past, this will be my first time facilitating “over on that side of the pond”. So yeah, it will be an interesting experience for me for sure.

Looking forward to going there, meeting old friends, listening to interesting talks, facilitating discussions, blogging about it… Looking forward to it all!

Finally, those with a keen eye might have noticed that the headline of this blog has changed recently. The reason is simple… When I resurrected this blog last month, I just put the first thing that came to mind as the headline. Turns out, the first thing that came to mind was the exact same headline as Shmuel Gershon uses on his (well established and well worth reading) testing blog. We can’t have that. Huib Schoots was kind enough to point this out in his most recent blog post, titled “15 test bloggers you haven’t heard about, but you should…”, where incidentally, I’m one of the 15. Most of the other blogs on that list are real gems, by the way. One or two I haven’t heard about myself, so I’ll check them out this summer for sure.

Let’s Test – in retrospect

What just happened? Was I just part of the first-ever European conference on context-driven software testing? It feels like it was only yesterday that I was still thinking “this will never happen”, but it happened, and it’s already been over a month now since it did. So maybe it’s time for a quick (sort of) retrospective? Let’s see, where do I begin…?

Almost a year ago, I did something I rarely do. I made a promise. The reason I rarely make promises is because I’m lousy at following a plan and with many if not most promises, there’s planning involved… So making a promise would force me to both make a plan and then follow it. Impossible.

And yet, almost a year ago now, I found myself at the CAST conference in Seattle, standing in front of 200+ people (and another couple of hundred people listening in via webcast, I’ve been told) and telling the audience that some other people from Sweden and I were going to put on a conference on context-driven testing in 2012 and that it would be just like CAST, only in Europe. And of course we had it all planned out and ready to be launched! Right…? Well… not… really…

At that point we didn’t have a date set, no venue contract in place, no program that we could market, no funding, no facilitators – heck, we didn’t even really have a proper project team. The people who had been discussing this up until then had only started talking about organizing a conference at the 2nd SWET workshop on exploratory testing in Sweden a couple of months earlier. In my mind, it was all still only on a “Yeah, that would be a neat thing to pull off! We should do that!” level of planning or commitment from anyone. At least as far as I was concerned. The other guys might tell you that they had made up their minds long before this, but I don’t think I had.

Anyway, since I was elected (sort of) to go ahead and announce our “plan” (sort of), I guess this is the point where I made up my mind to be a part of what we later named “Let’s Test – The Context-Driven Way”, and over the next couple of months we actually got a project team together and became more or less ready to take on what we had already promised (sort of) to do.

Fast forward a couple of months more. So now we have that committed team of 5 people in place, working from 5 different locations around the country (distributed teams, yay!). We have an awesome website, a Twitter account, a shared project Dropbox and some other boring back office stuff in place. The team members are all testers by trade, ready to create a conference that is truly “by testers, for testers”. Done. What more do we need? Turns out, a conference program is pretty high up on the “must have” list for a conference. Yeah, we should get on that…

I think that this was the point where I started to realize just how much support this idea had out there in the context-driven testing community already. Scott Barber, Michael Bolton and Rob Sabourin were three of our earliest “big name” supporters who had heard our announcement at CAST, and many testers from the different European testing communities were also cheering for the idea early on, offering support. A bunch of fabulous tutorial teachers and many fantastic testing thinkers and speakers from (literally) all over the world, who we never dreamed would come all the way to Sweden, also accepted our invitations early on. Our call for papers (which I at first feared wouldn’t get many submissions since we were a first-time conference) also yielded a superb crop of excellent proposals. So much so that it was almost impossible to pick only a limited number to put on the program.

So while I can say in retrospect that creating a conference program is no small task, it is a heck of a lot easier when you get as awesome a response and as much support from the community as we’ve gotten throughout this past year. It did not go unnoticed, folks!

After we got the program in place, I was still a bit nervous about the venue and residential conference format. Would people actually like to come to this relatively remote venue and stay there for three days and nights, while basically doing nothing else but talk about testing, or would they become bored and long for a night on the town? I had to remind myself of the reasons we decided to go down this route in the first place: CAST and SWET.

CAST is the annual “Conference of the Association for Software Testing”, which uses a facilitated discussion format developed through the LAWST workshops. People who come to CAST usually leave saying it’s been one of their best conference experiences ever, in large part due to (I believe) this format with facilitated discussions after each and every presentation. We borrowed this format for Let’s Test, and with the help of the Association for Software Testing (AST) we were able to bring in CAST head facilitator Paul Holland to offer facilitation training to a bunch of brilliant volunteers. Awesome.

SWET is the “Swedish Workshop on Exploratory Testing”, which is a small-scale peer workshop that also uses the LAWST style discussion format. But what makes this sort of gathering different from most regular conferences is that the people who come to the workshop all stay at the same location where the workshop is held, for one or two consecutive days and nights. So after the workshop has concluded for the day, discussions still don’t stop. People at SWET stay up late and continue to share and debate ideas well into the night, at times using the sunrise as their only cue to get to bed. I believe one of the main reasons for this is… because they can. They don’t have to catch a bus or a cab to go back to their hotel(s), and when given the opportunity to stay up late and talk shop with other people who are as turned on by software testing as they are, they take it. We wanted to make this possible for about ten times as many people as we usually see at SWET. Hence the residential format and extensive evening program at Let’s Test, which I believe is a fairly unusual if not unique format for a conference of this size. At least in our neck of the woods.

In the end, I personally think we were able to offer a nice blend of these two conference models that had inspired us. People weren’t forced to enter into discussions after sessions, but they were always able and encouraged to participate, and in a structured manner (great job, all facilitators!). Also, people could choose to go to bed early and recharge their batteries after a long day of conferencing, or they could opt in for either high energy test lab activities, or a more mellow and laid back art tour around the venue campus (to name but a couple of the well attended evening activities) before heading for the bar. I think I managed to get to bed at around 2:00 AM each night, but I know that some folks stayed up talking for a couple of hours beyond that each night too.

Wrapping up this little retrospective, I’d like to say thank you to our sponsors who, among other things, helped make the evening events such a well appreciated part of the conference experience and who all really engaged actively in the conference, which was something we as organizers really appreciated. Finally, a special shout out to the very professional Runö venue crew and kitchen staff who readily helped us out whenever we needed it. You made the execution of this event a total joy.

I’m very happy about how Let’s Test turned out. It exceeded my own expectations for sure. Judging by the feedback we saw on Twitter during the event, and in the blogosphere afterwards, I’d say it looks like most who attended were pretty ok with the experience as well. Check out the blog links we’ve gathered on the Let’s Test 2012 Recap page and judge for yourselves. Seriously, it’s been extremely rewarding to read through all these blog posts. Thank you for that.

Plans are already well underway for next year’s conference. We’re delighted that both James Bach and Johanna Rothman have signed on as two of our keynote speakers, and we’ll announce a call for proposals sometime after the summer for sure, so I encourage all of you who sent something in last year to do so again. Oh, and you can sign up right now for Let’s Test 2013 and catch the advantageous first responder rate. A bunch of people already have, so you’ll be in good company.

One final thing… We know a good deal about what people liked at Let’s Test 2012, but no doubt there are also a few things that we can and should improve. Let us know.

It’s been a pleasure. See you all there next year I hope!