
Peer conference awesomeness at SWETish

I spent this past weekend at a peer conference. This one marks my 5th peer conference, but it’s been a long while since my 4th. In fact, it’s been four years since SWET4. Peer conferences are awesome though, as they let the participants really go deep and have thorough, meaningful conversations over one or more days, in a group small enough to make such discussions possible.

Ever since I moved to Linköping a few years ago, I had promised to do my part in organizing a local peer conference for the testing community in and around Linköping, and this weekend we finally got that project off the ground! We decided to call this particular conference SWETish instead of making it another increment of the regular SWET. One reason is that we wanted to keep participation first and foremost within the local communities, whereas the regular SWET conferences invite people from all over the country. Another is that we wanted to keep the theme broad and inclusive; SWET has already been through a fair number of iterations (SWET7 being the latest one), and we didn’t want to break their set of more specific topics in exchange for a general one that has already been covered. Maybe next time we’ll put on “SWET8” though, if nobody in the Meetups.com group beats us to it (hint hint, wink wink, nudge nudge).

So, sort of but not quite a SWET conference, i.e. SWETish (with its eponymous Twitter hashtag #SWETish).

The whole thing took place at the Vadstena Abbey Hotel, which is made up of a beautiful set of buildings, some dating back to the 12th, 13th and 14th century. From an organizer standpoint, I can certainly recommend this venue. Nice staff, cozy environment and above average food. And a nice historic atmosphere too of course. (Click the link in the tweet below for a couple of snapshots.)

When I sent out the initial invitations to this peer conference, I had my mind set on getting a total of 15 participants. That seemed like a good number to ensure that all speakers get plenty of questions and that there would be a good mix of experiences and viewpoints, while not being so many people that anyone would be unable to participate thoroughly or forced to sit quiet for too long. However, because a few people who had initially signed up couldn’t make it in the end, we ended up as a group of “only” 10 people. Turns out that’s an excellent number! Most if not all of us agreed that the low number of participants helped create an environment where everybody got relaxed with each other really quickly, which in turn helped when discussions and questions got more critical or pointed, without destroying the mood or productivity of those conversations.

Another pleasant surprise was that we only got through (almost) three presentations + Open Season (facilitated Q&A) during the conference (1.5 days). If memory serves, the average at my past peer conferences is four, and sometimes we even start a fifth presentation and Q&A before time runs out. What I liked about only getting through three is that it’s a testament to how talkative and inquisitive the group was, even though 5 out of 10 participants were at their first ever peer conference! I facilitated the first presentation myself, so I can tell you that in that session alone we had 11 unique discussion threads (green cards) and 48 follow-up questions (yellow cards), plus quite a few legit red cards. For those of you familiar with the k-cards facilitation system, you can tell that this wasn’t a quiet group who only wanted to listen to others speak. Which is great, because that’s the very thing that makes peer conferences so fantastically rewarding.
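For readers who haven’t seen k-cards in action, the ordering rule behind those green, yellow and red cards can be sketched as a tiny priority function. This is a toy illustration under the convention I’m used to (an assumption on my part, not something from this post): red cards are handled immediately, yellow cards continue the current thread, and green cards open brand-new threads. The queue contents below are made up.

```python
# Toy model of k-card facilitation ordering. Assumed convention:
# red interrupts, yellow continues the current thread, green opens
# a new thread. Lower number = handled sooner.
PRIORITY = {"red": 0, "yellow": 1, "green": 2}

def next_card(queue):
    """Pick the raised card to handle next: lowest priority number wins,
    and ties go to whoever raised their card first (min() returns the
    earliest of equally-ranked items)."""
    return min(queue, key=lambda card: PRIORITY[card[0]])

# Hypothetical raised cards, in the order they went up:
queue = [
    ("green", "new thread A"),
    ("yellow", "follow-up on current thread"),
    ("green", "new thread B"),
    ("red", "process issue"),
]
print(next_card(queue))  # the red card jumps the queue
```

In a real session the facilitator of course juggles this by eye; the point of the sketch is just that the colors encode a simple, explicit priority scheme everyone in the room can see.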


Apart from the facilitated LAWST-style sessions, we also spent an hour on Lightning Talks, to make sure that everyone got a few minutes of “stage time” to present something of their own choosing.

The evening was spent chatting around the dinner table, in the spa and in smaller groups throughout the venue until well past midnight. And even though we’d spent a full day talking about testing, most of the conversations were still about testing! How awesome is that?

If you want to read more about what was actually said during the conference, I suggest you check out the Twitter hashtag feed, or read Erik Brickarp’s report, which goes more into the content side of things. This blog post is/was more about evangelizing the concept itself and providing some reflections from an organizer perspective. Maybe I should have mentioned that at the start? Oops.

A peer conference is made possible by the active participation of each and every member of the conference, and as such, credit for all resulting material, including this blog post, goes to the entire group. Namely, and in alphabetical order:

  • Agnetha Bennstam
  • Anders Elm
  • Anna Elmsjö
  • Björn Kinell
  • Erik Brickarp
  • Göran Bakken
  • Johan Jonasson
  • Kristian Randjelovic
  • Morgan Filipsson
  • Tim Jönsson

Thank you to all the participants, including the few of you who wanted to be there but couldn’t for reasons outside of your control. Next time! And thank you to my partners in crime in the organizing committee: Erik Brickarp, Anna Elmsjö and Björn Kinell.

There! You’ve now reached the end of my triennial blog post. See you in another three years! Actually, hopefully I’ll see you much sooner. The steep dip in my blogging frequency has partly been due to the continuous deployment of new family members in recent years, which has forced me to cut back on more than one extracurricular activity.

Post below in the comments section if you have comments or questions about peer conferences, or want some help organizing one. I’d be happy to point you in the right direction!

Report from SWET4

The 4th SWET (Swedish Workshop on Exploratory Testing) happened this past weekend at Kilsbergen in Örebro, Sweden. The theme for this 4th edition of the workshop was “Exploratory Testing and Models”, hosted by Rikard Edgren, Tobbe Ryber and Henrik Emilsson (thanks guys!). If you haven’t heard of SWET before, a brief way of describing it would be to say that it’s a peer conference based on the LAWST format where we meet to discuss the ins and outs of Exploratory Testing in order to challenge each other and increase our own understanding of the topic. SWET has many siblings around the world, and the family of peer conferences on software testing keeps on growing, which is a delightful thing to see! Peer conferences rock. In my mind, there’s no better way to learn new things about your craft than to present an experience report and have it picked apart and challenged by your peers.

Friday (Pre-conference)
Most people arrived on the evening before and spent a couple of hours together eating dinner and chatting over a few drinks. The venue had a lovely common room with a cozy fireplace and comfy chairs so, as usual at these events, several people stayed up chatting happily well into the night without a care.

Saturday (Day 1)
The conference started off with a moment of silence for our friend and testing peer Ola Hyltén, who recently passed away in a tragic car accident. I had met Ola myself for the first time at SWET2, and it felt like an appropriate way of opening the conference. Then, after a round of check-ins, the schedule proceeded with the first experience report.

First up was Anna Elmsjö, who talked about making use of business and process models. Anna described her process of questioning the diagrams and adding questions and information to the model to keep track of things she wanted to test. Open Season contained an interesting thread about requirements, where someone suggested that Anna’s testing could be seen as a way of adding, or sneaking in, new requirements, or at least be perceived that way. A comment on that question pointed out in turn that asking questions about the product doesn’t mean requirements are being added, but that they are being discovered, which is an important distinction to keep in mind in my opinion.

The second presentation was from Maria Kedemo, who talked about what she called model-based exploratory interviewing for hiring testers. Maria works as a test manager and has been heavily involved in recruiting for her employer during the past year. When preparing for the hiring process, Maria explained, she drew on her testing experience to see if she could identify some of her habits and skills as a tester and apply them to interviewing, e.g. different ways of searching for and finding new information. My take-aways include some thoughts on how modeling what you already have can help you find out what you really need (not just what you want, or think you want). Also, a reaffirmation of the importance of updating your models as your understanding of what you’re modeling increases, sort of like how you would (hopefully) update a plan when reality changes.

In the last presentation of the day, Saam Koroorian talked about using the system map, a model of the system, to drive testing. He also described how his organization has moved from what he called activity- or artifact-driven testing to more information-driven testing. I interpreted these labels more as descriptors of how the surrounding organization views testing. Either it’s viewed as an activity that is supposed to provide arbitrary measurements based on artifacts (like test cases) to show some kind of (false) progress, i.e. bad testing, or it’s viewed as an activity that is expected to provide information, i.e. better testing (or simply “testing”).

Saam went on to talk about how his team had adopted James Bach’s low-tech testing dashboard concept for assessing and showing coverage levels and testing effort across different areas, which led to many new green cards (new discussion threads). Among them was a thread of mine about the importance of taking the time dimension into account and how to visualize the “freshness” and reliability of information as time passes (assuming the system changes over time). This is something I’ve recently been discussing with colleagues to solve a similar problem at a client, and I found the thread very stimulating. I might turn it into a blog post of its own one day (when the solution is finished).

Saam also noted that since his organization was in the middle of an agile transition at the time, sneaking new thinking and ideas into the testing domain was easier than usual: the organization was already primed for change in general. Interesting strategy. Whatever works. 🙂

Lightning Talks
Day 1 concluded with a 60-minute round of Lightning Talks, which, based on the number of speakers, meant that each person got 5 minutes to run their presentation (including questions). Lots of interesting topics in rapid progression, like an example of how to use free tools to create cheap throw-away test scripts as an exploration aid (James Bach), or how to use the HTSM quality characteristics to discuss quality with customers and figure out their priorities (Sigge Birgisson). Erik Brickarp gave a Lightning Talk on visualization that he has since turned into a blog post over at his blog. My own Lightning Talk was about helping testers break stale mental models and get out of creative ruts through mix-up testing activities (a.k.a. cross-team testing). If I’m not mistaken, all participants who weren’t already scheduled to give a presentation gave a Lightning Talk, which was nice. That way everybody got to share at least one or two of their ideas and experiences.

In the evening, the group shared a rather fantastic “Black Rock” dinner after which the discussions continued well into the wee hours of the night, despite my best efforts to get to bed at a reasonable hour for once.

Sunday (Day 2)
After check-in on day 2, the first order of business was to continue through the stack of remaining threads from Saam’s talk that we hadn’t had time to get to the day before. I think this is a pretty awesome part of this conference format. Discussions continue until the topic is exhausted, even if we have to continue the following day. There’s no escape. 😉

The first (and, as it turned out, only) presentation of day 2 came from James Bach, who told a story about how he had done exploratory modeling of a class 3 medical device, using its low-level design specification to come up with a basis for his subsequent test design. During Open Season we also got a lot more information about his overarching test strategy. It was a fascinating story that I won’t go into much detail on here, but you should ask him to tell it to you if you get the chance. You’ll get a lot of aha! moments. My biggest takeaway from that Open Season discussion was a reaffirmation of something I’ve known for quite some time but haven’t been able to put into words quite so succinctly: “Formal testing that’s any good is always based on informal testing”. Also worth considering: informal testing is based in play. As is learning.

Formal testing is like the opening night of a big show. It becomes a success because(/if) it’s been rehearsed. And informal testing provides that rehearsal. Skip rehearsing at your peril.

So how do you go from playing to making formal models? You practice! And according to James, a good way to practice is to start by drawing state models of various systems, like for instance this über-awesome Flash game. When you’ve modeled the game, you can start to play around with the model in order to generate a rich set of test ideas, asking “what if”-style questions like “What happens if I go from here to here?” or “I seem to be able to do this action over here; I wonder if I can do it over here as well?” and so on. What factors exist, what factors can exist, which factors matter?
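To make that practice concrete, here’s a minimal sketch of the idea: model a system as states and transitions, then mine the model for “what if” questions. The four-state game below is entirely made up by me (it’s not the actual Flash game), and the gap-finding heuristic is just one simple way to generate ideas from a state model.

```python
# A made-up state model of a small game: each state maps the actions
# the model knows about to the state they lead to.
GAME = {
    "menu":      {"start": "playing"},
    "playing":   {"pause": "paused", "die": "game_over"},
    "paused":    {"resume": "playing", "quit": "menu"},
    "game_over": {"restart": "playing", "quit": "menu"},
}

def what_if_questions(model):
    """For every action the model mentions anywhere, ask what happens when
    you try it in a state where the model says nothing about it. Gaps in
    the model are cheap, concrete test ideas."""
    all_actions = {action for moves in model.values() for action in moves}
    return [
        f"What happens if I try '{action}' while in '{state}'?"
        for state in model
        for action in sorted(all_actions)
        if action not in model[state]
    ]

for question in what_if_questions(GAME):
    print(question)
```

Even this toy model yields questions like “What happens if I try 'pause' while in 'menu'?”, which is exactly the kind of “I seem to be able to do this action over here, can I do it over there?” probing described above.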

I want to finish off with a final couple of quick take-aways from the weekend. First, a “test case” can be defined as an instance or variation of a test or test idea. Using that definition, you’ll be able to encompass most of the varying things and containers that people call test cases. And finally, regarding requirements: challenge the assumption that tests can be derived from the requirements. The tests aren’t in the requirements and thus can’t be derived from them. You can, however, construct tests that are relevant in order to test the requirements and obtain information about the product, usually based on risk. While on the subject, remember that, usually, requirements > requirements document.

SWET4

Thank you to all the participants at SWET4: Anna Elmsjö, Simon Morley, Tobbe Ryber, Oscar Cosmo, Erik Brickarp, James Bach, Johan Jonasson, Sigge Birgisson, Maria Kedemo, Rikard Edgren, Joakim Thorsten, Martin Jansson, Saam Koroorian, Sandra Camilovic and Henrik Emilsson.

That’s it. That’s all that happened. (No, not really, but I’ll have to save some things for later posts!)