Category Archives: Networking

Peer conference awesomeness at SWETish

I spent this past weekend at a peer conference. This one marks my 5th peer conference, but it’s been a long while since I was at my 4th. In fact, it’s been four years since SWET4. Peer conferences are awesome though, as they let the participants really go deep and have thorough and meaningful conversations over one or more days, in a group small enough to make such discussions possible.

Ever since I moved to Linköping a few years ago, I had promised to do my part in organizing a local peer conference for the testing community in and around Linköping, and this weekend we finally got that project off the ground! We decided to call this particular conference SWETish instead of making it another increment of the regular SWET. One reason was that we wanted to keep participation first and foremost within the local communities, whereas the regular SWET conferences invite people from all over the country. Another was that we wanted to keep the theme broad and inclusive; SWET has already been through a fair number of iterations (SWET7 being the latest one), and we didn’t want to break their run of more specific topics in exchange for a general one that has already been covered. Maybe next time we’ll put on “SWET8” though, if nobody in the Meetup.com group beats us to it (hint hint, wink wink, nudge nudge).

So: sort of, but not quite, a SWET conference, i.e. SWETish (with its eponymous Twitter hashtag #SWETish).

The whole thing took place at the Vadstena Abbey Hotel, which is made up of a beautiful set of buildings, some dating back to the 12th, 13th and 14th centuries. From an organizer standpoint, I can certainly recommend this venue. Nice staff, cozy environment and above-average food. And a nice historic atmosphere too, of course.

When I sent out the initial invitations to this peer conference, I had my mind set on getting a total of 15 participants, as that seemed to be a good number of people to ensure that all speakers would get plenty of questions and that there would be a good mix of experiences and viewpoints, while at the same time keeping the group small enough that everybody gets to participate thoroughly and nobody is forced to sit quiet for too long. However, because a few people who had initially signed up couldn’t make it in the end, we turned into a group of “only” 10 people. Turns out that’s an excellent number! Most if not all of us agreed that the low number of participants helped create an environment where everybody got relaxed with each other really quickly, which in turn helped when discussions and questions got more critical or pointed, without destroying the mood or productivity of those conversations.

Another pleasant surprise was that we only got through (almost) three presentations plus Open Season (facilitated Q&A) during the 1.5 days of the conference. If memory serves, the average at my past peer conferences is four, and sometimes we even start a fifth presentation and Q&A before time runs out. What I liked about only getting through three is that it’s a testament to how talkative and inquisitive the group was, even though 5 out of 10 participants were at their first ever peer conference! I facilitated the first presentation myself, so I can tell you that in that session alone we had 11 unique discussion threads (green cards) and 48 follow-up questions (yellow cards), plus quite a few legit red cards. So for those of you familiar with the K-cards facilitation system, you can tell that this wasn’t a quiet group who only wanted to listen to others speak. Which is great, because that’s the very thing that makes peer conferences so fantastically rewarding.


Apart from the facilitated LAWST-style sessions, we also spent 1 hour on Lightning Talks, to make sure that everyone got to have a few minutes of “stage time” to present something of their own choosing.

The evening was spent chatting around the dinner table, in the spa and in smaller groups throughout the venue until well past midnight. And even though we’d spent a full day talking about testing, most of the conversations were still about testing! How awesome is that?

If you want to read more about what was actually said during the conference, I suggest you check out the Twitter hashtag feed, or read Erik Brickarp’s report, which goes more into the content side of things. This blog post is/was more about evangelizing the concept itself and providing some reflections from an organizer perspective. Maybe I should have mentioned that at the start? Oops.

A peer conference is made possible by the active participation of each and every member of the conference, and as such, credit for all resulting material, including this blog post, goes to the entire group. Namely, and in alphabetical order:

  • Agnetha Bennstam
  • Anders Elm
  • Anna Elmsjö
  • Björn Kinell
  • Erik Brickarp
  • Göran Bakken
  • Johan Jonasson
  • Kristian Randjelovic
  • Morgan Filipsson
  • Tim Jönsson

Thank you to all the participants, including the few of you who wanted to be there but couldn’t for reasons outside of your control. Next time! And thank you to my partners in crime in the organizing committee: Erik Brickarp, Anna Elmsjö and Björn Kinell.

There! You’ve now reached the end of my triennial blog post. See you in another three years! Actually, hopefully I’ll see you much sooner. The steep dip in my blogging frequency has partly been due to the continuous deployment of new family members in recent years, which has forced me to cut back on more than one extracurricular activity.

Post below in the comments section if you have comments or questions about peer conferences, or want some help organizing one. I’d be happy to point you in the right direction!

Report from EuroSTAR 2013

I had the opportunity to speak at EuroSTAR this year, which made the decision to go a bit easier than it normally is. After all, EuroSTAR is a pretty pricey party to attend compared to many other conferences, such as Øredev, which ran almost in parallel with EuroSTAR this year.

Anyway, this is a brief report from the conference with some of my personal take-aways, impressions and opinions about the whole thing.

People

First of all, to me, conferences are about the people you meet there. Sure, it’s good if there’s an engaging program and properly engaged speakers, but my main take-aways are usually from the hallway hangouts or late night discussions with whoever happens to be up for a chat. This year, I think the social aspect at EuroSTAR was great. I’ve been to EuroSTAR twice before, in 2008 and 2009, but this was the first one where I didn’t think that the size of the conference got in the way of meeting new and old friends. It also made me a bit proud to see that the actual discussions and open seasons seemed pretty much dominated by the people in my community this year. This, I think, has to do with the way we normally interact with each other, and not with any bias in the program. On the contrary, I think the program committee had put together a very well-balanced program with a lot of different views and testing philosophies represented.

Tutorials

The first day and a half at EuroSTAR was devoted to tutorials. I rarely attend tutorials, unless I know they will be highly experiential, but this year I opted for one with relevance to my current testing field, medical software, namely “Questioning Auditors Questioning Testing, Or How To Win Friends And Influence Auditors” with James Christie. My main take-aways were not about how to relate to auditors though, but rather about how to think about risk. James pointed out that a lot of the time, we use variations of this traditional model to assess risk:

[Figure: a traditional risk matrix, scoring risks by probability and impact]

The problem with that model is that it scores high impact/low probability risks and low impact/high probability risks the same. Sure, if something is likely to happen we’d probably want to take care of that risk, even if it has only “low” impact. But is that really as important as fixing something that would be catastrophic but has only a small probability of happening? Sometimes yes, sometimes no, right? Either way, the model is too simplistic. The problem lies in our (in)ability to perceive and assess risk, something I think is illustrated quite nicely in the following table.


[Table on perceiving and assessing risk, from O’Riordan, T. and Cox, P. 2001. Science, Risk, Uncertainty and Precaution. University of Cambridge.]
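Going back to the scoring problem for a moment: as a toy illustration (my own sketch, not something from the tutorial, and the 1–5 scales are just an assumption for the example), here is roughly how a plain impact × probability score conflates two very different risks.

# Toy sketch (mine, not from the tutorial): a plain impact x probability
# score, on assumed 1-5 scales, rates these two very different risks the same.

def risk_score(impact, probability):
    """Traditional risk matrix score: impact (1-5) multiplied by probability (1-5)."""
    return impact * probability

catastrophic_but_rare = risk_score(impact=5, probability=1)
minor_but_frequent = risk_score(impact=1, probability=5)

print(catastrophic_but_rare)  # 5
print(minor_but_frequent)     # 5 -- same score, arguably not the same urgency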

This is an area I feel I want to dig deeper into. If you have any tips on reading material, please share in the comments.

By the way, James Christie also has a blog that I’ve started reading myself quite recently. His latest blog post is a real nugget for sure: Testing standards? Can we do better?

Keynotes & Sessions

New for this year’s EuroSTAR was that the conference chair (Michael Bolton) had pushed for the use of K-cards and facilitated discussions after each talk and keynote: 30-minute talks, 15 minutes of open season Q&A. Nice! I think that’s a very important improvement for EuroSTAR (though full hour slots would be even better). I mean, if you’re not given the opportunity to challenge a speaker on what he/she is saying, then what’s the point? Argument is a very important tool if we want to move our field forward, and it’s so rare that we in the global testing community get to argue face to face. We need facilitated discussions at every conference, not just a few. I’m glad to see that EuroSTAR is adopting what started at the LAWST peer workshops, and I do hope they stick with it!

All in all, I think the best sessions out of those I attended were:

Laurent Bossavit, who made a strong case against accepting unfounded claims. He brought up, for instance, the age-old “truth” about how fixing a bug becomes exponentially more expensive as it escapes from the requirements phase into the design phase (and so on) of a project. Turns out, the evidence for that truth is fairly poor, and it only applies to certain types of bugs.

Keith Klain, who talked about overcoming organizational biases. His five points to follow when attempting to change company culture: 1. Determine your value system. 2. Define principles underpinned by your values. 3. Create objectives aligned to your business. 4. Be continually self-reflective. 5. Do not accept mediocrity. Changing culture is hard, but you might want (or need) to do it anyway. If you do, keep in mind point number 6: Manage your own expectations.

Ian Rowland, who gave a very entertaining talk about the power of IT, “Impossible Thinking”. Seemingly similar to lateral thinking (Edward de Bono), Impossible Thinking challenges you not to stop thinking about something just because it appears impossible, but rather to move past that limitation and think about how the impossible might become possible. The thinking style can also be used to provoke creative thinking or new solutions, like how thinking about a phone that can only call 5 different phone numbers (a ridiculous idea at first glance) provoked the creation of a mobile subscription plan that let you call 5 friends free of charge. An idea that allegedly boosted sales for that particular carrier in a way that left competitors playing catch-up for months.

Rob Lambert, who gave an experience report describing in detail his company’s journey from releasing only a couple of times per year down to releasing once per week. It was a very compelling story, but unfortunately I find myself currently working in a very different context. True experience reports are always a treat though.

Then of course I had my own presentation: Test Strategy – Why Should You Care?, where I tried to expand a bit on four main points: 1. Why most strategies I’ve seen are terrible and not helpful. 2. A model for thinking about strategies in a way that they might become helpful. 3. Characteristics of good strategy. 4. Arguments why you should care about good strategy. All in all, apart from maybe trying to pack too much into 30 minutes, I think it went ok. The room was packed too, which was nice.

I did sample quite a number of other sessions too, but it’s difficult to sum them all up with any sort of brevity, so I won’t even try. Instead, I’ll provide a few quotes from the talks I found the most rewarding:

“If it hurts, keep doing it.” – Rob Lambert (Learning/change heuristic)

“Condense all the risks of the corporation into a single metric.” – Rick Buy, Enron (anti-pattern)

“Reality isn’t binary. We don’t know everything in advance. Observe, without a hypothesis to nullify.” – Rikard Edgren

“The questions we can answer with a yes or no, are probably those that don’t matter, or matter less.” – James Christie, paraphrased

Governance shouldn’t involve day to day operational management by full-time executives. – James Christie

“Comply or explain” vs. “comply or be damned”. – UK vs. US approaches to auditing, as described by James Christie

Self-defense skill for testers: “citation needed”. Also: Curiosity, Skepticism, Tenacity. – Laurent Bossavit, paraphrased, warning against accepting unsubstantiated claims

“Do not accept mediocrity.” – Keith Klain

“Culture eats strategy for breakfast.” – Keith Klain

“When you start to think about automation for any other reason than to help testing, you might be boxing yourself in.” – Iain McCowatt

“Rational thinking is good if you want rational results.” – Ian Rowland

“Thought Feeders.” – Michael Bolton proposed an improvement to the term Thought Leaders

The Good, the Bad and the Ugly

This was the best EuroSTAR to date in my experience. The program was better than ever and more diverse with a good mix of testing philosophies being represented. The facilitated discussions elevated the proceedings and prevented (most) speakers from running away from arguments, questions and contrasting ideas. I also liked that the community and social aspects of the conference appear to have been strengthened since my last EuroSTAR in 2009. The workshops, the do-over session and the community hub were all welcome additions to me. And the test lab looked as brilliant as ever, and I think it was a really neat idea to have it out in the open space it was in, rather than being locked away in a separate room. Expo space well used.


While I applaud the improvements, there are still things that bother me about some EuroSTAR fundamentals. The unreasonably large and hard-to-avoid Expo, which strangely enough is called “the true heart of the conference” in the conference pamphlet, is one such thing. Having hardly any opportunity to sit down and have my lunch at a table is another. Basic stuff, and I think the two are connected: seated attendees wouldn’t be spending enough time in the Expo, so eating while standing is preferred to give the vendors enough face time with attendees. To me, this is not only annoying, but I also think it’s actually a disadvantageous setup for both vendors and attendees. My advice: have the Expo connected to the conference, but off to the side. Make it easy and fun for me to attend the Expo if I choose, but also easy for me to avoid. For attendees, the true heart of any conference is likely the conferring, and we would appreciate having a truly free choice of where and how to spend our limited time at what was otherwise a great conference this year.

Oh, and Jerry Weinberg won a luminary award for his contributions to the field over the years. If you develop software and haven’t read his books yet, you’re missing out. He’s a legend, and rightly so. Just saying.


Finally, if you haven’t had enough of EuroSTAR ramblings yet, my friend Carsten Feilberg has written a blog post of his own about his impressions of EuroSTAR that you can check out, or have a look at Kristoffer Nordström’s ditto blog post.

Trying on hats

After having missed out on a couple of EAST gatherings lately, I finally managed to make it to this month’s meetup this past Thursday (the group’s 11th meetup since its inception, for those who like to keep score). This meetup was a bit different than past ones, in a good way. Not that the other ones haven’t been good, but it’s fun to mix things up. The plan for the evening was to study, implement and evaluate Edward de Bono’s Six Thinking Hats technique in a testing situation. The six thinking hats is basically a tool to help both group discussions and individual thinking by using imagined (or real) hats of different colors to force your thinking in certain directions throughout a meeting or workshop. Another cool thing at this meetup was that there were at least a handful of new faces in the room. We’re contagious, yay!

We started out by watching Julian Harty’s keynote address from STARWEST 2008, “Six Thinking Hats for Software Testers”. In this talk, Julian explains how he successfully implemented Edward de Bono’s technique when he was at Google and how it helped them get rid of limiting ideas, poor communication, and pre-set roles and responsibilities in discussions and meetings.

So what can we use these hats for? Julian suggests a few areas in his talk:

  • Improving our working relations, by helping reduce the impact of adversarial relationships and in-fighting.
  • Reviewing artifacts like documents, designs, code, test plans and so on.
  • Designing test cases, where the tool helps us to ask questions from 6 distinct viewpoints.

Julian recommends starting and ending with the Blue Hat, which is concerned with thinking about the big picture. Then continuing with the Yellow Hat, which symbolizes possibilities and optimism. The Red Hat, symbolizing passion and feelings. The White Hat, which calls for the facts and nothing but the facts (data). The Black Hat, the critiquing devil’s advocate hat, which looks out for dangers and risks. And finally, after going through all the other hats to help us understand the problem domain, we move on to the Green Hat, which lets us get creative, brainstorm and use the power of “PO”.

PO stands for provocative operation and is another one of de Bono’s useful tools that helps us get out of ruts. If you find yourself stuck in a thinking pattern, you have someone throw in a PO, in order to help people get unstuck and think along new lines.

There are five different methods for generating a PO: Reversing, Exaggerating, Distorting, Escaping and Wishful Thinking. All of them encourage you to basically “unsettle your mind”, thereby increasing the chances that you will generate a new idea (a.k.a. “movement” in the de Bono-verse). You can get a brief primer here if you’re interested in learning more, though I do recommend going straight for de Bono’s books instead. Now, we didn’t discuss PO much during the meetup, but it reminded me to go back and read up on these techniques afterwards. Would be fun to try them out in sprint planning or when breaking down larger test ideas.

So after we’d watched the video through, we proceeded to test a little mobile cloud application that had been developed by a local company here in Linköping. The idea was to try to apply the six hats way of thinking while pair testing, which was a cool idea, but it soon became clear that we needed to tour the application a bit first in order to apply the six hats. Simply going through the six hats while trying to think about a problem domain you know nothing about didn’t really work. Also, bugs galore, so there wasn’t really much need to get creative about test ideas. Still, a good exercise that primed our thinking a bit.

Afterwards we debriefed the experience in the group and I think that most of us felt that this might be a useful tool to put in our toolbox, alongside other heuristics. When doing test planning for an application that you know a bit more about, it will probably be easier to do the six hats thinking up front. With an unknown application, you tend to fall back to using other heuristics and then putting your ideas into one of the six hats categories after the fact, rather than using the hats to come up with ideas.

I also think the six hats would be very useful together with test strategy heuristics like SFDPOT, examining each product element with the help of the hats, to give your thinking extra dimensions. Same principle as you would normally use with CRUSSPIC STMPL (the quality characteristics heuristic) together with SFDPOT. Or why not try all three at the same time?
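To make that combination a bit more concrete, here’s a small hypothetical sketch (my own, not something we built at the meetup) that simply crosses the SFDPOT product elements with the six hats to generate test-planning prompts; the one-line hat summaries are my own shorthand.

# Hypothetical sketch (mine, not from the meetup): cross SFDPOT product
# elements with the six thinking hats to generate test-planning prompts.

SFDPOT = ["Structure", "Function", "Data", "Platform", "Operations", "Time"]

HATS = {
    "Blue": "the big picture and the process",
    "White": "facts and data",
    "Red": "feelings and hunches",
    "Yellow": "benefits and possibilities",
    "Black": "risks and dangers",
    "Green": "creative ideas and provocations",
}

def prompts(product_elements, hats):
    """Yield one test-planning prompt per (product element, hat) combination."""
    for element in product_elements:
        for hat, focus in hats.items():
            yield f"{element} / {hat} hat: focus on {focus}."

for prompt in prompts(SFDPOT, HATS):
    print(prompt)

Whether such a list of prompts is useful or just noise probably depends on how well you already know the product, which matches what we noticed during the exercise.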

As usual, a very successful and rewarding EAST meetup. Sitting down with peers in a relaxed environment (outside business hours) can really do wonders to get your mind going in new directions.

For a more in-depth look at the original idea of the hats, see Edward de Bono’s books Six Thinking Hats (1985), or Lateral Thinking: A Textbook of Creativity (2009), which, if I remember correctly, also describes them pretty well.

Edit: If you want to read less about the hats and more about how the meetup was actually structured (perhaps you want to start your own testing meetups?), head on over to Erik Brickarp’s blog post on this same meetup.

EAST meetup #7

Last night, EAST (the local testing community in Linköping) had its 7th “official” meetup (not counting summer pub crawls and the improvised restaurant meetup earlier this fall). A whopping 15 people opted to prolong their workday by a few hours and gather to talk about testing inside Ericsson’s facilities in Mjärdevi (hosting this time, thanks to Erik Brickarp). Here’s a short account of what went down.

The first presentation of the night was me talking about the past summer’s CAST conference and my experiences from it. The main point of the presentation was to give people who didn’t know about CAST before an idea of what makes it different from “other conferences” and why it might be worth considering attending from a professional development standpoint. CAST is the conference of the Association for Software Testing, a non-profit organization with a community made up of lots of cool people and thinking testers. That alone usually makes the conference worth attending. But naturally, I’m a bit biased.

If you want to know more about CAST, you can find some general information on the AST website, and CAST 2012 in particular has been blogged about by several people, including myself.

The second presentation was from Victoria Jonsson and Jakob Bernhard, who gave their experience report from the course “The Whole Team Approach to Agile Testing” with Janet Gregory, which they had attended a couple of months ago in Gothenburg.

There were a couple of broad topics covered. All had a hint of the agile testing school to them, but from the presentation and discussions that followed, I got the impression that the “rules” had been delivered as good rather than best practices, with a refreshingly familiar touch of “it depends”. A couple of the main topics (as I understood them) were:

  • Test automation is mandatory for agile development
    • Gives more time for testers to do deeper manual testing and focus on what they do best (explore).
    • Having releases often is not possible without an automated regression test suite.
    • Think of automated tests as living documentation.
  • Acceptance Testing could/should drive development
    • Helps formulate the “why”.
    • [Comment from the room]: Through discussion, it also helps with clarifying what we mean by e.g. “log in” in a requirement like “User should be able to log in”.
  • Push tests “lower” and “earlier”
    • Aim to support the development instead of breaking the product [at least early on, was my interpretation].
    • [Discussion in the room]: This doesn’t mean that critical thinking has to be turned off while supporting the team. Instead of breaking the product, transfer the critical thinking elsewhere e.g. the requirements/user stories and analyze critically, asking “what if” questions.
    • Unit tests should take care of task-level testing, acceptance tests handle story-level testing and GUI tests should live at the feature level. [Personally, and that was also the reaction of some people in the room, this sounds a bit simplified. Might not be meant to be taken literally.]

There was also a discussion about test-driven development, and some suggestions of good practices came up, like for instance how testers on agile teams should start a sprint by discussing test ideas with the programmer(s), outlining the initial test plan for them. That way, the programmer(s) can use those ideas, together with their own unit tests, as checks to drive their design and potentially prevent both low and high level bugs in the process. In effect, this might also help the tester receive “working software” that is able to withstand more sapient exploratory testing, and the discussion process itself helps to remove confusion and assumptions surrounding the requirements that might differ between team members. Yep, communication is good.
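As a purely hypothetical illustration of what such up-front test ideas can turn into (the example, names and code below are mine, not from the course or the presentation), here is how a vague story like “User should be able to log in” might be pinned down into concrete checks that the programmers can run while building:

# Hypothetical illustration (names and code are mine): turning
# "User should be able to log in" into a few concrete, executable checks.

import unittest


class FakeAuthService:
    """Stand-in for a real authentication service, just for the example."""

    def __init__(self, users):
        self._users = users  # mapping of username -> password

    def log_in(self, username, password):
        """Return True only if the username exists and the password matches."""
        return self._users.get(username) == password


class LogInChecks(unittest.TestCase):
    def setUp(self):
        self.auth = FakeAuthService({"anna": "s3cret"})

    def test_valid_credentials_log_the_user_in(self):
        self.assertTrue(self.auth.log_in("anna", "s3cret"))

    def test_wrong_password_is_rejected(self):
        self.assertFalse(self.auth.log_in("anna", "wrong"))

    def test_unknown_user_is_rejected(self):
        self.assertFalse(self.auth.log_in("bob", "s3cret"))


if __name__ == "__main__":
    unittest.main()

The point isn’t the code itself, but that each check encodes an answer to a “what do we actually mean by log in?” question that would otherwise remain an assumption.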

All in all, a very pleasant meetup. If you’re a tester working in the region (or if you’re willing to travel) and want to join the next meetup, drop me an e-mail or comment here on the blog and I’ll provide information and give you a heads up when the next date is scheduled.

EAST meetup #6

About 6 months ago, a few testing peers in Linköping, Sweden, started a local competence network group that we named “EAST”. The name itself doesn’t mean anything, but suggests that we reside in the south-east part of Sweden and that we welcome people from all around these parts, not just our little town.

The idea was to get people from different organizations and companies together to talk about testing and to help each other learn more about it by sharing knowledge and experiences.

So this past Monday I attended the 6th meetup with the group. We’ve gone through a couple of meetings in the early stages where we got to know each other and talked about what we all do at our respective companies in terms of testing. By the 3rd or 4th meetup, we were starting to have more prepared themes for each meetup and this time we actually had two separate themes prepared.

First, we got to listen to Hanna Germundsson, who presented a software testing thesis she’s working on, geared towards test processes and minimizing testing-related risks. We got to ask questions and also had time for some open conversations in the group about the different questions Hanna presented. There will be follow-ups to this one for sure. I haven’t seen that many software testing theses before. Very cool.

For the second part of the meetup, a couple of people talked about their experiences from the Let’s Test 2012 conference. While it’s of course fantastic to listen to people talking about this conference that I helped organize as an event in general, it was even more cool to listen to them describing specific take-aways and learnings from the different tutorials and sessions they attended. Check out the Let’s Test 2012 archives page for slides and other material related to the conference.

EAST will take a break over the summer, but we’ll be back in force this fall. If you live nearby and want to participate then I suggest you join the EAST LinkedIn group to stay in touch with news and announcements. Or follow @test_EAST on Twitter. Or both.