
Peer Conference: How can we convince everyone to prioritize testing?

On the evening before DevLin 2018, a small band of merry test specialists in the Linköping area gathered for a short peer conference session on the topic: “How can we convince everyone to prioritize testing?”

“Everyone” in this case primarily means people working in disciplines other than testing, e.g. product owners, managers, programmers, requirements analysts, system engineers, and so on. Most testers have experienced, many times over, the difficulty of getting someone else to understand the importance of correctly weighing the need for thorough testing against the demands for quicker releases, more features and faster time to market. With that background, and armed with a few examples to get us on the right track, this peer conference was ready to start.

To kick things off, James Bach gave a short presentation of the recent, yet-to-be-published work that he and Michael Bolton have done on their updated version of the Agile Testing Quadrants (from 2014), which both adds a couple of new elements and is easier to explain than the previous version.

After the presentation, we initiated a k-card facilitated discussion about a broad collection of thoughts and reactions to that presentation and the general topic for the night. These are some of the threads (definitely not all) that I’m pulling from memory:

What does it mean to do deep testing? Is there an implicit level of coverage associated with claiming that you’re doing deep testing? Opinions ranged from “deep testing has happened when you can assert that you ‘know’ you’ve found all important/significant bugs in a given area” to “deep testing can occur on a very limited set of variables for a given function or quality aspect in a larger scope of mostly shallow testing”. (These are not meant to be exact quotes. Interpretation and emphasis are mine.)

We also talked about combining testing activities from multiple quadrants, e.g. testing designed to answer the question “Did we build what we think we built?” together with deep testing designed to reveal and provide “knowledge of every important bug”. While these two types of testing activities can normally be run in parallel quite successfully, we still spent a good while discussing contexts where the two may not be suitable to run in parallel, and what to do instead. In a context where a lot of big changes are happening rapidly, or where there is general chaos, deep testing might not be an efficient use of resources at that point in time. Testers working in such a context might do well to consciously move into a preparation domain and perform activities that help them test more efficiently later: testability advocacy, analysis, specification, test data generation, constructing test environments, etc. This type of movement reminded me of the dynamics of Cynefin, and would in my mind be a movement one could make either voluntarily or involuntarily, depending on the circumstances surrounding the tester.

Another extremely fascinating discussion was on the concept of critical distance and its relationship with social distance. From Bach/Bolton: “Critical Distance refers to the difference between one perspective and another. Testing benefits from diverse perspectives. Shallow testing is tractable at a close critical distance, whereas deeper or naturalistic long-form testing tends to require or create more distance from the builder’s mindset.”

In other words, you want and need a fair bit of critical distance in order to do deep testing, but in order to work well with others and build rapport with the people who built the thing you’re testing, you want a close social distance. The problem is that critical distance and social distance go hand in hand. They are more or less bungee-corded to each other, which creates an interesting trade-off. As your critical distance increases, so does your social distance, and vice versa: decrease social distance, and you risk decreasing your critical distance. On the other hand, a certain amount of social closeness is necessary both to be able to gain information about the thing being built, and to not be seen as a socially inept weirdo who no one listens to anyway. It’s all about finding the sweet spot. (There are of course exceptions, and things that can be done to increase critical distance without negatively impacting social closeness in the workplace, though maybe not always easily.)

Getting programmers on board with testing is something many of us have tried in the past with fairly good results, so as far as I can remember it was hardly addressed head-on. Pairing, sharing knowledge of test techniques, and discussing the concept and specifics of testability and its benefits for both disciplines are examples of ways to make programmers more receptive to “testing talk”.

Finally, we spent some time discussing how to get management to understand the importance of testing. This can be a difficult nut to crack. I find that it can be valuable and fruitful to talk to management about various ways to look at quality (e.g. quality criteria), and about how much of the risk associated with many quality criteria will never be written down in checkable requirements but must be discovered through exploration and deep testing. It was also pointed out in the group that domain-specific examples, and examples of bugs that have been in the news lately, can be a good way to get their attention. A third way, which is easier said than done, is to build high credibility with management, which makes it more likely that they will listen when you raise awareness of the importance of testing.

Credibility can be built either by doing a good job over time, or by doing an exceptionally thorough and excellent job on a single task with the potential for high visibility, in which case it’s worth going the extra mile in order to be able to cash in those credibility chips later on.

As I’ve already stated, there were many more topics covered that for the moment escape my memory, but all in all, this evening was for me an awesome example of how much value can be squeezed out of only a few (~3) hours when a small peer group sits down to discuss big topics with a lot of passion. Good fun and great company too. I will definitely try to help schedule these types of sit-downs more often.

Thank you to all participants for a great evening, a special thank you to Agnetha and Erik for co-organizing the evening together with me, to Rebecca and Morgan for providing the room, and to James Bach for joining us while in town.

Credit for the contents of this post belongs to the contributors of this peer conference: Johan Jonasson, Morgan Filipsson, Rebecca Källsten, Erik Brickarp, Agnetha Bennstam, Magnus Karlsson, Anders Elm, James Bach & Martin Gladh.

Peer conference awesomeness at SWETish

I spent this past weekend at a peer conference. This one marks my 5th, but it’s been a long while since my 4th; in fact, it’s been four years since SWET4. Peer conferences are awesome, as they let the participants really go deep and have thorough, meaningful conversations over one or more days, in a group small enough to make such discussions possible.

Ever since I moved to Linköping a few years ago, I had promised to do my part in organizing a local peer conference for the testing community in and around Linköping, and this weekend we finally got that project off the ground! We decided to call this particular conference SWETish instead of making it another increment of the regular SWET. The reasons: we wanted to keep participation first and foremost within the local communities, whereas the regular SWET conferences invite people from all over the country, and we wanted to keep the theme broad and inclusive, whereas SWET has already been through a fair number of iterations (SWET7 being the latest) and we didn’t want to interrupt its run of more specific topics with a general one that has already been covered. Maybe next time we’ll put on “SWET8” though, if nobody in the Meetups.com group beats us to it (hint hint, wink wink, nudge nudge).

So: sort of, but not quite, a SWET conference, i.e. SWETish (with its eponymous Twitter hashtag #SWETish).

The whole thing took place at the Vadstena Abbey Hotel, which is made up of a beautiful set of buildings, some dating back to the 12th, 13th and 14th centuries. From an organizer standpoint, I can certainly recommend this venue: nice staff, a cozy environment, above-average food, and a nice historic atmosphere too, of course.

When I sent out the initial invitations to this peer conference, I had my mind set on a total of 15 participants, as that seemed to be a good number to ensure that all speakers would get plenty of questions and that there would be a good mix of experiences and viewpoints, while not being so many that people couldn’t participate thoroughly or would be forced to sit quiet for too long. However, because a few people who had initially signed up couldn’t make it in the end, we became a group of “only” 10 people. Turns out that’s an excellent number! Most if not all of us agreed that the low number of participants helped create an environment where everybody relaxed with each other really quickly, which in turn helped when discussions and questions got more critical or pointed, without destroying the mood or productivity of those conversations.

Another pleasant surprise was that we only got through (almost) three presentations + Open Season (facilitated Q&A) during the conference (1.5 days). If memory serves, the average at my past peer conferences is four, and sometimes we even start a fifth presentation and Q&A before time runs out. What I liked about only getting through three is that it’s a testament to how talkative and inquisitive the group was, even though 5 out of 10 participants were at their first ever peer conference! I facilitated the first presentation myself, so I can tell you that in that session alone we had 11 unique discussion threads (green cards) and 48 follow-up questions (yellow cards), plus quite a few legit red cards. For those of you familiar with the k-cards facilitation system, you can tell this wasn’t a quiet group who only wanted to listen to others speak. Which is great, because that’s the very thing that makes peer conferences so fantastically rewarding.

Apart from the facilitated LAWST-style sessions, we also spent 1 hour on Lightning Talks, to make sure that everyone got to have a few minutes of “stage time” to present something of their own choosing.

The evening was spent chatting around the dinner table, in the spa, and in smaller groups throughout the venue until well past midnight. And even though we’d spent a full day talking about testing, most of the conversations were still about testing! How awesome is that?

If you want to read more about what was actually said during the conference, I suggest you check out the Twitter hashtag feed, or read Erik Brickarp’s report, which goes more into the content side of things. This blog post is more about evangelizing the concept itself and providing some reflections from an organizer perspective. Maybe I should have mentioned that at the start? Oops.

A peer conference is made possible by the active participation of each and every member of the conference, and as such, credit for all resulting material, including this blog post, goes to the entire group. Namely, and in alphabetical order:

  • Agnetha Bennstam
  • Anders Elm
  • Anna Elmsjö
  • Björn Kinell
  • Erik Brickarp
  • Göran Bakken
  • Johan Jonasson
  • Kristian Randjelovic
  • Morgan Filipsson
  • Tim Jönsson

Thank you to all the participants, including the few of you who wanted to be there but couldn’t for reasons outside of your control. Next time! And thank you to my partners in crime in the organizing committee: Erik Brickarp, Anna Elmsjö and Björn Kinell.

There! You’ve now reached the end of my triennial blog post. See you in another three years! Actually, hopefully I’ll see you much sooner. The steep dip in my blogging frequency has partly been due to the continuous deployment of new family members in recent years, which has forced me to cut back on more than one extracurricular activity.

Post below in the comments section if you have comments or questions about peer conferences, or want some help organizing one. I’d be happy to point you in the right direction!

Report from SWET4

The 4th SWET (Swedish Workshop on Exploratory Testing) happened this past weekend at Kilsbergen in Örebro, Sweden. The theme for this 4th edition of the workshop was “Exploratory Testing and Models”, hosted by Rikard Edgren, Tobbe Ryber and Henrik Emilsson (thanks guys!). If you haven’t heard of SWET before: briefly, it’s a peer conference based on the LAWST format, where we meet to discuss the ins and outs of exploratory testing in order to challenge each other and increase our own understanding of the topic. SWET has many siblings around the world, and the family of peer conferences on software testing keeps growing, which is a delightful thing to see! Peer conferences rock. In my mind, there’s no better way to learn new things about your craft than to present an experience report and have it picked apart and challenged by your peers.

Friday (Pre-conference)
Most people arrived on the evening before and spent a couple of hours together eating dinner and chatting over a few drinks. The venue had a lovely common room with a cozy fireplace and comfy chairs so, as usual at these events, several people stayed up chatting happily well into the night without a care.

Saturday (Day 1)
The conference started off with a moment of silence for our friend and testing peer Ola Hyltén, who recently passed away in a tragic car accident. Having met Ola for the first time at SWET2 myself, I felt it was an appropriate way to open the conference. Then, after a round of check-ins, the schedule proceeded with the first experience report.

First up was Anna Elmsjö, who talked about making use of business and process models. Anna described her process of questioning the diagrams and adding questions and information to the model to keep track of things she wanted to test. Open Season contained an interesting thread about requirements, where someone suggested that Anna’s testing could be seen as a way of adding or sneaking in new requirements, or might be perceived that way. A follow-up comment pointed out that asking questions about the product doesn’t mean requirements are being added; it means they are being discovered, which is an important distinction to keep in mind, in my opinion.

The second presentation was from Maria Kedemo, who talked about what she called model-based exploratory interviewing for hiring testers. Maria works as a test manager and has been heavily involved in recruiting for her employer during the past year. When preparing for the hiring process, Maria explained, she drew on her testing experience to see if she could identify some of her habits and skills as a tester and apply them to interviewing, e.g. different ways of searching for and finding new information. My take-aways include some thoughts on how modeling what you already have can help you find out what you really need (not just what you want, or think you want), as well as a reaffirmation of the importance of updating your models as your understanding of what you’re modeling increases, sort of like how you would (hopefully) update a plan when reality changes.

In the last presentation of the day, Saam Koroorian talked about using the system map, a model of the system, to drive testing. He also described how his organization has moved from what he called activity- or artifact-driven testing to more information-driven testing. I interpreted these labels more as descriptors of how the surrounding organization views testing: either as an activity that is supposed to provide arbitrary measurements based on artifacts (like test cases) to show some kind of (false) progress, i.e. bad testing, or as an activity that is expected to provide information, i.e. better testing (or simply “testing”).

Saam went on to talk about how his team had adopted James Bach’s low-tech testing dashboard concept for assessing and showing coverage levels and testing effort across different areas, which led to many new green cards (new discussion threads). Among them was a thread of mine about the importance of taking the time dimension into account, and how to visualize the “freshness” and reliability of information as time passes (assuming the system changes over time). This is something I’ve recently discussed with some other colleagues to solve a similar problem at a client, which I found very stimulating. I might turn that into a blog post of its own one day (when the solution is finished).
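To make that freshness idea a bit more concrete, here is a minimal Python sketch of what a dashboard row with a time dimension could look like. To be clear, the field names, the 0–3 coverage scale and the 14-day staleness threshold are my own assumptions for illustration, not the canonical low-tech dashboard format:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch of a "low-tech dashboard" row with a time dimension.
# Field names and the 0-3 coverage scale are assumptions for illustration.
@dataclass
class AreaStatus:
    area: str          # product area under test
    effort: str        # current testing effort, e.g. "none", "low", "high"
    coverage: int      # achieved coverage level, assumed 0..3
    last_tested: date  # when this assessment was last refreshed

    def age_in_days(self, today: date) -> int:
        """Days since the assessment was last refreshed."""
        return (today - self.last_tested).days

def render(rows, today, max_age_days=14):
    """Print one line per area, flagging assessments that may be stale."""
    for row in rows:
        age = row.age_in_days(today)
        flag = "STALE?" if age > max_age_days else "ok"
        print(f"{row.area:<10} effort={row.effort:<5} "
              f"coverage={row.coverage}/3 age={age:>3}d [{flag}]")

render(
    [AreaStatus("Login", "low", 2, date(2012, 11, 10)),
     AreaStatus("Reports", "none", 1, date(2012, 9, 30))],
    today=date(2012, 11, 12),
)
```

The point isn’t the code itself but the design choice: every assessment carries its own age, so a reader can immediately tell a fresh coverage claim from one that may no longer reflect the system.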

Saam also noted that since his organization was going through an agile transition at the time, sneaking new thinking and ideas into the testing domain was easier than usual, as the organization was already primed for change in general. Interesting strategy. Whatever works. 🙂

Lightning Talks
Day 1 was concluded with a 60-minute round of lightning talks, which based on the number of speakers meant that each person got 5 minutes for their presentation (including questions). Lots of interesting topics in rapid progression, like an example of how to use free tools to create cheap throw-away test scripts as an exploration aid (James Bach), or how to use the HTSM quality characteristics to discuss quality with customers and figure out their priorities (Sigge Birgisson). Erik Brickarp gave a Lightning Talk on visualization that he has since turned into a blog post over at his blog. My own Lightning Talk was about helping testers break stale mental models and get out of creative ruts through mix-up testing activities (a.k.a. cross-team testing). If I’m not mistaken, all participants who weren’t already scheduled to give a presentation gave a Lightning Talk, which was nice. That way everybody got to share at least one or two of their ideas and experiences.

In the evening, the group shared a rather fantastic “Black Rock” dinner after which the discussions continued well into the wee hours of the night, despite my best efforts to get to bed at a reasonable hour for once.

Sunday (Day 2)
After check-in on day 2, the first order of business was to continue through the stack of remaining threads from Saam’s talk that we didn’t have time to get to the day before. I think this is a pretty awesome part of this conference format. Discussions continue until the topic is exhausted, even if we have to continue the following day. There’s no escape. 😉

The first (and only, as it turned out) presentation of day 2 came from James Bach, who told a story about how he had done exploratory modeling of a class 3 medical device, using its low-level design specification to come up with a basis for his subsequent test design. During Open Season we also got a lot more information about his overarching test strategy. It was a fascinating story that I won’t go into much detail on here, but you should ask him to tell it to you if you get the chance. You’ll get a lot of aha! moments. My biggest takeaway from that Open Season discussion was a reaffirmation of something I’ve known for quite some time but haven’t been able to put into words quite so succinctly: “Formal testing that’s any good is always based on informal testing.” Also worth considering: informal testing is based in play. As is learning.

Formal testing is like the opening night of a big show. It becomes a success because(/if) it’s been rehearsed. And informal testing provides that rehearsal. Skip rehearsing at your peril.

So how do you go from playing to making formal models? You practice! And according to James, a good way to practice is to start by drawing state models of various systems, like for instance this über-awesome Flash game. When you’ve modeled the game, you can start to play around with the model in order to generate a rich set of test ideas, asking “what if”-style questions like “What happens if I go from here to here?” or “I seem to be able to do this action over here; I wonder if I can do it over here as well?” and so on. What factors exist, what factors can exist, which factors matter? A small sketch of that idea follows below.
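As a rough illustration of how a state model can crank out those “what if” questions mechanically, here’s a short Python sketch. The game, its states and its actions are entirely invented for illustration; the interesting part is asking about every (state, action) pair, including the combinations the model claims are impossible:

```python
# A hypothetical state model of a tiny game: each state maps to
# {action: next_state}. Real models would be bigger and messier.
model = {
    "menu":      {"start": "playing", "quit": "exit"},
    "playing":   {"pause": "paused", "die": "game_over"},
    "paused":    {"resume": "playing", "quit": "exit"},
    "game_over": {"restart": "playing", "quit": "exit"},
    "exit":      {},
}

# Every action that appears anywhere in the model.
all_actions = {a for transitions in model.values() for a in transitions}

# Generate test ideas: for every state, ask about every known action --
# both the transitions the model claims exist, and, more interestingly,
# the ones it claims don't ("I can do this over here; can I do it over
# there as well?").
for state, transitions in model.items():
    for action in sorted(all_actions):
        if action in transitions:
            print(f"In '{state}', does '{action}' really lead to "
                  f"'{transitions[action]}'?")
        else:
            print(f"In '{state}', what happens if I try '{action}'?")
```

Even this toy model yields dozens of questions, and the ones about “impossible” actions are often where the interesting bugs hide.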

I want to finish off with a couple of final quick take-aways from the weekend. First, a “test case” can be defined as an instance or variation of a test or test idea. That definition encompasses many or most of the varying things and containers that people call test cases. And finally, regarding requirements: challenge the assumption that tests can be derived from the requirements. The tests aren’t in the requirements and thus can’t be derived from them. You can, however, construct tests that are relevant in order to test the requirements and obtain information about the product, usually based on risk. And while on the subject, remember that, usually, requirements > requirements document.

Thank you to all the participants at SWET4: Anna Elmsjö, Simon Morley, Tobbe Ryber, Oscar Cosmo, Erik Brickarp, James Bach, Johan Jonasson, Sigge Birgisson, Maria Kedemo, Rikard Edgren, Joakim Thorsten, Martin Jansson, Saam Koroorian, Sandra Camilovic and Henrik Emilsson.

That’s it. That’s all that happened. (No, not really, but I’ll have to save some things for later posts!)