EAST meetup #7

Last night, EAST (the local testing community in Linköping) had its 7th “official” meetup (not counting summer pub crawls and the improvised restaurant meetup earlier this fall). A whopping 15 people opted to prolong their workday by a few hours and gather to talk about testing inside Ericsson’s facilities in Mjärdevi (our host this time, thanks to Erik Brickarp). Here’s a short account of what went down.

The first presentation of the night was me talking about this past summer’s CAST conference and my experiences from it. The main point was to give people who didn’t know about CAST an idea of what makes it different from “other conferences” and why it might be worth attending from a professional development standpoint. CAST is the conference of the Association for Software Testing, a non-profit organization with a community made up of lots of cool people and thinking testers. That alone usually makes the conference worth attending. But naturally, I’m a bit biased.

If you want to know more about CAST, you can find some general information on the AST website, and CAST 2012 in particular has been blogged about by several people, myself included.

The second presentation came from Victoria Jonsson and Jakob Bernhard, who gave an experience report from the course “The Whole Team Approach to Agile Testing” with Janet Gregory, which they had attended a couple of months earlier in Gothenburg.

There were a couple of broad topics covered. All had a hint of the agile testing school to them, but from the presentation and discussions that followed, I got the impression that the “rules” had been delivered as good rather than best practices, with a refreshingly familiar touch of “it depends”. A couple of the main topics (as I understood them) were:

  • Test automation is mandatory for agile development
    • Gives more time for testers to do deeper manual testing and focus on what they do best (explore).
    • Releasing often is not possible without an automated regression test suite.
    • Think of automated tests as living documentation.
  • Acceptance Testing could/should drive development
    • Helps formulating the “why”.
    • [Comment from the room]: Through discussion, it also helps clarify what we mean by e.g. “log in” in a requirement like “User should be able to log in” (see the sketch after this list).
  • Push tests “lower” and “earlier”
    • Aim to support the development instead of breaking the product [at least early on, was my interpretation].
    • [Discussion in the room]: This doesn’t mean that critical thinking has to be turned off while supporting the team. Instead of breaking the product, transfer the critical thinking elsewhere e.g. the requirements/user stories and analyze critically, asking “what if” questions.
    • Unit tests should take care of task-level testing, acceptance tests handle story-level testing and GUI tests should live at the feature level. [Personally, and that was also the reaction of some people in the room, this sounds a bit simplified. It might not be meant to be taken literally.]
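
Since both the “log in” example and the “living documentation” idea came up, here’s a minimal sketch of what an acceptance-level check for that requirement could look like. I’m using Python with pytest purely as an illustration (neither the course nor the presenters prescribed a tool, and the AuthService below is made up):

    # test_login.py -- hypothetical sketch; AuthService and its rules are
    # invented here to show how a concrete check pins down what
    # "User should be able to log in" actually means.
    import pytest

    class AuthService:
        """Toy stand-in for the system under test."""
        def __init__(self):
            self.users = {"alice": "correct horse"}

        def login(self, username, password):
            return self.users.get(username) == password

    @pytest.fixture
    def auth():
        return AuthService()

    def test_known_user_with_right_password_gets_in(auth):
        assert auth.login("alice", "correct horse")

    def test_wrong_password_is_rejected(auth):
        assert not auth.login("alice", "wrong")

    def test_unknown_user_is_rejected(auth):
        assert not auth.login("mallory", "anything")

Even this toy version forces clarifying questions (should usernames be case sensitive? is there a lockout after repeated failures?), which is the “helps formulating the why” part in practice.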

There was also a discussion about test-driven development, and some suggestions of good practices came up, like for instance how testers on agile teams should start a sprint by discussing test ideas with the programmer(s), outlining the initial test plan for them. That way, the programmer(s) can use those ideas, together with their own unit tests, as checks to drive their design and potentially prevent both low- and high-level bugs in the process. In effect, this might also help the tester receive “working software” that can withstand more sapient exploratory testing, and the discussion process itself helps remove confusion and assumptions around the requirements that might differ between team members. Yep, communication is good.

All in all, a very pleasant meetup. If you’re a tester working in the region (or if you’re willing to travel) and want to join the next meetup, drop me an e-mail or comment here on the blog and I’ll provide information and give you a heads up when the next date is scheduled.

Eulogy for Ola

This past Wednesday, our community received the sad news that our friend Ola Hyltén had passed away. I was out of the country when I first heard it, and though I’m back home now, I’m still having trouble coming to terms with the fact. I wasn’t planning on writing anything about this at first, and several others have already expressed their feelings in blogs, comments and tweets better than I could hope to do. But now I’m thinking that I might write a few short lines anyway, mostly for my own sake, in the hope that it might provide some cathartic effect.

I didn’t know Ola until a couple of years ago. I had heard of him before that, but it wasn’t until SWET2 that I got to meet him in person. I found him to be a very likable guy who got along well with everybody and who always seemed to be just a second away from saying something funny or bursting out laughing himself. After SWET2, I kept running into Ola every now and then, for instance at a local pub gathering for testers down in Malmö and later at SWET3. However, it was during our time together on the Let’s Test conference committee, leading up to Let’s Test 2012, that I really got to know him. Ola always seemed to be easygoing, even when things were going against him, and it was easy, even for an introvert like myself, to slip into long and entertaining conversations with him about everything and nothing.

I considered Ola to be one of the more influential Swedish testers in recent years, and it’s an influence that we as a community will surely miss. We’ve lost a friend, a great conversationalist and a valued colleague, and I know it will still take a lot of time before that fact truly sinks in for me.

My condolences go out to Ola’s family and close friends as they go through this difficult time.

More eulogies for Ola can be found here, here, here and here.

I’m a sucker for analogies

I love analogies. I learn a lot from them and I use them a lot myself to teach others about different things. Sure, even a good analogy is not the same as evidence of something, and if taken too far, analogies can probably do more harm than good (e.g. “The software industry is a lot like the manufacturing industry, because… <insert far-fetched similarity of choice>”). However, I find that the main value of analogies is not that they teach us “truths”, but rather that they help us think about problems from different angles, or help illustrate the thinking behind new ideas.

I came across such an analogy this morning in a mail list discussion about regression testing. One participant offered a new way of thinking about the perceived problem of keeping old regression tests updated, in this way: “Pause for a moment and ask… why should maintenance of old tests be happening at all? […] To put it another way, why ask old questions again? We don’t give spelling tests to college students […]”

I like that analogy – spelling tests to college students. If our software has matured past a certain point, then why should we go out of our way to keep checking that same old, unchanged functionality in the same way as we’ve done a hundred times before? Still, the point was not “stop asking old questions”, but rather an encouragement to examine our motivations and think about possible alternatives.

A reply in that same thread made the point that their regression tests were more like blood tests than spelling tests. The analogy there: just because a patient “passes” a blood test today doesn’t mean it’s pointless for the physician to draw blood at the patient’s next visit. Even if the process of drawing blood is the same every time, the physician can choose to screen for a single problem, or multiple problems, based on symptoms or claims made by the patient. Sort of like how a tester can follow the same path through a program twice but vary the data.
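
Translated into code, that last point (same path, vary the data) is essentially what parametrized tests do. Here’s a minimal sketch in Python with pytest, with a made-up function under test:

    # Hypothetical sketch: one test path, several "blood samples" of data.
    import pytest

    def normalize_username(raw: str) -> str:
        """Made-up function under test: trim and lowercase a username."""
        return raw.strip().lower()

    # The procedure is identical every time; what we screen for
    # varies with the data we feed it.
    @pytest.mark.parametrize("raw, expected", [
        ("Alice", "alice"),
        ("  alice  ", "alice"),
        ("ALICE\t", "alice"),
        ("", ""),  # boundary "sample": what should happen here?
    ])
    def test_normalize_username(raw, expected):
        assert normalize_username(raw) == expected

The draw itself is routine; the interesting choices are which samples to take and what to screen them for.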

So what does this teach us about testing? Again, analogies rarely teach us any hard truths, but they serve as useful stimuli and help us think from new angles. I use them as I use any other heuristic method. So with this spelling test/blood test analogy in mind, I start to think about the test ideas I have lined up for the coming few days at work. Are most of them going to be like spelling tests and, if so, can I still make a good argument for why those would be the best use of my time? Or are there a few ideas in there that could work like blood tests? If so, what qualifies them as such, and can I improve their screening capability even further in some way (e.g. vary the data)?

Like I said earlier, I came across this analogy just this morning, which means I’m probably not really done thinking about it myself yet, but I thought it worth sharing nonetheless. Much like cookies, sometimes a half-baked thought is even better than the real thing. Or at least better than no cookie at all. So here it is. And with that analogy, or maybe with this one below, I bid you a good day.

XKCD: Analogies

Report from CAST 2012

This year’s Conference of the Association for Software Testing (CAST) is now in the books. I’m returning home with a head full of semi-digested thoughts and impressions (as well as 273 photos in my camera and an undisclosed number of tax-free items in my bag) and will briefly try to summarize a few of them here while I try to get back on European time.

The Trip
I’m writing this while on the train heading home on the last leg of this trip. Wow, San Jose sure is far away. Including all the trains, flights and layovers… I’d say it’s taken about 24 hours door-to-door, in each direction. That should tell you a bit about how far others and I are willing to go for solid discussions about testing (and I know there are people with even worse itineraries than that).

The Venue
I arrived at the venue a little over a day in advance in order to have some time to fight off that nasty 9-hour jet lag. I checked in to my room, then immediately switched rooms since the previous guest hadn’t moved his stuff out, though the hotel’s computer said that the room had been vacated. Still got my bug magnetism in working order, apparently.

CAST was held at the Holiday Inn San Jose Airport this year. The place was nice enough. Nothing spectacular, but it did the job. The hotel food was decent and the coffee sucked as badly as ever. Which I expected it would, but… there were no coffee shops within a couple of miles as far as I could tell(!) I’m strongly considering bringing my own java the next time I leave Europe. It’s either that or I’ll have to start sleeping more, which just doesn’t work for me at any testing event.

The Program
I’m not going to comment much on the program itself since I helped put it together; it just wouldn’t make sense since I’d be too biased. I’m sure there will be a number of other CAST blog posts out there soon that go more in depth (check through my blogroll down in the right-hand sidebar, for instance). I’ll just say that I got to attend a couple of cool talks on the first day of the conference. One of them was by Paul Holland, who talked about his experiences with interviewing testers and the methods he’s been using successfully for the past 100+ interviews. Something I’m very interested in myself. I actually enjoy interviews, from both sides of the table.

The second day I got “stuck” (voluntarily) in a breakout room right after the first morning session. A breakout room is something we use at CAST when a discussion after a session runs long and other speakers need the room. Rather than stopping a good discussion, we move it to a different room and keep at it as long as it makes sense and the participants have the energy for it. Anyway, this particular breakout featured myself and two or three others who wanted to continue discussing with Cem Kaner after his presentation on software metrics. We kept at it up until lunch and after that I was kind of spent, so I opted to “help out” (a.k.a. take up space) behind the registration desk for the rest of the day. Which was fun too!

The third day was made up of a number of full day tutorials. I didn’t participate in any of them though, so again you’ll have to check other blogs (or #CAST2012 on Twitter) to catch impressions from them.

Facilitation
CAST makes use of facilitated discussions after each session or keynote. At least one third of the allotted time for any speaker is reserved for discussion. This year I volunteered to facilitate a couple of sessions. I ended up facilitating a few talks in the Emerging Topics track (short talks) as well as a double-session workshop. It was interesting, but I think I need to sign up for more actual sessions next year to really get a good feel for it (Emerging Topics didn’t have a big audience when I was there, and the workshop didn’t need much in the way of facilitation).

San Jose / San Francisco
We also had time to see a little bit of both San Jose and San Francisco on this trip, which was nice. I only got to downtown San Jose on the Sunday leading up to the conference, so naturally things were a bit quiet. I guess it’s not like that every day of the week(?)

San Francisco turned out to be an interesting place with sharp contrasts. The Mission district, Market Square and Fisherman’s Wharf all had their own personalities and some good and bad things to them. Anyway, good food, nice drinks and good company together with a few other testers can make any place a nice place.

Summary
As every year, it’s the company of thoughtful, engaged testers that makes CAST great. If you treat it like any other conference and just go to the sessions and then back to your room without engaging with the rest of the crowd at any point during the day (or night), then I’m afraid you’ll miss out on much of the Good Stuff. Instead, partake in hallway hangouts, late night testing games and informal discussions off in a corner, test your skill in the TestLab with James Lyndsay, or join one of the AST’s SIG meetings. That’s where the real fun usually comes out for me. And this year was no exception.

Going to CAST

Next week I’ll be at the Conference of the Association for Software Testing (CAST) in San Jose, CA. The first time I attended CAST in 2009, it quickly became my yearly top priority among conferences to attend. This is a “CONFERence” type conference (high emphasis on community and discussions) which usually produces a lot of blog worthy material for its attendees. I will try to write a couple of brief blog entries while at the conference, but if you want to find out what’s being discussed in “real time”, then tune in to #CAST2012 on Twitter, or check out the live webCAST.

If you’re a regular reader of this (little over a month old) blog, then you know that CAST was one of the inspirations behind the recent Let’s Test conference. CAST has always been a great experience for me and this year’s CAST will be my 4th. So far I have gone home every time with my head filled with new ideas and interesting discussions still lingering, waiting to be processed over the following few weeks, and I think this year will be no exception.

This year’s CAST will be the first where I’m taking part in the program committee (together with Anne-Marie Charrett, Sherry Heinze and program chair Fiona Charles), so I’ve been reading through and evaluating a wide range of great proposals for the workshops and sessions that will make up the first two days of the conference, trying to help put together a really exciting program for this year’s theme, “The Thinking Tester”.

I’ll also be facilitating a few of the workshop and session discussions this year, which will be interesting. In Sweden we’re used to going to conferences to “learn” from the speaker, with everybody taking turns to ask their questions in a polite (read: timid) and orderly fashion, much like we queue at the supermarket or the movie theater. At CAST, on the other hand, it’s not uncommon for the speaker’s message to be challenged and/or questioned thoroughly. Needless to say, to get a discussion of that kind to flow effectively without derailing, good facilitation is key. Facilitation also enables other things, like making sure that more than one or two people get to talk during the Q&A, or that discussions stay on topic. I like both that attitude and that format, and although I’ve taken the stage as a speaker at CAST in the past, this will be my first time facilitating “over on that side of the pond”. So yeah, it will be an interesting experience for me for sure.

Looking forward to going there, meeting old friends, listening to interesting talks, facilitating discussions, blogging about it… Looking forward to it all!

Finally, those with a keen eye might have noticed that the headline of this blog changed recently. The reason is simple… When I resurrected this blog last month, I just put up the first thing that came to mind as the headline. Turns out, the first thing that came to mind was the exact same headline that Shmuel Gershon uses on his (well established and well worth reading) testing blog. We can’t have that. Huib Schoots was kind enough to point this out in his most recent blog post, titled “15 test bloggers you haven’t heard about, but you should…”, where, incidentally, I’m one of the 15. Most of the other blogs on that list are real gems, by the way. One or two I hadn’t heard about myself, so I’ll check them out this summer for sure.

Re: Adaptability vs Context-Driven

A couple of days ago, Huib Schoots published a very interesting blog post titled “Adaptability vs Context-Driven”, part of an ongoing discussion between himself and Rik Marselis. This blog post represents my initial reaction to that discussion.

The long and short of it all seems to be the question of whether using a test framework like TMap, combined with being adaptable and perceptive, is similar to (or the same as) being context-driven.

To me the answer is… no. In fact, I believe TMap and the context-driven school of thought live on opposite ends of the spectrum.

Context-driven testers choose every single aspect of how to conduct their testing by looking first to the details of the specific situation, including the desires of the stakeholders who commissioned the testing. It starts with the context, not a toolbox or a ready-made, prescriptive process.

TMap and other factory methods seem to start with the toolbox and then proceed to remove whatever parts of the toolbox don’t fit the context (“picking the cherries”, as it’s referred to in Huib and Rik’s exchange). At least that’s how I’ve seen it used when it’s been used relatively well. More often than not, however, I’ve worked with (well-intentioned) factory testers who refused to remove what didn’t fit the context, and instead advocated changing the context to fit the standardized process or templates. So, context-imperial, or mildly context-aware at best. Context-driven? Not in the slightest.

When I’m faced with any testing problem, I prefer to start with the context and then build my strategy from the ground up; testing the strategy as I’m building it while making as few assumptions as possible about what will solve the problem beforehand. I value strategizing incrementally together with stakeholders over drawing up extensive, fragile test plans by using prescriptive templates that limit everybody’s thinking.

I’m not saying that “cherries” can’t be found in almost any test framework. But why would I limit myself to looking for cherries in only a single cherry tree, when there’s a whole garden of fruit trees available all around us? Or is that forbidden fruit…? (Yes, I’m looking at you, ISO/IEC 29119.)

Well, now that’s surely a can of worms for another time. To be continued.

If you haven’t already read Huib’s post that I referred to in the beginning, then I suggest you do that now.

Thank you Huib and Rik for starting this discussion and for making it public. Testers need to engage in more honest exchanges like this.

Let’s Test – in retrospect

What just happened? Was I just part of the first-ever European conference on context-driven software testing? It feels like it was only yesterday that I was still thinking “this will never happen”, but it happened, and it’s already been over a month since it did. So maybe it’s time for a quick (sort of) retrospective? Let’s see, where do I begin…?

Almost a year ago, I did something I rarely do. I made a promise. The reason I rarely make promises is because I’m lousy at following a plan and with many if not most promises, there’s planning involved… So making a promise would force me to both make a plan and then follow it. Impossible.

And yet, almost a year ago now, I found myself at the CAST conference in Seattle, standing in front of 200+ people (with another couple of hundred listening in via webcast, I’ve been told), telling the audience that I and some other people from Sweden were going to put on a conference on context-driven testing in 2012 and that it would be just like CAST, only in Europe. And of course we had it all planned out and ready to be launched! Right…? Well… not… really…

At that point we didn’t have a date set, no venue contract in place, no program that we could market, no funding, no facilitators – heck, we didn’t even really have a proper project team. The people who had been discussing this had only started talking about organizing a conference at the 2nd SWET workshop on exploratory testing in Sweden a couple of months earlier. In my mind, it was all still at a “Yeah, that would be a neat thing to pull off! We should do that!” level of planning or commitment. At least as far as I was concerned. The other guys might tell you that they had made up their minds long before this, but I don’t think I had.

Anyway, since I was elected (sort of) to go ahead and announce our “plan” (sort of), I guess this is the point where I made up my mind to be a part of what we later named “Let’s Test – The Context-Driven Way”, and over the next couple of months we actually got a project team together and became more or less ready to take on what we had already promised (sort of) to do.

Fast forward a couple of months more. Now we have that committed team of 5 people in place, working from 5 different locations around the country (distributed teams, yay!). We have an awesome website, a Twitter account, a shared project Dropbox and some other boring back office stuff in place. The team members are all testers by trade, ready to create a conference that is truly “by testers, for testers”. Done. What more do we need? Turns out, a conference program is pretty high up on the “must have” list for a conference. Yeah, we should get on that…

I think this was the point where I started to realize just how much support this idea already had out there in the context-driven testing community. Scott Barber, Michael Bolton and Rob Sabourin were three of our earliest “big name” supporters who had heard our announcement at CAST, and many testers from the different European testing communities were also cheering for the idea early on, offering support. A bunch of fabulous tutorial teachers and many fantastic testing thinkers and speakers from (literally) all over the world, who we never dreamed would come all the way to Sweden, also accepted our invitations early on. Our call for papers (which I at first feared wouldn’t get many submissions, since we were a first-time conference) also produced a superb yield of excellent proposals. So much so that it was almost impossible to pick only a limited number to put on the program.

So while I can say in retrospect that creating a conference program is no small task, it is a heck of a lot easier when you get as awesome a response and as much support from the community as we’ve gotten throughout this past year. It did not go unnoticed, folks!

After we got the program in place, I was still a bit nervous about the venue and residential conference format. Would people actually like to come to this relatively remote venue and stay there for three days and nights, while basically doing nothing else but talk about testing, or would they become bored and long for a night on the town? I had to remind myself of the reasons we decided to go down this route in the first place: CAST and SWET.

CAST is the annual “Conference of the Association for Software Testing”, which uses a facilitated discussion format developed through the LAWST workshops. People who come to CAST usually leave saying it’s been one of their best conference experiences ever, in large part due to (I believe) this format with facilitated discussions after each and every presentation. We borrowed this format for Let’s Test, and with the help of the Association for Software Testing (AST) we were able to bring in CAST head facilitator Paul Holland to offer facilitation training to a bunch of brilliant volunteers. Awesome.

SWET is the “Swedish Workshop on Exploratory Testing”, a small-scale peer workshop that also uses the LAWST-style discussion format. But what makes this sort of gathering different from most regular conferences is that the people who come all stay at the location where the workshop is held, for one or two consecutive days and nights. So after the workshop has concluded for the day, the discussions still don’t stop. People at SWET stay up late and continue to share and debate ideas well into the night, at times using the sunrise as their only cue to get to bed. I believe one of the main reasons for this is… because they can. They don’t have to catch a bus or a cab back to their hotel(s), and when given the opportunity to stay up late and talk shop with other people who are as turned on by software testing as they are, they take it. We wanted to make this possible for about ten times as many people as we usually see at SWET. Hence the residential format and extensive evening program at Let’s Test, which I believe is a fairly unusual if not unique format for a conference of this size. At least in our neck of the woods.

In the end, I personally think we were able to offer a nice blend of the two conference models that had inspired us. People weren’t forced to enter into discussions after sessions, but they were always able and encouraged to participate, and in a structured manner (great job, all facilitators!). Also, people could choose to go to bed early and recharge their batteries after a long day of conferencing, or they could opt in for either high-energy test lab activities or a more mellow and laid back art tour around the venue campus (to name but a couple of the well attended evening activities) before heading for the bar. I think I managed to get to bed at around 2:00 AM each night, but I know that some folks stayed up talking for a couple of hours beyond that, too.

Wrapping up this little retrospective, I’d like to say thank you to our sponsors who, among other things, helped make the evening events such a well-appreciated part of the conference experience, and who all engaged actively in the conference itself, something we as organizers really appreciated. Finally, a special shout-out to the very professional Runö venue crew and kitchen staff, who readily helped us out whenever we needed it. You made the execution of this event a total joy.

I’m very happy about how Let’s Test turned out. It exceeded my own expectations for sure. Judging by the feedback we saw on Twitter during the event, and in the blogosphere afterwards, I’d say it looks like most who attended were pretty ok with the experience as well. Check out the blog links we’ve gathered on the Let’s Test 2012 Recap page and judge for yourselves. Seriously, it’s been extremely rewarding to read through all these blog posts. Thank you for that.

Plans are already well underway for next year’s conference. We’re delighted that both James Bach and Johanna Rothman have signed on as two of our keynote speakers, and we’ll announce a call for proposals sometime after the summer. I encourage all of you who sent something in last year to do so again. Oh, and you can sign up right now for Let’s Test 2013 and catch the advantageous first responder rate. A bunch of people already have, so you’ll be in good company.

One final thing… We know a good deal about what people liked at Let’s Test 2012, but no doubt there are also a few things that we can and should improve. Let us know.

It’s been a pleasure. See you all there next year I hope!

Thinking Visually

Today I finally got around to watching Alan Richardson’s STARonline talk “Thinking Visually In Software Testing” that has recently been published on YouTube for easy access (also embedded at the bottom of this post).

I’m always interested in learning new ways of visualizing information and communicating thoughts effectively, so thinking visually is a topic that’s right up my alley. Alan’s talk is well worth a watch/listen, and if you don’t believe me, I took some quick notes while watching it to help you decide if it’s worth your time or not. (Hint: It is.)

Described in one sentence, the talk is about using models and diagrams to aid test planning and the communication of the testing effort. It explains what Alan means by “thinking visually”, describes the opposite of thinking visually, and contains a very amusing part with examples of how to best “trap” your thinking and how to best document your trapped thinking so that your readers will gain no value from reading your documentation. Hilarious. Also, as you listen to Alan’s examples of trapped thinking being presented in your average test plan or report, you will probably realize that you see this kind of documentation quite often.

I do recommend that you listen from the beginning of the talk, but if you want to hear what I’m talking about right away, you can skip ahead to about 16:48 for a good laugh. That is, until you also realize that some of this stuff is something you yourself have been doing, or maybe are still doing, quite often. At least that’s what I realized. Alan suggests that we go through our own documents and read them with a sense of humor, which will help us spot these things more easily, and maybe also help us stop doing them.

But… going back to the beginning (how’s that for structure), one thing that Alan said early on got me thinking about how I approach documentation and debriefings:

“I would rather see your thinking, than see what you think your thinking should look like.”

In other words, the way you present your test strategy, test ideas or test results should demonstrate that you’re putting more effort into the thought process than into the documentation process. So, focus on showing that you are thinking, and that you are thinking about the testing task at hand, rather than presenting something that suggests your thinking was focused on “How can I fill in this template?”

“If you don’t think in the first place. If you don’t have a model about what you’re going to do, your communication will be unclear, and you won’t convince, regardless of how well you fill in the template.”

I personally like to see more test plans focus on showing me that the tester is thinking, rather than focusing on exactly what they are thinking. Why? Well, test plans are usually born before testing starts, at a time when we know the least we’ll ever know about the thing we’re actually going to test. So if I’m one of your stakeholders and you show me a plan that tells me exactly what you’re going to do and that you have it all figured out… then the only thing I know for sure is what your testing will not be like, because no test plan fully escapes first contact with the product unscathed.

But, if you can show me that you are thinking actively, from many different angles, and that your thinking is open to a changing context, then I will feel more assured that you will do a good job once you get your hands on the product. I don’t want testers who can follow directions. I want testers who can think for themselves.

Ok, back to the presentation. Alan shares a few of his principles for how to approach documentation (somewhat paraphrased):

  • How little can you get away with?
  • Make the templates work for you, not the other way around
  • Put important information first. Make what’s important obvious
  • Summarize for the reader
  • Meet the reader’s needs

I’m running a bit long with this post, but it turns out that this was a very quotable talk, so I’ll leave you with a few “sound bites” that I took away from listening to this talk, that might trigger you to go and watch the whole 25 minutes or so of it yourself.

I learned that communication is not what I give, it’s what people take from what I present. So I have to help them take it in the way that I want […] to focus on what’s important.
– – –
When you create a mind map your default relationship is a parent/child decomposition, but there are other relationships in your model and you may need different visual models to draw that out.
– – –
Different tools support different styles of thinking. You get different value when you model your thought process in different tools.
– – –
Don’t think that you can get away without thinking.

EAST meetup #6

About 6 months ago, a few testing peers in Linköping, Sweden, started a local competence network group that we named “EAST”. The name itself doesn’t mean anything, but suggests that we reside in the south-east parts of Sweden and that we’re welcoming people from all around these parts, not just our little town.

The idea was to get people from different organizations and companies together to talk about testing and to help each other learn more about testing by sharing knowledge and experiences.

So this past Monday I attended the 6th meetup with the group. We went through a couple of meetings in the early stages where we got to know each other and talked about what we all do at our respective companies in terms of testing. By the 3rd or 4th meetup, we were starting to have more prepared themes for each meetup, and this time we actually had two separate themes prepared.

First, we got to listen to Hanna Germundsson, who presented a software testing thesis she’s working on, geared towards test processes and minimizing testing-related risks. We got to ask questions and also had time for some open conversations in the group about the different questions Hanna presented. There will be follow-ups to this one for sure. I haven’t seen that many software testing theses before. Very cool.

For the second part of the meetup, a couple of people talked about their experiences from the Let’s Test 2012 conference. While it’s of course fantastic to listen to people talking in general terms about this conference that I helped organize, it was even cooler to listen to them describing specific take-aways and learnings from the different tutorials and sessions they attended. Check out the Let’s Test 2012 archives page for slides and other material related to the conference.

EAST will take a break over the summer, but we’ll be back in force this fall. If you live nearby and want to participate then I suggest you join the EAST LinkedIn group to stay in touch with news and announcements. Or follow @test_EAST on Twitter. Or both.

The little blog that could…

People sometimes start blogs, write a few posts and then the blogs die away. I’ve taken a slightly different route. I started this blog in June of 2007 and have since then not written a single post. Not one… Until now… So now that I’ve decided to resurrect it 5 years later, I’m curious to see if I can keep it alive. I suspect that before too long I’ll start treating it like I treat my Twitter account. That is, long periods of nothing followed by days of super intense posting every now and then (usually when at workshops, conferences or other gatherings).

Some of you finding your way here already know me as a software tester. That’s good, because that’s what I intend to blog about here, so you don’t have to spend time getting to know me a second time around. If you know me as anything other than a software tester then you probably won’t get much out of following me through this forum. But I’ll still be your friend on Facebook, Twitter, LinkedIn, Entaggle or some other social media if you’d like. Or maybe even in *gasp* real life.

That’s it for now. First post done. There will be more to follow. Soon.

Or will there?