
QA: The Misnomer That Never Dies

Over the span of my testing career I don't know how many times I've had this conversation, in its many forms and permutations. The good old conversation about testing vs. quality assurance. The conversation we (software testers) have about not being 'quality assurance' professionals. That we cannot assure anything, least of all quality. Testers can assure quality as much as a doctor can assure health. Imagine the conversation:

[Patient] So doc, now that I’ve done my yearly checkup, can you guarantee I am entirely healthy?
[Doctor] Well, you are healthy, and we checked for all of the most common sicknesses and diseases for someone of your age, gender and background. However, I cannot guarantee there is nothing else wrong somewhere we didn't check.
[Patient] But you said you were doing a thorough checkup.
[Doctor] And I did.

You get the idea. The doctor takes what we testers might call a 'risk-based' approach to our yearly checkups. He takes into account our family history, background, gender, age and habits, and decides which exams we need, as he couldn't possibly ask us to do all the available medical exams out there. That could be seen as a waste of time and money.

The same goes for testing. Only that the conversation happens between the Software Tester and the Project Manager, or the Product Owner, or the Dev Lead, or whomever else. I believe one reason this still happens is that some testers still insist on calling themselves 'Quality Assurance' professionals. How can we possibly not assure quality if that is precisely what our job titles say? I've had this conversation countless times with 'QA' people everywhere. There is a feeling that being called a 'tester' is demeaning, or less important than being a QA. A Quality Assurance Engineer once told me 'but I do so much more than just test'. That may be true, but you still do NOT ensure quality. You just can't. This is not a new topic; Michael Bolton wrote his popular blog post Testers Get Out of the Quality Assurance Business* over 5 years ago, and it was not a new topic then.

The fact that we, as a community, are still talking about this has to mean something. I'm not sure what yet, but it has to. It could mean that we are talking to the same people over and over, or that when we do reach new people the message is not getting across, because by now we'd have made an impact. Maybe we have made an impact and I just don't feel it. Confirmation bias? Maybe.

So what can we do? How can we move on to bigger and better conversations? How can we impact the still large part of our industry that believes they can, and do, ensure quality? How can we kill and bury this misnomer once and for all? I wish I had the answer to that, but I don't. I do, however, have some suggestions and some things I have been trying:

Keep talking about it

We need to continue talking about the foolishness of the term Quality Assurance when it is used to refer to Software Testers. It is not only inaccurate, it is irresponsible. A physician would never call themselves a 'Health Assurance Professional'. One dictionary definition of 'doctor' is a qualified practitioner of medicine; likewise, we are qualified practitioners of software testing, investigation, reporting and communication.

Understand what software testing is [and is not]

There are still so many testers out there who are unable to articulate what they do. I interview a lot of testers, and the answers to questions such as 'what is testing?', 'what is the most important skill a tester can have?' or 'what does a tester do?' range from unsatisfactory to completely pitiful. There is a sea of free information about testing out there – read it. And I'm not talking about LinkedIn forums, don't go there. I'm talking about blogs such as Satisfice and Developsense. Start with this post, and beware, you will need to read it more than once. Read it until you understand it, until you internalize it. Then move on to read as many other blog posts on those sites as you can.

Talk to other Software Testing professionals

When I started to really understand what testers are supposed to do, when I began learning about the craft of software testing, I was more than a little confused. The courses and certifications I had done had told me I was put on a team to assure quality. One way to better understand and clear up such doubts is to have a conversation with experienced practitioners. Do your research first, then approach one of them. They have helped me [and still do to this day] to grow in my understanding of our profession, and to get better at articulating concepts, theories and how to practice software testing as a professional. If you don't know how, or whom, to reach out to, contact me.

Do you find yourself having this conversation too? How do you tackle it? Any suggestions on how to stop this madness once and for all?

*BTW, if you haven’t read this post yet, drop everything you are doing and read it. Now. 

Let’s Test – ER on My First Presentation

This is my third post about Let’s Test Conference in Sweden. In the first I talked about my thoughts still at the venue, just after the conference finished. The second was all about the people. This post is my experience report (ER) on my presentation at Let’s Test, more of a reflection really.

This was my first ever presentation. After many anxious nights preparing it, thinking about what it was going to be like, worrying about not doing a good job – it's all over. My first track presentation at a conference is behind me now. But there is still a lot of reflecting I want to do on the experience. This is my first attempt to do so.

Preparing for the journey…

There were a few reasons I couldn't wait to go to Let's Test. Meeting awesome testers I only knew via Twitter and conferring and exchanging ideas with great minds in testing were at the top of the list, but the reason I couldn't stop thinking about the conference (the most dreaded reason) was getting a first presentation under my belt. Even though most of the material was about my own experiences, I spent a lot of time preparing for it, including countless hours on Prezi making it look good – maybe as compensation for the fear of not presenting well. I'm glad I did, as it was one less thing to worry about during the presentation.

I was lucky enough to be paired with an amazing presentation mentor via the A Line in the Ladies Room program: Dawn Haynes. Dawn has many years of experience as a presenter and teacher, so having her help was amazing – especially when last-minute nerves hit me on Sunday after I got to Runo. Thanks Dawn for all your support, it meant the world to me. I had lots of other lovely people supporting me there, like Scott Barber, David Greenless, Lee Hawkins and Martin Hynie to name a few, and many more cheering me on – too many to list here. Although that whole day was much of a blur to me, I really appreciate your support.

The moment of truth arrives…

Of course there had to be technical difficulties just before the start – don’t all first timers have that? My computer went to sleep and somehow lost the connection to the projector and we couldn’t make it work. It was a few minutes after the presentation was meant to start that the projector decided to work again. Phew…

So, after the technical issues, I have to admit I was nervous. Very nervous. That is when James Bach entered the room. Needless to say I was even more nervous then! The pressure of having one of the biggest names in testing at my first presentation was huge*. But there I was and there was nothing much I could do – apart from presenting. I have to say though that jumping out of the window (One Flew Over the Cuckoo's Nest style) did cross my mind more than once.

*Note: The day before the presentation, James was kind enough to ask me if I was OK with him being there, so I had the opportunity to say no. But I thought his feedback would be invaluable, so I decided to say yes, and I'm glad I did. He gave me very kind and constructive feedback, for which I'm very grateful.

After I decided that jumping out wasn't the best idea, I proceeded to present the material I had rehearsed so many times before. I knew what time I should be moving to the next slide so as not to go over time, but on the day I completely forgot to look at the watch until the second-to-last slide. To my horror I realized that the presentation had only gone for 30 minutes, and I had a 45-minute slot – which meant I was going to have to stretch the last two slides for 15 more minutes. Needless to say that didn't happen, so I finished early. Luckily there were lots of questions, so we had a lively discussion for the remainder of the time.

Interestingly enough, I felt much more comfortable during the question time than during the presentation. I think it is because I prefer to interact with people one-on-one, and during open season it felt more personal, as I was directing myself to the one person asking the question. There were other takeaways for me (and I'm sure many more will pop up as I process and think about the experience!). Here are some of them:

Takeaways…

  • Preparation is crucial. Prepare, prepare, prepare – when you think you have prepared enough, put the presentation down for a few days and then prepare some more. Try to anticipate the questions that will be asked and ask yourself if you’d know the answer (thanks Dawn Haynes for the tip)
  • We can be our harshest critics. And I don’t think that is an entirely bad thing
  • You don’t need to know all the answers. But you will need to know the material. Being humble and admitting you don’t know the answer to a question is much better than the alternative
  • You are the expert in your own experiences – don’t be afraid to share them with the authority that you have (thanks James Bach for your feedback)
  • Ask for feedback. Learn from your mistakes. Improve on your qualities. There’s always room to improve.
  • Focus on your motive. Why are you presenting that specific topic? Remember that when you are about to quit, or too nervous, there is a reason you are doing this, and whatever it is, it is worth it! Keep at it.
  • Those who were trying to help me told me it would get better with time – the more presentations you do, the easier it gets. I don't know that yet, but I'll let you know when I do!

Test Art…

And finally, here are sketchnotes from two very talented testers that pretty much summarize my talk better than I could ever do:

Finally, I just wanted to encourage everyone out there that has an idea or an experience (good or bad) worth sharing – to get out there and to do it. I was absolutely out of my comfort zone presenting, and if I can do it, believe me, anyone can! And if you need help, drop me a line, I’d love to help you on your journey.

Let’s Test – It Is All About The People

There are lots of great blog posts about Let's Test 2014 appearing every day, and a common theme among them is how Let's Test is about conferring with other like-minded testers.

I had heard about how much happens in the corridors, over lunch and dinner and in between sessions at Let’s Test. I was wondering if that was really the case and I am happy to report that it is just like that.

People seem to be so relaxed at Let’s Test that sharing ideas, experiences and challenges seems to just happen.

Let's Test prides itself on being a conference for testers by testers, and one of their stickers reads: Because people matter. The whole conference is about getting people to talk. The fact that we are all stuck in a resort far from everything for 3 days helps a lot! There is no escaping – not that anyone would want to anyway. I loved the fact that we had all meals taken care of, no planning required. All we had to do was serve ourselves and chat some more.

A few things I was reminded of during my interactions with other amazing testers at Let’s Test:

  • Everyone has something to learn and something to share. At Let’s Test most attendees are eager to do both! Even the most introverted people are seated around a table talking testing
  • As soon as you start talking about your testing challenges with other testers you realize that you are not alone. Some challenges we face in our industry seem to be epidemic and the more we talk about them, the more we can use the power of the collective to try to solve them
  • You never know when you are going to have your next light bulb moment, so keep the interactions going and keep participating in the community and discussing – serendipitous learning moments will happen when you least expect them!

I’m glad I made it to Let’s Test this year. Thanks to everyone that I had a chance to chat with. And I’m certainly going to be there next year again doing much of the same thing!

When do registrations open for Let’s Test 2015?!

 

Let’s Test – Here, Now and Us

So, Let’s Test 2014 is over. Much like Elvis, almost everyone’s left the building, only a few of us remain as we wait for our respective flights home.

I'm tired, and those who have attended Let's Test before will know why. But, although I'm tired, my mind is on fire going over all the events that took place over the past 3 days. Wow, was it only three days? It definitely feels like more! So instead of resting, I'm here writing this post (the first of many, I'm sure – I couldn't possibly synthesize everything in one post) about conferring the context-driven way.

I arrived here on Sunday (May 25th) and I had a plan: I wanted to live blog some of the sessions I attended (following in Michael Larsen's footsteps), instead of just taking my usual notes on Evernote and keeping them to myself. Why not give others a glimpse of what was going on here – after all, last year it was me sitting at my desk at work, following on Twitter what was going on in Sweden, hoping that people would keep tweeting and blogging about it. This year I was one of the lucky ones here and wanted to pay it forward.

Well, that was the plan until I attended the first workshop of the conference on Monday morning: Steve Smith's on Managing Change: Knowing When and How to Intervene, based on the Satir Change Model. Thanks Martin Hynie for the tip to attend this workshop; there were so many great workshops at the same time, chances are I would have missed this one if you hadn't told me about Steve's presentation-skills magic. One of the first things he did (after making everyone stand up and form groups to introduce themselves) was to write this on the whiteboard:

[Photo of the whiteboard.] Which means: forget about where and how you are going to use this later, or anyone else – you are here, now, with the rest of us, for a reason. Make the most of it. It may not sound like much, but the way Steve delivered it made me re-think my strategy at conferences. I often take lots of notes so I can refer to them later, which I often do. But sometimes I do that to the detriment of actually participating fully in the session and taking it all in.

So, in true context-driven style, after I was presented with this new information I changed my previously designed script and decided to go in the completely opposite direction: I decided not to blog live – or to take any notes, for that matter. I was fully present at Steve's workshop and entirely engaged in the activities instead. It was liberating, and I learned so much about how to navigate through change and how to be better at helping others do the same. I met a ton of amazing testers in that session that I've got a feeling I'll be friends with for a long time.

The workshop was about change and how to cope with it, and I learned lots about that. But I also learned from Steve that being fully present and engaged in whatever you are doing right now can have a major impact on what you experience. I used to think that notes were going to help me remember what I learned, but I realized that by experiencing something fully I'd remember it just as much.

I still plan to share some of the learning I took away from Let's Test – I still want to pay it forward. But I just had to adapt to a much better plan, instead of blindly following the one I had set. Not only do projects unfold over time in ways that are often not predictable, but so do many other things in life!

[Yet Another] 2013 Retrospective

Over the past few days I’ve been inspired by other testers who blogged about their 2013 retrospective. Seems like 2013 was a great year for testers and testing everywhere. This inspiration was all I needed to get my creative writing juices flowing again. I haven’t blogged in ages and you’ll see why below.

APRIL 2013

Critical Thinking for Testers and WeTest Workshop
In April 2013 I made a last-minute decision to fly to Wellington, NZ to attend Michael Bolton's Critical Thinking for Testers Workshop. There are certain courses that do not add much to what you already know; this course was not one of them! I'm glad I made the journey to NZ, as the course was very insightful and deeply impacted the way I test and see things. I was challenged and inspired by Michael. I was also very fortunate to be able to attend the Wellington Testers (WeTest) Workshop, as someone cancelled last minute. WeTest is a LAWST-style meetup where one person presents, followed by facilitated open season. Michael Bolton talked about Regression Obsession, a topic I was very interested in, as back then I was working for a company that suffered from it, and I was trying to change that.

JUNE 2013

WTANZ Revival
After attending a couple of Weekend Testing Americas sessions at the fun hour of 3 AM on Sundays, I was persuaded by Michael Larsen to revive WTANZ, the Australia and New Zealand chapter of Weekend Testing. With Michael's help, and the support of many testers downunder, such as Anne-Marie Charrett, David Greenless, Richard Robinson, Oliver Erlewein and most recently Dean Mackenzie (WTANZ's newest co-facilitator), WTANZ re-started with a bang in June. We've had 6 sessions since June and have a great year planned for 2014! Stay tuned!

JULY 2013

KWST3
I went back to NZ in July for the third Kiwi Workshop on Software Testing (KWST3). The topic was ‘Lighting the way: educating others and ourselves about software testing. Raising a new generation of thinking creative testers’. We all shared our experiences on educating other testers and ourselves. Yet another challenging and inspiring workshop!

AUGUST 2013

OZWST2
Straight after KWST came the second Australian Workshop on Software Testing (OZWST2). I attended the first OZWST in 2012 in Adelaide and was not going to miss the second, especially because it was in Sydney, just down the road from me. We had Rob Sabourin as the content owner and the topic was 'Collaboration in Testing'. In true context-driven style, we changed the format of the workshop a bit to suit the ERs being presented and the attendees' experiences.

Move to USA
Then came the big one: after 16 years, it was time to say goodbye to Australia and move on to the land of opportunity! In August 2013 we moved to Miami Beach, FL, and since then life has been just as chaotic as it has been fun!

Awesome Trinity of Testing
In August all the stars aligned over Madison, WI and three testing events happened back to back, with the main attraction being CAST 2013.

This was my first time ever at CAST. I was expecting it to be great, but it exceeded all my expectations. I met so many of my Tweeps there, people I felt I knew, however all I really knew was their icons on Twitter and a few 140-character-long conversations. It was a surreal experience! To some I had to introduce myself as 'testchick', which was hilarious! Having a large crew from downunder at CAST also made it special. I planned to write a whole blog post about CAST this year but life got in the way. Also, thanks to my mentor and friend Anne-Marie Charrett, I did a lightning talk at CAST about How to Introduce CDT in Factory Environments. It was terrifying, but a challenge I needed to tackle, as public speaking is not my strong suit. Luckily I didn't pass out or make too much of a fool of myself, and I survived the experience with lots of lessons learned.

Before CAST, there was Test Retreat, a 2-day Open Spaces-style conference where the attendees set the agenda. The topic was "Deliberate Practice". And the day after CAST, Test Leadership Camp took place. One lesson I learned is that 3 conferences in a row is probably too much! Especially when they are all collaborative conferences.

Miagi-Do
After the first day of CAST a bunch of us hit the Great Dane pub. There I cornered Michael Larsen and didn't leave him alone until he decided to give me my Miagi-Do entry challenge. It was not the first time I had asked – Michael is a busy man and I had asked many times before – however I wasn't going to miss this opportunity! And it paid off: after doing the lightsaber challenge I did well enough to enter this not-so-secret society.

NOVEMBER 2013

New Country, New Job
In November I started a new challenge: I took a job with SmartBear, down in the Florida office. The first few months are always a massive learning curve, so I'm still waiting to feel like I can keep my head above water – but you know how it goes in testing, that may never really happen!

DECEMBER 2013

Workshop on Self-Education in Software Testing (WHOSE)
And to close the year off I attended WHOSE in Cleveland, OH. This was by far the most challenging and mentally exhausting workshop I’ve ever attended. The aim of the workshop was to create a skills inventory for context-driven testers with the purpose of promoting self-education in testing. We covered a lot of ground in 2.5 days, we worked really hard! And every single one of us left with homework to be completed in the next few months. Watch this space, there’s something really cool being put together.

Phew, that is it, 2013 in a nutshell.

Wishing anyone that gets to read this a fabulous 2014!

Weekend Testing ANZ is Back!

Earlier today Weekend Testing ANZ was re-ignited. A global group of 19 testers came together to test, share and learn. It was a truly amazing experience.

It was my first time facilitating a Weekend Testing session, so I was nervous to say the least. I was unsure if anyone was going to turn up at all, and if anyone did, whether the session was going to be productive and worthwhile. It was at the forefront of my mind that the session happens on the weekend, when people have lots of things to do and places to be, and they were all choosing to be here – so the least I could do was put on a fun session for those who chose to give up 2 hours of their weekend.

A lot of planning went into it. I started thinking about the mission, charters and applications over a month in advance. I had the help of many experienced testers in the planning stages, which helped tremendously. Special mention to Michael Larsen, the founder of Weekend Testing Americas, who is experienced in facilitating Weekend Testing sessions and guided me through the whole process.

There were many others who provided encouragement and support, and there were a lot of people tweeting about the session, which helped the attendance too! Thanks to everyone who, one way or another, helped make this session happen!

ANATOMY OF A WT SESSION

For those who don't know what Weekend Testing is, it is a global community of testers that gets together over Skype, mainly on weekends, to test, practice exploratory testing, share their knowledge and learn from others. The typical structure of a Weekend Testing session is as follows:

  • 5 min introduction
  • 10 min setting the mission and charter(s)
  • 50 min testing
  • 50 min debriefing

During the Testing stage, participants follow the mission and charter and test the application. Testing can be done individually or in pairs in separate chat sessions over Skype. Questions are asked, defects are reported and discussions begin to emerge.

At the Debriefing stage, participants discuss their findings with the mission/charter in mind. 

For more information and to find out about the next sessions, check Weekend Testing's website and follow @WTANZ_, @WTAmericas and @WeekendTesting on Twitter.

WTANZ 13

For this session the mission/charter was testing www.mail.com using Oracles. As we didn't have test cases, formal requirements or any other information deemed necessary by some to perform testing, the focus was on using Oracles to help identify issues and guide the testing.

As Michael Bolton defines it, "an oracle is a way of recognising a problem", and oracles, "like all heuristics, are fallible and context-dependent; to be applied, not followed". Interesting discussions emerged on this topic, and participants highlighted that combining oracles when trying to put a case forward (i.e. bug advocacy) could help make a stronger point.

The role of bias was also discussed in the context of using Oracles, as was using persona testing in conjunction with Oracles.
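To make the idea of a fallible oracle concrete, here is a minimal, hypothetical sketch in Java (the language I happen to be brushing up on). It compares a crude "what I'd expect an email field to accept" heuristic against behaviour observed while exploring a signup form. The class name, the regex and the recorded observations are all made up for illustration – they are not from the actual session.

import java.util.List;
import java.util.regex.Pattern;

// A tiny illustration of an oracle as "a way of recognising a problem".
// The reference here is a deliberately crude regex: it is itself a fallible heuristic,
// so a mismatch is a prompt to investigate, not proof of a bug.
public class SignupEmailOracle {

    // Rough expectation of what a signup form should accept as an email address.
    private static final Pattern ROUGH_EMAIL = Pattern.compile("^[^@\\s]+@[^@\\s]+\\.[^@\\s]+$");

    static boolean oracleSaysValid(String address) {
        return ROUGH_EMAIL.matcher(address).matches();
    }

    // Observations recorded while exploring the signup form (illustrative values only).
    record Observation(String input, boolean siteAccepted) {}

    public static void main(String[] args) {
        List<Observation> observations = List.of(
                new Observation("tester@example.com", true),
                new Observation("tester@@example.com", true),   // suspicious: double @ accepted
                new Observation("tester@example", false));

        for (Observation o : observations) {
            if (oracleSaysValid(o.input()) != o.siteAccepted()) {
                System.out.println("Worth investigating: " + o.input());
            }
        }
    }
}

The mismatch on the second entry doesn't prove a defect – the regex could just as easily be the thing that is wrong – which is exactly what "fallible and context-dependent" means in practice.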

The strengths and weaknesses of the application were mentioned, and many participants showed frustration with the website’s UI, especially during account creation. There was also confusion as to the application’s purpose – was it a news aggregator or a mail client? The website’s documentation didn’t offer much help. 

The transcript and more details about this session can be found here.

LEARNINGS

Nervousness aside, I had fun at the session. I have learned that no matter how much you prepare in advance, these sessions are dynamic and we cannot always foresee every issue that could happen (e.g. the technical issues I had just before the session was about to start). It is good to plan, but we need to be prepared for, and embrace, the unexpected. And that goes for other testing too. That is another reason why meticulously preparing extremely detailed test cases ahead of time can be a futile exercise: only when we sit in front of the application, with mouse and keyboard at hand, will we really know what needs to be tested. We cannot foresee many of the scenarios to be tested in advance.

I also love these sessions because they are a place where testers are not judged and are free to make mistakes, to ask questions and to learn by experience. Everyone in these sessions is eager to learn and share, from the least to the most experienced. And every time I join in I learn something new – that is the beauty of Weekend Testing!

Stop Ignorance in Testing

Scripted testing… There are so many things wrong with it, it's hard to know where to begin. Many test experts and prolific bloggers have written about this theme, so there is an abundance of literature out there on topics such as scripted testing vs. exploratory testing.

The highest form of ignorance is when you reject something you don't know anything about. – Wayne Dyer

Scripted testing is like an old wives' tale, an urban legend, something that gets passed down from one generation of testers to the next, propagated and perpetuated by the ignorance of testers everywhere. I was once an ignorant tester myself, who like many others learned from another unenlightened tester that the way to test is to write countless test scripts with detailed steps, to be diligently reported upon daily – and you get a gold star if you execute the pre-set quota of test cases for the day.

In this day and age however, there is no excuse to be uninformed about anything, which is why it shocks me how many testers still don’t know there are alternatives to scripted testing, especially here in Sydney.

This week I had coffee with a 'seasoned' tester, and the conversation about exploratory testing came up. This tester had heard about it but, like many others, had assumed that it was just ad hoc, free-for-all testing. So I started asking them questions about the validity and pitfalls of scripted testing. I find that this topic comes up very often in conversations with other testers, and I feel I'm repeating myself constantly, as I never pass up the opportunity to talk about context-driven testing, exploratory testing, and other fun topics like that!

Like I said at the beginning, there is plenty of information on the web about why scripted testing is not the way to better testing. Below is a list I had on Evernote, which I usually send in an email to testers who are interested in learning more about ET:

Every time I have an opportunity, I attempt to stop ignorance in testing. I chat with and challenge testers who are still on the dark side. The list proposed above is just a starting point for testers interested in learning the craft of testing, for those looking for alternatives to what they know, deep down, is fake testing!

Should Testers Learn to Code?

I attended CITCON Sydney 2013 this last weekend. Let me start by saying that if you haven't attended one, you don't know what you are missing out on! These conferences are held all around the world and they are a great forum for exchanging ideas. If there is one happening near you, I strongly suggest you attend!

The creators describe CITCON as:

The Continuous Integration and Testing Conference, a world-wide series of free Open Spaces events for developer-testers, tester-developers and anyone else with an interest in Continuous Integration and the type of Testing that goes along with it.

[Photos: CITCON 2013 and CITCON 2013 Sessions]

In a nutshell, you show up on a Friday night, and after a brief introduction to how CITCON works (by the way, it is pronounced ‘kit-con’!) everyone present has the opportunity to suggest topics. After the topics have been suggested by all interested in doing so, we all vote for topics we’d like to attend. Hence the agenda for the conference is set by the attendees. What a genius idea!

Should testers learn to code?

In one of the sessions I attended, the topic of programmers learning to be better testers came up. As far as I could tell, the room had more testers than developers – I'd say a 2:1 ratio. Hence, when this subject came up no objections were raised, and almost everyone agreed that programmers could be better at what they do if they became better testers (or learned to think a little more like testers). However, when the reverse argument popped up it was a different story altogether.

It was interesting to see how testers reacted to the comment that we could become better testers by learning to code. There was an interesting debate among the testers present; some were in favor, others were not.

As a tester who knows a bit about coding, I find that after I learned to program I was able to find different types of bugs, to communicate in more meaningful ways with developers and to automate a few repetitive tests for myself and my team. In my opinion the time I spent learning how to code was time well spent.

During that CITCON workshop, someone raised a very valid point: we are all in this together, and we (meaning testers and developers) should have the bigger picture in mind (i.e. the product we are delivering, the customer and the users). Therefore it's not only important for developers to learn better testing skills, it is just as important for testers to learn coding skills.

It is intriguing to me that the developers in the room seemed OK with the fact that they should learn testing skills, however the same was not true for all the testers. I don't want to generalise here – the sample from both groups in the room was extremely small – nonetheless this is not the first time I have heard this argument. And I believe there are good arguments going both ways.

Some good testers cannot code

I once asked Elisabeth Hendrickson this very question and she said that she does not believe every tester should know how to code. Elisabeth wrote an interesting post about this topic, and in it she expressed her view that there are very good testers out there who cannot code:

Testers who specialize in exploratory testing bring a different and extremely valuable set of skills to the party. Good testers have critical thinking, analytical, and investigative skills. They understand risk and have a deep understanding where bugs tend to hide. They have excellent communication skills. Most good testers have some measure of technical skill such as system administration, databases, networks, etc. that lends itself to gray box testing. But some of the very best testers I’ve worked with could not have coded their way out of a For Loop

So, like anything else in context-driven testing, it depends!

Not all testers should HAVE to code

Now, let me clarify a very important point. I am not a developer, never have been one, nor do I want to be. I know how to code well enough to help me be a better tester, however that does not make me a developer by any stretch of the imagination. There is an important distinction between testers having to code and testers knowing how to code. Clearly, not all testers should have to code.

However, in my experience, knowing a programming language helps a tester get a better perspective when testing. It helps facilitate dialog between testers and developers and it comes in handy more often than not. Michael Bolton's post At Least Three Good Reasons for Testers to Learn to Program gives three: tooling, insight and humility. It provides compelling arguments and is worth reading.

Not all testers enjoy coding

I am a tester, and I enjoy the challenge and the craft of testing. I learned a bit of Java and C# at uni, and since then I have decided to brush up my skills; I'm currently working towards a Java certification. I have been able to use the skills I'm learning to simplify testing tasks, such as writing Selenium scripts for data creation and regression checking, like the sketch below.
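As an illustration of what I mean, here is a stripped-down sketch of that kind of data-creation script. The URL, field names and values are entirely hypothetical – a real script would point at whatever application you are testing – and it assumes the Selenium Java bindings and a chromedriver on the PATH.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

// Hypothetical sketch: seed a handful of test accounts before an exploratory session,
// so the interesting testing time isn't spent on repetitive data entry.
public class CreateTestAccounts {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();   // assumes chromedriver is on the PATH
        try {
            for (int i = 1; i <= 5; i++) {
                driver.get("https://test.example.com/signup");      // made-up URL
                driver.findElement(By.name("username")).sendKeys("tester" + i);
                driver.findElement(By.name("email")).sendKeys("tester" + i + "@example.com");
                driver.findElement(By.name("password")).sendKeys("S3cret!pass" + i);
                driver.findElement(By.cssSelector("button[type='submit']")).click();
            }
        } finally {
            driver.quit();   // always close the browser, even if a step fails
        }
    }
}

Nothing fancy – but it is the difference between starting a session with the data you need and spending the first half hour typing into forms.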

I enjoy coding, though. An argument I have heard is that not all testers enjoy coding, and that is OK. Testing skills and programming skills are very different. However, the question here is: just as programmers who learn better testing skills become better programmers and better team members, wouldn't the same be true for testers who know how to code?

Of course, as with anything in testing, it depends. It depends on the person, on the type of testing they are doing, on the application they are testing.

Increased employability 

Finally, as Elisabeth Hendrickson explains in her post, 79% of testing jobs advertised in the USA require some form of programming skills, from basic to advanced. Up to 90% of testing jobs in agile environments require the same skills. Elisabeth concludes that at a minimum professional testers should know SQL.

Inevitably, testers who wish to remain current and competitive in the international job market will need to learn at least the basics of a programming language, whether they like it or not.

Look at race car drivers, for example. They are not required to know mechanics or engineering, however you'd be hard pressed to find a race car driver who does not know the basics of mechanics and who is not interested in how their car works. They do not need to be fully qualified mechanics, but they know they are the ones racing the car, and sooner or later they'll need to troubleshoot issues they encounter on the racetrack. The more they know about engines, the earlier they can detect issues and relay them to the mechanics team. In the same way, testers do not need to know how to code, and there are great testers out there who don't. However, software is our craft, it is our race car's engine, and if we don't know the first thing about how it runs, we could be missing an opportunity to become even better at what we do. And if there are good testers who don't know how to code, imagine how much better they could be if they did.

When More is Less

I often find myself having the same frustrating conversation about test resources, often with different people, but sadly sometimes with the same person.

If you are a seasoned tester, I'm sure you have come across this situation before. You and your team start test execution, and a few weeks in, mass panic hits the project and everyone tries to offer a solution on how to improve the 'efficiency' of test execution.

Although most people have the best of intentions, the constant ‘are we there yet’ question is not really helpful.

Far too often development gets delayed and testing gets squeezed as the delivery date remains unchanged. I find that the closer we are to the finish line, the more people ask if they can do anything to help get testing over the line. That is when the question 'can we add more testers to help execution?' starts getting asked.

I find this line of questioning troublesome, and there are a number of fallacies that need to be addressed when answering such questions. I'll address two in this post:

FALLACY #1 – ANYONE CAN TEST

A person who asks this type of question often does not have the slightest understanding of what testing is really about. They have this idea that 'anyone can test' and that testing is just a matter of following a set of steps – hence the more people we add to the execution phase, the better, right? Wrong! Oh so wrong.

I'm not going to get into the argument that good testing is so much more than following a set of steps, as I believe this topic has been covered in depth in the blogs of several prolific testers and industry leaders. All thinking testers agree that this is not the case – that testing is in fact an intellectual process, not a mechanical, repetitive one. However, the 'anyone can test' theory is not the only issue with the 'let's add more testers' proposal.

FALLACY #2 – MORE IS ALWAYS MORE

There is a fundamental principle in economics called the law of diminishing returns. In a nutshell, it states that adding more people to a given task will at first increase productivity; however (all else being equal) it gets to a point where each additional person adds less and less, and eventually adding more people makes the task less productive.

“Consider a factory that employs laborers to produce its product. If all other factors of production remain constant, at some point each additional laborer will provide less output than the previous laborer. At this point, each additional employee provides less and less return. If new employees are constantly added, the plant will eventually become so crowded that additional workers actually decrease the efficiency of the other workers, decreasing the production of the factory.”
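To put some (entirely made-up) numbers on it, here is a toy model in Java: each additional tester contributes a little less raw output than the previous one, while coordination overhead – meetings, duplicated effort, training – grows with the size of the team. The figures are illustrative only, not data from any real project.

// Toy model of the law of diminishing returns, with made-up numbers.
public class DiminishingReturns {

    static double teamThroughput(int testers) {
        double output = 0;
        for (int i = 1; i <= testers; i++) {
            output += 10.0 / i;                             // each extra tester adds less than the last
        }
        double coordinationCost = 0.15 * testers * testers; // overhead grows faster than the team
        return output - coordinationCost;
    }

    public static void main(String[] args) {
        for (int team = 1; team <= 12; team++) {
            System.out.printf("%2d testers -> throughput %.1f%n", team, teamThroughput(team));
        }
        // Throughput climbs at first, flattens around six testers, then falls as overhead dominates.
    }
}

In this toy model the team is more productive with six testers than with twelve – which is the point I usually end up trying to make in those 'can we just add more testers?' conversations.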

The concept is quite simple and in testing it manifests itself in many different ways:

Experienced testers become trainers – and are taken away from hands-on testing as new testers are added to the team. The productivity of these highly important team members is impacted, affecting the productivity of the entire team.

Duplication of effort – for example, when too many testers are testing the same functionality, the same defect could be raised more than once. Or if an area of the system is blocked due to a defect, many testers are impacted.

Surrogate testers – another problem is that often these ‘testers’ are not testers at all. They are other project members (business analysts, developers) pulled out of their own day-jobs to execute a set of test cases. Some of these team members have no interest in becoming a tester, and therefore do not have the mindset necessary to conduct good testing, which brings me back to the anyone can test fallacy. It is sort of a cycle.

[Graph: the law of diminishing returns]

There are, however, some instances where more testers will improve the throughput and productivity of the test team. As with anything in testing, the right answer will depend on the context of the project and the team. After analysing my team's structure and the project's status, if adding more testers would be more of a hindrance than a help, I often use the arguments above to initiate discussions.

Metric Madness

It's almost Christmas. I live and work in Sydney, and this time of the year is pure madness around here. I'm guessing it is madness everywhere in the developed world, where the meaning of Christmas seems to be purchasing as much stuff as you can, as close to Christmas day as possible. People seem so desperate – is it just me, or does it feel like the world is coming to an end? I wish I could blame the Mayans, but every Christmas is the same, not only this one. The spirit and meaning of Christmas is totally lost in the madness.

This insane time of the year reminds me of metric madness, or defect madness. Every tester has been through it at least once, but more likely several times. It is when testing becomes a numbers game, when it loses its core purpose and value. When it is all about the number of defects you raise, the number of test cases executed, the number of reports sent. Metric madness happens when testing is no longer about reporting on quality or finding important information about the software, when it is all about the numbers.

Seeing developers and testers spend hours classifying defects, discussing whether a defect is a SIT or UAT defect, or whether it is really a severity 2 and not a severity 3, almost brings me to tears. If all that effort and time were spent on exploring, learning about the system and providing feedback, wouldn't we deliver better software, or at least better information to decision makers?

Or all that time wasted trying to ensure we execute the 'correct' number of test cases each day, to show that we are on track. Testers who take their craft seriously know that these numbers mean little, that numbers can be manipulated and misunderstood and, worse, that they can mean different things to different people.

I am in the midst of a project going mad with metrics. I'm trying to change it, one report at a time – however, it is not an easy process. To change people's minds, to explain the obvious – that test case execution percentage doesn't mean much – is a long, arduous process. However, as a context-driven tester, that is the only path that is worthwhile, productive and, above all, ethical.

“Metrics that are not valid are dangerous”

And on top of that, metric madness reaches a boiling point when both testers' and developers' output is measured by these same metrics (such as the number of defects raised). It is not only insanity, it is unproductive: it destroys teams, creating huge chasms between them.

As I write this I wonder: how can something as plain as the nose on one's face, a truth that is so obvious to some, be so hard for others to understand? How can that be? One thing I know is that us 'sapient' testers out there have an obligation to work towards stopping the madness. The real question is: how can we do that faster and more effectively? Well, I reckon that will have to be the topic for another post!

[Dilbert cartoon]

If you would like to read more about testing metrics, these two blogs are a great place to start: Why pass vs. fail rates are unethical and Metrics, Ethics & Context-Driven Testing.