8 Comments

That's a really interesting topic, Alessandra, thank you for bringing it up.

I think there are many opinions here and many very different experiences. I have definitely seen a lot of crappy products being tested by decimated testing teams. And I've seen high-quality software that was never touched by a professional.

My personal background is that I started my career (as a developer) surrounded by testers. We had major quality issues. The testers were a mixture of very skilled engineers and some who could just click around and write tickets. There, I think the majority of bugs were actually known by the test team, but it was a managerial decision to sweep them under the rug or downplay them. After that, for many, many years, I was involved mostly in startups where we couldn't afford dedicated testers. And the quality was always quite good.

I think, in the majority of cases, there's a causality vs. correlation issue. Teams that have testers and teams that underperform intersect quite a lot, but that doesn't mean the testers caused it.

Oftentimes, the decisive factor is not testing vs. not testing. It's social dynamics. People either care and are careful, or quite the opposite. If they don't care, you could hire testers, but people still won't care.

With the rise of concepts like the DevOps methodology, more emphasis is put on engineering teams running their own code in production. The same can be applied to testing. Ensuring quality IS part of development. A testing team can improve quality or make testing more structured. But it does NOT mean you are no longer responsible for the quality of what gets released.


Also in industries where the regulatory and compliance burden is huge, like telecom and banking. Without domain and specification knowledge, it's difficult for any developer to test the application. The integration challenge is also huge in telecom: multiple products from different companies are integrated and tested together.


Yes, I've worked in banking, finance, trading systems, insurance industries and all of those require a high level of subject matter expertise. And because they are all regulated industries, testers must also know how to properly document their testing due to regular auditing - so yes, in these scenarios, testers are often needed.


I think modern teams need a testing mindset more than anything else. However, it does depend on the org. I currently have two teams: one works on a legacy enterprise product with two monthly release cycles and has two testers on it; the other works on a modern front end with CI/CD. Guess which one has fewer bugs? It’s probably relatively equal, because the team without testers does try to have a testing mindset and do the things mentioned in the article. The wrinkle is that the team without testers works in a shared repository where other teams may touch their work. So if those teams don’t have a testing mindset… lots of fun things happen!

For clarity: by "testing mindset" I mean validation and verification. Teams without testers validate/verify a much smaller subset than they should, in my experience. I wonder if there is a way to improve that via a red team/blue team style psychology 🤔
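To make the validation/verification distinction above concrete, here is a minimal sketch in Python. The function and its spec are entirely hypothetical, purely for illustration: verification checks the code against its stated spec, while a red-team-style probe validates behavior on inputs the spec never mentioned.

```python
def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount to a price (hypothetical spec)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Verification: does the code match its specification?
assert apply_discount(100.0, 20) == 80.0
assert apply_discount(19.99, 0) == 19.99

# Validation, red-team style: probe an input the spec never mentioned.
# A team without a testing mindset often skips exactly this kind of check.
try:
    apply_discount(100.0, 150)  # hostile input
except ValueError:
    pass  # the boundary check holds
```

The point is not the discount logic itself but the two distinct questions being asked of it: "did we build it right?" (verification) and "does it hold up against what the spec forgot?" (validation).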


Hi Laurence, it’s great to hear from you! It sounds like you have your hands full and are navigating a broad spectrum of testing challenges. The two types of teams you describe sit at opposite ends of the testing challenges scale—if such a scale existed (and now I’m tempted to create one! 😄).

You’re absolutely right that validation and verification are both critical aspects of testing, and in teams without dedicated testers, verification often gets deprioritized, either due to time constraints or a lack of expertise in structured test design.

I really love the idea of applying a red team/blue team approach to testing in teams without dedicated testers. It’s a great way to not only drive quality but also build awareness of how one team’s work impacts others. This kind of mindset shift is what separates high-performing engineering organizations from those constantly firefighting.

Looking forward to hearing more about your experiences!


I would like to disagree.

Specifically, I'd say that most teams don't need a dedicated tester, even if they operate in a high-risk environment where mistakes can be catastrophic. In fact, I'd argue that in those cases especially you don't need a dedicated tester.

It's not that developers "slack off", it's that they think their work is done way earlier than it should be. In addition, since testing is someone else's problem, they are less sensitive to testability issues. A dedicated tester will also likely mean a crooked pipeline, where each feature (or a bug fix) can't be merged before being blessed by the touch of a tester.

Let's face it: most of the work currently done by testers is simple validation, without a lot of time to push the boundaries, since most software leaves the hands of developers in a highly immature state. All this talk about critical thinking is really nice, but even as a capable (I hope) tester, most of my time is spent doing simple housecleaning (or worse, dealing with office politics born out of this separation). Do we need a critical-thinking expert with a keen eye to critique our work? Sure, to the same extent that we need a software architect: to shine light on the points missed and to help come up with a strategy in complex cases. Other than those parts? We'd be better off having this task handled by the same people creating the product.

And still, I said "most". Which teams need a dedicated tester? I can think of the following cases:

1. When the culture isn't mature enough. When we don't have that know-how in our team and we need someone to help us transition to a better place. Heck, this is where I've been spending my last few years.

2. When testing requires a shit-ton of domain knowledge, to the extent that it's not reasonable to expect professional coders to understand it well enough. In such cases, our testers will actually be experts in the relevant field. Examples? Well, I once interviewed a person whose job was to "test" a radar system, using the full skill set gained from a master's degree in physics. I'd imagine that some medical applications might need that kind of scrutiny as well.


Hi Amit,

You say that "developers don’t slack off," but then state that "they think their work is done way earlier than it should be" and that "since testing is someone else's problem, they are less sensitive to testability issues." That’s exactly the issue—without shared ownership of quality, developers disengage from critical aspects of the software lifecycle, leading to preventable failures. The presence of a dedicated tester doesn’t create a “crooked pipeline”; what creates inefficiency is a team that lacks a quality-first mindset, where testing is an afterthought rather than an integral part of the development process.

Regarding your claim that "most of the work currently done by testers is simple validation," I have to ask—how many teams, organizations, and industries have you observed this in? Having worked with testers and engineering teams for over 20 years, I can confidently say that while some testing work involves validation (just as some development work involves routine coding tasks), great testers operate far beyond that. They enable teams to build better software by driving testability, designing intelligent automation strategies, and ensuring risk-based exploratory testing happens at the right level. They work alongside engineers—not downstream from them—to refine processes, optimize CI/CD, and embed testing into every stage of development.

I'm sorry your experience with testing has been limited to "simple housecleaning or office politics"; that’s a sign of a deeply flawed quality culture, not an inherent issue with testers themselves. The modern testing profession has evolved significantly—leaders in this space are integrating testing into development in ways that reduce risk, increase efficiency, and accelerate delivery. If you’re not seeing that, I’d encourage you to engage with the broader testing community, where best practices are continuously being redefined to align with high-performing engineering teams.

And I agree that in industries requiring deep subject matter expertise (medical, aerospace, radar systems, etc.), dedicated testers can be essential. The real question isn’t whether teams “need” a dedicated tester—it’s whether teams truly understand the role of testing in delivering reliable, scalable, high-quality software.


Hi,

I'd say that "shared ownership" is a great rallying cry, but otherwise irrelevant. Even when there's a high degree of collaboration, the whole point of having roles is to make sure people only need to worry about a smaller chunk of the problem: some create the software, others create and maintain documentation, others build a roadmap. We can call them developers, tech writers, and product managers, or not. But when we do, even when everyone helps when needed, it's not the same as "everyone is doing it". Testing, in this capacity, is special: it's needed everywhere, and when it is happening, it doesn't happen in a separate place, and its output is rather invisible (feedback, a faster process, fewer surprises and problems, all easy to miss and hard to define). Having a dedicated tester in each team begs the question "when should the work go through those chaps?" It's very difficult to give the right answer (it shouldn't; consult them while you do your own work). Just as having a central testing team that services many groups creates bottlenecks, so does having a dedicated tester in a team, just to a lesser extent.

As for my observation: I have seen, and been part of, all of the effects you talk about, and it still holds that most of the grunt work needs to happen and takes the majority of the time. The worse the culture is, the more time those simple validations take. Even in good places, where you don't spend your whole time stumbling on basic functionality being broken, validation must still happen, and it is time-consuming, whether we code the validation or execute it as an attended test. And yes, it's just like the routine coding work for developers, only with a lower entry bar, which means we don't need specialists to do it.
