
Why testers and developers don’t trust each other (and how it shows up in production)

Quality rarely fails because people don’t care. It fails when ownership is fragmented – when quality is treated as everyone’s job, but no one’s responsibility. 

Quality is a team sport, not an end-phase.  

In high-performing quality organisations, trust between testers and developers isn’t left to chance. It’s built into how teams work across business, development, testing, and operations. This is anchored by a shared quality strategy that defines what “good” looks like across the organisation, not team by team. 

The defining trait of uneven quality environments is this: effort exists everywhere, accountability exists nowhere. 

Well-defined cross-functional collaboration can prevent issues early, before the release train is moving. The business clarifies what “good” looks like in customer terms. Developers build in smaller slices and validate behaviour as they go. Testers shape scenarios during refinement, so coverage is aimed at the risky paths, not guesswork. Operations brings production reality into the conversation early: monitoring, alerting, rollback, and what failure looks like under real traffic and real data. Feedback loops exist to continually improve the performance of teams and move the organisation forward.

Without this shared ownership model, teams default to role-based optimisation – speed here, coverage there, stability somewhere else – and quality fractures along organisational lines. 

When this alignment exists, teams aren’t guessing how to prioritise quality; they’re executing against a common intent.

This isn’t “culture work” for its own sake. It links to measurable delivery outcomes. DORA’s research has long used throughput and stability metrics such as change failure rate and time to restore service. When those stability measures improve, teams usually have fewer last-minute surprises because collaboration across roles is tighter and feedback arrives earlier. 

In practice, DORA’s stability metrics improve not because teams test more, but because quality decisions are shared between business, developers, testers, and operations. These teams work as one, aligning early on risk, expectations, and ownership. 

In Australia, the stakes are often higher because customers expect digital services to work first time, every time. Many industries also operate under strict operational risk expectations. For APRA-regulated entities (banking, insurance, super), CPS 230 commenced on 1 July 2025 – lifting focus on resilience and operational risk management. This reinforces the need for a clear, shared quality strategy that aligns delivery teams to business risk, not just technical completeness. 

That makes “quality as a team outcome” a business requirement, not an engineering preference. 

A few signals you’re operating in this mode: 

  • Risk is discussed early in plain language the business understands. 
  • Testers and developers agree on what “done” means through shared conversations early in delivery, creating a common understanding of risk and expectations before code merges. Those expectations are derived from an agreed quality strategy, not local convention. 
  • Operations isn’t a downstream recipient. It’s part of release readiness. 

 

The handoff habit that turns delivery into “throw it over the fence” 

Poor collaboration creates siloed responsibilities, misaligned expectations, and the “throw it over the fence” mindset. It rarely starts with conflict. It starts with how work is structured – when strategy is absent or unclear, teams are left to fill the gaps themselves. Part of this can stem from work practices that grant teams full autonomy to work as they please. While that autonomy can be satisfying tactically, at scale it produces mismatches and friction between teams.

The “throw it over the fence” mindset is not a cultural flaw; it’s a structural outcome of unclear ownership. When responsibility for quality is split across roles without shared accountability, handoffs become the default operating model. A mindset of “once it’s out of the team, it’s someone else’s problem” develops, and overall quality suffers.

When work is organised as a handoff chain, each group can feel like they’re doing the right thing. Business writes requirements and expects delivery to match intent. Development builds to an interpretation of those requirements. QA receives work late and is expected to validate under deadline pressure. Operations is told a release is coming, rather than involved in making it safe. 

Each step makes sense in isolation. Collectively, they create uneven risk coverage and inconsistent outcomes. 

Then the language changes. People stop saying “we” and start saying “they”. 

You’ll recognise the lines: 

  • “Dev is done, it’s with QA now.” 
  • “QA is blocking release.” 
  • “It passed in test, so it’s not our issue.” 
  • “Ops will deal with it if anything breaks.”

That’s the handoff habit. It creates friction because responsibility is split from control. It also pushes discovery late.  

This is where uneven quality becomes expensive. Issues aren’t prevented, they’re discovered late, debated under pressure, and paid for repeatedly across rework, delays, and incidents. 

It also exposes the absence of a shared quality strategy – there is no agreed view of where risk should be reduced, so it gets debated late instead. Late discovery is what turns normal defects into trust problems, because the team is forced to debate risk when there’s no time to respond calmly. Teams begin to react in haste rather than respond in collaboration.

This is where a metric lens helps. DORA defines change failure rate as the proportion of deployments that require intervention after release, such as a rollback or hotfix. In “throw it over the fence” environments, this failure rate rises – not because teams lack effort, but because risk is managed late and in isolation, and quality is applied unevenly rather than strategically. 
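
As a minimal sketch only (not DORA’s tooling – the Deployment record, needed_remediation flag, and change_failure_rate helper below are hypothetical names, assuming each release can be tagged with whether it needed a rollback or hotfix), the metric is simply the share of deployments that required remediation:

  from dataclasses import dataclass

  @dataclass
  class Deployment:
      release_id: str
      needed_remediation: bool  # True if a rollback, hotfix, or patch followed the release

  def change_failure_rate(deployments: list[Deployment]) -> float:
      """Proportion of deployments that required intervention after release."""
      if not deployments:
          return 0.0
      failed = sum(1 for d in deployments if d.needed_remediation)
      return failed / len(deployments)

  # Hypothetical month: 2 of 10 releases needed a rollback or hotfix -> 20%
  history = [Deployment(f"rel-{i}", needed_remediation=i in (3, 7)) for i in range(10)]
  print(f"Change failure rate: {change_failure_rate(history):.0%}")

The point of the sketch is that the number only moves when fewer releases need remediation – which, as above, comes from earlier shared risk decisions, not from more end-of-cycle testing.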

What low maturity looks like when you’re living it

Inconsistent delivery 

Some releases feel fine. Others turn into a scramble. Releases increasingly have to be backed out, or ‘deployed dark’ with unfinished features toggled off in production. Testing windows shrink. Defects pile up late. Retesting is rushed. Scope gets cut at the end. Dates move, and confidence drops. 

Developers feel it as churn: rework, context switching, and last-minute merge pressure. Testers feel it as responsibility without time: you’re expected to provide confidence when the system gives you no room to create it. The business feels it as forecasting pain: commitments become hard to trust, and every release needs extra reassurance. 

Knowledge gaps 

Knowledge gaps form when the right people aren’t in the same conversation early enough. 

Testers don’t always get the “why”, so they test broadly rather than targeting the paths that matter most to customers. Developers don’t always see operational constraints, production data patterns, or integration behaviour, so features behave well in test but fail under load or with real data. 

The symptoms are predictable: 

  • Testers are not involved in the design stage.
  • Defects are raised without enough context to prioritise quickly. 
  • Fixes land without shared understanding of what else might be impacted. 
  • The same issues repeat because root causes aren’t owned end-to-end. 

Adversarial relationships 

Once pressure is constant and context is fragmented, defect conversations can feel personal even when no one intends that. 

Developers experience late defect discovery as disruption. Testers experience late involvement as being set up to fail. Operations experiences incident load as risk being pushed downstream. The business experiences late defect discovery as increased cost and decreased confidence in the process. At that point, trust isn’t lost because people are unreasonable. Trust is lost because the system produces repeated surprises.

The repeatable behaviours that quietly break trust

Most organisations don’t choose mistrust. They create it through a small set of repeatable behaviours. 

QA is involved late

When testers first see work after it hits a test environment, they become the messenger of bad news. They find issues when the cost of change is highest and the timeline is tightest. Developers feel ambushed. Testers feel stuck validating risk they didn’t help manage, and forced into the role of gatekeeper for production.

Communication channels are unclear

If decisions live in private messages, scattered notes, and inconsistent tools, context fragments. Defects become debates because people aren’t aligned on what “done” means, what the acceptance criteria protect, and which risks are acceptable. Reporting and metrics are difficult to produce and lack context, as there is no clear source of truth for progress and status.  

Shared ownership of quality is resisted

If “quality” belongs to QA, development is pushed toward output and QA is pushed into gatekeeping. That creates role conflict. 

Shared ownership changes the tone. It means testers and developers agree on risk and “done” early. It also means automation is treated as a team asset, not a separate stream. You see fewer debates about whether a defect is “valid” and more focus on what it means for customer outcomes and release confidence. 

Cross-functional ceremonies are missing or hollow

Ceremonies only help if they produce shared understanding. 

When testers aren’t in refinement, risk thinking arrives late. When developers aren’t part of triage, defects become “someone else’s problem”. When ops aren’t in release planning, monitoring and rollback are afterthoughts. 

A small number of high-quality rituals prevent a lot of rework: refinement that includes test scenarios and edge cases, triage focused on impact and patterns, and release planning that covers operational readiness, not just deployment steps.

High-performing teams invert this model. Quality is embedded into collaboration, not inspected at the end. QA is involved early, accountability is shared, and cross-functional ceremonies are used to surface risk before it materialises – not to assign blame after it does. 

The production bill: rework, slower cycle time, and releases nobody trusts 

The true cost of uneven collaboration is rarely visible immediately. It accumulates quietly through rework, elongated cycles, and incidents that feel avoidable in hindsight. 

A poor quality culture increases rework, prolongs cycle time, and decreases trust in released software through increased production incidents. 

The mechanism is consistent. Late discovery drives rework. Rework displaces planned work. Planned work slips. Pressure rises. Shortcuts creep in. Incidents increase. The next release starts under more pressure, with less trust. 

This is the point where Culture & Collaboration becomes an ROI discussion. Not because culture is “soft”, but because the cost shows up as unplanned work, incident response, and recovery effort.

A clear quality strategy and approach makes this easier to explain to leaders because it puts language and measurement around what teams feel every week: 

  • DORA frames stability through change failure rate and time to restore, which reflect how often releases create production pain and how quickly teams recover. 
  • Splunk’s downtime research (with Oxford Economics) has reported very large aggregate costs for major organisations and cited high cost-per-hour figures, reinforcing that instability drains money, time, and executive attention. 
  • CISQ’s work on the cost of poor software quality and technical debt supports the reality that quality problems compound. Over time, systems become slower to change and riskier to release. 

For a delivery team, this lands in practical ways people recognise straight away: 

  • Defect-driven rework keeps displacing roadmap delivery. 
  • Cycle time grows through fix–retest loops and release delays. 
  • Production incidents become normalised, and recovery slows when operational readiness is thin. 
  • Stakeholder confidence drops, which increases governance overhead and slows decisions. 

If you’re explaining uneven quality to leadership, this is the cleanest line to draw: uneven quality is often a symptom of uneven maturity in how teams collaborate and share ownership of risk. 

What to do next 

Culture & Collaboration is one of the most powerful drivers of quality outcomes because it shapes how risk is surfaced, shared, and acted on across delivery. When teams still rely on handoffs and late discovery, additional tools and frameworks rarely change the result. 

The practical next step is to measure where quality breaks down today – across people, practices, and ways of working – so improvement effort is focused where it will reduce delivery rework and production incidents. 

Take our Quality Maturity Assessment to identify where quality is uneven across teams, and receive a clear, prioritised set of actions to reduce production incidents, shorten cycle time, and restore release confidence. 


