API Contract Testing Is a Communication Problem
Lessons from a conversation with Lewis Prescott on the people, process, and operating model behind successful contract testing.

Lewis Prescott and Ikenna Nwaiwu
Most teams approach API contract testing as a technical practice. They evaluate tools, compare frameworks, debate Pact versus schema-based approaches, and ask how contract tests should fit into CI/CD pipelines.
Those questions matter. But they are not where contract testing usually succeeds or fails.
In my conversation with Lewis Prescott, co-author of Contract Testing in Action, the clearest message was this: contract testing is fundamentally about communication. The tooling exists because teams struggle to keep API expectations aligned once systems become distributed, ownership becomes fragmented, and multiple consumers depend on services that evolve independently.
“Contract testing is built on the fact that we’re bad at communication.”
In other words, contract testing (CT) is not just a testing technique. It is an operating model concern.
That distinction matters for engineering leaders. If you treat contract testing as another tool to roll out, adoption will likely be slow, noisy, and uneven. If you treat it as a collaboration mechanism between API consumers and providers, it becomes a way to reduce integration risk, improve delivery flow, and make API ownership more visible across the organisation.
This article explores the people and process side of contract testing: why teams introduce it too late, why communication breaks down, how responsibilities should be distributed, where tooling helps, where it falls short, and how leaders should think about success.
The real reason contract testing exists
Lewis put the problem simply: contract testing is built on the fact that we are bad at communication.
That may sound blunt, but it captures the reality of modern software delivery. In a monolith or a smaller system, keeping API expectations aligned is relatively straightforward. People are closer to the code, dependencies are easier to trace, and teams can often resolve mismatches through direct conversation.
Microservices change that.
Once an organisation has many services, many teams, and many consumers depending on many APIs, informal communication stops scaling. A provider may change a response field without knowing which consumers rely on it. A consumer may build against outdated documentation. A team may not know who owns an API. Documentation may say one thing while production behaviour says another.
Contract testing exists to make those expectations explicit.
It gives consumers a way to describe what they need from a provider. It gives providers a way to verify that they can still satisfy those expectations. And when implemented well, it gives both sides a shared source of truth that is more reliable than stale documentation or tribal knowledge.
But the key phrase is “implemented well”.
The tool can store contracts, run verifications, and surface failures. It cannot, by itself, create ownership clarity, trust between teams, or disciplined change management. That is why contract testing has to be understood as a people-and-process practice before it is understood as a tooling practice.
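The core mechanic can be sketched without any particular framework. In this illustrative Python sketch (the contract format, service names, and fields are invented for the example, not a real Pact artefact), the consumer records its expectation as a simple contract document, and the provider replays that request against its own implementation to check the expectation still holds.

```python
# A minimal consumer-driven contract, written by the consumer team.
# The structure and names here are illustrative, not a real contract format.
contract = {
    "consumer": "checkout-web",
    "provider": "orders-api",
    "interaction": {
        "request": {"method": "GET", "path": "/orders/42"},
        "expected_response": {
            "status": 200,
            "body_fields": {"id": int, "total": float, "currency": str},
        },
    },
}

def verify_provider(contract, call_provider):
    """Replay the contract's request against the provider and check
    that the response still satisfies the consumer's expectations."""
    interaction = contract["interaction"]
    status, body = call_provider(interaction["request"])
    expected = interaction["expected_response"]
    if status != expected["status"]:
        return False, f"expected status {expected['status']}, got {status}"
    for field, field_type in expected["body_fields"].items():
        if field not in body:
            return False, f"missing field {field!r}"
        if not isinstance(body[field], field_type):
            return False, f"field {field!r} has wrong type"
    return True, "contract satisfied"

# A fake provider implementation standing in for the real service.
def fake_orders_api(request):
    return 200, {"id": 42, "total": 19.99, "currency": "GBP"}

ok, message = verify_provider(contract, fake_orders_api)
print(ok, message)  # True contract satisfied
```

The point of the sketch is the division of labour: the consumer authors the expectation, the provider runs the verification, and the contract document itself is the shared artefact both sides refer to.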
The first mistake: introducing CT late
One of the most common problems Lewis sees is that teams introduce contract testing after they already have a mature testing setup.
By that point, they may already have:
integration tests
mocked unit tests
schema tests
hand-maintained test doubles
environment-based validation flows
When contract testing is introduced at that stage, it often feels like duplication. Stakeholders ask what new value it adds. Teams see it as another layer of maintenance. The cost of adoption appears high because the organisation is trying to retrofit a collaboration model into a delivery process that has already evolved without it. As Lewis put it, "[Teams] introduce contract testing too late... then it's too late by that point to kind of retrofit it, and so the cost of value seems too high."
This pattern will be familiar to anyone working in API governance. The same problem appears with APIOps/GitOps and standards automation. Teams often realise the need for these practices only after complexity has accumulated.
At that point, the practice may still be valuable, but adoption becomes harder. You are not just adding a tool. You are changing habits that have already become normal.
With contract testing, the ideal moment is much earlier.
Lewis described the “Goldilocks moment” as the point where the API contract has started to stabilise, ideally before dependent teams begin building in parallel. For example, if a frontend team needs a backend API, it should not have to wait until the backend is fully implemented. What it needs is a contract to build against. "Before the UI starts working on it, you need an API contract in place," Lewis says. "Even if it's kind of rough, or it's going to change a little bit, you still need that contract in place."
That contract may still evolve. It may be rough at first. But once it exists, both teams can work with clearer expectations. The frontend team can progress without waiting for the full backend implementation. The backend team can validate that changes continue to meet consumer expectations. Integration risk moves earlier in the lifecycle.
That is where contract testing creates leverage.
If it is introduced only after integration environments are already catching failures, the organisation has missed much of the opportunity.
Contract testing should enable parallel delivery
A healthy contract testing process allows teams to work in parallel rather than sequentially.
Without a contract-first mindset, teams often fall into a familiar pattern: the API team must build first, the frontend or consuming team waits, and real integration feedback arrives late. This creates dependencies, queues, and avoidable delay.
Contract testing changes the flow. Once the expected interaction is described, the consumer and provider can move independently. The consumer can build against the contract. The provider can implement and verify against it. Both sides have a shared mechanism for detecting whether expectations still align.
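The consumer side of that flow can be sketched too. In this hypothetical Python example (the contract shape and endpoints are invented for illustration), the same contract the provider will later verify also drives a local stub, so the consuming team can develop before the real service exists.

```python
# The agreed interaction, shared between consumer and provider.
# Paths and fields here are illustrative.
contract = {
    "request": {"method": "GET", "path": "/orders/42"},
    "response": {"status": 200, "body": {"id": 42, "total": 19.99}},
}

def stub_provider(contract):
    """Return a callable that answers only the interactions the
    contract describes, failing loudly for anything else."""
    def handle(method, path):
        req = contract["request"]
        if (method, path) == (req["method"], req["path"]):
            return contract["response"]["status"], contract["response"]["body"]
        return 501, {"error": f"no contract covers {method} {path}"}
    return handle

call = stub_provider(contract)
print(call("GET", "/orders/42"))  # (200, {'id': 42, 'total': 19.99})
print(call("GET", "/orders/99"))  # 501: the contract does not cover this
```

Because the stub refuses anything the contract does not describe, the consumer cannot quietly build against behaviour the provider never agreed to, which is what keeps the two parallel streams aligned.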
That is why timing matters so much. Contract testing is most valuable when it becomes part of the API lifecycle from the beginning, not when it is added after integration pain has already appeared.
For engineering leaders, this has an important implication: contract testing should be connected to API design, API governance, and delivery planning. It should not sit off to the side as a QA concern.
The questions are not only:
Which framework should we use?
Where should the tests run?
Who maintains the broker?
The more important questions are:
When do we define contracts?
Who owns consumer expectations?
How do provider teams learn about changes?
What standards do teams follow when writing contracts?
What happens when a contract breaks?
Are teams using contracts to enable parallel delivery, or merely to add another test stage?
Those are operating model questions.
Tooling works when teams are bought in
The best contract testing tooling reduces the friction of communication. For example, when an API has multiple consumers, a contract broker can make relationships visible. It helps providers understand who depends on them. It helps consumers publish expectations in a central place. It removes some of the organisational detective work involved in asking, “Who owns this API?” or “Who do I need to speak to before making this change?”
In that sense, tooling can become a communication substrate. But tooling only works when teams trust the signal.
Lewis described a common failure mode: too much red. If contract tests are flaky, noisy, or constantly failing for reasons teams do not understand, people learn to ignore them. They see a broken contract test and assume, “It’s probably not us.” Their unit tests pass. Their local checks pass. So they move on.
Then the service reaches an integration environment and breaks. At that point, the organisation has to backtrack. The failure arrives late, the investigation takes longer, and the contract testing process loses credibility.
This is not unique to contract testing. Any test suite with a poor signal-to-noise ratio trains teams to ignore feedback. But contract testing is especially vulnerable because it crosses team boundaries. If one team does not understand or trust the failure, the process becomes a source of friction rather than alignment.
The lesson for leaders is simple: do not measure contract testing adoption only by whether tests exist. Measure whether teams act on the feedback.
A contract testing process that produces warnings no one trusts is not a governance mechanism. It is background noise.
Training is not optional
Because contract testing spans teams, it requires more education than many other testing practices.
Lewis emphasised that many teams have never seen contract testing used well in practice. They may understand unit testing, integration testing, and end-to-end testing, but contract testing introduces a different model: the test is not complete until both sides have done their part.
That is not intuitive for everyone. Consumer teams need to understand how to express expectations clearly. Provider teams need to understand how to verify those expectations. Both sides need to understand that contracts belong in a shared location, not hidden inside one team’s local test suite.
Teams also need guidance on what belongs in a contract test. If contracts are too loose, they let real integration problems through. If they are too rigid, they become brittle and expensive to maintain. Teams need examples, standards, and coaching to find the right balance.
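The loose-versus-rigid trade-off has a concrete shape. A common guideline (illustrated here in a hedged Python sketch with invented field names) is to match on the structure the consumer actually relies on, rather than on exact test values, which break every time fixture data changes.

```python
# Two ways a consumer might express the same expectation.
# Exact-value matching: brittle, breaks whenever test data changes.
exact_expectation = {"id": 42, "status": "SHIPPED", "total": 19.99}

# Shape matching: asserts only what the consumer actually relies on.
shape_expectation = {"id": int, "status": str, "total": float}

def matches_shape(body, expectation):
    """Check that each field the consumer cares about exists and has
    the right type; extra provider fields are deliberately ignored."""
    return all(
        field in body and isinstance(body[field], expected_type)
        for field, expected_type in expectation.items()
    )

# A provider response with different values and an extra field.
response_body = {"id": 7, "status": "PENDING", "total": 5.0, "vat": 1.0}
print(matches_shape(response_body, shape_expectation))  # True
print(response_body == exact_expectation)               # False
```

Mature contract testing tools provide matcher mechanisms along these lines; the sketch simply makes visible why value-level assertions produce the brittle, noisy contracts teams learn to distrust.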
This is where a central enablement role becomes valuable. That role may sit with a lead test architect, an automation lead, a tech lead, a platform team, or a developer experience group. The exact title matters less than the responsibility: someone needs to help teams adopt the practice consistently without turning it into a bureaucratic bottleneck.
The goal is not to centralise all contract testing work. Feature teams still need to own the contracts relevant to their services. But they should not have to invent the operating model from scratch.
Clarifying responsibilities
Contract testing works best when responsibilities are explicit.
The consumer side needs to define what it expects from the provider. That includes more than endpoint names and response shapes. It may include naming conventions, data expectations, and the scenarios that matter for the consumer’s behaviour.
The provider side needs to pull those contracts from the shared location and verify that the service satisfies them. Providers also need to set up appropriate test data and understand the consumer scenarios being exercised.
Then there is the shared infrastructure. In many organisations, a DevOps, platform, or enablement team will be responsible for deploying and securing the broker or contract testing platform. This includes access control, environment configuration, and avoiding the fragmentation that occurs when multiple teams spin up separate brokers across the organisation.
That last point is important. A contract broker is only useful as a shared source of truth if teams actually share it. If every part of the organisation creates its own isolated setup, visibility disappears and the communication benefits are reduced.
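The visibility a shared broker provides can be illustrated with a minimal sketch. The in-memory "broker" and its query below are stand-ins invented for this example, not a real broker API: the point is that a provider's CI step can ask one central place for every consumer that depends on it, then verify each contract in turn.

```python
# An in-memory stand-in for the shared contract broker.
# Service names are illustrative.
broker = [
    {"consumer": "checkout-web", "provider": "orders-api"},
    {"consumer": "mobile-app", "provider": "orders-api"},
    {"consumer": "admin-ui", "provider": "users-api"},
]

def fetch_contracts_for(provider_name, broker):
    """Ask the shared broker for every contract naming this provider."""
    return [c for c in broker if c["provider"] == provider_name]

def verify_all(provider_name, broker, verify_one):
    """Provider-side CI step: verify each consumer's contract and
    report the result per consumer."""
    results = {}
    for contract in fetch_contracts_for(provider_name, broker):
        results[contract["consumer"]] = verify_one(contract)
    return results

results = verify_all("orders-api", broker, lambda c: True)
print(results)  # {'checkout-web': True, 'mobile-app': True}
```

Notice that the question “Who depends on this API?” is answered by a query, not by organisational detective work; that is the communication substrate the article describes, and it only works if every team publishes to the same broker.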
At the same time, Lewis warned against putting too much process in the way of teams getting started. This is a familiar governance tension: too little structure creates inconsistency, but too much structure kills momentum.
The best operating model provides enough standardisation to make the practice reliable, while keeping adoption lightweight enough for teams to begin.
Bi-directional contract testing: useful, but limited
We also discussed bi-directional contract testing.
Lewis’s view was nuanced. Bi-directional contract testing can be useful, especially when an organisation is trying to retrofit contract testing into an existing setup. If teams already use mocks on the consumer side and OpenAPI specifications on the provider side, a bi-directional approach can compare those artefacts and provide a way to get started without rewriting everything.
That can be valuable. But it also introduces trade-offs.
The issue is fidelity. In a classic consumer-driven contract testing model, the contract represents an agreed interaction between consumer and provider. In a bi-directional model, the comparison is more indirect. Consumer mocks may not be a faithful representation of the real API expectation. Provider specifications may not fully capture runtime behaviour. A middle layer performs the matching, but the communication loop can become weaker.
Lewis’s point was not that bi-directional contract testing is bad. It is that it should be understood for what it is: a pragmatic way to start, especially when retrofitting into existing test assets.
If you were starting from scratch, you would not necessarily choose it as the primary model.
That distinction is important for leaders. A retrofit-friendly approach can reduce adoption friction, but it may not create the same level of shared understanding between teams. If your goal is only to add another compatibility check, that may be acceptable. If your goal is to improve communication between consumers and providers, you need to be careful not to remove the very collaboration that contract testing is meant to create.
The most important metric
When Lewis runs talks or workshops, he often asks teams a simple question: “How many issues do you find in the integration environment?”
The ideal answer should be “none”. In practice, he rarely hears that.
This is one of the clearest ways to evaluate whether contract testing is working. If teams are still discovering avoidable API compatibility issues in integration environments, then the contract testing process is not doing its job.
“If you have a smooth CI-CD process, you should never be finding issues in the integration environment.”
This metric matters because it connects contract testing to delivery performance. The goal is not to have more tests. The goal is to move integration feedback earlier so teams can release with greater confidence.
A healthy process should show:
fewer issues found in integration environments
fewer late surprises between consumers and providers
contracts defined before development starts, not after something breaks
teams working in parallel rather than waiting on each other
contract failures treated as meaningful signals, not ignored noise
less backtracking when services move through the delivery pipeline
Some of these are quantitative. Some are diagnostic. But together they tell leaders whether contract testing is improving the system of delivery or merely adding another process step.
This is where contract testing becomes part of engineering governance. It gives leaders a way to ask whether the API delivery system is healthy.
Tooling decisions
Tooling still matters, even if it is not the heart of the problem.
One practical issue is how teams run the shared broker or contract testing platform. Some organisations prefer self-hosted options because they want tighter control over infrastructure, security, and internal standards. Others benefit from a platform-as-a-service model because it lowers the operational barrier to entry.
Lewis noted that platform-as-a-service can be useful when teams want to get started quickly but do not yet have the operational support in place. If a team has to get approval from several parties before deploying a broker, momentum can disappear before the practice has a chance to prove itself.
This is another operating model decision. The question is not only “Which tool is best?” The question is “Which option helps teams adopt the practice without creating unnecessary friction?”
For some organisations, self-hosting will be the right choice. For others, a managed service may help teams learn faster and demonstrate value earlier.
The worst outcome is not choosing the wrong hosting model. The worst outcome is allowing infrastructure debates to stall the behavioural change that contract testing is meant to support.
Where AI may help
AI is unlikely to remove the need for contract testing discipline. But it may reduce the upfront cost of adoption.
Lewis sees potential in AI-assisted scaffolding: generating the initial shape of contract tests, helping teams get started faster, and reducing the friction of writing tests by hand. That does not mean teams no longer need to understand the theory. They still need to know what good contracts look like, what should be tested, and how consumer-provider collaboration works.
But AI could reduce the blank-page problem.
There is also potential in change impact analysis. AI could help inspect consumers, identify which changes may affect which services, and give teams a clearer sense of blast radius before a change is deployed.
However, Lewis made an important caveat: the framework and infrastructure still need to be in place. AI can enhance a contract testing process, but it cannot replace the foundational work of defining ownership, storing contracts centrally, and establishing reliable verification flows.
AI may make contract testing easier to adopt. It will not make communication optional.
The leadership lesson
The most important lesson from my conversation with Lewis is that contract testing should not be introduced as a tooling initiative.
It should be introduced as a delivery improvement initiative.
The purpose is not to add tests for their own sake. The purpose is to help teams move faster without discovering API mismatches late in the process. The purpose is to make expectations explicit. The purpose is to improve communication between teams that depend on each other but do not always understand each other’s constraints.
That means leaders need to think beyond frameworks and pipelines.
They need to ask:
Do teams define API contracts early enough?
Are consumer expectations visible to providers?
Is there a shared broker or source of truth?
Do teams trust contract test failures?
Are responsibilities clear between consumers, providers, and platform teams?
Are integration environments still finding issues that should have been caught earlier?
Is contract testing helping teams work in parallel, or has it become another after-the-fact control?
These are not just testing questions. They are operating model questions. And that is why contract testing belongs in the broader API governance conversation.
Conclusion
Contract testing enables teams to verify service compatibility earlier, reduce integration surprises, and deliver software with greater confidence. But its success depends less on the tool than on the operating model around it.
If teams do not communicate, contracts will drift. If ownership is unclear, failures will be ignored. If contract testing is introduced too late, it will feel like duplicate effort. If the test signal is noisy, teams will stop trusting it.
But when contract testing is introduced early, supported by clear responsibilities, and treated as a collaboration mechanism between consumers and providers, it becomes much more than a testing practice. It becomes a way to make distributed software delivery more predictable.
For engineering leaders, the question is not simply whether your teams have contract tests.
The better question is: are your teams using contracts to communicate before integration breaks?

Most organisations already have API design guidelines, reviews, gateways, portals, and platform teams. Yet engineering leaders can’t answer two questions:
Is our API governance effective?
Are there missing API delivery gaps that expose us to risks?
These questions are difficult to answer from inside the organisation. Internal architects and platform teams know parts of the picture, but they lack the independence, mandate, or bandwidth to assess it objectively.
Engineering leaders making investment decisions need an independent, evidence-based view of API governance and delivery that internal teams cannot credibly produce for themselves.
I give engineering leaders this independent assessment of their API delivery and governance risks.
Book a 30-minute conversation with me to determine whether an independent assessment will de-risk your platform investment. No prep needed.
About this newsletter
The Ikenna Consulting Newsletter delivers weekly insights on API governance operating models, helping engineering leaders understand why governance is an operating model problem, not a tooling problem.


