Software projects keep getting harder in ways that don’t show up neatly on burndown charts.
Teams get bigger, architectures get more layered, and yet the most fragile part of the system is often still a handful of design decisions that only a few people really understand. When those people are busy, leave, or simply disagree, progress slows and risk rises.
That’s not because engineers are getting worse. It’s because the way we organise software design hasn’t kept up with the scale and economic importance of the systems we’re building.
In Part 1, I argued that software design doesn’t scale economically with system complexity. We can add more developers, but we can’t easily add more design capacity without coordination costs exploding. In this post, I want to introduce the missing abstraction that explains why - and what changes when we add it.
The core thesis is this:
Software design fails to scale because we lack an industrial model for producing, combining, and reusing specialised design work—and a software design supply chain provides that model.
Most engineers intuitively understand that specialisation matters. We rely on people who are good at databases, performance tuning, distributed systems, security, UX, and so on.
But in practice, that expertise is delivered in a very specific way: through people embedded in teams, working on projects, synchronising via meetings, pull requests, and tribal knowledge.
That delivery model has consequences: design capacity scales only as fast as headcount, critical knowledge stays locked in a few individuals, and coordination costs grow faster than the systems being built.
We compensate by adding process, documentation, and coordination layers. That helps - but it doesn’t change the underlying structure.
The real issue isn’t tooling or methodology. It’s that we treat software design as a monolithic activity, rather than as a composition of specialised contributions.
People often compare software to manufacturing, but usually in shallow ways: “code is like a factory”, “pipelines”, “assembly lines”.
The more useful analogy is this: modern industry scales through supply chains, not heroic generalists.
In a physical supply chain, specialised suppliers produce components against explicit specifications, accountability for defects is contractual, and the final product is assembled from parts no single company builds end to end.
No car manufacturer tries to internally reproduce the full expertise of steelmaking, microelectronics, logistics, and materials science. They assemble systems by composing specialised inputs.
Software, by contrast, still behaves like a pre-industrial craft.
A software design supply chain treats software design as a composition of design contributions, rather than as a single, team-owned activity.
Each design contribution accepts a formal specification of constraints and goals, applies specialised expertise to it, and returns concrete design outcomes together with the assumptions behind them.
Crucially, the contribution encapsulates expertise, not just code. It represents a repeatable design process, not a one-off decision.
Design contributions are then combined via collaboration specifications that define how decisions interact across layers.
This shifts the unit of composition from “people in a team” to “executable design processes”.
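To make the shape of that boundary concrete, here is a minimal sketch in TypeScript. The names (DesignContribution, DesignSpec, CollaborationSpec) are illustrative only; they describe the specification-in, outcome-out interface the model implies, not any existing tool.

```typescript
// Illustrative sketch only: a design contribution as an executable design
// process with a specification-in, outcome-out boundary.

interface DesignSpec {
  constraints: Record<string, string | number>; // formal constraints from the consumer
  goals: string[];                              // what the outcome must achieve
}

interface DesignOutcome {
  decisions: string[];       // concrete design decisions returned
  assumptions: string[];     // assumptions made, stated explicitly
  specificationMet: boolean; // whether the stated specification was satisfied
}

interface DesignContribution {
  id: string;
  version: string;
  // Applies encapsulated expertise to a specification; the heuristics stay hidden.
  apply(spec: DesignSpec): DesignOutcome;
}

// A collaboration specification defines how decisions from different
// contributions are allowed to interact across layers.
interface CollaborationSpec {
  upstream: string;   // id of the contribution whose outcome feeds in
  downstream: string; // id of the contribution that consumes it
  compatible(a: DesignOutcome, b: DesignOutcome): boolean;
}
```

The point of the sketch is the boundary, not the fields: the consumer sees only the specification and the outcome, never the reasoning behind apply.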
Imagine a team building a data-heavy service that occasionally hits latency spikes under load.
Today, the usual options are to embed a performance specialist in the team, queue for a central review, or let generalists make their best guess and hope load testing catches the worst of it.
In a software design supply chain, performance analysis is a design contribution.
The team provides a formal description of constraints and goals (throughput, memory limits, workload shape). The performance contribution applies its encapsulated expertise and returns concrete design outcomes: data structure choices, memory layouts, concurrency strategies, and explicit assumptions.
The team doesn’t need the specialist embedded. The specialist doesn’t need to attend meetings. And responsibility is clear: if the inputs were valid, the contribution is accountable for meeting its specification.
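As a rough sketch of what that exchange could look like, here is a hypothetical applyPerformanceContribution with a placeholder body; in the model, the real expertise would live behind that boundary, invisible to the caller.

```typescript
// Hypothetical sketch: invoking a performance-analysis contribution against
// a formal specification. All names and the placeholder logic are illustrative.

interface PerformanceSpec {
  targetThroughputRps: number;  // sustained requests per second required
  memoryLimitMb: number;        // hard memory ceiling
  workloadShape: "read-heavy" | "write-heavy" | "mixed";
  p99LatencyGoalMs: number;     // latency goal under load
}

interface PerformanceOutcome {
  dataStructures: string[];     // concrete structure and layout choices
  concurrencyStrategy: string;  // e.g. worker pools, partitioned queues
  assumptions: string[];        // explicit assumptions behind the outcome
  specificationMet: boolean;
}

// Placeholder implementation: the encapsulated heuristics sit behind this
// boundary and are never exposed to the consuming team.
function applyPerformanceContribution(spec: PerformanceSpec): PerformanceOutcome {
  return {
    dataStructures: ["read-optimised index", "bounded in-memory cache"],
    concurrencyStrategy: "partitioned worker pool sized to the memory limit",
    assumptions: [`workload stays ${spec.workloadShape}`, "hot keys are skewed"],
    specificationMet: spec.memoryLimitMb >= 512,
  };
}

const outcome = applyPerformanceContribution({
  targetThroughputRps: 5000,
  memoryLimitMb: 2048,
  workloadShape: "read-heavy",
  p99LatencyGoalMs: 50,
});

if (!outcome.specificationMet) {
  // Accountability is explicit: either the inputs were invalid or the
  // contribution failed its obligations.
  throw new Error(`Specification not met under: ${outcome.assumptions.join("; ")}`);
}
```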
Large organisations often centralise architecture to control risk. The result is predictable: queues, delays, and tension between “delivery” and “governance”.
A supply-chain model reframes this.
Architecture becomes a set of design contributions that encode constraints explicitly, can be invoked on demand, and carry accountability for the guidance they return.
Teams invoke architectural contributions as part of their design flow, rather than waiting for review. When constraints change, new contributions are introduced instead of silently mutating old ones.
Architecture stops being a gatekeeper role and becomes a reusable design input.
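Sketched in the same hypothetical style, an architectural contribution might expose its constraints and a check that teams invoke during design rather than at review time:

```typescript
// Illustrative sketch: architecture delivered as versioned design
// contributions invoked in the team's design flow, not via a review queue.

interface ArchitecturalConstraint {
  rule: string;       // e.g. "services must not share a database"
  rationale: string;  // why the constraint exists
}

interface ArchitecturalContribution {
  id: string;
  version: string;    // constraints change by publishing a new version,
                      // not by silently mutating an old one
  constraints: ArchitecturalConstraint[];
  check(design: { decisions: string[] }): { ok: boolean; violations: string[] };
}

// Invoked as part of design, not as a gate at the end.
function applyArchitecture(
  contribution: ArchitecturalContribution,
  design: { decisions: string[] }
): void {
  const result = contribution.check(design);
  if (!result.ok) {
    // Failing at design time keeps the constraint visible instead of
    // deferring it to a late review.
    throw new Error(
      `${contribution.id}@${contribution.version} violated: ${result.violations.join(", ")}`
    );
  }
}
```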
We already reuse code. Libraries, frameworks, services - these are familiar.
What we don’t reuse well is design reasoning.
Encapsulation is the mechanism that makes this possible. By hiding internal heuristics and exposing only specifications and outcomes, design contributions allow design reasoning to be reused across systems, improved independently of the teams that consume it, and held to explicit obligations.
This is also where liability boundaries become practical. When something fails, you can trace which contribution violated its obligations, rather than arguing about intent or interpretation months later.
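A minimal illustration of that traceability, assuming (hypothetically) that every outcome carries a record of which contribution produced it and against which inputs:

```typescript
// Illustrative sketch: recording provenance so liability can be traced to a
// specific contribution rather than argued from memory. Names are hypothetical.

interface OutcomeRecord {
  contributionId: string;
  contributionVersion: string;
  specHash: string;          // fingerprint of the inputs the contribution saw
  specificationMet: boolean; // did it meet its stated obligations?
  assumptions: string[];
}

// When a failure surfaces later, the question is not "who intended what?"
// but "which recorded contribution failed its obligations?"
function findResponsible(records: OutcomeRecord[]): OutcomeRecord[] {
  return records.filter((r) => !r.specificationMet);
}
```

The details don’t matter; what matters is that responsibility is recorded at the moment of contribution, not reconstructed from recollection months later.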
A common counterargument is that software design is too contextual to be industrialised.
Every system is different. Requirements change. Edge cases dominate. Surely formalising design processes just creates rigidity?
This objection is worth taking seriously.
The response is that the model doesn’t remove context - it makes it explicit.
Design contributions don’t eliminate judgement; they constrain where judgement lives. If a context truly requires bespoke treatment, that becomes visible in the specification. If a contribution can’t operate, it fails early instead of producing silent risk.
What feels like flexibility today is often just unaccountable variation. The supply-chain model replaces that with intentional choice.
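As a small illustration of failing early, again with hypothetical names, a contribution can refuse to operate when the context falls outside what its specification covers:

```typescript
// Illustrative sketch: a contribution declines loudly when the supplied
// context does not fit its specification, instead of guessing.

interface CacheSpec {
  workloadShape: "read-heavy" | "write-heavy" | "mixed";
  maxStalenessSeconds: number;
}

interface CacheOutcome {
  strategy: string;
  assumptions: string[];
}

function applyCachingContribution(spec: CacheSpec): CacheOutcome {
  // This contribution only claims expertise for read-heavy workloads;
  // outside that context it fails early rather than producing silent risk.
  if (spec.workloadShape !== "read-heavy") {
    throw new Error("caching contribution cannot operate: workload is not read-heavy");
  }
  return {
    strategy: `cache-aside with TTL <= ${spec.maxStalenessSeconds}s`,
    assumptions: ["reads dominate writes", "the staleness bound is acceptable"],
  };
}
```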
Once you can treat software design as a supply chain, several things become possible: specialised design work can be produced once and reused many times, design capacity can grow without coordination costs exploding, and accountability can be traced to specific contributions rather than to individuals.
Most importantly, organisations gain a way to invest in design capability structurally, rather than hoping that talent and process will compensate.
Series: Emergent Coding and the Software Design Supply Chain
This is Part 2 of the series.
In Part 3, I’ll focus on encapsulated expertise—and how this model lets organisations access deep specialists without permanently carrying the cost.
If this framing resonates, follow along for the next post—or revisit Part 1 if you want the economic problem statement that led here.