Software systems keep getting more powerful, more connected, and more critical to how organisations operate.
Yet if you talk to experienced engineers, a strange pattern emerges:
despite better tools, faster hardware, and more sophisticated platforms, software design feels harder, slower, and more fragile than it should.
Teams grow, processes mature, and budgets increase — but design quality does not scale in proportion. Instead, complexity accumulates. Specialist knowledge becomes a bottleneck. Coordination costs rise. Trade-offs harden into permanent constraints.
This isn’t a tooling problem.
It’s not a talent shortage.
And it’s not something agile ceremonies or better documentation can fix.
The core issue is that software design does not scale economically with system complexity.
As systems grow, organisations respond in predictable ways: more people, more process, more budget.
Each of these helps, briefly.
Over time, though, the same failure modes reappear.
We’ve largely accepted this as “just how software works”.
That acceptance is the real problem.
In mature industries — manufacturing, logistics, construction — complexity is handled through specialisation and supply chains. No single organisation designs everything end-to-end. Instead, specialised contributors deliver well-defined components, each with clear responsibilities and interfaces.
Software, by contrast, still treats design as a largely monolithic activity.
Even when code is modular, design expertise isn’t. Architectural judgement, performance tuning, concurrency modelling, security reasoning — these live in people’s heads and team processes, not in reusable industrial units.
The consequences are structural.
We’ve industrialised code reuse.
We haven’t industrialised design reuse.
Consider a system under load.
It works fine initially, but as usage grows, latency spikes appear in edge cases. Eventually, a performance specialist is brought in.
They analyse the system and trace the problem to cache invalidation paths, memory layout issues, and scheduling interactions. Changes are made. Performance improves.
Six months later, the system evolves. A new feature interacts badly with the original assumptions. Performance degrades again.
Nothing “failed” — except the scalability of expertise.
The specialist’s reasoning wasn’t preserved as a reusable design input. It remained embedded in human context, documentation, and tribal memory. Each revisit incurs the same cognitive and coordination cost.
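To make that concrete, here is a minimal, hypothetical sketch; the cache, the names, and the scenario are invented for illustration, not taken from any real system. The specialist's fix works, but the assumptions behind it survive only as comments.

```python
# Hypothetical sketch: a performance fix whose design assumptions live only
# in comments and docstrings, not in any reusable, checkable design input.
import time


class ProfileCache:
    """Read-through cache added after a latency investigation.

    Assumptions at the time of the fix:
      1. Profiles are written only through update_profile, so explicit
         invalidation there is sufficient.
      2. Writes are rare relative to reads, so a long TTL is safe.
    Neither assumption is enforced anywhere; they exist only in this docstring.
    """

    def __init__(self, load_fn, ttl_seconds=3600):
        self._load = load_fn            # the slow lookup the cache avoids
        self._ttl = ttl_seconds
        self._entries = {}              # user_id -> (value, expires_at)

    def get(self, user_id):
        entry = self._entries.get(user_id)
        if entry and entry[1] > time.monotonic():
            return entry[0]
        value = self._load(user_id)
        self._entries[user_id] = (value, time.monotonic() + self._ttl)
        return value

    def invalidate(self, user_id):
        self._entries.pop(user_id, None)


def update_profile(db, cache, user_id, fields):
    """Original write path: invalidation is wired in, so the fix holds."""
    db[user_id] = {**db.get(user_id, {}), **fields}
    cache.invalidate(user_id)


def bulk_sync(db, user_records):
    """Six months later: a new feature writes the same data directly,
    unaware of assumption (1). Nothing fails loudly; reads simply go stale,
    and the original problem returns in a new form."""
    for user_id, fields in user_records:
        db[user_id] = {**db.get(user_id, {}), **fields}   # cache never told
```

The point isn't the cache. It's that the constraints the specialist reasoned about exist nowhere the next feature's author can consume, check, or even see.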
Multiply this pattern across architecture, security, correctness, and reliability — and you get today’s software economics.
A common objection is that this is simply a people problem.
If we hire better engineers, build stronger teams, and invest in culture, the argument goes, these issues go away.
There’s truth here — but only up to a point.
Strong teams absolutely outperform weak ones.
But even elite teams hit structural limits.
At some scale, the problem stops being how good the team is and becomes how design work itself is organised.
No amount of talent removes the need for an industrial abstraction.
Because specialist expertise doesn’t scale well, organisations quietly compensate by encouraging generalism.
Engineers are expected to cover ever more of the design spectrum themselves: architecture, performance, concurrency, security, reliability.
This works, until it doesn't.
Generalists are invaluable, but forced generalism dilutes depth. Critical design decisions get made with partial understanding because waiting for specialists is too expensive or slow.
The system still ships.
But latent risk accumulates.
That risk shows up later as outages, rewrites, compliance failures, or permanent architectural scars.
Platforms are another common compensation: they package specialist decisions behind defaults and APIs so that consumers don't have to re-derive them. That helps too, until the platform's assumptions no longer fit. Then consumers are stuck. The abstraction leaks, and deep expertise must be reintroduced, urgently.
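As a hypothetical illustration (the helper and the policy are invented, not drawn from any real platform), here is what that leak can look like in miniature:

```python
# Hypothetical sketch: a platform helper that bakes in a specialist's
# assumption that every call is safe to retry.
import time


def platform_call(request, send, max_retries=3):
    """Platform-provided wrapper: retry failed calls with backoff.

    Baked-in assumption: requests are idempotent, so blind retries are
    harmless. True for the services the platform was designed around.
    """
    for attempt in range(max_retries):
        try:
            return send(request)
        except ConnectionError:
            time.sleep(2 ** attempt)    # back off and retry, no questions asked
    raise RuntimeError("platform_call: retries exhausted")


# A new consumer submits payments, which are not idempotent. If the
# connection drops after a charge has actually gone through, the wrapper
# retries and charges the customer twice. The consumer ends up bypassing
# the platform and re-deriving retry and deduplication expertise locally.
```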
Again, the issue isn’t bad execution.
It’s that expertise delivery remains time-bound and team-bound.
Let’s state the thesis explicitly:
Software design does not scale because specialist expertise cannot currently be delivered as a reusable, accountable, industrial input.
Until that changes, we will keep paying the same design costs repeatedly — no matter how good our tools become.
At this point, many people jump to AI or code generation.
That’s premature.
Automation can accelerate execution, but it does not solve the structural problem of expertise delivery. If anything, automation without clear responsibility boundaries amplifies risk.
The problem to solve first is organisational and economic: how specialist design expertise gets packaged as a reusable, accountable input, who takes responsibility for it, and where the boundaries of that responsibility sit.
Automation is a consequence of getting those answers right, not the starting point.
What abstraction are we missing?
In other industries, the answer was a supply chain.
In software, that idea sounds uncomfortable — even alien — because we’re used to thinking in terms of teams and projects.
This post is Part 1 of a series. In Part 2, I'll introduce the missing abstraction directly: the Software Design Supply Chain, what it actually means in practice, and why it's the only abstraction I've found that breaks this scaling trap. No hand-waving, no buzzwords.
If this resonates, follow along for the next post — or share it with someone who’s felt this problem but couldn’t quite name it.