AI Systems · Operations · Product Management
13 March 2026 · 6 min read
Lee Leckenby // System Builder

The Compounding Human

AI doesn't replace people who think — it compounds them. The gap opening up isn't between people who have AI and people who don't. It's between people building intuition now and people waiting to see how it shakes out.

// FOCUS

AI agents, knowledge work, product thinking, compounding advantage

// AUDIENCE

Builders, operators, and AI-native product people

// FORMAT

Article

The wrong question

Most people are asking whether AI will replace them. That is the wrong frame. The real question is simpler and more urgent: are you on the right side of the compounding curve?

Compounding, as any investor knows, is the most powerful force in long-term outcomes. Small advantages applied consistently over time produce gaps that become impossible to close. The person who starts compounding ten years earlier does not finish twice as far ahead. They finish in a different category entirely.

The same dynamic is now playing out in how knowledge workers develop capability. Not dramatically. Not with a clear announcement. Quietly, through daily decisions about whether to engage with AI seriously or to wait.

What actually compounds

Here is the thing about compounding: it does not compound the tool. It compounds the person using it.

The people pulling ahead are not the ones with access to better models. Model access is broadly equal. They are the ones who have spent real time building intuition for how to direct agents — where they fail, when to trust output and when to challenge it, how to design workflows that scale without breaking. That knowledge is the asset. The model is just infrastructure.

This is not about prompting, despite how much energy has been poured into that framing. Prompting is a thin skill. What compounds is something deeper: the ability to think in systems, to decompose complex goals into bounded tasks an agent can execute reliably, to govern a team of non-human collaborators with the same rigour you would apply to a human team. Call it agentic thinking. It is not a technical skill. It is a product skill. A design skill. A systems skill. Most of the people who will be best at it are not engineers.
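That decomposition discipline can be sketched in code. The sketch below is purely illustrative, not a real agent framework: the `BoundedTask` type, the stub lambdas standing in for agent calls, and the `check` functions are all hypothetical. What it shows is the shape of agentic thinking described above: each task is small enough to execute reliably, each carries an explicit acceptance check, and output is trusted only after it passes.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class BoundedTask:
    """One unit of work small enough for an agent to execute reliably."""
    name: str
    run: Callable[[str], str]     # the agent's action (stubbed below)
    check: Callable[[str], bool]  # the human-defined acceptance test

def execute_plan(goal: str, tasks: list[BoundedTask]) -> list[str]:
    """Run each task in order, trusting output only after its check passes."""
    results = []
    context = goal
    for task in tasks:
        output = task.run(context)
        if not task.check(output):
            # Govern like a human team: failed work escalates, it doesn't ship.
            raise ValueError(f"task '{task.name}' failed its check; escalate")
        results.append(output)
        context = output  # each task builds only on verified output
    return results

# Hypothetical stubs standing in for real agent calls
tasks = [
    BoundedTask("summarise", lambda c: f"summary of: {c}",
                lambda o: o.startswith("summary")),
    BoundedTask("draft", lambda c: f"draft based on {c}",
                lambda o: "draft" in o),
]

results = execute_plan("quarterly report", tasks)
```

The point of the sketch is the `check` field: the person directing the agents, not the agents themselves, defines what acceptable output looks like. That is the product skill, expressed as code.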

The PM advantage

Product managers have been training for this without realising it.

Think about what the role actually demands. You achieve outcomes through people you do not directly control. You hold the context that connects strategy to execution. You ask the question everyone else is too close to see: does this actually solve the problem? You know when a system is drifting before it breaks. You design for behaviour, not just function.

Every one of those skills transfers directly to directing agents. Managing an agent team is not so different from managing a cross-functional team on a complex product. The communication layer changes. The underlying discipline does not. The PMs and product thinkers who recognise this early and lean in will compound faster than most technical people currently expect — not because they will out-code anyone, but because they already know how to govern complex systems toward outcomes. That is the harder skill.

The divide opening up

The gap forming is not between people who have AI and people who do not. Access is not the constraint.

The gap is between people building compounding intuition and people waiting to see how it all shakes out. The people waiting are not being idle. They are making a decision with a cost they cannot see yet. This is not a warning. It is an observation about how compounding works. The earlier you start, the less effort it takes to stay ahead. The longer you wait, the more effort it takes just to close the gap, let alone get in front of it. Most people underestimate this because the advantages are invisible in the short term. Then they land hard.

What compounding looks like in practice

It does not look like obsessing over the latest model release. It does not look like having the right tools.

It looks like this: you work alongside agents every day and you get better at directing them. You learn which tasks they handle reliably and which need more structure. You build an intuition for what good output looks like and how to course-correct quickly when it drifts. You start designing your workflows differently — not because someone told you to, but because you have enough reps to know what works.

Over months, that becomes instinct. Over a year, it becomes a real edge. The person who has run agents seriously for eighteen months does not just know more than someone who has not. They think differently. They see the gap between what a tool can do and what it is actually doing. They close that gap fast. That is not a skill you can acquire by reading about it.

The window

Here's the optimistic part: the window is still open.

The compounding curve is early. Most organisations are still evaluating. Most knowledge workers are still treating AI as a novelty or a threat rather than a medium to master. The gap between where the early compounders are and where most people are is still closeable. But compound curves have inflection points. At some stage the gap becomes structural. The teams that have been directing agents seriously for two years will carry an intuition advantage that a six-month catch-up sprint cannot replicate.

The question is not whether agents will become central to how knowledge work gets done. That is already settled. The question is when you decide to start compounding.