Every procurement leader I speak with these days has the same question, asked in a hundred different ways: where do we start?

The board wants an AI strategy. The CFO wants savings. The team wants tools that actually work. And somewhere on a roadmap, there are twenty pilots: most of them will quietly die, a few will scale, and almost none were chosen with a clear theory of where the value actually sits.

This is not a technology problem. It is an allocation problem.

The work we already know how to do

The interesting thing about procurement is that we already know how to allocate. We allocate spend across categories. We allocate risk across suppliers. We allocate attention across quarterly reviews. The discipline of the function is, at its core, the discipline of where to put the next unit of effort.

And yet when AI enters the conversation, the same leaders who would never accept a 50/50 category split between a commodity and a strategic supplier suddenly lose their instinct for proportion. Everything becomes a pilot. Everything becomes equally important. Or worse, the loudest use case wins.

I want to offer a simpler frame. One I have come to rely on in my own work, and one I find myself returning to in conversations with peers.

The 70/20/10 Rule

For every hundred units of attention, budget, or risk appetite you have for AI in procurement, allocate them as follows:

  • 70% to operational automation
  • 20% to analytical augmentation
  • 10% to strategic experimentation

This is not a productivity slogan. It is a portfolio discipline. And like most portfolio disciplines, its value lies less in the exact numbers than in what the ratio forces you to confront: the boring, repetitive, deeply unglamorous work is where most of the value lives, and it is where most leaders are underinvesting.

Let me take each tier in turn.

The 70: Operational Automation

This is the layer where procurement actually runs. Purchase orders. Invoice matching. Supplier onboarding. RFX drafting from templates. Contract summarization. Master data cleansing. The thousand small motions that make a category manager's week.

It is unglamorous work. It is also where AI has its highest certainty of return.

The reason to put seventy percent of your effort here is simple. This is the only tier where the economics are already proven. The time saved is measurable. The error reduction is auditable. The models are mature. And most importantly, the work is bounded enough that the AI does not need to be clever; it only needs to be consistent.

In a multi-million-euro automotive portfolio I once managed, an honest look at the category team's week revealed a pattern I have since seen everywhere: a substantial share of the time was absorbed by supplier data reconciliation and purchase-order follow-up. Not negotiation. Not strategy. Reconciliation.

The risk profile here is low and the upside compounds. Every hour of operational work automated is an hour returned to the work only humans can do.

The 20: Analytical Augmentation

The next layer is where AI starts to raise the ceiling rather than raise the floor.

This is spend analysis that answers questions in natural language. Market intelligence that surfaces signals before your supplier does. Should-cost models that a category manager can interrogate, not just inherit. Risk radar that flags a geopolitical shift in a tier-three supplier before it becomes a crisis.

The reason to put twenty percent of your effort here, and not more, is that this layer demands human judgment in the loop. The AI can assemble the evidence. It cannot yet assemble the conviction.

A category manager preparing for a quarterly business review with a strategic supplier needs synthesis, not just retrieval. A buyer negotiating a multi-year framework needs a counterpart, not an oracle.

This is a layer that rewards investment, but only once the operational foundation is in place. A team drowning in reconciliation cannot, in practice, use a good should-cost model. They will not have the time. The 20 presupposes the 70.

The risk profile here is moderate, the upside is significant, and the failure mode is usually adoption rather than accuracy. The tools work. People are too busy to use them.

The 10: Strategic Experimentation

And then there is the tenth.

This is the tier where the procurement function looks genuinely different in five years than it does today. Agentic workflows that can run a sourcing event end-to-end. Algorithm-to-algorithm negotiation. Autonomous supplier discovery in markets your team does not yet know. Portfolio-level risk simulation that treats the supply base the way a treasurer treats a currency book.

The reason to cap this at ten percent is not that it is unimportant. It is that the failure rate is high, the benchmarks are few, and the operating-model implications are not yet understood. This is R&D budget, not operating budget. R&D budget follows a different discipline: you should expect most of it to produce learning rather than savings.

I have spent parts of the last year working on exactly this frontier: what commercial negotiation looks like when both sides are running AI agents. The work is fascinating. It is also not ready to scale. Treating it as if it were would be a category error.

The risk profile here is high and the upside is asymmetric. You will lose most of this budget. The one bet that works will reshape the function.

How the rule actually gets violated

The interesting thing about the 70/20/10 rule is not the numbers. It is what happens when organizations violate them.

The most common failure is inversion. A steering committee gets excited about an autonomous negotiation demo, a vendor pitch lands well with the board, and suddenly sixty percent of the budget is flowing into the ten. Meanwhile, the P2P team is still matching invoices by hand.

The second failure is total automation. Everything goes into the seventy. The function gets leaner, tickets close faster, and five years later the procurement organization has not developed a single new capability. Efficiency is not strategy.

The third failure is the split that ignores the foundation entirely: roughly half to analytical tools, half to experiments, nothing to the operational layer. This is the profile of teams who believe, often sincerely, that they have "solved" the basics. They have not. They have only stopped looking.

The rule is not prescriptive in its exact ratios. Your number might be 60/30/10, or 80/15/5. But the ordering matters. The operational layer earns the right to the analytical. The analytical layer earns the right to the strategic. Most organizations try to skip steps. Few succeed.

Key Takeaways

  • The 70/20/10 Rule of Procurement AI allocates effort and budget 70% to operational automation, 20% to analytical augmentation, and 10% to strategic experimentation.
  • Operational automation is where AI has its highest certainty of return, because the work is bounded and the models are mature.
  • Analytical augmentation raises the ceiling of human performance, but it requires the operational foundation to be in place first.
  • Strategic experimentation is R&D budget, not operating budget. Most of it should produce learning, not savings.
  • The most common allocation failure is inversion: over-investing in the flashy and under-investing in the foundational.
  • Efficiency is not strategy. A function that only automates does not transform.
  • The discipline is not what to build. It is what to build first.

Closing

Procurement has always been a function of proportion. Of knowing where the next euro, the next hour, the next conversation belongs. The suppliers we spend most of our time on are rarely the ones that carry the most strategic weight. The categories that make the board reports are rarely the ones that keep the factory running.

AI does not change this. If anything, it makes the discipline of proportion more important, because the surface area of what is possible has expanded faster than the surface area of what is useful.

The 70/20/10 rule is not a roadmap. It is a posture. A way of asking, before the next pilot or the next procurement of a procurement tool, a simple question: which tier am I funding, and does the tier below it already work?

The leaders who get this right will not have the most impressive AI strategies. They will have the most boring ones. And five years from now, they will have the functions that actually changed.