The Bottleneck Is Not the Model. It Is the Interaction.

The next big shift in AI is not raw intelligence. Research from Stanford, McKinsey, and Deloitte points to human-AI interaction design as the defining frontier of 2026.

Tim Givois

March 06, 2026 · 5 min.
AI · Productivity · Claude · Future

Everyone is still betting on smarter models. More parameters, better benchmarks, faster inference. I get it. That is a legible bet. But I think it is the wrong one for the next two years.

The ceiling most teams are hitting right now is not intelligence. It is the interaction.

Stanford HAI put out its 2026 predictions late last year, and the thing that stuck with me was not a technical forecast. It was an idea from Professor Diyi Yang: AI should support long-term human development, not short-term engagement, and that has to be built into the design from the start rather than treated as an alignment problem you solve later. That is a fundamentally different frame from "make the model smarter and see what happens."

The research backs this up in a way that is hard to dismiss. McKinsey surveyed close to 2,000 executives across 105 countries for their 2025 State of AI report and asked what actually separates organizations getting real returns from those stuck in pilot purgatory. The answer was not model selection. It was whether they had defined, intentional processes for when humans review AI outputs and when they do not. The organizations doing this well were three times more likely to redesign their workflows around AI rather than just layer it on top of what already existed.
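To make "defined, intentional processes" concrete, here is a minimal sketch of what such a rule can look like. This is my own illustration, not anything from the McKinsey report; the task fields and thresholds are invented, and a real policy would be tuned to the workflow. The point is only that the review decision is explicit and owned, not improvised per task.

```python
from dataclasses import dataclass
from enum import Enum


class Route(Enum):
    AUTO_APPROVE = "auto_approve"   # AI output ships without review
    HUMAN_REVIEW = "human_review"   # AI drafts, a person signs off
    HUMAN_OWNED = "human_owned"     # a person does the work, AI assists


@dataclass
class Task:
    stakes: str        # "low" | "medium" | "high", agreed per task type
    confidence: float  # the system's estimated confidence, 0.0 to 1.0
    reversible: bool   # can the action be cheaply undone?


def route(task: Task) -> Route:
    # The thresholds are placeholders. What matters is that the rule
    # is written down somewhere a team can inspect and change it.
    if task.stakes == "high" and not task.reversible:
        return Route.HUMAN_OWNED
    if task.stakes == "high" or task.confidence < 0.8:
        return Route.HUMAN_REVIEW
    return Route.AUTO_APPROVE


# A high-stakes, irreversible task never ships without a person owning it.
print(route(Task(stakes="high", confidence=0.95, reversible=False)))
```

Trivial as code, but it is the difference between a designed process and an improvised one.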

Deloitte ran a similar study across 3,235 leaders in 24 countries. Two-thirds reported productivity gains. Only 30 percent had actually redesigned the processes underneath those gains. So a huge chunk of organizations are getting some value but leaving a lot on the table, not because the model is bad but because nobody thought carefully about the human half of the equation. Deloitte is now tracking "human-AI interaction design" as its own discipline, with new roles like AI operations managers and interaction specialists starting to show up at companies that are ahead of the curve.

My read on all of this: the leverage is not in the model. It is in the handoff.

Which is why I think the Claude versus ChatGPT conversation keeps missing the point. Most takes anchor on hallucination rates or memory or which one got a benchmark right. Those matter at the margins. But the actual difference, the one I notice every day, shows up when you give either system something genuinely complex. A task with multiple moving parts, ambiguous constraints, context that builds across a long conversation.

Claude holds that context. Not just technically, in the sense of a larger window, but in the way it actually uses it. It tracks what you are trying to accomplish rather than optimizing for what you literally typed. It pushes back when your approach has a problem. It does not give you a confident-sounding answer when the honest answer is that the question needs to be reframed first. That is Anthropic's Constitutional AI approach working the way it is supposed to: helpfulness and honesty baked in at the model level rather than filtered on top.

The gap between a tool that does what you asked and a tool that works with you toward what you actually need is enormous in practice. Most people have not felt that gap yet because they are using AI for tasks small enough that the difference does not show up. But the teams running serious workflows on this stuff feel it immediately.

The Deloitte data has one more finding worth sitting with. Organizations that invest in developing human capabilities alongside AI adoption are nearly twice as likely to report meaningful business results. Not twice as likely to get impressive demos. Twice as likely to see results. McKinsey calls this "superagency." The idea is not that AI replaces what people do but that it amplifies it, and you only get that amplification if the human side of the equation is treated as a variable worth designing around.

I think the organizations that figure this out in the next 12 to 24 months are going to build a lead that is genuinely hard to close. The models will keep improving for everyone. But the ability to design good interactions, to think clearly about what the human should own versus what the AI should handle, is its own compounding skill. And most teams are not investing in it.

Stanford's prediction for 2026 is that the hype era gives way to an evaluation era. Questions shift from "can AI do this?" to "does it actually produce better outcomes?" I think that is right. And I think teams that have been treating interaction design as a real discipline will look very different from those that were just waiting for a smarter model.
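A hypothetical illustration of that shift (the numbers and the metric are invented by me, purely for shape): a capability-era team demos what the model can do; an evaluation-era team compares outcome metrics, like hours to resolve the same class of task, with and without the AI in the loop.

```python
from statistics import mean

# Invented numbers: hours to resolve comparable tasks.
baseline_hours = [4.2, 3.8, 5.1, 4.6, 3.9, 4.4]   # existing workflow
assisted_hours = [2.9, 3.1, 2.4, 3.6, 2.8, 3.0]   # AI-assisted workflow

def relative_improvement(baseline: list[float], treatment: list[float]) -> float:
    # An outcome metric moving, not a benchmark score. That is the shift.
    return (mean(baseline) - mean(treatment)) / mean(baseline)

print(f"{relative_improvement(baseline_hours, assisted_hours):.0%} faster")
```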

If this is the kind of thing you think about, stick around. AI and technology are what I keep coming back to: how they actually work, what they mean for the people building with them, and where things are heading. I write about all of that here as often as I have something worth saying.


Stanford HAI 2026 predictions · McKinsey State of AI 2025 · Deloitte Enterprise AI 2026