Most AI Debates Compare the Wrong Things

The debate about which AI tool to use usually starts in the wrong place.

Copilot vs ChatGPT vs Claude. Which scores higher on benchmarks. Which writes better code, summarises more accurately, reasons through harder problems. These are real differences, and they matter at the margins. But for most organisations, they're not the actual question.

The actual question is: can the tool reach your data?

We chose Copilot. Not because it's the strongest model — it isn't, and plenty of people in the industry will tell you so. We chose it because we're a Microsoft shop with years of institutional knowledge locked in PDFs, PowerPoints, emails, and SharePoint. Copilot sees all of that by default. The integration isn't a project you have to plan; it's just there.

You could connect ChatGPT or Claude to SharePoint. But that's not a default — it's a configuration decision, a security review, a custom integration, and ongoing maintenance. For an organisation that isn't yet digitally mature, that overhead is real.

This is the part that gets glossed over in most AI discussions. Model capability and data access are treated as the same dimension when they're not. You can have the most capable model in the world sitting in front of a blank context window, and it will give you confident, generic answers. A slightly weaker model with access to your meeting transcripts, your internal documents, your product history — that model will give you something you can actually use.

Access is the multiplier. Capability is the ceiling.

Meeting transcriptions alone changed how we work. A search across past meetings that would once have taken half a morning now takes seconds. Not because Copilot is particularly clever — because it can see what exists.

If we were a digitally mature organisation with clean APIs, well-structured data pipelines, and strong internal tooling, the decision might have gone differently. I'd have had more flexibility to choose on model quality alone. But we're not there yet, and choosing a tool that works with the reality you have is a more honest approach than picking the theoretically superior option that requires six months of integration work before it delivers anything.

This isn't an argument for Copilot specifically. It's an argument for asking a different question before you make the decision.

Most organisations benchmarking AI tools are comparing capabilities. They should be mapping their data: where it lives, what format it's in, how accessible it is to an external tool, what the integration cost actually looks like before any value flows.

The tool that wins that audit won't always be the strongest. It'll be the one that can see your context with the least friction.

When you're trying to choose, the question isn't which model is best. It's which one can actually reach what you know.

Chances are, it's not the model that's blocking you.

It's access.