Wey Advisory


What we actually mean when we say AI (and what we don't).

Three different technologies, often lumped into one word. A short guide for the intelligent non-technical reader — with three Surrey-business examples.

18 March 2026 · Ricardo Gonzales · 4 min read

The word “AI” has become, over the last three years, one of the most useful and also one of the most misleading words in the English language. It is useful because it reaches across a boardroom table without anyone needing a glossary. It is misleading because it lashes together three quite different technologies, and the differences between them decide whether a tool will earn its keep in your business or waste six months of your team's goodwill.

Before any tool choice, it is worth slowing down for ten minutes and being honest about which of the three you actually mean.

1. Generative AI

This is the one everyone has in mind. ChatGPT, Microsoft Copilot, Google Gemini, Anthropic's Claude — systems that produce text, images, audio or code in response to a prompt. They are built on large language models, trained on enormous bodies of text, and they are extraordinarily good at a narrow set of tasks: drafting, summarising, translating, rewriting, explaining, converting one kind of document into another.

They are also confidently wrong in a predictable way. Give one a fact it does not know and it will often invent a plausible-sounding substitute. This is not a bug that will be patched next year. It is how the technology works. So the rule is simple: generative AI belongs in places where a human does the final edit, and where the cost of being occasionally wrong is a polite apology rather than a regulatory fine.

For a Godalming solicitor drafting a dozen near-identical letters a day to confirm receipt of instructions, a properly configured generative AI tool is straightforwardly useful. The partner still signs the letter. The junior stops losing Tuesday mornings.

2. Predictive models

These are the quieter, older cousins. Give a predictive model a lot of past data — weekly sales, claims, customer records, inventory — and it will tell you what is most likely to happen next. Which accounts are most likely to churn. Which invoices are most likely to be paid late. Which listings will sell inside thirty days and which will sit on the market for four months.

Predictive models do not generate new content. They produce a probability — usefully precise, rarely showy. They have been in serious use at banks and insurers for thirty years, and they have quietly kept getting better. They do not hallucinate, because they are not asked to write. They are asked to rank, score, or estimate.

A Cobham estate agency, with five years of its own transaction data, can sensibly use a predictive model to tell negotiators which leads are most likely to convert — and to sense-check the vendor's asking price on a new instruction. That is genuinely useful. It is also quiet — no chat interface, no dashboard, just a better-informed phone call on Monday morning.

3. Automation (the part nobody calls AI, but usually should)

A huge amount of what is sold as “AI” is not AI at all. It is plain automation — one system triggering another. When a customer books through a web form and a confirmation email goes out, a calendar invite is generated and a line lands in a CRM, no intelligence has been exercised. Rules have been. And that is fine: the rules are clear, auditable, and predictable.

The reason to separate automation from the first two is that it is usually the fastest, cheapest and most durable way to get value. It is also the least fashionable, which means most consultants won't sell it to you. We will. We often recommend that businesses do six months of unglamorous plumbing — tidying data, wiring tools together, writing down the actual process — before reaching for a generative-AI tool at all.

A Guildford architecture practice found that its billable hours were being quietly eaten by time-keeping, invoicing and the chase on overdue invoices. Good automation (no AI at all) recovered about six hours a week per principal. A generative-AI tool layered on top, later, added another three. The order mattered: the automation made the AI step sensible, because by then the data was clean.

Why this matters, practically

The reason to insist on the distinction is that each of the three fails differently, costs differently, and is suited to a different kind of problem.

  • Generative AI fails by being confidently wrong. It is cheap to trial, expensive to trust. Fit for drafting, summarising, and first-pass work under human review.
  • Predictive models fail by being wrong at the edges. They cost more to build, because they need your data. They are fit for estimates, rankings, and triage.
  • Automation fails by being brittle when a process changes. It is by far the cheapest of the three to run, and it is fit for any repeatable work that currently lives in a human's inbox.

A good adviser — and most good advisers — will ask which of the three you actually need before recommending a tool. A bad one will say “we'll put AI on that” and assume generative by default.

The question worth asking before the next meeting

If someone in your business is bringing you an AI idea this quarter, ask them which of the three it is. Ask them to be specific. If they can't tell you, that is information — it means the idea has not been thought through all the way to the thing it will actually do in the world.

The three technologies are all, at this point, genuinely useful. None of them is magical. And knowing the difference between them is, most of the time, ninety per cent of the decision.

Thinking about AI for your business or family office?

Book a 30-minute call. We will listen, and tell you honestly whether we can help.

Ricardo is the founder of Wey Advisory. He writes here about AI for owner-led businesses and private clients in Surrey.
