Wey Advisory


The first three questions to ask before buying any AI tool.

A short, practical piece for owner-led businesses. The three questions that, in our experience, separate a useful purchase from an expensive mistake.

4 April 2026 · Ricardo Gonzales · 4 min read

A salesperson is about to demonstrate an AI product to you. They are in your conference room, or on your Zoom, or on the other side of a sponsored coffee at a conference. They have a slide deck. They have a very convincing customer story. They have a short free trial and a quarterly price that is easier to say yes to than no.

Before you sign anything — or, better, before you agree to the first trial — we would urge you to ask three questions. In our experience these are the ones that reliably separate a useful purchase from a subscription you will be embarrassed about in a year.

1. “Which specific task does this replace or improve?”

Not “what does this do”. Not “what's the vision”. One specific task, currently done by a named person or team in your firm, that will be done differently on Monday if you buy this tool.

If the answer is a vague list of ten things, the tool will probably do none of them well. If the answer is a single concrete task — “drafts the first version of your engagement letters from the client intake form”, “triages your inbox for the partner's first hour of the day”, “turns a dictated site note into a properly formatted report” — you are in a much better place. You have a unit of value you can measure. You have a person you can ask in six weeks “is this helping?” and get a real answer.

This question also flushes out a common tell: when a sales team has to describe the product in terms of “AI-powered” generalities, it is often because the product's actual use-case does not quite survive plain English.

2. “Who on my team will use this, and what will they stop doing?”

Tools that arrive without a clear owner rarely survive the quarter. This is not a new observation — it was true of CRM software in 2005 — but AI tools have a particular version of it, because the learning curve is shallow but not flat. Someone has to want to own the change.

The follow-up is harder and more useful: “what will this person stop doing?” If there is no answer — if the tool is simply being added on top of everyone's existing work — it will, at best, be used for three weeks. Every new tool either replaces a thing, or accelerates a thing, or dies. Saying this out loud at the point of the buying decision is the single cheapest form of risk management you have.

If the answer is “we'll all use it,” gently push back. “All” is almost never true. Pick the one person who will be held to account.

3. “What happens to our data — exactly?”

This question does two things. It makes the salesperson nervous, which is useful. And it exposes the substantive risk that any sensible board of a regulated firm is going to be asked about by its auditor, its insurer, or, at some point, its regulator.

You want precise answers, in writing, to four things. Where is our data processed — which country, whose servers? Is it used to train the vendor's underlying models, and can we turn that off in the contract? Who at the vendor has access to what, under what circumstances? What happens to our data if we cancel — is it deleted, and on what timeline?

A good vendor will give you these answers in a single page. A bad vendor will send you a 34-page security overview that does not quite answer any of them. If it's the second, assume the data story is worse than the marketing suggests and budget accordingly.

A small further note: if your business touches regulated work — legal, financial, clinical, or anything that holds children's data — the default consumer versions of most AI tools are not the right product for you. Insist on the enterprise tier, with a proper data-processing agreement. Insist, in writing, that your data is not used for training. Walk away if they can't agree to that.

A fourth question, for the really serious case

If the tool is going to touch something that matters to your business — client-facing work, pricing, hiring, financial reporting — there is a fourth question. “What is the worst thing this tool can plausibly do wrong, and what would it cost us to recover from that?”

For a generative-AI tool the most common answer is: it gets a fact wrong, in writing, and the partner signs it off without catching it. For a predictive model the most common answer is: it quietly mis-ranks in a way nobody notices for nine months. Neither is necessarily fatal. Both are recoverable. But it is worth knowing before you buy, not after.

The spirit of these questions

These questions are meant, above all, to slow the conversation down. AI buying decisions have, for the last two years, been running faster than the rest of the firm's procurement. The tools are cheap, the salespeople are charming, and the fear of being left behind is real. That is exactly the condition in which a small number of good questions, asked slowly and in writing, is worth the most.

You do not need to be technical to ask these three. You need only the patience to refuse to sign anything until the answers are clear, specific and plausible. That is not a technology skill. It is a partnership skill. It is the same skill that has kept good owner-led firms out of bad procurement decisions for a hundred and fifty years.

Ask these three, and then make your decision.

Thinking about AI for your business or family office?

Book a 30-minute call. We will listen, and tell you honestly whether we can help.

Ricardo is the founder of Wey Advisory. He writes here about AI for owner-led businesses and private clients in Surrey.
