I've sat in plenty of board meetings where AI was framed as the answer to almost everything. Customer service, recruitment, sales follow-ups, finance approvals, the lot. Some of it was sensible. Some of it should have made everyone in the room nervous. Most of the time, nobody asked the awkward question: when this thing makes a decision and it's the wrong one, who's actually carrying it?
For a small business, the question is the same. It just looks different. It isn't an automated loan approval or an AI hiring system. It's the receptionist who's started pasting customer details into ChatGPT to write quotes. It's the auto-reply that sounds great until it gives somebody the wrong opening hours and they drive thirty miles for nothing. It's the "smart" inbox assistant that quietly decides which emails are urgent and drops the one that mattered.
You don't need a governance framework. You need to think clearly about a few things, and put light-touch rules around them before something goes wrong rather than after.
somebody has to own it
If an AI tool is doing real work in your business, it needs an owner. Not "I bought it, it runs itself". A person who knows what it's doing, when it last got it wrong, what data it sees, and what they'd do if it stopped working tomorrow morning. In a business of three people that owner is probably you. Pretending otherwise is how things go wrong slowly and quietly for months.
It doesn't have to be formal. It just has to be clear. Pick the person, write it down, move on.
decide what it does on its own and what it doesn't
Some jobs you're happy to let AI do without a human looking. Some you want a human to check before it goes out the door. It's worth deciding which is which up front, before you turn the thing on, not after.
A rule of thumb that works: if the cost of getting it wrong is just embarrassment, automate it. If the cost is a customer, a refund, a complaint, a legal letter, or a bad review on Google, put a human in the loop. The clue is the size of the apology you'd have to make.
don't feed it things you can't account for
Every prompt you put into a free AI tool is, at minimum, leaving your business and going somewhere you don't control. Sometimes it's used for training the next version. Sometimes it's stored. Sometimes it's logged in places you'd struggle to explain to a customer who asked the right question.
If you're pasting customer names, addresses, financial details, contracts, medical notes, or anything you wouldn't put on a postcard, you've quietly made a privacy decision on behalf of people who didn't agree to it. That's the kind of thing that ends up in front of the ICO if a customer ever complains.
The fix isn't "stop using AI". The fix is: use the right tier of the right tool, paid where it matters, with the data settings actually checked, and a clear list of what is and isn't allowed to go in the prompt box.
keep a record of what it did
This sounds like a big-business idea, but the logic works at any scale. If something goes wrong in three months and you need to work out why, you want to be able to look back at what the AI was told, what it produced, and who acted on it. Most tools will let you turn this on. Hardly anyone bothers, and then they're surprised when they can't explain themselves.
A simple folder of saved chats, a plain-text log, a paid plan with history switched on. None of it is hard. It just has to exist before you need it.
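If you want the plain-text log version with zero fuss, a few lines of script will do it. This is only a sketch, and every name in it is an assumption: a log file called ai_log.txt, a handful of fields I picked because they match the questions above (what the AI was told, what it produced, who acted on it). Adapt it to whatever your business actually does.

```python
# Minimal sketch of a plain-text AI usage log. File name and fields are
# illustrative assumptions, not a standard; change them to suit.
from datetime import datetime, timezone

LOG_FILE = "ai_log.txt"  # hypothetical file name; keep it somewhere backed up

def log_ai_use(tool, prompt_summary, output_summary, acted_on_by):
    """Append one timestamped, tab-separated line recording what the AI
    was told, what it produced, and who acted on it."""
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    line = f"{stamp}\t{tool}\t{prompt_summary}\t{output_summary}\t{acted_on_by}\n"
    with open(LOG_FILE, "a", encoding="utf-8") as f:
        f.write(line)

# Example entry: drafting a quote, with no customer details in the prompt
log_ai_use(
    tool="ChatGPT",
    prompt_summary="Draft quote for fencing job, no customer details included",
    output_summary="Quote email draft, edited by hand before sending",
    acted_on_by="Sam",
)
```

That's the whole idea: one line per use, appended to a file you could open in three months and actually read.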
watch for drift
This is the one nobody warns you about. An AI tool that worked perfectly in March can be making subtle mistakes by September. Sometimes the model behind it has been updated by the provider. Sometimes the way you're using it has crept into new territory. Sometimes the data feeding it has shifted and nobody noticed.
You don't need formal monitoring. You need to look every so often, with a critical eye, and ask: "Is this still doing what I think it is?" That five-minute review every month is worth more than any policy document on a shelf.
The instincts that matter at every scale are the same. Know who's responsible. Decide what gets checked. Keep a record. Watch for things going quietly wrong. Everything else is paperwork.
what good looks like in a small business
- One named person owns each AI tool that's doing real work
- A short, written rule about what data goes in and what doesn't
- A clear list of which tasks are fully automated and which need a human check
- A monthly five-minute look at what the AI's actually been doing
- A way to switch it off and carry on without it, if you ever have to
That's it. No 40-page framework, no committee, no consultants in expensive suits. Just enough structure that you're in charge of the AI, not the other way round.
why I think about this
I spent years sitting in meetings where the language was risk registers, model lifecycle management, explainability standards, audit committees. All of that matters at scale. None of it is sensible advice for a business of five people. But the underlying instincts scale all the way down: knowing who's responsible, deciding what gets checked, keeping a record, watching for things going quietly wrong.
The difference now is that I have to do it at my speed or fail. No governance forum to wait on. No 60-page policy to draft. Just: what's the smallest thing we can put in place this week that means you're not flying blind? That's the version of AI governance a small business actually needs, and it's the one I tend to write for people when I'm asked.