Most AI consultants will take your brief, run it through their standard template, and deliver a polished demo that works perfectly — for the problem they imagined you had.
That's not how I work.
The brief always lies
Not intentionally. But when you describe your workflow in a 45-minute call, you describe the version you think exists. The version that's in your process document. The version that made sense when you wrote it down three years ago.
What actually happens on the ground is different. The spreadsheet that "shouldn't exist but everyone uses." The workaround that became load-bearing six months ago. The one person on the team who knows how the system actually works and is quietly terrified of going on holiday.
AI agents built against the imagined workflow fail against the real one.
What I see when I'm there
When I visit, I watch. I ask dumb questions. I sit with whoever does the thing that's supposed to be automated and watch them do it.
What I find is usually three things:
- The real inputs. Not what you described — what actually arrives. PDFs that are scanned sideways. Emails with attachments in five different formats. Verbal handoffs that never get written down.
- The real decision points. Where does a human have to make a call? Where does context matter that can't be captured in a field? These are the places where naive automation breaks.
- The real stakes. Which parts of the workflow are fine to get slightly wrong, and which parts cost you a customer or a compliance audit if an AI gets them wrong?
The outcome
After a few days on-site, I have something no brief can give me: an accurate mental model of what your operation actually is.
That's what I design against.
The AI agents I build after a visit are boring in the best way — they work, they handle edge cases, and the people who use them don't hate them.
If you want to explore what this could look like for your business, get in touch. You cover getting me there. I'll handle the rest.