
What an AI consulting engagement actually looks like


What actually happens during an AI consulting engagement: the audit, the recommendations, the deliverables. No mystery, no fluff, just the real process.


Quick answer

An AI consulting engagement is 3-4 weeks of structured assessment. I audit your operations, identify automation opportunities, and deliver a prioritised roadmap with cost estimates. The main output is clarity: which projects are worth doing, which aren't, and what order to tackle them in.

When I tell people I do AI consulting, the most common reaction is a polite nod followed by "but what do you actually do?" Fair question. "Consulting" is one of those words that can mean anything from "we'll have some meetings" to "I'll rebuild your entire operation."

Here's exactly what happens when a business hires me for an AI consulting engagement. Week by week, deliverable by deliverable. No mystery.

Week 1: The operations audit

The first week is all observation and questions. I'm not looking at technology yet. I'm looking at how the business actually works.

Days 1-2: Shadowing and interviews

I spend time with the people who do the operational work. Not the managers who describe what should happen, but the people who actually do it. The accounts team processing invoices. The support agents handling tickets. The ops coordinator juggling spreadsheets.

I watch them work. I ask questions like:

  • "Walk me through what you do with this when it arrives in your inbox."
  • "What happens when this number doesn't match?"
  • "How often does this go wrong, and what do you do when it does?"
  • "Which part of this takes the longest?"
  • "What would you automate first if you could?"

That last question is gold. The people doing the work almost always know where the biggest bottlenecks are. They've been living with them every day.

Days 3-4: Process mapping

I document every workflow I've observed. Step by step, including the decision points, exceptions, and workarounds.

Most processes have a core happy path that's straightforward and 3-5 exception paths that are messy. The happy path is usually easy to automate. The exceptions are where the money and complexity live.

For each workflow, I capture:

  • Volume: how many times per day/week/month
  • Time per instance: how long each one takes
  • Error rate: how often something goes wrong
  • Cost of errors: what happens when it does go wrong
  • Data sources: where the inputs come from and what format they're in
  • Decision points: where human judgment is required vs where it's just rule-following

Day 5: Data assessment

On the last day of the audit week, I look at the data. Not just whether it exists, but whether it's good enough to work with.

I ask for exports from whatever systems they use: CRM, accounting software, support platform, spreadsheets, email archives. Then I check:

  • Volume: do they have enough historical data? (500+ examples is my minimum)
  • Quality: is it consistent? Are fields filled in reliably?
  • Format: is it structured (database, CSV) or unstructured (PDFs, emails)?
  • Accessibility: can I get to it programmatically, or is it locked in a system with no API?
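A first pass at these checks can even be scripted. Here's a minimal sketch of the blank-rate and consistency checks, using plain Python on a few hypothetical export rows (the field name `doc_type` and the sample values are illustrative, not from any real client system):

```python
from collections import Counter

def audit_field(rows, field):
    """Report the blank rate and category spread for one field of an export."""
    values = [row.get(field, "").strip() for row in rows]
    blanks = sum(1 for v in values if not v)
    # Normalise case to spot near-duplicate categories ("Invoice" vs "INVOICE")
    normalised = Counter(v.lower() for v in values if v)
    return {
        "blank_rate": blanks / len(values),
        "raw_categories": len({v for v in values if v}),
        "normalised_categories": len(normalised),
    }

# Hypothetical export rows: three spellings of "invoice" plus one blank field
rows = [
    {"doc_type": "Invoice"},
    {"doc_type": "invoice"},
    {"doc_type": "INVOICE"},
    {"doc_type": ""},
    {"doc_type": "Credit note"},
]
report = audit_field(rows, "doc_type")
print(report)  # blank_rate 0.2; 4 raw spellings collapse to 2 real categories
```

A gap between `raw_categories` and `normalised_categories` is exactly the "6 ways to type invoice" problem: the data exists, but it needs cleaning before any model can use it.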

Note

The data assessment is usually where the biggest surprises happen. A business might think they have great data because they've been tracking things for years. Then I look at the export and find 40% of a key field is blank, categories are inconsistent, and there are 6 different ways someone has typed "invoice."

Weeks 2-3: Analysis and scoring

This is where I do the actual thinking. No meetings, minimal interruptions. I need focused time to score each opportunity and build realistic estimates.

Opportunity scoring

Every workflow I've documented gets scored on 4 dimensions:

Automatability (1-10): How feasible is it to automate this with current technology? A process that follows clear rules with structured data scores 8-9. One that requires nuanced judgment with unstructured inputs scores 3-4.

Impact (1-10): How much time/money does this save? A process consuming 20 hours/week with a 5% error rate that costs £200 per error scores high. A 2-hour/week task with no error cost scores low.

Data readiness (1-10): Is the data available, clean, and sufficient? Existing structured data with 1,000+ examples scores 9. "We'd need to start tracking this" scores 2.

Risk (1-10, inverted): What happens if the automation gets it wrong? Low-risk (human reviews output before action) scores 9. High-risk (automated payments, customer-facing decisions) scores 3-4.

The combined score gives a clear ranking. Most businesses end up with 1-2 opportunities scoring 7+ (build these first), 2-3 scoring 4-6 (worth revisiting later), and 3-5 scoring below 4 (don't bother).
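The scoring above is simple enough to sketch in a few lines. Equal weighting across the four dimensions is my assumption here (in practice the weights shift per client), and the example opportunities are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Opportunity:
    name: str
    automatability: int  # 1-10: feasibility with current technology
    impact: int          # 1-10: time/money saved
    data_readiness: int  # 1-10: data available, clean, sufficient
    risk: int            # 1-10, already inverted: higher = safer

    @property
    def score(self) -> float:
        # Equal weighting is an assumption; adjust per engagement
        return (self.automatability + self.impact
                + self.data_readiness + self.risk) / 4

def bucket(score: float) -> str:
    """Map a combined score to the three zones from the roadmap."""
    if score >= 7:
        return "build now"
    if score >= 4:
        return "revisit later"
    return "don't bother"

opportunities = [
    Opportunity("Invoice matching", 9, 8, 8, 9),
    Opportunity("Support triage", 6, 7, 4, 6),
    Opportunity("Sales forecasting", 4, 5, 2, 3),
]
for opp in sorted(opportunities, key=lambda o: o.score, reverse=True):
    print(f"{opp.name}: {opp.score:.1f} -> {bucket(opp.score)}")
```

The point isn't the arithmetic, it's the forced comparison: every idea gets the same four questions, so "my favourite project" and "the boring invoice one" compete on the same footing.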

Cost estimation

For each viable opportunity, I estimate:

  • Build cost: what it'll take to develop the solution
  • Timeline: realistic weeks to production, including data prep
  • Ongoing cost: annual maintenance, monitoring, and retraining
  • Expected savings: hours saved per week, errors avoided, revenue enabled
  • Payback period: months until the investment pays for itself

I use ranges, not precise figures. "£5,000-£8,000 to build, paying for itself in 3-5 months" is honest. "Exactly £6,432 with 4.2 month ROI" is pretending to know things I don't.
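Working with ranges is itself a small calculation: the best case pairs the low cost with the high savings, the worst case the reverse. A sketch, using the build-cost range above and an assumed annual savings range (the savings figures are illustrative):

```python
def payback_months(build_cost, annual_savings):
    """Payback range in months from (low, high) cost and savings tuples.

    Best case: cheapest build, biggest savings.
    Worst case: priciest build, smallest savings.
    """
    lo_cost, hi_cost = build_cost
    lo_save, hi_save = annual_savings
    best = 12 * lo_cost / hi_save
    worst = 12 * hi_cost / lo_save
    return round(best, 1), round(worst, 1)

# £5,000-£8,000 build; assumed £18,000-£20,000/year in savings
print(payback_months((5_000, 8_000), (18_000, 20_000)))  # (3.0, 5.3)
```

If the worst-case payback is still well inside a year, the project survives pessimistic assumptions, which is a far stronger claim than any single-point ROI figure.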

Week 4: The roadmap

The final week produces the deliverable: a document I call the AI Roadmap. It's typically 15-25 pages, and it contains everything the business needs to make informed decisions.

What's in the roadmap

Executive summary (1 page): The top 3 findings. What to build first, what to skip, and the expected ROI of the recommended projects.

Current state assessment (3-5 pages): The documented workflows, with the bottlenecks and waste highlighted. This is valuable even if you never build any automation because it usually reveals process improvements that cost nothing to implement.

Opportunity matrix (2-3 pages): Every identified opportunity scored on the 4 dimensions, ranked by overall viability. Clear visual showing what's in the "build now" zone vs "revisit later" vs "don't bother."

Recommended projects (5-10 pages): Detailed write-ups for each recommended project. What it does, how it works, what technology it uses, what data it needs, what the build looks like, and the expected financial outcome.

Implementation timeline (1-2 pages): A phased plan. Usually 2-3 projects over 6-12 months, not a massive transformation. Start with the highest-confidence, highest-ROI project. Prove value. Then expand.

Data action plan (1-2 pages): Specific steps to close any data gaps. If a promising opportunity scored low on data readiness, this section outlines exactly what to collect, how to collect it, and how long to collect it before the project becomes viable.

The presentation

I walk through the roadmap in person (or video call). Takes about 90 minutes. I expect questions, pushback, and debate. That's the point. The roadmap is a starting point for decisions, not a decree.

Usually the conversation goes one of three ways:

  1. "Let's start with project 1." Clear winner, obvious ROI. Most common outcome.
  2. "We need to fix our data first." The audit revealed gaps that need addressing before any AI work. Still valuable because now they know exactly what to fix.
  3. "This confirmed what we suspected, but we're not ready yet." Less common, but it happens. And it saves them from spending £15,000+ on a project that would have stalled.

What most people don't expect

The biggest value in the engagement isn't usually the recommendations. It's the process documentation. Most businesses have never had someone sit down and map out how their operations actually work. The documented workflows, with their decision points and exception paths, are useful regardless of AI.

I've had clients take the process maps from week 1 and use them to:

  • Onboard new staff faster ("here's exactly how this process works")
  • Identify manual steps that could be eliminated with no technology at all
  • Realise two teams were doing the same thing differently and standardise

The second thing people don't expect: I tell them what not to build. Most businesses come to me with 5-6 ideas for AI automation. Typically, 1-2 of them are genuinely good opportunities. The rest are either technically infeasible with current data, not worth the investment at current volume, or better solved with simpler tools.

Telling a client "don't build this, use Zapier instead" isn't giving up a sale. It's building trust. And it usually means they come back for the projects that are genuinely worth doing.

How much does this cost?

A full engagement runs £3,000-£6,000 depending on business complexity. A business with 3 core processes and clean data is at the lower end. One with 10+ workflows across multiple departments is at the higher end.

Is it worth it? The maths is simple. If the engagement prevents one bad £10,000 project, it's paid for itself twice over. If it identifies a good £5,000 project that saves £15,000/year, the ROI is clear within months.

Key Takeaways

  • Week 1 is observation: shadowing the team, mapping processes, assessing data quality.
  • Weeks 2-3 are analysis: scoring every opportunity on feasibility, impact, data readiness, and risk.
  • Week 4 delivers the roadmap: ranked projects, cost estimates, timelines, and a data action plan.
  • The highest value is often what you're told not to build. Avoiding one bad project covers the consulting cost.
  • Process documentation from the audit is valuable on its own, regardless of whether AI projects follow.

If you're considering AI for your business but aren't sure where to start, that's exactly what this engagement is for. I'll look at your operations, your data, and your goals, and tell you honestly what's worth building and what isn't. Get in touch and I'll outline what the process would look like for your specific situation.
