From Pilot to Profit: The 90-Day AI Implementation
Most AI projects die in a fog of good intentions.
The owner commits to “doing AI properly.” The team nods. Meetings happen. Tools get signed up for. Three months later, someone asks what progress has been made, and nobody’s quite sure.
This doesn’t happen because AI is hard. It happens because nobody set a plan.
Here’s one. Ninety days. From nothing to a working, measurable AI implementation that pays for itself.
Each week has a job. Each month has a goal. At the end, you know exactly what you’ve built, how much it saves, and what to do next.
Print this. Stick it on the wall. Follow it.
Why 90 days specifically
Forbes and most credible AI consultants recommend a 90-day pilot window for a reason.
Shorter than that and you haven’t given the tool time to prove its value. Longer than that and the urgency evaporates. Somebody gets sick. Something else becomes a priority. The project drifts.
Ninety days is tight enough to stay focused and long enough to produce real results. It’s three months. It’s a quarter. It’s the natural rhythm of business planning.
The businesses that work to a 90-day plan ship working automation. The ones that don’t, don’t.
Month 1: Problem identification and supervised pilot
Week 1: Find the problem
Don’t start with a tool. Start with a problem.
List every task you or your team do at least three times a week. Don’t edit. Just list.
Now score each one on two dimensions:
Time cost: How many hours a week across the team?
Enjoyment: How much do people hate doing it?
The tasks that score high on both are your candidates. Pick one. Write it down. That’s the pilot target.
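The scoring above is simple enough to do on paper, but here's a minimal sketch of the same ranking in Python. The task names and scores are hypothetical examples, not recommendations:

```python
# Rank candidate tasks by combined time cost and dislike.
# Task names and scores below are hypothetical placeholders.
tasks = {
    "lead follow-up":  {"hours_per_week": 5, "dislike": 9},  # dislike on a 1-10 scale
    "invoice chasing": {"hours_per_week": 3, "dislike": 8},
    "meeting notes":   {"hours_per_week": 2, "dislike": 6},
}

def score(task):
    # Multiply the two dimensions so a task must rate high
    # on BOTH time cost and dislike to top the list.
    return task["hours_per_week"] * task["dislike"]

ranked = sorted(tasks, key=lambda name: score(tasks[name]), reverse=True)
print(ranked[0])  # the pilot target
```

Multiplying rather than adding is a deliberate choice: a task the team merely tolerates won't outrank one they both hate and lose hours to.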
Week 2: Design the supervised pilot
Now design the test. Write down:
The exact task the AI will attempt
The tool you’ll use (your generalist AI or a specialist like Dext)
The metric you’ll measure (time saved, accuracy, throughput)
The baseline — how long does it take now?
Target: get a supervised pilot running by end of week two.
Week 3: Run the pilot
Every time the task comes up this week, do it twice. Once manually. Once with the AI. Log:
Time taken (manual vs AI)
Quality of output
Corrections needed
You’ll generate real data. Not opinions. Numbers.
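A spreadsheet works fine for the log, but if you want the week's summary computed for you, here's a minimal sketch. The sample runs are hypothetical data, not real results:

```python
# Summarise a week of pilot logs: manual vs AI timings per run.
# The runs below are hypothetical sample data.
runs = [
    {"manual_min": 20, "ai_min": 6, "edits_needed": True},
    {"manual_min": 25, "ai_min": 5, "edits_needed": False},
    {"manual_min": 18, "ai_min": 7, "edits_needed": True},
]

# Total minutes saved across all logged runs.
saved = sum(r["manual_min"] - r["ai_min"] for r in runs)

# Fraction of runs where the AI output needed human correction.
edit_rate = sum(r["edits_needed"] for r in runs) / len(runs)

print(f"Minutes saved this week: {saved}")
print(f"Share of runs needing edits: {edit_rate:.0%}")
```

Two numbers, straight from the log. They feed directly into the week-4 decision.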
Week 4: Decide
Review the week’s logs. One of three things will be true:
AI beat the manual process cleanly → scale it in month two
AI matched the manual output but needed editing → refine the prompt or setup, try again next month
AI couldn’t do the task → pick a different task, start month one again
Whatever the outcome, you’ve learned. That’s the point.
By end of month one, you’ve proven one small thing works. One.
Month 2: Integration and scaling
Week 5: Connect the tools
If month one produced a working workflow in isolation, now you connect it to your existing systems.
This week:
Wire the AI workflow into your CRM, email, or project tool
Use Zapier or native integrations where possible
Test every connection end-to-end
The goal: the workflow runs without you opening a separate app. It happens in the background.
Week 6: Name the champion and train the team
If you haven’t named an AI champion yet, do it now. Give them a four-hour-a-week remit (see my earlier newsletter for the full job description).
Run a short team training session. Fifteen minutes. Show them:
What the workflow does
How to check it’s working
What to do if it breaks
How to give feedback
Make this a habit, not a one-off.
Week 7: Build the second workflow
By now, the first workflow is solid. You know how it behaves. Trust is building.
Time to start the second.
Same process as month one — problem, pilot, scale. But shorter. You’ve built the muscle. Each new workflow takes less time than the first.
Week 8: Document everything
Take a day to write up what you’ve built. Short SOP. Include:
What each workflow does
How it’s connected
Who owns it
How to troubleshoot
Current savings metric
One page per workflow. Stored in a shared location. This is the team brain.
By end of month two, you’ve got two live workflows, an owner, and documentation. You’re ahead of most SMEs.
Month 3: Measurement and iteration
Week 9: Lock the metrics
Pull together the data you’ve been logging.
Hours saved per week (per workflow and total)
Cost (tool subscriptions and integration)
Team adoption (who uses what, how often)
Error rate (where human correction was needed)
Lay this out on one page. This is your AI scorecard.
Week 10: Iterate based on data
Look at the scorecard. Where are the biggest gaps?
If error rate is high: tune the prompts, add review steps
If adoption is low: more training, simpler interfaces, better defaults
If savings are lower than expected: either fix the workflow or park it
Spend the week making focused improvements. Don’t try to improve everything.
Week 11: Build the third workflow
Start the third automation. By now you’re doing this in parallel with your other work, not as a big project.
The first workflow took a month. The second took two weeks. The third should take one.
That’s how implementation accelerates. Not because AI gets better. Because you get better at shipping.
Week 12: The 90-day review
Pull everything together. Hold a simple review meeting with yourself, the champion, and anyone else involved.
Questions to answer:
What did we build? (the three workflows, in detail)
What does it save? (hours and pounds, total)
What did we learn? (surprises, good and bad)
What’s next? (the quarter ahead)
Document the answers. This becomes the case study you reference when someone asks, “Is AI really worth it for us?”
Spoiler: the answer is yes. Now you have proof.
The metric your finance director cares about
All through the 90 days, keep this simple calculation visible.
Monthly AI investment:
Tool subscriptions: £X
Champion’s time (4 hours/week × hourly rate): £Y
Training and setup (amortised): £Z
Total monthly cost: £(X+Y+Z)
Monthly return:
Hours saved per week × 4 weeks × average hourly rate
Revenue generated (from better-responded leads, faster proposals)
Cost avoided (not hiring, not outsourcing, not missing opportunities)
Total monthly value: £W
ROI: (W - (X+Y+Z)) / (X+Y+Z)
After 90 days, this ratio should be well over 3:1. Often 5-10:1 for well-built workflows.
If it isn’t, something’s wrong — most likely the workflows are too clever, the adoption is too low, or the measurement is too fuzzy. Fix whichever applies.
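The calculation above fits in a few lines of Python if you want it live rather than on paper. Every figure here is a hypothetical placeholder; swap in your own:

```python
# Monthly AI scorecard, following the cost and return lines above.
# All figures are hypothetical placeholders.
tool_subscriptions = 80.0            # X: monthly tool cost
champion_time = 4 * 4 * 20.0         # Y: 4 hrs/week x 4 weeks x hourly rate
training_amortised = 40.0            # Z: setup cost spread over months

cost = tool_subscriptions + champion_time + training_amortised

hours_saved_per_week = 12
avg_hourly_rate = 40.0
time_value = hours_saved_per_week * 4 * avg_hourly_rate
revenue_generated = 0.0              # from better-responded leads, if measured
cost_avoided = 500.0                 # e.g. outsourcing you no longer need

value = time_value + revenue_generated + cost_avoided  # W

roi = (value - cost) / cost
print(f"Monthly cost: £{cost:.0f}, value: £{value:.0f}, ROI: {roi:.1f}x")
```

Run it monthly with updated figures and you have the scorecard your finance director will actually read.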
What 90 days looks like, realistically
Let me paint the picture.
Day 1: You list ten repetitive tasks. You pick lead follow-up. You decide to test Claude + Zapier.
Day 14: You’ve got a basic workflow running. Every new lead gets an AI-drafted reply that you review and send. It’s rough but working.
Day 30: You’ve logged 26 AI-drafted replies. You sent 22 with minor edits. You saved roughly 7 hours. You’re convinced.
Day 45: You’ve wired the workflow into your CRM. Replies happen automatically when leads come in. You just approve or tweak. Your champion has started building the second workflow — invoice processing.
Day 60: Invoice processing is live. Your accounting pile has disappeared. The champion has documented both workflows. The team has watched a 15-minute training video.
Day 75: Third workflow — meeting notes — is in pilot. Your CRM now automatically gets filled with summaries of every client meeting. Nothing gets forgotten.
Day 90: All three workflows running. Scorecard shows 11 hours saved per week. Total monthly cost: £180. Value generated: £1,600 a month. ROI by the scorecard formula: roughly 8x.
You can now go back to your original task list and pick the next three.
This isn’t fantasy. This is what a careful 90-day plan delivers in most SMEs.
What to avoid across the 90 days
Three common failure modes.
Scope creep. Week two, someone suggests adding a second workflow. Then a third. Next thing you know, you’ve started five and finished none. Stick to one per month. Finish each one before starting the next.
Tool hopping. You try Claude. You read that ChatGPT just launched a new feature. You switch. Halfway through, Gemini announces something shiny. You switch again. End result: you know three tools superficially, none of them deeply. Pick one at the start. Stick with it for the full 90 days.
Skipping the scorecard. The people I see fail hardest are the ones who “feel” AI is helping but never measure. The scorecard is the thing that protects your budget when someone senior asks “what does this actually do?”
The practical bit for this week
Print the 90-day plan. Put it on the wall. Start with day one.
Today: list every repetitive task
Tomorrow: pick the one with highest time cost and highest team resentment
Wednesday: pick your tool and write down the metric
Thursday: design the supervised pilot
Friday: start running it
That’s week one. Follow the plan from there.
If you stall in week two, your first pilot is probably too ambitious. Pick something smaller. Repeat.
If you stall in month two, your champion doesn’t have enough time. Protect their four hours.
If you stall in month three, your measurement is fuzzy. Tighten the scorecard.
Everything else sorts itself out.
The bottom line
AI implementation isn’t complicated. It’s just structured.
One month to prove one thing works. One month to scale and document. One month to measure and build more.
Ninety days. Three workflows. Ten hours a week saved. Documented proof of ROI.
This is achievable for any UK SME with an owner willing to commit a few hours a week. No agency. No consultant. No expensive platform.
Most businesses won’t do it. They’ll talk about AI all year. They’ll sign up for tools and forget to use them. They’ll end 2026 in the same place they started.
You won’t. Because you’ve got a plan now. And plans that fit on one page, with weekly actions and a clear metric, tend to get done.
Start this week. Day one is listing ten repetitive tasks. That’s it.
In ninety days, you’ll have something most of your competitors will spend the next year trying to build.