A 90‑Day Framework for Your First AI Automation Pilot (Part 3/3): Days 61–90

By this point, you should have more than a slide deck and good intentions.

From Part 1 (Days 1–30), you have:

  • A clear pilot charter with outcome, scope, metrics, and guardrails
  • A detailed “before” process map with time, volume, and pain points
  • A minimum viable automation design in plain language

From Part 2 (Days 31–60), you have:

  • A working, “read‑only” version of the workflow wired into real systems
  • AI-generated summaries and suggested actions in front of real users
  • Early data and feedback on where the pilot is helping (and where it is not)

The final 30 days are about turning this from “interesting experiment” into “real workflow with real impact.” The goal is simple: by Day 90, you want to be able to answer three questions with confidence:

  1. Did this pilot meaningfully improve the outcome we cared about?
  2. Do people actually use and trust it?
  3. Is it worth scaling, iterating, or shelving?

We’ll break Days 61–90 into two moves:

  1. Weeks 9–10: Turn on limited automation with clear guardrails
  2. Weeks 11–12: Measure impact, tell the story, and decide what’s next

Weeks 9–10: Turn on limited automation with clear guardrails

So far, your AI and automation have lived in “assistant” mode: they can see, summarise, and suggest, but humans still press the final buttons. In Weeks 9–10, you selectively flip some of those suggestions into real automated actions.

The key word here is selectively.

Step 1: Choose the safest, highest-confidence steps to automate

Go back to what you learned in Weeks 7–8:

  • Which suggestions are almost always accepted with little or no editing?
  • Which steps are purely mechanical, with clear rules and low risk?
  • Where would automation simply remove copy‑paste and clicks, not judgment?

Typically, good candidates include:

  • Creating or updating tasks or tickets based on a standard pattern
  • Tagging or categorising items where the model’s accuracy is consistently high
  • Posting summaries or digests into shared channels in a standard format
  • Populating non-critical fields in your systems that humans can easily correct

Mark those as your “Phase 1 automation” steps. Everything else stays in suggested‑action mode for now.
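If you logged suggestions and user responses during Weeks 7–8, the “almost always accepted” question can be answered from data rather than memory. A minimal sketch (the log shape and step names here are illustrative, not from any particular tool):

```python
def acceptance_rate(log: list[dict], step: str) -> float:
    """Share of a step's suggestions that users accepted without edits."""
    events = [e for e in log if e["step"] == step]
    if not events:
        return 0.0
    return sum(e["accepted_unedited"] for e in events) / len(events)

# Illustrative log entries from the suggested-action phase
log = [
    {"step": "tag_item", "accepted_unedited": True},
    {"step": "tag_item", "accepted_unedited": True},
    {"step": "draft_reply", "accepted_unedited": False},
]

print(acceptance_rate(log, "tag_item"))     # 1.0
print(acceptance_rate(log, "draft_reply"))  # 0.0
```

Steps with consistently high acceptance rates over a meaningful volume are your strongest Phase 1 candidates.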

Step 2: Define and document your guardrails

Before you turn anything on, be explicit about what the automation is not allowed to do in this phase. For example:

  • No changes to payment details, user permissions, or contractual terms
  • No final approvals for high-risk or high-value decisions
  • No deletion of records or irreversible operations

Write this down in plain language and share it with your pilot group. You want everyone to understand:

  • Which actions are now automated
  • Where humans remain firmly in the loop
  • How to pause or roll back if something looks off

If you have risk, legal, or compliance stakeholders, this is a good moment to brief them and confirm they’re comfortable with the boundaries you’ve set.
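It also helps to make these boundaries executable rather than purely documented: an explicit allowlist checked before any automated step runs. A minimal sketch, assuming hypothetical action names and a three-way routing convention (your systems will have their own):

```python
# Phase 1 guardrails: only explicitly allowlisted actions run unattended;
# everything else stays in suggested-action mode or is blocked outright.
ALLOWED_ACTIONS = {"create_task", "update_task", "tag_item", "post_summary"}
BLOCKED_ACTIONS = {"change_payment_details", "grant_permission",
                   "approve_contract", "delete_record"}  # never automated in Phase 1

def route_action(action: str) -> str:
    """Decide how a proposed action should be handled."""
    if action in BLOCKED_ACTIONS:
        return "reject"           # hard boundary: log it, notify, do not queue
    if action in ALLOWED_ACTIONS:
        return "auto_execute"     # Phase 1 automation
    return "suggest_to_human"     # default: human stays in the loop

print(route_action("tag_item"))            # auto_execute
print(route_action("delete_record"))       # reject
print(route_action("escalate_to_vendor"))  # suggest_to_human
```

Note the default: anything not explicitly allowlisted falls back to a human suggestion, which is the safe direction for an unknown action.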

Step 3: Turn on automation for a limited subset

Start small, even within your pilot. You might:

  • Automate only for a specific segment (e.g., “invoices under $5,000 from known vendors” or “internal tickets, not customer-facing ones”)
  • Limit automation to one team or region
  • Turn it on during certain hours when your team is available to monitor

The idea is to keep the blast radius small while you build confidence. Announce clearly to your pilot users when automation is live and what they should watch for.
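The blast-radius idea can be encoded as an eligibility check that every item must pass before the automation touches it. A sketch using the invoice example above; the thresholds, vendor IDs, and field names are placeholders for whatever your own segment looks like:

```python
from datetime import datetime

KNOWN_VENDORS = {"acme-supplies", "globex"}  # illustrative vendor IDs
MAX_AMOUNT = 5_000                           # automate only below this value
MONITORED_HOURS = range(9, 17)               # hours when the team can watch it

def eligible_for_automation(invoice: dict, now: datetime) -> bool:
    """Gate automation to a narrow, monitored subset of items."""
    return (
        invoice["amount"] < MAX_AMOUNT
        and invoice["vendor_id"] in KNOWN_VENDORS
        and now.hour in MONITORED_HOURS
    )

inv = {"amount": 3_200, "vendor_id": "acme-supplies"}
print(eligible_for_automation(inv, datetime(2025, 3, 4, 10, 30)))  # True
```

Anything that fails the check simply stays in suggested-action mode, so widening the segment later is a one-line change rather than a redesign.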

Step 4: Monitor closely and keep humans “on the loop”

For the first 1–2 weeks of automation, you want heightened visibility. Make it easy to see:

  • What the automation did (with a clear log or audit trail)
  • Where humans intervened or corrected it
  • Any unusual patterns (e.g., sudden spikes in certain actions)

Encourage your team to:

  • Flag anything that feels wrong or surprising
  • Share examples where automation saved them time
  • Suggest small tweaks that would make the workflow even smoother

If you see repeated corrections in a particular area, consider rolling that step back to suggested‑action mode and refining prompts or rules before re‑enabling.
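That rollback rule can be made concrete by tracking each automated step’s correction rate over a rolling window. A minimal sketch; the 20% threshold and 50-item window are illustrative defaults, not a standard, and you should tune them to your own risk tolerance:

```python
from collections import deque

class StepMonitor:
    """Track recent outcomes of one automated step and flag when to roll back."""

    def __init__(self, window: int = 50, max_correction_rate: float = 0.2):
        self.outcomes = deque(maxlen=window)  # True = a human corrected it
        self.max_correction_rate = max_correction_rate

    def record(self, was_corrected: bool) -> None:
        self.outcomes.append(was_corrected)

    def should_roll_back(self) -> bool:
        if len(self.outcomes) < 10:  # too little data to judge
            return False
        rate = sum(self.outcomes) / len(self.outcomes)
        return rate > self.max_correction_rate
```

When `should_roll_back()` fires, flip that step back to suggested-action mode, refine the prompts or rules, and re-enable it once the correction rate settles.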

Weeks 11–12: Measure impact, tell the story, and decide what’s next

The last two weeks are about stepping back from the day‑to‑day and looking at the pilot as a whole. This is when you turn your notes, logs, and anecdotes into a clear, simple narrative.

Step 1: Compare against your original baselines

Pull out the pilot charter from Part 1. Look at the baselines you captured:

  • Volume per week
  • Time per item or cycle time
  • Error rates or rework
  • Number of handoffs or manual steps

Now, gather the same metrics for the most recent 2–4 weeks of the pilot, ideally including the period with limited automation turned on.

You’re looking for directional answers:

  • Are we handling more work with the same or fewer people?
  • Has the cycle time meaningfully improved?
  • Are common errors or rework down?
  • Are we seeing fewer “dropped balls” or last‑minute scrambles?

You don’t need perfect precision, but you do want enough evidence to say, “This is working,” “This isn’t,” or “It’s promising but needs refinement.”
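The directional comparison itself is simple arithmetic: percent change per metric, baseline versus pilot. A sketch with invented placeholder numbers, purely to show the shape of the comparison:

```python
# Illustrative numbers only -- substitute your own charter baselines
baseline = {"items_per_week": 120, "cycle_time_hours": 18.0, "rework_rate": 0.12}
pilot    = {"items_per_week": 150, "cycle_time_hours": 11.5, "rework_rate": 0.07}

def pct_change(before: float, after: float) -> float:
    """Percent change from baseline to pilot, rounded to one decimal."""
    return round((after - before) / before * 100, 1)

for metric in baseline:
    print(f"{metric}: {pct_change(baseline[metric], pilot[metric]):+.1f}%")
# items_per_week: +25.0%
# cycle_time_hours: -36.1%
# rework_rate: -41.7%
```

One decimal place is plenty; anything more precise overstates the rigour of a 90-day pilot.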

Step 2: Capture qualitative feedback from users

Numbers matter, but so does how people feel about the new workflow.

Run a quick, focused pulse check with your pilot group:

  • On a scale of 1–5, how helpful is this new workflow in your day‑to‑day work?
  • What do you like most about it?
  • What frustrates you or slows you down?
  • If we turned it off tomorrow, would you miss it? Why or why not?

Look for patterns:

  • Do people trust the outputs?
  • Are they relying on it for real work, or treating it as a novelty?
  • Are there small changes that would dramatically improve adoption?

A few well-chosen quotes from real users will be more persuasive than a dozen charts.

Step 3: Turn results into a simple business story

Executives and stakeholders don’t want a technical deep dive. They want a clear, grounded answer to: “What did we get for the effort?”

Turn your findings into a short narrative, something like:

  • The problem: “Our team was spending too much time on X, which slowed Y and led to Z.”
  • What we piloted: “Over 90 days, we tested an AI‑assisted workflow that did A, B, and C, with humans still in control of critical decisions.”
  • What changed: “We reduced [time/effort/errors] by approximately X%, handling Y more items per week, and users rated the new workflow Z/5 on usefulness.”
  • What we learned: “It works well for [segment/use case], not yet for [edge cases]. We need to improve [specific issues] before broad rollout.”
  • What we recommend: “Here’s how we’d scale this, and here’s where we’d run a different pilot next.”

Keep it short enough to fit in a single slide or short email, but detailed enough that someone outside the pilot can understand the value.

Step 4: Choose one of three paths

At Day 90, you’re ready to make a call. Most pilots naturally fall into one of three categories:

  1. Scale it:
    • The outcome improved meaningfully.
    • Users like and trust the workflow.
    • Risks are understood and manageable.
    • Next step: Expand to more users, teams, or segments with minor adjustments.
  2. Iterate and extend:
    • The signs are positive, but uneven.
    • Some parts work well; others create friction.
    • Next step: Run another 30–60 day cycle with a narrower focus or refined design.
  3. Retire and redirect:
    • The outcome didn’t move enough to matter.
    • The workflow is too brittle, noisy, or misaligned with how people really work.
  • Next step: Capture what you learned and apply it to a different workflow that’s closer to the core of your business.

Killing or pausing a pilot is not failure. The point of a 90‑day framework is to limit downside while you learn what actually works for your organisation.
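If it helps to start the Day-90 conversation from shared criteria, the three paths above can even be captured as a rough rubric. This is a deliberately crude sketch, and the final call should always be a human judgment, not the output of a function:

```python
def recommend_path(outcome_improved: bool, users_trust_it: bool,
                   risks_manageable: bool) -> str:
    """Map Day-90 pilot signals to one of the three paths."""
    if outcome_improved and users_trust_it and risks_manageable:
        return "scale"
    if outcome_improved or users_trust_it:
        return "iterate"   # positive but uneven signals
    return "retire"        # capture the learnings, redirect the effort

print(recommend_path(True, True, True))    # scale
print(recommend_path(True, False, True))   # iterate
print(recommend_path(False, False, True))  # retire
```

The value is less in the function itself than in forcing everyone to agree, in advance, on what each signal means.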

Step 5: Turn one pilot into a repeatable playbook

The real payoff is not just one successful workflow—it’s the pattern you can reuse.

Document your 90‑day journey in a lightweight way:

  • The charter template you used
  • How you mapped the “before” process
  • How you designed your minimum viable automation
  • What you did in Days 31–60 and 61–90
  • What you’d repeat, and what you’d change next time

This becomes your internal playbook for future pilots. Over time, a series of small, successful experiments—and a few thoughtful failures—add up to a genuine AI and automation strategy, grounded in experience rather than hype.

Wrapping up the series

Across these three parts, we’ve covered:

  • Days 1–30: How to scope your pilot, map the real process, and design a minimum viable automation in plain language.
  • Days 31–60: How to connect real systems, ship a read‑only version, and introduce suggested actions with humans firmly in the loop.
  • Days 61–90: How to turn on limited automation, measure impact, and decide whether to scale, iterate, or move on.

If you’d like help applying this 90‑day framework to your own finance, customer, or operations workflows, this is exactly the kind of work we do with clients—turning abstract “AI strategy” into specific pilots that stand up in the real world.