Disciplines · 09 · AI

Nobody's a guru. The honest ones say so.

AI is genuinely new territory. The capabilities shift week to week, the vendors rebrand their features faster than anyone can keep track, and the people loudest about having it figured out usually don't. We're not promising you a guru. We're promising you a practitioner who's acutely aware of the space, has done the work, and will tell you the truth about what's real, what isn't, and what to do about the uncertainty in between.

The frame

The ceiling is moving. The foundation shouldn't be.

Every conversation about AI right now is a conversation being had inside uncertainty. Nobody on either side of the table knows exactly what the tools will be capable of in six months. Nobody knows which of today's feature announcements will still exist a year from now, which will have been rebranded, and which will have been quietly replaced by something newer. Anyone claiming otherwise is selling, not thinking.

That uncertainty is the problem, and also the point. If you can't predict the ceiling, you stop trying to — and you spend your energy on the foundation instead. Foundations are boring. They're also the only thing you control. A business with clean data, sorted permissions, and an honest policy posture can pick up whatever AI ships next and get value out of it. A business without those things can't, no matter how many licenses it buys.

This is why the first conversation we have about AI almost never starts with AI. It starts with the data the AI is going to read.

The reflex to break

Most people hear "AI" and picture a chatbot.

The reflex is understandable. The most visible consumer-grade AI experience is a chat window, so the mental model most businesses walk in with is a chat window — deployed into the company, trained on the company's stuff, answering the company's questions. That's one useful thing AI can do. It's nowhere near the most interesting thing.

The shape of useful enterprise AI is rarely a new interface. It's usually an assist embedded inside something that already exists — a ticket queue that classifies itself, a quote that drafts itself, a mailbox where the first reply is already written. The user doesn't learn a new tool. The work just gets lighter. That's the frame we push clients toward when they come in asking about Copilot licenses and chatbots.

The catch is that every shape of useful AI has the same prerequisite: the system has to be allowed to read the right things, denied the wrong things, and trusted with neither until someone's checked. Which is a data governance problem, not an AI problem.

Readiness first

Purview is the easy part. Policy in English is the hard part.

Microsoft Purview is a capable, well-documented toolset. A competent administrator can stand up sensitivity labels, retention policies, and data loss prevention rules in a few weeks. The portal isn't the bottleneck. The bottleneck is upstream of the portal — in the work of deciding, in plain English, what your organization considers confidential, how long you're willing to keep things, who is allowed to see what, and what happens when any of those rules get broken.

That decision work isn't a technology problem. It's a leadership problem. It belongs to the people who understand the business's risk tolerance, its regulatory exposure, its client commitments, and its appetite for trade-offs between speed and control. The audience for a readiness conversation is the C-suite and the data security lead — not the IT admin who will eventually implement the decisions.

That's also why we start every engagement with a questionnaire rather than a workshop. Twenty minutes of structured questions, completed by the business before we meet, gives us both the shape of the gaps and a clean agenda for the working session that follows. Without the questionnaire, we end up in a room waiting for someone to email the IT team and ask whether DLP is turned on. With it, we spend the session on the hard part — turning business intent into policy language that holds up.
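
Before the portal ever opens, the output of that working session can be as plain as a config file. Below is a minimal sketch of "policy in English" captured as data; every label name, retention period, and group in it is illustrative, not a recommendation:

```python
# "Policy in English," captured as data before anyone opens Purview.
# Every name, period, and rule here is illustrative. The decisions belong
# to the business, not to this sketch.
POLICY = {
    "classification": {
        "Public": "Safe to publish outside the business.",
        "Internal": "Staff only. No client identifiers.",
        "Confidential": "Client records, pricing, HR. Named groups only.",
    },
    "retention": {
        "quotes": "7 years",             # contractual exposure
        "drafts": "18 months",           # superseded material should age out
        "hr_records": "per regulation",  # jurisdiction-specific
    },
    "access": {
        "hr_site": ["HR-Team"],          # never "Everyone"
        "pricing_library": ["Sales-Ops", "Leadership"],
    },
    "ai_acceptable_use": (
        "A named human reviews any AI-drafted content before it leaves "
        "the business. No exceptions for small messages."
    ),
}
```

Once leadership has signed something like this, the Purview configuration is mostly transcription.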

Expanding the possibilities

Three shapes AI actually takes once you get past the chatbot.

A handful of representative engagements — the kind of thing we've either done, talked clients through, or helped design. None of them are chatbots. All of them are possible today with tools you likely already own or can buy off the shelf.

Shape A

AI inside an app you already own.

A property management firm runs a tenant portal where maintenance requests come in as free text. The ops team spends hours each week triaging — what's urgent, what's routine, what's a duplicate of last week's ticket. AI classifies each request the moment it's submitted, drafts a first response, and flags emergencies. No new tool for the tenant. No new dashboard for the team. The queue just gets quieter.
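
For the technically curious, the triage step itself is small. Here's a minimal sketch, assuming an Azure OpenAI deployment; the endpoint, deployment name, and category list are all illustrative, not a description of any client's system:

```python
# A sketch of Shape A's triage step, assuming an Azure OpenAI deployment.
# Endpoint, deployment name, and categories are illustrative.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

CATEGORIES = ["emergency", "urgent", "routine", "possible-duplicate"]

def triage(request_text: str) -> str:
    """Return exactly one category for a free-text maintenance request."""
    response = client.chat.completions.create(
        model="maintenance-triage",  # your deployment name, not a model id
        messages=[
            {
                "role": "system",
                "content": (
                    "You triage property maintenance requests. Reply with "
                    f"exactly one of: {', '.join(CATEGORIES)}."
                ),
            },
            {"role": "user", "content": request_text},
        ],
        temperature=0,  # triage should be repeatable, not creative
    )
    return response.choices[0].message.content.strip().lower()

print(triage("Water is pouring through the ceiling in unit 4B"))
```

The closed category list and zero temperature are the design choice that matters: classification in a queue should be boring.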

Shape B

Collapsing a back-office bottleneck.

A specialty manufacturer generates two hundred quotes a month, each one assembled from spec sheets, pricing tables, and boilerplate proposal language. Their sales ops lead spends roughly a day a week on assembly — copy, paste, reformat, version. A Copilot agent grounded in the approved SharePoint content drafts the first version in minutes. The human still reviews and sends. Nothing automated that shouldn't be. The day a week becomes an hour.
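
In practice this is a Copilot Studio agent configured against an approved SharePoint library rather than custom code, but the grounding pattern underneath is worth seeing. A sketch, reusing the client from the previous example; `fetch_approved_specs` stands in for whatever retrieval already exists, and nothing here is Copilot Studio's own API:

```python
# The grounding pattern behind Shape B, made visible. A Copilot Studio agent
# pointed at an approved SharePoint library does this declaratively; this
# sketch only shows the mechanics. `fetch_approved_specs` is a stand-in.
def draft_quote(client, customer_request: str, fetch_approved_specs) -> str:
    """Return a first-draft quote grounded only in approved source content."""
    sources = fetch_approved_specs(customer_request)
    grounding = "\n\n".join(f"[{s['title']}]\n{s['text']}" for s in sources)
    response = client.chat.completions.create(
        model="quote-drafting",  # illustrative deployment name
        messages=[
            {
                "role": "system",
                "content": (
                    "Draft a sales quote using ONLY the source material "
                    "provided. If pricing is missing from the sources, say "
                    "so instead of guessing."
                ),
            },
            {
                "role": "user",
                "content": f"Sources:\n{grounding}\n\nRequest:\n{customer_request}",
            },
        ],
        temperature=0.2,
    )
    return response.choices[0].message.content  # a human reviews and sends
```

The instruction to refuse rather than guess when pricing is missing matters as much as the grounding itself.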

Shape C

Internal assistant instead of public chatbot.

A regional insurance broker gets high volumes of inbound client email — much of it variations on the same twenty questions. A public-facing chatbot is off the table: brand risk, compliance exposure, nobody wants to be the firm whose AI told a client the wrong thing about their coverage. Instead, an internal assistant drafts the first reply for a human agent to review and send. Speed improves. Tone stays consistent. The human still owns the message.
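
The hand-off that keeps a human on the send button is mechanically simple. A sketch against Microsoft Graph's createReply endpoint, assuming a delegated token with Mail.ReadWrite; the drafted reply lands unsent in the agent's own mailbox:

```python
# Shape C's hand-off, sketched against Microsoft Graph. The AI-drafted text
# lands as an UNSENT reply draft in the agent's mailbox; sending remains a
# human act. Assumes a delegated token with Mail.ReadWrite; ids are illustrative.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def stage_draft_reply(token: str, message_id: str, ai_draft: str) -> None:
    headers = {"Authorization": f"Bearer {token}"}
    # createReply makes an unsent draft threaded on the original message
    created = requests.post(
        f"{GRAPH}/me/messages/{message_id}/createReply", headers=headers
    )
    created.raise_for_status()
    draft_id = created.json()["id"]
    # Place the AI-drafted text into the draft body, and stop there.
    updated = requests.patch(
        f"{GRAPH}/me/messages/{draft_id}",
        headers={**headers, "Content-Type": "application/json"},
        json={"body": {"contentType": "Text", "content": ai_draft}},
    )
    updated.raise_for_status()
```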

Three different shapes, three different problems, one shared prerequisite. Each of these only works if the system can be trusted to see the right data and nothing more. That trust doesn't come from the AI. It comes from the governance underneath.

Nobody knows how AI will change. We know it will. Governance is how you stay ready to react.
How the work actually runs

A short arc, in an order that works.

A readiness engagement is deliberately compact. We've watched too many businesses commit to twelve-month AI programs that never put a single Copilot license into use. The point of this work is to get the posture right quickly, then let the business move — not to keep ourselves in the room.

The arc is four steps, and they happen in order for a reason.

One — the readiness questionnaire. Short. Completed async, by the business, before we meet. Twenty minutes of questions that surface where the organization's data lives, who can see it, how long it's kept, and which parts of that are known versus assumed.

Two — the working session. Half day or full day, depending on scope. We start with a tabletop — four realistic scenarios that make the risk visible in the room instead of hypothetical in a briefing. We move from there to the policy work: translating the business's intent into defensible language for classification, retention, access, and acceptable use. The output is policy language the business will actually enforce.

Three — the remediation roadmap. Based on the gaps the questionnaire and tabletop surfaced, a prioritized plan for closing them before any AI tooling is enabled. Quick wins up front, longer work sequenced behind. Realistic dates, not aspirational ones.

Four — phased rollout. With the posture in place, the AI deployment itself is almost anticlimactic. Start with a low-risk group, measure what actually gets used, expand from there. The governance work done in steps one through three is what makes step four uneventful — which is exactly what you want.

The preflight

What we check first, every time.

When a business says "we're ready for Copilot," these are the six things we look for before we agree. None of them are exotic. All of them are the difference between a Copilot rollout that produces value and one that produces an incident.

The quiet failure modes behind claimed Copilot readiness

  1. SharePoint sites shared with "Everyone"

    The most common Copilot incident is the oversharing one — a site somebody set to "Everyone except external users" during a rushed migration two years ago, now surfacing HR content to the whole company. Nobody remembers doing it. Copilot doesn't care. (A starter detection sketch follows this list.)

  2. Draft and superseded content living indefinitely

    Old proposals. Pricing from eighteen months ago. Roadmaps that got rewritten twice since. Without retention or lifecycle rules, Copilot treats yesterday's drafts as today's facts — and a polished output can quote pricing the business abandoned long ago.

  3. Sensitivity labels that exist in name only

    Many tenants have labels configured but never applied. Labels only constrain Copilot if the content carries them. A "Confidential" label on a policy document does nothing for the ten thousand unlabelled files sitting in shared drives.

  4. Departed-employee content still accessible

    Offboarding is the quiet weak link. An account gets disabled; OneDrive lingers; sharing links still resolve. Copilot surfaces the personal notes of someone who left four months ago, and the client-facing project report inherits those notes verbatim.

  5. No policy on AI-generated content leaving the business

    Who reviews Copilot's output before it gets sent to a client? What's the expectation? If the answer is "the person sending it is responsible" without anything written down, the answer is really "nobody." A one-page acceptable-use policy closes that gap before it becomes a lawsuit.

  6. An AI strategy that starts with licenses, not posture

    A business whose AI plan begins with "buy five hundred Copilot licenses and see what happens" has not yet considered what Copilot is about to see. Every readiness engagement starts from the opposite end — what are we asking the system to read, and is it ready to be read.
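
The first of those checks is also the easiest to start probing. Below is a hedged starting point against Microsoft Graph, assuming an app token with Sites.Read.All. It samples sharing links in one library's root folder, so treat it as a smoke test, not an audit; group-based grants like "Everyone except external users" need a fuller permissions review than link scopes alone.

```python
# A starting point for preflight check 1: sample sharing links in a site's
# default document library and flag broad scopes. Assumes an app token with
# Sites.Read.All. Root folder only, no pagination: a smoke test, not an audit.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def flag_broad_links(token: str, site_id: str) -> list[str]:
    headers = {"Authorization": f"Bearer {token}"}
    items = requests.get(
        f"{GRAPH}/sites/{site_id}/drive/root/children", headers=headers
    ).json().get("value", [])
    flagged = []
    for item in items:
        perms = requests.get(
            f"{GRAPH}/sites/{site_id}/drive/items/{item['id']}/permissions",
            headers=headers,
        ).json().get("value", [])
        for perm in perms:
            link = perm.get("link") or {}  # direct grants have no link facet
            if link.get("scope") in ("organization", "anonymous"):
                flagged.append(f"{item['name']}: {link['scope']} sharing link")
    return flagged
```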

The real deliverable

The artifact is the policy. The outcome is confidence.

Every readiness engagement produces a stack of tangible things. A classification taxonomy written in the business's own language. Retention and DLP policies that reflect actual risk tolerance instead of vendor defaults. A remediation roadmap with owners and dates. A tabletop result the executive team has actually been through. A phased Copilot rollout plan with clear criteria for expansion.

Those artifacts matter. But the deliverable the business actually hired us for is less visible. It's the shift in what you can say yes to, and how fast. Before the engagement, every AI conversation stalls on "we should probably figure out the governance first." After, governance is figured out — and the next AI capability that ships, whatever it is, can be evaluated on its own merits instead of as a governance project wearing a feature costume.

That's the real deliverable. The confidence to leverage AI without inheriting the risks that come with it. A posture that doesn't have to be rebuilt every time Microsoft ships something new. The permission, finally, to move.

Book a call

Before you turn on Copilot, do you know what it's about to see?

If the honest answer is "probably not everything we'd want it to see," that's the conversation to have. A readiness engagement is compact — a questionnaire, a working session, a roadmap, a phased rollout. What it produces is the posture that lets your business move on AI without having to rebuild from scratch every time the capability shifts. Which it will.

Or reach us directly: info@fouronesixit.ca · (647) 371-0400