80% of Nonprofit Workers Are Already Using AI. Only 10% of Organizations Have a Policy. Here's What That Means for Your Organization.

Here are three numbers worth sitting with:

80% of nonprofit workers are already using AI tools in some form.
10% of nonprofit organizations have a formal AI policy.
64% have no plans to develop one.

These figures come from research by Imagine Canada and Cinder, and they describe the current reality of AI adoption in the Canadian nonprofit sector with uncomfortable precision.

The gap between 80% and 10% is where risk accumulates — quietly, without drama, until something goes wrong.


What’s Actually Happening in That Gap

When staff use AI tools without organizational guidance, they make their own rules. And individual rules, made under time pressure by people trying to get their work done, are unpredictable.

Some staff are cautious to a fault — avoiding AI entirely for fear of doing something wrong, even for tasks where it would be perfectly appropriate. Others are using ChatGPT to draft summaries of client case notes on a free account, without any sense that this might be a problem. Most are somewhere in between, navigating genuinely uncertain territory without a map.

This isn’t a criticism of staff. It’s a description of what happens when organizations don’t provide guidance. People fill the vacuum with their best judgment, and best judgment varies.

The practical consequences of this gap show up in a few ways:

Privacy exposure. Client data — even de-identified fragments — ends up in consumer-grade AI tools that weren’t designed for organizational use. This creates real risk under PIPEDA and, for Quebec organizations, under Law 25.

Inconsistent quality. AI-assisted work varies wildly in quality when there’s no shared standard for how to use it, when to use it, and how to review what it produces. The same organization can produce excellent AI-assisted communications and dangerously inaccurate AI-assisted research in the same week.

Liability without awareness. Most organizations in the gap genuinely don’t know what exposure they’ve created. When they find out — usually because something went wrong — the cleanup is harder than the prevention would have been.

Staff anxiety. Workers who aren’t sure whether they’re allowed to use a tool, or what the rules are, work with unnecessary friction. Some become reluctant to use AI at all. Others feel guilty about using it but do so anyway. Neither state is good for anyone.


Why Most Organizations Are Stuck

The 64% with no plans to develop a policy aren’t all negligent. Most are stuck for understandable reasons.

Capacity. The people who would need to draft an AI policy are already stretched thin doing everything else. A committee process can take months that nobody has.

Uncertainty. AI is moving fast. Many organizations are waiting for things to settle before they invest in a framework that might be obsolete in a year.

Scope creep. “We need an AI policy” can quickly become “we need to understand every possible AI use case, consult with legal, survey our staff, do a risk assessment, develop training materials, and get board approval.” That project never starts because it’s too big.

False comfort. “We haven’t had a problem yet” is the most dangerous form of inaction. Problems in this area tend to be invisible right up until they’re visible.


What the 10% Are Doing Differently

Most organizations with functioning AI policies didn’t get there through elaborate committee processes. They got there by deciding that imperfect and in-place was better than perfect and pending.

The distinguishing factor is usually one person — an ED, a director of programs, a communications lead — who decided to treat this as a solvable problem rather than a waiting game. They wrote something, got it in front of their team, and committed to revising it as they learned more.

The organizations doing this well share a few practices:

They involve frontline staff in the process. The people closest to the work have the clearest view of where AI is already being used and where the genuine risks are. A policy written without their input tends to miss the actual picture.

They keep it short. A one-page AI use policy that staff actually read and understand does more work than a 20-page document that lives in a shared drive and gets ignored.

They build in a revision cadence. Rather than trying to get it perfect, they commit to reviewing it every six months. This takes the pressure off getting everything right the first time.

They treat it as an organizational conversation, not a legal compliance exercise. The goal is shared understanding, not protection from liability. Organizations that orient around shared understanding tend to end up with stronger practices on the ground.


The Minimum Viable Policy

If your organization has no AI policy and you want one by end of week, here’s the minimum viable version. It’s not perfect. It’s a foundation you can build on.

Our AI Policy (Draft — [Date])

Why this exists: Our staff are already using AI tools. This policy exists to help us use them well — protecting the people we serve, maintaining the quality of our work, and staying consistent with our values.

Approved tools: [List 1–3 tools your team is actually using, with the tier/account type — e.g., “Claude Pro,” “Microsoft Copilot via our M365 subscription”]

What never goes into any AI tool:

  • Full names combined with any other identifying information about the people we serve
  • Medical, mental health, legal, or immigration status information about clients
  • Social insurance numbers, addresses, or other direct identifiers
  • Information shared with us in confidence that is not ours to share further

What can be used freely:

  • Administrative drafts, internal communications, meeting agendas
  • Research and synthesis from public sources
  • First drafts of public-facing content (with human review before publishing)
  • Templates, frameworks, and process documentation

When you’re not sure: Ask [name/role]. If they’re not available, default to not using AI for that task until you can check.

We’ll review this policy: [Date — aim for 6 months from now]

That’s it. A policy this simple, actually distributed and discussed with your team, does more to reduce risk than a comprehensive framework that nobody reads.


Why This Matters Beyond Risk Reduction

There’s a case for getting ahead of this that goes beyond avoiding problems.

Organizations that develop clear, thoughtful AI practices now are building a genuine capability advantage. They’re creating the conditions for their staff to use AI effectively and confidently — which means getting more out of the technology and directing more human energy toward the work that actually requires human judgment.

The organizations treating AI as something to be managed thoughtfully, rather than something to be avoided or left to proliferate unchecked, are the ones building durable capacity. That compounds over time in ways that organizations waiting on the sidelines won’t easily catch up to.

The 10% with policies aren’t ahead because they’re bigger or better-resourced. They’re ahead because someone decided to start.


FAQ

Does our board need to approve an AI policy?
Best practice is yes — AI policy touches on risk management, privacy, and organizational values, all of which are board-level concerns. But waiting indefinitely for a board meeting shouldn’t stop you from establishing interim operational guidance for staff. A two-stage approach (interim staff guidance now, board ratification at your next scheduled meeting) is reasonable.

What if staff are using AI tools we haven’t approved?
Acknowledge it directly rather than pretending it isn’t happening. The goal of a policy isn’t to punish — it’s to create clarity. An amnesty-style rollout (“here’s what we’re doing going forward”) works better than a crackdown.

Do we need a lawyer to write this?
For a minimum viable policy, no. For a comprehensive framework that includes data processing agreements, Law 25 compliance documentation, and risk assessment protocols, legal review is advisable. Start with the minimum viable version and add rigor as your capacity allows.

How do we know if our policy is working?
Ask your team six months in: are there situations where you weren’t sure what to do and the policy helped? Are there situations where it didn’t help? Use those answers to revise. A living document that improves over time is the goal.


Mitch Schwartz is the founder of Ops Machine, a Montreal-based AI integration and workflow consultancy. He works with nonprofits and organizations mid-transformation to find where AI fits, build the right systems, and make sure teams actually use them. Book a free discovery call →