Nonprofit AI Governance: How to Build a Policy Without a Six-Month Committee Process

“We need an AI policy” has become one of the most common things I hear from nonprofit leaders in Canada. The sentence that usually follows is: “but we don’t know where to start, and nobody has time to build one properly.”

Both things are true. And they’re the reason most organizations end up with no policy at all — which is the worst outcome.

This article gives you a practical path through both problems. By the end, you’ll have a one-page policy template you can adapt and put in front of your team this week.


Why the Six-Month Process Is the Wrong Frame

The impulse to build a comprehensive AI policy through a proper committee process — with legal review, staff consultation, board approval, and full rollout — is understandable. It’s also the reason most organizations never ship anything.

The problem is that it treats AI policy as a finished product rather than a living practice. No AI policy you write in 2026 will still be fully current in 2027. The tools are changing. The regulations are evolving. Your organization’s use of AI will expand. A policy built for permanence will be outdated before the ink dries.

The organizations with strong AI practices have figured out that the goal isn’t a perfect document. It’s a shared understanding of how AI fits into your work — updated regularly as you learn more.

That reframe changes everything about the process. The goal is no longer to build something comprehensive enough that you never have to revisit it. The goal is to get something clear and functional in place now, build in a revision schedule, and improve it as you go.


The Two Things Your Policy Actually Needs to Do

Strip away everything non-essential, and an AI policy for a nonprofit needs to accomplish two things:

1. Tell your team what they can do and what they can’t, clearly enough that they can make good decisions without escalating every edge case.

2. Protect the organization — and the people it serves — from the most significant risks that come with unguided AI use.

Everything else is useful but not essential in version one. Staff training, vendor evaluation processes, board-level reporting, annual audits — all good things to add over time. Not required to get started.


The One-Page Template

Below is a template your organization can adapt today. It’s deliberately minimal. Add to it as your situation requires; don’t let the additions stop you from having a version one.


[ORGANIZATION NAME] AI Use Policy
Version 1.0 — Effective [Date] — Next review: [Date + 6 months]

Why this policy exists
Our staff are already using AI tools in their work. This policy exists to help us use them in ways that are consistent with our values, protect the privacy and dignity of the people we serve, and maintain the quality and authenticity of our work.

Approved tools
The following AI tools are approved for organizational use. Staff should use only these tools for work-related AI tasks unless they have explicit approval from [designated role] to use something else.

Tool | Account type | Approved uses
[e.g., Claude Pro] | [Paid/organizational subscription] | [e.g., Drafting, research synthesis, administrative tasks]
[e.g., Microsoft Copilot] | [M365 organizational license] | [e.g., Email drafting, document summarization, meeting notes]

What should never go into any AI tool
Regardless of which tool you’re using:

  • Full names combined with any other identifying information about clients, program participants, or people we serve
  • Medical, mental health, legal, immigration, or financial information about individuals
  • Social insurance numbers, addresses, or other direct identifiers
  • Confidential organizational information (financial projections, personnel decisions, legal matters, donor information under confidentiality)
  • Information shared with us in confidence that is not ours to share

What can be used freely with approved tools

  • Administrative drafts and internal communications
  • Public-facing content (with human review before publishing)
  • Research synthesis from publicly available sources
  • Meeting agendas, templates, and process documentation
  • General writing assistance for reports, proposals, and communications that contain no personal information

When you’re not sure
If you’re unsure whether a task or the information involved is appropriate for AI use: pause, and contact [designated role/name] before proceeding. When in doubt, default to doing the task without AI.

What happens if something goes wrong
If you realize you’ve entered information that shouldn’t have gone into an AI tool, report it immediately to [designated role]. We will assess whether it requires action under applicable privacy law. There is no penalty for reporting in good faith. There may be consequences for not reporting.

Our commitments as an organization
We will provide training to help staff use approved tools effectively. We will review this policy every six months and update it as needed. We will not require staff to use AI tools they are not comfortable using.

This policy approved by: [Name/Role] on [Date]


Customizing the Template

A few areas where you’ll want to tailor the template to your organization:

The approved tools list is the most important customization. Be specific: name the tool, the account type (consumer/free vs. paid/organizational — this matters), and the approved use cases. Vague guidance (“use AI responsibly”) creates the same vacuum as no guidance.

The prohibited information list should reflect your specific context. A health organization needs explicit reference to health information. A legal aid clinic needs reference to solicitor-client privilege. An immigration services organization needs reference to immigration status. Start with the base list above and add what’s specific to your work.

The designated role should be a real person with real availability, not a committee. If someone isn’t sure what to do, they need an answer quickly — not a meeting scheduled for next week.

The “what goes wrong” section should reflect your actual legal obligations. Organizations subject to Law 25 in Quebec have specific breach notification requirements. Organizations subject to PIPEDA have others. If you’re unsure, this is worth a brief conversation with a lawyer before you finalize.


Getting It Approved Without a Six-Month Process

For most nonprofit organizations, the practical approval path is:

Week 1: ED or senior leader drafts the policy using this template, adapted for your organization. Share with 2–3 frontline staff for a gut-check on whether it reflects the reality of how AI is being used and where the genuine concerns are.

Week 2: Share with the full team via a short meeting or written communication. Frame it explicitly as version one — you’re putting something in place now and you’ll refine it with input from the team over the first six months.

Next board meeting: Bring the policy to the board for awareness and formal ratification. Present it alongside a brief update on how AI is currently being used in the organization, what risks the policy addresses, and what you'll be monitoring.

This timeline works because you’re not asking anyone to approve a comprehensive framework — you’re asking them to ratify a living document that will improve as the organization learns.


What Comes After Version One

Once you have a functioning version-one policy in place, the next developments worth planning for:

Staff training. Even a clear policy needs to be explained. A 30-minute team session walking through real examples of what’s in and out of scope does more than the written document alone.

A vendor evaluation process. As new AI tools emerge, you’ll want a lightweight process for deciding whether to add them to your approved list. Key criteria: data governance commitments, Canadian compliance posture, organizational account availability, and evidence of usefulness for your specific work.

A Privacy Impact Assessment. For organizations in Quebec handling personal information, a PIA for your AI tools and processes is a Law 25 requirement, not optional. This is worth getting professional guidance on; the requirements depend on your organization in ways a generic template can't address.

The six-month review. Calendar it now. The AI landscape in six months will look different enough from today that your policy will need meaningful updates, not just a rubber stamp.


FAQ

Can we copy this template verbatim?
Yes — it’s intended to be adapted. Do fill in the blanks and customize the prohibited information list for your context before distributing it as policy.

Does a nonprofit board need to formally approve an AI policy?
Best practice is yes for formal governance. In the interim, an ED-approved operational policy covers staff guidance while board ratification is pending. Don’t let the board approval process be the reason nothing gets published.

What if staff are already using tools not on our approved list?
Acknowledge it directly in your rollout. A clean-slate approach (“here’s where we’re going, here’s what we’re approving, anything else needs to go through the new process”) works better than treating existing use as a violation. The goal is clarity, not punishment.

Do we need legal review of this policy?
For version one, not necessarily — this template stays at a level of operational guidance that doesn’t require legal expertise to produce. When you add Law 25 Privacy Impact Assessment documentation, data processing agreements with vendors, or breach response procedures, legal review becomes more important.


Mitch Schwartz is the founder of Ops Machine, a Montreal-based AI integration and workflow consultancy. He works with nonprofits and organizations mid-transformation to find where AI fits, build the right systems, and make sure teams actually use them. Book a free discovery call →