How to Use AI Without Feeding It Confidential Data: A Practical Guide for Nonprofits
The most common question I get from nonprofit workers who want to use AI isn’t “which tool should I use?” It’s this:
“How do I actually use AI to help with my work without giving it confidential information about the people we serve?”
It’s the right question. And the honest answer is: it’s solvable, but it requires knowing what’s actually happening to your data — and most people don’t.
Here’s what you need to know, and what to do about it.
What Actually Happens to Your Data
When you type something into ChatGPT, Claude, or another AI tool, that text goes to a server — usually in the United States — where it’s processed by the AI model. What happens next depends on a few factors most users never check.
Free and consumer tiers of most AI tools have historically used conversation data to train future versions of their models. Policies vary and change, but the default assumption for free tools should be: your input may be used to improve the product.
Paid and enterprise tiers typically offer stronger data protections. OpenAI’s business plans, Anthropic’s Claude for Teams, and Microsoft Copilot for enterprise all offer contractual commitments that your data won’t be used for model training, and some provide data processing agreements that address Canadian privacy law requirements.
The practical upshot for nonprofits: If you’re using a free ChatGPT account, Google Gemini without a Workspace subscription, or a similar consumer-grade tool, you should assume the data you enter is not private in any meaningful organizational sense. If you’re using a paid plan from a reputable provider, the picture is significantly better — but it still doesn’t mean you should enter raw client data freely.
The Canadian Layer
Canadian organizations — particularly in Quebec — have additional obligations that US-based guidance doesn’t cover.
PIPEDA (the federal private sector privacy law) requires organizations to protect personal information under their control and to obtain meaningful consent before collecting or using it. When client data goes into a US-based AI tool, the question of who controls that data becomes genuinely complicated.
Law 25 in Quebec goes further. It requires Privacy Impact Assessments for projects that acquire or develop information systems handling personal information, requires you to designate a person responsible for the protection of personal information in your organization, and requires informing individuals when a decision about them is made exclusively through automated processing. If your organization operates in Quebec and you’re using AI tools with client data — even on a paid plan — a PIA is not optional.
This isn’t meant to alarm you. It’s meant to be accurate. Most organizations can navigate this with some thoughtful setup. But “we’re using a paid plan” is not, by itself, a compliance position.
The De-identification Method
The most practical approach for the majority of nonprofit AI use cases is de-identification: removing or replacing identifying information before it goes into any AI tool.
This is simpler than it sounds for most tasks. Here’s how it works in practice, with a minimal scripted sketch after the examples below:
For case notes and client documentation: Before using AI to help summarize, synthesize, or draft from case notes, replace names with codes (“Client A,” “Client B”) or generic descriptors (“a 34-year-old woman experiencing housing instability”). Remove specific dates, locations, and any other details that could identify an individual. The AI gets enough context to help you; the client’s identifying information stays out of the system.
For meeting summaries: If you’re summarizing a meeting that involved clients or discussed client cases, strip or replace names before pasting the transcript. The substance of what was discussed — program decisions, service planning, resource referrals — doesn’t usually require identifying details for the AI to help you write a useful summary.
For grant applications and reports: Aggregate data and anonymized case examples are almost always sufficient for AI-assisted writing. “We served 47 individuals experiencing homelessness in Q3” is both more privacy-respecting and, typically, exactly what a grant report needs.
For internal documents and administrative tasks: Email drafts, meeting agendas, policy documents, HR communications — most of this contains no client personal information at all. This is where AI can help most freely, and where most organizations should start.
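To make the name-replacement step repeatable, a short script can handle the mechanical part. The sketch below is a minimal illustration, not a complete de-identification tool: the name map, the date and phone patterns, and the sample note are all invented for this example, and real case notes will contain identifiers these patterns miss.

```python
import re

# Minimal de-identification sketch. Names, patterns, and the sample note
# are illustrative assumptions; real notes contain identifiers these
# patterns will miss, so review the output by hand before any AI use.

# Map each real name to a neutral code. This mapping stays local;
# it never goes into the AI tool.
NAME_MAP = {
    "Marie Tremblay": "Client A",  # hypothetical name
    "Jean Roy": "Client B",        # hypothetical name
}

# Rough patterns for common identifying details (assumed formats).
DATE_RE = re.compile(r"\b\d{4}-\d{2}-\d{2}\b")             # e.g. 2024-03-15
PHONE_RE = re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")  # e.g. 514-555-0123

def deidentify(text: str) -> str:
    """Replace known names and obvious identifiers with neutral placeholders."""
    for real_name, code in NAME_MAP.items():
        text = text.replace(real_name, code)
    text = DATE_RE.sub("[date]", text)
    text = PHONE_RE.sub("[phone]", text)
    return text

note = ("Met with Marie Tremblay on 2024-03-15 to discuss housing options. "
        "Reach her at 514-555-0123 to confirm the follow-up.")
print(deidentify(note))
# -> Met with Client A on [date] to discuss housing options.
#    Reach her at [phone] to confirm the follow-up.
```

Even with a script, the judgment stays human: the point is to make the easy replacements consistent, not to certify the text as anonymous.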
A Simple Framework: Three Categories of Tasks
Rather than evaluating every task individually, it helps to categorize your work into three buckets (a small code sketch after this list shows one way to make them operational):
AI-safe with any tool:
Tasks involving no personal information. Draft communications, research summaries, general writing, administrative templates, meeting agendas, internal training materials, public-facing content. Use whatever approved tool you have without hesitation.
AI-safe with de-identification:
Tasks involving client situations, case patterns, or program data — but where identifying details can be removed without losing the substance. Case notes, program reports, grant narrative writing, service planning frameworks. Apply de-identification before using AI.
Human-only (for now):
Tasks where the identifying information is inseparable from the task, where the stakes of error are high, or where the use of AI would undermine the trust relationship with the person being served. Decisions about individuals, therapeutic or clinical work, intake assessments for vulnerable populations, anything where the person has a reasonable expectation of private, human-only handling.
This isn’t permanent. The “human-only” category will shrink as tools improve, as organizational policies mature, and as sector-specific guidance develops. But starting with clear categories is better than trying to evaluate every edge case from scratch every time.
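For teams that want to turn these buckets into something staff can check against, the categories can live in a simple lookup, with anything unlisted defaulting to escalation. A minimal sketch, assuming invented task names and labels (your own policy defines the real list):

```python
# Illustrative sketch of the three-bucket framework as a lookup table.
# Task names and category labels are assumptions, not a recommended list.
TASK_CATEGORIES = {
    "meeting agenda": "ai_safe_any_tool",
    "grant narrative": "ai_safe_with_deidentification",
    "case note summary": "ai_safe_with_deidentification",
    "intake assessment": "human_only",
}

def category_for(task: str) -> str:
    # Unknown tasks default to asking first, mirroring the escalation
    # path described under "What to Tell Your Team" below.
    return TASK_CATEGORIES.get(task.lower().strip(), "ask_first")

print(category_for("Meeting agenda"))   # ai_safe_any_tool
print(category_for("Budget forecast"))  # ask_first
```

The design choice that matters is the default: an unrecognized task routes to a question, not to a guess.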
The Paid Plan Question
For organizations that have or are considering paid AI subscriptions, here’s what to look for in terms of data protection:
- No training on your data: The provider should commit, in writing, not to use your conversations to train their models.
- Data processing agreement: For organizations subject to PIPEDA or Law 25, a DPA that addresses Canadian requirements is important. Ask specifically — not all providers offer this automatically.
- Data residency options: Some tools offer Canadian or European data hosting. This isn’t always available, but it’s worth asking about, especially for healthcare or social service organizations.
- Role-based access controls: For team accounts, you want to ensure staff can access only what they need.
Microsoft Copilot (via a Microsoft 365 subscription) offers the most mature enterprise data governance story for Canadian organizations, largely because Microsoft has deep compliance infrastructure for regulated industries. It’s not the only option, but it’s the most commonly available starting point for organizations already in the Microsoft ecosystem.
What to Tell Your Team
The organizational dimension matters as much as the technical one. Individual staff making ad hoc decisions about what’s safe to put into AI tools is a recipe for inconsistency and risk.
A simple internal AI policy — one page, plain language — should answer three questions for your team:
- Which tools are approved for organizational use?
- What information should never go into an AI tool, under any circumstances?
- What’s the process when someone isn’t sure?
The third question is the most important. Staff who have a clear escalation path when they’re uncertain are far less likely to make a bad call under pressure. Staff who have no guidance make up their own rules — and those rules are unpredictable.
FAQ
Is it ever okay to use AI with real client names?
With an appropriate paid plan, a signed data processing agreement, and a completed Privacy Impact Assessment (where required by Law 25), there may be use cases where this is acceptable. It requires deliberate organizational decision-making, not individual discretion.
What if a staff member has already entered client data into a free AI tool?
Don’t panic. Assess what was entered, determine whether it constitutes a reportable breach under your applicable privacy legislation, and update your policies and training to prevent recurrence. This is more common than most organizations want to admit.
Can AI transcription tools (like Otter.ai or Fireflies) be used in client meetings?
Only with informed consent from all participants. Clients have the right to know if their conversation is being transcribed and processed by an AI system. In many cases, this means asking explicitly before the meeting and allowing people to opt out.
Is there a Canadian AI tool we should use instead?
For most use cases, the major tools (Claude, ChatGPT, Copilot) with appropriate paid plans are the practical answer — fully Canadian-hosted alternatives with equivalent capability don’t yet exist at scale. The governance layer matters more than the geography of the tool.
Mitch Schwartz is the founder of Ops Machine, a Montreal-based AI integration and workflow consultancy. He works with nonprofits and organizations mid-transformation to find where AI fits, build the right systems, and make sure teams actually use them. Book a free discovery call →