Mission Multiplied · by PF TECH

The AI-Ready Non-Profit Back Office · Part 02

AI, Data Sovereignty, and the Modern Non-Profit Tech Stack

The public debate about AI risk focuses on the dramatic — algorithmic bias, autonomous decision-making, systems going rogue. Those are worth discussing. But the risks that are actively costing Canadian non-profits their data, their service continuity, and their operational independence right now are far less dramatic and far more preventable. Here's how to think about data sovereignty in the AI era, and what we're doing about it.

Greg Zatulovsky · Founder & CEO, PF TECH · 11 min read
A Canadian coastal harbour at dawn — sheltered bay with a rocky bluff, a tall maple sentry catching the first light, a boundary stone at the harbour mouth, open ocean churning beyond.

The public debate about AI risk in the non-profit sector tends to focus on the dramatic: algorithmic bias, deepfakes, autonomous decision-making. These are legitimate concerns. But they are not the risks that are actively costing Canadian non-profits their data, their service continuity, or their operational independence right now.

The risks that are costing them these things are far less dramatic and far more preventable. And AI — deployed with the right architecture — is one of the most effective tools available for addressing them.

Chapter 01

What I Have Actually Seen

Specifics, because vague warnings produce only polite acknowledgement.


I want to ground this in specifics, because vague warnings about "data risk" tend to produce the same response as vague warnings about anything: polite acknowledgement and no change in behaviour.

I have worked at an organization that had to sue a software vendor to get out of a contract. The vendor had failed to deliver the services they had scoped and documented — the failure was real, the evidence was clear, and litigation was the only exit available because there were still several years on the contract and the vendor fought the termination. The root cause was not the vendor's incompetence. It was the procurement process: an agreement signed without adequate evaluation of the vendor's actual delivery capacity, without proper exit clauses, and without a technical assessment of whether the scoped services were achievable. By the time the failure was undeniable, we were locked in. Getting out cost us time, money, and operational disruption that the organization absorbed on top of everything else it was managing.

I have also volunteered for an organization that experienced a ransomware attack. The data loss was significant. The service disruption was real. Donors could not be contacted. Service users could not be reached. Programs that depended on operational data had to pause. The recovery was expensive and incomplete, because backups that were supposed to exist either did not or had not been tested.

Neither of these is a hypothetical. Both happened inside organizations doing meaningful community work, staffed by capable people who were not thinking about cybersecurity because they were thinking about their mission. That is not a criticism. It is the reality of operating under the overhead myth — when every dollar spent on infrastructure feels like a dollar taken from programs, the first thing that gets cut is the infrastructure that protects everything else.

When every dollar spent on infrastructure feels like a dollar taken from programs, the first thing that gets cut is the infrastructure that protects everything else.

On the overhead myth

Before: Claymation scene of a SaaS Frankenstein: mismatched application boxes stitched together with fragile connectors, a Caring Beaver taping it together with data droplets falling through the gaps.
After: Claymation scene of the rebuilt stack: layered, connected cleanly, Canadian maple leaf on the storage unit, yellow de-identification boundary, the Caring Beaver standing back calm.
From accumulation to intention — the same organization, the same operator.
Chapter 02

The New Threat Surface

In the AI era, both vendor lock-in and cybersecurity risks have expanded — in different ways.


In the AI era, both of these risk categories have expanded significantly.

Risk 1 — Vendor lock-in, compounded

The vendor lock-in risk is now compounded by the proliferation of AI tools. Many of them are free at the entry level — and as the sector has learned to say about free products: if it is free, you are the product. The data your team inputs into a freemium AI tool to draft grant proposals, summarize meeting notes, or analyze donor trends is, in many cases, being used to train the model. The privacy policies that govern this are long, frequently updated, and almost never reviewed by the finance director who approved the tool in a budget meeting.

Risk 2 — Cybersecurity, accelerated

The cybersecurity risk is compounded by AI in a different way. AI-powered attacks are faster, more targeted, and more convincing than what came before. Phishing emails that used to be identifiable by their grammar errors are now indistinguishable from legitimate communications. Social engineering that once required a skilled human operator can now be automated at scale. The attack surface has expanded and the attacker capability has increased simultaneously.

The sector's response to both of these trends, in too many organizations, is indecision. They know the risks exist. They do not know how to evaluate them. They freeze. And frozen is not safe — it just means that the decisions are being made by default rather than by design.

Frozen is not safe — it just means that the decisions are being made by default rather than by design.

On the AI freeze

Chapter 03

What Data Sovereignty Actually Means

A governance decision that happens to have technology implications — not the other way around.


Data sovereignty is a governance decision that happens to have technology implications — not the other way around.

At its core, data sovereignty means your organization controls where your data resides, who can access it, under what legal framework it is protected, and under what conditions you can leave any given vendor without losing what you need. It means the answers to these questions are documented, understood by leadership, and actually enforceable — not just stated in a vendor's terms of service.

For Canadian non-profits, this has specific regulatory dimensions. PIPEDA establishes baseline privacy requirements for how personal information is handled in the course of commercial activities. Many provinces have their own privacy legislation with stricter requirements. Health data, children's data, and certain social service data carry additional frameworks. The question of whether your donor data, service user records, or volunteer information can legally be processed by a U.S.-based AI model operating under U.S. law is not a question most organizations have formally answered.

The good news — genuinely — is that compliance is achievable. It requires intention, not complexity. And it does not require your organization to become a cybersecurity operation.

Hand-drawn architecture diagram titled The PF TECH Sovereignty Stack: five horizontal layers — Infrastructure (Supabase Canada, Cloudflare, Vercel), Automation (n8n, German jurisdiction), De-identification Layer (reference IDs only), AI Processing (enterprise API, non-retention), Output (human-reviewed) — with a bold yellow rule between Layer 3 and Layer 4 marking the de-identification boundary.
The PF TECH sovereignty stack — every layer chosen for jurisdiction, openness, and exit-path clarity.
Chapter 04

How We Build for Sovereignty at PF TECH

Concrete examples are more useful than principles alone.


Every architecture decision we have made in building PF TECH's own infrastructure — and in designing TERN — is anchored to sovereignty principles. I want to make these concrete, because concrete examples are more useful than principles alone.

n8n over Zapier

Open-source automation platform, headquartered in Germany, operating under European data privacy law. Zapier is U.S.-based — jurisdiction matters. We can inspect the code and host it ourselves.

Supabase over Firebase

Canadian data residency for our most sensitive data, on an open-source Postgres foundation. Firebase routes through U.S. infrastructure by default. Export and migrate at any time.

De-identify before AI processing

Only de-identified data passes to any AI model. The AI sees transaction IDs and amounts — never donor names, addresses, or giving histories. This is not a feature — it is the architecture.

Enterprise API only

Every AI provider in our stack has explicit non-retention and non-training agreements for API-accessed data. Never consumer products.

Document and test exit paths

Every tool we use has a documented, tested migration path. We know exactly what leaving looks like before we need it. Lock-in is the single most expensive tech decision you can make.
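The de-identification principle above can be sketched in a few lines of Python. This is a hypothetical illustration, not PF TECH's actual implementation; the field names, the pseudonym scheme, and the salt handling are all assumptions chosen to make the idea concrete.

```python
# Hypothetical sketch of a de-identification boundary before AI processing.
# Field names and the pseudonym scheme are illustrative assumptions, not
# PF TECH's implementation.
import hashlib

# Fields that must never cross the boundary to an external model.
PII_FIELDS = {"donor_name", "email", "address", "giving_history"}

def pseudonym(value: str, salt: str) -> str:
    """Stable, non-reversible reference ID derived from a salted hash."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

def de_identify(record: dict, salt: str) -> dict:
    """Return only what the AI layer is allowed to see: a reference ID
    plus non-identifying transaction fields."""
    safe = {k: v for k, v in record.items() if k not in PII_FIELDS}
    safe["ref_id"] = pseudonym(record["donor_name"], salt)
    return safe

record = {
    "donor_name": "Jane Doe",
    "email": "jane@example.org",
    "address": "123 Main St",
    "giving_history": ["2023-05", "2024-05"],
    "amount": 250.00,
    "currency": "CAD",
}
clean = de_identify(record, salt="org-secret-salt")
# `clean` now holds only ref_id, amount, and currency, so it is safe to
# pass to an external model under a non-retention agreement.
```

Because the reference ID is a one-way hash, the AI provider cannot recover the donor's identity, while your own systems can re-join the model's output to the original record.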

Chapter 05

What This Looks Like in Practice

Not a technology replacement project — an audit.


A practical starting point for any organization is not a technology replacement project. It is an audit.


  1. Inventory every application your team uses. For each one, answer four questions: What data does it hold? Where is that data physically stored? What is the vendor's data breach history? And what would it take to leave? Most organizations that go through this exercise are surprised by what they find. Sensitive operational data in tools that nobody officially approved. Personal information in cloud applications with no data residency controls. Critical workflows dependent on a single vendor with no documented exit path.

  2. Practice intentional procurement. Before adding any new tool — especially any AI tool — evaluate it against sovereignty criteria. Free tools that process sensitive data deserve the most scrutiny, not the least. The cost of a data breach, a vendor dispute, or a ransomware recovery dwarfs the cost of a paid alternative with better privacy architecture.

  3. Build a Minimum Viable Policy. Not a perfect, comprehensive AI governance framework produced by a committee over eighteen months. A working set of guardrails that lets your organization engage with AI today, responsibly, while the fuller framework develops. Jason Shim of the Canadian Centre for Nonprofit Digital Resilience framed this well at the CPA Ontario Not-for-Profit Conference — the Minimum Viable Policy concept is one of the most useful frameworks I've encountered for getting organizations past the freeze.
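The four audit questions from step 1 map naturally onto a simple inventory record. The sketch below is illustrative only; the application names, fields, and flag logic are assumptions, not a prescribed format.

```python
# Minimal sketch of the application audit from step 1. Each tool answers
# the same four questions; the flag logic is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class AppRecord:
    name: str
    data_held: str        # what data does it hold?
    residency: str        # where is that data physically stored?
    breach_history: bool  # has the vendor had a breach?
    exit_path: str        # what would it take to leave? "documented" / "none"

def needs_review(app: AppRecord) -> bool:
    """Flag apps holding personal data outside Canada, apps whose vendor
    has a breach history, and apps with no documented exit path."""
    return (
        ("personal" in app.data_held and app.residency != "Canada")
        or app.breach_history
        or app.exit_path == "none"
    )

inventory = [
    AppRecord("Donor CRM", "personal donor data", "USA", False, "none"),
    AppRecord("Accounting", "financial records", "Canada", False, "documented"),
]
flagged = [a.name for a in inventory if needs_review(a)]
# flagged == ["Donor CRM"]: personal data outside Canada, no exit path.
```

Even a spreadsheet with these four columns works; the point is that every tool gets the same questions asked of it, and the answers are written down.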


Organizations that engage with AI thoughtfully — with a sovereignty lens, a clear policy, and an understanding of where their data goes — are not less competitive than those that adopt everything uncritically. They are more resilient. And the organizations that freeze entirely gain the benefits of neither approach.

Build your AI governance framework

The AI & Data Governance Advisory is a standing retainer that equips non-profit leadership teams and boards to engage with AI thoughtfully, safely, and in compliance with PIPEDA and provincial privacy frameworks. We work with your board, your leadership team, and a designated internal AI adoption lead to build the frameworks your organization needs — collectively, not for you.


About the author

Greg Zatulovsky

Founder & CEO, PF TECH

Greg founded PF TECH to multiply the operational capacity of purpose-driven organizations. CPA with fifteen-plus years in non-profit finance, operations, and technology. Writes from inside the work — practitioner voice, not pitch deck.

More reading

A weathered steel guardrail running along a mountain pass cliff edge at golden-hour dawn, with the words CONTROLS BEFORE CAPABILITY rendered in bold caps across the darkened lower band.
AI automation · guardrails · non-profit finance

Your Accounting System Needs a Bouncer, Not a Butler

Why the automation story needs a second chapter

There's a version of the AI-in-non-profits story being told right now that goes like this: AI will automate the tedious back-office work, free up your staff, and let you focus on your mission. That version is true — I believe it deeply and I'm building the tools to make it real. But there's a part of the story that isn't being told, and the gap between those two versions is where a lot of organizations are going to get hurt.

7 min read
A line of wind-bent pines on a rocky shoreline at dawn, shaped by years of storms — with "Resilience is a verb." rendered in serif display over the soft left third.
Non-Profit Leadership · AI Governance · Financial Resilience

Resilience is a Verb: What the CPA Ontario Not-for-Profit Conference Revealed

Field Note · CPA Ontario NFP Conference 2025

In November 2025 I hosted the CPA Ontario Not-for-Profit Conference. The theme was Resilient Futures. Seven speakers, eight hours, topics ranging from AI adoption to board governance to front-line service delivery. What I took away was more urgent than the theme: the sector's biggest obstacle to building resilience is not funding or technical skills. It is the calculus that makes staying stuck feel safer than taking the next step.

6 min read
A traditional Canadian stone-and-timber watermill at dawn beside a clear running river, mid-restoration — new cedar fitted beside weathered beams, scaffolding still in place, hand tools on a workbench, deep forest rising behind.
PF TECH · Non-Profit Tech · Origin Story

Building a Better Engine: The PF TECH Origin Story

Origin Story · PF TECH

PF TECH didn't start at a whiteboard. It started in a filing cabinet — in 2013, watching a colleague print every email he received and file the paper copies in rows of cabinets along the wall. Nearly two decades later, that pattern is still the sector's central problem. Here's what I built because of it.

12 min read