Over the last few years, I’ve had hundreds of conversations with nonprofit leaders and funders about artificial intelligence. One theme keeps coming up: most funders don’t know how to evaluate AI projects. This isn’t a controversial statement, but it’s one of the biggest blockers standing in the way of impact.
And that matters, because the demand for AI is overwhelming. The Center for Effective Philanthropy’s new report, “AI With Purpose,” finds that while nearly 90 percent of nonprofits are interested in expanding their use of AI, 90 percent of foundations say they are not providing any support for AI implementation. Even more surprisingly, among the roughly 10 percent of foundations that are funding AI projects, much of that funding may be unintentional: it arrives through general operating support rather than through a deliberately funded AI initiative. The result is an enormous supply-demand gap.
Frankly, as a nonprofit with AI at the heart of our work, we knew there was a gap — but we didn’t realize it was this massive.
What is driving such a chasm? What we hear over and over is that funders are unsure how to evaluate AI projects. Indeed, one of the top reasons funders themselves noted in the CEP report for not providing support for AI was that they had “not thought about it/wouldn’t know where to start.”
Not knowing how to tell whether an AI project is safe, high-quality, effective, or feasible has a chilling effect on philanthropic decision makers. The resulting paralysis risks sidelining the very organizations best positioned to harness AI for equity and impact: those who are asking for support and know what they need.
So how should funders evaluate AI projects? Here’s what we’ve learned on the front lines at CareerVillage.org, where we’ve been implementing AI systems for years to help job seekers navigate the changing labor market.
Challenge 1: Funders are nervous about AI risks and don’t know how to assess them.
Solution: AI risk management is tangible and process-based, not abstract.
The “AI With Purpose” report makes clear that nonprofit and foundation leaders share a common set of concerns about AI: data security, misinformation, staff expertise, and bias. These are real concerns. But the way to address them is not through a philosophical framework — it’s through practical processes and people.
The right question isn’t “Is this project risky?” The right questions are:
- What specific risks are most important in this project?
- How will the nonprofit know if those risks show up?
- Who, specifically, will take action if they do?
On the front lines, we’ve found that detecting risks matters even more than most operators realize, and when sound risk management processes are in place, they are easy for funders to recognize. The goal is not to prevent every conceivable problem up front; it is to detect problems quickly and remediate them. Funders should look for proactive systems in which staff or automated evaluators can catch a problem before a beneficiary experiences it. Those systems already exist, and they’re feasible for nonprofits to deploy, as we have.
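To make this concrete, here is a minimal sketch, in Python, of what a detection-and-remediation process can look like. Every name in it is hypothetical (the risks, the detectors, the owners), and a real deployment would use far more sophisticated classifiers, but the shape is what funders should look for: each risk has a detector and a named owner, and flagged outputs are held back before a beneficiary ever sees them.

```python
# A minimal, illustrative sketch of detection-and-remediation.
# Risk names, detectors, and owners below are hypothetical placeholders;
# real systems would use trained classifiers or model-based evaluators.
from typing import Callable, NamedTuple

class Risk(NamedTuple):
    name: str
    detect: Callable[[str], bool]  # True if the risk appears in the text
    owner: str                     # who, specifically, acts when it does

RISKS = [
    Risk("pii_exposure", lambda t: "social security" in t.lower(), "data-security lead"),
    Risk("misinformation", lambda t: "guaranteed job" in t.lower(), "program director"),
]

def review_before_delivery(reply: str) -> tuple[bool, list[str]]:
    """Hold flagged replies so a beneficiary never sees the problem."""
    alerts = [f"{r.name}: notify {r.owner}" for r in RISKS if r.detect(reply)]
    return (len(alerts) == 0, alerts)

ok, alerts = review_before_delivery("This certificate is a guaranteed job offer.")
print(ok, alerts)  # -> False ['misinformation: notify program director']
```

A funder doesn’t need to read code to apply the lesson: ask who fills in each row of that risk table, and who gets the alert when a detector fires.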
Challenge 2: Funders don’t know how to evaluate AI quality.
Solution: Look for a rigorous “eval” statement.
In AI, quality evaluation isn’t hand-wavy. At CareerVillage, we’ve developed 183 custom evaluation indicators to assess the quality of our AI career coach’s outputs, aligned with our values of trustworthiness, encouragement, and clarity. We’ve tested the reliability of every one of these indicators with the help of certified expert career coaches, and we have staff working as “Coach’s coaches” overseeing quality and improvement. Automated quality monitoring using “evals” is a solved problem; it is the de facto gold standard in the for-profit AI sector. This system is the quality assurance backbone of our work, allowing us to identify weaknesses, improve iteratively, and scale with confidence.
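For readers who haven’t encountered evals before, here is a minimal sketch of the pattern in Python. The indicator names and keyword checks are toy stand-ins invented for illustration (production evals typically use automated graders whose reliability has been tested against human experts), but the structure is the point: every AI output is scored against named indicators, and failures are surfaced for review.

```python
# A minimal, illustrative "eval" harness. Indicator names, checks, and
# thresholds here are assumptions for this sketch, not any real rubric.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Indicator:
    name: str                      # e.g. "encouragement"
    score: Callable[[str], float]  # returns a score in [0, 1]
    threshold: float               # minimum acceptable score

def run_evals(reply: str, indicators: list[Indicator]) -> dict[str, bool]:
    """Score one AI reply against every indicator; True means it passed."""
    return {ind.name: ind.score(reply) >= ind.threshold for ind in indicators}

# Toy keyword checks stand in for automated graders.
indicators = [
    Indicator("encouragement", lambda r: 1.0 if "you can" in r.lower() else 0.0, 0.5),
    Indicator("clarity", lambda r: 1.0 if len(r.split()) <= 150 else 0.0, 0.5),
]

results = run_evals("You can build on your retail experience by ...", indicators)
failures = [name for name, passed in results.items() if not passed]
print(failures or "all indicators passed")  # -> all indicators passed
```

Run across thousands of real conversations, a harness like this turns “quality” from a feeling into a dashboard.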
Funders should ask: What evaluation methods are you using? Are they automated? Are they tied to accountability systems? A viable AI project plan will have answers to these kinds of questions.
Challenge 3: Funders don’t know how to evaluate AI efficacy.
Solution: Use the same outcome measures you’ve always used, but extend your time horizons.
AI doesn’t change the mission. It changes how the mission gets delivered. At CareerVillage, we still measure career self-efficacy, career adaptability, and career outcomes. The difference is that now AI helps us embed that measurement into the application itself. The metric hasn’t changed; the method has.
Other nonprofits are seeing the same pattern. Take Tarjimly, an AI-powered translation service. They’ve long measured their impact by how quickly refugees and asylum seekers receive translation support. That core metric hasn’t changed — but AI tools now allow them to deliver translations faster and at greater scale. The outcome is the same; the delivery is different.
What does need to change is funders’ tolerance for early-stage results. Early efficacy might be lower than you’d hope; that’s normal for any new experiment. Funders should extend their time horizons, increase their risk tolerance, and underwrite intentional experimentation. Without that, nonprofits won’t have the room to learn.
Challenge 4: Funders don’t know how to evaluate AI capacity.
Solution: Focus on engineering depth and organizational agility.
It’s true — many nonprofits have a capacity gap when it comes to AI. And it’s understandable that funders aren’t sure which skills matter most. Here’s what we’ve seen:
- For AI-powered products, strong backend engineers are crucial. You don’t need a staff of AI Ph.D.s.
- For AI-enabled operations, what matters is empowering AI-curious staff. With today’s no- and low-code platforms (Salesforce Agentforce, ServiceNow AI Agents, Zapier, Claude Code, Google’s suite of Gemini integrations, and more), anyone can start experimenting.
But capacity isn’t just about technical skill — it’s about agility and adaptability. At CareerVillage, these values are core to our culture. They’ve enabled us to pivot quickly as technology has evolved, embedding new AI capabilities into our work without losing sight of our mission. For example, when new evaluation tools emerged, our team rapidly incorporated them into our systems — strengthening both quality assurance and user trust. Organizations that demonstrate this kind of adaptability are far better positioned to use AI responsibly and effectively.
Funders should underwrite experimentation and upskilling, and they should place high value on organizations that champion agility and adaptability. Those cultural traits are just as important as technical expertise.
To Move Forward, Value Questions Over Expertise
Funders don’t need to become AI experts to evaluate AI projects. But they do need to ask better questions — about risks, quality, outcomes, and capacity. The nonprofit sector is already leaning in. The CEP report makes clear that nonprofits want AI support, but funders aren’t providing it or even hearing that need yet. The Fast Forward and Google.org white paper, “How Philanthropy Can Lead in the Age of AI,” corroborates this finding: 80 percent of nonprofits see potential in generative AI, but nearly half aren’t using it because they lack familiarity and trained staff.
The sector is at an inflection point. If funders remain frozen by uncertainty, they risk not just missing an opportunity but actively holding back progress. Nonprofits are already experimenting, already learning, already building safeguards. They are best positioned to harness AI for equity and impact — if only funders will meet them halfway.
This is why funders need to act now. Waiting until every uncertainty is resolved isn’t prudent — it’s paralyzing. AI is already here. The question isn’t whether funders will encounter AI in their portfolios, but whether they will be prepared to evaluate it wisely. And paralysis, at this moment, risks leaving nonprofits — the ones closest to communities and missions — without the support they need to responsibly shape how AI is used for social good.
Jared Chung is founder and executive director of CareerVillage.org.
