The Evidence Revolution in Philanthropy
Over the past 20 years, the philanthropic sector has undergone a significant transformation in how it thinks about evidence. Driven by the influence of effective altruism, behavioral economics, randomized controlled trial research, and the broader evidence-based policy movement, major funders have increasingly shifted their portfolios toward organizations and programs with strong evidence bases — programs where rigorous research has demonstrated that the intervention actually produces the intended change in beneficiaries' lives. This shift has been most pronounced in global health (where PEPFAR, Gavi, and the Global Fund have built systematic evidence requirements into their funding frameworks), in education (where the What Works Clearinghouse and J-PAL's Education Programme have produced extensive evidence on which interventions improve learning outcomes), and in economic development (where rigorous impact evaluations have overturned many assumptions about what works and what doesn't). For non-profits across all sectors, this evidence revolution means that building an evidence base for your program model is increasingly a prerequisite for competitive grant applications, not a luxury for well-resourced organizations.
Levels of Evidence and What Funders Are Looking For
Evidence exists on a spectrum from anecdote to gold-standard randomized controlled trial, and understanding where your program sits on this spectrum — and what would be required to strengthen it — is the foundation of an evidence-building strategy. At the lowest end are program stories and case studies: compelling and humanizing but not suitable for causal claims about impact. Above that are process and output data: showing that activities were implemented and services delivered, but not demonstrating that those services produced the intended changes. Program monitoring data — showing trends in beneficiary outcomes over time — is the next level, but without a control group, it cannot rule out alternative explanations for observed improvements. Quasi-experimental designs that compare outcomes between your program participants and a similar comparison group provide stronger causal evidence. Randomized controlled trials, where eligible beneficiaries are randomly assigned to receive your program or not, provide the strongest causal evidence but are expensive and methodologically demanding. Most funders don't require RCT evidence for every program — but being able to articulate where your evidence sits on this spectrum and what you're doing to strengthen it demonstrates the kind of intellectual seriousness that sophisticated funders value.
Practical Evidence Building Without Research Budgets
Building your organization's evidence base doesn't necessarily require expensive external evaluation studies, though those are valuable when accessible. There are several practical approaches to strengthening your evidence base that are within reach of most organizations. First, collect better baseline and endline data: if you measure beneficiary outcomes before your program begins and after it ends, you have the basic raw material for pre-post comparisons that constitute at least a preliminary evidence base. Second, partner with academic researchers: many universities have research centers whose faculty are actively seeking non-profit partners for program evaluations; such partnerships give the researcher a study population and publication opportunities while giving the organization a rigorous evaluation at minimal direct cost. Third, participate in multi-site evaluations organized by sector networks: pooled evaluations that assess similar program models across multiple implementing organizations produce stronger evidence than single-site studies while distributing the cost and burden across multiple organizations.
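To make the first approach concrete, here is a minimal sketch of the basic analysis baseline/endline data makes possible: a paired (pre-post) comparison of the same participants' scores before and after the program. The function name `paired_t_test` and the score data are illustrative inventions, not from any real evaluation, and the example uses only Python's standard library. Note that a pre-post comparison, as the section above explains, cannot rule out alternative explanations for improvement; it is a starting point, not causal proof.

```python
import math
import statistics

def paired_t_test(baseline, endline):
    """Paired t-test on before/after scores for the same participants.

    Returns the mean change and the t statistic. With n - 1 degrees of
    freedom, |t| greater than roughly 2 indicates significance at about
    the 5% level for moderate sample sizes.
    """
    assert len(baseline) == len(endline), "each participant needs both scores"
    diffs = [after - before for before, after in zip(baseline, endline)]
    n = len(diffs)
    mean_diff = statistics.mean(diffs)
    sd_diff = statistics.stdev(diffs)  # sample standard deviation of the changes
    t_stat = mean_diff / (sd_diff / math.sqrt(n))
    return mean_diff, t_stat

# Made-up reading-fluency scores for ten participants, before and after.
baseline = [42, 55, 38, 61, 47, 50, 44, 58, 39, 52]
endline  = [49, 60, 45, 66, 52, 57, 46, 64, 44, 58]

mean_change, t = paired_t_test(baseline, endline)
print(f"mean change: {mean_change:.1f} points, t = {t:.2f}")
```

In practice an organization would run this over its monitoring data each program cycle; a statistician or academic partner can then extend the same data to the stronger comparison-group designs described above.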
Communicating Evidence to Funders Effectively
Having strong evidence is necessary but not sufficient — you also need to communicate it effectively to funders who are reading dozens of proposals that all claim excellent results. The most credible evidence communication is specific, humble, and honest about limitations. Instead of "our program has proven impact," write: "A 2023 quasi-experimental evaluation conducted by the University of Nairobi found that program participants achieved 34% higher reading fluency scores than a matched comparison group after six months of participation (n=450, p<0.01). We are planning a randomized study to test whether these results replicate at scale in new geographies." This framing — naming the evaluation, the methodology, the sample size, the finding, and the current limitations — is dramatically more credible than a vague impact claim and signals to sophisticated funders that you understand what you know and what you're still learning.