Beyond Metrics: Time to Rehumanize Development Evaluations

We need to view development evaluations differently: focusing not just on what we measure, but also on why, how, and for whom we measure. It's time we put people, not just data points, at the center.
As I complete 10 years at PCI India this month, I find myself reflecting on what an incredible decade it has been: transitioning from a Results Adviser at a bilateral aid agency to leading the Monitoring, Learning, and Evaluation (MLE) function at an organization primarily known for on-ground implementation and technical assistance to large-scale, government-led flagship initiatives. The shift hasn't just been professional; it has been deeply personal and transformative.
Fortunately, I landed in an enabling environment. PCI’s work opened up a wealth of opportunities for research, real-time measurement, and periodic assessments. Sure, questions around the independence of our measurement efforts came up now and then, but our commitment to methodological rigor and transparency helped us earn our place in the research and evaluation space. Over time, we grew into a robust, multidisciplinary team—including seasoned professionals with rich experience in both MLE and applied research.
With this capability, we actively started pursuing research and evaluation opportunities. As we combed through RFPs day in and day out, one trend stood out: a steady demand for small-scale evaluations, or rather, assessments and quick feedback on program performance. Many of these evaluations didn't quite reflect the shifting nature of development programming.
While the sector increasingly embraces human-centered design for implementation, it still relies on rigid, conventional frameworks for evaluation. In this blog, I argue that we need to rehumanize development evaluations: not just in terms of what we measure, but also why, how, and for whom we measure. It's time we put people, not just data points, at the center.
Moving Beyond Numbers: Seeing the People Behind the Data
Traditional evaluation methods often distill change into graphs and figures. But what about the real change? It’s messier and more human. It’s about people’s lives, dreams, struggles, and resilience. Humanizing evaluation means looking beyond metrics to truly understand lived experiences. Every data point represents a person, a family, or a community striving for something better.
There have been attempts to address the issue (realist evaluations, for instance), but they remain scattered and haven't gained the traction they deserve. This is partly due to a lack of readily available expertise, and partly to a lack of patience in our field. In an age of tweets and two-minute reels, when readers' attention spans shrink by the day, evaluation methods that require depth, time, and reflection often get sidelined. In the last year, for example, I found only a couple of organizations expressing a strong desire to use methods like Most Significant Change or Outcome Harvesting. That was encouraging, but such cases remain exceptions rather than the norm.
Often, we chase objectivity and gravitate toward numbers because they seem harder to dispute than stories. But numbers without context can be just as misleading. Mixed methods could bridge this divide, yet too often the mixing feels half-hearted. This is in no way a tussle between quantitative and qualitative methods. It's about asking: are we centering human stories and needs in our evaluations? Who is defining success, and on whose terms?
Centering Local Voices and Participation
Perhaps the most meaningful shift we can make is toward locally led, participatory evaluations. When communities help shape and interpret evaluations—through tools like community scorecards or indigenous knowledge frameworks—the process becomes far more than a means of accountability. It becomes a tool for empowerment and co-learning.
We have tried this at PCI India, involving communities in evaluations and even in routine data collection. While the experience has been mixed, it has also been deeply instructive. It reinforced a core truth: for participatory evaluation to work, one needs to invest more in people, time, and capacity building. There are no shortcuts. One also needs to treat initial hiccups as stepping stones. It is like learning to ride a bicycle or to swim: early failures say little about long-term outcomes. So let's not dismiss participatory approaches just because early attempts fall short. We need to keep at it, with patience and humility.
Prioritizing Equity and Justice
To humanize evaluation is to ask hard, sometimes uncomfortable, questions: Who benefits from development? Who is left out?
Evaluations must be gender-transformative and intersectional. We must make space for the voices of those at the margins, whether marginalized by gender, caste, class, disability, or identity. Evaluators must be willing to redefine success through a justice-oriented lens. I have come to believe that if a solution works for the most marginalized, it is likely to work for everyone. So why not start there? If resources are limited, let's prioritize testing with those most in need.
And let's not forget long-term effects. Evaluations often suffer from recency bias, focusing on the immediate. But climate change is forcing us to think intergenerationally: we must ask how today's interventions will affect not just today's communities, but tomorrow's planet as well.
Ethics, Data Sovereignty, and Trust
With more data at our fingertips, our responsibility has grown. Humanized evaluation demands that we uphold ethical standards, secure informed consent, and respect community rights, especially when it comes to personal data. Thankfully, I see positive changes here. Most researchers I meet take ethics seriously, not just to meet IRB requirements, but out of genuine respect for the communities they engage with. It's quite encouraging!
If the Government of India’s move to strengthen data protection through the new Digital Personal Data Protection Act is implemented well, it could play a crucial role in ensuring communities retain agency and ownership over their data—a core principle of humanized evaluation.
Conclusion
A truly humanized approach to development evaluation is rooted in empathy, participation, and local wisdom. It values people over numbers, equity over efficiency, and learning over mere accountability. As evaluators, let's not just measure change; let's understand it. Let's co-create it. And above all, let's ensure that the people whose lives we seek to improve remain at the very heart of our work.
Author: Dr Sudipta Mondal, Senior Director – Research and Organizational Effectiveness, PCI India