Results and Evidence in Monitoring and Evaluation

“Hard evidence, rigorous data, tangible results, value for money—all are tantalizing terms promising clarity for the international development sector.” Rosalind Eyben, Emeritus Fellow at the University of Sussex’s Institute of Development Studies, thus summarizes the central goal of development banks, agencies, organizations, and consulting firms alike: not only to spur development in the Global South, but to provide proof of these advances.

Eyben suggests that over the past few decades, development practitioners have faced a growing necessity to produce hard data in order to demonstrate that their projects and interventions are achieving their expected outcomes and results. She labels this obligation to provide tangible data the “results and evidence framework.” It is difficult to deny that the international development sector depends on statistics, progress reports, impact evaluations, and performance indicators. Particularly in the context of ongoing competition for limited resources, aid recipients, NGOs, think tanks, donor organizations, and private firms must produce hard data that demonstrates their utility and worthiness within the development sphere.

Nowhere is this framework more applicable than in the field of Monitoring and Evaluation (M&E), which relies on the quantification and measurement of development and humanitarian aid in practice. M&E, necessarily, depends upon producing evidence and data that can be used to evaluate the outcomes of policies and programs. Performance indicators, Logical Frameworks, and tools such as quantitative surveys are often used in M&E to operationalize or measure the outcomes of development programming. Nevertheless, Eyben argues that these artifacts are often politicized, misused, or oversimplified, creating challenges for M&E experts. Indicators remain fragmented between countries, regions, and even organizations, resulting in a lack of clear and universal criteria for evaluation. While certain aspects of development may be easy to measure, others (e.g., human well-being or quality of life) often evade precise quantification. Furthermore, a narrow focus on numbers may neglect the wider non-quantifiable impacts of a project. Simultaneously, promising interventions or programs that cannot produce reliable data may be passed over for alternatives that are easily measurable.

By the end of the paper, it becomes clear that M&E is left in a difficult position. On the one hand, evaluation necessitates results and evidence to ensure that development practice is efficient and equitable. On the other hand, we must ask ourselves whether current M&E practices are reinforcing an obsession with quantification and hard data. This paradox leaves us with many important questions: What tools and mechanisms are available beyond hard data and measurable results? Is it possible to utilize these mechanisms in an apolitical and appropriate manner? What should be the role of data in the development sector going forward? These are difficult questions, with few straightforward answers.

While Eyben might suggest a radical transformation of the data-driven development nexus, further reflection suggests that quantitative results and evidence should not be eliminated from the development discourse. Indeed, these mechanisms and tools continue to play an essential role within the field. Instead, this article pushes us to think about data critically and ensure that information is being generated in an objective and purposeful manner. As third-party bodies, Trust Consultancy and other TPM firms are ideally situated to be the producers and disseminators of data and hard evidence. Trust employs knowledgeable, informed, and invested specialists who value independence and quality above all. Few institutions within the development nexus are able to provide autonomy and deliberation in the same manner as TPM firms. Thus, Trust and its employees are in a unique position to be at the forefront of the discussion on quantification, results, and evidence.

Going forward, this will involve a growing awareness of the dangers mentioned above, as well as of potentially hegemonic shifts in the discourse. Beyond awareness, it is essential that practitioners continue to question the methods employed in M&E and critically evaluate the role of facts and data. Eyben does suggest that there is hope in the field, and points to the “development practitioners who are finding room for maneuvering to push back and create the space for alternative framings.” With these words, it becomes apparent that there are alternative ways to think about results and evidence in M&E. This is something that TPM firms like Trust are in a perfect position to pursue going forward.

Rosalind Eyben’s original article can be found at http://bigpushforward.net/wp-content/uploads/2011/01/Uncovering-the-Politics-of-Evidence-and-Results-by-Rosalind-Eyben.pdf 
