I am in the process of changing the topic of my final project for my Dissemination and Implementation in Health class, so I was digging around, looking for topic ideas. Two of my public health interests in the last six months have been evaluation and D&I, and I found this blog post from Dr. Jyotsna Puri, Deputy Executive Director and Head of Evaluation of the International Initiative for Impact Evaluation, writing for Oxfam. Oxfam has a renewed emphasis on quality evaluation (exciting!) and on ensuring that the programs they implement are meaningful uses of time, money, and other resources. Not only that, but they want to make these evaluations public on the internet. There are plenty of arguments against this rooted in organizational culture: one is that it will cost Oxfam their funding. Another is that it amounts to throwing field managers under the bus. Yet another is that evaluations paint an insufficiently detailed picture of the program and its effects.
Transparency, actual transparency and not just lip service to the ideal, oh, that’s scary stuff. Donors want to know where their money goes. Governments would like to know the same thing.
Dr. Puri outlines a five-point list of things for NGOs to consider when setting up their evaluation system.
Point 1: Have a good theory of change/causal pathway/impact pathway, or whatever you want to call it. The name doesn't matter (a rose by any other name!)
Point 2: Put monitoring and information systems in place. Track process, output, and some outcome indicators across program areas. There should be a list of performance monitoring indicators that speak to the different sectors (four, in the case of Oxfam).
Point 3: Think about measuring attributable change. Can you for instance:
-Assign the intervention randomly from the beginning without losing sight of your final goal?
-Identify counterfactual sites and start collecting data there? Pros: great reporting to donors and rigorous information. Cons: more expensive than monitoring data alone, and it requires a high level of scrutiny in comparison sites, especially if you use ex post techniques.
-Use other methods to establish causality? (Which ones?)
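The counterfactual-site option above can be made concrete with a difference-in-differences calculation, one common way to estimate attributable change from before/after data in program and comparison sites. This is a minimal sketch with hypothetical numbers, not anything from Oxfam's actual evaluations:

```python
# Difference-in-differences: the change in program sites minus the
# change in comparison (counterfactual) sites. The comparison sites'
# trend stands in for what would have happened without the program.

def diff_in_diff(treat_before, treat_after, comp_before, comp_after):
    """Return the estimated program effect on an outcome indicator."""
    return (treat_after - treat_before) - (comp_after - comp_before)

# Hypothetical outcome means (say, % of households using a safe water
# source): program sites went from 40% to 55%, comparison sites from
# 42% to 47%, so 5 points of the change would have happened anyway.
effect = diff_in_diff(treat_before=40.0, treat_after=55.0,
                      comp_before=42.0, comp_after=47.0)
print(effect)  # 10.0 percentage points attributable to the program
```

The key assumption (and the reason comparison sites need scrutiny) is that both groups would have followed parallel trends absent the intervention.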
Point 4: Undertake cost and cost-effectiveness studies. What are the priced and non-priced inputs in the project? Think about whether you want to replicate the project in other places, or scale it up. (And no, it can't be calculated from your budget statements alone.)
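The point about priced and non-priced inputs can be sketched as simple arithmetic: a cost-effectiveness ratio is total cost divided by units of outcome, where non-priced inputs (volunteer time, donated goods) are valued at an assumed shadow price so they don't vanish from the analysis. The figures below are entirely hypothetical:

```python
# Minimal cost-effectiveness ratio: cost per unit of outcome achieved.
# Non-priced inputs are included at an imputed value, which is why
# budget statements alone understate the true cost of a program.

def cost_effectiveness(priced_costs, non_priced_costs, outcome_units):
    """Return cost per unit of outcome, counting all inputs."""
    total_cost = priced_costs + non_priced_costs
    return total_cost / outcome_units

# Hypothetical: $80,000 of budgeted spending, $20,000 in imputed
# volunteer time and donated supplies, 500 children fully vaccinated.
print(cost_effectiveness(80_000, 20_000, 500))  # 200.0 dollars per child
```

A ratio like this is what makes comparisons across sites, and decisions about scale-up, possible.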
Point 5: Focus on implementation research. Systematically document implementation factors, and put together a protocol containing questions relevant to all stages of the evaluation. This is where participatory methods, focus groups, observational scrutiny, and process research should come in; they should also inform your theory of change.
As a student who has been learning things like this in my master's program, I find this exciting stuff. I love the idea of bridging the implementation gap between practice and research, but I especially love seeing organizations really trying to link policy and research.