Monitoring, Evaluation, Impact and Learning

Gamos takes a mixed-method approach to Monitoring, Evaluation, Impact and Learning (MEIL).  We believe that MEIL serves mixed purposes and audiences, ranging from formative learning to accountability, both upwards to donors and downwards to beneficiaries.  Inevitably, no single approach can address the needs of all these different audiences.

While we are willing to respond to Terms of Reference for Evaluations, we have often pushed our clients to think beyond traditional evaluation methodologies.  Two key principles work together to inform our work.

Participation – We believe in the value of participatory approaches.  Ultimately, evaluations should not be extractive exercises for the donor; they should contribute to programme understanding, both informing and forming the work, and should encourage, motivate and spur staff on to new, innovative approaches.  We have undertaken the standard participatory approaches at community level and have even contributed to a manual on the subject (link here).  However, we also see value in exercises such as self-evaluation among programme staff, outcome mapping for planning and assessment, and focus groups among wider stakeholders.  In this we have been innovative, using network mapping to identify the key stakeholders (Link here), as sketched below.
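
As an illustration of the network-mapping idea, here is a minimal sketch in Python.  It assumes interview data listing who each stakeholder reported working with; the stakeholder names and links are hypothetical, and degree centrality is just one possible measure of who is "key".

```python
# Minimal stakeholder network-mapping sketch (hypothetical data).
import networkx as nx

# Hypothetical "who works with whom" links gathered from interviews.
links = [
    ("NGO field office", "Village committee"),
    ("NGO field office", "District health team"),
    ("Village committee", "Women's savings group"),
    ("District health team", "Village committee"),
]

G = nx.Graph(links)

# Degree centrality ranks stakeholders by how connected they are;
# highly central actors are candidates for key-stakeholder follow-up.
for name, score in sorted(nx.degree_centrality(G).items(),
                          key=lambda kv: -kv[1]):
    print(f"{name}: {score:.2f}")
```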

Robust quantitative data (analysed for linkages) – Although we believe in the value of self-generated qualitative data, we feel that qualitative data needs validation across a wider sample of people.  We therefore undertake many household surveys.  Our surveys tend to use a mix of internationally agreed indicators and local, programme-contextual indicators.  The latter set is often generated through focus group discussion: statements made by the focus group are checked non-parametrically across a wider sample of the programme population.  In our household surveys we have been innovative in our analysis.  We champion Difference in Differences as a means of observing impact and identifying the added value of a programme: the change in an outcome among programme households is compared with the change among comparison households over the same period, so that the difference between the two changes isolates the programme's contribution (see the sketch below).  We feel that many agencies undertake household surveys only to report the percentages of each variable; over the last ten years, statistics software has enabled a much more nuanced analysis of the links between variables.
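
To make the Difference in Differences logic concrete, here is a minimal sketch in Python.  The file and column names ('outcome', 'treated', 'post') are hypothetical; it assumes a baseline and endline survey covering both programme and comparison households.

```python
# Minimal Difference-in-Differences sketch (hypothetical data).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("household_survey.csv")  # hypothetical file

# The coefficient on the treated:post interaction is the DiD estimate:
# the change in the programme group minus the change in the comparison
# group, which nets out trends common to both.
model = smf.ols("outcome ~ treated * post", data=df).fit()
print(model.summary().tables[1])

# The same estimate computed directly from the four group means.
means = df.groupby(["treated", "post"])["outcome"].mean()
did = ((means.loc[(1, 1)] - means.loc[(1, 0)])
       - (means.loc[(0, 1)] - means.loc[(0, 0)]))
print(f"DiD estimate: {did:.3f}")
```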

By combining participatory methods that generate qualitative data with parametric and non-parametric analysis of surveys (an example of the latter follows below), we feel that the development community can make new strides in understanding what works and what does not work in complex programmes.
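
As an example of the non-parametric side of this combination, the sketch below assumes a focus-group statement has been turned into an agree/disagree survey item; a chi-square test then checks whether agreement is independent of programme participation.  The column names are hypothetical.

```python
# Minimal non-parametric check of a focus-group statement (hypothetical data).
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.read_csv("household_survey.csv")  # hypothetical file

# Cross-tabulate programme participation against agreement with the statement.
table = pd.crosstab(df["participant"], df["agrees_with_statement"])

# Chi-square test of independence: a small p-value suggests the view
# expressed in the focus group is not held uniformly across the sample.
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```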

We have been exploring the use of new technologies for surveys, both mobile phones (link here) and Android tablets (link here).

We are also keen to see that such monitoring, evaluation and impact studies are used for learning.  Learning is often the forgotten step in the process, so we have explored using video and audio to supplement feedback.  We have also used learning techniques such as learning labs and ORID (Objective, Reflective, Interpretive, Decisional) reflections, neither of which takes much time but which, used regularly, can enhance learning considerably.

While we are keen on robust, rigorous data collection, we do not believe in randomised controlled trials: when working with people in poverty, we consider it unethical to raise hopes and expectations that are not fulfilled.