Types of Monitoring and Evaluation

There are many different types of M&E, and choosing the right one for your project is critical. The type of M&E you choose depends on the type of project you are undertaking and the context you are working in. This article outlines some of the different types of M&E and when and why you might choose each one.

Types of M&E are considered against the project goals, but also against the goals of the M&E process itself. We begin by considering the different types of M&E in terms of what the process of monitoring and evaluation hopes to achieve.

Process M&E

If you are conducting M&E for the primary purpose of ensuring efficacy of implementation, then you will focus on process M&E. Process M&E seeks to establish that the activities of the programme are conducted in accordance with the project plan: that the committed number of activities took place, and that they met the agreed specifications of quality or intensity. All M&E exercises should monitor programme activities against the basic activity and output indicators, even where a project is not seeking merely to evaluate efficacy against the implementation plan. Process M&E is what allows you to outline precisely what was done and which beneficiaries attended programme activities, important information for any M&E practitioner aiming to determine what basic actions or inputs are required to catalyze change.

Outcome or Impact Oriented M&E

If, on the other hand, your work seeks to explore whether outcomes and impacts were achieved, then you will be conducting more outcome- or impact-oriented M&E. Although impact evaluation and outcome evaluation are defined differently depending on the context or focus area of programmes, they can be loosely grouped into a mode of doing M&E which looks beyond the activities, toward change at the broader level. These types of evaluation ask whether the intended change was achieved, to what extent and why. While outcome evaluation might ask whether intended outcomes were achieved, impact evaluation will explore how well these were achieved in terms of lasting, measurable or observable impacts. An important consideration here is the significance of monitoring, and of explaining where outcomes were not achieved, or where unintended outcomes arose and what impact they had on the project.

A Summative Evaluation usually refers to an external evaluation conducted at project closure which seeks to explore more rigorously whether both process and outcome or impact targets were achieved, but it may not necessarily explain why any targets were missed.

Another way of exploring M&E types is to consider the social science and research methodologies underlying all M&E practice. This has a lot to do with the roots of social science, which is inclined to consider the complexity that comes with open systems, many participants and unexpected outcomes.

It is helpful, when deciding which type of M&E to apply, to consider the degree of complexity of your project. Are you working in a context where many of the outcomes you expect to see are within the direct control of your programme activities, for example a vaccination programme, or a programme involving a health assessment followed by the provision of a device? Other programmes are relatively direct in their delivery, but showing attribution of impact to the work you have done is somewhat more complex because it is not entirely within your sphere of influence. For example, you may be working on a literacy programme with a random sample of children: you can take attendance registers at your activities and assess literacy attainment, but you do not have a control group, and the children you see may or may not attend regular school and/or other literacy programmes running in the area. Or perhaps you are working on a project where what you are ultimately assessing is the well-being or mental health of your beneficiaries, or on a prevention programme, such as reducing Gender Based Violence, where measuring your impact may take time and may only be possible at the community level.

These examples all differ in their complexity, and this needs to be considered as you determine what is realistically measurable. There is no clear spectrum where a specific method of M&E maps to a specific project type, and sometimes it works best to combine methods, perhaps having an overall case study-based M&E system punctuated with highly quantitative assessments for certain programme aspects. This section explores some of the leading types of M&E which might set you on the right course in determining and designing the ideal M&E solution for your project.

Developmental Evaluation

Developmental Evaluation is useful when you want to fully embrace an M&E system which is iterative and critically reflects on a changing organization or programme over time. This type of evaluation is particularly useful for very complex projects, for example where the organization itself is run by members of the beneficiary community and aims to create broader change. It also works well where the programme is new in its design, and thus is still working deeply in the learning and discovery space. Design-based research can be a useful plug-in to developmental evaluation, keeping some key research questions in place as the organization and its work change and grow over time.

Realist Evaluation

This is a broader term for a set of evaluation approaches for complex systems. Some would consider developmental evaluation, above, as falling within this broad group of methods. The key is that Realist Evaluation is theory driven and, according to its founders, aims to define what might work for different groups and why it might work in some contexts and not others. It does not solely seek to measure whether something works or not. This brings reasoning and decision-making more fully into the consideration of programme implementation and success, which makes sense, especially in programmes which seek to create and understand behavioral change in beneficiary groups. If your programme depends on things like participants developing the intrinsic motivation to continue the actions, behaviors or skills handed over by the programme, or if your main source of information is surveying changes in mindsets, then this might be the best overarching approach for you. Remember, an overarching realist evaluation approach can be combined with outcomes approaches for a really compelling story of impact.

Randomized Controlled Trials

A Randomized Controlled Trial is considered by some to be the closest M&E can get to ‘science’. This method compares a randomly assigned control group against a treatment group in order to ascertain the impact of the project. It is highly quantitative and relies on some level of objective assessment in order to work. If your programme works within a relatively defined set of parameters, you have access to a suitable control group, and you have strong standardized assessment methods (ideally scored quantitatively, or with a pass/fail criterion), then this might be the best approach for you.
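
As a rough illustration, and assuming a simple per-participant assessment scored quantitatively (the group labels and scores below are entirely hypothetical), the core analysis of a Randomized Controlled Trial comes down to comparing the average outcome of the treatment group against that of the control group. A minimal sketch in Python:

# Minimal sketch of an RCT-style comparison (hypothetical scores).
# Each list holds post-programme assessment scores, one value per participant.
control_scores = [52, 61, 48, 55, 59, 50, 47, 63]    # randomly assigned, did not receive the programme
treatment_scores = [66, 58, 71, 63, 69, 74, 60, 67]  # randomly assigned, received the programme

control_mean = sum(control_scores) / len(control_scores)
treatment_mean = sum(treatment_scores) / len(treatment_scores)

# The estimated impact is the difference in average scores between the two groups.
estimated_impact = treatment_mean - control_mean
print(f"Control mean: {control_mean:.1f}")
print(f"Treatment mean: {treatment_mean:.1f}")
print(f"Estimated impact: {estimated_impact:.1f} points")

# In practice you would also check whether the difference could be due to chance,
# for example with a two-sample t-test (scipy.stats.ttest_ind), before claiming impact.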

Social Return on Investment

Social Return on Investment (SROI) methodology is a highly participatory approach which seeks to evaluate a range of economic and social benefits in financial terms so they can be compared with project costs. The method aims to define the financial value of the gain for each dollar spent. This is a really useful method for those looking for a simple articulation, a ratio, to justify a social investment, but take caution: finding financial proxies for truly intangible change can be as challenging as identifying indicators to measure. Fortunately there is a growing base of resources for this, such as the Global Happiness Index and the Financial Accounts of Well Being.
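
As a loose sketch of the arithmetic (all figures below are hypothetical), the SROI ratio is simply the total financial value assigned to the social outcomes, divided by the total investment:

# Hypothetical SROI calculation: monetised outcomes compared with total investment.
monetised_outcomes = {
    "reduced healthcare costs": 40_000,    # financial proxy values, in dollars
    "increased household income": 65_000,
    "improved school attendance": 15_000,
}
total_investment = 50_000  # total cost of delivering the programme

sroi_ratio = sum(monetised_outcomes.values()) / total_investment
print(f"SROI ratio: {sroi_ratio:.2f} : 1")
# Here every $1 invested returns an estimated $2.40 of social value.

The real work, of course, lies in the participatory process of agreeing which outcomes to include and which financial proxies to attach to them.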

Outcomes Focused M&E

This may include methods which are inductive or deductive. Inductive methods will be used to develop a theory, say while you are researching your Theory of Change based on existing empirical research. Then you may implement your project and use deductive methods once you have values against your key indicators, to assess whether you achieved the outcomes and, alongside this, whether the causation, the pathway of change, aligned with your Theory of Change. Using this combination is frequently termed ‘Outcomes Mapping’, which aims to evaluate success against a theoretical plan.
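
As a minimal sketch of the deductive step (the indicator names and targets below are hypothetical), this amounts to comparing the values collected against your key indicators with the targets set out in your Theory of Change:

# Hypothetical endline indicator values compared with Theory of Change targets.
indicators = {
    "literacy assessment pass rate": {"target": 0.70, "actual": 0.64},
    "attendance rate":               {"target": 0.85, "actual": 0.91},
    "caregiver engagement score":    {"target": 3.5,  "actual": 3.8},
}

for name, values in indicators.items():
    status = "achieved" if values["actual"] >= values["target"] else "not achieved"
    print(f"{name}: target {values['target']}, actual {values['actual']} -> {status}")

# The 'how and why' behind any gap still has to come from the qualitative evidence you collect.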

On the other hand, you may use a more abductive approach, wherein you implement the programme, then assess the change, and use your evidence (information collected against your indicators) to find the simplest possible explanation for how and why the change took place, and whether the programme ‘worked’.

Outcomes methods are the most prevalent, and place the most importance on being able to actually measure success, which usually starts with defining what success looks like. Results-based evaluation approaches are a great place to start in assessing whether a programme worked; it is then important to consider how to frame the ‘how and why’.

Did you find this article useful? Support our work and download all templates.

About Angela Biden

Angela Biden is a consulting strategist and M&E consultant. She has worked across a range of development and business contexts. She holds a Masters in Economics and Philosophy, and has worked for some 15 years at the nexus of M&E and social impact, helping those doing good do more of it. From policy board rooms, to tech start-ups, to grassroots NGOs working in the face of the world’s most abject challenges, Angela is focused on conducting relevant and meaningful M&E: fit for purpose, realistic, and useful for stakeholders creating positive change.