Case studies around the world have begun to reveal effective methods for evaluating projects that aim to improve governance and accountability, and the field continues to grow. Key challenges remain: high cultural differentiation and context-specific impact targets for improved governance mean there is no universally accepted global standard for what the results of such projects should be, although many have tried to define one.

Yet alongside this significant heterogeneity, EGAP (egap.org) is an organisation that has shown notable success in using field experiments, such as randomised controlled trials, to rigorously evaluate governance interventions. As with any other type of M&E, well-specified goals, sound measurement frameworks, and the right choice of research methods are key to conducting credible M&E.
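To make the experimental logic concrete, here is a minimal sketch of the core of an RCT-style evaluation: random assignment of units, followed by a difference-in-means estimate of the treatment effect. All names and numbers below are hypothetical and purely illustrative, not drawn from any real study or EGAP design.

```python
import random
import statistics

random.seed(42)

# Hypothetical setting: 20 districts, half randomly assigned to receive
# a governance training programme (the "treatment").
units = [f"district_{i}" for i in range(20)]
random.shuffle(units)
treatment = set(units[:10])

# Simulated outcome scores; in practice these would come from survey or
# administrative data collected after the programme.
scores = {u: random.gauss(50, 5) + (4 if u in treatment else 0) for u in units}

treated = [scores[u] for u in units if u in treatment]
control = [scores[u] for u in units if u not in treatment]

# Difference-in-means estimator of the average treatment effect.
ate = statistics.mean(treated) - statistics.mean(control)
print(f"Estimated effect: {ate:.2f} points")
```

Because assignment is randomised, the two groups are comparable in expectation, which is what lets the simple difference in means be read as a causal estimate rather than a correlation.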

Monitoring and Evaluation in the public sector, where government officials are the recipients of programmes to improve efficacy in service delivery, is a significant and complex field of M&E. System failures frequently occur between different spheres of government, or between government departments, and entering this space with developmental solutions requires a particular, in-depth understanding of the history of these institutions. Where multiple stakeholders are involved, it is important to allocate roles and responsibilities that align with workplace requirements and legislative frameworks, while still creating space for improved practice and collaboration.

These methods and approaches are not limited to large public sector projects, however; they are equally applicable to non-governmental organisations requiring capability building and support.

The question remains: What are the best methods for evaluating governance programmes?

Better governance – ‘for what?’

One of the key challenges in this complex space is answering the question of what the programme aims to achieve. In the case of a randomised field experiment, this begins with a good research question. If you are doing quasi-experimental work, consider using the language and methodology of results to frame what the outcome should be when the project is complete. From the research question or results specification flow the research design, the development of data-collection instruments, and the M&E implementation activities.

What are Governance Indicators, and how should I be using them?

Much work has been conducted globally to define good governance indicators, but their use is still hotly contested. This is partly because of the difficulty in defining what 'good governance' looks like, and partly due to the growing realisation that "measuring governance is itself a political process". While the UNDP presents a potential set of governance indicators, this article presents a series of potential pitfalls with their use.

As you select your indicators, a key challenge is to be clear on your programming, delving into how changes in the project are expected to yield higher-level impact. The Result Chain methodology can be a particularly useful tool, and its templates can be workshopped with programme beneficiaries and participants. This helps to articulate the higher-level outcomes as 'Results', and to get participants to reflect critically on how these can be achieved. The key is linking programme activities to concrete outcomes. The rest is process monitoring, ensuring that governance and capability-building activities are taking place as planned. To ensure lasting effects, consider upward and downward accountability, include stakeholders in the results framing, and use the opportunity to strengthen institutions as learning bodies.
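The results-chain and process-monitoring idea can be sketched in a few lines of code. This is an illustrative data structure only: the levels (activities, outputs, outcomes, impact) follow standard results-chain usage, but every entry below is a hypothetical example, not a prescribed framework.

```python
# A results chain links what a programme does to what it hopes to change.
# Each level should be populated and plausibly connected to the next.
results_chain = {
    "activities": ["train municipal finance officers", "deploy audit checklist"],
    "outputs": ["officers complete training", "checklist used in quarterly audits"],
    "outcomes": ["fewer repeat audit findings", "faster budget reporting"],
    "impact": ["improved service delivery and public trust"],
}

# A minimal process-monitoring check: flag any empty level before the
# chain is workshopped with programme participants.
gaps = [level for level, items in results_chain.items() if not items]
for level, items in results_chain.items():
    print(f"{level}: {len(items)} item(s)")
print("gaps:", gaps or "none")
```

In a workshop setting, the point of writing the chain down explicitly is that participants can challenge each link: does each activity really produce its output, and does each output plausibly drive the stated outcome?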

Be aware of complexity

Governance programmes frequently involve several, at times conflicting, objectives, often under budget constraints. Taking this into account at every level of programme design can help identify the key bottlenecks that may be causing the problem in the first place. A comprehensive stakeholder and institutional mapping exercise in the planning phase is key to success.


About Author

Angela Biden is a consulting strategist and M&E consultant. She has worked across a range of development and business contexts. She holds a Masters in Economics and Philosophy and has worked at the nexus of M&E and social impact, helping those doing good do more of it, for some 15 years. From policy boardrooms to tech start-ups to grassroots NGOs working in the face of the world's most abject challenges, Angela is focused on conducting relevant and meaningful M&E: fit for purpose, realistic, and useful for stakeholders creating positive change.
