If you’re involved in a programme that includes a training component, perhaps one to enrich the pedagogical skills of teachers or to train midwives, you will want to evaluate whether those skills have been attained and are being adequately used by programme beneficiaries.

Assessment is a broad and rigorous field of study, a career choice in its own right, but there are some useful tools for conducting a lighter-touch evaluation of skills acquisition. You may want to assess skills acquisition where training is a programme input, attainment of the skill is a short-term outcome, and the recipients of the training are only an interim beneficiary group. In this case, you might wish to verify, as part of your process evaluation, that skills were handed over adequately, allowing you to make more conclusive claims about programme impact on the ultimate beneficiary group.

By way of example, imagine you are training a group of unemployed young people to deliver some basic learning-through-play exercises. You hope to understand the impact on the children attending the learning-through-play sessions conducted by the young people once they have been trained. This is a complex project, and if outcomes are not reached, you will want to report whether this was due to implementation issues, perhaps a misalignment of content or ineffective materials. Having a means of evaluating every area of the programme will enable you to pinpoint weaknesses for ongoing improvement, and having a way of assessing the training outcomes supports an incremental approach to achieving core programme milestones, which together should ensure programme success.

Kirkpatrick’s Four Levels of Training Evaluation is a straightforward tool, based on results-chain methodology, which provides a systematic way of evaluating whether training has been effective.

Level 1: Reaction

As the training takes place, it is useful to have a framework for measuring participants’ reactions. An observation framework or a feedback form asking about levels of engagement and relevance to the training participant can be useful tools. The questions to ask at this level are whether or not participants felt the training was useful, interesting and engaging, and whether they felt they could engage with the content. It is useful to find out how participants feel they might apply the training in the workplace, and to consider not only the content but also the length and tone of the responses.
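If feedback forms capture responses numerically, even a very light-touch summary can surface reaction-level patterns. The sketch below is a minimal illustration, assuming a hypothetical form where each question is scored on a 1–5 scale; the question names and data are invented for the example.

```python
from statistics import mean

# Hypothetical Level 1 feedback: each dict is one participant's form,
# with each question scored 1 (strongly disagree) to 5 (strongly agree).
responses = [
    {"useful": 5, "engaging": 4, "relevant": 5},
    {"useful": 4, "engaging": 3, "relevant": 4},
    {"useful": 5, "engaging": 5, "relevant": 3},
]

def summarise(responses):
    """Average each feedback question across all participants."""
    questions = responses[0].keys()
    return {q: round(mean(r[q] for r in responses), 2) for q in questions}

print(summarise(responses))
# A low average on any one question flags where the training was
# received poorly, e.g. engaging content that felt irrelevant.
```

A summary like this is only a starting point; open-text comments still need to be read for the length and tone mentioned above.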

Level 2: Learning

At this level, you will aim to measure whether the skills have been attained. Has the participant acquired the relevant knowledge and skills? Has there been an improvement in their confidence and attitude toward the work in which they apply the skills? The most robust means of measuring this level includes an assessment of skills. To keep this as simple as possible, it is useful to have a short test which can be conducted before and after the training; the difference in scores should show whether skills have been attained.
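The before-and-after comparison described above can be sketched very simply: pair each participant’s pre-training score with their post-training score and look at the gains. The scores below are hypothetical, purely to illustrate the calculation.

```python
from statistics import mean

# Hypothetical pre- and post-training test scores (percent) for the
# same five participants, listed in the same order.
pre_scores  = [45, 60, 52, 38, 70]
post_scores = [68, 75, 80, 55, 85]

def score_gains(pre, post):
    """Paired per-participant gains; a consistently positive mean
    suggests the skills were attained during the training."""
    gains = [after - before for before, after in zip(pre, post)]
    return gains, mean(gains)

gains, avg_gain = score_gains(pre_scores, post_scores)
print(gains, avg_gain)
```

In practice you would also want to look at the spread of gains, not just the average, since one or two strong performers can mask participants who did not progress.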

Level 3: Behavior

This is a critical step in assessing whether the goal of the training has been achieved, and is all about application of skills. Observing changes in behavior which indicate that the skill has not only been acquired but is being applied is an important step in the training evaluation process. This works something like an on-the-job assessment, and other team members or programme implementers may be asked to conduct specific, guided, observation-based assessments of practice. Where you are working in a complex space, and the learning has been acquired but is not being applied, it may be useful to interview training recipients and assess any organizational or systemic constraints which might prevent the behavior change you hope to observe.

Level 4: Results

Finally, you will want to assess whether the outcomes of the training, i.e. the successful reaction, learning and behavior changes, allow the programme to reach its intended impact for beneficiaries. Before embarking on a programme where training is a critical base step, you will have defined a Theory of Change, or a theoretical framework, which should include a clearly specified impact or results statement as well as a pathway of change articulating how the training is intended to contribute to the achievement of the overall impact. Level 4 is where this is tested. You will want to conduct a rigorous assessment of impact first, and then conduct some more qualitative, deep-dive research into the role that training played in it.

Kirkpatrick’s model makes use of a range of tools and methods for collecting and analysing data at each of the four levels. Having a clear picture of the outcomes the training is expected to yield in the short, medium and long term will assist in informing evaluation at each level.


About Author

Angela Biden is a consulting strategist and M&E consultant. She has worked across a range of development and business contexts. She holds a Master’s in Economics and Philosophy, and has worked at the nexus of M&E and social impact, helping those doing good do more of it, for some 15 years. From policy boardrooms, to tech start-ups, to grassroots NGOs working in the face of the world’s most abject challenges, Angela is focused on conducting relevant and meaningful M&E: fit for purpose, realistic, and useful for stakeholders creating positive change.
