Measuring impact is one of the 3 main challenges (needs, transfer, measurement) to demonstrating that leadership development pays off, as identified in How to make leadership development effective. It is possible to measure impact. We provide practical tips on how to improve the measurement of participants' learning:
- improved smile-sheets
- 360° test-retest, measuring changes in their behaviour
- the results they create
The ultimate proof of the impact of leadership development programmes lies in their influence on pre-defined tangible results like customer/user satisfaction, efficiency, quality or profitability. This is often thought to be a step too far as there are so many factors that can influence these outcomes.
1. Measure the impact: framework
This problem has been partly solved by the widespread adoption of Kirkpatrick's framework. Kirkpatrick describes 4 levels of results, each building on the last and each more important than the one before: Reaction has to be positive, supporting effective Learning, which leads to changes in Behaviour that drive Results.
2. Measure Reaction – level 1
Smile-sheets, with questions like ‘would you recommend the programme to a colleague?’, usually filled in immediately after a session, give data for participant reactions, i.e. level 1. Many practitioners are content with results at this level only. They follow a standard market logic: if their ‘customers’ are satisfied with the product, they will probably recommend it to others and ‘buy’ other products as well.
The main problem with traditional smile-sheets is that they do not predict learning. Two meta-analyses looking at 150 studies have shown there to be no correlation between level 1 Reaction scores and level 2 Learning outcomes.
All is not lost. Thalheimer, a leading researcher in the area, has shown that smile-sheets can be improved with a few well-designed questions focusing on:
- The effectiveness of the learning
- Support for the learner
- The learning’s reputation
One key tip is to change the answer options from agree/disagree to statements that relate more closely to the learner’s context:
The data such questions give is still limited to the learner's reactions, but the learner's motivation to apply the learning (question 2) does predict actual application, and a self-assessment of the support the learner can expect (question 3) is relevant to whether the learning will have an effect.
3. Measure Learning – level 2
Question 1 (above) is as good a question as any for testing the participant's beliefs about what has been learned, but we know that learners are overly optimistic and poor judges of their own learning. It is important to move to level 2 and test remembering and understanding a few days or more after the learning has taken place. Such a test of knowledge and retention is straightforward and easy to apply digitally.
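As an illustration, a digital retention check like this can be scored in a few lines; the questions below are placeholders, not items from any actual programme:

```python
# Minimal sketch of scoring a level 2 retention check, assuming
# simple multiple-choice items. Questions are placeholders only.
quiz = [
    {"question": "Which Kirkpatrick level measures behaviour change?",
     "options": ["1", "2", "3", "4"], "answer": "3"},
    {"question": "What does a level 1 smile-sheet measure?",
     "options": ["Reaction", "Learning", "Behaviour", "Results"],
     "answer": "Reaction"},
]

def score(responses, quiz):
    """Fraction of items answered correctly."""
    correct = sum(r == item["answer"] for r, item in zip(responses, quiz))
    return correct / len(quiz)

print(score(["3", "Reaction"], quiz))  # 1.0
```

Run a few days after the session, a short check like this measures retention rather than immediate reaction.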
Reacting positively and retaining Learning is good, but not enough. Behaviour has to change if Results are to be created.
4. Measure Behaviour – level 3
The process below is a simple approach to using 360° test and retest to measure whether an individual leader changes their behaviour as a consequence of participating in a leadership development programme.
The secondary benefits of running such a process, like higher expectations and more support, are as important as the solid measurement itself. The stakeholders involved in the anchoring step include people the participants see as important. These stakeholders can set expectations that influence participants' behaviour, increasing their motivation to get as much as they can out of the leadership development programme.
Involving more stakeholders in going through the results of the 360° tests and retests ensures the participants receive the support they need to learn and to apply that learning. In combination, higher expectations and more support increase the effectiveness of the leadership development programme.
The test-retest process gives solid data on whether leaders have developed and changed their behaviour as a result of a development programme. The data and graphs are very helpful in communicating about the programme to stakeholders.
The discussions between a leader and their own boss, colleagues and reports ensure the leader also receives feedback about issues the test does not manage to address. However well-designed, a 3600 test read-out doesn’t capture everything a colleague might want to say to a leader.
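To make the test-retest data concrete, here is a minimal sketch of how rating changes might be summarised, assuming each rater scores the same behaviours on a 1–5 scale before and after the programme (behaviours and ratings are illustrative only):

```python
# Sketch of summarising a 360-degree test-retest: average rating
# shift per behaviour across raters. All data is illustrative.
from statistics import mean

pre  = {"gives feedback": [2, 3, 2], "delegates": [3, 3, 4]}
post = {"gives feedback": [4, 4, 3], "delegates": [4, 3, 4]}

def behaviour_change(pre, post):
    """Average rating shift per behaviour across raters."""
    return {b: round(mean(post[b]) - mean(pre[b]), 2) for b in pre}

print(behaviour_change(pre, post))
```

Plotted per behaviour, shifts like these are the graphs that communicate programme results to stakeholders.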
5. Measure Results – level 4
Reliably measuring whether changes in behaviour impact tangible results like more efficiency, stronger innovation or happier colleagues is very challenging. In our article How to make leadership development effective we made the point that there are so many factors that influence efficiency, or any other major result, that organisations find it simply too difficult.
That is true. But there is hope.
We have 2 suggestions: the first is based on a survey, the second on projects.
5.1 Measuring with qualitative assessment
Asking stakeholders which factors have contributed to a particular Result, and to what extent, gives a qualitatively informed assessment. It isn't scientific proof, but it is a strong indication from the internal experts the organisation trusts. Asking the same people who were involved in the 360° test-retest process ensures that they have the insight necessary to judge whether the leadership development programme has contributed significantly.
Each survey has to be designed to fit the specific context. The below is a mock-up to give you an idea:
To make it easier for you to adapt and apply this method in practice we list the principles we have based this mock-up on:
- The Result that participation is expected to contribute to is pre-defined as part of setting a baseline, itself measurable, and linked to a specific time period.
- The goal of the survey is clearly stated
- There can have been more than 1 participant from the defined Organisational Unit in the defined leadership development programme
- List of main factors thought to have an impact on the Result:
- Limited to the 7 most relevant, roughly the number of information units an individual can hold in working memory, so respondents can rank-order the factors as they work through their answers. A longer, collectively exhaustive list doesn't add value and drowns out the leadership perspective
- Include a factor concerning what the organisational unit cannot control, in this case market dynamics
- Include both Leadership and the hoped-for improvement to leadership through participation in the specific programme
- The respondent must be able to add factors not on the list
- Based on importance, which is easy for the respondent to rate
- All factors listed are expected to be important, so the lowest score is 'not very important' rather than 'not important'
- It is likely that participation in a leadership development programme will not be viewed as very important. It is therefore important to give the respondent an opportunity to distinguish between lower levels of importance. This is done here by using ‘not very important’ vs. ‘some importance’, which are close to each other.
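Aggregating such a survey is straightforward. The sketch below assumes a 4-point importance scale; the two upper labels ('important', 'very important'), the factor names and the responses are all illustrative assumptions, not part of the mock-up:

```python
# Sketch of aggregating stakeholder importance ratings per factor.
# Scale labels above 'some importance' are assumed; data is illustrative.
from statistics import mean

SCALE = {"not very important": 1, "some importance": 2,
         "important": 3, "very important": 4}

responses = {
    "Market dynamics":                      ["very important", "important"],
    "Leadership":                           ["important", "important"],
    "Improvement through the LD programme": ["some importance", "important"],
}

def factor_scores(responses):
    """Mean importance score per factor across respondents."""
    return {factor: mean(SCALE[a] for a in answers)
            for factor, answers in responses.items()}

for factor, s in sorted(factor_scores(responses).items(),
                        key=lambda kv: -kv[1]):
    print(f"{factor}: {s:.1f}")
```

Ranking the factors by mean score shows stakeholders' collective judgement of what drove the Result, including the programme's relative contribution.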
5.2 Measuring the effect of development projects
Some leadership development programmes include real-life projects where the participant can try out new learning and behaviours. These projects can be of considerable value. As an example, in a programme in a national Post Office each leader’s programme-related project had an average value of €20,000. With 25 participants, the programme’s projects for each cohort added €500,000 in total value.
Most of these projects would not have been run without the programme. The project value is not a measure of the impact of the leader's change in behaviour, but it does give one answer to how to measure the impact of leadership development programmes. Measurable ROI for running a cohort through the programme for the Post Office was:
230% is a very handsome return for the Post Office.
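The underlying arithmetic can be sketched as follows. The article does not state the programme's cost, so the cost below is a hypothetical placeholder chosen purely to reproduce the 230% figure:

```python
def roi_percent(total_value, total_cost):
    """Return on investment: net gain as a percentage of cost."""
    return (total_value - total_cost) / total_cost * 100

project_value_per_leader = 20_000   # EUR, from the Post Office example
participants = 25
total_value = project_value_per_leader * participants  # EUR 500,000

# Hypothetical cost, not the Post Office's actual figure:
# roughly EUR 151,500 yields the 230% quoted in the article.
programme_cost = 151_500
print(f"ROI: {roi_percent(total_value, programme_cost):.0f}%")  # ROI: 230%
```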
5.2.1 Measuring by making the first 100 days into a project
Another example of applying the project approach can be seen in the first-100-days segment. Approaches in this segment address the challenges and potential a manager faces in a new role.
The first-100-days segment:
- Up to 40% of new managers fail and are out of their role within 18 months, yet the arrival of a new manager has great potential to trigger significant performance improvement.
- If a manager succeeds at the start, they set themselves up for good performance for the 4 to 5 years they are in the role.
- First-100-days programmes could be the most effective form of leadership development possible.
The digital coach Ella is an example of a product in the first-100-days segment. Ella helps organisations provide the kind of support managers need to make a success of their new roles within 100 days.
Ella encourages users to treat their first 100 days as a project, evaluating their progress and thinking through next steps every week. The key deliverable is Results by Day 70:
The figure describes Best Practice for a manager succeeding in a new role.
The early win delivered by Day 70 does not have as high an average value as the projects run in traditional leadership development programmes, which are driven by established managers over more than 6 months. Early wins do however establish the new manager both in their role and with their team. Together they set the platform for delivering far more value than that produced by a single limited project.
The short-term ROI is easy to calculate. An early win can, for example, have a value of €5,000. Given that Ella costs about the same as a mobile phone, ROI can be calculated to be about 400%.
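Using the article's own numbers, and assuming 'about the same as a mobile phone' means roughly €1,000, the calculation looks like this:

```python
def roi_percent(value, cost):
    """Return on investment: net gain as a percentage of cost."""
    return (value - cost) / cost * 100

early_win_value = 5_000  # EUR, example early-win value from the article
ella_cost = 1_000        # EUR, assumed 'mobile phone' price point
print(f"ROI: {roi_percent(early_win_value, ella_cost):.0f}%")  # ROI: 400%
```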
It is possible, with limited effort, to improve methods for measuring the impact of leadership development. This article has pointed to solid recommendations at 4 levels, i.e. in Reactions, Learning, Behaviour and Results.
If you want to know more about the first-100-days segment please contact us.