How to measure the impact of leadership development programmes


Richard Taylor C. Psychol., MBA
01. Nov 2020 | 13 min read


Measuring impact is one of the 3 main challenges (needs, transfer, measurement) to demonstrating that leadership development pays off, identified in the article How to make leadership development effective. It is possible to measure impact. We provide practical tips on how to improve the measurement of participants’ learning:

  • improved smile-sheets
  • 360° test-retest, measuring changes in their behaviour
  • the results they create

The ultimate proof of the impact of leadership development programmes lies in their influence on pre-defined tangible results like customer/user satisfaction, efficiency, quality or profitability. This is often thought to be a step too far as there are so many factors that can influence these outcomes.

 

Measure the impact: framework

This problem has been partly solved by the widespread adoption of Kirkpatrick’s framework. Kirkpatrick describes 4 levels of evaluation, each leading to the next and each more important than the last: a positive Reaction supports effective Learning, which leads to changes in Behaviour that drive Results.

[Figure: Kirkpatrick’s 4 levels – Reaction, Learning, Behaviour, Results]

 

Measure Reaction – level 1

Smile-sheets, with questions like ‘would you recommend the programme to a colleague?’, usually filled in immediately after a session, give data on participant reactions, i.e. level 1. Many practitioners are content with results at this level. They follow standard market logic: they know whether their customers are satisfied with the product and will therefore probably recommend it to others and buy other products as well.

The main problem with traditional smile-sheets is that they do not predict learning. Two meta-analyses looking at 150 studies have shown there to be no correlation between level 1 Reaction scores and level 2 Learning outcomes.

All is not lost. Thalheimer, a leading researcher in the area, has shown that smile-sheets can be improved with a few well-designed questions focusing on:

  1. The effectiveness of the learning
  2. Support for the learner
  3. The learning’s reputation

One key tip is to change the answer options from agree/disagree to statements that relate more closely to the learner’s context:

[Figure: improved smile-sheet questions with context-based answer options]

The data such questions give is still limited to the learner’s reactions, but the learner’s motivation to apply the learning (question 2) does predict actual application, and a self-assessment of the support the learner can expect (question 3) is relevant to whether the learning will have an effect.

 

Measure Learning – level 2

Question 1 (above) is as good a question as possible for testing the participant’s beliefs about what has been learned, but we know that learners are overly optimistic and poor judges of their own learning. It is important to move to level 2 and test remembering and understanding a few days or more after the learning has taken place. Such a test of knowledge and retention is straightforward and easy to apply digitally.
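To make this concrete, here is a minimal sketch (ours, not a tool described in this article) of how a short digital retention check could be scored a few days after a session. The questions, answers and scoring are hypothetical.

```python
# Minimal sketch of scoring a level 2 retention check (hypothetical questions).
# Assumes answers are collected a few days after the session, e.g. via a form.

QUIZ = {
    "Which Kirkpatrick level does a smile-sheet measure?": "Reaction",
    "What has to change before Results can be expected?": "Behaviour",
}

def score_retention(answers: dict[str, str]) -> float:
    """Return the share of correct answers (0.0 - 1.0)."""
    correct = sum(
        1 for question, expected in QUIZ.items()
        if answers.get(question, "").strip().lower() == expected.lower()
    )
    return correct / len(QUIZ)

# Example: one participant's answers, collected digitally
participant_answers = {
    "Which Kirkpatrick level does a smile-sheet measure?": "reaction",
    "What has to change before Results can be expected?": "Learning",
}
print(f"Retention score: {score_retention(participant_answers):.0%}")  # 50%
```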


Reacting positively and retaining Learning is good, but not enough. Behaviour has to change if Results are to be created.

 

Measure Behaviour – level 3

The process below is a simple approach to using 360° test and retest to measure whether the individual leader changes their behaviour as a consequence of participating in a leadership development programme.

 

[Figure: the 360° test-retest process]

 

The secondary benefits of running such a process, higher expectations and more support, are just as important as the solid measurement itself. The stakeholders involved in the anchoring step include people the participants see as important. These stakeholders can set expectations that influence the participants’ behaviour, increasing their motivation to get as much as they can out of the leadership development programme.

Involving more stakeholders in going through the results of the 360° tests and retests ensures that the participants receive the support they need to learn and to apply that learning. In combination, higher expectations and more support increase the effectiveness of the leadership development programme.

The test-retest process gives solid data on whether leaders have developed and changed their behaviour as a result of a development programme. The data and graphs are very helpful in communicating about the programme to stakeholders.

[Figure: example test-retest results]
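As an illustration of how the underlying numbers could be crunched, the sketch below (our example, not the actual test instrument) compares average ratings per competency before and after the programme. The competencies, raters and scores are invented.

```python
from statistics import mean

# Hypothetical 360° ratings (1-5) from several raters, before and after the programme
test = {
    "Gives clear direction": [3, 2, 3, 3],
    "Listens actively":      [2, 3, 2, 2],
    "Develops team members": [3, 3, 4, 3],
}
retest = {
    "Gives clear direction": [4, 3, 4, 4],
    "Listens actively":      [3, 4, 3, 3],
    "Develops team members": [3, 4, 4, 4],
}

# Average change per competency: positive values indicate observed behaviour change
for competency in test:
    before, after = mean(test[competency]), mean(retest[competency])
    print(f"{competency:25s} {before:.2f} -> {after:.2f} ({after - before:+.2f})")
```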

The discussions between a leader and their own boss, colleagues or reports ensure the leader also receives feedback about issues the test does not manage to address. However well-designed, a 360° test read-out doesn’t capture everything a colleague might want to say to a leader.


 

Measure Results – level 4

Reliably measuring whether changes in behaviour impact tangible results like greater efficiency, stronger innovation or happier colleagues is very challenging. In our article How to make leadership development effective we made the point that so many factors influence efficiency, or any other major result, that organisations find isolating the effect of a leadership development programme simply too difficult.

That is true. But there is hope.


We have 2 suggestions: the first is based on a survey, the second on projects.

 

1. Measuring with qualitative assessment

Asking stakeholders which factors have contributed to a particular Result, and to what extent, gives a qualitatively informed assessment. It isn’t scientific proof, but it is a strong indication from the internal experts the organisation trusts. Asking the same people who were involved in the 360° test-retest process ensures that they have the insight necessary to judge whether the leadership development programme has contributed significantly.

Each survey has to be designed to fit the specific context. The below is a mock-up to give you an idea:

[Figure: survey mock-up]

 

To make it easier for you to adapt and apply this method in practice, we list the principles the mock-up is based on (a sketch of how the resulting ratings could be summarised follows the list):

  1. The Result that participation is expected to contribute to is pre-defined as part of setting a baseline; it is itself measurable and linked to a specific time period.
  2. The goal of the survey is clearly stated.
  3. There can have been more than one participant from the defined organisational unit in the defined leadership development programme.
  4. List of the main factors thought to have an impact on the Result:
    • Limited to the 7 most relevant, i.e. the number of information units an individual can hold in their head at any one time, allowing respondents to rank-order the factors in their head as they work through their answers. A longer, collectively exhaustive list doesn’t add value and drowns out the leadership perspective.
    • Include a factor the organisational unit cannot control, in this case market dynamics.
    • Include both Leadership and the hoped-for improvement to leadership through participation in the specific programme.
  5. The respondent must be able to add factors not on the list.
  6. Scale:
    • Based on importance, which is easy for the respondent to rate.
    • All factors listed are expected to be important, so the lowest score is ‘not very important’ rather than ‘not important’.
    • It is likely that participation in a leadership development programme will not be viewed as very important. It is therefore important to give the respondent an opportunity to distinguish between lower levels of importance. This is done here by using ‘not very important’ vs. ‘some importance’, which are close to each other.
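A minimal sketch of how such survey responses could be summarised is shown below. The factor names and the scale mapping are assumptions made to illustrate the principles above, not part of any specific survey.

```python
from collections import defaultdict
from statistics import mean

# Importance scale, reflecting principle 6: the lowest option is
# 'not very important', and the two lowest options sit close together.
SCALE = {
    "not very important": 1,
    "some importance": 2,
    "important": 3,
    "very important": 4,
}

# Each respondent rates how important each factor was for the pre-defined Result.
# Factor names are illustrative only (principle 4): at most 7, including an
# uncontrollable factor (market dynamics) and the programme itself.
responses = [
    {"Market dynamics": "very important",
     "Leadership in the unit": "important",
     "Participation in the programme": "some importance"},
    {"Market dynamics": "important",
     "Leadership in the unit": "very important",
     "Participation in the programme": "important"},
]

ratings = defaultdict(list)
for response in responses:
    for factor, answer in response.items():
        ratings[factor].append(SCALE[answer])

# Rank the factors by average importance to see where the programme lands
for factor, scores in sorted(ratings.items(), key=lambda kv: mean(kv[1]), reverse=True):
    print(f"{factor:32s} {mean(scores):.2f}")
```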

 

2. Measuring the effect of development projects

Some leadership development programmes include real-life projects where the participant can try out new learning and behaviours. These projects can be of considerable value. As an example, in a programme in a national Post Office each leader’s programme-related project had an average value of €20,000. With 25 participants, the programme’s projects for each cohort added half a million euros in value.

Most of these projects would not have been run without the programme. The project value is not a measure of the impact of the leader’s change in behaviour, but it does give one answer to the question of how to measure the impact of leadership development programmes. The measurable ROI for running a cohort through the programme for the Post Office was:

[Figure: ROI calculation]

230% is a very handsome return for the Post Office.
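For readers who want to check the arithmetic: the programme cost is not stated in this article, so the cost used below is an assumption chosen to be consistent with the quoted figure.

```python
# Project value from the article: 25 participants x EUR 20,000 average project value
project_value = 25 * 20_000          # EUR 500,000 per cohort

# The programme cost is not given; roughly EUR 150,000 is assumed here,
# which reproduces a return close to the quoted 230%.
assumed_programme_cost = 150_000

roi = (project_value - assumed_programme_cost) / assumed_programme_cost
print(f"ROI ≈ {roi:.0%}")            # ≈ 233%
```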

 

Measuring by making the first 100 days into a project

Another example of applying the project approach can be seen in the first-100-days segment. Approaches in this segment address the challenges and potential a manager faces in a new role.

The first-100-days segment:

  • Up to 40% of managers in a new role fail and are out of it within 18 months, yet there is great potential for the arrival of a new manager to trigger significant performance improvement.
  • If a manager succeeds at the start, they set themselves up for good performance for the 4 to 5 years they are in the role.
  • First-100-days programmes could be the most effective form of leadership development possible.

The digital coach ELLA is an example of a product in the first-100-days segment. ELLA helps organisations provide the kind of support managers need to make a success of their new roles within 100 days.

ELLA encourages users to treat their first 100 days as a project, evaluating their progress and thinking through next steps every week. The key deliverable is Results by Day 70:

The figure describes Best Practice for a manager succeeding in a new role.


 

The early win delivered by Day 70 does not have as high an average value as the projects run in traditional leadership development programmes, which are driven by established managers over more than 6 months. Early wins do however establish the new manager both in their role and with their team. Together they set the platform for delivering far more value than that produced by a single limited project.

The short-term ROI is easy to calculate. An early win can, for example, have a value of €5,000. Given that ELLA costs about the same as a mobile phone, the ROI works out at about 400%.
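The same arithmetic applies here. The article only says that ELLA costs about the same as a mobile phone, so the €1,000 price below is an assumption.

```python
early_win_value = 5_000      # EUR, example early-win value from the article
assumed_ella_cost = 1_000    # EUR, assumption: 'about the same as a mobile phone'

roi = (early_win_value - assumed_ella_cost) / assumed_ella_cost
print(f"ROI ≈ {roi:.0%}")    # ≈ 400%
```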

 

Conclusion

It is possible, with limited effort, to improve the methods for measuring the impact of leadership development. This article has pointed to solid recommendations at all 4 levels: Reaction, Learning, Behaviour and Results.

If you want to know more about the first-100-days segment please check out our e-book.


Richard Taylor C. Psychol., MBA


Organisational psychologist with an MBA. Broad top management experience spanning many industries, functions and countries, including 10 years with corporate responsibility for HR. Extensive experience as a consultant in the private, public and voluntary sectors. A number of board positions in the education and culture sectors. Started his career as a counsellor for drug and alcohol abusers.
