Admit it – when you hand out those post-course evaluation forms (or as some like to call them, “happy sheets”), are you really just going through the motions? Sure, you might get a few polite comments about the coffee being nice and warm, but those happy sheets are about as useful as a chocolate teapot when it comes to genuinely measuring the effectiveness of your training.
It’s time to ditch the shabby happy sheet and start taking a more intelligent approach to post-course assessment. But before we get to the good stuff, let’s have a laugh at some of the most ridiculous questions people still insist on asking on those tired old forms.
The “Too Long or Too Short?” Two-Step
Asking delegates whether a course was too long or too short is about as pointless as asking a centipede which foot it favours. You’re bound to get a whole range of conflicting answers based on personal preferences and learning styles.
For instance, an Activist learner (as defined by the brilliant Honey and Mumford model) is likely to find even a half-day course dragging on too long, as they crave variety and can’t stand hanging about once they’ve got the gist of something. On the other hand, a Reflector will probably want to mull things over at length, soaking up every last detail like a sponge in a bucket.
So, by asking this question, you’re essentially setting yourself up for a lose-lose scenario – no matter what you do, you’ll end up frustrating at least a portion of your learners. Smart move!
The “Facilities Faff”
Unless you’re running courses in a dilapidated shed with no heating and a leaky roof, questions about the quality of the training facilities are largely irrelevant. We’re not talking about assessing the suitability of the British Grand Prix venue here – even a basic meeting room should provide an adequate environment for learning if the content and delivery are up to scratch.
The “Trainer Trap”
Asking learners to rate the trainer’s performance is only going to give you a superficial, subjective view that tells you very little about the true impact and effectiveness of the training itself.
The fact that someone can spin a few jokes and looks pretty slick doesn’t necessarily make them an exemplary trainer. Likewise, you could have the dullest, most monotone presenter in the world, but if they’re imparting knowledge that truly sticks and drives meaningful behaviour change, then who cares about their lack of razzle-dazzle?
The “Content Conundrum”
Simply asking if people found the course content useful or relevant is about as insightful as asking if they’d like a nice cup of tea. Of course, they’re going to say yes – why else would they have signed up in the first place?
A much better way to gauge the true value and impact of your learning content is to put it in a real-world context that actually matters to your learners’ roles and responsibilities.
So, there you have it – a few classic examples of post-course evaluation questions that really ought to be confined to the dustbin of history. But it’s not all doom and gloom – there are much more effective ways to analyse the impact and success of your training initiatives.
Enter the Learning Delta Model
We’ve been working on this topic for nearly 25 years, and (of course) we think we might have the answer! It’s a simple solution that we call the learning delta analysis. This is how it works:
After the training intervention we ask each delegate to evaluate their learning on a ten-point scale. For each topic covered, attendees assess their pre- and post-training knowledge and application; we subtract one from the other and arrive at something we call the ‘learning delta score’ for that topic.
For example, we may have covered ‘the four levels of delegation’ in the course. An attendee might indicate that before the course they would have scored a 3 for knowledge and application, and after the training a 7, which leaves us with a learning delta of 4 (7 − 3 = 4).
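If it helps to see that arithmetic written out, here is a minimal sketch in Python. The topic names and scores are invented purely for illustration; the model itself is simply the post-course score minus the pre-course score on the ten-point scale.

```python
# One delegate's self-assessed scores (1-10) before and after the course.
# Topic names and numbers here are made up for illustration only.
pre_scores = {"Four levels of delegation": 3, "Situational leadership": 5}
post_scores = {"Four levels of delegation": 7, "Situational leadership": 8}

# Learning delta per topic = post-course score minus pre-course score.
learning_deltas = {topic: post_scores[topic] - pre_scores[topic] for topic in pre_scores}

print(learning_deltas)
# {'Four levels of delegation': 4, 'Situational leadership': 3}
```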
Collected anonymously, this allows us to analyse data such as the following (a rough sketch of the sums appears after this list):
- The total number of learning improvement points for the whole programme.
- The spread of individual totals, from the lowest to the highest learner.
- The average learning delta points per individual.
- A ranking of topics from the most impactful down to the least.
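To show how those four analyses could fall out of the anonymised scores, here is a rough sketch in Python. It assumes the deltas have been gathered into a list of per-delegate dictionaries; the delegate data and topic names are made up and this is not our actual tooling.

```python
# Anonymised learning deltas: one {topic: delta} dict per delegate.
# All figures are invented purely to illustrate the four analyses above.
delegates = [
    {"Four levels of delegation": 4, "Giving feedback": 2, "Coaching questions": 3},
    {"Four levels of delegation": 3, "Giving feedback": 5, "Coaching questions": 1},
    {"Four levels of delegation": 5, "Giving feedback": 1, "Coaching questions": 2},
]

per_delegate_totals = [sum(d.values()) for d in delegates]

# 1. Total improvement points across the whole programme.
programme_total = sum(per_delegate_totals)

# 2. Spread from the lowest to the highest individual learner.
lowest, highest = min(per_delegate_totals), max(per_delegate_totals)

# 3. Average learning delta points per individual.
average_per_delegate = programme_total / len(delegates)

# 4. Topics ranked from most to least impactful (by total delta).
topic_totals = {}
for d in delegates:
    for topic, delta in d.items():
        topic_totals[topic] = topic_totals.get(topic, 0) + delta
ranked_topics = sorted(topic_totals.items(), key=lambda kv: kv[1], reverse=True)

print(programme_total, (lowest, highest), round(average_per_delegate, 1))
print(ranked_topics)
```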
Sometimes it is easier to have a look at a fuller example of this, so if you are interested, please contact me and I can give you more insight. Ultimately, though, this is far more useful than traditional happy sheet outputs.
So, here’s my challenge – ditch the happy sheet humbug and start evaluating your training initiatives in a way that genuinely matters. Your learners, your business, and (most importantly) your own professional credibility will thank you for it.
Love to hear your thoughts…
Bob Bannister



