The title of this post was going to be Evaluating our Training, but the core message is more than simply evaluation … it’s about setting up reliable feedback loops with all the stakeholders to drive positive improvement. Please note that I refer to positive improvement in terms of effectiveness, not a tick mark on a training-completion report.
How many times have you provided training, conducted your little satisfaction / request for improvement survey, and then moved on to the next project within your organization?
How many times have you done that with a client and left them without setting up internal feedback loops that they can manage?
The number is painful to consider, and here are some ways to implement feedback loops that serve all the stakeholders.
The first value of any training is excellent material. I’ve already written a series on developing training courses so that they are scalable, repeatable, and easily adjusted.
The second value of any training is implementing the training experience well. Kirkpatrick gives us nine components that set us up nicely for easy evaluation (Why, Outcomes, Timing, Environment, The Right People, Effective Instructors, Diverse Learning Experience, Assessment, Participant Satisfaction). A blog post could be dedicated to each of these components, but I’d like to make a quick point on three of them.
- Explaining “why” the training is occurring creates room for self-driven learning and lets participants optimize their own learning. This does, by the way, kill a lovely amount of PowerPoint-driven training material.
- Outcomes (objectives) should be clearly focused on increasing knowledge, improving skills, or changing attitudes. These classifications should be clear to the participants too.
- Assessment can be formal or informal, although when working with adults, please consider authentic assessment rather than testing people’s ability to take tests and missing the point.
The third value of any training is ensuring the appropriate mindset of the instructor, which is covered nicely by Clark. All of us will likely beat the drum that it’s about actually learning the content (not just covering it through slides), but then, why do we see so much slide-driven material with glazed participant eyes? It’s crucial that teaching mindsets account for what is relevant to the audience, provide examples and modeling, use visualisation practices, and actively scaffold the learning process based on where participants’ knowledge, skills, and attitudes currently stand.
Now we come to the fourth value: setting up effective evaluation! Let’s return to Kirkpatrick, only this time focusing on his four levels of evaluation (Reaction, Learning, Behavior, Results), outlined nicely by Nicole Legault.
You might not be surprised that many training programs never validate participants’ change in behavior, which is the responsibility of both trainers and leaders. However, trainers often consider their job done once learning is measured, while leaders often consider their job to be quantitative impact analysis.
As I stated in my white paper concerning constraints and solutions for adjunct faculty, the behavioral level is crucial: according to Kirkpatrick, it drives morale, retention, better customer service, and reduced waste.
Delivering on the values outlined above and setting up feedback loops at each level of Kirkpatrick’s model keeps the learning experience relevant and continuously improving.