Using Measurement to Promote Learning Effectiveness

What’s the relationship between measurement and learning effectiveness? When most L&D practitioners think about measurement and learning effectiveness, they think about outcome measures. Examples include: Course enrollment and completion -- although, as we all recognize, this metric says nothing about learning effectiveness, just that the training was completed (or not). Level I (smile …

Delivering Microlearning for Sustained Learning Effectiveness

How to Determine What Microlearning is Effective and What’s Not

There is abundant evidence that breaking up longer learning modules and delivering them as smaller units over time is an effective learning strategy. But there is more to effective microlearning than creating and distributing small learning nuggets. When we designed Intela as a second-generation microlearning …

Using Confidence-based Knowledge Checks to Sustain Learning

Measuring learning immediately following a learning experience (workshops, eLearning, etc.) is standard practice -- but it’s not sufficient. Too often, we perform this immediate measurement, assume that we have achieved our learning goals, and move on. But, for learning to be meaningful it must be persistent over time -- it must be sustained. To check …
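The excerpt stops before describing the mechanics, but a confidence-based knowledge check generally pairs each answer with the learner's self-reported confidence and classifies the pair. A minimal Python sketch under that assumption -- the category names are illustrative, not the author's scheme:

```python
# Hypothetical classifier for a confidence-based knowledge check:
# each response records whether the answer was correct AND whether
# the learner reported being confident in it.

def classify(correct: bool, confident: bool) -> str:
    if correct and confident:
        return "mastery"          # knows it, and knows they know it
    if correct and not confident:
        return "doubt"            # right answer, shaky knowledge
    if not correct and confident:
        return "misinformation"   # wrong but sure -- the riskiest case
    return "uninformed"           # wrong and unsure

responses = [(True, True), (False, True), (True, False)]
print([classify(c, conf) for c, conf in responses])
# ['mastery', 'misinformation', 'doubt']
```

Tracking how many responses stay in the "mastery" cell on a delayed re-check, rather than immediately after training, is one way to operationalize "sustained" learning.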

Which Works Better: Learning Objectives or Pre-Quizzes?

As readers of this column -- and anyone who has ever attended one of my workshops -- know, I am a limited fan of learning objectives (LOs). I do think they are useful, and even essential, for two purposes: Structuring content and ensuring all important content is covered when creating learning materials. Creating fair, valid, and reliable …

Creating Valid Skills/Performance Evaluations

In our previous two posts we reviewed the problems that occur when using rating scales for evaluations. First, we discussed (The Problem with Rating Scales) the difficulties that arise when rating data, which is ordinal, is treated as if it were ratio data prior to its use in mathematical operations, such as averaging. Then, we …
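The averaging problem described above is easy to demonstrate. A minimal Python sketch with made-up Level One ratings, showing how a mean computed on ordinal labels hides a polarized response pattern that the median and frequency counts expose:

```python
from collections import Counter
from statistics import mean, median

# Hypothetical Level One ratings on a 1-5 ordinal scale.
ratings = [5, 5, 4, 1, 5, 2, 5, 1, 5, 5]

# Treating the ordinal labels as ratio data: the mean of 3.8 reads as
# "mostly satisfied" and hides the polarized responses underneath.
print(mean(ratings))      # 3.8

# Ordinal-appropriate summaries: median and frequency distribution.
print(median(ratings))    # 5.0
print(Counter(ratings))   # Counter({5: 6, 1: 2, 4: 1, 2: 1})
```

The frequency distribution makes the two strongly dissatisfied responses visible, which is exactly the information an averaged rating erases.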

Reporting Level One Evaluations

In our last blog post we pointed out the many problems with rating scales. We hope we convinced you that doing traditional math on these scales is a mathematically invalid exercise. So, the question becomes: How do we report results from Level One and Skills Evaluations? The answer is somewhat different for each. We will …

The Problem with Rating Scales

What’s wrong with rating scales? A lot. They are ubiquitous in learning and assessment, appearing primarily in two types of evaluations: Level One “smile sheets” and skills evaluations. The same criticisms apply to both usages, though the proposed solutions are different. In Level One evaluations rating scales are primarily used as responses to statements about a learning …

Randomizing vs. Subsetting Exam Questions

In discussions with clients I notice that some people inadvertently use the terms randomizing and subsetting interchangeably, though they are really quite different, with entirely different consequences for exam validity. Randomization means that all test takers get the same questions but in a different order. Subsetting means that each test taker gets a different subset of …
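The two definitions can be sketched directly. A minimal Python example with hypothetical question IDs -- not the author's code -- contrasting randomization (same questions, different order) with subsetting (a different subset per test taker):

```python
import random

# Hypothetical pool of 10 exam question IDs.
pool = [f"Q{i}" for i in range(1, 11)]

def randomized_exam(pool, seed):
    """Randomization: every test taker answers the SAME questions;
    only the presentation order differs."""
    exam = list(pool)                  # copy so the pool is untouched
    random.Random(seed).shuffle(exam)
    return exam

def subsetted_exam(pool, size, seed):
    """Subsetting: each test taker answers a DIFFERENT subset, so two
    candidates may face exams of unequal content and difficulty."""
    return random.Random(seed).sample(pool, size)

a = randomized_exam(pool, seed=1)
b = randomized_exam(pool, seed=2)
assert sorted(a) == sorted(b)          # same content, different order

c = subsetted_exam(pool, size=5, seed=1)
d = subsetted_exam(pool, size=5, seed=2)
# c and d may overlap, but are generally not the same set of questions
```

Because every candidate in the randomized case answers an identical set of items, their scores are directly comparable; subsetted exams are only comparable if the subsets are built to be equivalent in difficulty.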