How to Determine What Microlearning is Effective and What’s Not
There is abundant evidence that breaking up longer learning modules and delivering them as smaller units over time is an effective learning strategy. But there is more to effective microlearning than creating and distributing small learning nuggets. When we designed Intela as a second-generation microlearning … Continue reading Delivering Microlearning for Sustained Learning Effectiveness
Measuring learning immediately following a learning experience (workshops, eLearning, etc.) is standard practice, but it’s not sufficient. Too often, we perform this immediate measurement, assume that we have achieved our learning goals, and move on. But for learning to be meaningful, it must persist over time; it must be sustained. To check … Continue reading Using Confidence-based Knowledge Checks to Sustain Learning
Neural Alignment and Learning
Usually in this blog, we write about ideas and research applicable to our roles as practitioners of corporate learning. But every so often we come across a research study with results so fascinating that we are compelled to share it, even though it is unlikely to impact how you do your day-to-day … Continue reading Neural Alignment and Learning
As readers of this column, and anyone who has ever attended one of my workshops, knows, I am a limited fan of learning objectives (LOs). I do think they are useful, and even essential, for two purposes: structuring content and ensuring all important content is covered when creating learning materials, and creating fair, valid, and reliable … Continue reading Which Works Better: Learning Objectives or Pre-Quizzes?
In our previous two posts we reviewed the problems that occur when using rating scales for evaluations. First, we discussed (The Problem with Rating Scales) the difficulties that arise when rating data, which is ordinal, is treated as if it were ratio data prior to its use in mathematical operations, such as averaging. Then, we … Continue reading Creating Valid Skills/Performance Evaluations
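The ordinal-versus-ratio distinction above can be made concrete in a few lines of code. The ratings and the reporting approach below are illustrative assumptions, not taken from the posts: a mean can always be computed on a 5-point scale, but it presumes the distances between scale points are equal, which ordinal data does not guarantee. Reporting the full response distribution avoids that assumption.

```python
from collections import Counter

# Hypothetical 5-point ratings gathered from a Level One evaluation.
ratings = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]

# Treating ordinal labels as ratio data: the mean is computable,
# but it assumes a "4" is exactly twice a "2", which the scale
# does not support.
mean_rating = sum(ratings) / len(ratings)

# An ordinal-safe alternative: report how responses are distributed.
distribution = Counter(ratings)
for level in sorted(distribution):
    share = distribution[level] / len(ratings)
    print(f"Rating {level}: {distribution[level]} responses ({share:.0%})")
```

A distribution like "70% rated 4 or 5" communicates more faithfully than "average rating: 3.9", since the latter implies arithmetic structure the scale lacks.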
In our last blog post we pointed out the many problems with rating scales. We hope we convinced you that doing traditional math on these scales is a mathematically invalid exercise. So, the question becomes: How do we report results from Level One and Skills Evaluations? The answer is somewhat different for each. We will … Continue reading Reporting Level One Evaluations
What’s wrong with rating scales? A lot. They are ubiquitous in learning and assessment, appearing primarily in two types of evaluations: Level One “smile sheets” and skills evaluations. The same criticisms apply to both usages, though the proposed solutions are different. In Level One evaluations rating scales are primarily used as responses to statements about a learning … Continue reading The Problem with Rating Scales
In my last blog post I made the assertion that randomizing questions does not affect exam difficulty. In other words, if Person A gets a set of exam questions in one order and Person B gets the same questions but in a different order, I asserted that both exams will be of the same difficulty. … Continue reading Does Question Order Affect Exam Difficulty?
In discussions with clients I notice that some people inadvertently use the terms randomizing and subsetting interchangeably, though they are quite different, with entirely different consequences for exam validity. Randomization means that all test takers get the same questions but in a different order. Subsetting means that each test taker gets a different subset of … Continue reading Randomizing vs. Subsetting Exam Questions
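The distinction between the two terms can be sketched in code. This is a minimal illustration using an invented ten-question pool, not anything from the post itself: randomization shuffles one fixed set of questions, while subsetting draws different questions for each test taker, which is why only the latter can change what content an exam actually covers.

```python
import random

question_bank = [f"Q{i}" for i in range(1, 11)]  # hypothetical ten-question pool

def randomized_exam(bank, seed):
    # Randomization: every test taker answers the SAME questions,
    # presented in a different order.
    exam = bank.copy()
    random.Random(seed).shuffle(exam)
    return exam

def subsetted_exam(bank, size, seed):
    # Subsetting: each test taker draws a DIFFERENT subset of the pool,
    # so two exams may not share the same questions at all.
    return random.Random(seed).sample(bank, size)

exam_a = randomized_exam(question_bank, seed=1)
exam_b = randomized_exam(question_bank, seed=2)
# Different order, identical content:
assert exam_a != exam_b and sorted(exam_a) == sorted(exam_b)

subset_a = subsetted_exam(question_bank, size=5, seed=1)
subset_b = subsetted_exam(question_bank, size=5, seed=2)
# Same length, but potentially different questions:
assert len(subset_a) == len(subset_b) == 5
```

The assertions make the contrast explicit: randomized forms always cover identical content, whereas subsetted forms only sample it.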
I do a lot of exam validation, and one of the questions I am frequently asked is: Do we need to validate ALL of our questions and exams? The answer is: It depends on what you are using the questions and exams for. Based on many years of experience, here are my best-practice guidelines: … Continue reading Validating Questions and Exams. How Much is Enough?