Delivering Microlearning for Sustained Learning Effectiveness

How to Determine Which Microlearning Is Effective and Which Is Not

There is abundant evidence that breaking up longer learning modules and delivering them as smaller units over time is an effective learning strategy. But there is more to effective microlearning than creating and distributing small learning nuggets.

When we designed Intela as a second-generation microlearning platform, we knew that distributing small “learning nuggets” was only one piece of a larger microlearning strategy. Microlearning can be compelling, but how do we know whether it is effective? How do we distinguish learning nuggets that contribute to long-term retention from those that do not? How do we know whether our microlearning results in sustained learning? To answer these questions, we must measure, which is why we built sophisticated measurement capabilities into the Intela platform.

If you are using microlearning, what should you measure?

Engagement – Are the microlessons being viewed? Are they being viewed from beginning to end? Are some microlessons being viewed multiple times?
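
As a rough illustration, the sketch below summarizes views, completion rates, and repeat viewing from a log of view events. The event schema (learner_id, lesson_id, watched_fraction) and the 90% completion threshold are illustrative assumptions, not Intela’s actual data model.

```python
# Minimal sketch: summarizing microlesson engagement from a hypothetical
# view-event log. Field names and thresholds are illustrative assumptions.
from collections import defaultdict

events = [
    {"learner_id": "a1", "lesson_id": "safety-101", "watched_fraction": 1.0},
    {"learner_id": "a1", "lesson_id": "safety-101", "watched_fraction": 0.4},
    {"learner_id": "b2", "lesson_id": "safety-101", "watched_fraction": 0.9},
    {"learner_id": "b2", "lesson_id": "gdpr-basics", "watched_fraction": 0.2},
]

views = defaultdict(int)        # total views per lesson
completions = defaultdict(int)  # views watched essentially beginning to end
per_learner = defaultdict(lambda: defaultdict(int))  # views per learner per lesson

for e in events:
    views[e["lesson_id"]] += 1
    if e["watched_fraction"] >= 0.9:  # "beginning to end" threshold (assumed)
        completions[e["lesson_id"]] += 1
    per_learner[e["lesson_id"]][e["learner_id"]] += 1

for lesson in views:
    completion_rate = completions[lesson] / views[lesson]
    repeat_viewers = sum(1 for n in per_learner[lesson].values() if n > 1)
    print(f"{lesson}: {views[lesson]} views, "
          f"{completion_rate:.0%} completed, {repeat_viewers} repeat viewer(s)")
```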

Learning – Engagement is a necessary but not sufficient condition for effectiveness. For example, a video may be “popular” because it is humorous or uses engaging graphics, but that does not mean it produces effective learning. In fact, decades of research tell us that learners are poor judges of what constitutes effective learning. Level One (Reaction) surveys do not correlate with Level Two (Learning) or Level Three (Application) results. Intela has three ways to measure and reinforce learning through assessments:

  • Adaptive questioning exercises
  • Flexible gamified contests
  • Powerful testing capabilities

Each of these assessment types produces rich learning analytics to help you determine what your learners have mastered and what they have not. For example:

Longitudinal Testing – Passing a test immediately after completing a microlearning assignment does not mean a learner will sustain that learning over time. Intela’s assessments can, and should, be deployed longitudinally, in conjunction with personalized microlearning reviews, over a period of weeks and months.
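
To make the idea concrete, here is a minimal sketch of scheduling follow-up assessments at expanding intervals and flagging score decay. The intervals (one week, one month, three months), the 80% mastery threshold, and the function names are assumptions for illustration, not Intela defaults.

```python
# Minimal sketch: deploying the same assessment longitudinally and flagging
# score decay. Intervals and threshold are illustrative assumptions.
from datetime import date, timedelta

REVIEW_INTERVALS = [timedelta(days=7), timedelta(days=30), timedelta(days=90)]

def review_schedule(initial_pass_date: date) -> list[date]:
    """Dates on which the learner should re-take the assessment."""
    return [initial_pass_date + delta for delta in REVIEW_INTERVALS]

def retention_flags(scores_over_time: list[float], threshold: float = 0.8) -> list[bool]:
    """True wherever a follow-up score drops below the mastery threshold."""
    return [score < threshold for score in scores_over_time]

schedule = review_schedule(date(2024, 1, 15))
print("Review dates:", [d.isoformat() for d in schedule])
print("Needs remediation:", retention_flags([0.95, 0.82, 0.64]))
```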

Confidence-based Testing – Knowledge alone is not sufficient for performance; knowledge must be paired with appropriate confidence before it reliably translates into action. Intela’s proprietary confidence-based testing analytics enable you to measure confidence accuracy for all of your learners.
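
As a generic illustration of confidence–accuracy analysis (a common approach, not a description of Intela’s proprietary method), the sketch below sorts each response into the familiar knowledge–confidence quadrants: mastery, misinformation (confidently wrong), doubt (right but unsure), and known gaps.

```python
# Minimal sketch of generic confidence-accuracy analysis. Each response
# records correctness and the learner's stated confidence (assumed schema).
responses = [
    {"correct": True,  "confident": True},   # mastery
    {"correct": False, "confident": True},   # misinformation: confidently wrong
    {"correct": True,  "confident": False},  # doubt: right but unsure
    {"correct": False, "confident": False},  # known gap
]

quadrants = {"mastery": 0, "misinformation": 0, "doubt": 0, "known gap": 0}
for r in responses:
    if r["correct"] and r["confident"]:
        quadrants["mastery"] += 1
    elif not r["correct"] and r["confident"]:
        quadrants["misinformation"] += 1
    elif r["correct"] and not r["confident"]:
        quadrants["doubt"] += 1
    else:
        quadrants["known gap"] += 1

total = len(responses)
for name, count in quadrants.items():
    print(f"{name}: {count / total:.0%}")
```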

Analysis by Learning Objective: Where are your learners’ strengths and weaknesses? What topics have they mastered and where do they need remediation?
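
A minimal sketch of this kind of objective-level roll-up, assuming each item result is tagged with a learning objective and using an illustrative 80% mastery threshold, might look like this:

```python
# Minimal sketch: rolling item results up to learning objectives. The
# objective tags and mastery threshold are illustrative assumptions.
from collections import defaultdict

results = [
    {"objective": "hand hygiene",    "correct": True},
    {"objective": "hand hygiene",    "correct": True},
    {"objective": "sharps disposal", "correct": False},
    {"objective": "sharps disposal", "correct": True},
]

correct = defaultdict(int)
attempts = defaultdict(int)
for r in results:
    attempts[r["objective"]] += 1
    correct[r["objective"]] += int(r["correct"])

for objective in attempts:
    mastery = correct[objective] / attempts[objective]
    status = "mastered" if mastery >= 0.8 else "needs remediation"
    print(f"{objective}: {mastery:.0%} correct ({status})")
```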

Analysis by Question: Which questions are causing the most difficulty for your learners? Which questions strongly “discriminate” between weaker and stronger learners and which should be replaced?
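
For question-level analysis, one common approach (shown here as an assumed method, not a description of Intela’s engine) is to compute each item’s difficulty as the proportion of learners answering it correctly, and its discrimination as the difference in that proportion between the highest- and lowest-scoring learners; items with near-zero or negative discrimination are candidates for replacement.

```python
# Minimal sketch of classic item analysis: difficulty and an upper-lower
# discrimination index per question. The response matrix is illustrative.
item_matrix = {                  # learner -> list of 0/1 item scores
    "a1": [1, 1, 1, 0],
    "b2": [1, 0, 1, 1],
    "c3": [1, 0, 0, 0],
    "d4": [0, 0, 1, 0],
}

totals = {lid: sum(scores) for lid, scores in item_matrix.items()}
ranked = sorted(item_matrix, key=totals.get, reverse=True)
half = len(ranked) // 2
upper, lower = ranked[:half], ranked[-half:]

num_items = len(next(iter(item_matrix.values())))
for i in range(num_items):
    difficulty = sum(scores[i] for scores in item_matrix.values()) / len(item_matrix)
    p_upper = sum(item_matrix[lid][i] for lid in upper) / len(upper)
    p_lower = sum(item_matrix[lid][i] for lid in lower) / len(lower)
    discrimination = p_upper - p_lower  # near zero or negative => review the item
    print(f"Q{i + 1}: difficulty={difficulty:.2f}, discrimination={discrimination:+.2f}")
```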

Intela is a second-generation microlearning platform that combines traditional microlearning (small learning nuggets, adaptive questioning exercises, gamification) with a powerful assessment engine that helps you promote and measure sustained learning effectiveness.
