I think we all have an intuitive sense that learning should be difficult, but not too difficult. If, for example, you enrolled in a third grade math class, you would likely do very well but wouldn’t learn anything you didn’t already know. On the other hand, if you enrolled in a graduate course in theoretical physics, the subject matter would be so difficult that you wouldn’t learn anything in that course either.
Around 100 years ago, the Russian psychologist Lev Vygotsky (1896 – 1934) coined the term “Zone of Proximal Development,” meaning that learning is maximized when we pitch the learning level just beyond, but still within reach of, the learner.
More recently, in 1994, Robert Bjork, a professor at UCLA, coined the term “Desirable Difficulty,” meaning that learning is optimized when we are presented with challenges that are neither too easy nor impossible, but at just the right level of difficulty (a sort of Goldilocks effect). He recommends activities such as quizzing, spacing, and interleaving, all of which have been discussed in this blog.
So, is there a “difficulty” sweet spot, and if so, what is it?
In this study, the researchers used machine learning algorithms to “train” AI programs on simple tasks (e.g., recognizing handwritten digits). What they discovered, across a series of experiments, was that the algorithms learned most quickly when the difficulty of the training data was tuned to produce a 15% error rate. Both higher and lower error rates slowed learning. In fact, when they adaptively adjusted the difficulty to hold accuracy at 85% (that same 15% error rate), learning proceeded exponentially faster than learning at any fixed difficulty, whether easier or harder.
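To make the adaptive idea concrete, here is a minimal sketch. It is my own illustration, not the authors’ code: the synthetic data, the simple logistic classifier, and the step sizes are all assumptions. The only point it demonstrates is the feedback loop, nudging the difficulty of the training data each step so accuracy hovers near 85%.

```python
import numpy as np

rng = np.random.default_rng(0)
TARGET_ACC = 0.85      # the "85% rule": hold accuracy near 85% (a 15% error rate)
separation = 3.0       # how far apart the two classes are; smaller = harder
w, b = np.zeros(2), 0.0
lr = 0.1

def make_batch(sep, n=64):
    """Two 2-D Gaussian classes whose means are `sep` apart along one axis."""
    y = rng.integers(0, 2, n)
    X = rng.normal(size=(n, 2))
    X[:, 0] += np.where(y == 1, sep / 2, -sep / 2)
    return X, y

for step in range(500):
    X, y = make_batch(separation)
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # logistic prediction
    grad = p - y                              # gradient of the log loss
    w -= lr * X.T @ grad / len(y)             # one SGD step
    b -= lr * grad.mean()
    acc = ((p > 0.5) == y).mean()
    # Adapt difficulty toward the 85% target:
    # too easy (accuracy above target)  -> pull the classes together (harder)
    # too hard (accuracy below target)  -> push the classes apart (easier)
    separation += 0.05 if acc < TARGET_ACC else -0.05
    separation = max(separation, 0.1)
    if step % 100 == 0:
        print(f"step {step:3d}  accuracy {acc:.2f}  separation {separation:.2f}")
```

The interesting part is not the classifier, which is deliberately trivial, but the controller at the bottom: difficulty is never fixed, it is continuously re-tuned around whatever the learner can currently do.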
Of course, these results were based on machine, not human, learning. So what are the implications for human learning? The authors argue that their machine learning algorithms mimic human learning strategies. If so (and this is just one study), that tells us something about where to target our learning.
When I work with clients to validate certification exams, a typical passing score is 90%. That makes sense at the mastery (summative) level, assuming the value was set using a valid standard-setting process (e.g., Angoff). But if the results of this research hold, then during formative testing (that is, during learning), our learning is at the optimal degree of difficulty when formative exam results are closer to the 85% range. Much lower or higher than that, and we can infer that the learning level sits outside the Zone of Proximal Development.
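As a small illustration of that reading (my own framing, and the width of the band is an assumption, not something the study specifies), a formative score could be checked against a tolerance band around the 85% target:

```python
def zpd_signal(correct: int, total: int, target: float = 0.85, tolerance: float = 0.05) -> str:
    """Rough read on a formative quiz score relative to an assumed 85% +/- 5% band."""
    score = correct / total
    if score > target + tolerance:
        return "likely too easy: consider harder material"
    if score < target - tolerance:
        return "likely too hard: consider easier material or more review"
    return "near the 85% sweet spot: difficulty looks about right"

# Example: 16 of 20 correct is 80%, just inside the assumed band.
print(zpd_signal(16, 20))
```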
Fascinating ideas. I think they lead to more questions, however.
How do we dial in a level of difficulty for a human being? After all, people are not inclined to continue learning if they keep failing. A calibration test needs to be short enough to keep the learner from abandoning the test but long enough to get useful information.
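One possible approach, offered purely as a sketch on my part (a weighted up/down staircase, not anything proposed in the study or by Vygotsky), is a short adaptive calibration: nudge difficulty up a little after each correct answer and down more after each miss, so the test settles near the level where the learner answers about 85% of items correctly. The simulated learner and all the numbers below are assumptions for illustration.

```python
import math
import random

random.seed(1)

def simulated_learner(difficulty: float, ability: float = 5.0) -> bool:
    """Stand-in for a real learner: logistic chance of a correct answer."""
    p_correct = 1.0 / (1.0 + math.exp(difficulty - ability))
    return random.random() < p_correct

def calibrate(n_items: int = 20, start: float = 3.0) -> float:
    """Short staircase: returns an estimate of the learner's working difficulty."""
    difficulty = start
    # Step sizes chosen so the staircase settles where
    # P(correct) ~= step_easier / (step_easier + step_harder) = 0.85.
    step_harder, step_easier = 0.15, 0.85
    for _ in range(n_items):
        if simulated_learner(difficulty):
            difficulty += step_harder   # correct: nudge slightly harder
        else:
            difficulty -= step_easier   # miss: back off more, protect morale
    return difficulty

print(f"Estimated working difficulty after a 20-item test: {calibrate():.2f}")
```

Capping the test at roughly 20 items is exactly the trade-off raised above: long enough for the staircase to settle, short enough that a struggling learner isn’t worn down by repeated misses.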
How do we ensure that these results transfer to humans? Is the AI’s learning process similar enough to a human’s to even make that analogy?
I have long enjoyed Vygotsky’s Zone of Proximal Development, and I often use it to dispel the myth that we can easily sort learners into levels (e.g., beginner, intermediate, advanced). Instead, we need to approach learners by first understanding their needs and then addressing those needs.