In my last post before the holidays I addressed the issue of “effect size” in learning research. It helps us answer a very important question: How do we know if a training technique is effective?
One of the problems in learning research is that, while we do have some understanding of how people learn, our theories do not rise to the level of hard sciences such as physics, chemistry, or biology. In physics, for example, valid research results can be replicated consistently. In learning, replication is much more difficult: there are too many subjective variables to control for, and sample sizes vary widely from study to study. Multiple studies of the same training technique often give differing results. One might show an effect size of 1 (pretty strong), another an effect size close to zero (no effect), and still another a negative effect size (something we don’t want at all).
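To make those numbers concrete, here is a minimal sketch of how an effect size (Cohen’s d, the standardized difference between two group means) is typically computed. The scores below are made-up illustrations, not data from any study:

```python
import statistics
from math import sqrt

def cohens_d(treatment, control):
    """Cohen's d: standardized mean difference between two groups."""
    n1, n2 = len(treatment), len(control)
    # Pooled SD weights each group's variance by its degrees of freedom.
    pooled_sd = sqrt(((n1 - 1) * statistics.variance(treatment) +
                      (n2 - 1) * statistics.variance(control)) / (n1 + n2 - 2))
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd

# Hypothetical test scores: group that got the training vs. group that didn't.
trained   = [78, 85, 82, 90, 74]
untrained = [70, 81, 72, 86, 74]
print(round(cohens_d(trained, untrained), 2))  # prints 0.81
```

A d of 0.81 would usually be read as a large effect; values near zero mean the groups barely differ, and a negative d means the trained group actually did worse.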
So if individual studies give different results, how are we to decide what works and what doesn’t? This is why I like meta-analyses. A meta-analysis statistically combines the effect sizes reported across many studies into a single result. In a sense it is a “summary” result, which helps account for the subjective variations that may occur in any single experiment.
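The combining step can be sketched in a few lines. A common approach (one of several; this fixed-effect, inverse-variance model and all the numbers below are illustrative assumptions, not taken from any study mentioned here) weights each study’s effect size by how precisely it was estimated:

```python
from math import sqrt

def combined_effect(effects, std_errors):
    """Fixed-effect meta-analysis: inverse-variance weighted mean.

    Studies with smaller standard errors (more precise estimates,
    usually larger samples) receive proportionally more weight.
    """
    weights = [1 / se ** 2 for se in std_errors]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    pooled_se = sqrt(1 / sum(weights))  # precision of the combined estimate
    return pooled, pooled_se

# Three hypothetical studies of one technique: strong, near-zero, negative.
effects     = [1.0, 0.05, -0.3]
std_errors  = [0.4, 0.2, 0.5]
d, se = combined_effect(effects, std_errors)
print(round(d, 2))  # prints 0.18
```

Here the three conflicting results collapse into a single small positive effect (about 0.18), dominated by the most precisely estimated study: exactly the kind of “summary” answer a meta-analysis provides.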
Meta-analysis has its critics, but at this point it is widely accepted as a valuable statistical tool. That is why, when I teach my Science of Learning workshop, I base my evidence of what works and what doesn’t on meta-analyses rather than individual studies. And when I cannot find a meta-analysis and have to rely on an individual study, I make sure to point that out.
So, what about the 10,000 hour rule? Here’s a meta-analysis of research results:
And if you don’t have the patience to read the entire article, here’s the take-away:
“We conclude that deliberate practice is important, but not as important as has been argued.”