Find all other episodes of Teacher Ollie's Takeaways here, find it on iTunes here, or on your favourite podcasting app by searching ‘Teacher Ollie's Takeaways'. You may also like to check out Ollie's other podcast, the Education Research Reading Room, here.
Show Notes
Why minimal guidance during instruction doesn't work
Ref: Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), 75–86.
The arguments for and against minimally guided instruction
- Assertion:
The most recent version of instruction with minimal guidance comes from constructivism (e.g., Steffe & Gale, 1995), which appears to have been derived from observations that knowledge is constructed by learners and so (a) they need to have the opportunity to construct by being presented with goals and minimal information, and (b) learning is idiosyncratic and so a common instructional format or strategies are ineffective.
- Response:
“The constructivist description of learning is accurate, but the instructional consequences suggested by constructivists do not necessarily follow.”
Learners ultimately have to construct a mental schema of the information; that schema is what we're trying to furnish them with. And it turns out that the less complete the schema we give them (as with minimal guidance), the less complete the schema they end up with. Essentially, giving them the full picture better helps them to construct the full picture!
- Assertion:
Another consequence of attempts to implement constructivist theory is a shift of emphasis away from teaching a discipline as a body of knowledge toward an exclusive emphasis on learning a discipline by experiencing the processes and procedures of the discipline (Handelsman et al., 2004; Hodson, 1988). This change in focus was accompanied by an assumption shared by many leading educators and discipline specialists that knowledge can best be learned or only learned through experience that is based primarily on the procedures of the discipline. This point of view led to a commitment by educators to extensive practical or project work, and the rejection of instruction based on the facts, laws, principles and theories that make up a discipline’s content, accompanied by the use of discovery and inquiry methods of instruction.
- Response:
…it may be a fundamental error to assume that the pedagogic content of the learning experience is identical to the methods and processes (i.e., the epistemology) of the discipline being studied and a mistake to assume that instruction should exclusively focus on methods and processes (see Shulman, 1986; Shulman & Hutchings, 1999).
This gets to the heart of the distinction between experts and novices. Experts and novices simply don't learn the same way; they don't have the same background knowledge at their disposal. By teaching novices in the way that experts should be taught, we're really doing them a disservice, overloading their working memories, and simply being ineffective teachers.
Drilling down to the evidence:
None of the preceding arguments and theorizing would be important if there was a clear body of research using controlled experiments indicating that unguided or minimally guided instruction was more effective than guided instruction. Mayer (2004) recently reviewed evidence from studies conducted from 1950 to the late 1980s comparing pure discovery learning, defined as unguided, problem-based instruction, with guided forms of instruction. He suggested that in each decade since the mid-1950s, when empirical studies provided solid evidence that the then popular unguided approach did not work, a similar approach popped up under a different name with the cycle then repeating itself. Each new set of advocates for unguided approaches seemed either unaware of or uninterested in previous evidence that unguided approaches had not been validated. This pattern produced discovery learning, which gave way to experiential learning, which gave way to problem-based and inquiry learning, which now gives way to constructivist instructional techniques. Mayer (2004) concluded that the “debate about discovery has been replayed many times in education but each time, the evidence has favored a guided approach to learning” (p. 18).
Current Research Supporting Direct Guidance
The list is too long to reproduce in full; here are some excerpts:
Aulls (2002) observed a number of teachers as they implemented constructivist activities… He described the “scaffolding” that the most effective teachers introduced when students failed to make learning progress in a discovery setting. He reported that the teacher whose students achieved all of their learning goals spent a great deal of time in instructional interactions with students.
Stronger evidence from well-designed, controlled experimental studies also supports direct instructional guidance (e.g., see Moreno, 2004; Tuovinen & Sweller, 1999).
Klahr and Nigam (2004) tested transfer following discovery learning and found that those relatively few students who did learn via discovery ‘showed no signs of superior quality of learning'.
Re-visiting Sweller's ‘Story of a Research Program'.
From last week: the goal-free effect, the worked example effect, and the split-attention effect.
My post from this week on trying out the goal free effect in my classroom.
See full paper here.
David Geary provided the relevant theoretical constructs (Geary, 2012). He described two categories of knowledge: biologically primary knowledge, which we have evolved to acquire and so learn effortlessly and unconsciously, and biologically secondary knowledge, which we need for cultural reasons. Examples of primary knowledge are learning to listen and speak a first language, while virtually everything learned in educational institutions provides an example of secondary knowledge. We invented schools in order to provide biologically secondary knowledge. (p. 11)
For many years our field had been faced with arguments along the following lines. Look at the ease with which people learn outside of class and the difficulty they have learning in class. They can accomplish objectively complex tasks such as learning to listen and speak, to recognise faces, or to interact with each other, with consummate ease. In contrast, look at how relatively difficult it is for students to learn to read and write, learn mathematics or learn any of the other subjects taught in class. The key, the argument went, was to make learning in class more similar to learning outside of class. If we made learning in class similar to learning outside of class, it would be just as natural and easy.
How might we model learning in class on learning outside of class? The argument was obvious. We should allow learners to discover knowledge for themselves without explicit teaching. We should not present information to learners – it was called “knowledge transmission” – because that is an unnatural, perhaps impossible, way of learning. We cannot transmit knowledge to learners because they have to construct it themselves. All we can do is organize the conditions that will facilitate knowledge construction and then leave it to students to construct their version of reality themselves. The argument was plausible and swept the education world.
The argument had one flaw: it was impossible to develop a body of empirical literature supporting it using properly constructed, randomized, controlled trials.
The worked example effect demonstrated clearly that showing learners how to do something was far better than having them work it out themselves. Of course, with the advantage of hindsight provided by Geary’s distinction between biologically primary and secondary knowledge, it is obvious where the problem lies. The difference in ease of learning between class-based and non-class-based topics had nothing to do with differences in how they were taught and everything to do with differences in the nature of the topics.
If class-based topics really could be learned as easily as non-class-based topics, we would never have bothered including them in a curriculum since they would be learned perfectly well without ever being mentioned in educational institutions. If children are not explicitly taught to read and write in school, most of them will not learn to read and write. In contrast, they will learn to listen and speak without ever going to school.
Re-visiting Heather Hill.
I asked: Dylan Wiliam quotes you and says ‘Heather Hill's – http://hvrd.me/TtXcYh – work at Harvard suggested that a teacher would need to be observed teaching 5 different classes, with every observation made by 6 independent observers, to reduce the role of chance enough to reliably judge a teacher.'
Heather replied:
Thanks for your question about how many observations are necessary. It really depends upon the purpose for use.
1. If the use is teacher professional development. I wouldn’t worry too much about score reliability if the observations are used for informal/growth purposes. It’s much more valuable to have teachers and observers actually processing the instruction they are seeing, and then talking about it, than to be spending their time worrying about the “right” score for a lesson.
That principle is actually the basis for our own coaching program, which we built around our observation instrument (the MQI):
http://mqicoaching.cepr.harvard.edu
The goal is to have teachers learn the MQI (though any instrument would do), then analyze their own instruction vis-a-vis the MQI, and plan for improvement by using the upper MQI score points as targets. So for instance, if a teacher concludes that she is a “low” for student engagement, she then plans with her coach how to become a “mid” on this item. The coach serves as a therapist of sorts, giving the teacher tools, cheering her on, and making sure she stays on course rather than telling the teacher exactly what to do. During this process, we’re not actually too concerned that either the teacher (or even coach) scores correctly; we do want folks to be noticing what we notice, however, about instruction. A granular distinction, but one that makes coaching much easier.
2. If the use is for formal evaluation. Here, score reliability matters much more, especially if there are going to be consequential decisions made based on teacher scores. You don’t want to be wrong about promoting a teacher or selecting a coach based on excellent classroom instruction. For my own instrument, it originally looked like we needed 4 observations each scored by 2 raters (see a paper I wrote with Matt Kraft and Charalambos Charalambous in Educational Researcher) to get reliable scores. However, my colleague Andrew Ho and colleagues came up with the 6 observations / 5 observers estimate from the Measures of Effective Teaching data:
http://k12education.gatesfoundation.org/wp-content/uploads/2015/12/MET_Reliability-of-Classroom-Observations_Research-Paper.pdf
And looking at our own reliability data from recent uses of the MQI, I tend to believe his estimate more than our own. I’d also add that better score reliability can probably be achieved if a “community of practice” is doing the scoring — folks who have taken the instrument and adapted it slightly to their own ideas and needs. It’s a bet that I have, but not one that I’ve tested (other than informally).
The actual MQI instrument and its training materials are here:
http://isites.harvard.edu/icb/icb.do?keyword=mqi_training
We’re always happy to answer questions, either about the instrument, scoring, or the coaching.
Best,
Heather
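As an aside, the intuition behind needing multiple lessons and multiple raters can be sketched with a back-of-the-envelope generalizability calculation. The snippet below is purely illustrative and is not Heather's model or the MET paper's: the variance components are invented numbers, and the reliability function is a simplified two-facet design.

```python
# Illustrative only: a simplified generalizability-theory view of why
# averaging over more lessons and more raters makes observation scores
# more reliable. The variance components below are made up for the sketch;
# in practice they are estimated from real scoring data.

def reliability(n_lessons: int, n_raters: int,
                var_teacher: float = 0.40,    # stable teacher-to-teacher differences
                var_lesson: float = 0.35,     # lesson-to-lesson variation within a teacher
                var_rater: float = 0.10,      # systematic rater disagreement
                var_residual: float = 0.15):  # everything else
    """Share of score variance attributable to the teacher when each of
    n_lessons is scored by n_raters independent raters."""
    error = (var_lesson / n_lessons
             + var_rater / n_raters
             + var_residual / (n_lessons * n_raters))
    return var_teacher / (var_teacher + error)

# Averaging shrinks every error term, so reliability climbs with the design:
for n_lessons, n_raters in [(1, 1), (4, 2), (6, 5)]:
    print(f"{n_lessons} lessons x {n_raters} raters -> "
          f"{reliability(n_lessons, n_raters):.2f}")
```

With these made-up components, a single lesson scored by a single rater comes out around .40, the 4-lessons/2-raters design around .72, and the 6-lessons/5-raters design around .83, which mirrors the direction of Heather's point; the real numbers depend entirely on variance components estimated from actual scoring data.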
Routines.
Post from Gary Jones: Do you work in a ‘stupid' school? It's about functional stupidity and how smart people end up doing silly things that result in all sorts of bad outcomes, one of which is poor instruction for students.
Here are two of the seven routines that the post highlighted for avoiding functional stupidity (originally from Alvesson, M., & Spicer, A. (2016). The stupidity paradox: The power and pitfalls of functional stupidity at work).
Newcomers – find ways of taking advantage of the perspective of new members of staff and their ‘beginner's mind'. Ask them: What seems strange or confusing? What's different? What could be done differently?
Pre-mortems – work out why a project ‘failed' before you even start the project. See http://evidencebasededucationalleadership.blogspot.com/2016/11/the-school-research-lead-premortems-and.html for more details.