
May 5-11: Productive failure, or more instruction?


There is a debate over whether failure in learning can be productive, or whether more instruction is actually better. Both articles look pretty interesting, and you are free to choose either one. It might be good if we read different articles and bring different perspectives back to our discussion, just like the jigsaw activities mentioned in the last paper we read. ;)

Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching. Educational Psychologist, 41(2), 75-86. doi:10.1207/s15326985ep4102_1 (download, or Crocodoc -- you can also download from here)

Kapur, M., & Bielaczyc, K. (2012). Designing for Productive Failure. Journal of the Learning Sciences, 21(1), 45-83. doi:10.1080/10508406.2011.591717 (Crocodoc)

Task Discussion


  • Jennifer Claro   May 17, 2012, 5:37 p.m.

    Kapur & Bielaczyc (2012) was very interesting for me to read. The research design was well thought out and well executed; what a huge project this was!

    The authors were able to show that their main hypothesis was supported in most, but not all, of the classes: students who learned under the PF (productive failure) condition outperformed students who learned under the DI (direct instruction) condition on post-tests. This is despite the fact that only 2 PF groups in the most capable school were able to solve the problems on their own during the generation phase, while most DI students were able to solve them. So the main hypothesis of short-term failure leading to long-term success has some support.

    However, there are a couple of things the authors did not discuss which could have affected their results. One major one is that the PF students all worked collaboratively in small groups while all DI students worked individually. This is a huge difference in learning conditions that was not taken into consideration. Could it be the collaborative learning that led to the eventual success of the PF students?

    Specifically, the students in the PF condition were all exposed to multiple RSMs. All groups came up with various strategies for trying to solve the problem, whereas with the DI students, only 1 method (1 RSM) was used. This opens up the question of whether it could be exposure to multiple RSMs that led to the eventual success of the PF students.

    One related question is whether, if the PF students were GIVEN the various RSMs, they would have had the same eventual success. Or is it student generation of the RSMs that leads to success? (I suspect the latter, but this is not addressed in the study.) This is a big factor, because the groups that generated the most RSMs were the ones who were eventually the most successful. If all groups were given the same RSMs, what would be the effect on learning outcomes? The big question for me is: is it exposure to multiple RSMs that explains the eventual success of the PF groups? Is the key variable “exposure to multiple RSMs”, or is it “initial failure”? What is the effect of initial failure? How about initial success instead?

    For me, the results can be at least partially explained by the fact that students who collaborated were all exposed to multiple RSMs (most of which failed). They were then taught the canonical RSM, which they all learned and could use well to solve the problems in the post-test. Another question is what the learning outcomes would have been if the students had instead been exposed to RSMs which were initially successful. This experimental design was set up so that students would initially fail (2 groups did not; they succeeded). What if it were set up so that students would initially succeed?

    I cannot agree that the results of the study are based solely on PF students first experiencing failure. Confounding variables include collaboration (PF) vs. non-collaboration (DI) and multiple RSMs (PF) vs. one RSM only (DI), and the possible effect of initially successful RSMs on eventual success was not studied. Nevertheless, I think this is a very interesting study that raises many questions about how we can best support and structure (or unstructure) student learning.

  • Jessy Kate Schingler   May 18, 2012, 11:37 a.m.
    In Reply To:   Jennifer Claro   May 17, 2012, 5:37 p.m.

    jennifer, this was a fascinating summary of the article! it was almost so useful i feel like i don't need to read the article, but actually i am extra motivated to read it now because i want to see the descriptions of the experiments. (i also don't know what an RSM is :p).

    i think your point about the collaboration between the PF group is excellent. i'll comment more after i've read the article. thanks for the inspiration :)

  • Jennifer Claro   May 17, 2012, 1:32 a.m.

    The Crocodoc link at the end of the citation above (Kapur, M., & Bielaczyc, K. (2012). Designing for Productive Failure. Journal of the Learning Sciences, 21(1), 45-83. doi:10.1080/10508406.2011.591717) is a link to another article we discussed earlier.

    I made a new crocodoc link for the Kapur & Bielaczyc article here: http://personal.crocodoc.com/qXSW3LF

    I'm halfway thru this article now and it's very interesting. I'll post a comment on Thu or Fri.

  • Rebecca Cober   May 15, 2012, 7:08 p.m.

    Productive Failure:

    Okay, so I first heard the term "productive failure" at this year's AERA 2012 conference. Michael Jacobson was telling a group of us about the work he is doing at the University of Sydney, which is somehow related to the work of Manu Kapur, who is based at the National Institute of Education in Singapore. Apparently some educators dislike the term productive failure, so in Sydney they have renamed the method to make it more palatable. I can't recall what the new term is.

    To me, the idea of productive failure makes perfect sense. Rather than taking students through a math problem step-by-step, let them work it out collaboratively first, even if they don't get it right. The point is, by tackling the problem from several different angles, they will become more flexible in their problem-solving approaches and representations. Rather than asking themselves on a test, what is the formula or strategy I should use here... they might say, well this reminds me of x, y, and z, and here are three ways I could approach the problem.

    It reminds me of one of my own high school math teachers, who, at the beginning of a unit, showed us a problem that was completely beyond us. She gave us a little while to work it out with a partner. I think she may have explained how to solve it, but then moved on to basics, and gradually progressed to more difficult concepts. Then, just before a test, she'd show us the problem we had experienced at the beginning of the unit. Piece of cake! And I remember feeling like I'd really learned something. What at first appeared to be an insurmountable challenge became something that I could take on, with panache.

    I suppose the key to getting it right is a little bit like Vygotsky's zone of proximal development. If the gap between what a student can do on her own and what she can do with the collaboration of a "more capable peer" is too great, then she will fall flat on her face. But if the student already knows something about the problem and has acquired some of the skills that are necessary to solve it, then productive failure could work. The trick is, how can a teacher tailor problems to specific groups of students?

    One approach I liked in the article was the idea that instead of answering a student right away when she asks for help, a teacher could say it's okay if you can't solve it; just try multiple ways of solving it and multiple representations. In my own experience, I know I was too quick to answer students' questions by giving them concrete answers.

    The productive failure approach would definitely require more effort on the part of both students and teachers, but used in the right way, and at the right time, it could be an extremely effective approach (one might even say productive).

  • Jennifer Claro   May 15, 2012, 11:36 p.m.
    In Reply To:   Rebecca Cober   May 15, 2012, 7:08 p.m.

    I haven't read this article yet, but I'm going to! Thanks for the comments Rebecca, you've motivated me to read the paper! I think we'll have lots to talk about then, because the article I read says that guidance is necessary when the topic is new and unfamiliar, but the article you've read seems to be saying that students can learn from each other, even if it's a new and unexplored area of learning for them. So, it will be cool to get a different perspective!

    I'll post within the next couple days, as soon as I've read it.

  • Jennifer Claro   May 18, 2012, 6:33 p.m.
    In Reply To:   Rebecca Cober   May 15, 2012, 7:08 p.m.

    Hi Rebecca!

    I think the term “productive failure” is a bit of a misnomer too. The goal is to get students to completely explore the problem space and generate as many RSMs (representations and solution methods; sorry Jessy! :) as possible. I don’t know how necessary failure is; of course, if students keep failing, they have to keep trying, so in the end they generate the maximum number of RSMs they are capable of (some groups being able to generate more than others).

    I wonder, though: what if the goal was to find as many successful ways of solving the problem as possible? The authors programmed failure into the research design of this project, but I’m not sure that this was necessary. One reason I have a problem with this failure approach is that you can’t use it often, or students will just get tired of failing all the time and waiting for the teacher to teach them the right (canonical) way of solving the problem. It seems kind of sad for students to be trying to solve a problem that cannot be solved using the resources they have at the time. Let them fail, on purpose, set it up so that they will fail, and then teach them how to do it? (That’s what they did in this research project: teachers explicitly taught them the “right” way to do it, after most groups had failed to solve the problem.)

    Instead, how about ensuring that there is at least one possible solution that students can discover, but making the goal not finding one solution, but finding as many solutions as possible? This way, groups still generate multiple RSMs (which, for me, should be the goal, rather than failure) and have the chance of actually being able to solve the problem. This seems much more productive to me than failure. And students will not get tired of succeeding, as they will with failure.

    Rebecca wrote, “Rather than asking themselves on a test, what is the formula or strategy I should use here... they might say, well this reminds me of x, y, and z, and here are three ways I could approach the problem.” I think so too! So the goal should be getting students to tinker with problem spaces and generate as many RSMs as possible, so that when they are faced with new problem spaces, they can use the strategies they have honed in previous RSM generation. I’d like to see an experimental research design that has PF groups as well as groups given problems with at least one possible solution, and compares the learning outcomes.

    I think you’re right too, Rebecca, in that we teachers tend to answer questions too quickly sometimes. But I think this depends on the student. Some students will just give up if they don’t get it after a while, and motivation is something teachers want to keep high, so a bit of guidance (even a lot, depending on the student) at the right time is, I think, excellent use of Vygotsky’s ZPD. A little bootstrapping can go a long way! But this can definitely be overdone, with teachers answering questions students have hardly tried to answer themselves. I have done this many times too! Really, I think students often need "Time on Task" more than the teacher's help!

    This was a really interesting article! The importance of student generation of multiple RSMs was made very clear to me. Thanks to Bodong for suggesting it. 

  • Thieme Hennis   May 11, 2012, 4:30 a.m.

    I will read the papers some time later, but would like to chip in with my own experiences from a project where we aim to support autonomy and self-directed learning among school dropouts, most of whom are inexperienced youth with low self-esteem and little motivation to learn. At the start of the project, we asked them to come up with topics they really like, are interested in, or are good at. From that, we asked them to form groups to collaborate on personally relevant projects. I could tell a lot more about the project, and will at a later time, but our first experience is that self-directed learning is very difficult and requires clear structure, planning, and supervision from teachers; on the other hand, it may also just happen without much intervention if students are able to do something they really like and which relates to them. Although the learning takes the form of doing a 'personal project', the process itself contains lots of elements of learning, such as planning, communication skills, presentation, writing, research, etc. I will definitely take these papers into account in our next project, which also relates to this.

    Another paper with an interesting discussion is posted here:

    • Blending works: This study adds to a growing consensus around the conclusion that the most effective type of instruction combines the online and face-to-face environments. Other meta-analyses which reach this conclusion are Bernard et al. (2004) and Zhao et al. (2005).
    • Adapting instruction works: Means’ work supports the thoughtful adaptation of instructional methods and materials to the online environment, and forthcoming research by OIT’s Research and Evaluation Team reaches a similar conclusion.
    • Self-directed online learning is not the best: Collaborative or instructor-directed online learning achieved results superior to those attained through independent, self-directed online learning, which may provide a partial explanation for why online learning has not proven to be a money-saver for cash-strapped educational institutions.
  • Jessy Kate Schingler   May 12, 2012, 4:19 p.m.
    In Reply To:   Thieme Hennis   May 11, 2012, 4:30 a.m.

    thieme, this is a super interesting report and related study, thanks for posting it! one perhaps biased reaction is that i think it is somewhat unfair to compare independent learning against structured/guided learning in arbitrary situations. in my experience, independent/self-directed learners are more the exception than the norm: the average learner (as socialized in today's society, anyway) is used to and expects some direction. so it is not surprising that in this case, self-directed learning techniques result in poorer outcomes.

    what if we could compare self-identified self-directed learners with structured learning? for example, if you took someone who had been teaching themselves programming for 4 years and someone who took an undergraduate degree in computer science, how could we compare their knowledge? is it even comparable?

    my hypothesis is that the independent learners would have pockets of highly specialized knowledge that differ from person to person (even within CS), whereas the structured learners would have a more even-keeled, slightly shallower level of knowledge on average.

    in this case i guess, success of a learning practice is only meaningful with respect to the intended use or outcome. the review didn't describe the specific learning objectives, but if the above hypothesis is accurate, then it's easy to see why independent learning would be unlikely to yield strong results against any fixed set of objectives.

    i would be really interested to know if there has been any work in this vein, that provides a broader framework for learning success.

  • Jennifer Claro   May 5, 2012, 10:26 p.m.

    I found Kirschner, Sweller and Clark (2006) very interesting because of the focus on working memory and how its limits restrict how much can be learned while engaged in PBL and other minimal guidance instructional approaches. The authors show that not providing direct instruction and guidance to novice learners (anyone learning anything for the first time) is unlikely to result in much learning, as working memory becomes overloaded. Novice learners do not have the schemata that experts do. Without these schemata, novice learners cannot use their long-term memory to structure their current experience and must use their working memory solely for searching the problem space for relevant information, and the lesson is lost.

    The authors make their case strongly, citing many research studies. But for me, they downplay the expertise reversal effect. The title of the article is “Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching,” and the authors clearly point out the inadequacies of these approaches for novice and/or low-aptitude students. However, these approaches are appropriate and benefit learners more than guided instructional approaches when used at the proper time, i.e. when learners have acquired the requisite knowledge for successful inquiry.

    I'd say that rather than being failures, these instructional approaches have a time and a place in science classes and others, and that the focus should be on when these minimal guidance approaches best support student learning.

  • Rebecca Cober   May 15, 2012, 10:24 p.m.
    In Reply To:   Jennifer Claro   May 5, 2012, 10:26 p.m.

    I do wonder if inquiry-based instruction works better for experienced learners or learners who are high achieving. I know that I personally would benefit from experiential, problem-based approaches when I am comfortable with the material, but in certain areas (let's say math or physics...) I would probably benefit from a guided approach, especially if I was motivated to learn.

  • Jennifer Claro   May 15, 2012, 11:27 p.m.
    In Reply To:   Rebecca Cober   May 15, 2012, 10:24 p.m.

    I think your point about "being comfortable with" the topic is right on. If we feel comfortable with something, we can go for less guided approaches. To get to that stage, we may need a fair amount of guidance. That's one of the reasons why this article was valuable for me. It made a clear distinction between when guided and less guided (and pure discovery) approaches are appropriate.

    I liked the psychological explanation as well. The authors used not only empirical evidence but clear and interesting points regarding memory and what students do when they are faced with an unfamiliar search area with little guidance. If they use their entire working memory searching the problem space for information, they will learn little, if anything. This is very useful information! I'll be able to use it in my class with my students.

  • Jennifer Claro   May 5, 2012, 10:05 p.m.

    Short summary of Kirschner, Sweller and Clark (2006)

    The main point of Kirschner, Sweller and Clark (2006) is that minimal guidance during instruction is inferior to direct instructional guidance.

    The authors point out that working memory is very limited in duration (30 sec., unless rehearsed) and capacity (somewhere between 3 and 7 elements). Free exploration of a highly complex environment (as is found in PBL and other inquiry-based approaches) may overload working memory to the point that nothing is learned, with all working memory focused on searching the problem space for problem-relevant information. “While working memory is being used to search for problem solutions, it is not available and cannot be used to learn” (p. 77).

    Novice learners lack the proper schema necessary for integration of the new information with prior knowledge, and thus it is novice learners who benefit most from direct guidance. “Controlled experiments almost uniformly indicate that when dealing with novel information, learners should be explicitly shown what to do and how to do it” (p. 80).

    The authors point out the “expertise reversal effect” (Kalyuga, Ayres, Chandler, & Sweller, 2003), in which guided instruction is less effective for learners who have the prerequisite knowledge and are experienced in the topic area. In order for scientific inquiry to benefit students, it should be used only when students have acquired a certain understanding of and competence in the topic area. “Strong treatments benefited less able learners and weaker treatments benefited more able learners” (p. 81). As well, the way experts work in their domains is not the way that students learn in that area (Kirschner, 1991, 1992), and these must not be confused.

    The authors argue that the expertise reversal effect “emphasizes the importance of providing novices in an area with extensive guidance because they do not have sufficient knowledge in long-term memory to prevent unproductive problem-solving search. That guidance can be relaxed only with increased expertise as knowledge in long-term memory can take over from external guidance” (p. 80). The authors argue for instruction based on "the facts, laws, principles, and theories that make up a discipline’s content" (p. 84).

  • Stian Haklev   May 3, 2012, 8:19 a.m.

    Are these two articles more or less oppositional viewpoints? If so, I think that's a great idea, even though it's more work for us. Rather than posting just one article, posting two that provide different viewpoints, engage with each other, etc., might make for an even more interesting weekly discussion. Thanks Bodong.