To reflect on three decades of evolving research about fixed vs. growth mindsets, applications and misapplications in education, and interventions that encourage students to challenge themselves.
Fixed vs. Growth Mindsets
Carol Dweck describes fixed vs. growth mindsets as a theory of people’s beliefs about human attributes and how those beliefs affect motivation and achievement. Education researchers primarily use this theory to explain learners’ responses to setbacks, challenges, and failures while developing new knowledge and skills. People with a fixed mindset believe that abilities are unchanging and that initial proficiency in an area reflects inherent ability in that area. Thus, when faced with setbacks, they conclude that they are not suited to the task. Conversely, people with a growth mindset believe that abilities are malleable and that proficiency in an area can improve regardless of the starting point. Thus, they view challenges as opportunities to develop and improve skills, including skills for which they have a natural proficiency.
Mindsets also apply to beliefs about human attributes outside of educational settings, including beliefs about skill in professional settings and about personality. Whether you believe a leader is born or made depends on your mindset. Correlational work exploring the relationship between mindset and achievement finds that people with a growth mindset tend to achieve more in school and throughout their lives.
To review two decades of research on refutation texts in science education and determine the factors that make them more or less effective.
Refutation texts are a direct-instruction approach to addressing misconceptions. They are popular in science education because, as people interact with the physical world, they develop misconceptions about how it works. When they are faced with facts that contradict this prior knowledge, they can take one of three paths according to Posner et al.’s (1982) model of conceptual change:
- The least useful path: ignore the new information because it doesn’t fit into existing knowledge structures and thus doesn’t make sense (this is not an entirely voluntary process)
- The most common path: develop a separate knowledge structure disconnected from the existing knowledge structure for the new information (and perhaps not realize that they are in conflict)
- The most useful but least common path: reorganize existing knowledge structures to incorporate new information (i.e., conceptual change)
Achieving conceptual change is hard work, and that’s why misconceptions are so difficult to remedy. The need to reorganize prior knowledge structures is why direct-instruction approaches, which are inherently not responsive to individual students’ prior knowledge, are often not productive. For an example, see my article summary on erroneous examples. However, refutation texts have consistently been more effective at addressing misconceptions in science education compared to expository texts, which give correct explanations only. This paper discusses how.
To review 15 years of research on self-efficacy to contrast it with related constructs and examine its effect on academic motivation.
Overview of Self-Efficacy Theory
Self-efficacy is a person’s judgement of their ability to achieve goals or overcome obstacles. According to Bandura’s (1986) self-efficacy theory, learners develop self-efficacy through several different channels. The strongest predictor of self-efficacy is perceived performance and accomplishments. Success, especially on difficult tasks, improves self-efficacy, but receiving external assistance can negate this effect. A weaker contributor to self-efficacy is observing others succeed, especially if the person is perceived to be similar in ability. Similarly, external persuasion and encouragement, especially by role models, can boost self-efficacy temporarily, but it must be accompanied by later accomplishments on authentic tasks to be sustainable. The last source of information that students use to develop self-efficacy is physiological and emotional experiences. If students feel physically sweaty or emotionally anxious while working on problems, these experiences can translate to low self-efficacy. Alternatively, feeling excited or experiencing flow can translate to high self-efficacy.
To consider tradeoffs between learning and performance and examine instructional strategies that support both.
Kapur researches an instructional strategy called productive failure. Productive failure encourages learners to create incorrect or incomplete solutions, get stuck during problem solving, or otherwise fail to produce a right answer when they first start learning a new procedure. The underlying theory is that this strategy encourages students to try to apply their prior knowledge to the problem, recognize whether it works, and identify the new knowledge they need to complete the solution. Once learners have gone through this process of failing, they are primed to fill in the gaps in their knowledge through instruction. A critical feature of productive failure is that the failure during the problem-solving phase is followed by productive learning during instruction, called the consolidation phase.
A ton of instructors at all levels of education (including adult education) are suddenly being forced to teach through online media as a result of the pandemic. As someone who teaches online and researches online learning, I want to be helpful without being overly prescriptive and making this transition even harder. As many others have pointed out, instructors aren’t engaging in online learning as much as they are suddenly being forced to teach at a distance.
Since many universities have decided to offer summer courses online (and some are looking at the fall), we could be teaching online for a significant time. If you’d like some tips for effective online learning, I’ve compiled a list specifically for this circumstance. I put them roughly in order of importance, so if you want to tackle one each week, start from the top.
To evaluate whether explicit instruction followed by problem solving or problem solving followed by explicit instruction is more effective for later problem-solving performance, especially for procedures that are inherently complex.
Order of Problem Solving and Explicit Instruction
The debate between explicit/direct instruction and minimal instruction is longstanding in problem-solving education. Those who support direct instruction (i.e., explicitly telling the learner everything you want them to know) cite its efficiency for producing gains in problem-solving skills. Those who support minimal instruction (i.e., providing scaffolding to encourage learners to construct problem-solving knowledge themselves) cite the enduring effects of building upon prior knowledge and the development of other skills along the way. A subgroup has decided that both types of instruction are important and now debates the order in which learners should receive them.
To propose a theory of the cognitive mechanisms responsible for the relationship between spatial skill and STEM achievement.
Spatial Skill and STEM Achievement
Decades of work show that high achievers in STEM fields, e.g., chemistry, physics, geology, and computer science, also have high spatial skill, e.g.,
- spatial visualization, like mentally rotating or folding an object
- spatial relations, like using a map to plan a route
- spatial orientation, like following a route.
Moreover, improving spatial skill through training broadly improves STEM performance. This type of broad transfer from one type of cognitive training to many types of problem-solving tasks across multiple domains is exceedingly rare. I can’t think of a real analogy for it, so I’ll give you a fake one. It’s like if memorizing the digits of pi helped you solve any problem that included a number, whether it was solving equations in math, finding proportions in art, using measurements in science, or counting beats in music. We still don’t understand the cognitive mechanisms responsible for this relationship, though, so I pulled together literature from psychology, discipline-based education research, learning sciences, and neuroscience to propose a theory. I’ll summarize the main points from each area before presenting the theory.
Preamble for those interested in how I made it through 8 years of computing education research without knowing how to program:
When I started my research career in psychology, I knew nothing about computer science. I had chosen to do my PhD with Richard Catrambone, a world-class cognitive scientist doing cool work at the intersection of cognitive psychology and educational technology. In my first month, I agreed to be a research assistant on a project between Richard and Mark Guzdial, a renowned computing education researcher, about applying educational psychology to computing education. To me at the time, Mark was just some professor, and computing *probably* had to do with computers. I still remember our first meeting when Mark asked me if I had any programming experience. I said I had worked a little bit with HTML (and not that it had been to customize my MySpace page). He gently told me that didn’t really count for what we were doing, and I tried to figure out why but couldn’t. That’s how little I knew.
So how on Earth have I conducted computing education research from that day forward? Partly with fearlessness stemming from sheer ignorance, but mostly with tons of help from people with loads of experience and knowledge about computing and teaching computing. While at Georgia Tech, I worked with Mark, Briana Morrison, and Barbara Ericson, who each have more computing education knowledge than any one person has a right to. Working with them, the most valuable perspective I had was as a novice. I could empathize with learners because I knew just as little as they did.
Briana Morrison, Adrienne Decker, and I have a couple of conference papers coming out this summer based on our work funded by NSF IUSE (1712231 and 1927906) for developing and testing subgoal labeled worked examples throughout introductory programming courses. We’re finishing up the second of three years on the project, so now is a good time to provide an update on the project and summaries of the papers.
Our goal for this project was to identify subgoals for programming procedures taught in introductory programming courses, create subgoal labeled materials that are easily adopted by other instructors, and test the efficacy of those materials for learning. During the first year of the grant, we conducted the Task Analysis by Problem Solving protocol to identify the subgoals of intro programming procedures. I’ve been holding back from taking an intro programming course for years just so that I could be a novice for this activity. For the task analysis, Briana taught me the basics of a semester’s worth of programming in Java in one week while I visited her in Omaha. If there is anyone who could teach a semester in one week, it’s Briana. After the task analysis, we used the subgoals to develop subgoal labeled worked examples and practice problems that started from the most basic problems and gradually increased in complexity.
To explore how physical representations of symbolic relationships affect accuracy in symbolic reasoning.
Symbolic reasoning is a higher-order thinking skill that allows us to reason about abstract structures and relationships without relying on a concrete, physical context (e.g., evaluating whether a + b * c + d = d + c * b + a). Because symbolic representations are still physically written on paper, screens, etc., Landy and Goldstone argue that they are subject to biases based on their physical representations, even though differences in physical representation are completely irrelevant to symbolic reasoning. For the equation above, most people evaluate it more quickly when it is written as a + b*c + d = d + c*b + a, with tighter spacing around the higher-precedence multiplications, even though the spacing between symbols does not affect their relationship. Another example is data visualization: despite representing the same data, different graph designs can affect how viewers interpret the data.
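As an aside, the equality in the example is easy to verify mechanically. Here is a minimal Python sketch (the sample value range is my own choice) checking that the two sides agree for many integer combinations, regardless of how the symbols are spaced on the page:

```python
import itertools

# a + b*c + d and d + c*b + a denote the same quantity; spacing around
# the operators changes appearance, not value. Check over sample integers.
def sides_agree(a, b, c, d):
    return a + b * c + d == d + c*b + a  # both spacings, same result

all_agree = all(sides_agree(a, b, c, d)
                for a, b, c, d in itertools.product(range(-3, 4), repeat=4))
print(all_agree)  # True, by commutativity of + and *
```

Of course, the psychological point is precisely that human readers, unlike the interpreter, are influenced by the visual layout.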