To review 15 years of research on self-efficacy, contrasting it with related constructs and examining its effect on academic motivation.
Overview of Self-Efficacy Theory
Self-efficacy is a person’s judgement of their ability to achieve goals or overcome obstacles. According to Bandura’s (1986) self-efficacy theory, learners develop self-efficacy through several different channels. The strongest predictor of self-efficacy is perceived performance and accomplishments. Success, especially on difficult tasks, improves self-efficacy, but receiving external assistance can negate this effect. A weaker contributor to self-efficacy is observing others succeed, especially if the person is perceived to be similar in ability. Similarly, external persuasion and encouragement, especially by role models, can boost self-efficacy temporarily, but it must be accompanied by later accomplishments on authentic tasks to be sustainable. The last source of information that students use to develop self-efficacy is physiological and emotional experiences. If students feel physically sweaty or emotionally anxious while working on problems, these experiences can translate to low self-efficacy. Alternatively, feeling excited or experiencing flow can translate to high self-efficacy.
To consider tradeoffs between learning and performance and examine instructional strategies that support both.
Kapur researches an instructional strategy called productive failure. Productive failure encourages learners to create incorrect or incomplete solutions, get stuck during problem solving, or otherwise fail to produce a right answer when they first start learning a new procedure. The underlying theory is that this strategy encourages students to try to apply their prior knowledge to the problem, recognize whether it works, and identify the new knowledge they need to complete the solution. Once learners have gone through this process of failing, they are primed to fill in the gaps in their knowledge through instruction. A critical feature of productive failure is that the failure during the problem-solving phase is followed by productive learning during instruction, called the consolidation phase.
A ton of instructors at all levels of education (including adult education) are suddenly being forced to teach through online media as a result of the pandemic. As someone who teaches online and researches online learning, I want to be helpful without being overly prescriptive and making this transition even harder. As many others have pointed out, instructors aren’t engaging in online learning as much as they are suddenly being forced to teach at a distance.
Since many universities have decided to offer summer courses online (and some are looking at the fall), we could be teaching online for a significant time. If you’d like some tips for effective online learning, I’ve compiled a list specifically for this circumstance. I put them roughly in order of importance, so if you want to tackle one each week, start from the top.
To evaluate whether explicit instruction followed by problem solving or problem solving followed by explicit instruction is more effective for later problem-solving performance, especially for procedures that are inherently complex.
Order of Problem Solving and Explicit Instruction
The debate between explicit/direct instruction and minimal instruction is longstanding in problem-solving education. Those who support direct instruction (i.e., explicitly telling the learner everything you want them to know) cite its efficiency for producing gains in problem-solving skills. Those who support minimal instruction (i.e., providing scaffolding to the learner to encourage them to construct problem-solving knowledge themselves) cite the enduring effects of building upon prior knowledge and the development of other skills throughout the process. A subgroup has decided that both types of instruction are important and now debates the order in which learners should receive them.
To propose a theory of the cognitive mechanisms responsible for the relationship between spatial skill and STEM achievement.
Spatial Skill and STEM Achievement
Decades of work show that high achievers in STEM (e.g., chemistry, physics, geology, computer science) have high spatial skill, e.g.,
- spatial visualization, like mentally rotating or folding an object
- spatial relations, like using a map to plan a route
- spatial orientation, like following a route.
Moreover, improving spatial skill through training broadly improves STEM performance. This type of broad transfer from one type of cognitive training to many types of problem-solving tasks across multiple domains is exceedingly rare. I can’t think of a real analogy for it, so I’ll give you a fake one. It’s like if memorizing the digits of pi helped you solve any problem that included a number, whether it was solving equations in math, finding proportions in art, using measurements in science, or counting beats in music. We still don’t understand the cognitive mechanisms responsible for this relationship, though, so I pulled together literature from psychology, discipline-based education research, learning sciences, and neuroscience to propose a theory. I’ll summarize the main points from each area before presenting the theory.
Preamble for those interested in how I made it through 8 years of computing education research without knowing how to program:
When I started my research career in psychology, I knew nothing about computer science. I had chosen to do my PhD with Richard Catrambone, a world-class cognitive scientist doing cool work at the intersection of cognitive psychology and educational technology. In my first month, I agreed to be a research assistant on a project between Richard and Mark Guzdial, a renowned computing education researcher, about applying educational psychology to computing education. To me at the time, Mark was just some professor, and computing *probably* had to do with computers. I still remember our first meeting when Mark asked me if I had any programming experience. I said I had worked a little bit with HTML (and not that it had been to customize my MySpace page). He gently told me that didn’t really count for what we were doing, and I tried to figure out why but couldn’t. That’s how little I knew.
So how on Earth have I conducted computing education research from that day forward? Partly with fearlessness stemming from sheer ignorance, but mostly with tons of help from people with loads of experience and knowledge about computing and teaching computing. While at Georgia Tech, I worked with Mark, Briana Morrison, and Barbara Ericson, who each have more computing education knowledge than any one person has a right to. Working with them, the most valuable perspective I had was as a novice. I could empathize with learners because I knew just as little as they did.
Briana Morrison, Adrienne Decker, and I have a couple of conference papers coming out this summer based on our work funded by NSF IUSE (1712231 and 1927906) for developing and testing subgoal labeled worked examples throughout introductory programming courses. We’re finishing up the second of three years on the project, so now is a good time to provide an update on the project and summaries of the papers.
Our goal for this project was to identify subgoals for programming procedures taught in introductory programming courses, create subgoal labeled materials that are easily adopted by other instructors, and test the efficacy of those materials for learning. During the first year of the grant, we conducted the Task Analysis by Problem Solving protocol to identify the subgoals of intro programming procedures. I’ve been holding back from taking an intro programming course for years just so that I could be a novice for this activity. For the task analysis, Briana taught me the basics of a semester’s worth of programming in Java in one week while I visited her in Omaha. If there is anyone who could teach a semester in one week, it’s Briana. After the task analysis, we used the subgoals to develop subgoal labeled worked examples and practice problems that started from the most basic problems and gradually increased in complexity.
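To give a flavor of the format (with hypothetical subgoal labels I made up for this post, not the actual labels we identified in the task analysis), a subgoal labeled worked example for a basic loop procedure might look like this:

```java
// Worked example: find the largest value in an array.
// Each comment marks a subgoal, naming the purpose of the steps beneath it.
public class FindMax {
    public static int findMax(int[] values) {
        // Subgoal: initialize the result with a value from the data
        int max = values[0];
        // Subgoal: loop through the remaining values
        for (int i = 1; i < values.length; i++) {
            // Subgoal: update the result when a larger value is found
            if (values[i] > max) {
                max = values[i];
            }
        }
        // Subgoal: return the result
        return max;
    }

    public static void main(String[] args) {
        System.out.println(findMax(new int[] {3, 9, 4, 7}));
    }
}
```

The labels group individual lines of code into meaningful chunks, so a matched practice problem can ask learners to carry out the same subgoals on a new problem rather than memorize line-by-line syntax.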
To explore how physical representations of symbolic relationships affect accuracy in symbolic reasoning.
Symbolic reasoning is a higher order thinking skill that allows us to reason about abstract structures and relationships without relying on a concrete, physical context (e.g., evaluating whether a + b * c + d = d + c * b + a). Because symbolic representations are still physically written on paper, screens, etc., Landy and Goldstone argue that they are subject to biases based on their physical representations, despite differences in physical representations being completely irrelevant to symbolic reasoning. For the equation above, most people would more quickly evaluate whether a + b*c + d = d + c*b + a, even though the space between symbols does not affect their relationship. Another example is data visualization. Despite representing the same data, different graph designs can affect how viewers interpret it.
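As a toy illustration of why spacing is irrelevant to the underlying relationship (my own sketch with arbitrary values, not from the article): a compiler, unlike the human eye, ignores whitespace entirely, so both spacings denote the same expression.

```java
public class SymbolSpacing {
    public static void main(String[] args) {
        int a = 2, b = 3, c = 5, d = 7;
        // Same expression written with "tight" and "loose" spacing:
        // multiplication binds before addition either way.
        boolean tight = (a + b*c + d) == (d + c*b + a);
        boolean loose = (a + b * c + d) == (d + c * b + a);
        System.out.println(tight && loose); // the equality holds in both spacings
    }
}
```

Human readers are not so even-handed: the tight spacing visually groups the multiplication in a way that happens to match operator precedence, which is exactly the kind of physical-representation bias Landy and Goldstone study.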
To explore the impact on performance and affect of explaining and correcting worked examples that include errors compared to practicing problem solving.
Erroneous Examples and Misconceptions
Erroneous examples, or worked-out solutions to an example problem that include at least one incorrect step, have been studied as a way to address misconceptions. Misconceptions can be hard to remedy with direct explanations. Instead, it is often more effective to allow learners to uncover the logical flaw that disputes the misconception.
For instance, a common misconception in biology is that trees grow from nutrients that they pull from the soil. If an instructor explained that trees grow by breathing in CO₂ from the air, retaining the carbon, and breathing out O₂, a biology student is likely to forget the correct explanation. Instead, if the instructor asks what trees are made out of (carbon), what a tree breathes in (CO₂), and what a tree breathes out (O₂), then the student draws the conclusion that trees grow from carbon in the air and is more likely to remember the correct explanation long term.
I was a guest editor with Briana B. Morrison for a special issue of Computer Science Education on the topic “Advancing Theory about the Novice Programmer.” We had so many high quality submissions that it ended up being a double issue of exciting, current, and theory-driven research. In our guest editorial, we give a 1-paragraph summary of each of the six articles.
- Concepts before Coding: Non-Programming Interactives to Advance Learning of Introductory Programming Concepts in Middle School by Grover, Jackiw and Lundh
- Teaching Computer Programming with PRIMM: A Sociocultural Perspective by Sentance, Waite and Kallia
- Block-based versus Text-based Programming Environments on Novice Student Learning Outcomes: A Meta-Analysis Study by Xu, Ritzhaupt, Tian and Umapathy
- A Theory of Instruction for Introductory Programming Skills by Xie, Loksa, Nelson, Davidson, Dong, Kwik, Tan, Hwa, Li and Ko
- CS1: How Will They Do? How Can We Help? A Decade of Research and Practice by Quille and Bergin
- A Systematic Literature Review of Student Engagement in Software Visualization: A Theoretical Perspective by Al-Sakkaf, Omar and Ahmad
I don’t have permission to reprint the editorial here, but it is available for free on the journal’s website.