A ton of instructors at all levels of education (including adult education) are suddenly being forced to teach through online media as a result of the pandemic. As someone who teaches online and researches online learning, I want to be helpful without being overly prescriptive and making this transition even harder. As many others have pointed out, instructors aren’t engaging in online learning so much as they are suddenly being forced to teach at a distance.
Since many universities have decided to offer summer courses online (and some are looking at the fall), we could be teaching online for a significant time. If you’d like some tips for effective online learning, I’ve compiled a list specifically for this circumstance. I put them roughly in order of importance, so if you want to tackle one each week, start from the top.
To evaluate whether explicit instruction followed by problem solving or problem solving followed by explicit instruction is more effective for later problem-solving performance, especially for procedures that are inherently complex.
Order of Problem Solving and Explicit Instruction
The debate between explicit/direct instruction and minimal instruction is longstanding in problem-solving education. Those who support direct instruction (i.e., explicitly telling the learner everything you want them to know) cite its efficiency for producing gains in problem-solving skills. Those who support minimal instruction (i.e., providing scaffolding to the learner to encourage them to construct problem-solving knowledge themselves) cite the enduring effects of building upon prior knowledge and development of other skills throughout the process. A subgroup has decided both types of instruction are important and now debates the order in which learners should receive them.
To propose a theory of the cognitive mechanisms responsible for the relationship between spatial skill and STEM achievement.
Spatial Skill and STEM Achievement
Decades of work show that high achievers in STEM fields (e.g., chemistry, physics, geology, computer science) have high spatial skill, including
- spatial visualization, like mentally rotating or folding an object
- spatial relations, like using a map to plan a route
- spatial orientation, like following a route.
Moreover, improving spatial skill through training broadly improves STEM performance. This type of broad transfer from one type of cognitive training to many types of problem-solving tasks across multiple domains is exceedingly rare. I can’t think of a real analogy for it, so I’ll give you a fake one. It’s like if memorizing the digits of pi helped you solve any problem that included a number, whether it was in solving equations in math, finding proportions in art, using measurements in science, or counting beats in music. We still don’t understand the cognitive mechanisms responsible for this relationship, though, so I pulled together literature from psychology, discipline-based education research, learning sciences, and neuroscience to propose a theory. I’ll summarize the main points from each area before presenting the theory.
Preamble for those interested in how I made it through 8 years of computing education research without knowing how to program:
When I started my research career in psychology, I knew nothing about computer science. I had chosen to do my PhD with Richard Catrambone, a world-class cognitive scientist doing cool work at the intersection of cognitive psychology and educational technology. In my first month, I agreed to be a research assistant on a project about applying educational psychology to computing education between Richard and Mark Guzdial, a renowned computing education researcher. To me at the time, Mark was just some professor, and computing *probably* had to do with computers. I still remember our first meeting when Mark asked me if I had any programming experience. I said I had worked a little bit with HTML (and not that it had been to customize my MySpace page). He gently told me that didn’t really count for what we were doing, and I tried to figure out why but couldn’t. That’s how little I knew.
So how on Earth have I conducted computing education research from that day forward? Partly with fearlessness stemming from sheer ignorance, but mostly with tons of help from people with loads of experience and knowledge about computing and teaching computing. While at Georgia Tech, I worked with Mark, Briana Morrison, and Barbara Ericson, who each have more computing education knowledge than any one person has a right to. Working with them, the most valuable perspective I had was as a novice. I could empathize with learners because I knew just as little as they did.
Briana Morrison, Adrienne Decker, and I have a couple of conference papers coming out this summer based on our work funded by NSF IUSE (1712231 and 1927906) for developing and testing subgoal labeled worked examples throughout introductory programming courses. We’re finishing up the second of three years on the project, so now is a good time to provide an update on the project and summaries of the papers.
Our goal for this project was to identify subgoals for programming procedures taught in introductory programming courses, create subgoal labeled materials that are easily adopted by other instructors, and test the efficacy of those materials for learning. During the first year of the grant, we conducted the Task Analysis by Problem Solving protocol to identify the subgoals of intro programming procedures. I’ve been holding back from taking an intro programming course for years just so that I could be a novice for this activity. For the task analysis, Briana taught me the basics of a semester’s worth of programming in Java in one week while I visited her in Omaha. If there is anyone who could teach a semester in one week, it’s Briana. After the task analysis, we used the subgoals to develop subgoal labeled worked examples and practice problems that started from the most basic problems and gradually increased in complexity.
To explore how physical representations of symbolic relationships affect accuracy in symbolic reasoning.
Symbolic reasoning is a higher order thinking skill that allows us to reason about abstract structures and relationships without relying on a concrete, physical context (e.g., evaluating whether a + b * c + d = d + c * b + a). Because symbolic representations are still physically written on paper, screens, etc., Landy and Goldstone argue that they are subject to biases based on their physical representations, despite differences in physical representations being completely irrelevant to symbolic reasoning. For the equation above, most people would more quickly evaluate whether a + b*c + d = d + c*b + a, because the narrower spacing around the multiplication matches the order of operations, even though the space between symbols does not affect their relationship. Another example is data visualization. Despite representing the same data, different graph designs can affect how viewers interpret data.
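As a quick sanity check on the algebra (a sketch in Python with arbitrarily chosen values, not something from the paper): a machine parser, unlike a human reader, ignores spacing entirely, so both spacings denote exactly the same expression, and the equation holds for any numbers because addition and multiplication are commutative.

```python
# Arbitrarily chosen values for the four symbols.
a, b, c, d = 2, 3, 5, 7

lhs = a + b*c + d    # tight spacing around the multiplication
rhs = d + c * b + a  # uniform spacing

# Commutativity of + and * makes the two sides equal...
assert lhs == rhs

# ...and the parser treats both spacings as the same expression.
assert eval("a+b*c+d") == eval("a + b * c + d")

print(lhs, rhs)  # both evaluate to 24 for these values
```

The point of the paper, of course, is that human readers are not like the parser: the physical layout nudges how we group the symbols, even though it carries no information about their relationships.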
To explore the impact on performance and affect of explaining and correcting worked examples that include errors compared to practicing problem solving.
Erroneous Examples and Misconceptions
Erroneous examples, or worked-out solutions to an example problem that include at least one incorrect step, have been studied as a way to address misconceptions. Misconceptions can be hard to remedy with direct explanations. Instead, it is often more effective to allow the learners to uncover the logical flaw that disputes a misconception.
For instance, a common misconception in biology is that trees grow from nutrients that they pull from the soil. If an instructor explained that trees grow by breathing in CO₂ from the air, retaining the carbon, and breathing out O₂, a biology student is likely to forget the correct explanation. Instead, if the instructor asks what trees are made out of (carbon), what a tree breathes in (CO₂), and what a tree breathes out (O₂), then the student draws the conclusion that trees grow from carbon in the air and is more likely to remember the correct explanation long term.