Briana Morrison, Adrienne Decker, and I have a couple of conference papers coming out this summer based on our work funded by NSF IUSE (1712231 and 1927906) for developing and testing subgoal labeled worked examples throughout introductory programming courses. We’re finishing up the second of three years on the project, so now is a good time to provide an update on the project and summaries of the papers.
Our goal for this project was to identify subgoals for programming procedures taught in introductory programming courses, create subgoal labeled materials that are easily adopted by other instructors, and test the efficacy of those materials for learning. During the first year of the grant, we used the Task Analysis by Problem Solving protocol to identify the subgoals of intro programming procedures. I’d been holding back from taking an intro programming course for years just so that I could be a novice for this activity. For the task analysis, Briana taught me the basics of a semester’s worth of programming in Java in one week while I visited her in Omaha. If there is anyone who could teach a semester in one week, it’s Briana. After the task analysis, we used the subgoals to develop subgoal labeled worked examples and practice problems that started from the most basic problems and gradually increased in complexity.
We did this work over a year ago, but we’ve been sitting on it because we wanted to pilot test our materials before we started sharing them. During the second year, we tested the materials at the University of Nebraska Omaha in their intro programming course, which provided a really nice quasi-experimental context. The five sections of the course are all planned together, so they follow the same topics at the same rate and have the same quizzes and exams. The quizzes and exams are even graded by the same instructors and TAs across sections. Everything is the same, except that two of the sections used subgoal labeled materials and the other three sections used the normal materials. The results are below in the paper summaries.
Our plans for the third year are to test the materials in intro programming courses at various institutions. We are also working on a set of subgoals and materials for Python courses, which we plan to pilot in spring 2020.
ITiCSE: Margulieux, Morrison, & Decker (2019)
Our ITiCSE paper, Design and Pilot Testing of Subgoal Labeled Worked Examples for Five Core Concepts in CS1, explains the Task Analysis by Problem Solving protocol that we used to identify the subgoals for a Java-based CS1 course. We have a full-page+ figure that includes all of the subgoals that we identified. We split each procedure into evaluating and writing tasks, and the procedures included expressions, selection statements, loops, calling and writing methods, using objects and writing classes, and arrays.
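To make the format concrete, here is a hypothetical sketch of what a subgoal labeled worked example for a loop-writing task might look like. The labels and the problem are illustrative only; they are not the actual subgoals from the paper’s figure.

```java
// Hypothetical subgoal labeled worked example (writing task).
// Problem: write a method that sums the elements of an int array.
public class SumArray {

    public static int sum(int[] values) {
        // Subgoal: initialize an accumulator before the loop
        int total = 0;
        // Subgoal: determine the loop bounds from the data structure
        for (int i = 0; i < values.length; i++) {
            // Subgoal: update the accumulator with the current element
            total += values[i];
        }
        // Subgoal: return the accumulated result
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sum(new int[] {1, 2, 3, 4})); // prints 10
    }
}
```

The idea behind the labels is that they name the function of each chunk of code, so a novice studying the example sees the structure of the solution rather than an undifferentiated sequence of statements.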
This paper also includes a brief report of quantitative data from our pilot test. We compared quiz and exam scores between the sections that used subgoal labeled materials (n = 120) and those that used unlabeled materials (n = 145). The main takeaway is that the subgoal group performed better than the control group on the quizzes (i.e., measures of initial learning) but not on the exams (i.e., measures of summative learning). This finding aligns with the subgoal learning framework, which is specifically designed to help novices recognize the structure of problem solving before they have enough knowledge to recognize it for themselves. By the time students take an exam, they should have enough knowledge to nullify the benefits of subgoal labels. The most interesting finding is that students in the subgoal group had lower variance in scores on the quizzes and exams, and they were less likely to drop out of the course, suggesting that the subgoal materials particularly help students who otherwise would have struggled in the course.
ICER: Decker, Margulieux, & Morrison (2019)
Our ICER paper, Using the SOLO Taxonomy to Understand Subgoal Labels Effect in CS1, closely examines problem solving shortly after students learn procedures, again comparing those who learned with subgoal labeled vs. unlabeled materials. On each quiz, which was given the weekend after students learned a new procedure, we included an Explain in Plain English question to measure how students approached problem solving, as many others in programming education have done. Also like others, we analyzed students’ Explain in Plain English responses using the SOLO taxonomy. The SOLO taxonomy is used to analyze responses to open-ended questions and determine how fully students have achieved learning objectives.
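For readers unfamiliar with the measure, here is a hypothetical Explain in Plain English item (not one from the study): students see a short snippet and are asked to describe, in plain English, what it does as a whole.

```java
// Hypothetical Explain in Plain English item:
// "In plain English, what does this method do?"
public class EipeItem {

    public static int mystery(int[] values) {
        int result = values[0];
        for (int i = 1; i < values.length; i++) {
            if (values[i] > result) {
                result = values[i];
            }
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(mystery(new int[] {3, 7, 2})); // prints 7
    }
}
```

Roughly speaking, a response like “it returns the largest value in the array” reflects a higher SOLO level because it integrates the whole snippet, while a line-by-line narration of the code (“it sets result to the first element, then loops…”) reflects a lower, multistructural level.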
We found that on most quizzes, students in the subgoal group explained problem solving at higher levels of the SOLO taxonomy than the control group. On half of the quizzes, the mode score was higher in the subgoal group. Even when the mode was not higher, the subgoal group had substantially more scores in the higher categories and substantially fewer scores in the lower categories than the control group. For the one quiz in which this pattern did not hold, all students performed poorly, and the Explain in Plain English question was more challenging relative to the subgoal labeled worked examples than it had been on the other quizzes (i.e., it asked about nested loops when the subgoal materials covered loops). We had hoped the subgoal group would still perform better in this case, but it seems the difficulty of the problem was too high. Overall, the subgoal group was more likely to provide high-quality explanations of problem-solving processes than the control group, and they were not just re-stating the subgoals.
Why this is important
We discovered three important things about subgoal labeled worked examples from this work. First, the subgoal labels had a consistent positive effect on quizzes across the semester. This finding suggests that they are useful when learning new procedures, even when students have gained experience with other procedures. Second, subgoal labels improved problem solving, but the benefit of subgoal labels diminished over time within a procedure. By the time students had completed multiple assignments and studied for a test, subgoal labels did not improve performance.
The last discovery is the topic of an in-progress paper. Subgoal labels did not improve exam performance, but they helped students make it to the exam. Dropout and failure rates in the courses using subgoal materials were about half of those in the courses using normal materials. Based on our non-peer-reviewed conclusions from analyses of learner characteristics, subgoal labels helped mitigate risk factors that predicted dropping out or failing in the control group. For example, across all students, both college GPA and expected difficulty of the course predicted course grades. In the control group, if we look at only students who expected the course to be difficult, GPA correlated with grades. In the subgoal group, however, it didn’t. The same is true if we look at only students with lower GPAs or various other characteristics that were risk factors. The pattern also holds if we examine learner characteristics of students who dropped out.
It’s difficult to make the argument that some students would have dropped out or failed if not for the intervention. Despite this, we have found such consistent results after analyzing the data in multiple ways that I really believe that’s what’s going on. I’m still trying to poke holes or find alternative explanations (if you have ideas, I’d be happy to test them out), but it is an explanation that is consistent with the subgoal learning framework. The framework is designed to help those who need procedures broken down into smaller pieces than experts typically recognize as necessary, so it makes sense that it would help students who might otherwise perform poorly in CS1.
Yay Briana’s library for providing us open access license funding!
Decker, A., Margulieux, L. E., & Morrison, B. B. (2019). Using the SOLO Taxonomy to understand subgoal labels effect on problem solving processes in CS1. In Proceedings of the Fifteenth Annual Conference on International Computing Education Research. New York, NY: ACM. doi: 10.1145/3291279.3339405
Margulieux, L. E., Morrison, B. B., & Decker, A. (2019). Design and pilot testing of subgoal labeled worked examples for five core concepts in CS1. In ITiCSE ’19: Innovation and Technology in Computer Science Education Proceedings. New York, NY: ACM. doi: 10.1145/3304221.3319756
For more article summaries, visit the article summary series introduction.