Motivation
Provide a framework for more consistent, qualitative evaluation of student responses to open-ended questions that can be used in many disciplines to determine the degree to which a student has mastered a learning objective.
Part 1: Structure of the Observed Learning Outcome (SOLO) Taxonomy
The SOLO taxonomy was designed based on student responses to open-ended questions in many disciplines. The taxonomy has 3 dimensions:
- Capacity: the pieces of information required to produce the response, ranging from low (i.e., only the information in the question and one relevant piece of information) to high (i.e., the question, multiple pieces of relevant information, interrelations among the information, and abstract principles are all included in the response)
- Relating operations: the relationship between the question and response, ranging from illogical (e.g., tautologies), to question-specific information only (i.e., answers the question without relating to principles or concepts), to information that generalizes beyond the specific question (i.e., relating the response to abstract principles and concepts).
- Consistency and closure: the consistency between the information provided and the conclusion the student reaches, ranging from not answering the question, to providing inconsistent evidence or jumping to conclusions, to providing consistent evidence and multiple conclusions based on relevant possible alternatives.
Using the 3 dimensions, Biggs and Collis defined 5 levels of structural complexity (prestructural, unistructural, multistructural, relational, and extended abstract; see image below), which can be used to determine how well students learned an objective. The structural complexity progresses from concrete to abstract, from responses based on a single dimension to multiple dimensions (represented with green, vertical lines in the image), from inconsistent to consistent, and toward using increasing amounts of related knowledge and principles (represented with gray, connecting lines in the image). Based on their analysis of student responses, complexity is typically at the same level across the 3 dimensions. For example, a prestructural response will typically match the prestructural criteria in 1) capacity, 2) relating operations, and 3) consistency and closure. Occasionally a transitional answer will fall between two adjacent levels.
By Doug Belshaw – Own work, CC0, https://commons.wikimedia.org/w/index.php?curid=60807631
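To make the levels concrete, here is a minimal Python sketch that encodes the five levels and maps a few rater-recorded features onto them. The feature names and thresholds are my own illustrative assumptions, not criteria from Biggs and Collis; an actual SOLO rating is a holistic, qualitative judgment across all three dimensions.

```python
from enum import IntEnum

class SoloLevel(IntEnum):
    """The five SOLO levels, ordered from least to most complex."""
    PRESTRUCTURAL = 1
    UNISTRUCTURAL = 2
    MULTISTRUCTURAL = 3
    RELATIONAL = 4
    EXTENDED_ABSTRACT = 5

def rate_response(n_relevant_pieces: int,
                  relates_pieces: bool,
                  uses_abstract_principles: bool) -> SoloLevel:
    """Map a few hypothetical rater-recorded features onto a SOLO level."""
    if n_relevant_pieces == 0:
        return SoloLevel.PRESTRUCTURAL      # nothing beyond the question itself
    if n_relevant_pieces == 1:
        return SoloLevel.UNISTRUCTURAL      # one relevant piece of information
    if not relates_pieces:
        return SoloLevel.MULTISTRUCTURAL    # several pieces, left unconnected
    if not uses_abstract_principles:
        return SoloLevel.RELATIONAL         # pieces interrelated into a whole
    return SoloLevel.EXTENDED_ABSTRACT      # generalizes to abstract principles

# Example: three related facts but no general principle -> RELATIONAL
print(rate_response(3, relates_pieces=True, uses_abstract_principles=False))
```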
Part 2: Applying SOLO to Various Subjects
The academic subjects covered in this section of the book are history, elementary mathematics, English, geography, and modern languages. Biggs and Collis note that these chapters will be most useful to teachers of those subjects.
Part 3: Using SOLO for Teaching and Research
From an instructional design perspective, SOLO requires a list of the components that a task requires so that raters can make judgments about the information, principles, and dimensions that should be included in a response and the relationships among them. This list can be acquired through task analysis, as in the sketch below.
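As a rough illustration only, a component list from a task analysis might be represented something like this. The example question, component lists, and naive keyword check are hypothetical, and an actual SOLO judgment is qualitative rather than string matching.

```python
# A hypothetical component list produced by a task analysis for one
# open-ended question. The content is invented for illustration.
task_components = {
    "question": "Why do some bird species migrate?",
    "relevant_information": ["food availability", "temperature", "daylight"],
    "abstract_principles": ["natural selection", "energy budget"],
}

def components_mentioned(response: str, components: list[str]) -> list[str]:
    """Return which listed components a response mentions verbatim."""
    text = response.lower()
    return [c for c in components if c in text]

response = ("Birds migrate because food availability drops in winter, "
            "a pattern natural selection would favor.")
print(components_mentioned(response, task_components["relevant_information"]))
# -> ['food availability']
print(components_mentioned(response, task_components["abstract_principles"]))
# -> ['natural selection']
```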
From an evaluation perspective (for either teaching or research), SOLO was originally designed for qualitative evaluation of student responses using criteria that are consistent regardless of content matter. It can also be considered when gauging depth of instruction or feedback and when writing questions that prompt high-quality responses.
Part 4: Methodological and Theoretical Considerations
I won’t spoil the details provided in the book for the methods and stats nerds, but the general takeaways from the methodological considerations are:
- The process of aligning the 3 dimensions to define the 5 levels was based on authentic student responses and rigorous statistics.
- Interrater reliability is generally acceptable when using the taxonomy.
- SOLO ratings correlate more strongly with composite scores of student achievement than teachers’ subjective scores do. In addition, SOLO ratings inter-correlate more highly than teachers’ subjective scores do, suggesting that SOLO ratings are more reliable. It’s unclear how this relates to grading with a rubric.
- SOLO ratings also have convergent and discriminant construct validity with measures of cognitive ability, achievement, motivation, and learning strategies.
It is interesting to note that the taxonomy was originally based on Piagetian developmental stages, and researchers originally expected response complexity levels to increase with age. Instead, they found that developmental stage sets a ceiling on complexity (e.g., a 6-year-old cannot produce an extended abstract response), but age or developmental stage did not otherwise predict response quality.
Why this is important
I became interested in the SOLO taxonomy because I kept seeing it pop up as a framework for evaluating student responses (Lister et al., 2006) and constructive instruction (Meerbaum-Salant et al., 2013) and for making comparisons across domains (Brabrand & Dahl, 2007) when reading computing education literature. After reading more about it, I realized how useful it is for multiple domains, especially interdisciplinary work that requires evaluating student responses. I particularly like that the levels are based on rigorous analysis, and I plan to use it in my research immediately. In addition, the SOLO taxonomy seems about as mainstream as Bloom’s taxonomy, especially because it is a useful framework for instructional design, learning outcome evaluation, and teaching evaluation.
Biggs, J. B., & Collis, K. F. (1982). Evaluating the Quality of Learning: The SOLO Taxonomy (Structure of the Observed Learning Outcome). Academic Press.
Brabrand, C., & Dahl, B. (2007, November). Constructive alignment and the SOLO taxonomy: A comparative study of university competences in computer science vs. mathematics. In Proceedings of the Seventh Baltic Sea Conference on Computing Education Research (Vol. 88, pp. 3-17). Australian Computer Society, Inc.
Lister, R., Simon, B., Thompson, E., Whalley, J. L., & Prasad, C. (2006). Not seeing the forest for the trees: novice programmers and the SOLO taxonomy. ACM SIGCSE Bulletin, 38(3), 118-122.
Meerbaum-Salant, O., Armoni, M., & Ben-Ari, M. (2013). Learning computer science concepts with scratch. Computer Science Education, 23(3), 239-264.
For more information about the article summary series or more article summary posts, visit the article summary series introduction.