To explore how physical representations of symbolic relationships affect accuracy in symbolic reasoning.
Symbolic reasoning is a higher-order thinking skill that allows us to reason about abstract structures and relationships without relying on a concrete, physical context (e.g., evaluating whether a + b * c + d = d + c * b + a). Because symbolic representations are still physically written on paper, screens, etc., Landy and Goldstone argue that they are subject to biases based on their physical form, even though differences in physical representation are irrelevant to the underlying symbolic relationships. For the equation above, most people would evaluate a + b*c + d = d + c*b + a more quickly than the widely spaced version, even though the spacing between symbols does not affect their relationship. Data visualization offers another example: despite representing the same data, different graph designs can affect how viewers interpret the data.
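A minimal sketch of the point, in Python: whitespace around operators is irrelevant to how the expression actually evaluates, even though it biases human readers. The specific values assigned to a, b, c, and d below are arbitrary placeholders for illustration.

```python
# Whitespace does not change parsing: multiplication binds tighter than
# addition in both spellings, so the two expressions are identical.
a, b, c, d = 2, 3, 5, 7

tight = a + b*c + d          # spacing consistent with operator precedence
loose = a  +  b * c  +  d    # spacing that fights precedence for a human reader

assert tight == loose == a + (b * c) + d
print(tight == d + c*b + a)  # True: the equality holds regardless of spacing
```

Human readers tend to verify the tightly spaced version faster because its grouping visually matches the order of operations; the interpreter, of course, sees no difference.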
To explore the impact on performance and affect of explaining and correcting worked examples that include errors compared to practicing problem solving.
Erroneous Examples and Misconceptions
Erroneous examples, or worked-out solutions to example problems that include at least one incorrect step, have been studied as a way to address misconceptions. Misconceptions can be hard to remedy with direct explanations. Instead, it is often more effective to let learners uncover the logical flaw that disputes a misconception.
For instance, a common misconception in biology is that trees grow from nutrients that they pull from the soil. If an instructor simply explained that trees grow by taking in CO2 from the air, retaining the carbon, and releasing O2, a biology student is likely to forget the correct explanation. If, instead, the instructor asks what trees are made of (carbon), what a tree breathes in (CO2), and what a tree breathes out (O2), then the student draws the conclusion that trees grow from carbon in the air and is more likely to remember the correct explanation long term.
I was a guest editor with Briana B. Morrison for a special issue of Computer Science Education on the topic “Advancing Theory about the Novice Programmer.” We had so many high quality submissions that it ended up being a double issue of exciting, current, and theory-driven research. In our guest editorial, we give a 1-paragraph summary of each of the six articles.
- Concepts before Coding: Non-Programming Interactives to Advance Learning of Introductory Programming Concepts in Middle School by Grover, Jackiw and Lundh
- Teaching Computer Programming with PRIMM: A Sociocultural Perspective by Sentance, Waite and Kallia
- Block-based versus Text-based Programming Environments on Novice Student Learning Outcomes: A Meta-Analysis Study by Xu, Ritzhaupt, Tian and Umapathy
- A Theory of Instruction for Introductory Programming Skills by Xie, Loksa, Nelson, Davidson, Dong, Kwik, Tan, Hwa, Li and Ko
- CS1: How Will They Do? How Can We Help? A Decade of Research and Practice by Quille and Bergin
- A Systematic Literature Review of Student Engagement in Software Visualization: A Theoretical Perspective by Al-Sakkaf, Omar and Ahmad
I don’t have permission to reprint the editorial here, but it is available for free on the journal’s website.
To describe the foundations of expansive learning, including but not limited to ideas from cultural-historical activity theory (CHAT), summarize 20 years of research using expansive learning as a theoretical framework, and explore future directions and challenges. I will focus on only the first of these objectives.
Theory of Expansive Learning: Classification
Expansive learning is a learning theory for circumstances in which organizations need to break the mold and radically change what they do and how they do it. Learning in this case typically means learning as professionals or members of another type of community, and it does not mean instructing students. Expansive learning spans many dimensions used to classify learning theories.
- Is the learner primarily an individual or member of a community?
- Is the learning primarily a process that transmits culture or transforms culture?
- Is the learning primarily a process of vertical improvement (get better at tasks within a pre-defined set of skills) or horizontal movement (learn tasks outside of disciplinary boundaries and hybridize different cultural contexts)?
- Is the learning primarily a process of acquiring or creating knowledge based on empiricism or of forming new knowledge based on theory?
To summarize the research on the teaching of problem solving–how people apply their knowledge to new situations, reason about scenarios for which they have incomplete or uncertain information, and solve novel problems.
Problem Solving Definitions
Problem solving: a cognitive process used to transform a given state into a goal state when a problem does not have an obvious solution; the term is often used interchangeably with thinking and reasoning. Problem solving can be academic, such as solving an unfamiliar arithmetic word problem, or non-academic, such as figuring out how to get 3/4 of 2/3 of a cup of cottage cheese.
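The cottage cheese example above has an exact answer once you recognize that "of" means multiplication for fractions; a quick sketch with Python's standard-library fractions module:

```python
from fractions import Fraction

# "3/4 of 2/3 of a cup": taking a fraction *of* a quantity multiplies them.
amount = Fraction(3, 4) * Fraction(2, 3)
print(amount)  # 1/2, i.e., half a cup
```

The problem feels non-obvious in the kitchen precisely because the solver must first translate the everyday phrasing into the arithmetic operation, which is the "transform a given state into a goal state" step in the definition.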
Types of problems (well-defined vs. ill-defined): Well-defined problems have clearly specified given (problem) states, goal (solution) states, and problem-solving spaces (i.e., the relevant information required to solve the problem and the rules/logic/operators that connect different bits of information). For example, an arithmetic problem, no matter how complex, is well-defined. In ill-defined problems, the given state, goal state, or problem-solving space might be unclear. For example, writing an essay or designing a sustainable building are ill-defined problems. The knowledge of the problem solver does not determine whether problems are well- or ill-defined.
To address a problematic “imbalance between the number of quantitative and qualitative articles published in highly ranked research journals by providing guidelines for the design, implementation, and reporting of qualitative research.” They also discuss the risks and benefits of a highly ranked research journal (Computers & Education) recommending guidelines to be used, albeit flexibly, in qualitative research.
Qualitative or Quantitative Methodology and Data
The paper starts by addressing common misconceptions about when it is appropriate to mix and match qualitative and quantitative approaches. They define qualitative methodology as hermeneutic or interpretivist and based on a belief in the validity of multiple culturally-defined interpretations of multiple realities. Therefore, qualitative methodology is incompatible with quantitative methodology, which they define as objectivist or empiricist and based on a belief in the validity of one true explanation of one objective reality. Within each of these methodologies, however, data collection methods, instruments, and analysis can be either qualitative (i.e., non-numeric) or quantitative (i.e., numeric) and mixed and matched at will. Much more detail about these concepts and their relationships can be found at Twining’s blog post that extends their very useful Table 1.
To examine the factors that make computer supported collaborative learning (CSCL) environments effective. I’ll admit that the authors refer to this paper as a single meta-analysis, but I’d argue they’ve really done three meta-analyses with subsets of the same (large) set of papers. At the very least, I hope the amount of work that the authors put in isn’t the new standard for completing a meta-analysis.
Three research questions for CSCL
The authors chose to examine the following three research questions simultaneously based on the same set of papers because the efficacy of CSCL environments involves multiple, interrelated factors and can be compared to multiple, valid control groups. CSCL researchers manipulate only a subset of these factors in each study to determine the efficacy of specific interventions. While this controlled approach is scientifically sound, it means that a single study cannot compare CSCL environments to a range of possible alternatives. Therefore, the authors simultaneously considered 356 papers that included 425 studies to determine which features of CSCL are more effective compared to which alternatives.
This week in both my personal academic community and in the larger academic community, I saw people speaking publicly about behaviors that offended them. After reading the comments on both instances, I noticed a trend in what was helpful and not helpful. Instead of my usual article summary, I wanted to write a summary of my observations.
This week on academic Twitter, we saw this tweet.
I get this comment about once a semester (or way more if you count undergrads who email me starting with, “Hey Lauren,”), and I always find it offensive. I want to extra-emphasize that this is a comment that strangers make, and it has nothing to do with my personal qualifications. Instead, it reminds me that I belong to an underrepresented group in my profession and that regardless of our qualifications, some people find it strange that young women are professors.
Provide a framework for more consistent, qualitative evaluation of student responses to open-ended questions that can be used in many disciplines to determine the degree to which a student has mastered a learning objective.
Part 1: Structure of the Observed Learning Outcome (SOLO) Taxonomy
The SOLO taxonomy was designed based on student responses to open-ended questions in many disciplines. The taxonomy has 3 dimensions:
- Capacity: the pieces of information required to produce the response, ranging from low (i.e., only the information in the question and one relevant piece of information) to high (i.e., the question, multiple pieces of relevant information, interrelations among information, and abstract principles are included in response)
- Relating operations: the relationship between the question and response, ranging from illogical (e.g., tautologies), to question-specific information only (i.e., answers the question without relating to principles or concepts), to information that generalizes beyond the specific question (i.e., relating response to abstract principles and concepts).
- Consistency and closure: the consistency between information provided and the conclusion that the student comes to, ranging from not answering the question, providing inconsistent evidence or jumping to conclusions, to consistent evidence and multiple conclusions based on relevant possible alternatives.
100 free eprints are available at this link. If the free eprints run out, please contact me, firstname.lastname@example.org.
To review the variables that computing education researchers measure and how they measure them. The particular aim of this review was to highlight areas for improving standardization in the field so that we can more easily make comparisons among projects when appropriate. The review favors quantitative data analysis (as standardization is antithetical to the goals of much qualitative data analysis) but considers the important contribution that qualitative data makes.
Measurement versus data
The first section of the paper is a short primer on often-misunderstood concepts in measurement. It is intended only for readers who never had formal measurement training or who want to check their understanding. The section explains common mistakes or questionable data analysis methods the authors have seen while reviewing, like using the split-mean method or difference/gain scores. For the purpose of a summary, I’ll focus on only the most fundamental point–measurement is not always the same as data. A researcher can use a qualitative measurement to create quantitative data, e.g., by asking students to write programs (qual measurement) and giving them numeric grades (quant data). Similarly, a researcher can measure continuous data, such as a numeric grade from 0-100, and record ordinal-level data, such as a letter grade from A to F. This difference is important because researchers need to consider the data transformations that occur after measurement to use the correct analysis tools/tests and draw valid conclusions.
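The continuous-to-ordinal transformation mentioned above can be sketched in a few lines of Python. The cut points below are hypothetical, not taken from the paper; the point is that collapsing scores into letters preserves order but discards the distances between scores, which changes what analyses remain valid.

```python
# Sketch: the same measurement (a 0-100 score) recorded as continuous
# data vs. ordinal data. Grade cut points here are hypothetical.
def to_letter(score):
    """Collapse a continuous score into an ordinal letter grade."""
    cutoffs = [(90, "A"), (80, "B"), (70, "C"), (60, "D")]
    for minimum, letter in cutoffs:
        if score >= minimum:
            return letter
    return "F"

scores = [93.5, 81.0, 67.2]               # continuous data from the measurement
letters = [to_letter(s) for s in scores]  # ordinal data: order kept, distances lost
print(letters)  # ['A', 'B', 'D']
```

Once the data are recorded as letters, a 93.5 and a 99 are indistinguishable, so statistics that assume interval-level data (like means) are no longer appropriate; that is exactly the kind of post-measurement transformation the authors want researchers to track.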