Motivation: To contribute to the direct instruction vs. discovery learning debate with a meta-analysis that explores the nuances of the literature.
Discovery-based learning: Discovery(-based) learning is one of those terms that is better defined by what it is not (i.e., direct or explicit instruction) than by what it is. I found Alfieri et al.’s general definition very helpful, though. They state that “discovery learning occurs whenever the learner is not provided with the target information or conceptual understanding and must find it independently and with only the provided materials” (p. 2). Others would argue that the definition should be extended to include collaborative learning, especially because it is already pretty broad. Alfieri et al. go on to distinguish between unguided and enhanced discovery learning. They further break down enhanced discovery learning into three subcategories: Continue reading
Motivation: To explore how different levels of guidance affect the impact of inquiry-based learning.
Inquiry-based learning: Lazonder and Harmsen offer a definition of inquiry-based learning, though they stipulate that there is little consensus on what factors define it. They define it as a method “in which students conduct experiments, make observations, or collect information in order to infer the principles underlying a topic or domain” (p. 682). They emphasize that students act as scientists to achieve these goals. The article offers a comprehensive review of the seminal and recent work done on inquiry-based learning. Continue reading
Motivation: To develop and validate a quantitative, multiple choice test of computational thinking that can be easily administered, used as both a pre-test and post-test, and used in conjunction with qualitative approaches to gain a holistic understanding of learners’ code-literacy.
Computational Thinking Test (CTt): Román-González first published about the Computational Thinking Test (CTt) in 2015. He started with 40 items that were independent of any programming environment and measured computational thinking (CT) concepts identified by a number of people in the field, primarily CSTA & ISTE (2011) and Grover & Pea (2013). After exploring the content validity of the items, the CT concepts, and the measure overall with 20 experts, he cut the measure down to 28 items on the following concepts: Continue reading
Motivation: To present the history of computational thinking so that researchers do not repeat past mistakes or re-solve problems that have already been addressed. To identify potential threats to widespread implementation of CT in K-12 education.
Computational Thinking (CT): Computational thinking (CT) has a long past dating back to nearly the beginning of the computing field, and the definition of CT has evolved along with computing in general. The current conception of CT is a set of skills related to computing but useful beyond computing. This conception originated from Wing’s (2006) paper, which argued that CT should be as fundamental to education as reading, writing, and arithmetic. While many people agree with her argument, few agree on what should be included in that set of skills. Continue reading
Motivation: To summarize research that examines providing feedback to students through educational technology and to identify factors that impact its efficacy.
Characteristics of Feedback that Affect Efficacy:
- The most effective type of feedback in general is feedback that explains why an answer is correct or incorrect. If you cannot provide that level of detail, feedback should at least say what the correct answer is rather than only whether the student is right or wrong.
- Feedback is most effective when it’s given throughout a learning task rather than at the end of it, but giving feedback too often can also hurt learning.
- When students are new to a task or working on a particularly hard task, giving feedback through a human avatar can hinder their performance (due to the social facilitation effect).
Motivation: Explore the trade-offs in learning efficacy between completing fewer problems with guidance from a tutored problem solving system compared to seeing more worked problems without guidance.
Tutored problem solving: Computer systems can tutor students who are solving problems by providing them with hints and feedback at each step of the problem solving process. These kinds of systems, such as intelligent tutoring systems, generally improve problem solving performance. Using tutoring systems, however, is time consuming because they require students to attend to each step of the problem in depth, even if the student is not struggling with that step. Continue reading
Motivation: Explore the effects of learners’ belief that feedback is correct on their knowledge and misconception revision.
Knowledge revision: People are notoriously bad at correcting their misconceptions, and it’s not their fault: revision is difficult for the same reason that learning is hard. Processing information that doesn’t readily fit into our current knowledge structures is effortful. In addition, we are bombarded every minute with new information, and our brains have to pick which pieces of information to process and which ones to ignore. For example, if you closed your eyes right now, how much of your visual field could you recall? You can only focus on a few things at a time, forcing your brain to ignore the rest. Continue reading