Intro to the Learning Sciences: Research Methods for Experimental and Ecological Validity

In the learning sciences, as in any research endeavor, research methods and design are critical to the quality and validity of the knowledge produced. We need methods that represent the complete learning environment, or we risk offering incorrect explanations for our findings, and that represent it accurately, or we risk not measuring what we think we’re measuring. This topic is so important that I wrote a whole blog series about research design. This post expands upon that series with conventions specific to the learning sciences.

Validity: Tradeoffs in Experimental Control

An important dimension of research methods and design is the amount of experimental control, or how much influence the researchers have over the learning environment. This dimension ranges from high experimental control, with randomized controlled trials, to low experimental control, with non-experimental/observational studies (see my post on Non-Experimental and Experimental Designs for more information). Randomized controlled trials are often hailed as the gold standard of research because they randomly assign participants to receive a pre-made intervention or not. However, because so much of the learning environment is controlled by the researcher, these trials need a sufficiently large sample to reasonably capture the variability that learners represent. Much more realistic in education research, and often more valid for studying learners in authentic learning environments, are quasi-experimental trials, which typically use existing classes to create different groups of learners and test pre-made interventions.
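To make the contrast concrete, here is a minimal, hypothetical Python sketch of the difference between the two designs: random assignment treats each learner independently, while a quasi-experimental design inherits its groups from existing class sections. The roster, section labels, and group sizes are all illustrative, not taken from any real study.

```python
import random

# Hypothetical roster: 60 learners split across two intact class sections.
students = [{"id": i, "section": "A" if i < 30 else "B"} for i in range(60)]

def random_assignment(students, seed=42):
    """Randomized controlled trial: each learner is independently and
    randomly assigned to the intervention or control condition."""
    rng = random.Random(seed)
    return {s["id"]: rng.choice(["intervention", "control"]) for s in students}

def quasi_experimental_assignment(students):
    """Quasi-experimental design: existing class sections become the groups,
    so section-level differences can confound the comparison."""
    return {s["id"]: "intervention" if s["section"] == "A" else "control"
            for s in students}

rct_groups = random_assignment(students)
quasi_groups = quasi_experimental_assignment(students)
print(sum(g == "intervention" for g in rct_groups.values()))    # roughly half, by chance
print(sum(g == "intervention" for g in quasi_groups.values()))  # exactly one section
```

The point of the sketch is the confound it makes visible: in the quasi-experimental version, anything that differs between sections (instructor, time of day, prior preparation) travels with the condition, which is why such studies benefit from the pre-test and learner-characteristic measures described later in this post.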

Learning scientists are particularly fond of design experiments, in which researchers work within the context of an existing learning environment, like a class or camp, to iteratively design an intervention. In addition to being more realistic, like quasi-experimental trials, design experiments tend to produce more sustainable interventions because the interventions are developed based on the needs of learners and educators. Lowest in experimental control are non-experimental/observational studies. These studies do not use an intervention and instead measure learning environments as they are, which is great for ecological validity but does not support establishing a cause-and-effect relationship. None of these tools is right or wrong, and each has pros and cons. For this reason, learning scientists often use different types of studies in different stages of research.

From Observation to Theory: Stages of Learning Sciences Research

Learning sciences research often begins with observational studies that aim to understand the current learning environment: how students interact with content, tools, peers, and instructors. These studies help researchers identify challenges, opportunities, and patterns in real-world settings. This foundational stage is critical for grounding future interventions in the realities of learners’ experiences. Next, researchers move into design experiments, where they create and test initial versions of a learning intervention, such as a new curriculum, tool, or instructional strategy. These design experiments are meant to be flexible and iterative. As researchers gather data on how the intervention works in practice, they refine and re-test it, often through multiple cycles. Once the design is stable and shows promise, researchers may conduct quasi-experimental or randomized controlled trials in a variety of learning environments to test the intervention’s effectiveness, examine generalizability, and contribute to theory building. This progression, from understanding the context to designing and refining interventions to testing them across settings, reflects the learning sciences’ commitment to both authenticity and scientific rigor. For more about these stages in the context of computing education, see the Design-Based Research section of our chapter Learning Sciences for Computing Education (open access).

What to Measure in Learning Sciences Research

Choosing what to measure is a critical part of understanding learning. The goal is to capture evidence that helps explain how and why learning happens, and whether an intervention is effective. Researchers typically use a combination of established and context-specific measures to build a rich picture of the learning process. Here are some common categories of data to consider (a sketch of how they might come together in a single learner record follows the list):

  • Conventional Measures: Use widely accepted measures in your area of study to allow for comparison across studies (e.g., concept inventories, commonly used rubrics).
  • Validated Instruments: Whenever possible, use surveys, assessments, or observation protocols that have been validated for reliability and accuracy in similar contexts.
  • Learner Characteristics: Collect basic demographic and background information (e.g., age, prior experience, language background) to understand who your learners are, how these factors might influence learning, and to which populations your results might generalize.
  • Pre-Test Measures: Assess learners’ prior knowledge or skills before the intervention to establish a baseline and support claims about learning. I’ve learned that this is important even when I don’t expect learners to have prior knowledge (in this case, the pre-test can be quite short).
  • Process Data: Gather data about what happens during learning (e.g., time on task, clickstream data, student questions) to understand how outcomes are achieved.
  • Product Data: Collect the outcomes of learning activities (e.g., quizzes, interviews, project artifacts) to evaluate the end result of learning.
  • Fidelity of Implementation: If you are not the person(s) giving instruction, support, etc., measure how closely the intervention was implemented as intended (e.g., were activities skipped?).
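To illustrate, here is a minimal, hypothetical sketch of a per-learner record that combines several of the categories above: learner characteristics, a pre-test baseline, process data, product data, and a fidelity note. The field names, value ranges, and example values are illustrative assumptions, not a prescribed schema or validated instrument.

```python
from dataclasses import dataclass

@dataclass
class LearnerRecord:
    """One learner's data across the measurement categories above (illustrative)."""
    learner_id: str
    # Learner characteristics: demographics and background
    age: int
    prior_experience: str      # e.g., "none", "some", "extensive"
    # Pre-test baseline and product data (post-test outcome), as proportion correct
    pretest_score: float
    posttest_score: float
    # Process data gathered during learning
    minutes_on_task: float
    questions_asked: int
    # Fidelity of implementation, recorded by an observer
    activities_skipped: int

    def raw_gain(self) -> float:
        """Simple raw gain from pre-test to post-test."""
        return self.posttest_score - self.pretest_score

# Example usage with made-up values
record = LearnerRecord(
    learner_id="S001", age=19, prior_experience="some",
    pretest_score=0.40, posttest_score=0.75,
    minutes_on_task=120.0, questions_asked=3, activities_skipped=1,
)
print(record.raw_gain())  # prints 0.35
```

Keeping process and product data in the same record like this makes it easier to ask not just whether learners improved, but how the outcome was achieved, and the fidelity field flags cases where the intervention was not delivered as intended.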

For more information about what to measure and types of data, see my posts on Independent and Dependent Variables, Types of Data: Qualitative, Quantitative, and Mixed, Levels of Measurement, and Survey Design, Demographics, Validity, and Reliability.

Secret Tip (used by the best researchers)

A valuable habit in developing research methods is to draft the limitations section for your study before you even begin collecting data. Imagine that your findings don’t turn out as expected, and ask yourself why. What might have gone wrong? Did you miss a key variable? Was the intervention implemented inconsistently? Were your measures too narrow or not sensitive enough? By thinking through these possibilities early, you can make more intentional design choices. You might decide to add a measure that captures a potential confounding factor, improve the fidelity of implementation, or broaden your data collection to include both process and product data. Pre-writing the limitations section helps you anticipate weaknesses and proactively strengthen your study, rather than explaining them away after the fact. I describe more about this process in my post Improve Your Design Before Collecting Data.

To view more posts about learning sciences, see a list of topics on the Intro to the Learning Sciences: Series Introduction.
