Article Summary: Román-González et al. (2017) Computational Thinking Test

Motivation: To develop and validate a quantitative, multiple choice test of computational thinking that can be easily administered, used as both a pre-test and post-test, and used in conjunction with qualitative approaches to gain a holistic understanding of learners’ code-literacy.

Computational Thinking Test (CTt): Román-González first published work on the Computational Thinking Test (CTt) in 2015. He started with 40 items that were independent of any particular programming environment and measured computational thinking (CT) concepts identified by others in the field, primarily CSTA & ISTE (2011) and Grover & Pea (2013). After exploring the content validity of the items, the CT concepts, and the measure overall with 20 experts, he cut the measure down to 28 items covering the following concepts:

  • Basic directions and sequences (4 items)
  • For loop – repeat times (4 items)
  • While loop – repeat until (4 items)
  • If – simple conditional (4 items)
  • If/else – complex conditional (4 items)
  • While conditional (4 items)
  • Simple functions (4 items)

The measure is intended for children aged 12-14 (middle school), though it can be used with 5th-10th graders. Completing the measure takes about 45 minutes.

Validation of CTt: This paper focuses on criterion validation of the CTt against other psychometric measures of cognitive ability and problem solving. Román-González et al. argue that CT is a problem-solving skill and, therefore, should not be entirely independent of other scales used to predict problem-solving ability. The Primary Mental Abilities (PMA) battery and the RP30 problem-solving test served as correlates. The sample comprised 1,251 students in grades 5-10.

CTt scores correlated highly (r = .67) with scores on the RP30, suggesting that the two instruments measure somewhat overlapping skills. In addition, CTt scores correlated with spatial ability (r = .44) and reasoning ability (r = .44) in the PMA battery. CTt scores showed a roughly normal distribution and acceptable reliability (α = .79). Reliability was higher for older students, as is typical as cognitive development and maturity increase. Interestingly, they found that boys performed better than girls on the test and that the gap widened in later grades. They argue that these differences are likely related to the differences measured between boys and girls on spatial and reasoning skills in the PMA and RP30.
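For readers less familiar with these statistics, here is a minimal sketch (not the authors' analysis code) of how internal consistency (Cronbach's alpha) and a criterion correlation such as the CTt-RP30 Pearson's r are typically computed. All data below are random placeholders, sized roughly like the study's sample of 1,251 students and 28 items.

    # Hypothetical sketch only: placeholder data, not the study's data or code.
    import numpy as np

    def cronbach_alpha(item_scores):
        """item_scores: (n_students, n_items) array of 0/1 item responses."""
        k = item_scores.shape[1]
        item_var_sum = item_scores.var(axis=0, ddof=1).sum()
        total_var = item_scores.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_var_sum / total_var)

    rng = np.random.default_rng(0)
    ctt_items = rng.integers(0, 2, size=(1251, 28))  # placeholder 28-item CTt responses
    rp30 = rng.normal(size=1251)                     # placeholder RP30 total scores

    ctt_total = ctt_items.sum(axis=1)                # total CTt score = number of items correct
    print("Cronbach's alpha:", round(cronbach_alpha(ctt_items), 2))
    print("r(CTt, RP30):", round(np.corrcoef(ctt_total, rp30)[0, 1], 2))

Because the placeholder responses are random, this sketch will print values near zero; with real item data it reproduces the kind of α and r values reported above.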

In another paper, Román-González and colleagues explored the Big Five personality factors as potential correlates of the CTt and found scores to correlate with Openness, Conscientiousness, and Extraversion (Román-González et al., 2016).

Why this is important: Computer science education and CT research need standardized, validated measures for assessing skill across multiple studies to increase scientific rigor and take the next step toward evidence-based interventions. The CTt is a big step in the right direction. Furthermore, Román-González et al. document the validation process they used across their 2015-2017 papers so that others can replicate it. This measure is helping to advance the field and will increase its impact down the road.

 

CSTA & ISTE (2011). Operational Definition of Computational Thinking for K–12 Education. Retrieved from http://csta.acm.org/Curriculum/sub/CurrFiles/CompThinkingFlyer.pdf

Grover, S., & Pea, R. (2013). Computational thinking in K–12: A review of the state of the field. Educational Researcher, 42(1), 38-43.

Román-González, M. (2015). Computational thinking test: Design guidelines and content validation. In Proceedings of EDULEARN15 Conference, 2436-2444. doi: 10.13140/RG.2.1.4203.4329

Román-González, M., Pérez-González, J. C., Moreno-León, J., & Robles, G. (2016). Does computational thinking correlate with personality?: the non-cognitive side of computational thinking. In Proceedings of the Fourth International Conference on Technological Ecosystems for Enhancing Multiculturality (pp. 51-58). ACM.

Román-González, M., Pérez-González, J. C., & Jiménez-Fernández, C. (2017). Which cognitive abilities underlie computational thinking? Criterion validity of the Computational Thinking Test. Computers in Human Behavior, 72, 678-691. https://doi.org/10.1016/j.chb.2016.08.047

For more information about the article summary series or more article summary posts, visit the article summary series introduction.
