
Critically, much of this research, whether correlational or experimental, has presented the CRT in its original form, an open-ended answer format, so that participants have had to construct their responses themselves (e.g., De Neys, Rossi, & Houde).

In the latter approach, the equivalence between the open-ended and multiple-choice versions of the test has been implicitly assumed or explicitly claimed, and similar cognitive processes have been inferred from the two types of tests.

Another consequence could be that such a test would have a weaker association with numeracy than the open-ended CRT does (Liberali et al.).

In other words, construct nonequivalence would imply that different cognitive processes take place in the different formats of the CRT.

We found no significant differences between the three response formats in the number of correct responses, in the number of intuitive responses (except that the two-option version elicited more intuitive responses than the other versions), or in the correlational patterns of the indicators of predictive and construct validity.

All three test versions were similarly reliable, but the multiple-choice formats were completed more quickly.

Specifically, we pursued three main aims to fill the gaps outlined above.

First, we tested whether the CRT response format affects performance in the test, both in terms of reflectiveness score (i.e., correct responses) and intuitiveness score (i.e., appealing but incorrect responses).
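The two scores described above can be illustrated with a minimal scoring sketch. This is not the authors' scoring procedure; the item names and answer keys below are the well-known three-item CRT answers, used here purely for illustration.

```python
# Illustrative scoring sketch for the three-item CRT: reflectiveness
# counts correct responses, intuitiveness counts the appealing but
# incorrect responses. Item names are hypothetical labels.
CRT_KEY = {
    "bat_and_ball": {"correct": "5", "intuitive": "10"},    # cents
    "widgets": {"correct": "5", "intuitive": "100"},        # minutes
    "lily_pads": {"correct": "47", "intuitive": "24"},      # days
}

def score_crt(responses: dict) -> dict:
    """Return reflectiveness and intuitiveness scores for one participant."""
    reflectiveness = sum(
        responses.get(item) == key["correct"] for item, key in CRT_KEY.items()
    )
    intuitiveness = sum(
        responses.get(item) == key["intuitive"] for item, key in CRT_KEY.items()
    )
    return {"reflectiveness": reflectiveness, "intuitiveness": intuitiveness}
```

Note that the two scores are not complementary: a participant who gives an unclassifiable answer (e.g., "12 cents") adds to neither score.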

In addition, even if people were equally likely to detect a conflict in the two different response formats and to engage in reflective thinking afterward, they might still fail to correct their initial intuition owing to a lack of mathematical knowledge (Pennycook, Fugelsang, & Koehler).

Indeed, if such equivalence has been achieved, then using a validated multiple-choice version of the CRT would be more convenient, since such a version would most likely be quicker to administer and to code than the open-ended CRT. Furthermore, an automatic coding scheme would eliminate any potential coding ambivalence: for instance, whether "0.05 cents," a formally incorrect answer to the bat-and-ball problem, should count as incorrect or correct, on the assumption that participants mistook the unit in the answer for dollars instead of cents, that is, "0.05 dollars."

There are also several good empirical and theoretical reasons to expect differences according to the response formats of the Cognitive Reflection Test. According to several dual-process theories, cognitive conflict triggers deeper cognitive processing (De Neys). Thus, a multiple-choice version of the CRT might be easier because the explicit options would trigger cognitive conflict with higher likelihood, which, in turn, would promote engagement in cognitive reflection and thereby strengthen the association with the benchmark variables usually linked to the cognitive reflection the CRT is assumed to measure (e.g., belief bias, paranormal beliefs, denominator neglect, and actively open-minded thinking; Pennycook et al.).
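The coding ambivalence discussed above can be made concrete with a sketch of an automatic coding rule for the bat-and-ball item. This is a hypothetical scheme, not the authors' coding procedure; the unit-inference rules and the lenient treatment of "0.05 cents" are assumptions for illustration.

```python
import re

def code_bat_and_ball(response: str, lenient: bool = True) -> str:
    """Classify a free-text bat-and-ball answer as 'correct' (5 cents),
    'intuitive' (10 cents), or 'other'. Hypothetical coding scheme."""
    text = response.lower().strip()
    match = re.search(r"(\d+(?:\.\d+)?)", text)
    if match is None:
        return "other"
    value = float(match.group(1))
    if "cent" in text:
        cents = value
        # Lenient rule: read "0.05 cents" as a unit slip for 5 cents.
        if lenient and cents == 0.05:
            cents = 5.0
    elif "dollar" in text or "$" in text:
        cents = value * 100
    else:
        # Bare number: assume cents if >= 1, otherwise dollars.
        cents = value if value >= 1 else value * 100
    if cents == 5:
        return "correct"
    if cents == 10:
        return "intuitive"
    return "other"
```

The `lenient` flag makes the coding decision explicit and reproducible: with `lenient=False`, "0.05 cents" is coded strictly as an incorrect ("other") response.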
