
- Educational interventions in a variety of contexts have shown that students can learn the strategies professional fact checkers use to evaluate the credibility of online sources. Researchers conducting these interventions have developed new kinds of assessments—instruments that measure participants’ knowledge, behaviors, or cognitive processes—to test the effects of their interventions.
- These new kinds of assessments are necessary because the measures commonly used in misinformation research offer limited insight into participants’ reasoning. Existing measures reveal neither whether students deploy effective evaluation strategies nor whether they make common evaluative mistakes, such as judging a source by surface-level features (e.g., its top-level domain or appearance).
- In this study, we investigated what new assessments revealed about how students evaluated online sources. Rather than replicating the findings of prior intervention studies, we focused on understanding what these assessments revealed about students’ reasoning as they evaluated online information.
- The findings showed that the assessments were effective in revealing patterns in students’ reasoning as they evaluated websites. Responses pointed to common challenges students encountered when evaluating online content and showed evidence of students’ effective evaluation strategies.
- This study highlights possibilities for assessments that are both readily implemented and revealing of students’ thinking. Policymakers could use similar tasks to assess program effectiveness; researchers could adopt them as outcome measures in studies; and teachers could employ them for formative assessment of student learning.