“The arc of AI and this technology is profound. We will look back to the year 2023 and we will talk about it like we did about Sputnik, the moon landing, or the birth of the internet. It might be more important than all of those moments.” — Alex Kotran, CEO, The AI Education Project (AiEDU), speaking to state educational agency leaders at the recent “Artificial Intelligence — Opportunities and Risks in Education” webinar series
Education today is shaped by exposure to live insights from around the world, an exchange fuelled by technology and Artificial Intelligence. Technology connects us to every corner of the globe, transcending geographical barriers, while AI powers this virtual world. The Oxford Learner’s Dictionary defines Artificial Intelligence as the “study and development of computer systems that can copy intelligent human behavior”. Much discussed in the 20th century, AI has set the stage to dominate the 21st in unimaginable ways.
“While K-12 academic learning influences multiple dimensions of life success, concerns about declining achievement among 9- and 13-year-old students in key subjects like math and reading—as reported by the National Assessment of Educational Progress, or NAEP—have led to questions about the long-term challenges facing today’s young people.” In response, the recently launched federal student achievement policy agenda has emphasized three ways to accelerate learning:
Increasing student attendance;
Providing high-dosage tutoring; and
Boosting summer and afterschool learning time.
Educational interventions in a variety of contexts have shown that students can learn the strategies professional fact checkers use to evaluate the credibility of online sources. Researchers conducting these interventions have developed new kinds of assessments—instruments that measure participants’ knowledge, behaviors, or cognitive processes—to test the effects of their interventions.
These new kinds of assessments are necessary because the assessments commonly used to measure outcomes in misinformation research offer limited insight into participants’ reasoning. Extant measures do not reveal whether students deploy effective evaluation strategies, nor do they capture whether students make common evaluative mistakes, such as judging a site by surface-level features (e.g., its top-level domain or appearance).
In this study, we investigated what these new assessments revealed about how students evaluated online sources. Rather than replicating the findings of prior intervention studies, we focused on understanding what the assessments revealed about students’ reasoning as they evaluated online information.
The findings showed that the assessments were effective in revealing patterns in students’ reasoning as they evaluated websites. Responses pointed to common challenges students encountered when evaluating online content and showed evidence of students’ effective evaluation strategies.
This study highlights possibilities for assessments that can both be readily implemented and provide insight into students’ thinking. Policymakers could use similar tasks to assess program effectiveness; researchers could use them as outcome measures in studies; and teachers could employ them for formative assessment of student learning.