| Field | Value |
|---|---|
| Funder | National Science Foundation (US) |
| Recipient Organization | Girouard-Hallam, Lauren Nicole |
| Country | United States |
| Start Date | Jan 01, 2025 |
| End Date | Dec 31, 2026 |
| Duration | 729 days |
| Number of Grantees | 3 |
| Roles | Principal Investigator; Co-Principal Investigator |
| Data Source | National Science Foundation (US) |
| Grant ID | 2404635 |
This award was provided as part of NSF's Social, Behavioral and Economic Sciences (SBE) Postdoctoral Research Fellowships (SPRF) program. The goal of the SPRF program is to prepare promising, early career doctoral-level scientists for scientific careers in academia, industry or private sector, and government. SPRF awards involve two years of training under the sponsorship of established scientists and encourage Postdoctoral Fellows to perform independent research.
NSF seeks to promote the participation of scientists from all segments of the scientific community, including those from underrepresented groups, in its research programs and activities; the postdoctoral period is an important level of professional development in attaining this goal. Each Postdoctoral Fellow must address important scientific questions that advance their respective disciplinary fields.
Under the sponsorship of Dr. Ying Xu and Dr. Susan Gelman at the University of Michigan, this postdoctoral fellowship award supports an early career scientist examining how children’s understanding of generative artificial intelligence (AI) chatbots develops.
As chatbots such as ChatGPT continue to advance and dominate our online informational landscape, it is important to understand how much children trust these tools and whether they can recognize the kinds of errors they commonly make. Children's decision-making about whom to trust, and when, matures with age and experience. Given the pervasiveness of generative AI in our technological environment, it is critical to assess children's ability to identify the kinds of misinformation to which these chatbots are prone.
Doing so will provide vital information to researchers, parents, and teachers about how to best scaffold children’s early experiences with AI.
The goal of this project is to establish foundational knowledge regarding children's trust in AI chatbots. The project consists of a two-phase, mixed-methods approach and an educational intervention with 7- to 12-year-old children. Phase 1 will consist of an experiment in which 216 children aged 7 to 12 will be asked how much they trust answers to questions about science, provided by either an AI chatbot or a human.
Children will also be asked to identify certain kinds of errors (fabrications, logical inconsistencies, typos) made by human and AI chat partners. Finally, individual differences in children's verbal ability and internet experience will be explored as potential mechanisms contributing to children's trust decisions and error recognition. In Phase 2, this experiment will be conducted in a school setting, and classrooms of 7- to 8-year-old children will engage in a brief educational activity designed to help them recognize errors made by generative AI chatbots.
These activities will consist of semi-structured interviews with children. Findings will advance understanding of children’s trust and error recognition and provide insight into the effectiveness of an educational intervention in bolstering children’s error recognition.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.