Projects


AI as Human?: Validating Large Language Models for Persuasion Simulation

As researchers and practitioners increasingly consider using artificial intelligence to model human behavior, a fundamental question emerges: AI can persuade humans effectively, but can it model human information processing well enough to simulate persuasion processes accurately? My dissertation research addresses this question by developing the first systematic framework for evaluating whether large language models (LLMs) can replicate human responses in dynamic persuasive conversations.

Using controlled experiments with 1,148 participant profiles and more than 9,000 individual AI simulations, I evaluate AI’s capacity to model human responses to persuasive messages and counterarguments, enabling researchers and practitioners to test campaign strategies at scale, anticipate audience reactions, and identify ethical boundaries in AI-driven persuasion. Results demonstrate that while LLMs can approximate human belief change and conversational patterns, they systematically underestimate the variability of belief change and exhibit demographic biases.
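
To make the evaluation logic concrete, the sketch below illustrates the core comparison: simulate each participant profile's post-message belief with an LLM, then compare the mean and spread of simulated versus observed belief change. The query_llm helper and the profile fields (age, gender, prior_belief) are hypothetical stand-ins, not the dissertation's actual pipeline.

```python
import statistics

def query_llm(prompt: str) -> str:
    """Placeholder for any chat-completion API call; wire in a real client."""
    raise NotImplementedError

def simulate_persona_response(profile: dict, message: str) -> float:
    """Prompt the model to role-play one participant profile and return a
    post-exposure belief rating on a 1-7 scale."""
    prompt = (
        f"You are a {profile['age']}-year-old {profile['gender']} who currently "
        f"rates the claim {profile['prior_belief']}/7. After reading the message "
        f"below, reply with your updated 1-7 rating only.\n\n{message}"
    )
    return float(query_llm(prompt))

def compare_belief_change(profiles, human_post_ratings, message):
    """Compare simulated and observed belief change on mean and spread."""
    sim = [simulate_persona_response(p, message) - p["prior_belief"] for p in profiles]
    hum = [post - p["prior_belief"] for p, post in zip(profiles, human_post_ratings)]
    return {
        "mean_gap": statistics.mean(sim) - statistics.mean(hum),
        # A ratio well below 1 would indicate under-dispersed simulated change.
        "variance_ratio": statistics.variance(sim) / statistics.variance(hum),
    }
```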

This research provides critical safeguards for AI deployment in communication research while enabling responsible use for theory development and intervention testing. For strategic communication professionals, these findings create opportunities to prototype campaign messages at scale and test persuasive strategies across diverse audience segments before deploying them with human audiences, while recognizing the limitations of such approaches. The systematic differences between AI and human responses advance theoretical understanding of both persuasion mechanisms and the fundamental distinctions between human and machine information processing.

I am extending this AI simulation framework to additional contexts, including vaccine persuasion, donation campaigns, and crisis communication scenarios. 

Psychological Profiles of AI Users: A Behavioral Data Approach

Despite widespread discourse about artificial intelligence transforming society, surprisingly little objective data exists on how people actually use AI tools in everyday life. In this research, I examine real-world AI adoption patterns and their psychological correlates using large-scale behavioral data.

By collecting and analyzing up to 90 days of web-browsing data from 954 participants across two studies (university students and general public samples), I quantified actual AI engagement across 14 million website visits. The findings reveal that AI use is substantially lower than public discourse suggests, comprising on average only 1% of student web browsing and 0.44% of general public browsing. Notably, self-reported AI use correlates only moderately with actual use (ρ = .329), highlighting critical limitations of self-report measures of technology use.
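
A minimal sketch of this behavioral measurement, assuming hypothetical visits and survey tables and an illustrative, non-exhaustive list of AI-tool domains:

```python
import pandas as pd
from scipy.stats import spearmanr

# Illustrative, non-exhaustive set of AI-tool domains.
AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "gemini.google.com", "claude.ai"}

def ai_share(visits: pd.DataFrame) -> pd.Series:
    """Fraction of each participant's logged visits that go to AI tools.
    `visits` is assumed to have columns: participant_id, domain."""
    is_ai = visits["domain"].isin(AI_DOMAINS)
    return is_ai.groupby(visits["participant_id"]).mean()

def self_report_validity(visits: pd.DataFrame, survey: pd.DataFrame) -> float:
    """Spearman's rho between behavioral and self-reported AI use.
    `survey` is assumed to have columns: participant_id, self_reported_ai_use."""
    merged = survey.set_index("participant_id").join(ai_share(visits).rename("actual"))
    rho, _ = spearmanr(merged["self_reported_ai_use"], merged["actual"],
                       nan_policy="omit")
    return rho
```
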
The research identifies personality-based patterns of AI receptivity: aversive personality traits (Machiavellianism, narcissism, and psychopathy) consistently predict adoption, while demographics show little relationship to usage. These findings enable more precise targeting in communication campaigns and inform strategic decisions about AI integration. Temporal analysis of browsing patterns suggests AI serves primarily as an instrumental tool in educational and professional workflows rather than as a recreational one.
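
The trait analysis can be illustrated with an ordinary least squares sketch on synthetic stand-in data; the column names, trait scales, and effect sizes below are assumptions for demonstration, not the study's measures or estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: one row per participant with dark-triad trait
# scores, demographics, and the behavioral ai_share outcome computed above.
rng = np.random.default_rng(0)
n = 954
df = pd.DataFrame({
    "machiavellianism": rng.normal(3.0, 1.0, n),
    "narcissism": rng.normal(3.0, 1.0, n),
    "psychopathy": rng.normal(2.0, 1.0, n),
    "age": rng.integers(18, 70, n),
    "gender": rng.choice(["woman", "man"], n),
})
df["ai_share"] = 0.004 + 0.002 * df["machiavellianism"] + rng.normal(0, 0.005, n)

# In this setup, trait coefficients emerge as predictors while demographics stay flat.
model = smf.ols(
    "ai_share ~ machiavellianism + narcissism + psychopathy + age + C(gender)",
    data=df,
).fit()
print(model.summary())
```
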
This work, published in Cyberpsychology, Behavior, and Social Networking, establishes important behavioral benchmarks for understanding technology adoption and demonstrates the value of combining passive digital trace data with psychological assessment, moving beyond self-report limitations to reveal how individual differences shape real-world technology use.

Future work will examine what users actually do during AI sessions and how usage content relates to individual characteristics. I am particularly interested in investigating downstream consequences of AI use, including impacts on academic integrity, information-seeking behaviors, and work performance, to understand how actual AI adoption patterns influence meaningful real-world outcomes.

HPV Misbeliefs 

Human papillomavirus (HPV) causes cervical cancer and remains one of the most common sexually transmitted infections globally. Despite effective vaccines, vaccination rates remain suboptimal worldwide, with parental beliefs significantly influencing HPV vaccination decisions. This research line presents the first comprehensive global taxonomy of parental HPV vaccination beliefs, providing health communicators with an evidence-based framework for developing targeted interventions.

Analyzing 519 studies with AI-enhanced literature analysis, I identified 24 core belief categories that account for 80% of documented concerns across international populations. This work demonstrates how LLMs can scale literature synthesis while revealing consistent global patterns alongside important regional differences, both of which inform strategic communication and targeted intervention design.
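
The coding step can be sketched as follows, reusing the generic query_llm placeholder from the first project; the category labels here are illustrative examples, not the full 24-item taxonomy.

```python
from collections import Counter

def query_llm(prompt: str) -> str:
    """Placeholder for any chat-completion API call."""
    raise NotImplementedError

# Illustrative labels only; the actual taxonomy comprises 24 categories.
CATEGORIES = ["safety concerns", "low perceived risk", "distrust of authorities",
              "sexual activity concerns", "cost and access barriers", "other"]

def code_belief(statement: str) -> str:
    """Assign one extracted parental belief to a single taxonomy category."""
    prompt = (
        "Assign this parental HPV-vaccination belief to exactly one category "
        f"from {CATEGORIES}. Belief: {statement!r}. Reply with the category only."
    )
    label = query_llm(prompt).strip().lower()
    return label if label in CATEGORIES else "other"

def coverage(statements: list[str], top_k: int = 24) -> float:
    """Share of all coded beliefs captured by the top_k categories."""
    counts = Counter(code_belief(s) for s in statements)
    covered = sum(c for _, c in counts.most_common(top_k))
    return covered / max(sum(counts.values()), 1)
```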

I am currently preparing a survey study to gather additional insights into the misbeliefs of parents living in rural areas of the US. Findings from this survey, together with my literature-derived taxonomy, will inform the design of an AI-powered chatbot that holds personalized conversations addressing parental concerns in rural health clinics, where providers may lack time for extensive one-on-one discussions.
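
A minimal dialogue sketch for such a chatbot, under the same assumptions as above (generic query_llm placeholder, the code_belief classifier from the taxonomy sketch, and an illustrative category-to-guidance mapping); the deployed design may differ.

```python
def query_llm(prompt: str) -> str:
    """Placeholder for any chat-completion API call."""
    raise NotImplementedError

# Illustrative mapping from taxonomy categories to evidence-based guidance.
TALKING_POINTS = {
    "safety concerns": "Summarize post-licensure safety monitoring results.",
    "low perceived risk": "Explain HPV prevalence and the cancers it causes.",
    "distrust of authorities": "Acknowledge the concern and cite independent studies.",
}

def respond(parent_message: str, history: list[str]) -> str:
    """Classify the parent's concern, then generate a tailored, respectful reply."""
    category = code_belief(parent_message)  # classifier from the taxonomy sketch
    guidance = TALKING_POINTS.get(category, "Answer empathetically and factually.")
    prompt = (
        "You are a respectful health-communication assistant in a rural clinic.\n"
        f"Conversation so far: {history}\n"
        f"Parent says: {parent_message!r}\n"
        f"Guidance: {guidance}\n"
        "Reply in plain, non-judgmental language."
    )
    return query_llm(prompt)
```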