Showcasing CHCI’s Research and Innovation at CHI 2025
April 21, 2025

The Center for Human-Computer Interaction (CHCI) has a strong presence at CHI 2025, contributing to many aspects of the conference and receiving two paper awards. The ACM CHI Conference on Human Factors in Computing Systems is the premier international conference on Human-Computer Interaction. CHI takes place at PACIFICO Yokohama in Yokohama, Japan, from 26 April to 1 May 2025, while also supporting remote attendance. The conference embraces the theme of Ikigai, a Japanese concept referring to what gives a person a sense of purpose or reason for living.
CHCI members have contributed to seven full papers (including a best paper award and an honorable mention award), five late-breaking work papers, a student research competition entry, and a workshop paper.
List of Full Papers
[Best Paper Award] The Fidelity-based Presence Scale (FPS): Modeling the Effects of Fidelity on Sense of Presence
[Best Paper Honorable Mention Award] ReMirrorFugue: Examining the Emotional Experience of Presence and (Illusory) Communications Across Time
From Knowledge to Practice: Co-Designing Privacy Controls with Children
Investigating the Effects of Simulated Eye Contact in Video-call Job Interviews
OSINT Clinic: Co-designing AI-Augmented Collaborative OSINT Investigations for Vulnerability Assessment
Reimagining Support: Exploring Autistic Individuals' Visions for AI in Coping with Negative Self-Talk
What Lies Beneath? Exploring the Impact of Underlying AI Model Updates in AI-Infused Systems
List of Late-Breaking Works
"Look at My Planet!": How Handheld Virtual Reality Shapes Informal Learning Experiences
Understanding the Creation of Human-Virtual Entity Bonds through the AR Mobile Game Peridot
Student Research Competition
Boosting Diary Study Outcomes with a Fine-Tuned Large Language Model
Workshop Paper
Tailoring Generative AI to Augment Creative Leadership in Capture-The-Flag Development
Paper Presentations
(CHCI members in bold)
The Fidelity-based Presence Scale (FPS): Modeling the Effects of Fidelity on Sense of Presence
[Best Paper Award]
Jacob Belga, Richard Skarbez, Yahya Hmaiti, Eric J Chen, Ryan P. McMahan, Joseph LaViola

Within the virtual reality (VR) research community, there have been several efforts to develop questionnaires with the aim of better understanding the sense of presence. Despite these numerous surveys, the community does not have a questionnaire that indicates which components of a VR application contribute to the sense of presence. Furthermore, previous literature notes the absence of consensus on which questionnaire or questions should be used. Therefore, we conducted a Delphi study, engaging presence experts to establish a consensus on the most important presence questions and their respective verbiage. We then conducted a validation study with an exploratory factor analysis (EFA). Together, these two studies led to the creation of the Fidelity-based Presence Scale (FPS). With our consensus-driven approach and fidelity-based factoring, we hope the FPS will enable better communication within the research community and yield important future results regarding the relationship between VR system fidelity and presence.
ReMirrorFugue: Examining the Emotional Experience of Presence and (Illusory) Communications Across Time
[Best Paper Honorable Mention Award]
Xiao Xiao, Hayoun Noh, Adrien Lefevre, Lucy Li, Holly McKee, Alaa Algargoosh, Hiroshi Ishii

This paper examines how strategies for simulating social presence across distance can evoke a sense of presence and facilitate illusory interactions across time. We conducted a mixed-methods study with 28 participants, exploring their emotional experience of interacting with decade-old recorded piano performances on MirrorFugue—a player piano enhanced with life-sized projections of the pianist’s hands and body, creating the illusion of a virtual reflection playing the instrument. Data were collected via wearable sensors, questionnaires, and interviews. Results showed that participants felt a strong presence of past pianists, with some experiencing the illusion of two-way communication and an overall increase in connection. The emotional experience was significantly influenced by the participant’s relationship with the recorded pianist and the pianist’s vital status. These findings suggest that telepresence technologies can foster connections with the past, offering spaces for memory recall, self-reflection, and a sense of “time travel.”
From Knowledge to Practice: Co-Designing Privacy Controls with Children
Lanjing Liu, Yaxing Yao

Children born in the digital era are facing increasing privacy risks and the need to control privacy in various contexts, suggesting an urgent need to enhance their privacy literacy. While previous research focuses on developing children's privacy literacy by delivering privacy knowledge, it remains unclear how children process the knowledge and apply it in various privacy situations. Furthermore, children's desire for privacy controls remains understudied. To fill the gap, we conducted two five-day co-design workshops with 11 children (ages 6-11). We uncovered children's sophisticated expectations of everyday privacy management, such as staying aware of their privacy situations, using strong authentication methods, and minimizing privacy exposure. We further discovered that children translated their privacy knowledge to privacy practices through an iterative reflection and action process. We discussed key considerations to support children's privacy literacy development by leveraging this process and offered implications for child-friendly privacy design.
Investigating the Effects of Simulated Eye Contact in Video-call Job Interviews
Andrew Jelson, Md Tahsin Tausif, Soumya Khanna, Sol Lim, Sang Won Lee

Anecdotally, some communication professionals suggest that directly looking at the camera during a video call can create the impression of making eye contact, fostering rapport and engagement in job interviews or customer interactions. However, there is little evidence that simulating eye contact has a meaningful positive effect on how the speaker is perceived. In this study, we investigated the effects of simulated eye contact in video-call job interviews through an experimental study and a survey. Study 1 involved participants acting as interviewers in a mock interview, where a confederate interviewee simulated eye contact half of the time. We tracked the participants' gaze patterns to assess the effects of simulated eye contact. Study 2 was an online survey designed to validate the findings of Study 1 on a larger scale. Participants with interviewing experience evaluated interviewees based on recorded interview videos, half of which included simulated eye contact. The results of both studies indicate that simulated eye contact had little impact on interview evaluations, contrary to common belief. We discuss how these findings motivate future research and highlight the implications of simulated eye contact.
OSINT Clinic: Co-designing AI-Augmented Collaborative OSINT Investigations for Vulnerability Assessment
Anirban Mukhopadhyay, Kurt Luther

Small businesses need vulnerability assessments to identify and mitigate cyber risks. Cybersecurity clinics provide a solution by offering students hands-on experience while delivering free vulnerability assessments to local organizations. To scale this model, we propose an Open Source Intelligence (OSINT) clinic where students conduct assessments using only publicly available data. We enhance the quality of investigations in the OSINT clinic by addressing technical and collaborative challenges. Over the 2023-24 academic year, we conducted a three-phase co-design study with six students. Our study identified key challenges in OSINT investigations and explored how generative AI could address these performance gaps. We developed design ideas for effective AI integration based on the use of AI probes and collaboration platform features. A pilot with three small businesses highlighted both the practical benefits of AI in streamlining investigations and its limitations, including privacy concerns and difficulty in monitoring progress.
Reimagining Support: Exploring Autistic Individuals' Visions for AI in Coping with Negative Self-Talk
Buse Carik, Victoria Izaac, Xiaohan Ding, Angela Scarpa, Eugenia Rho

Autistic individuals often experience negative self-talk (NST), leading to increased anxiety and depression. While therapy is recommended, it presents challenges for many autistic individuals. Meanwhile, a growing number are turning to large language models (LLMs) for mental health support. To understand how autistic individuals perceive AI's role in coping with NST, we surveyed 200 autistic adults and interviewed autism practitioners. We also analyzed LLM responses to participants' hypothetical prompts about their NST. Our findings show that participants view LLMs as useful for managing NST by reframing negative thoughts. Both participants and practitioners recognize AI's potential to support therapy and emotional expression. Participants expressed concerns about LLMs' understanding of neurodivergent thought patterns, particularly due to the neurotypical bias of LLMs. Practitioners critiqued LLMs' responses as overly wordy, vague, and overwhelming. This study contributes to the growing body of research on AI-assisted mental health support, with specific insights for supporting the autistic community.
What Lies Beneath? Exploring the Impact of Underlying AI Model Updates in AI-Infused Systems
Vikram Mohanty, Jude Lim, Kurt Luther

AI models are constantly evolving, with new versions released frequently. Human-AI interaction guidelines encourage notifying users about changes in model capabilities, ideally supported by thorough benchmarking. However, as AI systems integrate into domain-specific workflows, exhaustive benchmarking can become impractical, often resulting in silent or minimally communicated updates. This raises critical questions: Can users notice these updates? What cues do they rely on to distinguish between models? How do such changes affect their behavior and task performance? We address these questions through two studies in the context of facial recognition for historical photo identification: an online experiment examining users' ability to detect model updates, followed by a diary study exploring perceptions in a real-world deployment. Our findings highlight challenges in noticing AI model updates, their impact on downstream user behavior and performance, and how they lead users to develop divergent folk theories. Drawing on these insights, we discuss strategies for effectively communicating model updates in AI-infused systems.
Late-Breaking Works
Co-Design Privacy Notice and Controls with Children
Lanjing Liu, Xiaozheng Wang, Shaddi Hasan, Yaxing Yao

Children, as digital natives, face increasing privacy risks and are required to make numerous privacy decisions daily. However, existing privacy notices and controls mainly focus on adult users and remain challenging for children, who may lack the literacy and developmental maturity to make informed decisions. To empower children to manage their privacy, it is essential to create accessible, comprehensible, and context-appropriate privacy notice and control designs. To fill the gap, we conducted a four-day co-design workshop with five children (ages 8-11). We uncovered children's critical challenges with current privacy notices and controls, such as information overload, unclear terminology, and insufficient contextual or causal explanations. The findings reveal children's specific needs and expectations across key dimensions, including modality, timing, channel, and the types and functionality of privacy controls. Based on these insights, we propose design implications to enhance children's ability to make informed privacy decisions and support their digital autonomy.
"Look at My Planet!": How Handheld Virtual Reality Shapes Informal Learning Experiences
Hayoun Moon, Carlos Augusto Bautista Isaza, Matthew Gallagher, Clara McDaniel, Atlas Vernier, Leah Ican, Karina Springer, Macey Cohn, Sylvia Bennett, Priyanka Nair, Alayna Ricard, Nayha Pochiraju, Daniel Enriquez, Sang Won Lee, Todd Ogle, Phyllis Newbill, Myounghoon Jeon

Handheld virtual reality offers a promising tool for fostering engagement in informal learning environments, providing safe, shared, and inclusive experiences. This study investigated the potential of a handheld VR-based educational program, Solar System Explorer, in a science museum setting. Fifty-three participants, aged 5 to 13, engaged in six interactive scenes using handheld tablets, involving room-scale exploration of virtual environments in small groups guided by a docent. Findings showed that dynamic, room-scale content encouraged active physical movement, while visually rich, interactive scenes fostered knowledge sharing and elicited positive emotional responses. Social engagement was strongest during creative activities, such as planet building, which facilitated interactions even among unfamiliar peers. These insights inform design guidelines for developing fun, active, and collaborative VR learning environments, contributing to scalable and inclusive handheld VR applications for informal education.
SPHERE: Supporting Personalized Feedback at Scale in Programming Classrooms with Structured Review of Generative AI Outputs
Xiaohang Tang, Sam Wong, Marcus Huynh, Zicheng He, Yalong Yang, Yan Chen

This paper introduces SPHERE, a system that enables instructors to effectively create and review personalized feedback for in-class coding activities. Comprehensive personalized feedback is crucial for programming learning. However, providing such feedback in large programming classrooms poses significant challenges for instructors. While Large Language Models (LLMs) offer potential assistance, how to efficiently ensure the quality of LLM-generated feedback remains an open question. SPHERE guides instructors' attention to critical student issues, empowers them with guided control over LLM-generated feedback, and provides visual scaffolding to facilitate verification of feedback quality. Our between-subjects study with 20 participants demonstrates SPHERE's effectiveness in creating more high-quality feedback while not increasing the time spent on the overall review process compared to a baseline system. This work contributes a synergistic approach to scaling personalized feedback in programming education, addressing the challenges of real-time response, issue prioritization, and large-scale personalization.
Understanding the Creation of Human-Virtual Entity Bonds through the AR Mobile Game Peridot
Jixiang Fan, Yusheng Cao, Morva Saaty, Wei-Lu Wang, Lei Xia, Huayi Liu, D. Scott McCrickard

Virtual entities in computer games create bonds with the people who engage with the game. This paper explores these bonds: how they are created, what game features influence them, and how the human-virtual bonds benefit players. This exploration primarily takes place through a diary study of eight people who played the augmented reality mobile game Peridot. Drawn from a pool of self-described game enthusiasts with knowledge of human-computer interaction methods, the eight people, new to Peridot, played the game for 10 days and wrote daily diary entries about their experiences. Following an in-depth collaboration with the players to reflect on and analyze the diary contents, this paper amalgamates these firsthand gaming experiences with prior research to contribute pragmatic recommendations for improving user engagement and the formation of virtual bonds in gaming environments.
Wearable Meets LLM for Stress Management: A Duoethnographic Study Integrating Wearable-Triggered Stressors and LLM Chatbots for Personalized Interventions
Sameer Neupane, Poorvesh Dongre, Denis Gracanin, Santosh Kumar

We use a duoethnographic approach to study how wearable-integrated LLM chatbots can assist with personalized stress management, addressing the growing need for immediacy and tailored interventions. Two researchers interacted with custom chatbots over 22 days, responding to wearable-detected physiological prompts, recording stressor phrases, and using them to seek tailored interventions from their LLM-powered chatbots. They recorded their experiences in autoethnographic diaries and analyzed them during weekly discussions, focusing on the relevance, clarity, and impact of chatbot-generated interventions. Results showed that even though most events triggered by the wearable were meaningful, only one in five warranted an intervention. They also showed that interventions tailored with brief event descriptions were more effective than generic ones. By examining the intersection of wearables and LLMs, this research contributes to developing more effective, user-centric mental health tools for real-time stress relief and behavior change.
Student Research Competition
Boosting Diary Study Outcomes with a Fine-Tuned Large Language Model
Sunggyeol Oh, Jiacheng Zhao, Carson Russo, Michael Bolmer Jr

This study explores the integration of fine-tuned Large Language Models (LLMs) into diary studies within the Human-Computer Interaction (HCI) field to enhance data collection and analysis. Leveraging a Mistral 7B model fine-tuned with a curated dataset of over 1,000 diary entries, this research addresses challenges such as participant engagement and data richness. The fine-tuned model offers personalized feedback, facilitating deeper reflection and structured recording while reducing the cognitive load on participants. The DiaryQuest educational platform, enhanced with advanced visualization tools and semantic search capabilities, enables educators to efficiently analyze diary data, extract thematic insights, and provide targeted guidance. Results from user evaluations reveal that the optimized platform improves learning outcomes, teaching efficiency, and overall user experience. By bridging traditional diary methodologies with state-of-the-art LLMs, this study advances HCI education and establishes a scalable framework for applying AI in broader educational and research contexts.
Workshop Paper
Tailoring Generative AI to Augment Creative Leadership in Capture-The-Flag Development
Anirban Mukhopadhyay, Kurt Luther

Capture-the-Flag (CTF) competitions are valuable for training in cybersecurity and investigative skills, but their development is time-consuming and requires skilled staff. This paper explores how generative AI can augment creativity and collaboration to streamline CTF development. Using a Research-through-Design (RtD) approach, we develop CTFBot, an AI-powered assistant to support leadership behaviors such as planning, clarifying, monitoring progress, and problem-solving. Grounded in Distributed Leadership (DL) theory, CTFBot enhances collaboration while preserving human creativity. A pilot study reveals challenges in maintaining engagement and providing support through conversational user interfaces. This work offers insights into AI-assisted collaboration for team-based creative tasks.