The Center for Human-Computer Interaction at Virginia Tech contributes to CHI 2026
April 8, 2026
The Center for Human-Computer Interaction at Virginia Tech (CHCI) will be well represented at the annual ACM CHI Conference on Human Factors in Computing Systems, with faculty affiliates and student members contributing papers, posters, and workshop sessions. This year’s conference will take place April 13–17, 2026, in Barcelona, Spain, at the Centre de Convencions Internacional de Barcelona.
CHCI faculty and student researchers authored 18 full papers, including one recognized with an Honorable Mention Award, as well as one workshop, seven workshop papers, and five poster presentations. Accepted papers spanned human-AI collaboration and trust; AI in education and skill development; bias, ethics, and vulnerable populations; immersive and extended reality systems; and AI for sensemaking and decision support. Poster and workshop contributions explore trust, consent, creativity, accessibility, and empathy.
Conference roles
Associate Chairs, Blending interaction: Engineering interactive systems & tools subcommittee: Sang Won Lee and Yan Chen
Associate Chair, Computational interaction subcommittee: Yan Chen
Associate Chair, Critical computing, sustainability, and social justice subcommittee: Ihudiya Finda Williams
Associate Chair, Understanding people — statistical and quantitative methods subcommittee: Eugenia H. Rho
Workshop Organizer, Herding CATs: Making sense of creative activity traces: Sang Won Lee
Scholarly contributions
Full papers
An empirical study to understand how students use ChatGPT for writing essays
"Are we writing an advice column for Spock here?" Understanding stereotypes in AI advice for autistic users
CHOIR: A chatbot-mediated organizational memory leveraging communication in university research labs
CodeStream: Augmenting timelines with code annotation for navigating large coding histories
Does personalized nudging wear off? A longitudinal study of AI self-modeling for behavioral engagement
Effects of virtual reality system fidelity on presence using the fidelity-based presence scale
Exploring the impact of proactive generative AI agent roles in time-sensitive collaborative problem-solving tasks
From vulnerable to resilient: Examining parent and teen perceptions on how to respond to unwanted cybergrooming advances
"Having lunch now": Understanding how users engage with a proactive agent for daily planning and self-reflection
Human-human-AI triadic programming: Uncovering the role of AI agent and the value of human partner in collaborative learning
LingoQ: Bridging the gap between EFL learning and work through AI-generated work-related quizzes
Power echoes: Investigating moderation biases in online power-asymmetric conflicts
PuppetChat: Fostering intimate communication through bidirectional actions and micronarratives
The influence of distributed AI in trust and collaboration for search-and-rescue teams
The wetland quest: Fostering empathy and literacy for urban herpetofauna through VR wetland exploration
Understanding digital religion in the lives of Black Christian young adults
When hands meet physics in virtual reality: Effects of interaction fidelity on user experience
When less can be more: Evaluating the impact of animated and interactive demonstrations in voice-assisted counting games for young children
Workshops
Herding CATs: Making sense of creative activity traces
Workshop papers
AI-augmented human decision-making in secure space operations
Deceptive patterns in immersive environments: How XR can expand markets but expose sensitive information
From explainable AI to human-centered system reliability: quantifying and visualizing calibrated trust in mission-critical XR
Redistributing interdependence in organizational knowledge work: Lessons from deploying an LLM mediator in research labs
Towards supporting mediators in human-agent collaboration
TUNING: Adaptive GUI modification as agent tools in task-oriented chatbots
Visceral notices: Rethinking consent for passive sensing in augmented reality
Poster presentations
Human-AI interaction in IXR: Design considerations from experts
Reframing ambiguity as discovery: Non-punitive feedback in tangible vocabulary learning
Unpacking empathy development in HCI learners: Patterns from diary reflections and peer discussions
Details of full papers
[Honorable Mention award winner] CHOIR: A chatbot-mediated organizational memory leveraging communication in university research labs
Sangwook Lee, Adnan Abbas, Yan Chen, Young-Ho Kim, Sang Won Lee
University research labs often rely on chat-based platforms for communication and project management, where valuable knowledge surfaces but is easily lost in message streams. Documentation can preserve knowledge, but it requires ongoing maintenance and is challenging to navigate. Drawing on formative interviews that revealed organizational memory challenges in labs, we designed CHOIR, an LLM-based chatbot that supports organizational memory through four key functions: document-grounded Q&A, Q&A sharing for follow-up discussion, knowledge extraction from conversations, and AI-assisted document updates. We deployed CHOIR in four research labs for one month (n=21), during which lab members asked 107 questions and lab directors updated organizational memory documents 38 times. Our findings reveal a privacy-awareness tension: questions were asked privately, limiting directors’ visibility into documentation gaps. Students often avoided contributing due to challenges in generalizing personal experiences into universal documentation. We contribute design implications for privacy-preserving awareness and supporting context-specific knowledge documentation.
An empirical study to understand how students use ChatGPT for writing essays
Andrew Jelson, Daniel Manesh, Alice Jang, Daniel Dunlap, Young-Ho Kim, Sang Won Lee
As large language models (LLMs) become widespread, students increasingly turn to systems like ChatGPT for writing tasks. Educators worry that this reliance may reduce critical engagement with writing and hinder students' learning processes. Although datasets exist on students’ use of LLMs for writing, how they functionally use ChatGPT in detail — and how this usage shapes their writing and perceptions — remains underexplored. We conducted an online study (n=77) in which students wrote an essay using an in-house ChatGPT we developed to capture their queries. Through qualitative analysis, we identified the types of assistance students sought and presented patterns of use, ranging from asking for opinions on a topic to delegating the entire writing task to ChatGPT. We also found that students' writing self-efficacy influenced their querying patterns and that levels of ownership and creativity varied depending on how they used ChatGPT. This study contributes empirical data to ongoing discussions about how writing education should incorporate or regulate LLM-powered tools.
"Are we writing an advice column for Spock here?" Understanding stereotypes in AI advice for autistic users
Caleb Wohn, Buse Carik, Xiaohan Ding, Sang Won Lee, Young-Ho Kim, Eugenia Rho
Autistic individuals sometimes disclose autism when asking LLMs for social advice, hoping for more personalized responses. However, they also recognize that these systems may reproduce stereotypes, raising uncertainty about the risks and benefits of disclosure. We conducted a mixed-methods study combining a large-scale LLM audit experiment with interviews involving 11 autistic participants. We developed a six-step pipeline operationalizing 12 documented autism stereotypes into decision-making scenarios framed as users requesting advice (e.g., "Should I do A or B?"). We generated 345,000 responses from six LLMs and measured how advice shifted when prompts disclosed autism versus when they did not. When autism was disclosed, LLMs disproportionately recommended avoiding stereotypically stressful situations, including social events, confrontations, new experiences, and romantic relationships. While some participants viewed this as affirming, others criticized it as infantilizing or undermining opportunities for growth. Our study illuminates how the intermingling of affirmation and stereotyping complicates the personalization of LLMs.
CodeStream: Augmenting timelines with code annotation for navigating large coding histories
Ashley Ge Zhang, Yan-Ru Jhou, Yinuo Yang, Shamita Rao, Maryam Arab, Yan Chen, Steve Oney
Code edit histories can offer instructors valuable insight into students’ problem-solving processes, revealing unproductive behaviors that final code alone cannot capture. For example, a correct solution may contain large copy-and-pasted segments (suggesting the code originated elsewhere) or unguided trial-and-error (suggesting a lack of clear strategy). Timelines are a common way to visualize code histories, but existing timeline visualizations of code or document histories show only when and where edits occurred, not what changed. Without this context, it is difficult to answer key questions about how students invested effort or to infer their intentions. We present CodeStream, a visualization system that augments timelines with situational code annotations, whose granularity and visibility dynamically adapt to scale and interaction state. A comparison study shows that CodeStream enables context-aware navigation of coding histories, supporting fast and accurate pattern identification, and helping instructors reason about students’ coding behaviors and identify who may need intervention.
Does personalized nudging wear off? A longitudinal study of AI self-modeling for behavioral engagement
Qing He, Zeyu Wang, Yuzhou Du, Jiahuan Ding, Yuanchun Shi, Yuntao Wang
Sustaining the effectiveness of behavior change technologies remains a key challenge. AI self-modeling, which generates personalized portrayals of one's ideal self, has shown promise for motivating behavior change, yet prior work largely examines short-term effects. We present one of the first longitudinal evaluations of AI self-modeling in fitness engagement through a two-stage empirical study. A 1-week, three-arm experiment (visual self-modeling (VSM), auditory self-modeling (ASM), Control; N=28) revealed that VSM drove initial performance gains, while ASM showed no significant effects. A subsequent 4-week study (VSM vs. Control; N=31) demonstrated that VSM sustained higher performance levels but exhibited diminishing improvement rates after two weeks. Interviews uncovered a catalyst effect that fostered early motivation through clear, attainable goals, followed by habituation and internalization which stabilized performance. These findings highlight the temporal dynamics of personalized nudging and inform the design of behavior change technologies for long-term engagement.
Effects of virtual reality system fidelity on presence using the fidelity-based presence scale
Jacob Belga, Joseph LaViola, Ryan P. McMahan
Numerous studies have investigated the effects of system fidelity as a whole on one’s total sense of presence in virtual reality (VR). The Fidelity-based Presence Scale (FPS), a recently introduced presence questionnaire, provides a method for investigating the effects of different system fidelities (interaction, scenario, and display) on different aspects of one’s sense of presence. In this paper, we present one of the first studies to investigate those effects for a locomotion task by conducting a 2 × 2 × 2 within-subjects experiment that reveals insight into how the components of system fidelity affect sense of presence. Like recent research, our results indicate that interaction fidelity and display fidelity significantly affect one’s interaction presence and display presence, respectively. However, unlike prior work, we did not find that changes in scenario fidelity significantly affected one’s scenario presence. We discuss other results and the possible implications of this research.
Exploring the impact of proactive generative AI agent roles in time-sensitive collaborative problem-solving tasks
Anirban Mukhopadhyay, Kevin Salubre, Hifza Javed, Shashank Mehrotra, Kumar Akash
Collaborative problem-solving under time pressure is common but difficult, as teams must generate ideas quickly, coordinate actions, and track progress. Generative AI offers new opportunities to assist, but we know little about how proactive agents affect the dynamics of real-time, co-located teamwork. We studied two forms of proactive support in digital escape rooms: a facilitator agent that offered summaries and progress cues, and a peer agent that proposed ideas and answered queries. In a within-subjects study with 24 participants, we compared group performance and processes across three conditions: No AI, Peer, and Facilitator. Results show that the peer agent occasionally enhanced problem-solving by offering timely hints and memory support, though it also disrupted flow and created over-reliance. In comparison, the facilitator agent provided light scaffolding but had a limited impact on outcomes. We provide design considerations for proactive generative AI agents based on our findings.
From vulnerable to resilient: Examining parent and teen perceptions on how to respond to unwanted cybergrooming advances
Xinyi Zhang, Mamtaj Akter, Heajun An, Minqian Liu, Qi Zhang, Lifu Huang, Jin-Hee Cho, Pamela J. Wisniewski, Sang Won Lee
Cybergrooming is a form of online abuse that threatens teens' mental health and physical safety. Yet, most prior work has focused on detecting perpetrators’ behaviors, leaving a limited understanding of how teens might respond to such unwanted advances. To address this gap, we conducted an online survey with 74 participants — 51 parents and 23 teens — who responded to simulated cybergrooming scenarios in two ways: responses that they think would make teens more vulnerable or resilient to unwanted sexual advances. Through a mixed-methods analysis, we identified four types of vulnerable responses (encouraging escalation, accepting an advance, displaying vulnerability, and negating risk concern) and four types of protective strategies (setting boundaries, directly declining, signaling risk awareness, and leveraging avoidance techniques). As the cybergrooming risk escalated, both vulnerable responses and protective strategies showed a corresponding progression. This study contributes a teen-centered understanding of cybergrooming, a labeled dataset, and a stage-based taxonomy of perceived protective strategies, while offering implications for educational programs and sociotechnical interventions.
"Having lunch now": Understanding how users engage with a proactive agent for daily planning and self-reflection
Adnan Abbas, Caleb Wohn, Arnav Jagtap, Eugenia Rho, Young-Ho Kim, Sang Won Lee
Conversational agents have been studied as tools to scaffold planning and self-reflection for productivity and well-being. While prior work has demonstrated positive outcomes, we still lack a clear understanding of what drives these results and how users behave and communicate with agents that act as coaches rather than assistants. Such understanding is critical for designing interactions in which agents foster meaningful behavioral change.
We conducted a 14-day longitudinal study with 12 participants using a proactive agent that initiated regular check-ins to support daily planning and reflection. Our findings reveal diverse interaction patterns: participants accepted or negotiated suggestions, developed shared mental models, reported progress, and at times resisted or disengaged. We also identified problematic aspects of the agent's behavior, including rigidity, premature turn-taking, and overpromising. Our work contributes to understanding how people interact with a proactive, coach-like agent and offers design considerations for facilitating effective behavioral change.
Human-human-AI triadic programming: Uncovering the role of AI agent and the value of human partner in collaborative learning
Taufiq Daryanto, Xiaohan Ding, Kaike Ping, Lance T. Wilhelm, Yan Chen, Chris Brown, Eugenia H. Rho
As AI assistance becomes embedded in programming practice, researchers have increasingly examined how these systems help learners generate code and work more efficiently. However, these studies often position AI as a replacement for human collaboration and overlook the social and learning-oriented aspects that emerge in collaborative programming. Our work introduces human-human-AI (HHAI) triadic programming, where an AI agent serves as an additional collaborator rather than a substitute for a human partner. Through a within-subjects study with 20 participants, we show that triadic collaboration enhances collaborative learning and social presence compared to the dyadic human-AI (HAI) baseline. In the triadic HHAI conditions, participants relied significantly less on AI-generated code in their work. This effect was strongest in the HHAI-shared condition, where participants had an increased sense of responsibility to understand AI suggestions before applying them. These findings demonstrate how triadic settings activate socially shared regulation of learning by making AI use visible and accountable to a human peer, suggesting that AI systems that augment rather than automate peer collaboration can better preserve the learning processes that collaborative programming relies on.
LingoQ: Bridging the gap between EFL learning and work through AI-generated work-related quizzes
Yeonsun Yang, Sang Won Lee, Jean Y. Song, Sangdoo Yun, Young-Ho Kim
Non-native English speakers performing English-related tasks at work struggle to sustain EFL learning despite their motivation, and study materials are often disconnected from their work context. Our formative study revealed that reviewing work-related English becomes burdensome with current systems, especially after work. Although workers rely on LLM-based assistants to address their immediate needs, these interactions may not directly contribute to their English skills. We present LingoQ, an AI-mediated system that allows workers to practice English using quizzes generated from their LLM queries during work. LingoQ leverages these on-the-fly queries to generate personalized quizzes that workers can review and practice on their smartphones. We conducted a three-week deployment study with 28 EFL workers to evaluate LingoQ. Participants valued the quality-assured, work-situated quizzes and constantly engaged with the app during the study. This active engagement improved self-efficacy and led to learning gains for beginners and, potentially, for intermediate learners. Drawing on these results, we discuss design implications for leveraging workers’ growing reliance on LLMs to foster proficiency and engagement while respecting work boundaries and ethics.
Power echoes: Investigating moderation biases in online power-asymmetric conflicts
Yaqiong Li, Peng Zhang, Peixu Hou, Kainan Tu, Guangping Zhang, Shan Qu, Wenshi Chen, Yan Chen, Ning Gu, Tun Lu
Online power-asymmetric conflicts are prevalent, and most platforms currently rely on human moderators to resolve them. Previous studies have investigated human moderation biases in various scenarios, but moderation biases under power-asymmetric conflicts remain unexplored. We therefore investigate the types of power-related biases human moderators exhibit in power-asymmetric conflict moderation (RQ1) and explore the influence of AI suggestions on these biases (RQ2). To this end, we conducted a mixed-design experiment with 50 participants, using real conflicts between consumers and merchants as a scenario. Results reveal several biases toward supporting the powerful party across both moderation modes. AI assistance alleviates most human moderation biases but also amplifies a few. Based on these results, we propose several insights for future research on human moderation and human-AI collaborative moderation systems for power-asymmetric conflicts.
PuppetChat: Fostering intimate communication through bidirectional actions and micronarratives
Emma Jiren Wang, Siying Hu, Zhicong Lu
As a primary channel for sustaining modern intimate relationships, instant messaging facilitates frequent connection across distances. However, today’s tools often dilute care: they favor single-tap reactions and vague emojis that do not support two-way action responses, fail to preserve a sense of ongoing, unbroken exchange, and are only weakly tied to who we are and what we share. To address this challenge, we present PuppetChat, a dyadic messaging prototype that restores expressive depth through embodied interaction. PuppetChat uses a reciprocity-aware recommender to encourage responsive actions and generates personalized micronarratives from user stories to ground interactions in personal history. Our 10-day field study with 11 dyads of close partners or friends revealed that this approach enhanced social presence, supported more expressive self-disclosure, and sustained continuity and shared memories.
The influence of distributed AI in trust and collaboration for search-and-rescue teams
Matthew Wilchek, Sally Dickinson, Kurt Luther, Feras A. Batarseh
Artificial intelligence (AI) is increasingly deployed in high-stakes domains such as search-and-rescue (SAR), where detections or classifications can shape how teams share information, build trust, and make time-critical decisions. This paper investigates how teams of SAR professionals incorporate AI into their teamwork, highlighting both benefits and challenges. To support this study, we developed the Council of Wizards, a multi-agent Wizard-of-Oz technique that simulates distributed AI systems, enabling scalable and controlled evaluation of collaborative dynamics. Using this novel method, we conducted an experiment with 24 subject-matter experts (SMEs) who reviewed SAR video footage as small teams and made group decisions, with or without AI support. Quantitative results showed that AI-assisted teams reached consensus faster than controls. Qualitative feedback revealed how participants interpreted trust cues, adapted strategies, and sometimes struggled with overload or conflicting detections. Findings illustrate how AI shapes teamwork in SAR and provide design implications for trustworthy distributed human-AI interactions.
The wetland quest: Fostering empathy and literacy for urban herpetofauna through VR wetland exploration
Lei Xia, Xiaomei Li, Jixiang Fan, Dan Li, Ling Fan
This paper investigates how virtual reality (VR) can foster empathy and ecological literacy for urban herpetofauna — reptiles and amphibians often overlooked in conservation. We present The Wetland Quest (TWQ), an immersive VR experience set in a Shanghai wetland that employs embodiment and scale-shift mechanics to situate users in the world of local species. In a mixed-methods study with 62 participants, TWQ significantly improved species literacy and attitudes toward herpetofauna, supported by large quantitative gains and qualitative themes of immersion, empathy, and reduced aversion. This work contributes to HCI and environmental communication by: (1) introducing TWQ as a design case of VR for underrepresented species; (2) providing empirical evidence that immersive perspective-taking can enhance literacy and pro-environmental attitudes; and (3) demonstrating a methodological protocol that combines knowledge tests, validated attitude scales, observations, and interviews, offering a transferable approach for future VR conservation research.
Understanding digital religion in the lives of Black Christian young adults
Alexa N. Smith, Nissi Otoo, Rohin Beach, Jessie Eaves, Kaiya Jennings, George Lyons, Dean Thompson, Teresa K. O'Leary, Terika McCall, Ihudiya Finda Ogbonnaya-Ogburu
Christian communities are increasingly using digital tools to engage their members. However, many young adults are moving away from traditional religious affiliations. This trend is notable among young adult Black Americans, who historically have maintained stronger religious identities than other racial groups. Given these converging trends of strong religious identity, increasing technology use, and declining traditional affiliation, we conducted an online survey and semi-structured interviews with Black Christians aged 18 to 25 to understand their techno-spiritual practices. We found that while many participants used technology for Bible study and worship, most still valued non-digital aspects of spiritual practice; when watching live-streamed worship, most participants did not actively engage online. Finally, we observed a growing interest in the use of generative AI for spiritual guidance and study. Our findings provide insights into the techno-spirituality and spiritual practices of a marginalized young adult population in the United States.
When hands meet physics in virtual reality: Effects of interaction fidelity on user experience
Christos Lougiakis, Doug Bowman, Akrivi Katifori, Maria Roussou
Physics governs everyday interaction, yet in Virtual Reality (VR) the fidelity of such interactions can diverge from reality. We investigate how Physical Fidelity (virtual object behavior) and Action Fidelity (virtual hand behavior) of physics-driven interaction shape user experience. In a within-subjects study (n = 34), participants performed gamified tasks under three conditions: No-Physics (lower Physical and Action Fidelity), Object-Physics (higher Physical, lower Action Fidelity), and Full-Physics (higher Physical and Action Fidelity). Results show that higher Physical Fidelity reduces task efficiency and increases overall workload, with the No-Physics condition outperforming the others on these metrics. Although efficiency worsens further in some cases, combining higher Physical Fidelity with higher Action Fidelity in the Full-Physics condition enhances body ownership and interaction quality. The hybrid Object-Physics condition consistently ranks lowest across all qualitative measures. Interpreting these results through the Interaction Fidelity Model, we offer design implications for VR applications.
When less can be more: Evaluating the impact of animated and interactive demonstrations in voice-assisted counting games for young children
Sulakna Karunaratna, Daniel Vargas-Diaz, Jisun Kim, Jenny Wang, Sang Won Lee, Koeun Choi
Early counting forms a critical foundation for numeracy, involving coordination of visual representations, verbal number words, and physical actions such as pointing. Designing effective technologies for young children therefore requires careful calibration of multimodal features. This study investigated how different levels of demonstration paired with a voice assistant — static (baseline: image+voice), animated (animation+voice), and interactive (touch+animation+voice) — influence counting-related understanding and engagement in 2- to 4-year-olds. We developed a tablet-based counting game and conducted a within-subjects study with 32 children. Results showed that the animated demonstration improved cardinal number word understanding over both the baseline and the interactive demonstration. Analyses of verbal counting engagement showed that concurrent touch demands increased cognitive load, limiting children’s counting aloud. These findings suggest that more interactivity does not always yield better outcomes for young learners. We contribute empirical evidence and design guidance: voice+animation supports early counting, while touch interactivity should be lightweight and age-appropriate, informing the design of multimodal voice-assisted applications.
Details of workshops
Herding CATs: Making sense of creative activity traces
Max Kreminski, John Joon Young Chung, Qian Yang, Noor Hammad, Shm Garanganao Almeda, Amy Smith, Kihoon Son, Sang Won Lee, Eric Rawn, J.D. Zamfirescu-Pereira
This workshop aims to advance the analysis of creative activity traces, particularly those captured through user interaction with software creativity support tools (CSTs). Traces of creative activity constitute a rich resource for identifying the impacts of CSTs — especially AI-based CSTs — on the creative process, and may also inform general-purpose process theories of creativity. Several new approaches to making sense of these traces have been introduced in the past few years, but many of these approaches have emerged from largely disjoint research communities, hindering the development of a shared analytical toolkit. We propose to gather HCI and creativity researchers, including proponents of several different trace analysis techniques, to sketch out a technique design space to guide future empirically grounded research on creativity support.
Details of workshop papers
AI-augmented human decision-making in secure space operations
Sadman Saif, Fatemeh Sarshartehrani, Muhammad Zeeshan Karamat, Yahia Tawfik, Bo Ji, Brendan David-John, Christiana Chamon Garcia
This position paper envisions an integrated, AI-augmented human-machine environment designed to help operators in space (e.g., astronauts) sustain balanced workloads and stable cognitive functioning under extreme operational pressure. The proposed system combines machine-learning-based cognitive state estimation, explainable telemetry anomaly and cyber-attack detection, a large language model for guidance and assistance, and augmented-reality (AR) Heads-up Display (HUD) interfaces that mediate information flow in real time. We have developed and evaluated components, including near-real-time cognitive load monitoring and lightweight, explainable anomaly detection, to tackle the security concerns. Building on this, our long-term goal is to design critical data visualizations and AI agent support for the HUD and systematically assess their effects, advancing our broader vision of AI-supported AR interfaces in safety and security-critical environments, such as space operations.
Deceptive patterns in immersive environments: How XR can expand markets but expose sensitive information
G. Nikki Alabanza, Salem Alabanza, Brendan David-John
Extended Reality (XR) offers a unique and immersive experience in which advertisers and developers can present products to consumers. Microtransactions exist in video games to increase the revenue games generate, giving players access to cosmetics, additional content, or game progression. However, such marketing strategies present a financial burden on players and, potentially, their families. By leveraging user behavior and biometric data collected by XR devices, advertisers and developers can infer user demographics and cognitive state, and present advertisements tailored to the user’s specific needs at the opportune time to make a sale. As developers and companies expand data collection and restrict customization of privacy settings, it is important to highlight these changes, as vulnerable populations — specifically children — are directly affected and exposed to privacy risks. In this position paper, we discuss advertising techniques present in existing immersive spaces (i.e., video games), posit opportunities for dark patterns and manipulative design in XR, and suggest directions for further research into possible exploitation in XR.
From explainable AI to human-centered system reliability: quantifying and visualizing calibrated trust in mission-critical XR
Matthew Wilchek, Kurt Luther, Feras A. Batarseh
Extended Reality (XR) systems increasingly integrate Artificial Intelligence (AI) to support professionals in high-risk settings such as search and rescue, law enforcement, and military operations. Yet common approaches to trust and explainability often rely on qualitative assessments or offline explanations that are poorly suited to embodied, time-critical work. This paper outlines an emerging perspective on how mission-critical XR systems may better support calibrated trust, resilient oversight, and situated explainability through human-centered system reliability (HCSR), a quantitative, user-parameterized estimate of reliability that evolves through accumulated evidence. Drawing on prior work in distributed XR and human-AI teaming, we describe three connected shifts: from generic trust to calibrated trust through updated reliability estimates; from seamlessness to resilience through human-centered oversight; and from transparency to situated explainability through lightweight spatial cues embedded in the XR interface. We conclude with implications and open challenges for reliability-aware XR systems.
Redistributing interdependence in organizational knowledge work: Lessons from deploying an LLM mediator in research labs
Sangwook Lee, Sang Won Lee
Large Language Models (LLMs) are increasingly embedded in collaborative workflows, yet their structural effects on the relationships between human stakeholders in data-intensive knowledge work remain underexplored. In this position paper, we reinterpret findings from a month-long field deployment of an LLM-based chatbot that mediates organizational memory across four university research labs (N=21) through the lens of Interdependence Theory. Our analysis reveals two key dynamics. First, the LLM redistributes the dependence structure between students and lab directors: while students gain autonomous access to institutional knowledge, directors lose visibility into knowledge gaps, shifting bilateral dependence toward unilateral dependence. This redistribution is moderated by organizational culture, amplifying mutual responsiveness in psychologically safe environments while dampening it where students default to private interaction. Second, the system fails to support the transformation of motivation needed for collaborative knowledge stewardship: students consistently avoid contributing to documentation due to role perceptions and temporal asymmetry between individual costs and collective benefits. We derive design implications including privacy-preserving awareness mechanisms, graduated contribution pathways, and designing for the LLM’s dual role as a boundary object across stakeholder groups.
Towards supporting mediators in human-agent collaboration
Anirban Mukhopadhyay, Kurt Luther
AI agents are increasingly positioned as collaborators in team settings, but their success depends on more than task performance. In this position paper, we argue that effective human-agent collaboration requires supporting the underlying social and cognitive processes that enable teamwork. Building on our empirical study of proactive AI agents in small-team, time-sensitive collaborative tasks, we highlight recurring challenges related to intervention content, timing, and position. We frame these challenges through the Input-Mediator-Output-Input (IMOI) model from organizational psychology, which emphasizes mediating processes such as trust, safety, shared mental models, transactive memory, and communication, among others. We show how breakdowns in these mediators explain why agents sometimes disrupt collaboration despite strong technical capabilities. We outline design considerations that can position agents as successful remote collaborators by supporting team processes like planning, structuring, and adaptation.
TUNING: Adaptive GUI modification as agent tools in task-oriented chatbots
Sangwook Lee, Sang Won Lee
Task-oriented chatbots can provide graphical user interface (GUI) components, such as button groups, calendars, and seat maps, alongside conversational interaction at each step of a structured workflow to facilitate user interaction. GUIs reduce cognitive load and increase efficiency by visually presenting available options and supporting structured choices that are difficult through conversation alone. In such GUI-augmented chatbots, when users express preferences or requests in natural language, a design question arises: how should the system bridge the gap between conversational user interaction and structured GUI interaction when both are available at the same time? Current approaches either fully automate UI operations on behalf of the user, removing user agency, or generate entirely new interfaces via LLMs, discarding the carefully designed original context. In this position paper, we propose an alternative approach: an agent layer that adaptively modifies existing GUIs to support decision-making when preferences are ambiguous or subjective, rather than making selections for the user, while performing standard GUI interactions on the user’s behalf when intent is clear. To demonstrate the approach, we present TUNING (Task-oriented UI Notation for Informed Nudging and Guidance), an LLM agent equipped with four lightweight GUI adaptation tools (highlight, sort, filter, and augment) that modify the visual presentation of existing UI components without requiring backend access. Through a movie ticket booking scenario, we illustrate how these adaptations provide visual cues that help users make informed decisions while maintaining their agency over the task. We argue that this “adapt before automate” approach occupies a middle ground that preserves user agency in the design space of GUI agents.
Visceral notices: Rethinking consent for passive sensing in augmented reality
Nissi Otoo, G. Nikki Ramirez, Evan Slinger, Shaun Foster, Brendan David-John
Our team has researched visceral interfaces: immersive VR/AR visualizations of gaze data that build on the concept of the visceral privacy notice. However, recent studies suggest a persistent gap in users’ protective data-sharing decisions. If an experiential, spatial, real-time visualization, the most immersive form of notice possible, still fails to translate privacy awareness into protective behavior, what does this imply for meaningful consent? In this position paper, we argue for consequence-showing interfaces that visualize what gaze data reveals, stakes-based default protections in high-risk contexts, and consent mechanisms triggered at inference generation rather than data collection.
Poster presentations
Human-AI interaction in IXR: Design considerations from experts
Shokoufeh Bozorgmehrian, Joseph L. Gabbard, Shakiba Davari, Doug Bowman
The integration of artificial intelligence (AI) into extended reality (XR) systems enables new forms of intelligent adaptation. However, enhancing user interaction in XR requires human-centered design decision-making. Despite increasing interest in intelligent XR (IXR), there is limited expert-driven understanding of how XR designers conceptualize useful intelligent adaptation across different task domains. This paper presents an exploratory qualitative study of XR experts’ design considerations for IXR systems. Data was collected through a roundtable session at the 1st Workshop on Intelligent XR: Harnessing AI for Next-Generation XR User Experiences (iXR), using a structured open-ended questionnaire and moderated group discussions. Fifteen XR experts developed and reflected on three scenario-based use cases representing industrial operation, social communication, and creative ideation. Our analysis synthesizes XR experts’ perspectives across IXR feature levels and task domains to support early-stage, domain-specific IXR system design.
Reframing ambiguity as discovery: Non-punitive feedback in tangible vocabulary learning
Siying Hu, Jian Ma, Xiangzhe Yuan, Jiajun Wang, Zhenhao Zhang, Emma Jiren Wang
Over the past decades, digital literacy tools have been widely used to facilitate vocabulary acquisition. More recently, with the rise of gamified learning, existing systems have often relied on binary correct-or-incorrect feedback mechanisms. However, little work has harnessed the ambiguity of learner errors as a resource for discovery rather than failure. To that end, we designed Vocabuild, a projection-augmented tangible interface that supports non-punitive feedback. We evaluated the system through a qualitative study with novice learners. Preliminary results showed that tangible manipulation reframed error-making as an embodied exploratory process, enabling users to engage in active hypothesis testing and meaning negotiation.
Speaker-aware affective captioning for multi-speaker STEM talk in inclusive classrooms
Sunday Ubur, Denis Gracanin, Enoch Akli, Fatemeh Sarshartehrani, Sikiru Adewale
Live captioning supports inclusive classrooms, yet typically collapses multi-speaker STEM (science, technology, engineering, mathematics) talk into a single text stream, obscuring who spoke and how it was said. We present Speaker-Aware Affective Captioning (SAAC), a front-end that pairs speaker-attributed captions with compact vocal-affect cues and an on-demand AI Describe summary to clarify intent. In a within-subject pilot study (n=16), SAAC improved comprehension and was preferred overall, while AI Describe served as a recovery scaffold and emotion tags as an optional “gist” layer. SAAC illustrates how to translate backend diarization, emotion recognition, and language modeling into a cognitively supportive caption view for deaf and hard-of-hearing (DHH) learners in rapid, multi-party instruction and discussion.
Unpacking empathy development in HCI learners: Patterns from diary reflections and peer discussions
Jixiang Fan, Wei-Lu Wang, Jiahui Song, Jiacheng Zhao, Lei Xia, D. Scott McCrickard
Understanding user experience and user needs is an essential goal in HCI education, yet novice students often struggle to recognize the diversity and complexity of user perspectives. This study investigates how empathy-related understanding unfolds during early HCI learning through diary reflections and peer discussions. Ten undergraduate students recorded their daily experiences using a fitness application and later shared their reflections in small-group discussions. We analyzed open-ended responses together with multi-stage Empathy in Design Scale data. Our findings center on three aspects. First, the four empathy dimensions (Emotional Interest, Sensitivity, Personal Experience, and Self-awareness) showed descriptive shifts rather than a single trend. Second, students gradually realized that their initial assumptions about user needs and design difficulty were insufficient. Third, they moved from a function-oriented view toward a more holistic understanding of product experience. These findings offer early insight into how reflective writing and peer dialogue influence the development of empathy.
Visualizing 30 years of CHI research with generative AI
Sunggyeol Oh, Andrew Katz, D. Scott McCrickard
CHI has grown from roughly 60 papers annually in 1996 to over 1,000 in 2025, creating a corpus too large for manual thematic tracking. We present a methodology that combines Generative AI-assisted coding, text embedding, and multi-stage clustering to organize 11,847 CHI paper abstracts into 921 themes under 26 meta-themes. Bootstrap analysis confirms stability (Adjusted Rand Index [ARI] = 0.91) and 100% corpus coverage. Two complementary visualizations reveal distinct aspects of field evolution: a streamgraph showing absolute volume changes and a heatmap showing proportional shifts. Together, they distinguish growth from shifting priorities. Human-AI interaction increased by 60 times in absolute terms and by 5 times proportionally. At the same time, User-Centered Design Research grew modestly in absolute terms while its relative share fell to roughly a third, suggesting maturation into a foundational practice. These patterns, invisible to manual analysis at this scale, demonstrate how Generative AI-powered methods can reveal the dynamics of scholarly evolution.