CHCI@VT Research at IEEE ISMAR 2025
October 7, 2025

Several CHCI@VT faculty and students will present their research at the 24th IEEE International Symposium on Mixed and Augmented Reality (ISMAR 2025), October 8-12, in Daejeon, South Korea, contributing numerous conference roles, 6 research papers, 3 workshop papers, and 1 demonstration. IEEE ISMAR is the premier conference for Augmented Reality (AR), Mixed Reality (MR), and Virtual Reality (VR), attracting the world's leading researchers from both academia and industry. ISMAR explores advances in commercial and research activities related to AR, MR, and VR, and has continued to expand its scope over the past several years.
CHCI@VT faculty and students are bolded.
Conference Roles
- General Chairs: Joseph L. Gabbard
- Associate Paper Chairs: Brendan David-John
- International Program Committee Members: Ryan P. McMahan
- IDEATExR Workshop Organizers: G. Nikki Ramirez
- TRUST-XR Workshop Organizers: Brendan David-John
- XRWORKS Keynote Speaker: Doug A. Bowman
List of Research Papers
- Augmented Reality Visualization Techniques for Search and Rescue: Findings from a User Study with Subject Matter Experts
- Exploring Organizational Strategies in Immersive Computational Notebooks
- MAGIC: A Method for Analyzing the Grammar of Incomplete Cues
- Measuring Rotational Inertia in HMDs: Calculation of Torque as an Unobtrusive Indicator of Expended Effort in Virtual Environments
- Revisiting Performance Models of Distal Pointing Tasks in Virtual Reality
- Visceral Notices and Privacy Mechanisms for Eye Tracking in Augmented Reality
List of Workshop Papers
- Rethinking Privacy Indicators in Extended Reality: Multimodal Design for Situationally Impaired Bystanders
- Simulations for Augmented Reality Evaluation of Tools for Mass Casualty Incident Triage
- Traveling from Fiction to Future: Ethical Design Principles for AI-Integrated XR Workplaces
List of Demonstrations
- Demonstration of Visceral Notices and Privacy Mechanisms for Eye Tracking in Augmented Reality
Details of Research Papers
Augmented Reality Visualization Techniques for Search and Rescue: Findings from a User Study with Subject Matter Experts
Kelsey Quinn, Joseph L. Gabbard, Enricoandrea Laviola, John Luksas

Our research explored the use of Whiskers, Wedge, and Compass augmented reality (AR) visualization techniques (VTs) in an outdoor field-based user study conducted with Search and Rescue (SAR) subject matter experts (SMEs). The goal of this study was to understand what hazard-based out-of-view information SAR responders need and how SAR SMEs prefer that information to be presented. We investigated why certain design elements and preattentive properties of these visualization techniques are most preferred for use during a SAR mission. A key aim was to gain insight into which AR design elements can help guide SAR responders away from dangerous out-of-view hazards while still allowing them to efficiently search for victims. Through semi-structured interviews with SMEs and thematic analysis of the data, we derived a set of AR VT design guidelines specifically applicable to dynamic, potentially dangerous environments. We additionally identified how SAR SMEs envision these AR VTs fitting into the current SAR ecosystem.
Exploring Organizational Strategies in Immersive Computational Notebooks
Sungwon In, Ayush Roy, Eric Krokos, Kirsten Whitley, Chris North, Yalong Yang

Computational notebooks, which integrate code, documentation, tags, and visualizations into a single document, have become increasingly popular for data analysis tasks. With the advent of immersive technologies, these notebooks have evolved into a new paradigm, enabling more interactive and intuitive ways to perform data analysis. The Immersive Computational Notebook (ICoN), which integrates computational notebooks within an immersive environment, significantly enhances navigation performance through embodied interactions. However, although organizational strategies are recognized as significant in the immersive data science process, the organizational strategies used in ICoNs remain largely unexplored. In response, our research aims to deepen our understanding of spatial structures for computational notebooks and to examine how various execution orders can be visualized in an immersive context. Through an exploratory user study, we found that participants preferred organizing in half-cylindrical structures and engaged significantly more in non-linear analysis. This suggests a shift in how data analysts manage computational notebooks within immersive environments.
MAGIC: A Method for Analyzing the Grammar of Incomplete Cues
Xinyu Hu, Joseph LaViola, Ryan P. McMahan

Augmented reality (AR) and virtual reality (VR) applications commonly employ interaction cues that indicate to the user which interaction to take. In this paper, we present a Method for Analyzing the Grammar of Incomplete Cues (MAGIC), which provides an approach for evaluating the design of interaction cues based on the completeness or incompleteness of the functional grammar they convey through perceptual stimuli. To demonstrate the importance of complete cues, we also present a user study investigating the effects of complete and incomplete cues on which interactions participants choose. The results indicate that incomplete cues do not afford sufficient information, so users make assumptions about the intended interactions. Furthermore, the results indicate that users are more likely to choose intended interactions when the cues are complete. Hence, we present MAGIC as a potentially useful tool for helping interaction designers avoid usability issues with incomplete interaction cues.
Measuring Rotational Inertia in HMDs: Calculation of Torque as an Unobtrusive Indicator of Expended Effort in Virtual Environments
Jared Van Dam, Matt Werner, Kyle Tanous, Joseph L. Gabbard

In this paper, we detail a novel method for calculating torque levels generated by users wearing a head-mounted display (HMD). This method considers not only the movement from a given user but also the weight characteristics of the specific configured headset. In this way, we can calculate a user's expended effort during a specific time frame to better understand that effort as well as the risk of fatigue or injury while using augmented and virtual reality HMDs. To illustrate the applicability of the method, we applied it in an initial user study analyzing torque and force during a basic target selection task and determined that (1) both force and torque decrease significantly over time as participants tire, and (2) the lighter HMD incurred lower levels of force and torque throughout the various trials. This research substantially furthers the practice of human factors and usability in augmented and virtual reality and fills a need within the community by adding to the extant literature an easy-to-use and well-documented method that can determine the effect of the total HMD system (HMD hardware, HMD interface, and task elements) on end users over both long- and short-duration tasks.
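The paper's exact formulation is not reproduced here, but the underlying physics is rotational dynamics: torque as moment of inertia times angular acceleration (τ = I·α), estimated from head-tracking samples. A minimal sketch under those assumptions might look like the following; the sampling rate, inertia constant, and function names are illustrative, not the authors' implementation.

```python
import numpy as np

def estimate_torque(yaw_angles, dt, moment_of_inertia):
    """Estimate rotational torque about one head axis from tracked angles.

    yaw_angles: array of head yaw samples in radians
    dt: sampling interval in seconds
    moment_of_inertia: assumed combined head + HMD inertia (kg*m^2)
    """
    angular_velocity = np.gradient(yaw_angles, dt)     # rad/s
    angular_accel = np.gradient(angular_velocity, dt)  # rad/s^2
    return moment_of_inertia * angular_accel           # N*m, tau = I * alpha

# Illustrative use: 90 Hz tracker, synthetic head sweep, assumed inertia value
dt = 1.0 / 90.0
t = np.arange(0.0, 2.0, dt)
yaw = 0.5 * np.sin(2.0 * np.pi * 0.5 * t)
torque = estimate_torque(yaw, dt, moment_of_inertia=0.035)
print(f"peak torque: {np.abs(torque).max():.3f} N*m")
```

A heavier headset raises the moment of inertia, so the same head motion yields higher torque, which is consistent with the study's finding that the lighter HMD incurred lower force and torque.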
Revisiting Performance Models of Distal Pointing Tasks in Virtual Reality
Logan Lane, Feiyu Lu, Shakiba Davari, Robert J. Teather, Doug A. Bowman

Performance models of interaction, such as Fitts’ law, are important tools for predicting and explaining human motor performance and for designing high-performance user interfaces. Extensive prior work has proposed such models for the 3D interaction task of distal pointing, in which the user points their hand or a device at a distant target in order to select it. However, there is no consensus on how to compute the index of difficulty for distal pointing tasks. We present a preliminary study suggesting that existing models may not be sufficient to model distal pointing performance with current virtual reality technologies. Based on these results, we hypothesized that both the form of the model and the standard method for collecting empirical data for pointing tasks might need to change in order to achieve a more accurate and valid distal pointing model. In our main study, we used a new methodology to collect distal pointing data and evaluated traditional models, purely ballistic models, and two-part models. Ultimately, we found that the best model used a simple Fitts’-law-style index of difficulty with angular measures of amplitude and width.
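For readers unfamiliar with the form of such models, a minimal sketch of a Shannon-style Fitts' index of difficulty computed from angular measures is shown below; the exact variant and fitted constants used in the paper are not specified here, so the values are purely illustrative.

```python
import math

def angular_index_of_difficulty(amplitude_deg, width_deg):
    """Shannon-style Fitts' ID with angular amplitude and width (degrees)."""
    return math.log2(amplitude_deg / width_deg + 1.0)

def predicted_movement_time(a, b, amplitude_deg, width_deg):
    """Linear Fitts'-law prediction MT = a + b * ID; a and b are fitted constants."""
    return a + b * angular_index_of_difficulty(amplitude_deg, width_deg)

# Illustrative values only: a 30-degree arc to a 2-degree-wide target
print(angular_index_of_difficulty(30.0, 2.0))  # 4.0 bits
```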
Visceral Notices and Privacy Mechanisms for Eye Tracking in Augmented Reality
Nissi Otoo, Kailon Blue, G. Nikki Ramirez, Evan Selinger, Shaun Foster, Brendan David-John

Head-worn augmented reality (AR) continues to evolve through critical advancements in power optimizations, AI capabilities, and naturalistic user interactions. Eye-tracking sensors play a key role in these advancements. At the same time, eye-tracking data is not well understood by users and can reveal sensitive information. Our work contributes visualizations based on visceral notice to increase privacy awareness of eye-tracking data in AR. We also evaluated user perceptions towards privacy noise mechanisms applied to gaze data visualized through these visceral interfaces. While privacy mechanisms have been evaluated against privacy attacks, we are the first to evaluate them subjectively and understand their influence on data-sharing attitudes. Despite our participants being highly concerned with eye-tracking privacy risks, we found 47% of our participants still felt comfortable sharing raw data. When applying privacy noise, 70% to 76% felt comfortable sharing their gaze data for the Weighted Smoothing and Gaussian Noise privacy mechanisms, respectively. This implies that participants are still willing to share raw gaze data even though overall data-sharing sentiments decreased after experiencing the visceral interfaces and privacy mechanisms. Our work implies that increased access to and understanding of privacy mechanisms are critical for gaze-based AR applications; further research is needed to develop visualizations and experiences that relay additional information about how raw gaze data can be used for sensitive inferences, such as age, gender, and ethnicity. We intend to open-source our codebase to provide AR developers and platforms with the ability to better inform users about privacy concerns and provide access to privacy mechanisms.
Details of Workshop Papers
Rethinking Privacy Indicators in Extended Reality: Multimodal Design for Situationally Impaired Bystanders
Syed Ibrahim Mustafa Shah Bukhari, Maha Sajid, Bo Ji, Brendan David-John

As Extended Reality (XR) devices become increasingly prevalent in everyday settings, they raise significant privacy concerns for bystanders: individuals in the vicinity of an XR device during its use, whom the device sensors may accidentally capture. Current privacy indicators, such as small LEDs, often presume that bystanders are attentive enough to interpret the privacy signals. However, these cues can be easily overlooked when bystanders are distracted or have limited vision. We define such individuals as situationally impaired bystanders. This study explores XR privacy indicator designs that are effective for situationally impaired bystanders. A focus group with eight participants was conducted to design five novel privacy indicators. We evaluated these designs through a user study with seven additional participants. Our results show that visual-only indicators, typical in commercial XR devices, received low ratings for perceived usefulness in impairment scenarios. In contrast, multimodal indicators were preferred in privacy-sensitive scenarios with situationally impaired bystanders. Ultimately, our results highlight the need to move toward adaptable, multimodal, and situationally aware designs that effectively support bystander privacy in everyday XR environments.
Simulations for Augmented Reality Evaluation of Tools for Mass Casualty Incident Triage
Cassidy R. Nelson, Joseph L. Gabbard, Jason B. Moats, Ranjana K. Mehta

Mass casualty incidents (MCIs) are a high-risk, sensitive domain with profound implications for patient and responder safety. Augmented reality has shown promise as an assistive tool for high-stress work domains and MCI triage, both in the field and for pre-field training. However, the vulnerability of MCIs makes it challenging to evaluate new tools designed to enhance MCI response. In other words, profound evolutions like the integration of augmented reality into field response require thorough proof-of-concept evaluations before being launched into real-world response. This paper describes two progressive simulation strategies for augmented reality that bridge the gap between computer-based simulation and actual field response.
Traveling from Fiction to Future: Ethical Design Principles for AI-Integrated XR Workplaces
Esha Mahendran

As extended reality (XR) technologies become increasingly integrated into workplace environments, ethical and human-centered design considerations grow more critical for society. Fictional narratives such as Psycho-Pass, Detroit: Become Human, and Black Mirror provide illustrations of artificial intelligence (AI)-integrated systems that adapt to context, sense emotions, and mediate human behavior. Drawing on these speculative sources, this paper proposes a framework of ethical design strategies for XR workplaces in three domains: adaptive interfaces, multimodal feedback, and embodied cognitive support. By bridging fiction and design, the paper advocates for emotionally responsive systems that enhance user agency and well-being while avoiding the pitfalls of surveillance, overload, and autonomy mismanagement. The goal is to reframe dystopian warnings into constructive guidelines that promote inclusive, trustworthy, and sustainable XR work environments.
Details of Demonstrations
Demonstration of Visceral Notices and Privacy Mechanisms for Eye Tracking in Augmented Reality
Nissi Otoo, Kailon Blue, G. Nikki Ramirez, Evan Selinger, Shaun Foster, Brendan David-John

We demonstrate visceral interfaces (VIs) and privacy mechanisms that make eye tracking in augmented reality (AR) more transparent and understandable through data visualization. VIs are visual overlays that indicate when and how gaze data is collected, designed to increase privacy awareness. We implement three privacy mechanisms (Gaussian noise, weighted smoothing, and temporal downsampling) that perturb gaze data and visualize their impact on user perceptions of data sharing. The demo runs on Magic Leap 2 and includes an art gallery and a gaze selection task scenario. Participants explore combinations of VIs and privacy mechanisms, contributing to more transparent, privacy-aware AR systems.
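As a rough illustration of how such mechanisms operate on a gaze stream, here is a minimal sketch of the three perturbations named above. The parameter values, array layout, and function names are assumptions for illustration, not the demo's actual implementation.

```python
import numpy as np

def gaussian_noise(gaze, sigma_deg=1.0):
    """Add zero-mean Gaussian noise to (N, 2) gaze angles in degrees."""
    return gaze + np.random.normal(0.0, sigma_deg, gaze.shape)

def weighted_smoothing(gaze, window=5):
    """Moving-average smoothing over a sliding window of samples."""
    kernel = np.ones(window) / window
    return np.column_stack([
        np.convolve(gaze[:, i], kernel, mode="same") for i in range(gaze.shape[1])
    ])

def temporal_downsampling(gaze, keep_every=4):
    """Keep every k-th sample, reducing the effective sampling rate."""
    return gaze[::keep_every]

# Illustrative use on a synthetic random-walk gaze trace
gaze = np.cumsum(np.random.normal(0, 0.2, (200, 2)), axis=0)
noisy = gaussian_noise(gaze)
smoothed = weighted_smoothing(gaze)
sparse = temporal_downsampling(gaze)
```

Each mechanism trades gaze-data fidelity for privacy in a different way: noise masks exact fixation points, smoothing blurs rapid eye movements, and downsampling limits how much temporal behavior can be reconstructed.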