CHCI Contributions to IEEE VR 2026
March 12, 2026
The IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR) is the premier international event for the presentation of research results in the broad area of virtual, augmented, and mixed reality (VR/AR/MR). This year’s edition will take place March 21-24, 2026, in Daegu, Korea. IEEE VR brings together leaders, innovators, and influencers to disseminate the latest research and advancements from around the world.
Once again, CHCI members have made multiple contributions to IEEE VR. Our faculty and students have authored three full papers, a workshop paper, and a demonstration showcasing their research in virtual reality, extended reality (XR), and 3D user interfaces. Their work touches on topics such as scale navigation for XR presentations, evaluating additive models to predict task completion time for 3D interactions, breaking anonymity in XR, and VR manipulation techniques for aligning complex 3D objects. Members were also tapped to help organize workshops, sit on the service award committee, act as a forum panelist, and serve as a keynote speaker.
Leadership Roles
- International Program Committee: Lee Lisle
- Reviewers: Kiran Bagalkotkar, Juanita Benjamin, Doug Bowman, Brendan David-John, Alexander Giovannelli, Sungwon In, Logan Lane, Wallace Morris, G. Nikki Ramirez, Francielly Rodrigues, Ayush Roy, Maha Sajid, Ibrahim Tahmid
- VGTC Service Award Committee: Doug Bowman
- Future Faculty Forum Panelist: Doug Bowman
- IDEATExR Workshop Organizer: G. Nikki Ramirez
- XRIOS Workshop Co-Organizers: Myounghoon Jeon, Masiath Mubassira
- XRIOS Workshop Keynote: Doug Bowman
Scholarly Contributions
Full Papers
- Re-evaluating Virtual Reality Manipulation Techniques for Precise Alignment of Complex 3D Objects (Journal Paper)
- CHOP: Breaking Anonymity in XR through a Novel and Cost-effective Chain of Privacy Attacks and Differential Privacy-Based Defenses
- Evaluating the Viability of Additive Models to Predict Task Completion Time for 3D Interactions in Augmented Reality
Workshop Papers
- Audio-Visual Augmented Reality: Enhancing Visual Art Experiences through Sonification and Visualization
Research Demonstrations
- From Slides to Space: Interactive Scale Navigation for XR Presentation
Details of Full Papers
Re-evaluating Virtual Reality Manipulation Techniques for Precise Alignment of Complex 3D Objects (Journal Paper)
Cherelle Connor, Alexander Giovannelli, Leonardo Pavanatto, Francielly Rodrigues, Haichao Miao, Vuthea Chheang, Brian Giera, Peer-Timo Bremer, Doug A. Bowman
Prior research has developed a number of manipulation techniques that can achieve precise object placement in virtual reality, but studies of these techniques typically use simple objects. We conducted a study comparing two existing techniques, AMP-IT and WISDOM, during alignment of objects with complex geometry to evaluate the potential influence of geometric complexity on performance, usability, workload, and preference. Our findings indicate that participants had faster completion times and higher trial completion rates with AMP-IT on high-precision alignment tasks, contrary to earlier findings that used simple objects. Yet WISDOM was still preferred and considered more usable, despite increased workload and poorer performance, revealing participants’ willingness to trade objective performance for comfort during use.
CHOP: Breaking Anonymity in XR through a Novel and Cost-effective Chain of Privacy Attacks and Differential Privacy-Based Defenses
Ripan Kumar Kundu, Brendan David-John, Khaza Anuarul Hoque
The convergence of artificial intelligence (AI) and extended reality (XR) technologies (AIXR) promises innovative applications across many domains. However, the sensitive nature of data (e.g., eye-tracking) used in these systems also raises significant privacy concerns, as adversaries can exploit this data and these models to infer personal information. Prior research has primarily examined membership inference attacks (MIA), which leak privacy at the model level, and re-identification attacks (RDA), which do so at the dataset level, treating each as a separate attack. While these attacks are relevant to the XR domain, launching them individually is impractical and incurs higher attack cost. To address this gap, we present the first comprehensive study of chain of privacy (CHOP) attacks against AIXR applications. We demonstrate how adversaries can launch such attacks with a high success rate, in a cost-effective way, by sequentially combining MIA and attribute inference attacks (AIA) to re-identify XR users without access to raw XR data, training distributions, or model parameters. We evaluate our proposed method in realistic AIXR settings by adopting deep learning (DL)-based cybersickness detection as a representative AIXR application.
Specifically, we train two state-of-the-art DL models on two open-source datasets (Simulation 2021 and VRWalking) and a new XR cybersickness dataset constructed from 34 participants via a user study. Our findings reveal that the proposed CHOP attacks pose severe risks to DL-based cybersickness detection, achieving re-identification rates of up to 94% and 97% on the open-source and the developed cross-linked datasets, respectively, underscoring the feasibility and severity of cross-dataset privacy violations. Furthermore, cost analysis reveals that the proposed CHOP attack is ~2x more cost-effective than traditional individual attacks for re-identifying XR users. Finally, we propose two ε-differential privacy (DP)-based privacy-preserving mechanisms, Differentially Private Stochastic Gradient Descent (DPSGD) and Private Aggregation of Teacher Ensembles (PATE), to mitigate CHOP attacks. Our results show that the proposed defenses reduce the re-identification rate by up to 88% and 79% while maintaining high model utility, with classification accuracies of up to 94% and 92% for the same datasets using Transformer models.
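For readers unfamiliar with the DPSGD defense mentioned above, its core idea is to bound each training example's influence by clipping its gradient and then adding calibrated Gaussian noise before the parameter update. The sketch below illustrates one such step in plain NumPy; the model, gradients, and hyperparameters are hypothetical placeholders for illustration, not taken from the paper.

```python
# Minimal sketch of one DP-SGD step (per-example clipping + Gaussian noise).
# Hypothetical example; unrelated to the paper's Transformer models or data.
import numpy as np

def dp_sgd_step(weights, per_example_grads, clip_norm=1.0,
                noise_multiplier=1.1, lr=0.05, rng=np.random.default_rng(0)):
    """Apply one differentially private gradient update."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Clip each example's gradient so no single user dominates the update.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    # Average the clipped gradients, then add calibrated Gaussian noise.
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(clipped),
                       size=avg.shape)
    return weights - lr * (avg + noise)

# Toy usage: three per-example gradients for a two-parameter model.
w = np.zeros(2)
grads = [np.array([0.4, -1.5]), np.array([2.0, 0.3]), np.array([-0.7, 0.9])]
print(dp_sgd_step(w, grads))
```

Because the noise is scaled to the clipping bound, an attacker observing the trained model gains only a bounded amount of information about any one participant, which is what blunts re-identification chains like CHOP.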
Evaluating the Viability of Additive Models to Predict Task Completion Time for 3D Interactions in Augmented Reality
Logan Lane, Ibrahim Tahmid, Feiyu Lu, Doug A. Bowman
Additive models of interaction performance, such as the Keystroke-Level Model (KLM), allow designers to compare and optimize the performance of user interfaces by summing the predicted times for the atomic components of an interaction to estimate its total completion time. There has been extensive work on creating such additive models for 2D interfaces, but the approach has rarely been explored for 3D user interfaces. We propose a KLM-style additive model, based on existing atomic task models in the literature, to predict task completion time for 3D interaction tasks. We performed two studies to evaluate the feasibility of this approach across multiple input modalities, one using a simple menu selection task and the other a more complex manipulation task. We found that several of the models from the literature predicted actual task performance with less than 20% error in both the menu selection and manipulation studies. Overall, we found that additive models can predict both absolute and relative performance of input modalities with reasonable accuracy.
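To make the additive idea concrete, the sketch below sums per-operator times for a 3D task decomposed into atomic steps. The operator names and timings are invented for illustration and are not the models or values evaluated in the paper.

```python
# Minimal sketch of a KLM-style additive prediction.
# Operator names and timings are hypothetical placeholders.

# Hypothetical mean times (seconds) for atomic 3D interaction operators.
OPERATOR_TIMES = {
    "reach": 0.60,    # move hand/controller to the target
    "select": 0.25,   # trigger press or pinch
    "rotate": 1.10,   # coarse rotation of a grabbed object
    "release": 0.20,  # let go of the object
}

def predict_completion_time(operators: list[str]) -> float:
    """Predict total task time by summing the atomic operator times."""
    return sum(OPERATOR_TIMES[op] for op in operators)

# A grab-and-place task decomposed into atomic operators.
task = ["reach", "select", "rotate", "release"]
print(f"Predicted completion time: {predict_completion_time(task):.2f} s")
```

The appeal of this approach is that once per-operator times are calibrated for each input modality, designers can compare interface variants on paper, without running a new user study for every design.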
Details of Workshop Papers
Audio-Visual Augmented Reality: Enhancing Visual Art Experiences through Sonification and Visualization
Yeaji Lee, Yanlan Cai, Brady Li, Wallace Santos Lages, Myounghoon Jeon
Augmented reality technologies can extend the ways we appreciate art. In the current study, augmented audio and augmented visual features enabled viewers to appreciate artworks in greater depth and with new understanding. Twenty-four participants experienced each artwork under four conditions (with/without audio augmentation and with/without visual augmentation). Art appreciation time and interview data were collected. The results indicated that augmented visual elements led to significantly higher engagement time. Thematic analysis revealed that both augmented audio and augmented visual elements structurally aligned with the artworks, guiding viewers in how to explore and appreciate the paintings. These findings demonstrate that AR technologies can facilitate deeper, more structured engagement with artworks, while also supporting viewer-specific cognitive strategies for art appreciation.
Details of Research Demonstrations
From Slides to Space: Interactive Scale Navigation for XR Presentation
Matthew Gallagher, Mason Szczesniak, Francielly Rodrigues, Nakul Kumar, Jasmine Walker, Hamid Tarashiyoun, Doug A. Bowman
This paper presents our solution to the 2026 3DUI Contest challenge. Our approach features scale-based navigation and spatialized regions to traverse an immersive presentation, replacing traditional slide decks with embodied movement through a 3D virtual environment (VE). To demonstrate this approach, we designed an immersive presentation centered on the exploration of the solar system, where predetermined regions function as presentation slides. The user, acting as the presenter, delivers content by scaling and moving the VE over the course of the tour, allowing for both large-scale context and close inspection of details while maintaining clear instructional flow. The system integrates presenter-oriented tools commonly used in conventional presentations along with visual and auditory feedback from audience participation. The guided tour provides narrative segments comparable to presentation slides while allowing attendees to explore current and previously visited regions. This work explores how virtual reality (VR) can extend conventional presentation methods through spatial interaction and embodied navigation.
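As a rough illustration of what scale-based navigation involves geometrically, the sketch below uniformly scales world-space points about a pivot (for example, the presenter's viewpoint) so the pivot appears stationary while the world grows or shrinks around it. The function and values are hypothetical and are not the demo's actual implementation.

```python
# Minimal sketch of scale-based navigation: scaling the virtual environment
# about a pivot point. Hypothetical illustration, not the demo's code.
import numpy as np

def scale_world_about_pivot(points: np.ndarray, pivot: np.ndarray,
                            factor: float) -> np.ndarray:
    """Scale world-space points about a pivot; factor < 1 shrinks the world
    (the viewer seems to grow), factor > 1 enlarges it."""
    return pivot + factor * (points - pivot)

# Toy usage: shrink a 'solar system' region to half size around the viewer,
# keeping the viewer's position fixed.
viewer = np.array([0.0, 1.7, 0.0])       # viewer head position (meters)
region = np.array([[10.0, 1.0, -5.0],    # two landmark positions
                   [-3.0, 2.0, 8.0]])
print(scale_world_about_pivot(region, viewer, 0.5))
```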