CHCI’s Contributions to IEEE VR 2025
March 11, 2025

The IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR) is the premier international event for presenting research in virtual, augmented, and mixed reality. The 32nd edition will be held from March 8 to 12, 2025, in the historic city of Saint-Malo, France. This year's conference introduces a new track dedicated to artistic creation and emphasizes sustainability, reflecting the community's commitment to innovation and positive global impact.
The Center for Human-Computer Interaction (CHCI) has a strong presence at IEEE VR 2025, contributing to many aspects of the conference and receiving several prestigious awards. Joe Gabbard was inducted into the VR Academy, recognizing his impact on virtual reality research. Doug Bowman received the Service Award for his dedication and contributions to the community. Brendan David-John and Feiyu Lu received the Best Dissertation Award for their outstanding doctoral research.
CHCI members have authored five full papers, two posters, and three workshop papers, showcasing their research in virtual reality and 3D user interfaces. Their work spans topics such as adaptive co-piloting, collaborative virtual environments, multiscale navigation, immersive educational experiences, and augmented reality collaboration. Members were also part of program committees and served as workshop organizers.
Leadership Roles
Associate Program Chair: Joe Gabbard
Super Program Committee Member and IEEE VGTC Virtual Reality Significant New Researcher Award Chair: Ryan P. McMahan
Fifth Workshop on Inclusion, Diversity, Equity, Accessibility, Transparency and Ethics in XR (IDEATExR): Lee Lisle, G. Nikki Ramirez-Saffy (Organizers)
Future Faculty Program: Lee Lisle (Organizer); Joe Gabbard (Mentor); Doug Bowman (Mentor)
Poster Committee Member: Lee Lisle and Ibrahim Tahmid
Conference Committee Member: Lee Lisle and Ibrahim Tahmid
Scholarly Contributions
Full Papers:
Posters:
Workshop Papers:
Awards
Joseph Gabbard inducted into VR Academy

Professor Joseph Gabbard was inducted into the IEEE VGTC Virtual Reality Academy, a prestigious academy that recognizes the accomplishments of leaders in the field.
In the mid-2000s, Prof. Gabbard’s work in AR interface design led to research aimed at better understanding visual perception while using optical see-through AR displays. For example, he was the first to systematically consider the usability implications of color blending, a term he coined that has since become commonplace. Throughout his career, Prof. Gabbard has conducted many outdoor AR studies and explored a range of outdoor AR applications, including automotive AR, emergency response, bridge inspection, and search and rescue.
Prof. Gabbard has served in many leadership roles in the combined IEEE VR and ISMAR communities. He was general chair of IEEE ISMAR 2021, a program chair of ISMAR 2018, 2019, 2022 and 2023, and a program chair of VR 2019 and 2020. He served on the IEEE ISMAR steering committee and was co-chair for a 2023 satellite event for IEEE VR in Blacksburg, Virginia.
Doug Bowman receives the Service Award

The 2025 IEEE VGTC Virtual Reality Service Award goes to Doug A. Bowman of Virginia Tech, in recognition of his many years of service contributions to the VR/AR academic community. Prof. Bowman has served in leadership roles for the IEEE 3DUI, IEEE VR, and IEEE ISMAR conferences, including as one of the founding chairs of the IEEE Symposium on 3D User Interfaces (IEEE 3DUI). 3DUI was subsequently incorporated into the IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR), of which Prof. Bowman was twice a general chair.
Brendan David-John receives the Best Dissertation Award

Dr. Brendan David-John, assistant professor of Computer Science, received the VR Doctoral Dissertation Award, which recognizes outstanding academic research and development in the field.
The goal of Dr. David-John’s thesis is to provide provable privacy guarantees that protect against re-identification at various points of the eye-tracking pipeline while balancing the utility of downstream XR applications. The research results and methods have a direct impact on virtual, augmented, and mixed-reality (XR) devices and datasets that depend on gaze data produced by eye trackers. The privacy mechanisms developed apply to sensors critical to the future of XR and provide formal methods that protect against concerns raised by fast-advancing AI techniques for making inferences from human data.
Feiyu Lu receives the Best Dissertation Award

Dr. Feiyu Lu, a 2023 Ph.D. graduate in Computer Science who worked in the 3D Interaction Group with Doug Bowman, also received the VR Doctoral Dissertation Award.
Dr. Lu’s dissertation proposes methods for appropriate information displays and interactions with future all-day augmented reality head-worn displays (AR HWDs) by seeking answers to four questions: 1) how to mitigate the distraction AR content poses to users; 2) how to prevent AR content from occluding the real-world environment; 3) how to support scalable, on-the-go access to AR content; and 4) how everyday users perceive using AR systems for daily information acquisition tasks. The work builds upon a theory he developed called Glanceable AR, in which digital information is displayed outside the central field of view of the AR display. It distills valuable insights on how to enable non-distracting and easily accessible information displays in AR HWDs, how to empower users to interact with this information, and how to develop AR systems that enable ubiquitous and pervasive personal information overlay on top of the physical world in authentic everyday contexts.
AdaptiveCoPilot: Design and Testing of a Neuroadaptive System for Pilot Performance Augmentation
Shaoyue Wen, Michael Middleton, Songming Ping, Nayan N. Chawla, Guande Wu, Bradley S. Feest, Chihab Nadri, Yunmei Liu, David Kaber, Maryam Zahabi, Ryan P. McMahan, Sonia Castelo, Ryan McKendrick, Jing Qian, Claudio T. Silva

Pilots operating modern cockpits often face high cognitive demands due to complex interfaces and multitasking requirements, which can lead to overload and decreased performance. This study introduces AdaptiveCoPilot, a neuroadaptive guidance system that adapts visual, auditory, and textual cues in real time based on the pilot's cognitive workload, measured via functional Near-Infrared Spectroscopy (fNIRS). A formative study with expert pilots (N=3) identified adaptive rules for modality switching and information load adjustments during preflight tasks. These insights informed the design of AdaptiveCoPilot, which integrates cognitive state assessments, behavioral data, and adaptive strategies within a context-aware Large Language Model (LLM). The system was evaluated in a virtual reality (VR) simulated cockpit with licensed pilots (N=8), comparing its performance against baseline and random feedback conditions. The results indicate that pilots using AdaptiveCoPilot exhibited higher rates of optimal cognitive load states on the facets of working memory and perception, along with reduced task completion times. Based on the formative study, experimental findings, and qualitative interviews, we propose a set of strategies for the future development of neuroadaptive pilot guidance systems and highlight the potential of neuroadaptive systems to enhance pilot performance and safety in aviation environments.
Investigating the Influence of Playback Interactivity during Guided Tours for Asynchronous Collaboration in Virtual Reality
Alexander Giovannelli, Leonardo Pavanatto, Shakiba Davari, Haichao Miao, Vuthea Chheang, Brian Giera, Peer-Timo Bremer, Doug Bowman

Collaborative virtual environments allow workers to contribute to team projects across space and time. While much research has closely examined the problem of working in different spaces at the same time, few studies have investigated best practices for collaborating in those spaces at different times beyond textual and auditory annotations. We designed a system that allows experts to record a tour inside a virtual inspection space, preserving knowledge and providing later observers with insights through a 3D playback of the expert's inspection. We also created several interactions to ensure that observers are tracking the tour and remaining engaged. We conducted a user study to evaluate the influence of these interactions on an observing user's information recall and user experience. Findings indicate that independent viewpoint control during a tour enhances the user experience compared to fully passive playback and that additional interactivity can improve auditory and spatial recall of key information conveyed during the tour.
Exploring Multiscale Navigation of Homogeneous and Dense Objects with Progressive Refinement in Virtual Reality
Leonardo Pavanatto, Alexander Giovannelli, Brian Giera, Peer-Timo Bremer, Haichao Miao, Doug Bowman

Locating small features in a large, dense object in virtual reality (VR) poses a significant interaction challenge. While existing multiscale techniques support transitions between various levels of scale, they are not focused on handling dense, homogeneous objects with hidden features. We propose a novel approach that applies the concept of progressive refinement to VR navigation, enabling focused inspections. We conducted a user study in which we varied two independent variables in our design, navigation style (Structured vs. Unstructured) and display mode (Selection vs. Everything), to better understand their effects on efficiency and awareness during multiscale navigation. Our results showed that unstructured navigation can be faster than structured navigation and that displaying only the selection can be faster than displaying the entire object. However, the Everything display mode can support better location awareness and object understanding.
Spatial Bar: Exploring Window Switching Techniques for Large Virtual Displays
Leonardo Pavanatto, Jens Grubert, Doug A. Bowman

Virtual displays provided through head-worn displays (HWDs) offer users large screen space for productivity, but managing this space effectively presents challenges. Existing research shows that while increased screen space improves performance, it can also introduce significant window management overhead. This paper explores how to enhance window-switching strategies for virtual displays by leveraging eye tracking provided by HWDs and underutilized spaces around the main display area. We investigate the efficiency and usability of different cursor behaviors and selection modes in a Spatial Bar interface for window-switching tasks in augmented reality environments. Our study involved two primary selection modes, Gaze and Cursor, each tested with two cursor behaviors, Teleport and Stay. We measured objective performance metrics, including task completion time and error rates, and subjective evaluations using the NASA TLX and custom questionnaires. Results show that Cursor Stay, while familiar and comfortable for participants, was laborious and led to longer task completion times, particularly over large distances. Gaze Teleport led to the quickest window-switching times, particularly in tasks where the original cursor position or the target window was far from the Spatial Bar.
“Just stop doing everything for now!”: Understanding security attacks in remote collaborative mixed reality
Maha Sajid, Syed Ibrahim Mustafa Shah Bukhari, Bo Ji, Brendan David-John

Mixed Reality (MR) devices are being increasingly adopted across a wide range of real-world applications, from education and healthcare to remote work and entertainment. However, the unique immersive features of MR devices, such as 3D spatial interactions and the encapsulation of virtual objects by invisible elements, introduce new vulnerabilities leading to interaction obstruction and misdirection. We implemented latency, click redirection, object occlusion, and spatial occlusion attacks within a remote collaborative MR platform using the Microsoft HoloLens 2 and evaluated user behavior and mitigations through a user study. We compared responses to MR-specific attacks, which exploit the unique characteristics of remote collaborative immersive environments, and traditional security attacks implemented in MR. Our findings indicate that users generally exhibit lower recognition rates for immersive attacks (e.g., spatial occlusion) compared to attacks inspired by traditional ones (e.g., click redirection). Our results demonstrate a clear gap in user awareness and responses when collaborating remotely in MR environments. Our findings emphasize the importance of training users to recognize potential threats and of enhancing security measures to maintain trust in remote collaborative MR systems.
Planet Purifiers: A Collaborative Immersive Experience Proposing New Modifications to HOMER and Fishing Reel Interaction Techniques
Alexander Giovannelli, Fionn Murphy, Trey Davis, Chaerin Lee, Rehema Abulikemu, Matthew Gallagher, Sahil Sharma, Lee Lisle, Doug Bowman

This paper presents our solution to the 2025 3DUI Contest challenge. We aimed to develop a collaborative, immersive experience that raises awareness about trash pollution in natural landscapes while enhancing traditional interaction techniques in virtual environments. To achieve these objectives, we created an engaging multiplayer game in which one user collects harmful pollutants while the other provides medication to impacted wildlife, using enhancements to two traditional interaction techniques: HOMER and Fishing Reel. We enhanced HOMER to use a cone volume that reduces the precise aiming required by a selection raycast, providing a more efficient means to collect pollutants at large distances; we coined this enhancement Flow-Match. To improve the distribution of animal feed to wildlife far from the user with Fishing Reel, we created RAWR-XD, an asymmetric bimanual technique that adjusts the reeling speed more conveniently using rotation of the user's non-selecting wrist.
Evaluating the Impact of Sonification in an Immersive Analytics Environment Using Real-World Geophysical Datasets
Disha Sardana, Lee Lisle, Denis Gracanin, Ico Bukvic, Kresimir Matkovic, Gregory Earle

In this paper, we evaluated the impact of audio in an immersive analytics environment using real-world geophysical datasets. We specifically used sonification, i.e., the use of non-speech audio to convey information. To evaluate the impact of sonification, we designed a between-subjects experiment in a mixed-reality environment and conducted a user study with 50 participants in two scenarios: Audio-visual and Visual-only. In the study, we compared task metrics such as the number of patterns identified by the participants, the level of confidence, participants' task responses, the NASA Task Load Index, and the SUS questionnaire between the scenarios to study the role of sonification in augmenting the analysis process in an immersive environment. We found that event-based sonification, used in addition to the visual channel, is helpful in finding patterns and relations in geophysical datasets. Our results also suggest that using audio in immersive analytics might increase users' confidence in performing analytics tasks such as pattern finding. Using real-world datasets, we identified the advantages and limitations of using sonification in an immersive analytics context.
Exploring the Effects of Level of Control in the Initialization of Shared Whiteboarding Sessions in Collaborative Augmented Reality
Logan Lane, Jerald Thomas, Alexander Giovannelli, Ibrahim Tahmid, Doug Bowman

Augmented Reality (AR) collaboration can benefit from a shared 2D surface, such as a whiteboard. However, many features of each collaborator's physical environment must be considered in order to determine the best placement and shape of the shared surface. We explored the effects of three methods for beginning a collaborative whiteboarding session with varying levels of user control (Manual, Discrete Choice, and Automatic) by conducting a simulated AR study within Virtual Reality (VR). In the Manual method, users draw their own surfaces directly in the environment until they agree on the placement; in the Discrete Choice method, the system provides three options for whiteboard size and location; and in the Automatic method, the system automatically creates a whiteboard that fits within each collaborator's environment. We evaluated these three conditions in a study in which two collaborators used each method to begin collaboration sessions. After establishing a session, the users worked together to complete an affinity diagramming task using the shared whiteboard. We found that the majority of participants preferred to have direct control during the initialization of a new collaboration session, despite the additional workload induced by the Manual method.
From Voices to Worlds: Developing an AI-Powered Framework for 3D Object Generation in Augmented Reality
Majid Behravan, Denis Gracanin

This paper presents Matrix, an advanced AI-powered framework designed for real-time 3D object generation in Augmented Reality (AR) environments. By integrating a cutting-edge text-to-3D generative AI model, multilingual speech-to-text translation, and large language models (LLMs), the system enables seamless user interactions through spoken commands. The framework processes speech inputs, generates 3D objects, and provides object recommendations based on contextual understanding, enhancing AR experiences. A key feature of this framework is its ability to optimize 3D models by reducing mesh complexity, resulting in significantly smaller file sizes and faster processing on resource-constrained AR devices. Our approach addresses the challenges of high GPU usage, large model output sizes, and real-time system responsiveness, ensuring a smoother user experience. Moreover, the system is equipped with a pre-generated object repository, further reducing GPU load and improving efficiency. We demonstrate the practical applications of this framework in various fields such as education, design, and accessibility, and discuss future enhancements including image-to-3D conversion, environmental object detection, and multimodal support. The open-source nature of the framework promotes ongoing innovation and its utility across diverse industries.
The Value of Immersion in Co-present, Collaborative Safety Review
Nicholas F. Polys, Ayat Mohammed, Ashley Johnson, Nazila Roofigari-Esfahan

Young professionals are taught and trained in many ways; educators are continually seeking better ways to deliver engaging and relevant content. The field of Building and Construction is no exception, as undergraduate education there faces a gap between classroom and practice. In this paper, we describe an ongoing multidisciplinary collaboration focused on using Virtual and Augmented Reality to provide authentic educational experiences. Specifically, we created several cross-platform lab exercises that take students into dangerous construction situations where they must identify a number of safety violations. The students rotated through several visualization venues, each with a different level of immersion: from a laptop to a projector to a CAVE with three walls and a floor. Subjective survey data from the students indicates a significant positive effect of larger, interactive immersive displays.