
CHCI Research Featured at ISMAR 2024

October 21 - 25, 2024

A vibrant and artistic poster for IEEE ISMAR 2024, featuring a scenic view of Seattle with the Space Needle, Mt. Rainier in the background, and the city skyline stylized with intricate, colorful Pacific Northwest Indigenous artwork. The text "IEEE ISMAR 2024" and "Greater Seattle Area" are prominently displayed at the top. The overall aesthetic combines modern architecture with cultural art elements.

CHCI faculty and students are making significant contributions to the 23rd IEEE International Symposium on Mixed and Augmented Reality (ISMAR), which will be held in the Greater Seattle Area, Washington, USA, from October 21 to 25, 2024.

ISMAR is the premier conference for Augmented Reality (AR), Mixed Reality (MR), and Virtual Reality (VR), attracting the world’s leading researchers from both academia and industry. Having steadily expanded its scope over the past several years, ISMAR covers advances in both commercial and research activities related to AR, MR, and VR.

Multiple CHCI faculty and students are serving in leadership roles, and CHCI members have contributed to six papers:

  1. Cultural Reflections in Virtual Reality: The Effects of User Ethnicity in Avatar Matching Experiences on Sense of Embodiment

  2. Investigating Object Translation in Room-scale, Handheld Virtual Reality

  3. Cross-Domain Gender Identification Using VR Tracking Data

  4. Visceral Interfaces for Privacy Awareness of Eye Tracking in VR

  5. Augmented Reality Visualization Techniques for Attention Guidance to Out-of-View Objects: A Systematic Review

  6. Addressing Human Factors Related to Artificial Intelligence Integrated Visual Cueing (iXR Workshop Paper)

Leadership Roles

  1. ISMAR Career Impact Award Committee: Doug Bowman

  2. S&T Programme Committee: Brendan David-John, Lee Lisle, Ryan P. McMahan

  3. Doctoral Consortium: Lee Lisle (Co-Chair), Doug Bowman (Mentor)

  4. Future Faculty Forum: Ryan P. McMahan (Tutorial, Panelist), Doug Bowman (Panelist)

  5. 1st Workshop on Intelligent XR: Harnessing AI for Next-Generation XR User Experiences. Organizers: Joe Gabbard, Doug Bowman, Shakiba Davari (Alumni), and Shokoufeh Bozorgmehrian

  6. IDEATExR: 4th Workshop on Inclusion, Diversity, Equity, Accessibility, Transparency, and Ethics in XR. Organizers: Lee Lisle, Cassidy R. Nelson (Alumni)

Research Paper Presentations

(CHCI members in bold)

Journal Papers

  • Cultural Reflections in Virtual Reality: The Effects of User Ethnicity in Avatar Matching Experiences on Sense of Embodiment 

Tiffany D. Do, Juanita Benjamin, Camille Isabella Protko, and Ryan P. McMahan

Abstract: Matching avatar characteristics to a user can impact the sense of embodiment (SoE) in VR. However, few studies have examined how participant demographics may interact with these matching effects. We recruited a diverse and racially balanced sample of 78 participants to investigate the differences among participant groups when embodying both demographically matched and unmatched avatars. We found that participant ethnicity emerged as a significant factor, with Asian and Black participants reporting lower total SoE compared to Hispanic participants. Furthermore, we found that user ethnicity significantly influences ownership (a subscale of SoE), with Asian and Black participants exhibiting stronger effects of matched avatar ethnicity compared to White participants. Additionally, Hispanic participants showed no significant differences, suggesting complex dynamics in ethnic-racial identity. Our results also reveal significant main effects of matched avatar ethnicity and gender on SoE, indicating the importance of considering these factors in VR experiences. These findings contribute valuable insights into understanding the complex dynamics shaping VR experiences across different demographic groups.

This image displays a comparison of virtual avatars associated with four different ethnic groups: East Asian, Hispanic, Black, and White participants. Each group has a visual representation with avatars and a scale indicating the "Sense of Embodiment," with markers on the scale for different avatars reflecting varying levels of embodiment felt by the participants.
This figure shows how user ethnicity influences self-avatar matching effects and sense of embodiment (SoE). The green circle represents an avatar that matched both the participant’s ethnicity and gender, while the red circle represents an avatar that matched neither. Blue circles represent avatars that matched only one factor. Asian and Black participants generally reported lower overall SoE but higher SoE with avatars that matched their ethnicity.
  • Investigating Object Translation in Room-scale, Handheld Virtual Reality

Daniel Enriquez, Hayoun Moon, Doug A. Bowman, Myounghoon Jeon, Sang Won Lee

Abstract: Handheld devices have become an inclusive alternative to head-mounted displays in virtual reality (VR) environments, enhancing accessibility and allowing cross-device collaboration. Object manipulation techniques in 3D space with handheld devices, such as those in handheld augmented reality (AR), have typically been evaluated on a tabletop scale, and we currently need to understand how these techniques perform in larger-scale environments. We conducted two studies, each with 30 participants, to investigate how different techniques impact usability and performance for room-scale handheld VR object translations. We compared three translation techniques that are similar to commonly studied techniques in handheld AR: 3DSlide, VirtualGrasp, and Joystick. We also examined the effects of target size, target distance, and user mobility conditions (stationary vs. moving). Results indicated that the Joystick technique, which allowed translation in relation to the user’s perspective, was the fastest and most preferred, without difference in precision. Our findings provide insights for designing room-scale handheld VR systems, with potential implications for mixed reality systems involving handheld devices.

This image shows a person interacting with a virtual environment in various ways using a tablet. It is divided into seven panels (labeled A through G), depicting the user standing in a room, holding a tablet, and interacting with a virtual interface featuring 3D objects, grids, and arrows to manipulate the virtual scene.
(a) The user study environment, with a participant using the application. (b) A user superimposed into the VR environment. (c-d) A closeup of the user’s perspective while performing a translation. (e-g) The object translation techniques evaluated in the user studies: (e) 3DSlide, (f) VirtualGrasp, (g) Joystick.
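For readers curious how a view-relative technique like the Joystick condition can work in practice, the following is a minimal sketch, not code from the paper: it maps 2D joystick input into a world-space object displacement aligned with the handheld device's current yaw. The function name, coordinate convention, and parameters are illustrative assumptions.

```python
# Minimal sketch (not from the paper): view-relative object translation,
# as in a joystick-style handheld VR technique. Assumes a Unity-style
# left-handed, y-up coordinate convention; all names are illustrative.
import numpy as np

def view_relative_translation(joystick_xy, vertical_axis, device_yaw_deg,
                              speed=1.0, dt=1/60):
    """Map 2D joystick input (x: strafe, y: forward/back) plus a separate
    vertical axis into a world-space displacement aligned with the
    handheld device's horizontal viewing direction."""
    yaw = np.radians(device_yaw_deg)
    # Forward and right vectors on the ground plane for the current yaw.
    forward = np.array([np.sin(yaw), 0.0, np.cos(yaw)])
    right = np.array([np.cos(yaw), 0.0, -np.sin(yaw)])
    up = np.array([0.0, 1.0, 0.0])
    move = (joystick_xy[0] * right +
            joystick_xy[1] * forward +
            vertical_axis * up)
    return speed * dt * move

# Example: pushing the stick forward while the device faces 90 degrees
# moves the selected object along that viewing direction.
delta = view_relative_translation(joystick_xy=(0.0, 1.0),
                                  vertical_axis=0.0,
                                  device_yaw_deg=90.0)
print(delta)  # approximately [0.0167, 0, 0]
```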

Conference Papers

  • Cross-Domain Gender Identification Using VR Tracking Data

Qidi J. Wang, Alec G. Moore, Nayan N. Chawla, and Ryan P. McMahan

Abstract: Recently, much work has been done to research the personal identifiability of extended reality (XR) users. Many of these prior studies are task-specific and involve identifying users completing a specific XR task. On the other hand, some studies have been domain-specific and focus on identifying users completing different XR tasks from the same domain, such as watching 360° videos or assembling structures. In this paper, we present one of the few studies investigating cross-domain identification (i.e., identifying users completing XR tasks from different domains). To facilitate our investigation, we used open-source datasets from two different virtual reality (VR) studies—one from an assembly domain and one from a gaming domain—to investigate the feasibility of cross-domain gender identification, as personal identification is not possible between these datasets. The results of our machine learning experiments clearly demonstrate that cross-domain gender identification is more difficult than domain-specific gender identification. Furthermore, our results indicate that head position is important for gender identification and demonstrate that the k-nearest neighbors (kNN) algorithm is not suitable for cross-domain gender identification, which future researchers should be aware of.

This image shows two virtual reality scenes. On the left side, colorful pipes are assembled into a simple structure in a virtual environment. On the right side, a pair of hands equipped with futuristic gloves appears in a more realistic, immersive environment.
Images of the two domains across which we explore gender identification. The Full-scale Assembly Simulation Testbed, shown on the left, provides the assembly domain for the FAST dataset. The video game “Half-Life: Alyx,” shown on the right, provides the gaming domain for the “Who is Alyx?” dataset.
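As a rough illustration of what a cross-domain evaluation involves, the sketch below trains classifiers on head-tracking features from one domain and tests them on another. The synthetic data, feature choices, and use of scikit-learn are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch (not the authors' pipeline): cross-domain gender
# identification from VR head-tracking features. The synthetic data and
# feature choices below are illustrative assumptions only.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

def synthetic_domain(n, height_shift):
    """Stand-in for per-user summary features (e.g., mean head height,
    head-position variance) computed from a tracking dataset."""
    gender = rng.integers(0, 2, size=n)  # 0/1 labels
    mean_head_height = (1.55 + 0.12 * gender + height_shift
                        + rng.normal(0, 0.05, size=n))
    head_motion_var = 0.02 + 0.01 * rng.random(n)
    X = np.column_stack([mean_head_height, head_motion_var])
    return X, gender

# Train on an "assembly" domain, test on a "gaming" domain whose feature
# distribution is shifted -- the crux of cross-domain transfer.
X_train, y_train = synthetic_domain(200, height_shift=0.00)
X_test, y_test = synthetic_domain(200, height_shift=0.08)

for name, clf in [("kNN", KNeighborsClassifier(n_neighbors=5)),
                  ("RandomForest", RandomForestClassifier(random_state=0))]:
    model = make_pipeline(StandardScaler(), clf)
    model.fit(X_train, y_train)
    print(name, "cross-domain accuracy:", model.score(X_test, y_test))
```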
  • Visceral Interfaces for Privacy Awareness of Eye Tracking in VR

G. Nikki Ramirez-Saffy, Pratheep Kumar Chelladurai, Alances Vargas, Syed Ibrahim Mustafa Shah Bukhari, Evan Selinger, Shaun Foster, Brittan Heller, Brendan David-John

Abstract: Eye tracking is increasingly being integrated into virtual reality (VR) devices to support a wide range of applications. It is used as a method of interaction, to support performance optimizations, and to create adaptive training or narrative experiences. However, providing access to eye-tracking data also introduces the ability to monitor user activity, detect and classify a user’s biometric identity, or otherwise reveal sensitive information such as medical conditions. As this technology continues to evolve, users should be made aware of the amount of information they are sharing about themselves to developers and how it can be used. While traditional terms of service may relay this type of information, previous work indicates they do not accessibly convey privacy-related information to users. Considering this problem, we suggest the application of visceral interfaces that are designed to inform users about eye-tracking data within the VR experience. To this end, we designed and conducted a user study on three visceral interfaces to educate users about their eye-tracking data. Our results suggest that while certain visualizations can be distracting, participants ultimately found them informative and supported the development and availability of such interfaces, even if they are not enabled by default or always enabled. Our research contributes to developing informative interfaces specific to eye tracking that promote transparency and privacy awareness in data collection for VR.

Three screenshots of a VR environment. From left to right: the VR art gallery, showing a hallway and rooms with art pieces on the walls; an example of the tendril visualization, where a blue arc is drawn behind a gaze cursor on a single art piece in the gallery; and an example of the icon visualization in the same scene, where floating eyeballs are animated by the gaze data. The images are labeled, from left to right, Art Gallery Environment, Tendril Visual Interface, and Icon Visual Interface.
We evaluated several gaze data visualization techniques designed to make VR users aware of what eye tracking reveals about them. Participants experienced these visualizations in a VR art gallery, seeing a trail follow their gaze patterns in real time and seeing their data animated onto a floating pair of eyeballs that "watches them" explore the environment. We found that these visualizations raised privacy awareness, changing participants' attitudes towards sharing gaze data with different organizations as well as their behavior (i.e., they avoided looking at nude regions of the art pieces when the visualizations were enabled).
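To make the underlying privacy concern concrete, the short sketch below shows how even coarse gaze samples can be aggregated into per-object dwell times, revealing what a viewer paid attention to. The data layout, sampling rate, and object names are hypothetical, not drawn from the study.

```python
# Minimal sketch (hypothetical data): aggregating raw gaze samples into
# per-object dwell times, the kind of inference that motivates making
# eye-tracking data collection visible to VR users.
from collections import defaultdict

# Each sample: (timestamp in seconds, id of the object the gaze ray hit).
gaze_samples = [
    (0.00, "painting_A"), (0.02, "painting_A"), (0.04, "painting_A"),
    (0.06, "painting_B"), (0.08, "painting_B"), (0.10, None),
    (0.12, "painting_A"),
]

SAMPLE_PERIOD = 0.02  # seconds between samples (50 Hz eye tracker)

dwell = defaultdict(float)
for _, target in gaze_samples:
    if target is not None:
        dwell[target] += SAMPLE_PERIOD

# Ranking objects by dwell time reveals what drew the viewer's attention.
for obj, seconds in sorted(dwell.items(), key=lambda kv: -kv[1]):
    print(f"{obj}: {seconds:.2f} s")
```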
  • Augmented Reality Visualization Techniques for Attention Guidance to Out-of-View Objects: A Systematic Review

Kelsey Quinn, Joseph L. Gabbard

Abstract: Recent advancements in augmented reality (AR) hardware, software, and application capabilities have introduced exciting benefits and advantages, especially in the industrial and occupational fields. However, many new challenges have arisen with the increasing use of AR in real work settings, such as increased visual capture and attention demanded by graphics presented within a relatively small field of view (FOV) (as compared to users’ natural FOV). This systematic review paper addresses how to effectively use and design AR visualization techniques to cue objects located outside a user’s FOV. We posit that new visualization techniques are needed to cue out-of-view objects while maintaining user attention and minimizing distraction from the user’s primary task. A significant amount of research has been done to examine effective visualizations for guiding attention in AR, specifically how to encode the direction and distance of out-of-view objects. Our review compares the performance associated with existing techniques, as well as what characteristics have been implemented and studied. In this work, we also present design guidelines derived from our analysis and synthesis to understand what visualization technique characteristics may be effective. Our final recommendations describe the value of reference lines, feedback to users, location indicators, best practices to encode direction and distance, and the use of subtle cues.

A grid of 15 images depicting various visualization techniques. Each image represents a distinct method for presenting out-of-view objects or contextual information. The images showcase different styles, including bird’s-eye views, radial lights, fisheye projections, arrows, and 3D radar representations, with varied approaches to highlighting targets, spatial awareness, or hidden objects within augmented reality environments.
Visualization techniques: (a) Top-down Bird’s-eye View [2], (b) RadialLight [4], (c) MonoculAR [3], (d) AroundPlot [5], (e) SidebAR [5], (f) Stereographic Fisheye [1], (g) MirrorBall [5], (h) 2D Arrow [8], (i) 3D Arrow [2], (j) 2D Halo [8], (k) EyeSee360 [9], (l) Attention Funnel [11], (m) Wedge [8], (n) SWAVE [6], (o) 3D Radar [5]
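As a simple example of one technique family surveyed above (the 2D arrow cue), the sketch below computes the direction an arrow should point toward an out-of-view target and where to place it on the screen border. The coordinate conventions, field-of-view values, and function name are assumptions for illustration only.

```python
# Minimal sketch (illustrative only): a 2D arrow cue pointing toward an
# out-of-view target, one of the technique families compared in the review.
# Assumes a simple pinhole camera looking down -z in its own frame.
import numpy as np

def arrow_cue(target_cam, half_fov_deg=(20.0, 15.0)):
    """Return (on_screen, angle_deg, border_point) for a target given in
    camera coordinates. angle_deg is the direction the 2D arrow points,
    measured from the +x screen axis; border_point is where the arrow
    sits on a normalized [-1, 1] x [-1, 1] screen border."""
    x, y, z = target_cam
    # Angular offsets of the target from the view axis.
    az = np.degrees(np.arctan2(x, -z))   # left/right
    el = np.degrees(np.arctan2(y, -z))   # up/down
    on_screen = (z < 0 and abs(az) <= half_fov_deg[0]
                 and abs(el) <= half_fov_deg[1])
    # Screen-space direction toward the target.
    direction = np.array([az, el], dtype=float)
    if np.allclose(direction, 0):
        direction = np.array([1.0, 0.0])
    angle_deg = np.degrees(np.arctan2(direction[1], direction[0]))
    # Clamp the arrow onto the border of the normalized screen rectangle.
    scale = 1.0 / max(abs(direction[0]) / half_fov_deg[0],
                      abs(direction[1]) / half_fov_deg[1], 1e-6)
    border_point = np.clip(direction * scale / half_fov_deg, -1.0, 1.0)
    return on_screen, angle_deg, border_point

# A target far to the user's right and slightly above eye level ends up
# cued by an arrow on the right edge of the display.
print(arrow_cue(target_cam=(3.0, 0.5, -1.0)))
```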

Workshop Papers

  • Addressing Human Factors Related to Artificial Intelligence Integrated Visual Cueing (iXR Workshop Paper)

Brendan Kelley, Aditya Raikwar, Ryan P. McMahan, Benjamin A. Clegg, Chris D. Wickens, and Francisco R. Ortega

Abstract: A variety of assistive extended reality (XR) visual cueing techniques have been explored over the years. Many of these tools provide significant benefits to tasks such as visual search. However, when the cueing system is erroneous, performance may instead suffer. Factors such as automation bias, where an individual trusts the cueing system despite errors in the cueing, and cognitive overload, where individuals are presented with too much information by the system, may affect task efficacy (i.e., completion time, accuracy, etc.). In some cases, such as with automation bias, these hindrances may be the product of artificial intelligence (AI) integration. Despite this, there may be benefits to using adaptive AI-based cueing systems for XR tasks. However, aspects such as the flow of information, automation accuracy, communication of confidence, or the refusal of output must be considered to build effective AI adaptive cueing systems.

The image contains three panels. The first panel shows a person sitting in an office environment, surrounded by colorful figurines placed on desks and shelves, likely simulating a VR setting. The second panel depicts a virtual aerial view of a landscape, with a yellow navigation arrow. The third panel is a close-up of a virtual character peering through a window, with a red arrow pointing towards it.
Views from studies related to artificial intelligence (AI) and extended reality (XR) cueing.
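One design consideration the workshop paper names, communicating or withholding low-confidence AI output, can be made concrete with a small sketch. The thresholds, rendering styles, and policy below are purely illustrative assumptions, not a system from the paper.

```python
# Minimal sketch (illustrative thresholds only): gating an AI-generated
# visual cue on detector confidence, with an explicit "refuse to cue"
# option intended to mitigate automation bias from erroneous cues.
from dataclasses import dataclass

@dataclass
class CueDecision:
    show: bool
    style: str   # how the cue is rendered, if shown
    note: str    # what is communicated to the user

def decide_cue(confidence: float,
               show_threshold: float = 0.5,
               strong_threshold: float = 0.85) -> CueDecision:
    if confidence < show_threshold:
        # Refusal of output: better no cue than a misleading one.
        return CueDecision(False, "none", "no cue (low confidence)")
    if confidence < strong_threshold:
        # Communicate uncertainty, e.g., via a faded or dashed cue.
        return CueDecision(True, "subtle", f"tentative cue ({confidence:.0%})")
    return CueDecision(True, "salient", f"confident cue ({confidence:.0%})")

for c in (0.3, 0.7, 0.95):
    print(c, decide_cue(c))
```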