CHCI Research Featured at ISMAR 2024
October 21 - 25, 2024
CHCI faculty and students have made significant contributions to the 23rd IEEE International Symposium on Mixed and Augmented Reality (ISMAR), which will be held in the Greater Seattle Area, Washington, USA, from October 21 to 25, 2024.
ISMAR is the premier conference for Augmented Reality (AR), Mixed Reality (MR), and Virtual Reality (VR), attracting the world’s leading researchers from both academia and industry. Having continued to expand its scope over the past several years, ISMAR explores advances in both commercial and research activities related to AR, MR, and VR.
Multiple CHCI faculty and students are serving in leadership roles, and CHCI members have contributed to six papers:
Leadership Roles
ISMAR Career Impact Award Committee: Doug Bowman
S&T Programme Committee: Brendan David-John, Lee Lisle, Ryan P. McMahan
Doctoral Consortium: Lee Lisle (Co-Chair), Doug Bowman (Mentor)
Future Faculty Forum: Ryan P. McMahan (Tutorial, Panelist), Doug Bowman (Panelist)
1st Workshop on Intelligent XR: Harnessing AI for Next-Generation XR User Experiences
Organizers: Joe Gabbard, Doug Bowman, Shakiba Davari (Alumni), and Shokoufeh Bozorgmehrian
IDEATExR: 4th Workshop on Inclusion, Diversity, Equity, Accessibility, Transparency, and Ethics in XR
Organizers: Lee Lisle, Cassidy R. Nelson (Alumni)
Research Paper Presentations
(CHCI members in bold)
Journal Papers
Cultural Reflections in Virtual Reality: The Effects of User Ethnicity in Avatar Matching Experiences on Sense of Embodiment
Tiffany D. Do, Juanita Benjamin, Camille Isabella Protko, and Ryan P. McMahan
Abstract: Matching avatar characteristics to a user can impact the sense of embodiment (SoE) in VR. However, few studies have examined how participant demographics may interact with these matching effects. We recruited a diverse and racially balanced sample of 78 participants to investigate the differences among participant groups when embodying both demographically matched and unmatched avatars. We found that participant ethnicity emerged as a significant factor, with Asian and Black participants reporting lower total SoE compared to Hispanic participants. Furthermore, we found that user ethnicity significantly influences ownership (a subscale of SoE), with Asian and Black participants exhibiting stronger effects of matched avatar ethnicity compared to White participants. Additionally, Hispanic participants showed no significant differences, suggesting complex dynamics in ethnic-racial identity. Our results also reveal significant main effects of matched avatar ethnicity and gender on SoE, indicating the importance of considering these factors in VR experiences. These findings contribute valuable insights into understanding the complex dynamics shaping VR experiences across different demographic groups.
Investigating Object Translation in Room-scale, Handheld Virtual Reality
Daniel Enriquez, Hayoun Moon, Doug A. Bowman, Myounghoon Jeon, and Sang Won Lee
Abstract: Handheld devices have become an inclusive alternative to head-mounted displays in virtual reality (VR) environments, enhancing accessibility and allowing cross-device collaboration. Object manipulation techniques in 3D space with handheld devices, such as those in handheld augmented reality (AR), have typically been evaluated on a tabletop scale, and we currently need to understand how these techniques perform in larger-scale environments. We conducted two studies, each with 30 participants, to investigate how different techniques impact usability and performance for room-scale handheld VR object translations. We compared three translation techniques that are similar to commonly studied techniques in handheld AR: 3DSlide, VirtualGrasp, and Joystick. We also examined the effects of target size, target distance, and user mobility conditions (stationary vs. moving). Results indicated that the Joystick technique, which allowed translation relative to the user’s perspective, was the fastest and most preferred, with no difference in precision. Our findings provide insights for designing room-scale handheld VR systems, with potential implications for mixed reality systems involving handheld devices.
Conference Papers
Cross-Domain Gender Identification Using VR Tracking Data
Qidi J. Wang, Alec G. Moore, Nayan N. Chawla, and Ryan P. McMahan
Abstract: Recently, much work has been done to research the personal identifiability of extended reality (XR) users. Many of these prior studies are task-specific and involve identifying users completing a specific XR task. On the other hand, some studies have been domain-specific and focus on identifying users completing different XR tasks from the same domain, such as watching 360° videos or assembling structures. In this paper, we present one of the few studies investigating cross-domain identification (i.e., identifying users completing XR tasks from different domains). To facilitate our investigation, we used open-source datasets from two different virtual reality (VR) studies—one from an assembly domain and one from a gaming domain—to investigate the feasibility of cross-domain gender identification, as personal identification is not possible between these datasets. The results of our machine learning experiments clearly demonstrate that cross-domain gender identification is more difficult than domain-specific gender identification. Furthermore, our results indicate that head position is important for gender identification and demonstrate that the k-nearest neighbors (kNN) algorithm is not suitable for cross-domain gender identification, which future researchers should be aware of.
Visceral Interfaces for Privacy Awareness of Eye Tracking in VR
G. Nikki Ramirez-Saffy, Pratheep Kumar Chelladurai, Alances Vargas, Syed Ibrahim Mustafa Shah Bukhari, Evan Selinger, Shaun Foster, Brittan Heller, Brendan David-John
Abstract: Eye tracking is increasingly being integrated into virtual reality (VR) devices to support a wide range of applications. It is used as a method of interaction, to support performance optimizations, and to create adaptive training or narrative experiences. However, providing access to eye-tracking data also introduces the ability to monitor user activity, detect and classify a user’s biometric identity, or otherwise reveal sensitive information such as medical conditions. As this technology continues to evolve, users should be made aware of the amount of information they are sharing about themselves to developers and how it can be used. While traditional terms of service may relay this type of information, previous work indicates they do not accessibly convey privacy-related information to users. Considering this problem, we suggest the application of visceral interfaces that are designed to inform users about eye-tracking data within the VR experience. To this end, we designed and conducted a user study on three visceral interfaces to educate users about their eye-tracking data. Our results suggest that while certain visualizations can be distracting, participants ultimately found them informative and supported the development and availability of such interfaces, even if they are not enabled by default or always enabled. Our research contributes to developing informative interfaces specific to eye tracking that promote transparency and privacy awareness in data collection for VR.
Augmented Reality Visualization Techniques for Attention Guidance to Out-of-View Objects: A Systematic Review
Kelsey Quinn, Joseph L Gabbard
Abstract: Recent advancements in augmented reality (AR) hardware, software, and application capabilities have introduced exciting benefits and advantages, especially in the industrial and occupational fields. However, many new challenges have arisen with the increasing use of AR in real work settings, such as increased visual capture and attention demanded by graphics presented within a relatively small field of view (FOV) (as compared to users’ natural FOV). This systematic review paper addresses how to effectively use and design AR visualization techniques to cue objects located outside a user’s FOV. We posit that new visualization techniques are needed to cue out-of-view objects while maintaining user attention and minimizing distraction from the user’s primary task. A significant amount of research has been done to examine effective visualizations for guiding attention in AR, specifically how to encode the direction and distance of out-of-view objects. Our review compares the performance associated with existing techniques, as well as what characteristics have been implemented and studied. In this work, we also present design guidelines derived from our analysis and synthesis to understand what visualization technique characteristics may be effective. Our final recommendations describe the value of reference lines, feedback to users, location indicators, best practices to encode direction and distance, and the use of subtle cues.
Workshop Papers
Addressing Human Factors Related to Artificial Intelligence Integrated Visual Cueing (iXR Workshop Paper)
Brendan Kelley, Aditya Raikwar, Ryan P. McMahan, Benjamin A. Clegg, Chris D. Wickens, and Francisco R. Ortega
Abstract: A variety of assistive extended reality (XR) visual cueing techniques have been explored over the years. Many of these tools provide significant benefits to tasks such as visual search. However, when the cueing system is erroneous, performance may instead suffer. Factors such as automation bias, where an individual trusts the cueing system despite errors in the cueing, and cognitive overload, where individuals are presented with too much information by the system, may affect task efficacy (i.e., completion time, accuracy, etc.). In some cases, such as with automation bias, these hindrances may be the product of artificial intelligence (AI) integration. Despite this, there may be benefits to using adaptive AI-based cueing systems for XR tasks. However, aspects such as the flow of information, automation accuracy, communication of confidence, or the refusal of output must be considered to build effective AI adaptive cueing systems.