
CHCI research showcased at ISMAR 2023

October 16, 2023

CHCI faculty and students have made significant contributions to the 22nd IEEE International Symposium on Mixed and Augmented Reality (ISMAR), being held from October 16 to 20, 2023, in Sydney, Australia.

IEEE ISMAR is the premier conference for Augmented Reality (AR), Mixed Reality (MR), and Virtual Reality (VR), attracting the world’s leading researchers from both academia and industry. Having continued to expand its scope over the past several years, ISMAR explores advances in both commercial and research activities related to AR, MR, and VR.

Multiple CHCI faculty and students are serving in leadership roles, and CHCI members have contributed to eight research papers and a design competition entry.


Leadership Roles

  • Joe Gabbard - Journal Paper S&T Programme Chair
  • Brendan David-John - Conference Paper Committee
  • Cassidy R. Nelson - XR Future Faculty Forum (F3) Chair, IDEATExR Workshop Organizer
  • Nayara Faria - Inclusion, Diversity, Equity & Accessibility Chair, IDEATExR Workshop Organizer
  • Yalong Yang (now at Georgia Tech) - Online Experience Chair
  • Jerald Thomas (now at University of Wisconsin-Milwaukee) - Journal Paper Committee, Conference Paper Committee
  • Wallace Lages (now at Northwestern University) - Pitch Your Lab Chair

Research Paper Presentations

(presenting author in italic; CHCI members in bold):

AMP-IT and WISDOM: Improving 3D Manipulation for High-Precision Tasks in Virtual Reality

Francielly Rodrigues, National Laboratory for Scientific Computing – LNCC; Alexander Giovannelli, Virginia Tech; Leonardo Pavanatto, Virginia Tech; Haichao Miao, Lawrence Livermore National Laboratory; Jauvane C. de Oliveira, National Laboratory for Scientific Computing; Doug Bowman, Virginia Tech

Precise 3D manipulation in virtual reality (VR) is essential for effectively aligning virtual objects. However, state-of-the-art VR manipulation techniques have limitations when high levels of precision are required, including the unnaturalness caused by scaled rotations and the increase in time due to degree-of-freedom (DoF) separation in complex tasks. We designed two novel techniques to address these issues: AMP-IT, which offers direct manipulation with an adaptive scaled mapping for implicit DoF separation, and WISDOM, which offers a combination of Simple Virtual Hand and scaled indirect manipulation with explicit DoF separation. We compared these two techniques against baseline and state-of-the-art manipulation techniques in a controlled experiment. Results indicate that WISDOM and AMP-IT have significant advantages over best-practice techniques regarding task performance, usability, and user preference.


Spaces to Think: A Comparison of Small, Large, and Immersive Displays for the Sensemaking Process

Lee Lisle, Virginia Tech (alum); Kylie Davidson, Virginia Tech; Leonardo Pavanatto, Virginia Tech; Ibrahim Tahmid, Virginia Tech; Chris North, Virginia Tech; Doug Bowman, Virginia Tech

Analysts need to process large amounts of data in order to extract concepts, themes, and plans of action based upon their findings. Different display technologies offer varying levels of space and interaction methods that change the way users can process data using them. In a comparative study, we investigated how the use of a single traditional monitor, a large, high-resolution two-dimensional monitor, and an immersive three-dimensional space using the Immersive Space to Think approach impacts the sensemaking process. We found that user satisfaction grows and frustration decreases as available space increases. We observed specific strategies users employ in the various conditions to assist with the processing of datasets. We also found an increased usage of spatial memory as space increased, which improves performance in artifact position recall tasks. In future systems supporting sensemaking, we recommend using display technologies that provide users with large amounts of space to organize information and analysis artifacts.


Evaluating the Feasibility of Predicting Information Relevance During Sensemaking with Eye Gaze Data

Ibrahim Tahmid, Virginia Tech; Lee Lisle, Virginia Tech; Kylie Davidson, Virginia Tech; Kirsten Whitley, US Government; Chris North, Virginia Tech; Doug Bowman, Virginia Tech.

There are two panels. In the left panel is a photo of a person standing with their back to us at a podium; the person is wearing an AR headset, and floating around them are simulated blue boxes with white text. In the right panel is a photograph of a person’s hand next to a simulated remote control, as one would view it in AR. The remote has three interactive buttons labeled 'Note', 'Label', and 'Search'.

Eye gaze patterns vary based on reading purpose and complexity, and can provide insights into a reader’s perception of the content. We hypothesize that during a complex sensemaking task with many text-based documents, we will be able to use eye-tracking data to predict the importance of documents and words, which could be the basis for intelligent suggestions made by the system to an analyst. We introduce a novel eye-gaze metric called ‘GazeScore’ that predicts an analyst’s perception of the relevance of each document and word when they perform a sensemaking task. We conducted a user study to assess the effectiveness of this metric and found strong evidence that documents and words with high GazeScores are perceived as more relevant, while those with low GazeScores are considered less relevant. We explore potential real-time applications of this metric to facilitate immersive sensemaking tasks by offering relevant suggestions.


Uncovering Best Practices in Immersive Space to Think

Kylie Davidson, Virginia Tech; Lee Lisle, Virginia Tech; Ibrahim Tahmid, Virginia Tech; Kirsten Whitley, US Government; Chris North, Virginia Tech; Doug Bowman, Virginia Tech.

Three panels, all showing the same VR scene from above, with simulated notes in various positions, illustrating how the notes can be arranged in a 360-degree space.

As immersive analytics research becomes more popular, user studies have been aimed at evaluating the strategies and layouts of users’ sensemaking during a single focused analysis task. However, approaches to sensemaking strategies and layouts are likely to change as users become more familiar/proficient with the immersive analytics tool. In our work, we build upon an existing immersive analytics approach, Immersive Space to Think, to understand how schemas and strategies for sensemaking change across multiple analysis tasks. We conducted a user study with 14 participants who completed three different sensemaking tasks during three separate sessions. We found significant differences in the use of space and strategies for sensemaking across these sessions and correlations between participants’ strategies and the quality of their sensemaking. Using these findings, we propose guidelines for effective analysis approaches within immersive analytics systems for document-based sensemaking.


Exploring the Evolution of Sensemaking Strategies in Immersive Space to Think

Kylie Davidson, Virginia Tech; Lee Lisle, Virginia Tech; Kirsten Whitley, US Government; Doug A. Bowman, Virginia Tech; Chris North, Virginia Tech.

Six panels, all showing the same VR scene from the front and from above, with simulated notes in various positions, illustrating how the notes can be arranged in a 360-degree space.

Existing research on immersive analytics to support the sensemaking process focuses on single-session sensemaking tasks. However, in the wild, sensemaking can take days or months to complete. To understand the full benefits of immersive analytics systems, we need to understand how they provide flexibility for the dynamic nature of the sensemaking process. In our work, we build upon an existing immersive analytics system, Immersive Space to Think, to evaluate how such systems can support sensemaking tasks over time. We conducted a user study with eight participants, each completing three separate analysis sessions. We found significant differences in analysis strategies between sessions one, two, and three, which suggest that Immersive Space to Think can benefit analysts during multiple stages of the sensemaking process.


Augmented Reality Rehabilitative and Exercise Games (ARREGs): A Systematic Review and Future Considerations

Cassidy R. Nelson, Virginia Tech; Joseph L Gabbard, Virginia Tech.

Considerations for Future ARREGs*

  • Powering up Therapeutic Play Through Purposeful Development
  • Choose your Character! Selecting the Appropriate AR Modality
  • Diving into Diversity - The Power of Inclusion
  • Assessing ARREGs for GAINS and GRINS
  • Game On! Leveling Up Beyond Points and Timers
  • Navigating New Realms of ARREG Potential
  • Gaming for Gains - Seamless Measurement
  • Mind the Gap! Consider Underexplored ARREG Areas 
  • Ink the Details

*Informed by sorting 675 articles, 25 of which were included in the final synthesis

Augmented Reality (AR) and exergames have been trending areas of interest in healthcare spaces for rehabilitation and exercise. This work reviews 25 papers featuring AR rehabilitative/exercise games and paints a picture of the literature landscape. The included results span twelve years, with the oldest paper published in 2010 and the most recent work in 2022. More specifically, this work contributes a bank of representative ARREGs and a synthesis of measurement strategies for player perceptions of Augmented Reality Rehabilitative and Exercise Game (ARREG) experiences, the elements that comprise the exergame experience, the intended use cases of ARREGs, whether participants are actually representative users, the utilized devices and AR modalities, the measures used to capture rehabilitative success, and the measures used to capture participant perceptions. Informed by the literature body, our most significant contribution is nine considerations for future ARREG development.


Gestures vs. Emojis: Comparing Non-Verbal Reaction Visualizations for Immersive Collaboration

Alexander Giovannelli, Virginia Tech; Jerald Thomas, Virginia Tech; Logan Lane, Virginia Tech; Francielly Rodrigues, Virginia Tech; Doug Bowman, Virginia Tech.

A simulated person wearing a pink shirt and blue jeans stands facing the viewer. In front of them is a simulated sign displaying a long passage of text.

Collaborative virtual environments afford new capabilities in telepresence applications, allowing participants to co-inhabit an environment to interact while being embodied via avatars. However, shared content within these environments often takes away the attention of collaborators from observing the non-verbal cues conveyed by their peers, resulting in less effective communication. Exaggerated gestures, abstract visuals, as well as a combination of the two, have the potential to improve the effectiveness of communication within these environments in comparison to familiar, natural non-verbal visualizations. We designed and conducted a user study where we evaluated the impact of these different non-verbal visualizations on users’ identification time, understanding, and perception. We found that exaggerated gestures generally perform better than non-exaggerated gestures, abstract visuals are an effective means to convey intentional reactions, and the combination of gestures with abstract visuals provides some benefits compared to their standalone counterparts.


Multi-Focus Querying of the Human Genome Information on Desktop and in Virtual Reality: an Evaluation

Gunnar William Reiske, Virginia Tech; Sungwon In, Virginia Tech; Yalong Yang, Georgia Institute of Technology (former CS/CHCI faculty).

A simulated scene with a person standing with their back to us, wearing a white shirt, blue pants, and brown shoes. In front of them are four panels whose contents are not legible.

The human genome is incredibly information-rich, consisting of approximately 25,000 protein-coding genes spread out over 3.2 billion nucleotide base pairs contained within 24 unique chromosomes. Maintaining spatial context is critically important for genome visualization, as it assists in understanding gene interactions and relationships. However, existing methods of genome visualization that utilize spatial awareness are inefficient and prone to limitations in presenting gene information and spatial context. This study proposes an innovative approach to genome visualization and exploration utilizing virtual reality. To determine the optimal placement of gene information and evaluate its essentiality in a VR environment, we implemented and conducted a user study with three different interaction methods. Two interaction methods were developed in virtual reality to determine if gene information is better suited to be embedded within the chromosome ideogram or separate from the ideogram. The final ideogram interaction method was performed on a desktop and served as a benchmark to evaluate the potential benefits associated with the use of VR. Our study findings reveal a preference for VR, despite longer task completion times. In addition, the placement of gene information within the visualization had a notable impact on the ability of a user to complete tasks. Specifically, gene information embedded within the chromosome ideogram was better suited for single-target identification and summarization tasks, while separating gene information from the ideogram better supported region comparison tasks.

Design Competition Entry

CoLT: Enhancing Collaborative Literature Review Tasks with Synchronous and Asynchronous Awareness Across the Reality-Virtuality Continuum

Ibrahim Tahmid, Virginia Tech; Francielly Rodrigues, Virginia Tech; Alexander Giovannelli, Virginia Tech; Lee Lisle, Virginia Tech; Jerald Thomas, Virginia Tech; Doug Bowman, Virginia Tech.

A person wearing a white shirt and blue jeans is seen from the side, wearing an AR headset and clicking on one of 13 simulated notes floating around them. The notes are teal with black text that is not legible.

Collaboration plays a vital role in both academia and industry whenever we need to browse through a large amount of data to extract meaningful insights. These collaborations often involve people living far from each other, with different levels of access to technology. Effective cross-border collaborations require reliable telepresence systems that provide support for communication, cooperation, and understanding of contextual cues. In the context of collaborative academic writing, while immersive technologies offer novel ways to enhance collaboration and enable efficient information exchange in a shared workspace, traditional devices such as laptops still offer better readability for longer articles. We propose the design of a hybrid cross-reality, cross-device networked system that allows users to harness the advantages of both worlds. Our system allows users to import documents from their personal computers (PCs) to an immersive headset, facilitating document sharing and simultaneous collaboration with both colocated and remote colleagues. Our system also enables a user to seamlessly transition between Virtual Reality, Augmented Reality, and the traditional PC environment, all within a shared workspace. We present the real-world scenario of a global academic team conducting a comprehensive literature review, demonstrating our system’s potential for enhancing cross-reality hybrid collaboration and productivity.