
Student Research Highlight and Talk: Mark Hollister and Gram Hsing

April 11, 2022


Mark Hollister (MS student, ISE) and Gram Hsing (Ph.D. candidate, ISE) will present their research on powered full-body exoskeletons at the Immersive Experiences Research Group meeting this Friday (4/15) at 1 pm in Observe (MAC/ICAT 251) and on zoom (available by subscribing to the CHCI calendar).  

Powered full-body exoskeletons are wearable machines that provide support and assistance to targeted joints, reducing physical demands and overall metabolic cost while the wearer performs strenuous tasks. These exoskeletons can supply the strength that industrial work requires while preserving human skills. However, operating an exoskeleton can introduce new experiences of loading, motion, and balance that differ from past experience. Mark and Gram are advised by Nathan Lau.

Mark and Gram explore how spatial information should be presented via Augmented Reality (AR) Head-Mounted Displays (HMDs) to augment situation awareness. Cameras and computer vision map the surroundings (such as bystander and hazard locations) and help users avoid collisions during occupational tasks (Figure 1). These spatial data can then be displayed via AR (Figure 2) as alternative views or perspectives, such as a rear view or bird's-eye view, to augment spatial awareness. Their research aims to determine which visualizations are optimal, since haphazard presentation of spatial data may distract users rather than enhance situation awareness for collision avoidance. Their current work tests and compares various display implementations (Figure 3) for their effectiveness in preventing collisions in a simulated warehouse task.
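As a rough illustration of the kind of decision logic such a system might involve, the sketch below picks an AR cue based on where the nearest mapped hazard sits relative to the wearer. All names, thresholds, and the cue vocabulary here are illustrative assumptions, not the authors' implementation:

```python
import math

# Hypothetical sketch: deciding when to surface an AR proximity cue.
# Positions are 2-D floor coordinates in meters; the threshold and cue
# names are illustrative assumptions, not the researchers' design.

WARN_RADIUS_M = 2.0  # trigger a cue for hazards inside this distance


def nearest_hazard(user_xy, user_heading_rad, hazards):
    """Return (distance, relative bearing in radians) of the closest hazard."""
    best = None
    for hx, hy in hazards:
        dx, dy = hx - user_xy[0], hy - user_xy[1]
        dist = math.hypot(dx, dy)
        bearing = math.atan2(dy, dx) - user_heading_rad
        # Normalize bearing to (-pi, pi].
        bearing = (bearing + math.pi) % (2 * math.pi) - math.pi
        if best is None or dist < best[0]:
            best = (dist, bearing)
    return best


def choose_cue(user_xy, user_heading_rad, hazards):
    """Pick a cue: a rear-view inset if the hazard is behind the user,
    otherwise a marker in the forward field of view."""
    hit = nearest_hazard(user_xy, user_heading_rad, hazards)
    if hit is None or hit[0] > WARN_RADIUS_M:
        return "none"
    dist, bearing = hit
    return "rear_view" if abs(bearing) > math.pi / 2 else "front_marker"


# A bystander 1.5 m directly behind a user facing the +x direction:
print(choose_cue((0.0, 0.0), 0.0, [(-1.5, 0.0)]))  # rear_view
```

The hazard positions would come from the computer-vision mapping stage; the returned cue name stands in for whichever AR visualization (rear view, bird's-eye view, forward marker) the experiment conditions compare.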

Figure 1. AR system for collision avoidance with visualization concept
Figure 2. Current implementation of AR spatial visualizations
Figure 3. Display, angle, and salience variations for experiment levels