Congratulations Brendan David-John and Bo Ji on new awards!
May 15, 2023
Brendan David-John and Bo Ji received funding for research on three interrelated projects: "Protecting Bystander Visual Data in Augmented Reality Systems" and "Proposal Team Building and Privacy Expert Recruitment for a Workshop on Bystander Obscuration in Wearable Augmented Reality Displays," both funded by the Commonwealth Cyber Initiative (CCI), and "Prototype Refinement and Customer Discovery for Bystander Privacy in Augmented Reality (AR) Systems," funded by the VT LAUNCH Proof-Of-Concept (POC) grant program. Matthew Corbett, Ji's Ph.D. student and a CHCI student member, is also a team member on these projects.
One of the CCI-funded projects will support preliminary work to further evaluate the existing prototype, including studies of how bystanders perceive the benefits of the system. The other CCI project will involve conducting a workshop with privacy experts and assembling a proposal team for a large-scale grant proposal. The projects will also study the cybersecurity market and the legal impact of AR products, enabling the development of large external grant proposals. The project funded by the Proof-Of-Concept (POC) grant program will help the team learn to perform customer discovery and interviews through an NSF regional I-Corps Short Course, attend relevant technology trade shows, implement their proposed system on additional AR devices, and measure its impact on a wider range of AR applications to demonstrate the feasibility of the proposed system as a marketable product.
The Commonwealth Cyber Initiative (CCI) is Virginia’s main access point for cybersecurity research, innovation, workforce development, and news. In this community, researchers find funding and collaboration, students discover diverse career possibilities, and new innovations come to life. The vision of CCI is to establish Virginia as a global center of excellence in cybersecurity research and serve as a catalyst for the commonwealth's economic diversification and long-term leadership in this sector. Its mission is to serve as an engine for research, workforce development, and innovation at the intersection between cybersecurity, autonomous systems, and intelligence.
The VT LAUNCH Proof-Of-Concept (POC) Grant Program combines early-stage commercialization grants and complementary resources to assist Virginia Tech researchers who want to increase the impact of their research and actively pursue the potential commercialization of technologies emerging from their labs. The POC Program provides competitive grants of up to $50k and associated resources to support these early-stage activities. The POC grant program is offered by LAUNCH, The Center for New Ventures, with grant funding administered by Virginia Tech Intellectual Properties (VTIP).
David-John and Ji’s interrelated research projects can be summarized as follows:
Augmented Reality (AR) devices are distinguished from other mobile devices by the immersive experience they offer. While the powerful suite of sensors on modern AR devices is necessary for enabling such an immersive experience, it can create unease in bystanders (i.e., those surrounding the device during its use) due to potential leaks of bystander data, known as the bystander privacy problem.
In this project, the researchers plan to design, develop, and prototype BystandAR, the first practical bystander privacy protection (BPP) system, which builds on a key insight: the device user's eye gaze and voice are highly effective indicators for subject/bystander detection in an interpersonal interaction. It will leverage novel AR capabilities such as eye gaze tracking, a wearer-focused microphone, and spatial awareness to achieve a usable frame rate without offloading sensitive information.
BystandAR is a privacy-preserving API that sanitizes bystander information from sensor data streams before they are accessed by third-party applications. At a high level, it modifies how third-party applications access raw visual data, identifies and obscures bystanders' faces, and passes the obscured frames on to the application.
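The sanitization step described above can be illustrated with a minimal sketch. This is not the BystandAR implementation; the function names, data structures, and the gaze-in-bounding-box heuristic are hypothetical stand-ins for the idea that gaze-attended faces are treated as conversation subjects while all other faces are obscured before a frame reaches a third-party application.

```python
# Illustrative sketch only (hypothetical names and logic, not BystandAR's
# actual code): obscure every detected face the wearer is not looking at,
# then hand the sanitized frame to the third-party application.
from dataclasses import dataclass

@dataclass
class Face:
    x: int  # bounding box, in pixel coordinates
    y: int
    w: int
    h: int

def gaze_in_face(face: Face, gx: int, gy: int) -> bool:
    """True if the gaze point (gx, gy) falls inside the face's box."""
    return face.x <= gx < face.x + face.w and face.y <= gy < face.y + face.h

def sanitize_frame(frame, faces, gaze):
    """Black out every face box that the wearer's gaze point is not in.

    frame: 2D list of pixel values; faces: list of Face; gaze: (gx, gy).
    """
    gx, gy = gaze
    for f in faces:
        if gaze_in_face(f, gx, gy):
            continue  # attended face: treated as a subject, left visible
        for row in range(f.y, f.y + f.h):
            for col in range(f.x, f.x + f.w):
                frame[row][col] = 0  # obscure bystander pixels
    return frame
```

A real system would of course use face detection and a blur rather than zeroed pixels, and would integrate gaze over time rather than using a single point, but the control flow (intercept, classify via gaze, obscure, pass on) is the same.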
Brendan David-John also received funding for "Personalizing machine-learning guided navigation interfaces in Virtual and Augmented Reality" from 4-VA@Virginia Tech Pre-Tenure Faculty Research Program to fund a collaborative research project with Dr. Craig Yu from George Mason University.
4-VA is a partnership between eight Virginia universities, including Virginia Tech, created to foster collaborations that leverage the strengths of each university and improve efficiencies in education across the commonwealth. 4-VA's unprecedented alliances between schools, departments, faculty, and students generate significant, innovative solutions to educational and real-world challenges. Through each university's RFP process, faculty submit requests for pilot research projects to generate initial data and create connections with faculty at partner schools. This seed funding is designed to springboard the research toward future external funding from sources such as the National Science Foundation and the National Institutes of Health.
"Personalizing machine-learning guided navigation interfaces in Virtual and Augmented Reality"
This research project investigates individual users' preferences for how sensitive a machine learning (ML) interface should be, that is, how frequently it should intervene to provide assistance during a virtual reality (VR) task. Prior work has established that eye tracking in VR provides a data stream from which deep ML models can predict, 99% of the time, when a user needs navigation assistance. However, understanding user preferences for how often the machine-learning interface should intervene remains an open research question. Furthermore, improving the generalizability of the trained prediction model is an open research direction, as the existing model performed worse when deployed in a different virtual scene.
ML models may generalize better when personalized to each individual’s behavior. By exploring personalization methods, we expect to learn how our models generalize between virtual and augmented reality (AR) navigation that blends virtual content with the real world.