Bowman and North giving Keynotes at ISVC 2022
October 3, 2022
The International Symposium on Visual Computing (ISVC) provides a common forum for researchers, scientists, engineers, and practitioners throughout the world to present their latest research findings, ideas, developments, and applications in visual computing. Papers highlight contributions to the state of the art and state of the practice in the four central areas of visual computing: computer vision, computer graphics, virtual reality, and visualization. ISVC runs October 3-5, 2022 in San Diego, California.
Doug A. Bowman is the Frank J. Maher Professor of Computer Science and Director of the Center for Human-Computer Interaction at Virginia Tech. He is the principal investigator of the 3D Interaction Group, focusing on user experience and user interface design for virtual reality and augmented reality systems.
Doug Bowman keynote: Designing Augmented Reality for the Future of Work.
Augmented Reality (AR) technology has improved significantly in recent years, to the point where major technology companies are expected to release consumer-focused AR glasses in the near future. Technical challenges in optics, power, and tracking remain, but they are solvable. What, then, will we use these AR glasses for, and how will they provide value? In this talk, I will argue that some of the most impactful applications of future AR glasses will be those that transform the way we work. Using examples from my research on AR for knowledge work and intelligent AR for construction work, I will explain why user experience considerations are crucial to the adoption of AR for future work. Studying the design of these applications today will lead to guidelines that can help ensure the success of AR for the future of work tomorrow.
Chris North is a Professor of Computer Science at Virginia Tech in Blacksburg, VA. He is Associate Director of the Sanghani Center for AI and Data Analytics, and he leads the Visual Analytics research group. His research and education agenda seek to enable effective human-AI interaction for big data analysis.
Chris North keynote: Human-AI interaction plays a crucial role in visual analytics, enabling analysts to use AI to help analyze data. In support of this goal, explainable-AI visualizations seek to unmask the underlying details of black-box AI learning algorithms, enabling human analysts to understand algorithmic state and results. However, to truly enable human-AI interaction, I will argue that there exists a second black box, representing the cognitive process of the user, containing information that must be communicated to the algorithm. Using this "Two Black Boxes" problem as motivation, I propose a design philosophy for human-AI interaction. I will discuss the usability challenges associated with each phase of communication between the pair of cooperatively learning entities, and the benefits that emerge from opening the black boxes of both human and AI for data analysis tasks.