

CHCI 6th Annual Workshop

Human-Centered AI for Research, Innovation, and Creativity: Creating Connections Across Disciplines

We invite the Virginia Tech community to join us on March 24-25, 2022, as we explore new connections and create new collaborations across disciplines.

The workshop is free, but space is limited. Please register for the workshop here.

The event will be held in hybrid form, both online and at the Virginia Tech Newman Library. For more information, please contact Sara Evers at saralevers@vt.edu.

Co-sponsors include the Sanghani Center for Artificial Intelligence & Data Analytics, the Center for Humanities, and the Diggs Teaching Scholar Association. Philip Butler's keynote lecture has been made possible by a generous grant from the Henry Luce Foundation.

About

AI and related technologies (e.g., machine learning, computer vision, natural language processing) can be very powerful for the analysis of large and complex datasets. Simultaneously, what constitutes “data” continues to expand as domain experts, from literature to construction, reimagine their research and creative output. A human-centered design approach can improve the accessibility and usability of AI-powered tools. Empowering teams of domain experts, AI experts, and human-centered design experts to make effective use of these technologies is an area rich in opportunity for collaborative research and design. Already, transdisciplinary teams at Virginia Tech are forging frontiers in AI-empowered, human-centered research, analytics, performance, and design. 

Through the workshop we will explore ways of improving the user experience of AI-powered data analysis, and facilitate transdisciplinary collaborations involving human-centered design, AI, and domain experts. During and after the workshop, we envision that human-centered designers with expertise in HCI, UX, interactive visualization, and the like will be able to work with experts in any domain with complex data analysis needs (e.g., construction, education, intelligence analysis, agriculture, history) to understand their data and questions of interest. At the same time, AI experts will be able to recommend intelligent technologies and approaches that can address these needs. Together, such teams should be able to propose novel, usable, and accessible tools, powered by AI, to solve data analytics problems in a given domain through human-AI collaboration. 

Program

Thursday, March 24, 2022


8am–8:30am: Registration + Coffee/Tea
8:30am–9am: OPENING REMARKS
9am–10am: KEYNOTE: Louis-Philippe Morency
10am–10:15am: Coffee Break
10:15am–11:15am: ROUNDTABLE #1
11:15am–11:45am: PRESENTATION: RESOURCES AVAILABLE
11:45am–1pm: Lunch
1pm–2pm: STUDENT PRESENTATIONS
2pm–2:15pm: Coffee Break
2:15pm–3:15pm: ROUNDTABLE #2
3:15pm–3:30pm: Break
3:30pm–4:15pm: REPORT / WRAP UP
4:15pm–4:45pm: Break
4:45pm–6:45pm: Social at Eastern Divide Brewery

Friday, March 25, 2022


8:30am–9am: Registration + Coffee/Tea
9am–10am: KEYNOTE: Philip Butler
10am–10:15am: Coffee Break
10:15am–11:15am: ROUNDTABLE #3
11:15am–12pm: Report out from participants about collaborations formed
12pm–1pm: Lunch
1pm–2pm: INTERDISCIPLINARY PANEL
2pm–2:15pm: Coffee Break
2:15pm–3:15pm: ROUNDTABLE #4
3:15pm–3:30pm: Break
3:30pm–4:15pm: WRAP UP
4:15pm–4:30pm: CLOSING

A selection of CHCI projects whose transdisciplinary teams span subject-area, HCI, and AI expertise:


Seeing Flavors

The Seeing Flavors project aims to take advantage of more than 6,500 online whiskey reviews—a dataset orders of magnitude larger than those usually available in food science—to develop practical, interactive new kinds of “flavor wheels” for whiskeys. One focus was how to identify and label descriptor words: words that represent a sensory property of a whiskey, such as “fruity” or “peaty”.

Interface for the Interactive Tagging Tool used to annotate words from the corpus as descriptors or non-descriptors: (A) non-descriptors (negative examples) are kept in the deletion history; (B) the most frequent words (up to 50) are shown in a word cloud; (C) confirmed descriptors (those selected by the human operator) are stored in a confirmed-terms list. The resulting annotations formed a training set for a deep learning model that identifies descriptive language.
t-SNE representation of predicted words from the ‘Seeing Flavors’ whisky corpus as non-descriptors (blue dots) and as descriptors (brown X's). The clustering shows how a trained language model has learned to identify descriptive language.
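The pipeline above—collect human descriptor/non-descriptor labels, then train a classifier to generalize them—can be sketched in a few lines. The project trained a deep learning model; purely for illustration, a simple perceptron over character trigrams stands in here, and every word and label below is invented, not the project's data:

```python
# Minimal sketch of the descriptor-labeling pipeline described above.
# The Seeing Flavors project trained a deep learning model; a simple
# pure-Python perceptron over character trigrams stands in here, and
# the words/labels below are invented illustrations, not project data.
from collections import Counter

def char_trigrams(word):
    """Character trigrams with boundary markers, e.g. 'oak' -> <oa, oak, ak>."""
    w = f"<{word}>"
    return [w[i:i + 3] for i in range(len(w) - 2)]

def train_perceptron(examples, epochs=20):
    """Learn trigram weights separating descriptors (1) from non-descriptors (0)."""
    weights = Counter()  # missing trigrams default to weight 0
    for _ in range(epochs):
        for word, label in examples:
            feats = char_trigrams(word)
            pred = 1 if sum(weights[f] for f in feats) > 0 else 0
            if pred != label:  # mistake-driven update
                for f in feats:
                    weights[f] += 1 if label == 1 else -1
    return weights

def is_descriptor(weights, word):
    return sum(weights[f] for f in char_trigrams(word)) > 0

# Labels as the interactive tagging tool would produce them:
# confirmed descriptors (1) vs. words sent to the deletion history (0).
examples = [("fruity", 1), ("peaty", 1), ("smoky", 1), ("oaky", 1),
            ("bottle", 0), ("price", 0), ("glass", 0), ("review", 0)]
weights = train_perceptron(examples)
```

With many more labeled words, such a model can score every term in the review corpus; the t-SNE plot above visualizes exactly that kind of descriptor/non-descriptor prediction.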

The American Soldier in World War II

During World War II, the US Army administered more than 200 surveys to over half a million American troops to discover what they thought and how they felt about the conflict and their military service. The surviving collection of studies is now accessible to the public for the first time at The American Soldier in World War II. Developed using AI techniques and crowdsourcing, the project website lets visitors browse and search over 65,000 pages of uncensored, open-ended responses handwritten by servicemembers, view and download survey data and original analyses, read topical essays by leading historians, and access additional learning resources.

Immersive Space to Think (IST) is a sensemaking approach for immersive environments. Using IST, analysts can organize, annotate, and synthesize findings in 3D immersive space using augmented or virtual reality.
The IST approach also affords the ability to use multimedia to enhance the understanding of analysts. For example, the original scanned document can be seen next to its transcription such that the analyst can see any inflections the original author imprinted in their handwriting.
An analyst has sorted numerous documents and annotated them with highlights and a label to categorize them using the IST approach. One possible initial view of the augmented reality IST is a series of categorized documents that are color coded and ready to be organized.

Keynote speakers


Philip Butler

Philip Butler is an Assistant Professor of Theology and Black Posthuman Artificial Intelligence Systems at the Iliff School of Theology in Denver. Dr. Butler’s scholarship combines Black liberation theologies, neuroscience, spirituality, and technology, particularly artificial intelligence. He is also the founder of The Seekr Project, a distinctly Black conversational artificial intelligence with mental health capacities that combines machine learning and psychotherapeutic systems.

Louis-Philippe Morency

Louis-Philippe Morency is the Leonardo Associate Professor of Computer Science in the Language Technologies Institute at Carnegie Mellon University. His research interests lie at the intersection of machine learning, computer vision, computational linguistics, and signal processing: building the computational foundations that enable computers to analyze, recognize, and predict subtle human communicative behaviors during social interactions. Dr. Morency leads the Multimodal Communication and Machine Learning Laboratory, which aims to build the algorithms and computational foundation to understand the interdependence between human verbal, visual, and vocal behaviors expressed during social communicative interactions.

Panelists


Construction Worker 4.0: Augmenting worker capabilities through immersive technologies: Nazila Roofigari-Esfahan (Building Construction) and Todd Ogle (University Libraries)

Teaching Interdisciplinary Research Skills: Stephanie (Nikki) Lewis (Honors College) and Anne Brown (Biochemistry)

Characterizing events and human behavior based on AI-aided analysis of data collected from web and social media sources: Ed Fox (Computer Science) and Steven Sheetz (Accounting and Information Systems)

People


Leads

Ed Gitre: Assistant Professor, Department of History

Chreston Miller: Assistant Professor, University Libraries

Organizing Committee

Doug Bowman: Director, CHCI and Professor, Computer Science

Todd Ogle: Associate Director, CHCI and Assistant Professor, University Libraries

Andrea Kavanaugh: Associate Director, CHCI, and Senior Research Scientist

Sara Evers: Graduate Assistant, CHCI and Ph.D. student, School of Education

Drew Loomis: Technical Support, Computer Science student