
ACM Conference on Human Factors in Computing Systems (CHI)

May 1, 2024

Multiple CHCI members, including Eugenia Rho, Chris North, Sang Won Lee, Yan Chen, Kurt Luther, Yaxing Yao, and Denis Gracanin, have contributed research, workshop papers, and demos to CHI this year. CHCI affiliates are listed below in bold.

The ACM CHI Conference on Human Factors in Computing Systems is a premier international Human-Computer Interaction (HCI) conference. CHI – pronounced ‘kai’ – annually brings together researchers and practitioners from all over the world and from diverse cultures, backgrounds, and positionalities who aim to improve the world with interactive digital technologies. 

This year, the conference embraces the theme of “Surfing the World,” which focuses on pushing forth the wave of cutting-edge technology and riding the tide of new developments in human-computer interaction. CHI will take place in Honolulu, Hawaiʻi, USA, from May 11-16, 2024, while supporting remote attendance.

The image is a simple, stylized line-drawing banner for "CHI 2024: Surfing the World," dated 11-16 May 2024. On the left side, there is a symbol that appears to be a stylized knot or emblem, followed by "CHI 2024" in bold letters. The main graphic depicts a serene ocean scene with waves represented by horizontal wavy lines. Three palm trees with simple trunks and leaves, a standing surfboard, and two circular shapes, possibly representing the sun and the moon, are illustrated above the waves. To the right, "HAWAII" is spelled out in vertical lines resembling stylized text, partially submerged in the 'water' of the wave lines beneath. The color scheme is monochromatic, with blue lines creating the illustrations on a white background.

Papers

Xiaohan Ding, Buse Carik, Uma Sushmitha Gunturi, Valerie Reyna, and Eugenia Rho

Leveraging Prompt-Based Large Language Models: Predicting Pandemic Health Decisions and Outcomes Through Social Media Language

Abstract: We introduce a multi-step reasoning framework using prompt-based LLMs to examine the relationship between social media language patterns and trends in national health outcomes. Grounded in fuzzy-trace theory, which emphasizes the importance of “gists” of causal coherence in effective health communication, we introduce Role-Based Incremental Coaching (RBIC), a prompt-based LLM framework, to identify gists at scale. Using RBIC, we systematically extract gists from subreddit discussions opposing COVID-19 health measures (Study 1). We then track how these gists evolve across key events (Study 2) and assess their influence on online engagement (Study 3). Finally, we investigate how the volume of gists is associated with national health trends like vaccine uptake and hospitalizations (Study 4). Our work is the first to empirically link social media linguistic patterns to real-world public health trends, highlighting the potential of prompt-based LLMs in identifying critical online discussion patterns that can form the basis of public health communication strategies.

The image shows a graphical representation of a three-step process for data analysis. A. Data Collection: it illustrates the collection of data from social media sources, specifically subreddits like r/Covid-19Mask and r/DebateVaccines. There are snippets of posts, such as one saying, "They canceled my membership against my will today because I refused to wear a mask." B. Gist Generation Using the RBIC Framework with GPT-4: this step generates the gist of the collected data, with role-based knowledge generation and incremental coaching shown as parts of the process. A portion of text shows an example output gist: "The cause of canceling the membership was the refusal of the person to wear a mask." C. Granger Causality: the final step tests for Granger causality, a statistical hypothesis test for determining whether one time series is useful in forecasting another. The diagram shows different clusters, possibly of data or outcomes, labeled from 1 to 6. Two datasets are indicated for analysis: a User Behavior Dataset and a Health Outcome Dataset. Overall, the image depicts a structured approach to using GPT-4 to extract key information from data sources and analyze the relationships between variables, in the context of COVID-19 and health measures such as mask-wearing.
We introduce Role-Based Incremental Coaching (RBIC), a Large Language Model (LLM) prompting framework. (A) We collect Reddit datasets focused on communities known for opposing COVID-19 health practices. (B) Guided by Fuzzy-Trace Theory, RBIC extracts cause-effect pairs and formulates gists capturing causal relations in text. (C) Granger-causality tests and data analytics reveal the impact of these gists on community engagement and national health outcomes.
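Step (C) relies on the standard Granger-causality test. As a rough illustration only (not the authors' code), such a test can be run with statsmodels; the series, variable names, and lag choice below are invented:

```python
# Minimal sketch: does a weekly gist-volume series help forecast a national
# health series? Uses the standard Granger-causality test from statsmodels.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Hypothetical aligned weekly series (illustrative random data; in practice
# the series would first be checked/transformed for stationarity).
rng = np.random.default_rng(0)
gist_volume = rng.poisson(lam=50, size=104).astype(float)
vaccine_uptake = rng.normal(loc=100, scale=10, size=104)

# statsmodels expects a 2-D array whose FIRST column is the predicted series
# and whose SECOND column is the candidate cause.
data = np.column_stack([vaccine_uptake, gist_volume])
results = grangercausalitytests(data, maxlag=4)
# Inspect results[lag][0]['ssr_ftest'] for the F-statistic and p-value per lag.
```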

Sungwon In, Erick Krokos, Kirsten Whitley, Chris North, and Yalong Yang (former VT/CS/CHCI)

Evaluating Navigation and Comparison Performance of Computational Notebooks on Desktop and in Virtual Reality

Abstract: The computational notebook serves as a versatile tool for data analysis. However, its conventional user interface falls short of keeping pace with the ever-growing data-related tasks, signaling the need for novel approaches. With the rapid development of interaction techniques and computing environments, there is a growing interest in integrating emerging technologies in data-driven workflows. Virtual reality, in particular, has demonstrated its potential in interactive data visualizations. In this work, we aimed to experiment with adapting computational notebooks into VR and verify the potential benefits VR can bring. We focus on the navigation and comparison aspects as they are primitive components in analysts' workflow. We have designed and implemented a Branching & Merging functionality to improve comparison further. We tested computational notebooks on the desktop and VR, both with and without the added Branching & Merging capability. We found VR significantly facilitated navigation compared to desktop, and the ability to create branches enhanced comparison.

A showcase of the immersive computational notebook system highlights its branching functionalities. Users can interact with windows comprising multiple cells, which are interlinked to signify the order of execution. The figure illustrates branches initiated at various points within the notebook to evaluate multi-level hypotheses. Different colors indicate that each window in a branch contains a different kernel. Once created, branches can be merged to facilitate streamlined comparison of results.
Showcase of the immersive computational notebook system
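To make the Branching & Merging idea concrete, here is a minimal Python sketch of how a branch might fork a kernel's state so alternative analyses can be compared side by side; this illustrates the concept only and is not the paper's implementation:

```python
# Each branch copies the notebook state at a fork point and runs its own
# cells in isolation, so results can be juxtaposed for comparison.
from dataclasses import dataclass, field
from copy import deepcopy

@dataclass
class Branch:
    name: str
    namespace: dict = field(default_factory=dict)  # stands in for a kernel
    cells: list = field(default_factory=list)

    def run(self, code: str):
        self.cells.append(code)
        exec(code, self.namespace)  # executes within this branch only

def fork(parent: Branch, name: str) -> Branch:
    # A new branch starts from a deep copy of the parent's state.
    return Branch(name=name, namespace=deepcopy(parent.namespace))

main = Branch("main")
main.run("threshold = 10")
a = fork(main, "hypothesis-A"); a.run("result = threshold * 2")
b = fork(main, "hypothesis-B"); b.run("result = threshold + 5")
# "Merging" here is simply bringing results together for comparison.
print(a.namespace["result"], b.namespace["result"])  # 20 15
```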

Donghan Hu, Sang Won Lee

Title: Exploring the Effectiveness of Time-Lapse Screen Recordings for Self-Reflection in Work Contexts

Abstract: Effective self-tracking in working contexts empowers individuals to explore and reflect on past activities. Recordings of computer activities contain rich metadata that can offer valuable insight into users' previous tasks and endeavors. However, presenting a simple summary of time usage may not effectively engage users with the data, because it is not contextualized and users may not understand what to do with it. This work explores time-lapse videos as a visual-temporal medium to facilitate self-reflection among workers in productivity contexts. To explore this space, we conducted a four-week study (n=15) to investigate how a computer screen's history of states can help workers recall previous undertakings and gain comprehensive insights via self-reflection. Our results show that watching time-lapse videos can enhance self-reflection more effectively than traditional self-tracking tools by providing contextual clues about users' past activities. The experience with both traditional tools and time-lapse videos resulted in increased productivity. Additionally, time-lapse videos helped users cultivate a positive understanding of their work. We discuss how multimodal cues, such as time-lapse videos, can complement personal informatics tools.

This image shows the workflow of the user study, including a pre-study survey, a setup session, two conditions with RescueTime or time-lapse videos, and a post-study interview. One condition is a baseline condition, and the other is a treatment condition. Note that the order of the two conditions was randomized.
The full procedure of the user study, comprising an initial session, field study, and post-study interview. Note that the order of the two conditions was randomized.
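For readers curious how a time-lapse recorder of screen states might be built, below is a minimal sketch: periodic screenshots are captured with Pillow and assembled into a video with ffmpeg. The libraries, interval, and frame rate are assumptions, not details from the paper:

```python
# Capture one screenshot per interval, then compress the frames into a
# short time-lapse clip for later self-reflection.
import subprocess
import time
from PIL import ImageGrab  # screenshots via Pillow

FRAMES = 30          # number of screenshots to take
INTERVAL_S = 60      # one frame per minute

for i in range(FRAMES):
    ImageGrab.grab().save(f"frame_{i:05d}.png")
    time.sleep(INTERVAL_S)

# At 10 fps playback, ~30 minutes of work becomes a 3-second clip.
subprocess.run(
    ["ffmpeg", "-y", "-framerate", "10", "-i", "frame_%05d.png",
     "-pix_fmt", "yuv420p", "timelapse.mp4"],
    check=True,
)
```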

Kenan Kamel A Alghythee, Adel Hrncic, Karthik Singh, Sumanth Kunisetty, Yaxing Yao, and Nikita Soni

Title: Towards Understanding Family Privacy and Security Literacy Conversations at Home: Design Implications for Privacy Literacy Interfaces

Abstract: Policymakers and researchers have emphasized the crucial role of parent-child conversations in shaping children's digital privacy and security literacy. Despite this emphasis, little is known about the current nature of these parent-child conversations, including their content, structure, and children's engagement during these conversations. This paper presents the findings of an interview study involving 13 parents of children under the age of 13, reflecting on their privacy literacy practices at home. Through qualitative thematic analysis, we identify five categories of parent-child privacy and security conversations and examine parents' perceptions of their children's engagement during these discussions. Our findings show that although parents used different conversation approaches, rule-based conversations were among the most common approaches taken by our participants, with example-based conversations perceived by parents to be effective. We propose important design implications for developing effective privacy educational technologies for families to support parent-child conversations.

Chaoran Chen, Weijun Li, Wenxin Song, Yanfang Ye, Yaxing Yao, Toby Jia-Jun Li

Title: An Empathy-Based Sandbox Approach to Bridge the Privacy Gap among Attitudes, Goals, Knowledge, and Behaviors

Abstract: Managing privacy to reach privacy goals is challenging, as evidenced by the privacy attitude-behavior gap. Mitigating this discrepancy requires solutions that account for both system opaqueness and users' hesitations in testing different privacy settings due to fears of unintended data exposure. We introduce an empathy-based approach that allows users to experience how privacy attributes may alter system outcomes in a risk-free sandbox environment from the perspective of artificially generated personas. To generate realistic personas, we introduce a novel pipeline that augments the outputs of large language models (e.g., GPT-4) using few-shot learning, contextualization, and chain of thoughts. Our empirical studies demonstrated the adequate quality of generated personas and highlighted the changes in privacy-related applications (e.g., online advertising) caused by different personas. Furthermore, users demonstrated cognitive and emotional empathy towards the personas when interacting with our sandbox. We offered design implications for downstream applications in improving user privacy literacy.

The image is a flowchart with a black background and yellow borders, illustrating the process of personal data management and its use in system personalization. It begins with the 'User' on the left, which is connected to the first step labeled 'Persona generation' under the header 'Empathize'. This leads to 'Persona', representing a set of user-like data. Below this process is a note saying 'Experience the influence of persona's privacy data on system outcomes to acquire privacy knowledge'. The 'Persona' then feeds into 'Personal data replacement', indicating the use of the persona data to influence privacy-sensitive applications. This process points to 'System', symbolized by a square. Below the 'System', there is an arrow pointing back to 'Persona' labeled 'Recommend', with a note 'System outcomes (e.g., ads)'. This represents the feedback loop of the system's output based on the persona data, which includes personalized content like advertisements.
An empathy-based approach where users interact with online services by using the identity of different personas in a risk-free sandbox without leaking their real personal data.
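As a hedged sketch of the persona-generation step, the snippet below shows how few-shot examples and a step-by-step instruction might be combined in a call to an LLM API; the prompts, example personas, and model choice are illustrative assumptions, not the paper's pipeline:

```python
# Illustrative persona generation via few-shot prompting with the OpenAI
# chat API; the system prompt and examples are invented for this sketch.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

FEW_SHOT = [
    {"role": "user", "content": "Generate a persona: 34, nurse, rural town."},
    {"role": "assistant", "content": "Maria, 34, a nurse who shops online "
     "weekly, reuses passwords, and shares location with family apps."},
]

def generate_persona(brief: str) -> str:
    messages = (
        [{"role": "system", "content":
          "You create realistic personas with privacy-relevant attributes. "
          "Reason step by step about lifestyle, then output the persona."}]
        + FEW_SHOT
        + [{"role": "user", "content": f"Generate a persona: {brief}"}]
    )
    resp = client.chat.completions.create(model="gpt-4", messages=messages)
    return resp.choices[0].message.content

print(generate_persona("19, college student, avid gamer"))
```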

Lanjing Liu, Chao Zhang, Zhicong Lu

Title: Wrist-bound Guanxi, Jiazu, and Kuolie: Unpacking Chinese Adolescent Smartwatch-Mediated Socialization

Abstract: Adolescent peer relationships, essential for their development, are increasingly mediated by digital technologies. As this trend continues, wearable devices, especially smartwatches tailored for adolescents, are reshaping their socialization. In China, smartwatches like XTC have gained wide popularity, introducing unique features such as "Bump-to-Connect" and exclusive social platforms. Nonetheless, how these devices influence adolescents' peer experience remains unknown. Addressing this, we interviewed 18 Chinese adolescents (age: 11 -- 16), discovering a smartwatch-mediated social ecosystem. Our findings highlight the ice-breaking role of smartwatches in friendship initiation and their use for secret messaging with local peers. Within the online smartwatch community, peer status is determined by likes and visibility, leading to diverse pursuit activities (i.e., chu guanxi, jiazu, kuolie) and negative social dynamics. We discuss the core affordances of smartwatches and Chinese cultural factors that influence adolescent social behavior and offer implications for designing future wearables that responsibly and safely support adolescent socialization.

The image is a colorful diagram depicting the social dynamics involved in local and online peer interaction among adolescents using smartwatches and phone-based social platforms. On the left, under "SOCIALIZE WITH LOCAL PEERS," there are two green blocks indicating "Smartwatch-Facilitated Friendship Initiation" and "Smartwatch-Mediated Peer Communication." These flow into a central blue block, "Adolescents with Smartwatches," which is connected to the motivation to buy smartwatches. On the right, under "SOCIALIZE WITH ONLINE PEERS," an equation formed by orange blocks shows that "Likes" plus "Visibility" equals "Peer Status," which is linked to "Privilege." Below the equation are three brown blocks with Chinese terms representing different pursuit activities: "Chu Guanxi" (处关系), "Jiazu" (家族), and "Kuolie" (扩列). Below the central section, a horizontal purple line labeled "NEGATIVE SOCIAL DYNAMICS" includes blocks denoting "Discrimination," "Drama," "Cyberbullying," and "Flame War." This line is connected to the bottom section, labeled "PHONE-BASED SOCIAL PLATFORMS," with icons for "IM Platforms" and "Social Media." The flow of the diagram suggests that using smartwatches for socializing can motivate purchases and influence online peer status, but also that negative social dynamics such as discrimination and cyberbullying accompany both local and online socialization through these devices.
The social ecosystem of Smartwatch-Mediated Socialization, encompassing socialization with local peers, socialization with online peers, and negative social dynamics

Workshops Organized by CHCI Affiliates

Workshop on Human-Notebook Interactions

Organizers: Jesse Harden, Rebecca Faust, Katherine E. Isaacs, Nurit Kirshenbaum, Chris North, April Wang, John Wenskovitch, and Jian Zhao.

Abstract: Computational notebooks have become a ubiquitous medium for much data science work. At their best, computational notebooks actualize Knuth's concept of literate programming with their ability to weave code, text, and outputs, such as visuals, into a computational narrative. These notebooks support incremental and iterative analysis, explanation of an analyst's thoughts and processes, and sharing of code, text, and visuals in one document.

However, notebook systems are in their infancy in terms of user interface and interaction design. Recent research describes several pain points users of computational notebooks have, including effectively managing more complex, non-linear data science and analysis workflows, messiness, version control, debugging issues, making the most of larger, widescreen displays, and more.

In this workshop, we seek to explore, catalogue, and innovate with respect to issues of user interface and interaction design of computational notebooks.

Workshop on Sustaining Scalable Sustainability: Human-Centered Green Technology for Community-wide Carbon Reduction

Organizers: Vikram Mohanty (VT CS/CHCI alum), among others. 

Abstract: Global CO₂ emissions are on the rise, driven by transportation, agriculture, and energy production activities. While corporations and policymakers play a crucial role in reducing greenhouse gas emissions, our workshop pivots the focus towards individuals and communities as essential agents of change. Individual consumers and private households, responsible for a significant portion of global emissions, often underestimate their impact on climate change and their potential for mitigation. This workshop aims to address these gaps by encouraging carbon reduction, promoting carbon literacy, and empowering local communities to take meaningful action.

Workshop Papers

Matthew Wilchek, Linhan Wang, Feras A. Batarseh, and Kurt Luther

KHAIT: K-9 Handler Artificial Intelligence Teaming for Collaborative Sensemaking

Abstract: Following natural and manmade disasters, urban search and rescue (USAR) teams working with canines (i.e., K-9s) face time delays and communication hurdles. This paper proposes a method to improve USAR sensemaking by incorporating object-detection Artificial Intelligence (AI) and Augmented Reality (AR), aiming to close the communication gap between USAR animals and their handlers. SAR dogs are equipped with an AI-powered camera and edge computing hardware, while their handlers wear HoloLens 2 AR headsets. This setup allows for advanced object detection from the dogs' perspective, with captured data transmitted to handlers' headsets as hologram indicators to enhance survivor localization. This multidisciplinary strategy, merging human-in-the-loop (HITL), canine, and AI capabilities, aims to transform SAR missions, ensuring faster and safer rescues. At the CHI sensemaking workshop, we hope to solicit feedback regarding our proposed design and evaluation, which will have broad relevance to attendees interested in human-AI and animal-computer interaction, AR, and/or disaster response.

This image shows an indoor scene with a dog lying in a metal crate on the left. The crate is covered with a blue blanket on top, and inside, there is a light blue cloth with orange trim where the dog is resting. The dog appears to be wearing a red harness. In the background, there is a dining area with a wooden table covered by a cream tablecloth and surrounded by chairs. On the right, a computer monitor displays an image of a person in a virtual environment, suggesting someone is possibly engaged in a virtual meeting or playing a game. The room has white walls, and there's a closed door on the far left.
Photograph of the initial KHAIT prototype
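A minimal sketch of the edge side of such a pipeline appears below: an off-the-shelf detector runs on the dog-mounted camera feed and forwards person detections toward the handler's headset. The model, network endpoint, and message format are hypothetical stand-ins, not KHAIT's actual design:

```python
# Run a generic pretrained detector on the camera feed and send person
# detections to an assumed headset endpoint as JSON over UDP.
import json
import socket
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                     # generic pretrained detector
cap = cv2.VideoCapture(0)                      # dog-mounted camera
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
HEADSET = ("192.168.1.50", 9000)               # hypothetical HoloLens endpoint

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    for box in model(frame)[0].boxes:
        if model.names[int(box.cls)] == "person":
            x1, y1, x2, y2 = box.xyxy[0].tolist()
            msg = {"label": "possible survivor", "bbox": [x1, y1, x2, y2],
                   "conf": float(box.conf)}
            sock.sendto(json.dumps(msg).encode(), HEADSET)
```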

Poorvesh Dongre, Majid Behravan, Kunal Gupta, Mark Billinghurst, and Denis Gracanin

Title: Integrating Physiological Data with Large Language Models for Empathic Human-AI Interaction. PhysioCHI24 Workshop

Abstract: This paper explores enhancing empathy in Large Language Models (LLMs) by integrating them with physiological data that can be used to interpret users' mental and emotional states. We propose a multi-faceted approach: (1) constructing evolving user profiles using real-time physiological data, (2) developing deep learning models that use physiological data for recognizing mental and emotional states, and (3) integrating the predictive models with LLMs for empathic interaction. We showcase the application of this approach in an Empathic LLM (EmLLM) chatbot for stress monitoring and control. We also discuss the results of a pilot study that evaluates this EmLLM chatbot based on its ability to accurately predict user stress, provide human-like responses, and assess the therapeutic alliance with the user.

The image is a flowchart illustrating the process of using physiological data in conjunction with deep learning to power a chatbot built on a customized large language model (LLM). It shows the following steps: 1. 'Physiological Data' is collected, including EDA (Electrodermal Activity), BVP (Blood Volume Pulse), EEG (Electroencephalogram), and ST (Skin Temperature). 2. This data is then sent to a 'Database'. 3. The information from the database is processed using 'Deep Learning' techniques, represented by a network of interconnected nodes. 4. The outcome of the deep learning process is fed into a 'Customized LLM' (Large Language Model). 5. The LLM then informs the responses of a 'Chatbot', depicted as a friendly robot face. 6. The chatbot interacts with a user, represented by a silhouette of a person's head and shoulders. The diagram indicates a loop from the user back to the physiological data, suggesting that the user's reactions are continuously monitored and used to refine the chatbot's interactions.
The Biocybernetic Loop of Physiology-Driven Empathic Large Language Models (EmLLMs) for Mental Stress Monitoring and Control
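To illustrate the biocybernetic loop in code, here is a minimal sketch in which a placeholder stress predictor conditions an LLM's system prompt; the threshold rule, prompts, and sampling cadence are invented, standing in for the paper's learned models:

```python
# Sketch of the loop: sample physiology, predict stress, adapt the reply.
import time
from openai import OpenAI

client = OpenAI()

def read_physiology() -> dict:
    # Placeholder: would stream EDA/BVP/EEG/skin temperature from a wearable.
    return {"eda": 0.82, "bvp": 72.0, "skin_temp": 33.1}

def predict_stress(sample: dict) -> str:
    # Placeholder threshold rule; the paper uses deep models instead.
    return "high" if sample["eda"] > 0.7 else "low"

def empathic_reply(user_msg: str, stress: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content":
             f"The user's physiological data suggests {stress} stress. "
             "Respond with empathy and, if stress is high, offer a brief "
             "calming exercise."},
            {"role": "user", "content": user_msg},
        ],
    )
    return resp.choices[0].message.content

for _ in range(3):  # closing the loop: re-sample and adapt each minute
    stress = predict_stress(read_physiology())
    print(empathic_reply("I can't focus on this report.", stress))
    time.sleep(60)
```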

Daniel Vargas-Diaz, Junghoon Chung, Donghan Hu, Sol Lim, Sang Won Lee

Title: Developing Context-Aware Sit-Stand Desks for Promoting Healthy and Productive Behaviors. Workshop on Office Wellbeing by Design.

Abstract: To mitigate the risk of chronic diseases caused by prolonged sitting, sit-stand desks are promoted as an effective intervention to foster healthy behaviors among knowledge workers by allowing periodic posture switching between sitting and standing. However, conventional systems let users manually switch modes, and some research has explored automated notification systems with pre-set time intervals. While regular notifications can promote healthy behaviors, they can also act as external interruptions that hinder working productivity. Notably, knowledge workers are known to be reluctant to change their physical postures when concentrating. To address these issues, we propose considering work context, based on screen activities, to encourage computer users to alternate their postures when disruption can be minimized, promoting healthy and productive behaviors. To that end, we are building a context-aware sit-stand desk and have completed two modules: an application that monitors ongoing activities on users' computers, and a control module that measures the height of the sit-stand desk for data collection and allows the computer to control the desk height. The collected data includes computer activities, measured desk height, and users' willingness to switch to standing mode, and will be used to build an LSTM prediction model that suggests optimal time points for posture changes, accompanied by appropriate desk heights. In this work, we acknowledge previous relevant research, outline ongoing deployment efforts, and present our plan to validate the effectiveness of our approach via user studies.

The image shows a sequence of four items, suggesting a process of wireless communication between two devices. On the left, there's a laptop computer, followed by a Bluetooth symbol indicating wireless connectivity. The next item is a Raspberry Pi computer, which appears to be receiving or sending a Bluetooth signal. On the far right, there is a schematic representation of a building, likely suggesting remote control or monitoring of building systems via the Raspberry Pi. The overall image represents the connection diagram where we use Bluetooth technology to interface a laptop with a Raspberry Pi for automation or remote operation of a smart stand-up desk.
Connection diagram between the Raspberry Pi and the stand-up desk
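As an illustration of how a Raspberry Pi control module might drive a Bluetooth-enabled desk, the sketch below uses the bleak BLE library; the device address, characteristic UUID, and command bytes are hypothetical, since real desks vary by vendor:

```python
# Raise the desk for a few seconds over BLE, then stop. All identifiers
# here are placeholders for whatever the actual desk controller exposes.
import asyncio
from bleak import BleakClient

DESK_ADDRESS = "AA:BB:CC:DD:EE:FF"                  # hypothetical address
MOVE_CHAR = "99fa0002-338a-1024-8a49-009c0215f78a"  # hypothetical UUID
CMD_UP, CMD_STOP = b"\x47\x00", b"\xff\x00"         # hypothetical commands

async def raise_desk(seconds: float) -> None:
    async with BleakClient(DESK_ADDRESS) as client:
        await client.write_gatt_char(MOVE_CHAR, CMD_UP)
        await asyncio.sleep(seconds)
        await client.write_gatt_char(MOVE_CHAR, CMD_STOP)

asyncio.run(raise_desk(3.0))
```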

Daniel Vargas-Diaz, Jisun Kim, Sulakna Karunaratna, Maegan Reinhardt, Caroline Hornburg, Koeun Choi, Sang Won Lee

Title: TaleMate: Exploring the use of Voice Agents for Parent-Child Joint Reading Experiences. Workshop on Child-centered AI Design

Abstract: Joint reading is a key activity for early learners, with caregiver-child interactions such as questioning and feedback playing an essential role in children's cognitive and linguistic development. However, for some parents, actively engaging children in storytelling can be challenging. To address this, we introduce TaleMate—a platform designed to enhance shared reading by leveraging conversational agents that have been shown to support children's engagement and learning. It features eight unique voice agents, each with a distinct tone and style, capable of portraying specific characters in a story. TaleMate enables a dynamic, participatory reading experience where parents and children can choose which characters they wish to embody. Feedback and suggestions from parents and children have informed the design of TaleMate, highlighting its effectiveness and potential to cater to user-specific needs. Moreover, the system navigates the challenges posed by digital reading tools, such as decreased parent-child interaction, and builds upon the benefits of traditional and digital reading techniques. TaleMate offers an innovative approach to fostering early reading habits, bridging the gap between traditional joint reading practices and the digital reading landscape.

This image is a screenshot of the screen that serves as the interface where users and voice agents interact to read the story. The interface features 10 key elements. Element 1 is a button containing an arrow, designed to let users navigate back within the system. Element 2 is an image that corresponds to the current page of the book. Element 3 is a column listing the names of the characters; each line in this column is assigned to one sentence, which is further indicated by Element 6. Elements 4 and 5 are icons representing the role and the character, respectively; these serve to remind users of their assigned characters. Element 7 is a line of text highlighted in green, signaling that it is one of the parents' turns to read that sentence. Elements 8 to 10 are buttons that function as the 'Next' and 'Back' options. If the user is at the end of a page, the 'Next' button changes to 'Next Page.' Similarly, if they are at the beginning of a page, the 'Back' button changes to 'Last Page.'
Reading screen where the user and voice agents alternate turns to read the story
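The turn-taking idea can be sketched with an offline text-to-speech engine: character lines are synthesized in distinct voices while parent lines pause for the human reader. The script, voice indices, and library choice are illustrative assumptions, not TaleMate's implementation:

```python
# Alternate turns between a human reader and synthesized character voices
# using the offline pyttsx3 engine.
import pyttsx3

engine = pyttsx3.init()
voices = engine.getProperty("voices")

SCRIPT = [
    ("Narrator", "Once upon a time, a fox met a crow."),
    ("Parent",   "What do you think the fox wanted?"),  # read aloud by parent
    ("Fox",      "What a lovely voice you must have!"),
]
VOICE_FOR = {"Narrator": 0, "Fox": 1}  # assumed installed system voices

for character, line in SCRIPT:
    if character == "Parent":
        input(f'Your turn, read: "{line}"  (press Enter when done)')
        continue
    engine.setProperty("voice", voices[VOICE_FOR[character]].id)
    engine.say(line)
    engine.runAndWait()
```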

Andrew Jelson, Sang Won Lee

Title: An empirical study to understand how students use ChatGPT for writing essays and how it affects their ownership. In2Writing Workshop

Abstract: As large language models (LLMs) become more powerful and ubiquitous, systems like ChatGPT are increasingly used by students to help them with writing tasks. To better understand how these tools are used, we investigate how students might use an LLM for essay writing, for example, by studying the queries asked of ChatGPT and the responses that ChatGPT gives. To that end, we plan to conduct a user study that will record participants' writing processes and present them with the opportunity to use ChatGPT as an AI assistant. The study's findings will help us understand how these tools are used and how practitioners, such as educators and essay readers, should approach writing education and evaluation in light of such tools.

The image shows the prompt editor for our application. The top section has two tabs that let users switch between the writing prompt and ChatGPT. The next section is a writing prompt describing the descriptive essay users are asked to write. Lastly, the bottom section of the figure is a text box where users write their response, along with a submission button.
The main page of our web app: the prompt editor.
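A study like this needs instrumentation that records every query and response. Below is a minimal, assumed sketch of such logging around an LLM API call; the log format and function names are ours, not the study's:

```python
# Append each assistant query and its response to a timestamped JSONL log
# for later analysis of students' writing processes.
import json
import time
from openai import OpenAI

client = OpenAI()
LOG_PATH = "session_log.jsonl"

def assisted_query(participant_id: str, prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4", messages=[{"role": "user", "content": prompt}]
    )
    answer = resp.choices[0].message.content
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps({
            "participant": participant_id,
            "t": time.time(),
            "query": prompt,
            "response": answer,
        }) + "\n")
    return answer
```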

Adnan Abbas, Sang Won Lee

Title: PITCH: Productivity and Mental Well-being Coaching through Daily Conversational Interaction

Abstract: Efficient task planning is essential for productivity and mental well-being, yet individuals often struggle to create realistic plans and to reflect upon their productivity. Leveraging advances in artificial intelligence (AI), conversational agents have emerged as a promising tool for enhancing productivity. Our work focuses on externalizing plans through conversation, aiming to solidify intentions and foster focused action, thereby positively impacting users' productivity and mental well-being. We share our plan to design a conversational agent that offers insightful questions and reflective prompts to increase plan adherence by leveraging the social interactivity of natural conversations. Previous studies have shown the effectiveness of such agents, but many interventions remain static, leading to decreased user engagement over time. To address this limitation, we propose a novel rotation and context-aware prompting strategy, providing users with varied interventions daily. Our system, PITCH, utilizes large language models (LLMs) to facilitate externalization of and reflection on daily plans. Through this study, we investigate the impact of externalizing tasks with conversational agents on productivity and mental well-being, and the effectiveness of a rotation strategy in maintaining user engagement.

The image shows example morning and evening conversations on Slack between PITCH (the conversational system) and Adnan (the user) in two different scenarios.
Morning and Evening example conversation
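The rotation strategy can be sketched simply: cycle through a pool of intervention styles by day so users see varied prompts, optionally filled with context. The styles and context fields below are invented for illustration, not PITCH's actual prompts:

```python
# Rotate through distinct intervention styles day by day so users do not
# see the same prompt twice in a row, folding in simple context.
from datetime import date

INTERVENTIONS = [
    "Walk me through your top three tasks for today.",
    "Yesterday you planned {n} tasks; which ones actually got done?",
    "What is one thing that might derail today's plan?",
    "Rate yesterday's focus from 1 to 5 and say why.",
]

def daily_prompt(context: dict) -> str:
    style = INTERVENTIONS[date.today().toordinal() % len(INTERVENTIONS)]
    return style.format(**context) if "{" in style else style

print(daily_prompt({"n": 4}))
```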

Xinyi Zhang, Jake Frohich, Pamela J. Wisniewski, Jin-Hee Cho, Lifu Huang, Sang Won Lee

Title: Generating A Crowdsourced Conversation Dataset to Combat Cybergrooming

Abstract: Cybergrooming is emerging as a growing threat to adolescent safety and mental health. One way to combat cybergrooming is to leverage predictive artificial intelligence (AI) to detect predatory behaviors in social media. However, these methods can encounter challenges like false positives and negative implications such as privacy concerns. Another, complementary strategy involves using generative artificial intelligence to empower adolescents by educating them about predatory behaviors. To this end, we envision developing state-of-the-art conversational agents that simulate conversations between adolescents and predators for educational purposes. Yet one key challenge is the lack of a dataset to train such conversational agents. In this position paper, we present our motivation for empowering adolescents to cope with cybergrooming. We propose to develop large-scale, authentic datasets through an online survey targeting adolescents and parents. We discuss the background behind our motivation and the proposed design of the survey, which situates participants in simulated cybergrooming scenarios and then collects their authentic responses.

This image is an example of our survey scenarios. It is a snippet of a conversation between a predator (gray dialog bubbles) and an adolescent (blue dialog bubbles). The conversation is generally initiated by the predator, and we blank out the adolescent's last response in order to collect authentic responses from participants.
Example survey scenario with the adolescent's last response left blank
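As a minimal sketch of how such a survey item might be assembled, the snippet below withholds the adolescent's final turn so that participants supply it themselves; the conversation text is invented and deliberately innocuous:

```python
# Turn a scripted conversation into a survey item by dropping the
# adolescent's final turn and asking the participant to respond.
conversation = [
    ("predator", "hey, you seem really mature for your age"),
    ("adolescent", "um thanks I guess"),
    ("predator", "do your parents check your phone?"),
    ("adolescent", "___"),  # this turn is blanked in the survey
]

def to_survey_item(turns):
    shown = [f"{who}: {text}" for who, text in turns[:-1]]
    return "\n".join(shown + ["How would you respond next?"])

print(to_survey_item(conversation))
```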

Ashley Zhang, Xiaohang Tang, Yan Chen, Steve Oney

Title: Making Code Understandable at Scale for Sensemaking in Introductory Programming Education. Workshop on Sensemaking

Abstract: The increasing need for programming skills has driven rapid growth in the number of CS learners in recent years. In large programming courses, instructors use programming exercises to give students hands-on opportunities to practice. However, it is challenging to make sense of students' code at scale because of the complex collection of information and the wide variation among submissions.

By using students' code submission data in meaningful ways, instructors can gain insight into students' thought processes, misconceptions, and common trends in class. With limited instructional resources, it is impossible to provide tailored feedback to every individual student. Tools have been designed to facilitate instructors' sensemaking of complex collections of students' code submissions by combining techniques from Human-Computer Interaction, Visualization, and Natural Language Processing.

This paper introduces two systems we designed for instructors to make sense of students' code at scale, discusses gaps in instructors' sensemaking processes, and explores future design opportunities to address those gaps.
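One generic way to surface common trends across many submissions, sketched below under our own assumptions (not the paper's systems), is to vectorize code text and cluster it so instructors review a handful of clusters rather than hundreds of files:

```python
# Cluster toy submissions of the same exercise by textual similarity.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

submissions = [
    "def total(xs):\n    return sum(xs)",
    "def total(xs):\n    s = 0\n    for x in xs: s += x\n    return s",
    "def total(xs):\n    return sum(x for x in xs)",
    "def total(xs):\n    s = 0\n    i = 0\n    while i < len(xs):\n"
    "        s += xs[i]; i += 1\n    return s",
]

X = TfidfVectorizer(token_pattern=r"\w+").fit_transform(submissions)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for code, label in zip(submissions, labels):
    print(label, code.splitlines()[1].strip())  # cluster id + first body line
```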

Remote Doctoral Consortium

Poorvesh Dongre

Title: Physiology-Driven Empathic Large Language Models (EmLLMs) for Mental Health Support

Abstract: Wearable devices show promise in monitoring and managing mental health, but gaps exist in accurately predicting users' mental states and in cognitively engaging with users to provide mental health support based on wearable data. In this proposal, I present the concept of physiology-driven Empathic Large Language Models (EmLLMs) for mental health support. EmLLMs monitor users and their surrounding environment using wearable devices to predict their mental and emotional states and interact with them based on these states. I present the application of this approach to monitoring and managing excess stress in the workplace. To improve the accuracy of stress prediction, I developed a novel Science-Guided Machine Learning (SGML) model that automatically extracts features from raw wearable data. To engage with users cognitively, I developed an EmLLM chatbot that provides psychotherapy based on predicted user stress. I present the SGML model's preliminary findings and results from a pilot user study that evaluates the EmLLM chatbot.

The image is a diagram detailing a system for providing mental health support. The system is divided into three main components: 1. Data collection: this is represented by a figure in the center with three types of data surrounding it: 'Cognitive Data', 'Physiological Data', and 'Environmental Data'. These are collected from the 'Mind', 'Body', and 'Surrounding Environment' respectively, as indicated by dotted lines connecting the figure to the data types. 2. Processing: the collected data feeds into a box on the right labeled 'LLMs', for Large Language Models, which is connected to 'Machine Learning', suggesting that the data is processed and analyzed using machine learning techniques to inform the system's responses. 3. Interaction: the output from the machine learning models is then used by an 'Empathic Chatbot', represented by a smiling robot icon and designed to interact with individuals in need of mental health support. Above the entire process is the title 'MENTAL HEALTH SUPPORT', indicating that the overall objective of the system is to provide mental health assistance by integrating data collection, machine learning, and empathic chatbot interaction.
Overview of the proposed approach of Physiology-Driven Empathic Large Language Models (EmLLMs) for mental health support
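As a hedged illustration of the feature-extraction step such a stress model needs as input, the sketch below computes simple windowed statistics over a synthetic EDA trace; the window length and features are our assumptions, not the proposal's SGML model:

```python
# Mean, standard deviation, and slope per non-overlapping window of a
# (synthetic) electrodermal activity trace.
import numpy as np

def window_features(signal: np.ndarray, fs: int, win_s: int = 30):
    """Compute per-window features for a signal sampled at fs Hz."""
    step = fs * win_s
    feats = []
    for start in range(0, len(signal) - step + 1, step):
        w = signal[start:start + step]
        slope = np.polyfit(np.arange(len(w)), w, 1)[0]
        feats.append([w.mean(), w.std(), slope])
    return np.array(feats)

eda = np.random.default_rng(1).normal(0.5, 0.05, size=4 * 60 * 4)  # 4 min @ 4 Hz
print(window_features(eda, fs=4).shape)  # (8, 3): 8 windows, 3 features each
```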

Late-Breaking Work

Lanjing Liu, Lan Gao, Nikita Soni, Yaxing Yao

Title: Integrating Family Privacy Education and Informal Learning Spaces: Characteristics, Challenges and Design Opportunities

Abstract: Children face increasing privacy risks and the need to navigate complex choices, while privacy education remains insufficient due to its limited scope and limited family involvement. We advocate for informal learning spaces (ILS) as a pioneering channel for family-based privacy education, given their established role in holistic technology and digital literacy education that specifically targets family groups. In this paper, we conducted an interview study with five families to understand current approaches to privacy education and engagement with ILS for family-based learning. Our findings highlight the transformative potential of ILS in family privacy education, considering existing practices and challenges. We discuss the reasons for family-based privacy education in ILS and identify potential design opportunities. Additionally, we outline our future work, which involves expanding participant involvement and conducting co-design activities with family groups to create design prototypes.