
Kurt Luther's Crowd Intelligence Lab Will Participate in HCOMP and CSCW 2019 with Multiple Papers and Presentations

At the HCOMP 2019 (crowdsourcing) conference, October 28-30, Kurt Luther, assistant professor of computer science, and six students are attending and presenting work from his lab:

HCOMP 2019 Full Paper

Second Opinion: Supporting Last-Mile Person Identification with Crowdsourcing and Face Recognition

AI-based face recognition technologies often present a shortlist from which a human expert must select correct match(es) while avoiding false positives, which we term the “last-mile problem.” We propose Second Opinion, a web-based software tool that employs a novel crowdsourcing workflow to assist experts in solving the last-mile problem. We evaluated Second Opinion with a mixed-methods lab study involving 10 experts and 300 crowd workers who collaborate to identify people in historical photos. We found that crowds can eliminate 75% of false positives from the highest confidence candidates suggested by face recognition.
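To make the workflow concrete, here is a minimal sketch of the last-mile filtering idea: crowd workers vote on each candidate in the face recognition shortlist, and candidates lacking majority support are removed before the expert reviews the rest. The data structures, agreement threshold, and aggregation rule are illustrative assumptions, not the actual Second Opinion system.

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        person_id: str
        model_confidence: float  # score from the face recognition model
        crowd_votes: list        # True = "same person", False = "different person"

    def filter_shortlist(shortlist, min_agreement=0.6):
        """Keep only candidates that a majority of crowd workers endorse."""
        kept = []
        for cand in shortlist:
            if not cand.crowd_votes:
                continue  # no crowd signal yet; leave for the expert
            agreement = sum(cand.crowd_votes) / len(cand.crowd_votes)
            if agreement >= min_agreement:
                kept.append(cand)
        # Present survivors to the expert, highest model confidence first.
        return sorted(kept, key=lambda c: c.model_confidence, reverse=True)

    # Hypothetical shortlist for one historical photo.
    shortlist = [
        Candidate("soldier_041", 0.92, [True, True, False, True, True]),
        Candidate("soldier_187", 0.88, [False, False, True, False, False]),
    ]
    print([c.person_id for c in filter_shortlist(shortlist)])  # ['soldier_041']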

At the CSCW 2019 (social computing) conference, November 9-13, Kurt Luther and three students are attending and presenting work from his lab:

CSCW 2019 Full Papers

GroundTruth: Augmenting Expert Image Geolocation with Crowdsourcing and Shared Representations

In this paper, we introduce the concept of shared representations for crowd–augmented expert work, focusing on the complex sensemaking task of image geolocation performed by professional journalists and human rights investigators. We built GroundTruth, an online system that uses three shared representations—a diagram, grid, and heatmap—to allow experts to work with crowds in real time to geolocate images. Our mixed-methods evaluation with 11 experts and 567 crowd workers found that GroundTruth helped experts geolocate images, and revealed challenges and success strategies for expert–crowd interaction. 
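The grid and heatmap representations suggest a simple aggregation scheme. The sketch below, with an invented grid, cell coordinates, and a hypothetical build_heatmap helper, shows one way crowd judgments over a shared grid could be combined into a heatmap for the expert; it is not GroundTruth's actual interface or data model.

    from collections import Counter

    def build_heatmap(worker_reports):
        """Per grid cell, the fraction of workers who marked it plausible."""
        counts = Counter()
        for cells in worker_reports:
            counts.update(cells)
        n = max(len(worker_reports), 1)
        return {cell: votes / n for cell, votes in counts.items()}

    # Each report is the set of (row, col) cells one worker flagged as a
    # plausible location for the photo. Coordinates here are made up.
    reports = [
        {(2, 3), (2, 4)},
        {(2, 3), (7, 7)},
        {(2, 3), (2, 4), (3, 3)},
    ]
    heatmap = build_heatmap(reports)
    best_cell = max(heatmap, key=heatmap.get)
    print(f"strongest crowd agreement at grid cell {best_cell}")  # (2, 3)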

Dropping the Baton? Understanding Errors and Bottlenecks in a Crowdsourced Sensemaking Pipeline

In this paper, we conduct a series of studies with 325 crowd workers using a crowd sensemaking pipeline to solve a fictional terrorist plot, focusing on understanding why errors and bottlenecks happen and how they propagate. We classify types of crowd errors and show how the amount and quality of input data influence worker performance. We conclude by suggesting design recommendations for integrated crowdsourcing systems and speculating how a complementary top-down path of the pipeline could refine crowd analyses.
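As a back-of-the-envelope illustration of why handoffs matter, the toy model below compounds per-stage error rates across a staged pipeline. The stage names and error rates are invented for illustration and are not figures from the paper.

    # Toy model: probability an item survives every handoff intact.
    stages = [
        ("extract clues", 0.10),    # invented per-stage error rates
        ("link evidence", 0.15),
        ("form hypotheses", 0.20),
    ]

    def pipeline_accuracy(stages):
        accuracy = 1.0
        for name, error_rate in stages:
            accuracy *= 1.0 - error_rate
            print(f"after '{name}': {accuracy:.0%} of items still correct")
        return accuracy

    pipeline_accuracy(stages)  # ends near 61%: modest errors compound downstream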
