
Crowdsensus

August 6, 2018

About

End-user elicitation studies are a popular design method, but their data require substantial time and effort to analyze. In this paper, we present Crowdsensus, a crowd-powered tool that enables researchers to efficiently analyze the results of elicitation studies using subjective human judgment and automatic clustering algorithms. In addition to our own analysis, we asked six expert researchers with experience running and analyzing elicitation studies to analyze an end-user elicitation dataset of 10 functions for operating a web browser, each with 43 voice commands elicited from end users, for a total of 430 voice commands. We used Crowdsensus to gather similarity judgments of these same 430 commands from 410 online crowd workers. The crowd outperformed the experts, arriving at the same results for seven of eight functions and resolving a function on which the experts failed to agree. Using Crowdsensus was also about four times faster than using experts.
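
As a rough illustration (not the actual Crowdsensus implementation), the sketch below shows one way pairwise crowd similarity votes could be clustered and scored for agreement. The names commands, votes, and threshold, the average-linkage clustering, and the squared-proportion agreement formula are all illustrative assumptions.

# A minimal sketch, assuming `votes` maps a pair of command indices to the
# fraction of crowd workers who judged that pair of commands "similar".
from collections import Counter
from itertools import combinations

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def cluster_commands(commands, votes, threshold=0.5):
    """Group commands whose crowd-judged similarity exceeds `threshold`."""
    n = len(commands)
    dist = np.zeros((n, n))
    for i, j in combinations(range(n), 2):
        similarity = votes.get((i, j), 0.0)       # share of "similar" votes
        dist[i, j] = dist[j, i] = 1.0 - similarity
    # Average-linkage hierarchical clustering over the condensed distances.
    tree = linkage(squareform(dist), method="average")
    return fcluster(tree, t=1.0 - threshold, criterion="distance")

def agreement_score(labels):
    """One common elicitation agreement formula: sum of (|group| / |all|)^2."""
    n = len(labels)
    return sum((size / n) ** 2 for size in Counter(labels).values())

Under these assumptions, commands whose pairwise "similar" vote share stays above the threshold end up in the same group, and a function whose commands all land in one group would get an agreement score of 1.0.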

 

Research Areas

Crowdsourcing; Design; AI. 

 

My Involvement 

I built a tool that analyzes the results of an end-user elicitation study using subjective similarity-judgment votes from online crowd workers.

The tool was created using HTML, JavaScript, PHP, and MySQL.

I designed a series of experiments to test the validity and effectiveness of the tool.

I handled all the data collection and analysis.

I was the lead author on the UIST 2018 paper. 

 

Affiliation 

University of Washington 

 

Publications

Ali, A.X., Morris, M.R. and Wobbrock, J.O. (2018). Crowdsourcing similarity judgments for agreement analysis in end-user elicitation studies. Proceedings of the ACM Symposium on User Interface Software and Technology (UIST '18). Berlin, Germany (October 14-17, 2018). New York: ACM Press. To appear.

 
