The purpose of my research is to help people collaboratively build insight around public policy concerns. In practice, I develop and evaluate public engagement technology through controlled experiments and in-the-wild studies, and then synthesize what participants convey during that design research into policy analysis and recommendations. To guide my system design research, I draw on theory about how groups socialize their members and deliberate policy issues.
During my doctoral work at Cornell University, I have identified the following three questions as a way to organize a program of research around the design of public engagement technology:
Prior to joining Cornell, I worked at the RAND Corporation, where I was fortunate to have exceptional mentors and was encouraged to study a wide range of public policy issues, from the design of youth summer learning programs to predictive policing techniques.
Organizations often strive to build a shared understanding of complex problems. Design competitions provide a compelling way to create incentives and infrastructure for gathering insights about a problem space. In this paper, we present an analysis of a two-month civic design competition focused on transportation challenges in a major US city. We examine how the event structure, discussion platform, and participant interactions affected how a community collectively discussed design constraints and proposals.
Inspired by policy deliberation methods and iterative writing in crowdsourcing, we developed and evaluated a task in which newcomers to an online policy discussion, before entering the discussion, generate prompts that encourage existing commenters to engage with each other. In an experiment with 453 Amazon Mechanical Turk (AMT) crowd workers, we found that newcomers can often craft acceptable prompts, especially when given guidance on prompt-writing and when the comments they synthesize express a balance of opinions.
Public concern related to a policy may span a range of topics. As a result, policy discussions often struggle to examine any one topic in depth before moving to the next. In policy deliberation research, this is referred to as a problem of topical coherence. In an experiment, we curated the comments in a policy discussion to prioritize arguments for or against a policy proposal, and examined how this curation and participants' initial positions of support for or opposition to the policy affected the coherence of their contributions to existing topics.
Online crowd labor markets often address issues of risk and mistrust between employers and employees from the employers' perspective, but less often from that of employees. Based on 437 comments posted by crowd workers (Turkers) on the Amazon Mechanical Turk (AMT) participation agreement, we identified work rejection as a major risk that Turkers experience. We argue that treating risk reduction and trust building as first-class design goals can lead to solutions that improve outcomes around rejected work for all parties in online labor markets.
Crowd work platforms are becoming popular among researchers in HCI and other fields for social, behavioral, and user experience studies. Platforms like Amazon Mechanical Turk (AMT) connect researchers, who post studies as tasks or jobs, with crowd workers who are recruited to complete the tasks for payment. We report on the lessons we learned about conducting research with crowd workers while running a behavioral experiment on AMT.
Often, attention to "community" focuses on motivating core members or helping newcomers become regulars. However, much of the traffic to online communities comes from people who visit only briefly. We hypothesize that the contributions these "one-timers" make are affected by their personal characteristics, design elements of the site, and others' activity. We present the results of an experiment asking Amazon Mechanical Turk (AMT) workers to comment on the AMT participation agreement in a discussion forum.
Here are a few of the spots I am enjoying at the moment.