About

The purpose of my research is to help people collaboratively build insights around civic concerns, policy issues, and ethical questions. I conduct my system design and public policy research as a cycle that begins with investigations into the opportunities for, and barriers to, informed discussion on collaboration platforms (e.g., Slack) and at news websites (e.g., the New York Times). I then use this research about existing systems to develop and evaluate new ways to promote informed discussion through controlled experiments and field studies. As a theoretical lens, I apply modern concepts from crowdsourcing to advance century-old practices in public deliberation, because I see great promise in the capability of computing systems to coordinate groups of people to solve civic problems through informed discussion.

Biography

I am an Assistant Professor of Social Informatics in the School of Information at the University of Texas at Austin. Previously, I was a postdoctoral fellow in the Research Center for Optimal Digital Ethics (ReCODE) and the Design Lab at the University of California, San Diego. In 2019, I earned my PhD in Information Science from Cornell University, where I was advised by Gilly Leshed, Dan Cosley (DanCo), and Poppy McLeod. Before my PhD, I worked as a project associate at the RAND Corporation, where I reported on a range of topics, from youth summer learning programs to the use and misuse of predictive policing techniques. I also earned a master's degree in Education Policy from Vanderbilt University's Peabody College and studied Economics and History at the University of California, Davis.

Research

Considerations for the Design of Informed Consent in Digital Health Research: Participant Perspectives
Brian McInnis, Ramona Pindus, Daniah Kareem, Camille Nebeker

The research team, prospective participants, and written materials all influence the success of the informed consent process. As digital health research becomes more prevalent, it introduces new challenges for successful informed consent. This exploratory study used a human-centered design process in which 19 people were enrolled to participate in one of four online focus groups. Participants discussed their experiences with informed consent, their preferences for receiving study information, and their ideas about alternative consent approaches. Data were analyzed using qualitative methods. We identified six major themes and sixteen sub-themes, covering the study information that prospective participants would like to receive, preferences for accessing that information, and a desire to connect with research team members. Specific to digital health, participants expressed a need to understand how the technologies worked and how the volume of granular personal information would be collected, stored, and shared.

Exploring the Future of Informed Consent: Applying a Service Design Approach
Brian McInnis, Ramona Pindus, Daniah Kareem, Savannah Gamboa, Camille Nebeker

Informed consent is a cornerstone of ethical human subjects research and puts the ethical principle of "respect for persons" into practice. Our study was designed to imagine the future of informed consent, specifically in a digital health context in which consent processes are mediated by sociotechnical systems. We conducted design speed-dating workshops to explore dimensions of the consent communication design space, including social media, interactive quizzes, chatbots, annotation tools, and virtual learning sessions. To explore both the user experience and how futuristic consent processes might be facilitated, the workshops involved people eligible to participate in digital health research (N=21) and service providers (N=20), including researchers and IRB members. Our findings offer five principles for improving digital informed consent processes: be concise, promote transparency, value time and effort, cultivate trust, and navigate platform risks.

Engagement or Knowledge Retention: Exploring Trade-offs in Promoting Discussion at News Websites
Brian McInnis, Leah Ajmani, Steven Dow

How does presenting comments alongside a news article affect the ways that readers engage with and retain information about the news? This paper presents results from a controlled experiment (N=336 participants) investigating the effects of different strategies for promoting discussion at news websites. The strategies include highlighting specific comments about a data visualization, providing prompts with the comments, and annotating the visualization with those prompts. Compared to a simple list of comments (our baseline), our analysis found that annotations contributed to higher levels of participant engagement in the discussion, yet lower levels of knowledge retention related to the article. These findings raise new considerations about whether and how to integrate discussion content into news, and they point toward future content moderation systems that assist in representing and eliciting discussion at news websites.

Reporting the Community Beat: Practices for Moderating Online Discussion at a News Website
Brian McInnis, Leah Ajmani, Lu Sun, Yiwen Hou, Ziwen Zeng, Steven Dow

Due to challenges around low-quality comments and misinformation, many news outlets have opted to turn off commenting features on their websites. The New York Times (NYT), on the other hand, has continued to scale up its online discussion resources to reach large audiences. Through interviews with the NYT moderation team, we present examples of how moderators manage the first ∼24 hours of online discussion after a story breaks, while balancing concerns about journalistic credibility. We discuss how managing comments at the NYT is not merely a matter of content regulation, but can involve reporting from the “community beat” to recognize emerging topics and synthesize the multiple perspectives in a discussion to promote community.

How We Write with Crowds
Molly Q Feldman, Brian McInnis

Writing is a common task for crowdsourcing researchers exploring complex and creative work. To better understand how we write with crowds, we conducted both a literature review of crowd-writing systems and structured interviews with designers of such systems. We argue that the cognitive process theory of writing described by Flower and Hayes (1981), originally proposed as a theory of how solo writers write, offers a useful analytic lens for examining the design of crowd-writing systems.

Rare, but Valuable: Understanding Data-centered Talk in News Website Comment Sections
Brian McInnis, Lu Sun, Jungwon Shin, Steven Dow

News websites can facilitate global discussions about civic issues, but the financial cost and burden of moderating these forums have forced many to disable their commenting systems. In this paper, we consider the role that data visualizations play in online discussion around a civic issue, through an analysis of how people talk about climate change data in the comment threads at three news websites (i.e., Breitbart News, the Guardian, the New York Times).

How Features of a Civic Design Competition Influence the Collective Understanding of a Problem
Brian McInnis, Xiaotong (Tone) Xu, Steven Dow

Organizations often strive to build a shared understanding of complex problems. Design competitions provide a compelling approach to creating incentives and infrastructure for gathering insights about a problem space. In this paper, we present an analysis of a two-month civic design competition focused on transportation challenges in a major US city. We examine how the event structure, discussion platform, and participant interactions affected how a community collectively discussed design constraints and proposals.

Crafting Policy Discussion Prompts as a Task for Newcomers
Brian McInnis, Gilly Leshed, Dan Cosley

Inspired by policy deliberation methods and iterative writing in crowdsourcing, we developed and evaluated a task in which newcomers to an online policy discussion, before entering the discussion, generate prompts that encourage existing commenters to engage with each other. In an experiment with 453 Amazon Mechanical Turk (AMT) crowd workers, we found that newcomers can often craft acceptable prompts, especially when given guidance on prompt writing and when the comments they synthesize express a balance of opinions.

Effects of Comment Curation and Opposition on Coherence in Online Policy Discussion
Brian McInnis, Dan Cosley, Eric Baumer, Gilly Leshed

Public concern related to a policy may span a range of topics. As a result, policy discussions struggle to deeply examine any one topic before moving to the next. In policy deliberation research, this is referred to as a problem of topical coherence. In an experiment, we curated the comments in a policy discussion to prioritize arguments for or against a policy proposal, and examined how this curation and participants’ initial positions of support or opposition to the policy affected the coherence of their contributions to existing topics.

Taking a HIT: Designing around Rejection, Mistrust, Risk, and Workers' Experiences in Amazon Mechanical Turk
Brian McInnis, Dan Cosley, Chaebong Nam, Gilly Leshed

Online crowd labor markets often address issues of risk and mistrust between employers and employees from the employers' perspective, but less often from that of employees. Based on 437 comments posted by crowd workers (Turkers) on the Amazon Mechanical Turk (AMT) participation agreement, we identified work rejection as a major risk that Turkers experience. We argue that treating risk reduction and trust building as first-class design goals can lead to solutions that improve outcomes around rejected work for all parties in online labor markets.

Running user studies with crowd workers
Brian McInnis, Gilly Leshed

Crowd work platforms are becoming popular among researchers in HCI and other fields for social, behavioral, and user experience studies. Platforms like Amazon Mechanical Turk (AMT) connect researchers, who set up their studies as paid tasks or jobs, with crowd workers recruited to complete those tasks. We report on the lessons we learned about conducting research with crowd workers while running a behavioral experiment in AMT.
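
To make that workflow concrete, the following is a minimal sketch, not drawn from the paper, of how a researcher might post a study task to AMT programmatically. It assumes the AWS boto3 Python SDK and the MTurk requester sandbox; the study title, reward, and survey URL are hypothetical placeholders.

    import boto3

    # Point the client at the requester sandbox so the task can be piloted
    # without recruiting or paying real workers.
    mturk = boto3.client(
        "mturk",
        region_name="us-east-1",
        endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
    )

    # An ExternalQuestion embeds a survey hosted elsewhere; the URL below
    # is a placeholder for wherever the study instrument actually lives.
    question_xml = """
    <ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
      <ExternalURL>https://example.com/study</ExternalURL>
      <FrameHeight>600</FrameHeight>
    </ExternalQuestion>
    """

    hit = mturk.create_hit(
        Title="Short opinion study (hypothetical)",
        Description="Read a brief policy text and answer a few questions.",
        Keywords="survey, research",
        Reward="1.50",                        # USD per completed assignment
        MaxAssignments=50,                    # number of workers to recruit
        AssignmentDurationInSeconds=30 * 60,  # time allowed per worker
        LifetimeInSeconds=3 * 24 * 60 * 60,   # how long the task stays listed
        Question=question_xml,
    )
    print("HIT ID:", hit["HIT"]["HITId"])

Omitting the endpoint_url argument would target the live marketplace instead of the sandbox, which is where payment and worker-experience considerations like those discussed in the paper come into play.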

One and Done: Factors Affecting One-Time Contributors to Ad-Hoc Online Communities
Brian McInnis, Elizabeth Murnane, Dmitry Epstein, Dan Cosley, Gilly Leshed

Often, attention to "community" focuses on motivating core members or helping newcomers become regulars. However, much of the traffic to online communities comes from people who visit only briefly. We hypothesize that their personal characteristics, design elements of the site, and others' activity all affect the contributions these "one-timers" make. We present the results from an experiment asking Amazon Mechanical Turk (AMT) workers to comment on the AMT participation agreement in a discussion forum.