Research

I use quantitative social science methods to study trust in digital technology and the governance of artificial intelligence (AI). Currently, I work on the following topics: (1) public and elite opinion toward AI, (2) how the American welfare state could adapt to the increasing automation of labor, and (3) attitudes toward COVID-19 surveillance technology. I also have a research stream focused on improving survey methodology. Though rooted in political science, my work engages with a wide range of disciplines, including economics, computer science, and experimental psychology.

Publications

  1. Baobao Zhang, Sarah Kreps, Nina McMurry, and R. Miles McCain, 2020. "Americans' Perceptions of Privacy and Surveillance in the COVID-19 Pandemic." PLOS ONE, 15(12): e0242652. Replication files. Coverage in Bloomberg and IEEE Spectrum; shared with the World Health Organization.

    Objective: To study the U.S. public’s attitudes toward surveillance measures aimed at curbing the spread of COVID-19, particularly smartphone applications (apps) that supplement traditional contact tracing.

    Method: We deployed a survey of approximately 2,000 American adults to measure support for nine COVID-19 surveillance measures. We assessed attitudes toward contact tracing apps by manipulating six different attributes of a hypothetical app through a conjoint analysis experiment.

    Results: A smaller percentage of respondents support the government encouraging everyone to download and use contact tracing apps (42%) compared with other surveillance measures such as enforcing temperature checks (62%), expanding traditional contact tracing (57%), carrying out centralized quarantine (49%), deploying electronic device monitoring (44%), or implementing immunity passes (44%). Despite partisan differences on a range of surveillance measures, support for the government encouraging digital contact tracing is indistinguishable between Democrats (47%) and Republicans (46%), although more Republicans oppose the policy (39%) compared to Democrats (27%). Of the app features we tested in our conjoint analysis experiment, only one had statistically significant effects on the self-reported likelihood of downloading the app: decentralized data architecture increased the likelihood by 5.4 percentage points.

    Conclusion: Support for public health surveillance policies to curb the spread of COVID-19 is relatively low in the U.S. Contact tracing apps that use decentralized data storage, compared with those that use centralized data storage, are more accepted by the public. While respondents’ support for expanding traditional contact tracing is greater than their support for the government encouraging the public to download and use contact tracing apps, there are smaller partisan differences in support for the latter policy.
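
    To make the conjoint analysis concrete: because each attribute of the hypothetical app is randomized independently, the average marginal component effect (AMCE) of an attribute level can be estimated by regressing the choice outcome on dummy-coded attributes, clustering standard errors by respondent. The sketch below uses simulated data and hypothetical attribute names; it is not the paper's replication code.

    ```python
    # Minimal sketch of AMCE estimation for a conjoint experiment.
    # Simulated data and hypothetical attribute names; not the paper's replication code.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n_respondents, n_tasks = 500, 6
    n_rows = n_respondents * n_tasks
    df = pd.DataFrame({
        "respondent_id": np.repeat(np.arange(n_respondents), n_tasks),
        # Independently randomized attribute levels of a hypothetical contact tracing app.
        "data_storage": rng.choice(["centralized", "decentralized"], n_rows),
        "developer": rng.choice(["state health dept", "tech company", "university"], n_rows),
    })
    # Simulated outcome: whether the respondent says they would download the app.
    df["download"] = rng.binomial(1, 0.45 + 0.05 * (df["data_storage"] == "decentralized"))

    # With independent randomization, a linear regression on dummy-coded attributes
    # recovers each level's AMCE relative to its baseline category; standard errors
    # are clustered by respondent because each person completes several tasks.
    model = smf.ols("download ~ C(data_storage) + C(developer)", data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["respondent_id"]}
    )
    print(model.params)
    ```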

  2. Sarah Kreps, Sandip Prasad, ..., Baobao Zhang, and Douglas L. Kriner, 2020. "Factors Associated with US Adults’ Likelihood of Accepting COVID-19 Vaccination: Evidence from a Choice-Based Conjoint Analysis." JAMA Network Open, 3(10): e2025594. Replication files; journal commentary. Coverage by USA Today and Vox.

    Importance: The development of a coronavirus disease 2019 (COVID-19) vaccine has progressed at unprecedented speed. Widespread public uptake of the vaccine is crucial to stem the pandemic.

    Objective: To examine the factors associated with survey participants’ self-reported likelihood of selecting and receiving a hypothetical COVID-19 vaccine.

    Design, Setting, and Participants: A survey study of a nonprobability convenience sample of 2000 recruited participants, including a choice-based conjoint analysis, was conducted to estimate respondents’ probability of choosing a vaccine and willingness to receive vaccination. Participants were asked to evaluate their willingness to receive each hypothetical vaccine individually. The survey presented respondents with 5 choice tasks. In each, participants evaluated 2 hypothetical COVID-19 vaccines and were asked whether they would choose vaccine A, vaccine B, or neither vaccine. Vaccine attributes included efficacy, protection duration, major adverse effects, minor adverse effects, US Food and Drug Administration (FDA) approval process, national origin of vaccine, and endorsement. Levels of each attribute for each vaccine were randomly assigned, and attribute order was randomized across participants. Survey data were collected on July 9, 2020.

    Main Outcomes and Measures: Average marginal component effect sizes and marginal means were calculated to estimate the relationship between each vaccine attribute level and the probability of the respondent choosing a vaccine and self-reported willingness to receive vaccination.

    Results: A total of 1971 US adults responded to the survey (median age, 43 [interquartile range, 30-58] years); 999 (51%) were women, 1432 (73%) were White, 277 (14%) were Black, and 190 (10%) were Latinx. An increase in efficacy from 50% to 70% was associated with a higher probability of choosing a vaccine (coefficient, 0.07; 95% CI, 0.06-0.09), and an increase from 50% to 90% was associated with a higher probability of choosing a vaccine (coefficient, 0.16; 95% CI, 0.15-0.18). An increase in protection duration from 1 to 5 years was associated with a higher probability of choosing a vaccine (coefficient, 0.05; 95% CI, 0.04-0.07). A decrease in the incidence of major adverse effects from 1 in 10 000 to 1 in 1 000 000 was associated with a higher probability of choosing a vaccine (coefficient, 0.07; 95% CI, 0.05-0.08). An FDA emergency use authorization was associated with a lower probability of choosing a vaccine (coefficient, −0.03; 95% CI, −0.04 to −0.01) compared with full FDA approval. A vaccine that originated from a non-US country was associated with a lower probability of choosing a vaccine (China: −0.13 [95% CI, −0.15 to −0.11]; UK: −0.04 [95% CI, −0.06 to −0.02]). Endorsements from the US Centers for Disease Control and Prevention (coefficient, 0.09; 95% CI, 0.07-0.11) and the World Health Organization (coefficient, 0.06; 95% CI, 0.04-0.08), compared with an endorsement from President Trump, were associated with higher probabilities of choosing a vaccine. Analyses of participants’ willingness to receive each vaccine when assessed individually yielded similar results. An increase in efficacy from 50% to 90% was associated with a 10% higher marginal mean willingness to receive a vaccine (from 0.51 to 0.61). A reduction in the incidence of major side effects was associated with a 4% higher marginal mean willingness to receive a vaccine (from 0.54 to 0.58). A vaccine originating in China was associated with a 10% lower willingness to receive a vaccine vs one developed in the US (from 0.60 to 0.50). Endorsements from the Centers for Disease Control and Prevention and World Health Organization were associated with increases in willingness to receive a vaccine (7% and 6%, respectively) from a baseline endorsement by President Trump (from 0.52 to 0.59 and from 0.52 to 0.58, respectively).

    Conclusions and Relevance: In this survey study of US adults, vaccine-related attributes and political characteristics were associated with self-reported preferences for choosing a hypothetical COVID-19 vaccine and self-reported willingness to receive vaccination. These results may help inform public health campaigns to address vaccine hesitancy when a COVID-19 vaccine becomes available.

  3. Baobao Zhang and Matto Mildenberger, 2020. "Scientists' Political Behaviors Are Not Driven by Individual-level Government Benefits." PLOS ONE, 15(5): e0230961. Replication files.

    Is it appropriate for scientists to engage in political advocacy? Some political critics of scientists argue that scientists have become partisan political actors with self-serving financial agendas. However, most scientists strongly reject this view. While social scientists have explored the effects of science politicization on public trust in science, little empirical work directly examines the drivers of scientists’ interest in and willingness to engage in political advocacy. Using a natural experiment involving the U.S. National Science Foundation Graduate Research Fellowship (NSF-GRF), we causally estimate for the first time whether scientists who have received federal science funding are more likely to engage in both science-related and non-science-related political behaviors. Comparing otherwise similar individuals who received or did not receive NSF support, we find that scientists’ preferences for political advocacy are not shaped by receiving government benefits. Government funding did not impact scientists’ support of the 2017 March for Science nor did it shape the likelihood that scientists donated to either Republican or Democratic political groups. Our results offer empirical evidence that scientists’ political behaviors are not motivated by self-serving financial agendas. They also highlight the limited capacity of even generous government support programs to increase civic participation by their beneficiaries.

  4. Baobao Zhang and Allan Dafoe, 2020. "U.S. Public Opinion on the Governance of Artificial Intelligence." Proceedings of the 2020 AAAI/ACM Conference on AI, Ethics, and Society. Replication files.

    Artificial intelligence (AI) has widespread societal implications, yet social scientists are only beginning to study public attitudes toward the technology. Existing studies find that the public's trust in institutions can play a major role in shaping the regulation of emerging technologies. Using a large-scale survey (N = 2000), we examined Americans' perceptions of 13 AI governance challenges as well as their trust in governmental, corporate, and multistakeholder institutions to responsibly develop and manage AI. While Americans perceive all of the AI governance issues to be important for tech companies and governments to manage, they have only low to moderate trust in these institutions to manage AI applications.

  5. Allan Dafoe, Sophia Hatz, and Baobao Zhang, 2020. "Coercion and Provocation." Forthcoming in the Journal of Conflict Resolution.

    Threats and force, by increasing expected costs, should reduce the target's resolve. However, they often seem to increase resolve. We label this phenomenon provocation. We review instances of apparent provocation in interstate relations and offer a theory based on the logic of reputation and honor. We also consider alternative explanations: confounding or mis-imputation of resolve, revelation of information, character, or capabilities, or generalized sunk cost reasoning. Using survey experiments, we systematically evaluate whether provocation exists and what may account for it. We employ design-based causal inference techniques -- a hypothetical natural experiment, a placebo treatment, and ruling out mediators -- to evaluate our key hypotheses. We find strong evidence of provocation, and suggestive evidence that it arises from considerations of honor, vengeance, and reputation. Our experimental design minimizes the risk that this result arises from our alternative explanations.

  6. Baobao Zhang, Matto Mildenberger, Peter D. Howe, Jennifer Marlon, Seth Rosenthal, Anthony Leiserowitz, 2020. "Quota Sampling Using Facebook Advertisements," Political Science Research and Methods, 8(3), 558-564. Replication files.

    Researchers in different social science disciplines have successfully used Facebook to recruit subjects for their studies. However, such convenience samples are not generally representative of the population. We developed and validated a new quota sampling method to recruit respondents using Facebook advertisements. Additionally, we published an R package to semi-automate this quota sampling process using the Facebook Marketing API. To test the method, we used Facebook advertisements to quota sample 2432 US respondents for a survey on climate change public opinion. We conducted a contemporaneous nationally representative survey asking identical questions using a high-quality online survey panel whose respondents were recruited using probability sampling. Many results from the Facebook-sampled survey are similar to those from the online panel survey; furthermore, results from the Facebook-sampled survey approximate results from the American Community Survey (ACS) for a set of validation questions. These findings suggest that using Facebook to recruit respondents is a viable option for survey researchers wishing to approximate population-level public opinion.
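
    The core of the quota sampling procedure is bookkeeping over demographic strata: compare completed interviews against population-based quotas and keep advertisements running only for strata that are still short. The sketch below illustrates that logic with hypothetical target shares; the published tool is an R package that automates this through the Facebook Marketing API, which this sketch does not call.

    ```python
    # Minimal sketch of quota tracking for Facebook-recruited survey respondents.
    # Target shares are hypothetical placeholders, not ACS benchmarks; the published
    # tool is an R package built on the Facebook Marketing API, which is not called here.
    from collections import Counter

    TARGET_N = 2400
    # Hypothetical population shares for age-by-gender strata.
    target_shares = {
        ("18-34", "woman"): 0.15, ("18-34", "man"): 0.15,
        ("35-54", "woman"): 0.18, ("35-54", "man"): 0.17,
        ("55+", "woman"): 0.19, ("55+", "man"): 0.16,
    }
    quotas = {stratum: round(share * TARGET_N) for stratum, share in target_shares.items()}

    def open_strata(completed_interviews):
        """Return strata whose quotas are unfilled and how many respondents they still need.

        completed_interviews is a list of (age_group, gender) tuples; advertisements
        targeting strata that no longer appear in the result would be paused."""
        counts = Counter(completed_interviews)
        return {s: quotas[s] - counts[s] for s in quotas if counts[s] < quotas[s]}

    # Example: partway through fieldwork, decide which ad segments to keep running.
    completed = [("18-34", "woman")] * 360 + [("55+", "man")] * 120
    print(open_strata(completed))
    ```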

  7. Baobao Zhang, Sander van der Linden, Matto Mildenberger, Peter D. Howe, Jennifer Marlon, and Anthony Leiserowitz, 2018. "Experimental Effects of Climate Messages Vary Geographically." Nature Climate Change, 8, 370–374. Journal commentary.

    Social science scholars routinely evaluate the efficacy of diverse climate frames using local convenience or nationally representative samples. For example, previous research has focused on communicating the scientific consensus on climate change, which has been identified as a ‘gateway’ cognition to other key beliefs about the issue. Importantly, although these efforts reveal average public responsiveness to particular climate frames, they do not describe variation in message effectiveness at the spatial and political scales relevant for climate policymaking. Here we use a small-area estimation method to map geographical variation in public responsiveness to information about the scientific consensus as part of a large-scale randomized national experiment (n = 6,301). Our survey experiment finds that, on average, public perception of the consensus increases by 16 percentage points after message exposure. However, substantial spatial variation exists across the United States at state and local scales. Crucially, responsiveness is highest in more conservative parts of the country, leading to national convergence in perceptions of the climate science consensus across diverse political geographies. These findings not only advance a geographical understanding of how the public engages with information about scientific agreement, but will also prove useful for policymakers, practitioners and scientists engaged in climate change mitigation and adaptation.

  8. Allan Dafoe, Baobao Zhang, and Devin Caughey, 2018. "Information Equivalence in Survey Experiments." Political Analysis, 26(4), 399-416. Replication files.

    Survey experiments often manipulate the description of attributes in a hypothetical scenario, with the goal of learning about those attributes’ real-world effects. Such inferences rely on an underappreciated assumption: experimental conditions must be information equivalent (IE) with respect to background features of the scenario. IE is often violated because subjects, when presented with information about one attribute, update their beliefs about others too. Labeling a country “a democracy,” for example, affects subjects’ beliefs about the country’s geographic location. When IE is violated, the effect of the manipulation need not correspond to the quantity of interest (the effect of beliefs about the focal attribute). We formally define the IE assumption, relating it to the exclusion restriction in instrumental-variable analysis. We show how to predict IE violations ex ante and diagnose them ex post with placebo tests. We evaluate three strategies for achieving IE. Abstract encouragement is ineffective. Specifying background details reduces imbalance on the specified details and highly correlated details, but not others. Embedding a natural experiment in the scenario can reduce imbalance on all background beliefs, but raises other issues. We illustrate with four survey experiments, focusing on an extension of a prominent study of the democratic peace.
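
    A placebo test of the kind described above checks whether randomly assigned scenario wording shifts respondents' beliefs about background features that the manipulation is not supposed to convey. The sketch below simulates such a test with made-up data and hypothetical variable names; it is not the article's replication code.

    ```python
    # Minimal sketch of an ex post placebo test for information equivalence.
    # Simulated data and hypothetical variable names; not the article's replication files.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n = 1000
    # Randomly assigned scenario wording, e.g., country described as "a democracy" or not.
    treated = rng.integers(0, 2, n)
    # Respondents' belief about a background feature the wording should not convey
    # (e.g., how likely they think the country is located in Europe). Here we simulate
    # an IE violation: the "democracy" wording shifts that background belief upward.
    belief_europe = rng.normal(0.40 + 0.10 * treated, 0.20)

    # Placebo test: the background belief should be balanced across conditions.
    diff = belief_europe[treated == 1].mean() - belief_europe[treated == 0].mean()
    t_stat, p_value = stats.ttest_ind(belief_europe[treated == 1], belief_europe[treated == 0])
    print(f"difference in background belief across conditions: {diff:.3f} (p = {p_value:.4f})")
    ```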

  9. Katja Grace, John Salvatier, Allan Dafoe, Baobao Zhang, and Owain Evans, 2018. "Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts." Journal of Artificial Intelligence Research, 62, 729-754. No. 16 on Altmetric's top 100 most-discussed articles of 2017.

    Advances in artificial intelligence (AI) will transform modern life by reshaping transportation, health, science, finance, and the military. To adapt public policy, we need to better anticipate these advances. Here we report the results from a large survey of machine learning researchers on their beliefs about progress in AI. Researchers predict AI will outperform humans in many activities in the next ten years, such as translating languages (by 2024), writing high-school essays (by 2026), driving a truck (by 2027), working in retail (by 2031), writing a bestselling book (by 2049), and working as a surgeon (by 2053). Researchers believe there is a 50% chance of AI outperforming humans in all tasks in 45 years and of automating all human jobs in 120 years, with Asian respondents expecting these dates much sooner than North Americans. These results will inform discussion amongst researchers and policymakers about anticipating and managing trends in AI.

  10. With Hani Mowafi et al., 2016. "Results of a Nationwide Capacity Survey of Hospitals Providing Trauma Care in War-affected Syria." JAMA Surgery, 151(9), 815-822.

    Importance: The Syrian civil war has resulted in large-scale devastation of Syria's health infrastructure along with widespread injuries and death from trauma. The capacity of Syrian trauma hospitals is not well characterized. Data are needed to allocate resources for trauma care to the population remaining in Syria.

    Objective: To identify the number of trauma hospitals operating in Syria and to delineate their capacities.

    Design, Setting, and Participants: From February 1 to March 31, 2015, a nationwide survey of 94 trauma hospitals was conducted inside Syria, representing a coverage rate of 69% to 93% of reported hospitals in nongovernment controlled areas.

    Main Outcomes: Identification and geocoding of trauma and essential surgical services in Syria.

    Results: Although 86 hospitals (91%) reported capacity to perform emergency surgery, 1 in 6 hospitals (16%) reported having no inpatient ward for patients after surgery. Sixty-three hospitals (70%) could transfuse whole blood but only 7 (7.4%) could separate and bank blood products. Seventy-one hospitals (76%) had any pharmacy services. Only 10 (11%) could provide renal replacement therapy, and only 18 (20%) provided any form of rehabilitative services. Syrian hospitals are isolated, with 24 (26%) relying on smuggling routes to refer patients to other hospitals and 47 hospitals (50%) reporting domestic supply lines that were never open or open less than daily. There were 538 surgeons, 378 physicians, and 1444 nurses identified in this survey, yielding a nurse to physician ratio of 1.8:1. Only 74 hospitals (79%) reported any salary support for staff, and 84 (89%) reported material support. There is an unmet need for biomedical engineering support in Syrian trauma hospitals, with 12 fixed x-ray machines (23%), 11 portable x-ray machines (13%), 13 computed tomographic scanners (22%), 21 adult (21%) and 5 pediatric (19%) ventilators, 14 anesthesia machines (10%), and 116 oxygen cylinders (15%) not functional. No functioning computed tomographic scanners remain in Aleppo, and 95 oxygen cylinders (42%) in rural Damascus are not functioning despite the high density of hospitals and patients in both provinces.

    Conclusions and Relevance: Syrian trauma hospitals operate in the Syrian civil war under severe material and human resource constraints. Attention must be paid to providing biomedical engineering support and to directing resources to currently unsupported and geographically isolated critical access surgical hospitals.

Works in Progress

  1. Baobao Zhang, Markus Anderljung, Lauren Kahn, Noemi Dreksler, Michael C. Horowitz, and Allan Dafoe. "Ethics and Governance of Artificial Intelligence: Evidence from a Survey of Machine Learning Researchers."

    Machine learning (ML) and artificial intelligence (AI) researchers play an important role in the ethics and governance of AI. To gain insight into their views, we conducted a survey of researchers who published in the top AI/ML conferences (N = 524). We compared these results with those from a 2016 survey of AI/ML researchers and a 2018 survey of the US public. We find that AI/ML researchers place high levels of trust in international organizations, scientific organizations, and the Partnership on AI to shape the development and use of AI in the public interest; moderate trust in most Western tech companies; and low trust in national militaries, Chinese tech companies, and Facebook. While respondents were overwhelmingly opposed to AI/ML researchers working on lethal autonomous weapons, they were less opposed to researchers working on other military applications of AI, particularly logistics algorithms. A strong majority of respondents think that AI safety research should be prioritized and that ML institutions should conduct pre-publication review to assess potential harms. These results could inform discussion amongst researchers, private sector executives, and policymakers about regulations, governance frameworks, guiding principles, and national strategies for AI.

  2. Baobao Zhang. "No Rage Against the Machines: Threat of Automation Does Not Change Policy Preferences." MIT Political Science Department Research Paper No. 2019-25, 2019.

    Labor-saving technology has already decreased employment opportunities for middle-skill workers. Experts anticipate that advances in AI and robotics will cause even more significant disruptions in the labor market over the next two decades. This paper presents three experimental studies that investigate how this profound economic change could affect mass politics. Recent observational studies suggest that workers’ exposure to automation risk predicts their support not only for redistribution but also for right-wing populist policies and candidates. Other observational studies, including my own, find that workers underestimate the impact of automation on their job security. Misdirected blame towards immigrants and workers in foreign countries, rather than concerns about workplace automation, could be driving support for right-wing populism. To correct American workers’ beliefs about the threats to their jobs, I conducted three survey experiments in which I informed workers about the current and future impact of workplace automation. While these informational treatments convinced workers that automation threatens American jobs, they failed to change respondents’ preferences on welfare, immigration, and trade policies. My research finds that raising awareness about workplace automation did not decrease opposition to globalization or increase support for policies that would prepare workers for future technological disruptions.

  3. Hani Mowafi, Mahmoud Hariri, Baobao Zhang, ..., and Anas Al-Kassem. "Analysis of 193,618 Trauma Patient Presentations in War-affected Syria from July 2013 to July 2015." Revise and resubmit at The Lancet.

Dissertation

My PhD dissertation, "Three Essays on the Politics of American Social Programs," presents experimental and quasi-experimental studies that examine the drivers of Americans' attitudes toward social programs. The first and second essays sought to understand how benefiting from social programs (e.g., Medicare and government scholarships) affected voters’ political attitudes and behavior, a process termed policy feedback. The third essay investigated how expert forecasts about automation's potential impact shaped Americans' beliefs about the future of work and their preferences for government responses.