My research uses quantitative methods to study how mass and elite politics today could shape policies to ensure an equitable future. The three main focal areas are the present and future politics of the American welfare state, the governance of artificial intelligence (AI), and survey methodology. Though rooted in political science, my work converses with a wide range of disciplines, including economics, computer science, and experimental psychology.
Labor-saving technology has already decreased employment opportunities for middle-skill workers. Experts anticipate that advances in AI and robotics will cause even more significant disruptions in the labor market over the next two decades. This paper presents three experimental studies that investigate how this profound economic change could affect mass politics. Recent observational studies suggest that workers’ exposure to automation risk predicts their support not only for redistribution but also for right-wing populist policies and candidates. Other observational studies, including my own, find that workers underestimate the impact of automation on their job security. Misdirected blame towards immigrants and workers in foreign countries, rather than concerns about workplace automation, could be driving support for right-wing populism. To correct American workers’ beliefs about the threats to their jobs, I conducted three survey experiments in which I informed workers about the current and projected impact of workplace automation. While these informational treatments convinced workers that automation threatens American jobs, they failed to change respondents’ preferences on welfare, immigration, and trade policies. In short, raising awareness of workplace automation neither decreased opposition to globalization nor increased support for policies that would prepare workers for future technological disruptions.
This report presents a broad look at the American public’s attitudes toward artificial intelligence (AI) and AI governance, based on findings from a nationally representative survey of 2,000 American adults. As the study of public opinion toward AI is relatively new, we aimed for breadth over depth, with our questions touching on: workplace automation; attitudes regarding international cooperation; the public’s trust in various actors to develop and regulate AI; views about the importance and likely impact of different AI governance challenges; and historical and cross-national trends in public opinion regarding AI. Our results provide preliminary insights into the character of US public opinion regarding AI.
Researchers in different social science disciplines have successfully used Facebook to recruit subjects for their studies. However, such convenience samples are not generally representative of the population. We developed and validated a new quota sampling method to recruit respondents using Facebook advertisements. Additionally, we published an R package to semi-automate this quota sampling process using the Facebook Marketing API. To test the method, we used Facebook advertisements to quota sample 2,432 US respondents for a survey on climate change public opinion. We conducted a contemporaneous nationally representative survey asking identical questions using a high-quality online survey panel whose respondents were recruited using probability sampling. Many results from the Facebook-sampled survey are similar to those from the online panel survey; furthermore, results from the Facebook-sampled survey approximate results from the American Community Survey (ACS) for a set of validation questions. These findings suggest that using Facebook to recruit respondents is a viable option for survey researchers wishing to approximate population-level public opinion.
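The bookkeeping behind quota sampling can be sketched as follows. This is a hypothetical illustration, not the published R package's actual interface: per-cell respondent targets are derived from population proportions, and a cell's advertisements are paused once its quota fills. The strata and proportions below are made up.

```python
# Hypothetical sketch of quota-sampling bookkeeping for a
# Facebook-recruited survey. Cell names and population proportions
# are illustrative, not the study's actual strata.

def quota_targets(population_props, total_n):
    """Convert population proportions into per-cell respondent targets."""
    return {cell: round(prop * total_n) for cell, prop in population_props.items()}

def open_cells(targets, counts):
    """Cells whose quota is not yet filled; ads for these keep running."""
    return [cell for cell, target in targets.items()
            if counts.get(cell, 0) < target]

# Illustrative strata: sex x age bracket, with invented proportions.
props = {
    ("female", "18-34"): 0.15, ("female", "35-54"): 0.17, ("female", "55+"): 0.19,
    ("male", "18-34"): 0.15, ("male", "35-54"): 0.16, ("male", "55+"): 0.18,
}
targets = quota_targets(props, total_n=2432)

# Mid-fieldwork counts: one cell filled, the rest still recruiting.
counts = {("female", "18-34"): 365, ("male", "55+"): 100}
still_open = open_cells(targets, counts)
```

In practice the "pause ads for filled cells" step is what the Marketing API semi-automates; the allocation logic itself is just this proportional arithmetic.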
Social science scholars routinely evaluate the efficacy of diverse climate frames using local convenience or nationally representative samples. For example, previous research has focused on communicating the scientific consensus on climate change, which has been identified as a ‘gateway’ cognition to other key beliefs about the issue. Importantly, although these efforts reveal average public responsiveness to particular climate frames, they do not describe variation in message effectiveness at the spatial and political scales relevant for climate policymaking. Here we use a small-area estimation method to map geographical variation in public responsiveness to information about the scientific consensus as part of a large-scale randomized national experiment (n = 6,301). Our survey experiment finds that, on average, public perception of the consensus increases by 16 percentage points after message exposure. However, substantial spatial variation exists across the United States at state and local scales. Crucially, responsiveness is highest in more conservative parts of the country, leading to national convergence in perceptions of the climate science consensus across diverse political geographies. These findings not only advance a geographical understanding of how the public engages with information about scientific agreement, but will also prove useful for policymakers, practitioners and scientists engaged in climate change mitigation and adaptation.
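The core estimand in the experiment above is a simple difference in means, computed overall and within geographic subgroups. The sketch below uses invented toy data (two states, outcome on a 0–100 perceived-consensus scale) purely to illustrate how an average effect can mask larger responsiveness in one area than another; the paper's actual estimates come from small-area estimation over the full national sample.

```python
# Minimal sketch of a difference-in-means treatment effect, overall
# and by geographic subgroup. All data below are invented.

def mean(xs):
    return sum(xs) / len(xs)

def effect(rows, state=None):
    """Mean outcome among treated minus mean among control,
    optionally restricted to one state."""
    sub = [r for r in rows if state is None or r["state"] == state]
    treated = [r["y"] for r in sub if r["treated"]]
    control = [r["y"] for r in sub if not r["treated"]]
    return mean(treated) - mean(control)

rows = [
    {"state": "TX", "treated": True,  "y": 80}, {"state": "TX", "treated": False, "y": 58},
    {"state": "TX", "treated": True,  "y": 76}, {"state": "TX", "treated": False, "y": 54},
    {"state": "VT", "treated": True,  "y": 90}, {"state": "VT", "treated": False, "y": 82},
    {"state": "VT", "treated": True,  "y": 88}, {"state": "VT", "treated": False, "y": 80},
]
overall = effect(rows)        # national average effect
tx = effect(rows, "TX")       # larger effect where baseline perception is lower
vt = effect(rows, "VT")
```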
Survey experiments often manipulate the description of attributes in a hypothetical scenario, with the goal of learning about those attributes’ real-world effects. Such inferences rely on an underappreciated assumption: experimental conditions must be information equivalent (IE) with respect to background features of the scenario. IE is often violated because subjects, when presented with information about one attribute, update their beliefs about others too. Labeling a country “a democracy,” for example, affects subjects’ beliefs about the country’s geographic location. When IE is violated, the effect of the manipulation need not correspond to the quantity of interest (the effect of beliefs about the focal attribute). We formally define the IE assumption, relating it to the exclusion restriction in instrumental-variable analysis. We show how to predict IE violations ex ante and diagnose them ex post with placebo tests. We evaluate three strategies for achieving IE. Abstract encouragement is ineffective. Specifying background details reduces imbalance on the specified details and highly correlated details, but not others. Embedding a natural experiment in the scenario can reduce imbalance on all background beliefs, but raises other issues. We illustrate with four survey experiments, focusing on an extension of a prominent study of the democratic peace.
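The placebo-test logic described above can be sketched in a few lines: after randomizing the focal attribute ("a democracy" vs. "not a democracy"), compare the distribution of a background belief (here, whether subjects guess the country is in Europe) across conditions. Under information equivalence this gap should be near zero; a large gap signals an IE violation. The data below are invented for illustration.

```python
# Hedged sketch of a placebo test for information equivalence (IE).
# A background belief is measured in both randomized conditions;
# imbalance across conditions suggests the manipulation shifted
# beliefs beyond the focal attribute. Data are invented.

def prop(xs):
    """Share of subjects holding the background belief (coded 0/1)."""
    return sum(xs) / len(xs)

def placebo_imbalance(background_treated, background_control):
    """Difference in the rate of a background belief across conditions.
    Under IE this should be near zero."""
    return prop(background_treated) - prop(background_control)

# 1 = subject guessed the hypothetical country is in Europe.
dem    = [1, 1, 0, 1, 1, 0, 1, 1]   # "a democracy" condition
nondem = [0, 1, 0, 0, 1, 0, 0, 0]   # "not a democracy" condition

gap = placebo_imbalance(dem, nondem)  # large gap => IE likely violated
```

A full analysis would test this difference formally (and across many background beliefs), but the diagnostic itself is this comparison of placebo outcomes across experimental conditions.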
Advances in artificial intelligence (AI) will transform modern life by reshaping transportation, health, science, finance, and the military. To adapt public policy, we need to better anticipate these advances. Here we report the results from a large survey of machine learning researchers on their beliefs about progress in AI. Researchers predict AI will outperform humans in many activities in the next ten years, such as translating languages (by 2024), writing high-school essays (by 2026), driving a truck (by 2027), working in retail (by 2031), writing a bestselling book (by 2049), and working as a surgeon (by 2053). Researchers believe there is a 50% chance of AI outperforming humans in all tasks in 45 years and of automating all human jobs in 120 years, with Asian respondents expecting these dates much sooner than North Americans. These results will inform discussion amongst researchers and policymakers about anticipating and managing trends in AI.
IMPORTANCE: The Syrian civil war has resulted in large-scale devastation of Syria's health infrastructure along with widespread injuries and death from trauma. The capacity of Syrian trauma hospitals is not well characterized. Data are needed to allocate resources for trauma care to the population remaining in Syria.
OBJECTIVE: To identify the number of trauma hospitals operating in Syria and to delineate their capacities.
DESIGN, SETTING, AND PARTICIPANTS: From February 1 to March 31, 2015, a nationwide survey of 94 trauma hospitals was conducted inside Syria, representing a coverage rate of 69% to 93% of reported hospitals in nongovernment-controlled areas.
MAIN OUTCOMES: Identification and geocoding of trauma and essential surgical services in Syria.
RESULTS: Although 86 hospitals (91%) reported capacity to perform emergency surgery, 1 in 6 hospitals (16%) reported having no inpatient ward for patients after surgery. Sixty-three hospitals (70%) could transfuse whole blood but only 7 (7.4%) could separate and bank blood products. Seventy-one hospitals (76%) had any pharmacy services. Only 10 (11%) could provide renal replacement therapy, and only 18 (20%) provided any form of rehabilitative services. Syrian hospitals are isolated, with 24 (26%) relying on smuggling routes to refer patients to other hospitals and 47 hospitals (50%) reporting domestic supply lines that were never open or open less than daily. There were 538 surgeons, 378 physicians, and 1444 nurses identified in this survey, yielding a nurse to physician ratio of 1.8:1. Only 74 hospitals (79%) reported any salary support for staff, and 84 (89%) reported material support. There is an unmet need for biomedical engineering support in Syrian trauma hospitals, with 12 fixed x-ray machines (23%), 11 portable x-ray machines (13%), 13 computed tomographic scanners (22%), 21 adult (21%) and 5 pediatric (19%) ventilators, 14 anesthesia machines (10%), and 116 oxygen cylinders (15%) not functional. No functioning computed tomographic scanners remain in Aleppo, and 95 oxygen cylinders (42%) in rural Damascus are not functioning despite the high density of hospitals and patients in both provinces.
CONCLUSIONS AND RELEVANCE: Syrian trauma hospitals operate in the Syrian civil war under severe material and human resource constraints. Attention must be paid to providing biomedical engineering support and to directing resources to currently unsupported and geographically isolated critical access surgical hospitals.