47 research outputs found

    Conceptual and Measurement Issues in Assessing Democratic Backsliding

    During the past decade, analyses drawing on several democracy measures have shown a global trend of democratic retrenchment. While these democracy measures use radically different methodologies, most partially or fully rely on subjective judgments to produce estimates of the level of democracy within states. Such projects continuously grapple with balancing conceptual coverage against the potential for bias (Munck and Verkuilen 2002; Przeworski et al. 2000). Little and Meng (L&M) (2023) reintroduce this debate, arguing that “objective” measures of democracy show little evidence of recent global democratic backsliding. By extension, they posit that time-varying expert bias drives the appearance of democratic retrenchment in measures that incorporate expert judgments. In this article, we engage with (1) broader debates on democracy measurement and democratic backsliding, and (2) L&M’s specific data and conclusions.

    Future Atmospheric Rivers and Impacts on Precipitation: Overview of the ARTMIP Tier 2 High‐Resolution Global Warming Experiment

    Atmospheric rivers (ARs) are long, narrow, synoptic-scale weather features important for Earth’s hydrological cycle, typically transporting water vapor poleward and delivering precipitation important to local climates. Understanding ARs in a warming climate is problematic because the AR response to climate change is tied to how the feature is defined. The Atmospheric River Tracking Method Intercomparison Project (ARTMIP) provides insights into this problem by comparing 16 atmospheric river detection tools (ARDTs) on a common data set consisting of high-resolution climate change simulations from a global atmospheric general circulation model. ARDTs mostly show increases in AR frequency and intensity, but the scale of the response is largely dependent on algorithmic criteria. Across ARDTs, bulk characteristics suggest that intensity and spatial footprint are inversely correlated, and most focus regions experience increases in precipitation volume from extreme ARs. The spread of the AR precipitation response under climate change is large and depends on ARDT selection.

    State of the climate in 2018

    In 2018, the dominant greenhouse gases released into Earth’s atmosphere—carbon dioxide, methane, and nitrous oxide—continued their increase. The annual global average carbon dioxide concentration at Earth’s surface was 407.4 ± 0.1 ppm, the highest in the modern instrumental record and in ice core records dating back 800 000 years. Combined, greenhouse gases and several halogenated gases contribute just over 3 W m−2 to radiative forcing and represent a nearly 43% increase since 1990. Carbon dioxide is responsible for about 65% of this radiative forcing. With a weak La Niña in early 2018 transitioning to a weak El Niño by the year’s end, the global surface (land and ocean) temperature was the fourth highest on record, with only 2015 through 2017 being warmer. Several European countries reported record high annual temperatures. There were also more high, and fewer low, temperature extremes than in nearly all of the 68-year extremes record. Madagascar recorded a record daily temperature of 40.5°C in Morondava in March, while South Korea set its record high of 41.0°C in August in Hongcheon. Nawabshah, Pakistan, recorded its highest temperature of 50.2°C, which may be a new daily world record for April. Globally, the annual lower troposphere temperature was third to seventh highest, depending on the dataset analyzed. The lower stratospheric temperature was approximately fifth lowest. The 2018 Arctic land surface temperature was 1.2°C above the 1981–2010 average, tying for third highest in the 118-year record, following 2016 and 2017. June’s Arctic snow cover extent was almost half of what it was 35 years ago. Across Greenland, however, regional summer temperatures were generally below or near average. Additionally, a satellite survey of 47 glaciers in Greenland indicated a net increase in area for the first time since records began in 1999. 
Increasing permafrost temperatures were reported at most observation sites in the Arctic, with the overall increase of 0.1°–0.2°C between 2017 and 2018 being comparable to the highest rate of warming ever observed in the region. On 17 March, Arctic sea ice extent marked the second smallest annual maximum in the 38-year record, larger than only 2017. The minimum extent in 2018 was reached on 19 September and again on 23 September, tying 2008 and 2010 for the sixth lowest extent on record. The 23 September date tied 1997 as the latest sea ice minimum date on record. First-year ice now dominates the ice cover, comprising 77% of the March 2018 ice pack compared to 55% during the 1980s. Because thinner, younger ice is more vulnerable to melting out in summer, this shift in sea ice age has contributed to the decreasing trend in minimum ice extent. Regionally, Bering Sea ice extent was at record lows for almost the entire 2017/18 ice season. For the Antarctic continent as a whole, 2018 was warmer than average. On the highest points of the Antarctic Plateau, the automatic weather station Relay (74°S) broke or tied six monthly temperature records throughout the year, with August breaking its record by nearly 8°C. However, cool conditions in the western Bellingshausen Sea and Amundsen Sea sector contributed to a low melt season overall for 2017/18. High SSTs contributed to low summer sea ice extent in the Ross and Weddell Seas in 2018, underpinning the second lowest Antarctic summer minimum sea ice extent on record. Despite conducive conditions for its formation, the ozone hole at its maximum extent in September was near the 2000–18 mean, likely due to an ongoing slow decline in stratospheric chlorine monoxide concentration. Across the oceans, globally averaged SST decreased slightly since the record El Niño year of 2016 but was still far above the climatological mean. On average, SST is increasing at a rate of 0.10° ± 0.01°C decade−1 since 1950. 
The warming appeared largest in the tropical Indian Ocean and smallest in the North Pacific. The deeper ocean continues to warm year after year. For the seventh consecutive year, global annual mean sea level became the highest in the 26-year record, rising to 81 mm above the 1993 average. As anticipated in a warming climate, the hydrological cycle over the ocean is accelerating: dry regions are becoming drier and wet regions rainier. Closer to the equator, 95 named tropical storms were observed during 2018, well above the 1981–2010 average of 82. Eleven tropical cyclones reached Saffir–Simpson scale Category 5 intensity. North Atlantic Major Hurricane Michael’s landfall intensity of 140 kt was the fourth strongest for any continental U.S. hurricane landfall in the 168-year record. Michael caused more than 30 fatalities and $25 billion (U.S. dollars) in damages. In the western North Pacific, Super Typhoon Mangkhut led to 160 fatalities and $6 billion (U.S. dollars) in damages across the Philippines, Hong Kong, Macau, mainland China, Guam, and the Northern Mariana Islands. Tropical Storm Son-Tinh was responsible for 170 fatalities in Vietnam and Laos. Nearly all the islands of Micronesia experienced at least moderate impacts from various tropical cyclones. Across land, many areas around the globe received copious precipitation, notable at different time scales. Rodrigues and Réunion Island near southern Africa each reported their third wettest year on record. In Hawaii, 1262 mm of precipitation at Waipā Gardens (Kauai) on 14–15 April set a new U.S. record for 24-h precipitation. In Brazil, the city of Belo Horizonte received nearly 75 mm of rain in just 20 minutes, nearly half its monthly average. Globally, fire activity during 2018 was the lowest since the start of the record in 1997, with a combined burned area of about 500 million hectares. 
This reinforced the long-term downward trend in fire emissions driven by changes in land use in frequently burning savannas. However, wildfires burned 3.5 million hectares across the United States, well above the 2000–10 average of 2.7 million hectares. Combined, U.S. wildfire damages for the 2017 and 2018 wildfire seasons exceeded $40 billion (U.S. dollars).

    Estimating Latent Traits from Expert Surveys: An Analysis of Sensitivity to Data Generating Process

    Models for converting expert-coded data to point estimates of latent concepts assume different data-generating processes. In this paper, we simulate ecologically valid data according to different assumptions, and examine the degree to which common methods for aggregating expert-coded data can recover true values and construct appropriate coverage intervals from these data. We find that hierarchical latent variable models and the bootstrapped mean perform similarly when variation in reliability and scale perception is low; latent variable techniques outperform the mean when variation is high. Hierarchical A-M and IRT models generally perform similarly, though IRT models are often more likely to include true values within their coverage intervals. The median and non-hierarchical latent variable modeling techniques perform poorly under most assumed data-generating processes. Earlier drafts were presented at the 2018 APSA, EPSA and V-Dem conferences. The authors thank Chris Fariss, John Gerring, Adam Glynn, Dean Lacy and Jeff Staton for their comments on earlier drafts of this paper. This material is based upon work supported by the National Science Foundation (SES-1423944, PI: Daniel Pemstein), Riksbankens Jubileumsfond (M13-0559:1, PI: Staffan I. Lindberg), the Swedish Research Council (2013.0166, PI: Staffan I. Lindberg and Jan Teorell), the Knut and Alice Wallenberg Foundation (PI: Staffan I. Lindberg) and the University of Gothenburg (E 2013/43), as well as internal grants from the Vice-Chancellor's office, the Dean of the College of Social Sciences, and the Department of Political Science at University of Gothenburg. We performed simulations and other computational tasks using resources provided by the High Performance Computing section and the Swedish National Infrastructure for Computing at the National Supercomputer Centre in Sweden (SNIC 2017/1-406 and 2018/3-133, PI: Staffan I. Lindberg).
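    The bootstrapped mean that the abstract benchmarks against latent variable models can be illustrated with a short sketch (the data, function name, and scale here are hypothetical, not the paper's actual code): resample one case's expert codings with replacement, average each resample, and use the distribution of resampled means as a point estimate with a coverage interval.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical expert codings for one country-year: five experts,
    # each rating the same latent concept on a 0-4 ordinal scale.
    codings = np.array([2, 3, 3, 2, 4])

    def bootstrap_mean(ratings, n_boot=10_000, rng=rng):
        """Point estimate and 95% coverage interval from bootstrapping the expert mean."""
        # Each row is one resample of the experts, drawn with replacement.
        samples = rng.choice(ratings, size=(n_boot, len(ratings)), replace=True)
        means = samples.mean(axis=1)
        return means.mean(), np.percentile(means, [2.5, 97.5])

    point, (lo, hi) = bootstrap_mean(codings)
    ```

    Unlike a latent variable model, this approach treats every expert as equally reliable and as using the scale identically, which is why the abstract finds it competitive only when variation in reliability and scale perception is low.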

    How and How Much Does Expert Error Matter? Implications for Quantitative Peace Research

    Expert-coded datasets provide scholars with otherwise unavailable cross-national longitudinal data on important concepts. However, expert coders vary in their reliability and scale perception, potentially resulting in substantial measurement error; this variation may correlate with outcomes of interest, biasing results in analyses that use these data. This latter concern is particularly acute for key concepts in peace research. In this article, I describe potential sources of expert error, focusing on the measurement of identity-based discrimination. I then use expert-coded data on identity-based discrimination to examine 1) the implications of measurement error for quantitative analyses that use expert-coded data, and 2) the degree to which different techniques for aggregating these data ameliorate these issues. To do so, I simulate data with different forms and levels of expert error and regress conflict onset on different aggregations of these data. These analyses yield two important results. First, almost all aggregations show a positive relationship between identity-based discrimination and conflict onset consistently across simulations, in line with the assumed true relationship between the concept and outcome. Second, different aggregation techniques vary in their substantive robustness beyond directionality. A structural equation model provides the most consistently robust estimates, while both the point estimates from an Item Response Theory (IRT) model and the average over expert codings provide similar and relatively robust estimates in most simulations. The median over expert codings and a naive multiple imputation technique yield the least robust estimates. I thank Ruth Carlitz, Carl Henrik Knutsen, Anna Lührmann and Daniel Pemstein for their comments on earlier drafts of this article. I also thank Juraj Medzihorsky for his many insights throughout this project. 
This material is based upon work supported by the National Science Foundation (SES-1423944, PI: Daniel Pemstein), Riksbankens Jubileumsfond (M13-0559:1, PI: Staffan I. Lindberg), the Swedish Research Council (2013.0166, PI: Staffan I. Lindberg and Jan Teorell), the Knut and Alice Wallenberg Foundation (PI: Staffan I. Lindberg) and the University of Gothenburg (E 2013/43), as well as internal grants from the Vice-Chancellor's office, the Dean of the College of Social Sciences, and the Department of Political Science at University of Gothenburg. I performed simulations and other computational tasks using resources provided by the High Performance Computing section and the Swedish National Infrastructure for Computing at the National Supercomputer Centre in Sweden (SNIC 2017/1-406 and 2018/3-543, PI: Staffan I. Lindberg).
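    The simulation design the abstract describes, experts who code a latent concept with rater-specific scale shifts and varying reliability, can be sketched in a few lines. All names, distributions, and parameter values below are illustrative assumptions, not the article's actual simulation setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative setup: 100 country-years with a true latent level of
    # identity-based discrimination, each coded by 5 experts.
    n_cases, n_experts = 100, 5
    truth = rng.normal(size=n_cases)

    # Rater-specific offsets mimic differences in scale perception;
    # rater-specific noise scales mimic differences in reliability.
    offsets = rng.normal(scale=0.5, size=n_experts)
    noise_sd = rng.uniform(0.2, 1.0, size=n_experts)

    codings = (truth[:, None]
               + offsets[None, :]
               + rng.normal(size=(n_cases, n_experts)) * noise_sd[None, :])

    # Two of the simple aggregations the article compares: mean and median.
    mean_agg = codings.mean(axis=1)
    median_agg = np.median(codings, axis=1)

    # When expert error is uncorrelated with the outcome, both track the truth.
    corr_mean = np.corrcoef(truth, mean_agg)[0, 1]
    corr_median = np.corrcoef(truth, median_agg)[0, 1]
    ```

    The article's more worrying scenarios arise when the error terms are made to correlate with the outcome of interest; in that case simple aggregations can bias regression estimates, which is what motivates the comparison with structural equation and IRT models.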

    The V-Dem Measurement Model: Latent Variable Analysis for Cross-National and Cross-Temporal Expert-Coded Data

    The Varieties of Democracy (V–Dem) project relies on country experts who code a host of ordinal variables, providing subjective ratings of latent—that is, not directly observable—regime characteristics over time. Sets of around five experts rate each case (country-year observation), and each of these raters works independently. Since raters may diverge in their coding because of either differences of opinion or mistakes, we require systematic tools with which to model these patterns of disagreement. These tools allow us to aggregate ratings into point estimates of latent concepts and quantify our uncertainty around these point estimates. In this paper we describe item response theory models that can account and adjust for differential item functioning (i.e., differences in how experts apply ordinal scales to cases) and variation in rater reliability (i.e., random error). We also discuss key challenges specific to applying item response theory to expert-coded cross-national panel data, explain the approaches that we use to address these challenges, highlight potential problems with our current framework, and describe long-term plans for improving our models and estimates. Finally, we provide an overview of the different forms in which we present model output.

    Public opinion in authoritarian regimes: Evidence from Russia

    In this study we ask how information about the popularity of illiberal incumbents influences public opinion. Specifically, does information that the incumbent’s approval rating is declining (increasing) encourage some groups to report lower (higher) support? Which groups are more likely to update their views of the authorities in response to information about the levels of support the authorities enjoy in society? To what extent are changes in public opinion sincere, reflecting individuals’ privately held beliefs? Answers to these questions have important implications for research on the origins of incumbent approval and dramatic defection cascades in nondemocratic regimes.

    The V-Dem Measurement Model: Latent Variable Analysis for Cross-National and Cross-Temporal Expert-Coded Data

    The Varieties of Democracy (V-Dem) project relies on country experts who code a host of ordinal variables, providing subjective ratings of latent—that is, not directly observable—regime characteristics over time. Sets of around five experts rate each case (country-year observation), and each of these raters works independently. Since raters may diverge in their coding because of either differences of opinion or mistakes, we require systematic tools with which to model these patterns of disagreement. These tools allow us to aggregate ratings into point estimates of latent concepts and quantify our uncertainty around these point estimates. In this paper we describe item response theory models that can account and adjust for differential item functioning (i.e., differences in how experts apply ordinal scales to cases) and variation in rater reliability (i.e., random error). We also discuss key challenges specific to applying item response theory to expert-coded cross-national panel data, explain the approaches that we use to address these challenges, highlight potential problems with our current framework, and describe long-term plans for improving our models and estimates. Finally, we provide an overview of the different forms in which we present model output. This material is based upon work supported by the National Science Foundation (SES-1423944, PI: Daniel Pemstein), Riksbankens Jubileumsfond (Grant M13-0559:1, PI: Staffan I. Lindberg), the Swedish Research Council (2013.0166, PI: Staffan I. Lindberg and Jan Teorell), the Knut and Alice Wallenberg Foundation (PI: Staffan I. Lindberg), and the University of Gothenburg (E 2013/43); as well as internal grants from the Vice-Chancellor’s office, the Dean of the College of Social Sciences, and the Department of Political Science at University of Gothenburg. 
Marquardt acknowledges research support from the Russian Academic Excellence Project ‘5-100.’ We performed simulations and other computational tasks using resources provided by the Notre Dame Center for Research Computing (CRC) through the High Performance Computing section and the Swedish National Infrastructure for Computing (SNIC) at the National Supercomputer Centre in Sweden (SNIC 2016/1-382, SNIC 2017/1-406 and 2017/1-68). We specifically acknowledge the assistance of In-Saeng Suh at CRC and Johan Raber and Peter Münger at SNIC in facilitating our use of their respective systems.