56 research outputs found

    Public Opinion and Social Policy

    This dissertation investigates how attitudes toward social welfare policies are formed and how those attitudes relate reciprocally to welfare spending. The perspective is cross-national, covering the world’s most advanced democracies plus two formerly Communist countries, and the dissertation draws on theoretical frameworks from sociology, social psychology, and political science. It concludes that self-interest, group dynamics, and ideology are driving forces behind opinion and attitude formation, with ideology possibly the most important of the three, and that social norms related to Communism or socialism, individualism, and corporatism also shape public opinion. Finally, it concludes that although public opinion probably has an impact on social policy, that impact is not uniform across democratic societies; instead, institutional norms align opinion and policy into a cross-national pattern. The stability of social spending suggests that the path dependency of policy shapes the otherwise erratic nature of public opinion.

    Science by press conference: what the Heinsberg Study on COVID-19 demonstrates about the dangers of fast, open science.

    COVID-19 has accelerated calls for fast, open science to inform policy responses. However, when contradictory or false results become public, the negative consequences become hard to contain. Nate Breznau discusses the Heinsberg Study on COVID-19, outlining how a lack of appropriate scientific scrutiny led to policy responses that were misinformed and dangerous. Breznau argues that for fast science to be reliable, it needs to be underscored by access and transparency; otherwise, it risks becoming fake news.

    Observing many researchers using the same data and hypothesis reveals a hidden universe of data analysis

    Findings from 162 researchers in 73 teams testing the same hypothesis with the same data reveal a universe of unique analytical possibilities leading to a broad range of results and conclusions. Surprisingly, the variance in outcomes mostly cannot be explained by variation in researchers’ modeling decisions or prior beliefs. Each of the 1,261 test models submitted by the teams was ultimately a unique combination of data-analytical steps. Because the noise generated in this crowdsourced research mostly cannot be explained using myriad meta-analytic methods, we conclude that idiosyncratic researcher variability is a threat to the reliability of scientific findings. This highlights the complexity and ambiguity inherent in the data-analysis process, which must be taken into account in future efforts to assess and improve the credibility of scientific work.
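    A minimal sketch of the combinatorics behind this finding: a handful of independent analytical decisions multiplies into a large universe of unique model specifications. The decision dimensions below are illustrative assumptions, not the study’s actual coding scheme.

    from itertools import product

    # Each dimension is one decision an analyst must make; the labels here
    # are hypothetical illustrations, not the decisions coded in the study.
    choices = {
        "estimator": ["OLS", "logit", "multilevel"],
        "immigration_measure": ["stock", "flow", "change"],
        "dv_scale": ["single_item", "index"],
        "country_sample": ["all", "OECD_only"],
        "controls": ["minimal", "full"],
        "weights": ["none", "survey_weights"],
    }

    # The Cartesian product of the options is the universe of specifications.
    specs = list(product(*choices.values()))
    print(f"{len(specs)} unique specifications from {len(choices)} decisions")  # -> 144

    Even six modest decisions yield 144 distinct specifications, so it is unsurprising that 73 teams produced 1,261 mutually distinct models.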

    Observing many researchers using the same data and hypothesis reveals a hidden universe of uncertainty

    Significance: Will different researchers converge on similar findings when analyzing the same data? Seventy-three independent research teams used identical cross-country survey data to test a prominent social science hypothesis: that more immigration will reduce public support for government provision of social policies. Instead of convergence, teams’ results varied greatly, ranging from large negative to large positive effects of immigration on social policy support. The choices made by the research teams in designing their statistical tests explain very little of this variation; a hidden universe of uncertainty remains. Considering this variation, scientists, especially those working with the complexities of human societies and behavior, should exercise humility and strive to better account for the uncertainty in their work.

    Abstract: This study explores how researchers’ analytical choices affect the reliability of scientific findings. Most discussions of reliability problems in science focus on systematic biases. We broaden the lens to emphasize the idiosyncrasy of conscious and unconscious decisions that researchers make during data analysis. We coordinated 161 researchers in 73 research teams and observed their research decisions as they used the same data to independently test the same prominent social science hypothesis: that greater immigration reduces support for social policies among the public. In this typical case of social science research, research teams reported both widely diverging numerical findings and substantive conclusions despite identical starting conditions. Researchers’ expertise, prior beliefs, and expectations barely predict the wide variation in research outcomes. More than 95% of the total variance in numerical results remains unexplained even after qualitative coding of all identifiable decisions in each team’s workflow. This reveals a universe of uncertainty that remains hidden when considering a single study in isolation. The idiosyncratic nature of how researchers’ results and conclusions varied is a previously underappreciated explanation for why many scientific hypotheses remain contested. These results call for greater epistemic humility and clarity in reporting scientific findings.
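    The headline claim, that coded decisions explain almost none of the between-team variance, follows the variance-accounting logic sketched below. This is a minimal sketch on synthetic data (aside from the team count, every number is an invented assumption, not the study’s data): regress reported effect sizes on dummy-coded analytical decisions and check the R-squared.

    import numpy as np

    rng = np.random.default_rng(0)
    n_teams = 73  # matches the study's team count; everything else is synthetic

    # Hypothetical dummy-coded analytical decisions (10 binary choices per team)
    # and effect sizes dominated by idiosyncratic noise rather than by those choices.
    X = rng.integers(0, 2, size=(n_teams, 10)).astype(float)
    effects = X @ rng.normal(0.0, 0.02, size=10) + rng.normal(0.0, 0.5, size=n_teams)

    # OLS via least squares; R^2 = share of effect-size variance the decisions explain.
    X1 = np.column_stack([np.ones(n_teams), X])
    beta, *_ = np.linalg.lstsq(X1, effects, rcond=None)
    resid = effects - X1 @ beta
    r2 = 1.0 - resid.var() / effects.var()
    print(f"variance explained by coded decisions: {r2:.1%}")
    print(f"unexplained ('hidden') variance: {1 - r2:.1%}")

    With most of the effect-size variation injected as idiosyncratic noise, the R-squared stays small; the study’s “more than 95% unexplained” result corresponds to exactly this kind of small explained share.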

    The replication crisis has led to positive structural, procedural, and community changes

    The emergence of large-scale replication projects yielding success rates substantially lower than expected caused the behavioural, cognitive, and social sciences to experience a so-called ‘replication crisis’. In this Perspective, we reframe this ‘crisis’ through the lens of a credibility revolution, focusing on positive structural, procedural, and community-driven changes. We then outline a path to expand ongoing advances and improvements. The credibility revolution has been an impetus for several substantive changes that will have a positive, long-term impact on our research environment.
    • …