Binary response format or 11-point scale? Measuring justice evaluations of earnings in the SOEP
Questions on the justice of earnings are regularly fielded in large-scale surveys, but insights into the role of response formats in measuring the justice of earnings are missing. This problem is illustrated by the German Socio-Economic Panel Study (SOEP), which, in 2017, changed its question on the justice of one’s own earnings from a binary response scale to an 11-point scale. At the same time, the share of respondents evaluating their earnings as just dropped considerably, leaving it unclear how methodological and substantive effects are intertwined. Addressing this gap, we analysed a survey experiment in the 2016 Innovation Sample of the SOEP (SOEP-IS). In a split-ballot design, 2,562 employed SOEP-IS respondents were randomly allocated to one of two experimental groups, receiving either the binary scale or the 11-point scale. Our results show that a lower share of respondents evaluated their earnings as just in the 11-point scale condition. However, follow-up questions on the just amount of earnings were unaffected by the question format. We conclude that it is crucial for researchers investigating justice evaluations of one’s own earnings to account for these measurement effects, and for practitioners to carefully document and test the effects of changes in response format.
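The core comparison in such a split-ballot experiment can be illustrated with a minimal sketch: the share of respondents evaluating their earnings as just is compared across the two randomly assigned format groups. The data below are simulated, all variable names are hypothetical, and the sketch merely assumes that the midpoint of the 11-point scale is read as "just"; it is not the study's actual analysis.

    # Minimal sketch of the split-ballot comparison (simulated data, hypothetical names)
    import numpy as np
    from scipy.stats import chi2_contingency

    rng = np.random.default_rng(0)
    n = 1281  # per group, roughly half of the 2,562 respondents

    # Binary format: 1 = "earnings are just", 0 = "unjust" (simulated)
    binary_just = rng.binomial(1, 0.55, size=n)

    # 11-point format: -5 (unjustly low) ... 0 (just) ... +5 (unjustly high) (simulated)
    eleven_point = rng.integers(-5, 6, size=n)
    eleven_just = (eleven_point == 0).astype(int)  # midpoint read as "just"

    # Compare the share evaluating earnings as just across the two formats
    table = np.array([
        [binary_just.sum(), n - binary_just.sum()],
        [eleven_just.sum(), n - eleven_just.sum()],
    ])
    chi2, p, dof, _ = chi2_contingency(table)
    print("share just (binary):  ", binary_just.mean())
    print("share just (11-point):", eleven_just.mean())
    print(f"chi2={chi2:.2f}, p={p:.3f}")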
Justice Evaluation of the Income Distribution (JEID): Development and validation of a short scale for the subjective assessment of objective differences in earnings
Justice evaluations are proposed to provide a link between the objective level of inequality and the consequences at the individual and societal levels. Available instruments, however, focus on the subjective perception of inequality and income distributions. In light of findings that subjective perceptions of inequality and income levels can be biased and subject to method effects, we present the newly developed Justice Evaluation of the Income Distribution (JEID) Scale, which captures justice evaluations of the actual earnings distribution. JEID comprises five items that provide respondents with earnings information for five groups at different segments of the earnings distribution in a given country. We provide a German-language and an English-language version of the scale. The German-language version was developed and validated based on three comprehensive heterogeneous quota samples from Germany; the translated English-language version was validated in one comprehensive heterogeneous quota sample from the UK. Using latent profile analysis and k-means clustering, we identified three typical response patterns, which we labeled “inequality averse,” “bottom-inequality averse,” and “status quo justification.” JEID was found to be related to normative orientations in the sense that egalitarian views were associated with stronger injustice evaluations at the bottom and top ends of the earnings distribution. With a completion time of between 1.50 and 2.75 minutes, the JEID scale can be applied in any self-report survey in the social sciences to investigate the distribution, precursors, and consequences of individuals’ subjective evaluations of objective differences in earnings.
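As a rough illustration of the clustering step mentioned above, the sketch below applies k-means with three clusters to simulated five-item justice ratings and inspects the cluster centers. The rating range, variable names, and data are assumptions for illustration, not the authors' actual scale or procedure.

    # Sketch: recovering response-pattern profiles from five JEID-style items (simulated data)
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(42)
    # Simulated justice ratings for five earnings groups (assumed range -4 "unjustly low" ... +4 "unjustly high")
    ratings = rng.integers(-4, 5, size=(500, 5)).astype(float)

    kmeans = KMeans(n_clusters=3, n_init=10, random_state=42).fit(ratings)
    print("cluster sizes:", np.bincount(kmeans.labels_))
    print("cluster centers (items 1-5):")
    print(np.round(kmeans.cluster_centers_, 2))
    # Centers near zero across all items would resemble a "status quo justification" pattern;
    # strongly negative centers for the bottom-group items would resemble a "bottom-inequality averse" pattern.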
Observing many researchers using the same data and hypothesis reveals a hidden universe of uncertainty
This study explores how researchers’ analytical choices affect the reliability of scientific findings. Most discussions of reliability problems in science focus on systematic biases. We broaden the lens to emphasize the idiosyncrasy of conscious and unconscious decisions that researchers make during data analysis. We coordinated 161 researchers in 73 research teams and observed their research decisions as they used the same data to independently test the same prominent social science hypothesis: that greater immigration reduces support for social policies among the public. In this typical case of social science research, research teams reported both widely diverging numerical findings and substantive conclusions despite identical start conditions. Researchers’ expertise, prior beliefs, and expectations barely predict the wide variation in research outcomes. More than 95% of the total variance in numerical results remains unexplained even after qualitative coding of all identifiable decisions in each team’s workflow. This reveals a universe of uncertainty that remains hidden when considering a single study in isolation. The idiosyncratic nature of how researchers’ results and conclusions varied is a previously underappreciated explanation for why many scientific hypotheses remain contested. These results call for greater epistemic humility and clarity in reporting scientific findings.
Significance
Will different researchers converge on similar findings when analyzing the same data? Seventy-three independent research teams used identical cross-country survey data to test a prominent social science hypothesis: that more immigration will reduce public support for government provision of social policies. Instead of convergence, teams’ results varied greatly, ranging from large negative to large positive effects of immigration on social policy support. The choices made by the research teams in designing their statistical tests explain very little of this variation; a hidden universe of uncertainty remains. Considering this variation, scientists, especially those working with the complexities of human societies and behavior, should exercise humility and strive to better account for the uncertainty in their work.
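The headline figure, that more than 95% of the variance in teams' numerical results stays unexplained, can be illustrated with a minimal sketch: regress team-level effect estimates on dummy-coded analytical decisions and read off one minus R-squared. The estimates and decision codes below are simulated and purely illustrative, not the study's data or coding scheme.

    # Sketch: share of between-team variance left unexplained by coded analytical decisions (simulated)
    import numpy as np

    rng = np.random.default_rng(7)
    n_teams = 73
    # Simulated team-level effect estimates (e.g., effects of immigration on policy support)
    estimates = rng.normal(0.0, 0.05, size=n_teams)
    # Simulated dummy codes for identifiable decisions (estimator, sample, measures, ...)
    decisions = rng.integers(0, 2, size=(n_teams, 10)).astype(float)

    # OLS of estimates on decisions; R^2 is the explained share of variance
    X = np.column_stack([np.ones(n_teams), decisions])
    beta, *_ = np.linalg.lstsq(X, estimates, rcond=None)
    residuals = estimates - X @ beta
    r2 = 1 - residuals.var() / estimates.var()
    print(f"explained share (R^2): {r2:.2f}")
    print(f"unexplained share:     {1 - r2:.2f}")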
The Crowdsourced Replication Initiative: Investigating Immigration and Social Policy Preferences. Executive Report.
In an era of mass migration, social scientists, populist parties, and social movements raise concerns over the future of immigration-destination societies. What impact does this have on policy and social solidarity? Comparative cross-national research, relying mostly on secondary data, has produced findings that point in different directions. There is a threat of selective model reporting and a lack of replicability. The heterogeneity of countries obscures attempts to clearly define data-generating models. P-hacking and HARKing lurk among standard research practices in this area. This project employs crowdsourcing to address these issues. It draws on replication, deliberation, meta-analysis, and harnessing the power of many minds at once. The Crowdsourced Replication Initiative pursues two main goals: (a) to better investigate the linkage between immigration and social policy preferences across countries, and (b) to develop crowdsourcing as a social science method. The Executive Report provides short reviews of the literature on social policy preferences and immigration and of the methods and impetus behind crowdsourcing, plus a description of the entire project. Three main areas of findings will appear in three papers, which are registered as PAPs or in progress.
Distributive justice: definition, determinants, and consequences of the justice of earnings
Adriaans J. Distributive justice: definition, determinants, and consequences of the justice of earnings. Bielefeld: Universität Bielefeld; 2023.
Fairness of earnings in Europe: the consequences of unfair under- and overreward for life satisfaction
Adriaans J. Fairness of earnings in Europe: the consequences of unfair under- and overreward for life satisfaction. European Sociological Review. 2022.
A large percentage of workers in Europe perceive their earnings to be unfairly low. Such perceptions of unfairness can have far-reaching consequences, ranging from low satisfaction to poor health. To gain insight into the conditions that can attenuate or amplify these adverse consequences, comparative research on the role of country contexts in shaping responses to perceived unfairness is needed. Furthermore, justice theory proposes that both types of perceived unfairness, underreward and overreward, cause distress, but evidence on overreward from representative survey data is scarce and laboratory studies have produced mixed results. Data from the European Social Survey (collected in 2018/2019) offer a means of addressing both of these gaps in the research. Studying the association between perceived fairness of personal earnings and life satisfaction in a cross-section of 29 European countries, I find that both underreward and overreward are associated with lower life satisfaction. This relationship is more pronounced in countries where the equity norm is strongly legitimized and weaker in countries where trade union density is high.
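A cross-national moderation analysis of this kind can be sketched as a mixed-effects model with respondents nested in countries and a cross-level interaction. The variable names, coding, and data below are assumptions for illustration only, not the study's actual specification.

    # Sketch: life satisfaction regressed on perceived under-/overreward with a country-level moderator (simulated)
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    n, n_countries = 2000, 29
    df = pd.DataFrame({
        "country": rng.integers(0, n_countries, size=n),
        "underreward": rng.binomial(1, 0.4, size=n),  # earnings perceived as unfairly low
        "overreward": rng.binomial(1, 0.1, size=n),   # earnings perceived as unfairly high
    })
    union_density = rng.uniform(10, 70, size=n_countries)  # country-level moderator (%)
    df["union_density"] = union_density[df["country"]]
    df["life_sat"] = (7 - 0.8 * df["underreward"] - 0.4 * df["overreward"]
                      + 0.01 * df["underreward"] * df["union_density"]
                      + rng.normal(0, 1.5, size=n))

    # Random intercepts by country; the cross-level interaction captures the moderation
    model = smf.mixedlm("life_sat ~ underreward * union_density + overreward",
                        data=df, groups=df["country"]).fit()
    print(model.summary())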
Basic social justice orientations—measuring order-related justice in the European Social Survey Round 9
Adriaans J, Fourré M. Basic social justice orientations—measuring order-related justice in the European Social Survey Round 9. Measurement Instruments for the Social Sciences. 2022;4(1): 11.
Individuals hold normative ideas about the just distribution of goods and burdens within a social aggregate. These normative ideas guide the evaluation of existing inequalities and refer to four basic principles: (1) equality stands for an equal distribution of rewards and burdens; (2) need takes individual needs into account; (3) equity suggests a distribution based on merit; and (4) the entitlement principle suggests that ascribed (e.g., gender) and achieved status characteristics (e.g., occupational prestige) should determine the distribution of goods and burdens. Past research has argued that preferences for these principles vary with social position as well as with the social structure of a society. The Basic Social Justice Orientations (BSJO) scale was developed to assess agreement with the four justice principles but so far has only been fielded in Germany. Round 9 of the European Social Survey (ESS R9, with data collected in 2018/2019) is the first time four items of the BSJO scale (one item per justice principle) were included in a cross-national survey program, offering the unique opportunity to study both within- and between-country variation. To facilitate substantive research on preferences for equality, equity, need, and entitlement, this report provides evidence on measurement quality in 29 European countries from ESS R9. Analyzing response distributions, non-response, reliability, and associations with related variables, we find supportive evidence that the four items of the BSJO scale included in ESS R9 produce low non-response rates, estimate agreement with the four distributive principles reliably, and show the expected correlations with related concepts. Researchers should, however, remember that the BSJO scale, as implemented in ESS R9, only provides manifest indicators, which may not cover the full spectrum of the underlying distributive principles but instead focus on specific elements of them.
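The item-level checks described here can be sketched in a few lines, computing per-item non-response rates by country and correlations with a related concept. The item names, response range, related variable, and data below are hypothetical and simulated, not the ESS R9 data or the report's procedure.

    # Sketch: non-response rates and correlations for four single-item justice indicators (simulated)
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(9)
    n = 5000
    items = ["bsjo_equality", "bsjo_equity", "bsjo_need", "bsjo_entitlement"]  # hypothetical names
    df = pd.DataFrame(rng.integers(1, 6, size=(n, 4)).astype(float), columns=items)
    df["country"] = rng.integers(0, 29, size=n)
    df["egalitarian"] = rng.normal(0, 1, size=n)  # hypothetical related concept

    # Inject item non-response, then compute per-item non-response rates by country
    mask = rng.random((n, 4)) < 0.02
    df[items] = df[items].mask(mask)
    print(df[items].isna().groupby(df["country"]).mean().round(3).head())

    # Correlations of each item with the related concept, pooled across countries
    print(df[items + ["egalitarian"]].corr()["egalitarian"].round(3))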
Gender Differences in Fairness Evaluations of Own Earnings in 28 European Countries
Research on gender differences in fairness evaluations of own earnings in 28 European countries, based on ESS Round 9 data.