9 research outputs found

    Control of anthracnose disease via increased activity of defence-related enzymes in ‘Hass’ avocado fruit treated with methyl jasmonate and methyl salicylate

    Development of anthracnose disease, caused by Colletotrichum gloeosporioides Penz., is one of the major issues in the avocado supply chain. Exposure to methyl jasmonate (MeJA) and methyl salicylate (MeSA) vapours at 10 and 100 µmol l⁻¹ was investigated as an alternative to the commercial fungicide prochloraz® currently used by the industry. The incidence of anthracnose disease was significantly reduced in ‘Hass’ avocado fruit treated with MeJA or MeSA vapours, especially at 100 µmol l⁻¹. The mechanism involved enhanced activity of defence-related enzymes, i.e. chitinase, β-1,3-glucanase and phenylalanine ammonia-lyase (PAL), and a higher content of epicatechin.

    The Crowdsourced Replication Initiative: Investigating Immigration and Social Policy Preferences. Executive Report.

    In an era of mass migration, social scientists, populist parties and social movements raise concerns over the future of immigration-destination societies. What impact does this have on policy and social solidarity? Comparative cross-national research, relying mostly on secondary data, has produced findings that point in different directions. There is a threat of selective model reporting and a lack of replicability; the heterogeneity of countries obscures attempts to clearly define data-generating models, and p-hacking and HARKing lurk among standard research practices in this area. This project employs crowdsourcing to address these issues, drawing on replication, deliberation, meta-analysis and the power of many minds at once. The Crowdsourced Replication Initiative pursues two main goals: (a) to better investigate the link between immigration and social policy preferences across countries, and (b) to develop crowdsourcing as a social science method. The Executive Report provides short reviews of the literature on social policy preferences and immigration, describes the methods and impetus behind crowdsourcing, and outlines the entire project. Three main areas of findings will appear in three papers, which are registered as pre-analysis plans (PAPs) or in progress.

    Care and support for elder people with intellectual disabilities beyond the pursuit of ageing 'in place': towards constructing a space to (be)long

    No full text
    Whereas population ageing has been a much-debated issue over recent decades, political and scientific awareness of the longevity of adults with intellectual disabilities is relatively new. When the situations of older people with intellectual disabilities come into focus, attention is drawn mainly to medical aspects or to the challenges their ageing is expected to pose for health and social service providers. In that vein, a prominent yet rather wicked issue has been the question of whether disability care or elderly care services are the right place to meet their needs. This article discusses findings of a qualitative research study that reconstructed and investigated 10 care trajectories of ageing people with intellectual disabilities. Based on open interviews with the individuals themselves and with significant others from their formal and informal networks, we identified mechanisms in society that allow or deny older people with intellectual disabilities access to certain care settings or welfare provision. Moreover, we discovered concerns, interests and aspirations that often go unseen. This shows the necessity of moving beyond the debate on ‘ageing in (or out of) place’, and challenges us to develop opportunities and strategies for creating ‘a space to be(long)’.

    The Crowdsourced Replication Initiative

    No full text
    Crowdsourced Research on Immigration and Social Policy Preferences

    The Crowdsourced Replication Initiative: Investigating Immigration and Social Policy Preferences. Executive Report

    No full text
    Breznau N, Rinke EM, Wuttke A, et al. The Crowdsourced Replication Initiative: Investigating Immigration and Social Policy Preferences. Executive Report. 2019.

    Observing many researchers using the same data and hypothesis reveals a hidden universe of uncertainty.

    This study explores how researchers' analytical choices affect the reliability of scientific findings. Most discussions of reliability problems in science focus on systematic biases. We broaden the lens to emphasize the idiosyncrasy of conscious and unconscious decisions that researchers make during data analysis. We coordinated 161 researchers in 73 research teams and observed their research decisions as they used the same data to independently test the same prominent social science hypothesis: that greater immigration reduces support for social policies among the public. In this typical case of social science research, research teams reported both widely diverging numerical findings and substantive conclusions despite identical start conditions. Researchers' expertise, prior beliefs, and expectations barely predict the wide variation in research outcomes. More than 95% of the total variance in numerical results remains unexplained even after qualitative coding of all identifiable decisions in each team's workflow. This reveals a universe of uncertainty that remains hidden when considering a single study in isolation. The idiosyncratic nature of how researchers' results and conclusions varied is a previously underappreciated explanation for why many scientific hypotheses remain contested. These results call for greater epistemic humility and clarity in reporting scientific findings.
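
    To make the variance-decomposition claim concrete, the short sketch below regresses team-level effect estimates on dummy-coded analytic decisions and reports the share of variance left unexplained. The data and every variable name are invented for illustration; only the team count (73) comes from the abstract, and the study's actual models are not reproduced here.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical data: one effect-size estimate per research team, plus
        # dummy-coded analytic decisions of the kind recovered by qualitative coding.
        n_teams, n_decisions = 73, 20
        decisions = rng.integers(0, 2, size=(n_teams, n_decisions))  # 0/1 decision codes
        estimates = rng.normal(0.0, 1.0, size=n_teams)               # team-level results

        # Ordinary least squares of estimates on decisions (with an intercept).
        X = np.column_stack([np.ones(n_teams), decisions])
        beta, *_ = np.linalg.lstsq(X, estimates, rcond=None)
        residuals = estimates - X @ beta

        # Share of total variance the coded decisions fail to explain; the study
        # reports this share exceeding 95% for its real data.
        unexplained = residuals.var() / estimates.var()
        print(f"unexplained variance: {unexplained:.1%}")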

    Observing Many Researchers Using the Same Data and Hypothesis Reveals a Hidden Universe of Uncertainty

    No full text
    Breznau N, Rinke EM, Wuttke A, et al. Observing Many Researchers Using the Same Data and Hypothesis Reveals a Hidden Universe of Uncertainty. 2021.

    This study explores how researchers’ analytical choices affect the reliability of scientific findings. Most discussions of reliability problems in science focus on systematic biases. We broaden the lens to include conscious and unconscious decisions that researchers make during data analysis and that may lead to diverging results. We coordinated 161 researchers in 73 research teams and observed their research decisions as they used the same data to independently test the same prominent social science hypothesis: that greater immigration reduces support for social policies among the public. In this typical case of research based on secondary data, we find that research teams reported widely diverging numerical findings and substantive conclusions despite identical start conditions. Researchers’ expertise, prior beliefs, and expectations barely predicted the wide variation in research outcomes. More than 90% of the total variance in numerical results remained unexplained even after accounting for research decisions identified via qualitative coding of each team’s workflow. This reveals a universe of uncertainty that is hidden when considering a single study in isolation. The idiosyncratic nature of how researchers’ results and conclusions varied is a new explanation for why many scientific hypotheses remain contested. It calls for greater humility and clarity in reporting scientific findings.

    How Many Replicators Does It Take to Achieve Reliability? Investigating Researcher Variability in a Crowdsourced Replication

    No full text
    The paper reports findings from a crowdsourced replication. Eighty-four replicator teams attempted to verify results reported in an original study by running the same models with the same data. The replication involved an experimental condition: a “transparent” group received the original study and code, while an “opaque” group received the same underlying study but with only a methods section and a description of the regression coefficients, without size or significance, and no code. The transparent group mostly verified the original study (95.5%), while the opaque group had less success (89.4%). Qualitative investigation of the replicators’ workflows reveals many causes of non-verification, which fall into two hypothesized categories: routine and non-routine. After correcting non-routine errors in the research process, so that the results reflect the level of quality that should be present in ‘real-world’ research, the verification rate was 96.1% in the transparent group and 92.4% in the opaque group. Two conclusions follow: (1) although high, the verification rate suggests that it would take a minimum of three replicators per study to achieve replication reliability of at least 95% confidence, assuming the ecological validity of this controlled setting; and (2) like any type of scientific research, replication is prone to errors that derive from routine and undeliberate actions in the research process. The latter suggests that idiosyncratic researcher variability might provide a key to understanding part of the “reliability crisis” in social and behavioral science, and is a reminder of the importance of transparent and well-documented workflows.
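
    As a check on the “minimum of three replicators” figure, the short sketch below applies binomial arithmetic under the assumption that reliability here means a strict majority of independent replicators verifying the result; the abstract does not spell out the exact criterion, and the function name is invented for illustration. At the opaque group’s corrected rate of 92.4%, a single replicator falls short of 95% confidence while a majority of three exceeds it.

        from math import comb

        def p_majority_verifies(p: float, n: int) -> float:
            """Probability that a strict majority of n independent replicators,
            each verifying with probability p, verify the original result."""
            return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
                       for k in range(n // 2 + 1, n + 1))

        # Corrected verification rates reported in the abstract:
        # 96.1% (transparent group) and 92.4% (opaque group).
        for p in (0.961, 0.924):
            for n in (1, 3):
                print(f"p={p:.3f}, n={n}: {p_majority_verifies(p, n):.3f}")
        # At p=0.924: n=1 gives 0.924 (< 0.95); n=3 gives ~0.984 (>= 0.95).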