10 research outputs found

    Measurement of Effective Temperatures in an Aging Colloidal Glass

    Full text link
    We study the thermal fluctuations of an optically confined probe particle, suspended in an aging colloidal suspension, as the suspension transforms from a viscous liquid into an elastic glass. The micron-sized bead in the optical trap forms a harmonic oscillator. By monitoring the equal-time fluctuations of the tracer at two different laser powers, we determine the temperature of the oscillator, $T_{\text{o}}$. In the ergodic liquid the temperatures of the oscillator and its environment are equal; in the nonequilibrium glassy phase, by contrast, we find that $T_{\text{o}}$ substantially exceeds the bath temperature. Comment: 4 pages (minor changes); accepted in Phys. Rev. Lett.
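
    In a minimal reading of the method (assuming the textbook equipartition relation and a trap stiffness $k$ proportional to the laser power $P$; the paper's treatment of the aging medium may be more involved), the oscillator temperature follows from the equal-time variance of the bead position:

        $k \langle x^2 \rangle = k_{\mathrm{B}} T_{\text{o}}, \qquad k \propto P$

    Repeating the measurement at two laser powers then yields two independent estimates of $T_{\text{o}}$ and a consistency check on the assumed linear dependence of $k$ on $P$; in the ergodic liquid both estimates reduce to the bath temperature.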

    On the Invariance of Kinematic Basis Matrices of Planetary Gear Sets in the Kinematic Analysis of Vehicle Transmissions

    Get PDF
    A universal matrix procedure for calculating the kinematics of planetary mechanisms is proposed. It is shown that the kinematic matrix systems of planetary mechanisms, although they are written out differently for different values of the characteristic planet-gear parameters, are invariant with respect to the vector of unknowns.
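
    The abstract describes casting planetary-gear kinematics as a linear matrix system in the unknown angular velocities. As an illustration of that general idea only (not the authors' basis matrices), the sketch below assembles and solves such a system for a single planetary gear set using the standard Willis relation, with the ring-to-sun tooth ratio playing the role of the characteristic parameter.

        # Illustrative sketch (not the paper's basis matrices): kinematics of one
        # planetary gear set as a linear system A @ w = b in the unknown speeds
        # w = [w_sun, w_carrier, w_ring], using the Willis relation
        # w_sun - (1 + k) * w_carrier + k * w_ring = 0, with k = Z_ring / Z_sun.
        import numpy as np

        def planetary_speeds(k, w_sun, w_ring):
            """Solve for all member speeds (rad/s) given the sun and ring speeds."""
            A = np.array([
                [1.0, -(1.0 + k), k],   # Willis relation
                [1.0, 0.0, 0.0],        # imposed sun speed
                [0.0, 0.0, 1.0],        # imposed ring speed
            ])
            b = np.array([0.0, w_sun, w_ring])
            return np.linalg.solve(A, b)

        # Example: k = 2.5, sun driven at 100 rad/s, ring held fixed.
        print(planetary_speeds(2.5, 100.0, 0.0))  # carrier turns at 100 / 3.5 rad/s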

    The Crowdsourced Replication Initiative: Investigating Immigration and Social Policy Preferences. Executive Report.

    Get PDF
    In an era of mass migration, social scientists, populist parties, and social movements raise concerns over the future of immigration-destination societies. What impact does this have on policy and social solidarity? Comparative cross-national research, relying mostly on secondary data, has produced findings that point in different directions. There is a threat of selective model reporting and a lack of replicability. The heterogeneity of countries obscures attempts to clearly define data-generating models. P-hacking and HARKing lurk among standard research practices in this area. This project employs crowdsourcing to address these issues. It draws on replication, deliberation, meta-analysis, and harnessing the power of many minds at once. The Crowdsourced Replication Initiative has two main goals: (a) to better investigate the link between immigration and social policy preferences across countries, and (b) to develop crowdsourcing as a social science method. The Executive Report provides short reviews of the area of social policy preferences and immigration and of the methods and impetus behind crowdsourcing, plus a description of the entire project. Three main areas of findings will appear in three papers, which are registered as pre-analysis plans (PAPs) or in progress.

    Measurement of the non-equilibrium temperature in colloidal glasses using optical tweezers

    No full text
    EThOS - Electronic Theses Online Service, United Kingdom

    The Crowdsourced Replication Initiative: Investigating Immigration and Social Policy Preferences. Executive Report

    No full text
    Breznau N, Rinke EM, Wuttke A, et al. The Crowdsourced Replication Initiative: Investigating Immigration and Social Policy Preferences. Executive Report. 2019.

    Observing Many Researchers Using the Same Data and Hypothesis Reveals a Hidden Universe of Uncertainty

    No full text
    Breznau N, Rinke EM, Wuttke A, et al. Observing Many Researchers Using the Same Data and Hypothesis Reveals a Hidden Universe of Uncertainty. 2021. This study explores how researchers' analytical choices affect the reliability of scientific findings. Most discussions of reliability problems in science focus on systematic biases. We broaden the lens to include conscious and unconscious decisions that researchers make during data analysis and that may lead to diverging results. We coordinated 161 researchers in 73 research teams and observed their research decisions as they used the same data to independently test the same prominent social science hypothesis: that greater immigration reduces support for social policies among the public. In this typical case of research based on secondary data, we find that research teams reported widely diverging numerical findings and substantive conclusions despite identical starting conditions. Researchers' expertise, prior beliefs, and expectations barely predicted the wide variation in research outcomes. More than 90% of the total variance in numerical results remained unexplained even after accounting for research decisions identified via qualitative coding of each team's workflow. This reveals a universe of uncertainty that is hidden when considering a single study in isolation. The idiosyncratic nature of how researchers' results and conclusions varied is a new explanation for why many scientific hypotheses remain contested. It calls for greater humility and clarity in reporting scientific findings.
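
    A hedged sketch of the kind of calculation behind the "more than 90% unexplained variance" figure: if each team's reported effect size is regressed on indicators of its coded analytical decisions, the share of variance those decisions explain is the model R-squared, and the remainder is the unexplained, idiosyncratic part. The variable names and data below are hypothetical, not the study's actual coding scheme.

        # Illustrative only: estimate how much of the team-to-team variance in
        # reported effect sizes is explained by coded analytical decisions
        # (binary indicators), using ordinary least squares on hypothetical data.
        import numpy as np

        rng = np.random.default_rng(0)
        n_teams = 73
        decisions = rng.integers(0, 2, size=(n_teams, 5))             # 5 coded binary decisions
        effects = 0.1 * decisions[:, 0] + rng.normal(0, 1, n_teams)   # mostly idiosyncratic noise

        X = np.column_stack([np.ones(n_teams), decisions])            # add intercept
        beta, *_ = np.linalg.lstsq(X, effects, rcond=None)
        residuals = effects - X @ beta
        r_squared = 1 - residuals.var() / effects.var()
        print(f"variance explained by coded decisions: {r_squared:.1%}")
        print(f"unexplained (idiosyncratic) share:     {1 - r_squared:.1%}")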

    How Many Replicators Does It Take to Achieve Reliability? Investigating Researcher Variability in a Crowdsourced Replication

    No full text
    The paper reports findings from a crowdsourced replication. Eighty-four replicator teams attempted to verify results reported in an original study by running the same models with the same data. The replication involved an experimental condition: a "transparent" group received the original study and code, while an "opaque" group received the same underlying study but with only a methods section and a description of the regression coefficients without their sizes or significance, and no code. The transparent group mostly verified the original study (95.5%), while the opaque group had less success (89.4%). Qualitative investigation of the replicators' workflows reveals many causes of non-verification, and two categories of these causes are hypothesized: routine and non-routine. After correcting non-routine errors in the research process, to ensure that the results reflect a level of quality that should be present in 'real-world' research, the verification rate was 96.1% in the transparent group and 92.4% in the opaque group. Two conclusions follow: (1) although high, these verification rates suggest that it would take a minimum of three replicators per study to achieve replication reliability at a confidence level of at least 95%, assuming the ecological validity of this controlled setting; and (2) like any type of scientific research, replication is prone to errors that derive from routine and undeliberate actions in the research process. The latter suggests that idiosyncratic researcher variability might provide a key to understanding part of the "reliability crisis" in social and behavioral science and is a reminder of the importance of transparent and well-documented workflows.
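
    One way to read the "three replicators" arithmetic (the abstract does not spell out the exact model): treat each replicator as verifying independently with the reported per-replicator rate, and ask how many replicators are needed for a majority verdict to be at least 95% reliable. The sketch below uses the corrected rates quoted above; the modelling assumptions are ours, not the paper's.

        # Hedged illustration of the replicator-reliability arithmetic, assuming
        # independent replicators who each verify with probability p, and asking
        # when a strict-majority verdict reaches at least 95% reliability.
        from math import comb

        def majority_reliability(p, n):
            """Probability that a strict majority of n independent replicators verify."""
            return sum(comb(n, k) * p**k * (1 - p)**(n - k)
                       for k in range((n // 2) + 1, n + 1))

        for p in (0.924, 0.961):      # corrected opaque / transparent verification rates
            for n in (1, 3, 5):
                print(f"p={p:.3f}, n={n}: majority reliability = {majority_reliability(p, n):.3f}")
        # With p around 0.92-0.96, a single replicator falls short of 0.95 only at
        # the lower rate, while three replicators exceed 0.95 at both rates.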