
    Scientific progress despite irreproducibility: A seeming paradox

    It appears paradoxical that science is producing outstanding new results and theories at a rapid rate at the same time that researchers are identifying serious problems in the practice of science that cause many reports to be irreproducible and invalid. Certainly the practice of science needs to be improved, and scientists are now pursuing this goal. However, in this perspective we argue that this seeming paradox is not new, has always been part of the way science works, and likely will remain so. We first introduce the paradox. We then review a wide range of challenges that appear to make scientific success difficult. Next, we describe the factors that make science work, in the past, present, and presumably also in the future. We then suggest that remedies for the present practice of science need to be applied selectively so as not to slow progress, and illustrate with a few examples. We conclude with arguments that communication of science needs to emphasize not just problems but the enormous successes and benefits that science has brought and is now bringing to all elements of modern society.

    Prime diagnosticity in short-term repetition priming: Is primed evidence discounted, even when it reliably indicates the correct answer?

    The authors conducted 4 repetition priming experiments that manipulated prime duration and prime diagnosticity in a visual forced-choice perceptual identification task. The strength and direction of prime diagnosticity produced marked effects on identification accuracy, but those effects were resistant to subsequent changes of diagnosticity. Participants learned to associate different diagnosticities with primes of different durations but not with primes presented in different colors. Regardless of prime diagnosticity, preference for a primed alternative covaried negatively with prime duration, suggesting that even for diagnostic primes, evidence discounting remains an important factor. A computational model, with the assumption that adaptation to the statistics of the experiment modulates the level of evidence discounting, accounted for these results.
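    As a rough illustration of the mechanism this abstract describes, the sketch below simulates a forced-choice task in which an evidence-discounting weight adapts, trial by trial, to how diagnostic the primes actually are. The function names, parameter values, and update rule are hypothetical simplifications for illustration, not the authors' model.

```python
# Minimal sketch (not the authors' model): evidence discounting whose strength
# adapts to the diagnosticity of primes observed so far in the experiment.
import random

def choose(primed_evidence, unprimed_evidence, discount):
    """Pick whichever alternative has more evidence after discounting the primed one."""
    return "primed" if primed_evidence * (1.0 - discount) > unprimed_evidence else "unprimed"

def run_block(n_trials, prime_diagnosticity, learning_rate=0.05):
    """Simulate one block; the discounting level adapts to observed prime validity."""
    discount = 0.5                       # initial, moderate discounting
    correct = 0
    for _ in range(n_trials):
        prime_is_valid = random.random() < prime_diagnosticity
        # The primed alternative always gets some extra activation from the prime;
        # only on valid-prime trials is it also the correct answer.
        primed_evidence = random.gauss(1.0 if prime_is_valid else 0.0, 0.3) + 0.4
        unprimed_evidence = random.gauss(0.0 if prime_is_valid else 1.0, 0.3)
        choice = choose(primed_evidence, unprimed_evidence, discount)
        correct += (choice == "primed") == prime_is_valid
        # Adaptation: valid primes push discounting down, misleading primes push it up.
        discount += learning_rate * ((0.0 if prime_is_valid else 1.0) - discount)
    return correct / n_trials, round(discount, 2)

# With mostly diagnostic primes, discounting settles low and accuracy benefits.
print(run_block(500, prime_diagnosticity=0.8))
```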

    Confusion and Compensation in Visual Perception: Effects of Spatiotemporal Proximity and Selective Attention

    The authors investigated spatial, temporal, and attentional manipulations in a short-term repetition priming paradigm. Brief primes produced a strong preference to choose the primed alternative, whereas long primes had the opposite effect. However, a 2nd brief presentation of a long prime produced a preference for the primed word despite the long total prime duration. These surprising results are explained by a computational model that posits the offsetting components of source confusion (prime features are confused with target features) and discounting (evidence from primed features is discounted). The authors obtained compelling evidence for these components by showing how they can cooperate or compete through different manipulations of prime salience. The model allows for dissociations between prime salience and the magnitude of priming, thereby providing a unified account of "subliminal" and "supraliminal" priming.
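    The toy sketch below illustrates the two offsetting components named in this abstract: source confusion (prime features leak into the percept of the target) and discounting (evidence from features shared with the prime is down-weighted). The stimuli, feature coding, and numbers are illustrative assumptions, not the authors' computational model.

```python
# Toy sketch (not the authors' model): how source confusion and evidence
# discounting can offset each other in a forced choice between a primed foil
# and the actual (briefly flashed) target word.

def perceived_evidence(target_features, prime_features, target_strength, confusion):
    """Evidence attributed to the flashed target; prime features leak in (source confusion)."""
    evidence = {f: confusion for f in prime_features}                # confused prime features
    evidence.update({f: target_strength for f in target_features})  # weakly seen target
    return evidence

def choice_score(word_features, evidence, prime_features, discount):
    """Total evidence for a choice word, discounting features shared with the prime."""
    return sum(
        evidence.get(f, 0.0) * ((1.0 - discount) if f in prime_features else 1.0)
        for f in word_features
    )

prime = set("LION")     # hypothetical prime word
target = set("DESK")    # briefly flashed target word
foil = set("LION")      # primed choice alternative, identical to the prime

evidence = perceived_evidence(target, prime, target_strength=0.5, confusion=0.7)

# With no discounting, source confusion favors the primed foil; with strong
# discounting, the preference reverses toward the target.
for discount in (0.0, 0.4, 0.9):
    print(f"discount={discount:.1f}  "
          f"target={choice_score(target, evidence, prime, discount):.2f}  "
          f"primed foil={choice_score(foil, evidence, prime, discount):.2f}")
```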

    Extraordinary claims, extraordinary evidence? A discussion

    Roberts (2020, Learning & Behavior, 48[2], 191-192) discussed research claiming honeybees can do arithmetic. Some readers of this research might regard such claims as unlikely. The present authors used this example as a basis for a debate on the criterion that ought to be used for publication of results or conclusions that could be viewed as unlikely by a significant number of readers, editors, or reviewers.

    How should the advent of large language models affect the practice of science?

    Large language models (LLMs) are being increasingly incorporated into scientific workflows. However, we have yet to fully grasp the implications of this integration. How should the advent of large language models affect the practice of science? For this opinion piece, we have invited four diverse groups of scientists to reflect on this query, sharing their perspectives and engaging in debate. Schulz et al. make the argument that working with LLMs is not fundamentally different from working with human collaborators, while Bender et al. argue that LLMs are often misused and over-hyped, and that their limitations warrant a focus on more specialized, easily interpretable tools. Marelli et al. emphasize the importance of transparent attribution and responsible use of LLMs. Finally, Botvinick and Gershman advocate that humans should retain responsibility for determining the scientific roadmap. To facilitate the discussion, the four perspectives are complemented with a response from each group. By putting these different perspectives in conversation, we aim to bring attention to important considerations within the academic community regarding the adoption of LLMs and their impact on both current and future scientific practices.

    Statistics in the service of science: don't let the tail wag the dog

    Statistical modeling is generally meant to describe patterns in data in service of the broader scientific goal of developing theories to explain those patterns. Statistical models support meaningful inferences when models are built so as to align parameters of the model with potential causal mechanisms and how they manifest in data. When statistical models are instead based on assumptions chosen by default, attempts to draw inferences can be uninformative or even paradoxical; in essence, the tail is trying to wag the dog. These issues are illustrated by van Doorn et al. (this issue) in the context of using Bayes Factors to identify effects and interactions in linear mixed models. We show that the problems identified in their applications (along with other problems identified here) can be circumvented by using priors over inherently meaningful units instead of default priors on standardized scales. This case study illustrates how researchers must directly engage with a number of substantive issues in order to support meaningful inferences, of which we highlight two: The first is the problem of coordination, which requires a researcher to specify how the theoretical constructs postulated by a model are functionally related to observable variables. The second is the problem of generalization, which requires a researcher to consider how a model may represent theoretical constructs shared across similar but non-identical situations, along with the fact that model comparison metrics like Bayes Factors do not directly address this form of generalization. For statistical modeling to serve the goals of science, models cannot be based on default assumptions, but should instead be based on an understanding of their coordination function and on how they represent causal mechanisms that may be expected to generalize to other related scenarios.
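    The sketch below illustrates the contrast this abstract draws between a default prior on a standardized scale and a prior stated in meaningful raw units (here, milliseconds), using a simple Bayes factor computed by numerical integration with the sample standard deviation plugged in as known. The data, prior scales, and simplifications are hypothetical and only meant to show how the prior choice can change the result; this is not the authors' analysis.

```python
# Illustrative sketch only: a prior in meaningful units (ms) vs. a default
# standardized prior, for a Bayes factor comparing "no effect" to "some effect".
import numpy as np
from scipy import stats, integrate

rng = np.random.default_rng(1)
# Hypothetical data: 30 per-participant condition differences, in milliseconds.
diffs = rng.normal(loc=20.0, scale=60.0, size=30)
sd = diffs.std(ddof=1)   # treated as known, a deliberate simplification

def log_lik(mu):
    """Log-likelihood of the observed differences given true mean effect mu (ms)."""
    return stats.norm.logpdf(diffs, loc=mu, scale=sd).sum()

log_lik_null = log_lik(0.0)

def bayes_factor_10(prior_pdf, lo=-300.0, hi=300.0):
    """BF for H1 (mu ~ prior) vs. H0 (mu = 0), by numerical integration over mu."""
    bf, _ = integrate.quad(
        lambda mu: np.exp(log_lik(mu) - log_lik_null) * prior_pdf(mu), lo, hi
    )
    return bf

# Prior stated in meaningful units: effects much beyond ~100 ms deemed implausible.
prior_ms = lambda mu: stats.norm.pdf(mu, loc=0.0, scale=50.0)
# "Default" prior on the standardized effect mu/sd (Cauchy, scale 0.707),
# transformed back to the millisecond scale by change of variables.
prior_default = lambda mu: stats.cauchy.pdf(mu / sd, scale=0.707) / sd

print("BF10 with prior in ms:  ", round(bayes_factor_10(prior_ms), 2))
print("BF10 with default prior:", round(bayes_factor_10(prior_default), 2))
```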