139 research outputs found

    Judgments of effort exerted by others are influenced by received rewards

    Estimating invested effort is a core dimension of evaluating one's own and others' actions, and views on the relationship between effort and rewards are deeply ingrained in various societal attitudes. Internal representations of effort, however, are inherently noisy, e.g. due to the variability of sensorimotor and visceral responses to physical exertion. The uncertainty in effort judgments is further aggravated when there is no direct access to the internal representations of exertion, such as when estimating the effort of another person. Bayesian cue integration suggests that this uncertainty can be resolved by incorporating additional cues that are predictive of effort, e.g. received rewards. We hypothesized that judgments about the effort spent on a task would be influenced by the magnitude of received rewards. Additionally, we surmised that such influence might further depend on individual beliefs regarding the relationship between hard work and prosperity, as exemplified by a conservative work ethic. To test these predictions, participants performed an effortful task interleaved with a partner and were informed about the obtained reward before rating either their own or the partner's effort. We show that higher rewards led to higher estimates of exerted effort in self-judgments, and this effect was even more pronounced for other-judgments. In both types of judgment, computational modelling revealed that reward information and sensorimotor markers of exertion were combined in a Bayes-optimal manner to reduce uncertainty. Remarkably, the extent to which rewards influenced effort judgments was associated with conservative world-views, indicating links between this phenomenon and general beliefs about the relationship between effort and earnings in society.
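The Bayes-optimal combination the abstract refers to can be illustrated with the standard precision-weighted cue-integration rule for two Gaussian cues. This is a generic sketch of the technique, not the authors' actual model; the variable names and example numbers are illustrative only.

```python
# Precision-weighted (Bayes-optimal) combination of two Gaussian cues,
# e.g. a noisy sensorimotor estimate of effort and a reward-based prediction.

def combine_cues(mu_effort, var_effort, mu_reward, var_reward):
    """Return the posterior mean and variance of the combined estimate.

    Each cue is modeled as a Gaussian; the optimal estimate weights
    each cue by its precision (inverse variance), so the noisier cue
    contributes less to the final judgment.
    """
    w_effort = (1 / var_effort) / (1 / var_effort + 1 / var_reward)
    w_reward = 1 - w_effort
    mu_combined = w_effort * mu_effort + w_reward * mu_reward
    # The combined estimate is always less uncertain than either cue alone.
    var_combined = 1 / (1 / var_effort + 1 / var_reward)
    return mu_combined, var_combined

# Example: a noisy sensorimotor cue (variance 100) is pulled toward a more
# reliable reward cue (variance 25), yielding roughly (66.0, 20.0).
mu, var = combine_cues(mu_effort=50, var_effort=100, mu_reward=70, var_reward=25)
print(mu, var)
```

Note that the posterior variance (20) is smaller than either cue's variance (100 and 25), which is the uncertainty-reduction property the abstract describes.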

    Social Performance Cues Induce Behavioral Flexibility in Humans

    Behavioral flexibility allows individuals to react to environmental changes, but changing established behavior carries costs and uncertain benefits. Individuals may thus modify their behavioral flexibility according to the prevailing circumstances. Social information provided by the performance level of others offers one possible cue for assessing the potential benefits of changing behavior, since out-performance in similar circumstances indicates that novel behaviors (innovations) are potentially useful. We demonstrate that social performance cues, in the form of previous players' scores in a problem-solving computer game, influence behavioral flexibility. Participants viewed only performance indicators, not the innovative behavior of others. While performance cues (high, low, or no scores) had little effect on innovation discovery rates, participants who viewed high scores increased their utilization of innovations, allowing them to exploit the virtual environment more effectively than players viewing low or no scores. Perceived conspecific performance can thus shape human decisions to adopt novel traits, even when the traits employed cannot be copied. This simple mechanism, social performance feedback, could be a driver of both the facultative adoption of innovations and cumulative cultural evolution, processes critical to human success.

    A Case Report: Building communities with training and resources for Open Science trainers

    To foster responsible research and innovation, research communities, institutions, and funders are shifting their practices and requirements towards Open Science. Open Science skills are becoming increasingly essential for researchers. Indeed, general awareness of Open Science has grown among EU researchers, but practical adoption can be further improved. Recognizing a gap between the training that is needed and the training that is provided, the FOSTER project offers practical guidance and training to help researchers learn how to open up their research within a particular domain or research environment. Aiming for a sustainable approach, FOSTER focused on strengthening Open Science training capacity by establishing and supporting a community of trainers. The creation of an Open Science training handbook was a first step towards bringing together trainers to share their experiences and to create an open, living knowledge resource. A subsequent series of train-the-trainer bootcamps helped trainers find inspiration, improve their skills, and intensify exchange within a peer group. Four trainers who attended one of the bootcamps contributed a case study on their experiences and how they rolled out Open Science training within their own institutions. On its platform, the project provides a range of online courses and resources to learn about key Open Science topics. FOSTER awards users gamification badges when they complete courses in order to provide incentives and rewards, and to spur them on to even greater achievements in learning. The paper at hand describes FOSTER Plus' training strategies, shares the lessons learnt, and provides guidance on how to reuse the project's materials and training approaches.

    Planning preclinical confirmatory multicenter trials to strengthen translation from basic to clinical research – a multi-stakeholder workshop report

    Clinical translation from bench to bedside often remains challenging despite promising preclinical evidence. Among many drivers, such as biological complexity or poorly understood disease pathology, preclinical evidence often lacks the desired robustness. Reasons include low sample sizes, selective reporting, publication bias, and consequently inflated effect sizes. In this context, there is growing consensus that confirmatory multicenter studies (by weeding out false positives) represent an important step in strengthening and generating preclinical evidence before moving on to clinical research. However, there is little guidance on what such a preclinical confirmatory study entails and when it should be conducted in the research trajectory. To close this gap, we organized a workshop to bring together statisticians, clinicians, preclinical scientists, and meta-researchers to discuss and develop recommendations that are solution-oriented and feasible for practitioners. Herein, we summarize and review current approaches and outline strategies that provide decision-critical guidance on when to start and subsequently how to plan a confirmatory study. We define a set of minimum criteria and strategies to strengthen validity before engaging in a confirmatory preclinical trial, including sample size considerations that take the inherent uncertainty of initial (exploratory) studies into account. Beyond this specific guidance, we highlight knowledge gaps that require further research and discuss the role of confirmatory studies in translational biomedical research. In conclusion, this workshop report highlights the need for close interaction and open and honest debate between statisticians, preclinical scientists, meta-researchers (who conduct research on research), and clinicians already at an early stage of a given preclinical research trajectory.

    Workshop Medical Neuroscience Master 19

    No full text

    Teaching Notebooks

    No full text

    Introduction to R

    No full text

    Recorded Talks

    No full text