
    Research Weaving: Visualizing the Future of Research Synthesis

    We propose a new framework for research synthesis of both evidence and influence, named research weaving. It summarizes and visualizes information content, history, and networks among a collection of documents on any given topic. Research weaving achieves this by combining the power of two methods: systematic mapping and bibliometrics. Systematic mapping provides a snapshot of the current state of knowledge, identifying areas needing more research attention and those ready for full synthesis. Bibliometrics enables researchers to see how pieces of evidence are connected, revealing the structure and development of a field. We explain how researchers can use some or all of these tools to gain a deeper, more nuanced understanding of the scientific literature.

    Understanding the sex difference in vulnerability to adolescent depression: an examination of child and parent characteristics

    This study examined sex differences in risk factors associated with adolescent depression in a large sample of boys and girls. Moderation and mediation models of the sex difference in the likelihood of depression were examined. Findings indicate that the factors associated with depression in adolescent boys and girls are quite similar. All of the variables considered were associated with depression, but sex did not moderate the impact of vulnerability factors on the likelihood of a depression diagnosis. However, negative self-perceptions in the domains of achievement, global self-worth, and physical appearance partially mediated the relationship between sex and depression. Further, girls had higher levels of positive self-perceptions in interpersonal domains, which acted as suppressors and reduced the likelihood of depression in girls. These findings suggest that girls' higher incidence of depression is due in part to their higher levels of negative self-perceptions, whereas positive interpersonal factors serve to protect them from depressive episodes.
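    The mediation result described here can be made concrete with a product-of-coefficients decomposition. The sketch below runs that decomposition on simulated data; the variable names, the 0/1 coding of sex, and all effect sizes are illustrative assumptions, not values from this study.

```python
# Minimal product-of-coefficients mediation sketch (sex -> negative
# self-perception -> depression) on simulated, purely illustrative data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 1000
sex = rng.integers(0, 2, n)                     # 0/1 coding, illustrative only
neg_self = 0.4 * sex + rng.normal(0, 1, n)      # mediator: negative self-perception
depression = 0.5 * neg_self + 0.1 * sex + rng.normal(0, 1, n)

# Path a: sex -> mediator.
a = sm.OLS(neg_self, sm.add_constant(sex)).fit().params[1]
# Paths c' (direct) and b (mediator), estimated jointly.
joint = sm.OLS(depression, sm.add_constant(np.column_stack([sex, neg_self]))).fit()
direct, b = joint.params[1], joint.params[2]
print(f"indirect (mediated) effect = {a * b:.3f}, direct effect = {direct:.3f}")
```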

    Evaluation of individual and ensemble probabilistic forecasts of COVID-19 mortality in the United States

    Short-term probabilistic forecasts of the trajectory of the COVID-19 pandemic in the United States have served as a visible and important communication channel between the scientific modeling community and both the general public and decision-makers. Forecasting models provide specific, quantitative, and evaluable predictions that inform short-term decisions such as healthcare staffing needs, school closures, and allocation of medical supplies. Starting in April 2020, the US COVID-19 Forecast Hub (https://covid19forecasthub.org/) collected, disseminated, and synthesized tens of millions of specific predictions from more than 90 different academic, industry, and independent research groups. A multimodel ensemble forecast that combined predictions from dozens of groups every week provided the most consistently accurate probabilistic forecasts of incident deaths due to COVID-19 at the state and national level from April 2020 through October 2021. The performance of 27 individual models that submitted complete forecasts of COVID-19 deaths consistently throughout this period showed high variability in forecast skill across time, geospatial units, and forecast horizons. Two-thirds of the models evaluated showed better accuracy than a naïve baseline model. Forecast accuracy degraded as models made predictions further into the future, with probabilistic error at a 20-wk horizon three to five times larger than when predicting at a 1-wk horizon. This project underscores the role that collaboration and active coordination between governmental public-health agencies, academic modeling teams, and industry partners can play in developing modern modeling capabilities to support local, state, and federal response to outbreaks.
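    As a concrete illustration of how such probabilistic forecasts are scored, the sketch below implements the weighted interval score (WIS), the quantile-based scoring rule used for this kind of evaluation (after Bracher et al. 2021). The forecast intervals and observation are made-up values, not Forecast Hub data.

```python
# Weighted interval score (WIS) for a quantile-format forecast: a weighted
# combination of the absolute error of the median and interval scores for a
# set of central prediction intervals.

def interval_score(lower, upper, alpha, y):
    """Score one central (1 - alpha) prediction interval against observation y."""
    penalty_low = (2 / alpha) * max(lower - y, 0)
    penalty_high = (2 / alpha) * max(y - upper, 0)
    return (upper - lower) + penalty_low + penalty_high

def weighted_interval_score(median, intervals, y):
    """intervals: list of (alpha, lower, upper) tuples for central intervals."""
    K = len(intervals)
    total = 0.5 * abs(y - median)
    for alpha, lower, upper in intervals:
        total += (alpha / 2) * interval_score(lower, upper, alpha, y)
    return total / (K + 0.5)

# Illustrative 1-week-ahead forecast of incident deaths for one location.
intervals = [(0.02, 40, 260), (0.20, 80, 190), (0.50, 110, 150)]
print(weighted_interval_score(median=130, intervals=intervals, y=175))
```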

    Search for gravitational waves from Scorpius X-1 in the second Advanced LIGO observing run with an improved hidden Markov model

    We present results from a semicoherent search for continuous gravitational waves from the low-mass X-ray binary Scorpius X-1, using a hidden Markov model (HMM) to track spin wandering. This search improves on previous HMM-based searches of LIGO data by using an improved frequency-domain matched filter, the J-statistic, and by analyzing data from Advanced LIGO's second observing run. In the frequency range searched, from 60 to 650 Hz, we find no evidence of gravitational radiation. At 194.6 Hz, the most sensitive search frequency, we report an upper limit on gravitational-wave strain (at 95% confidence) of h₀ = 3.47 × 10⁻²⁵ when marginalizing over source inclination angle. This is the most sensitive search for Scorpius X-1 to date that is specifically designed to be robust in the presence of spin wandering.
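    The role of the hidden Markov model can be illustrated with a small Viterbi tracker: given a grid of detection statistics over time segments and frequency bins (standing in for the J-statistic), it finds the highest-scoring frequency path while letting the frequency wander between segments. The ±1-bin transition rule and the random toy data are illustrative assumptions, not the settings of the published search.

```python
# Toy Viterbi tracker for a slowly wandering signal frequency.
import numpy as np

def viterbi_track(log_like):
    """log_like[t, f]: detection statistic for time segment t, frequency bin f.
    Returns the best frequency path, allowing moves of at most one bin per step."""
    n_t, n_f = log_like.shape
    score = log_like[0].copy()
    back = np.zeros((n_t, n_f), dtype=int)
    for t in range(1, n_t):
        best = np.full(n_f, -np.inf)
        for step in (-1, 0, 1):                  # permitted wandering per segment
            shifted = np.roll(score, step)
            if step == 1:
                shifted[0] = -np.inf             # no wrap-around across the band
            elif step == -1:
                shifted[-1] = -np.inf
            better = shifted > best
            best[better] = shifted[better]
            back[t][better] = step
        score = best + log_like[t]
    path = [int(np.argmax(score))]
    for t in range(n_t - 1, 0, -1):              # backtrack the winning path
        path.append(path[-1] - back[t, path[-1]])
    return path[::-1], float(score.max())

rng = np.random.default_rng(0)
path, best = viterbi_track(rng.standard_normal((50, 200)))   # pure-noise toy grid
```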

    The United States COVID-19 Forecast Hub dataset

    Academic researchers, government agencies, industry groups, and individuals have produced forecasts at an unprecedented scale during the COVID-19 pandemic. To leverage these forecasts, the United States Centers for Disease Control and Prevention (CDC) partnered with an academic research lab at the University of Massachusetts Amherst to create the US COVID-19 Forecast Hub. Launched in April 2020, the Forecast Hub is a dataset with point and probabilistic forecasts of incident cases, incident hospitalizations, incident deaths, and cumulative deaths due to COVID-19 at county, state, and national levels in the United States. Included forecasts represent a variety of modeling approaches, data sources, and assumptions regarding the spread of COVID-19. The goal of this dataset is to establish a standardized and comparable set of short-term forecasts from modeling teams. These data can be used to develop ensemble models, communicate forecasts to the public, create visualizations, compare models, and inform policies regarding COVID-19 mitigation. These open-source data are available via download from GitHub, through an online API, and through R packages.
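    As a rough sketch of working with the quantile-format forecast files, the snippet below loads one submission file with pandas and pivots its probabilistic rows. The file name, the target string, and the column names (forecast_date, target, location, type, quantile, value) are assumptions made for illustration; the repository documentation remains authoritative.

```python
# Hypothetical example of reshaping one quantile-format forecast file.
import pandas as pd

df = pd.read_csv("example-model-forecast.csv")   # a local copy of one submission file

# Keep the probabilistic (quantile) rows for one target and location.
deaths = df[(df["type"] == "quantile")
            & (df["target"] == "1 wk ahead inc death")
            & (df["location"] == "US")]

# One row per forecast date, one column per quantile level.
wide = deaths.pivot_table(index="forecast_date", columns="quantile", values="value")
print(wide.head())
```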

    Divide and conquer? Size adjustment with allometry and intermediate outcomes

    Many trait measurements are size-dependent, and while we often divide these traits by size before fitting statistical models to control for the effect of size, this approach does not account for allometry and the intermediate outcome problem. We describe these problems and outline potential solutions.
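    The contrast can be seen in a small simulation: dividing a trait by size implicitly assumes isometry (a scaling exponent of exactly 1), whereas a log-log regression estimates the exponent from the data. The simulated exponent of 0.75 and the noise levels below are illustrative assumptions.

```python
# Ratio-based size correction vs. an allometric (log-log) regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
size = rng.lognormal(mean=1.0, sigma=0.3, size=200)
trait = 2.0 * size ** 0.75 * rng.lognormal(sigma=0.1, size=200)  # allometric scaling

# Ratio approach: still correlated with size because the true exponent is not 1.
ratio = trait / size
print("log-ratio vs log-size correlation:",
      round(float(np.corrcoef(np.log(ratio), np.log(size))[0, 1]), 2))

# Allometric approach: keep size in the model and estimate the exponent.
fit = sm.OLS(np.log(trait), sm.add_constant(np.log(size))).fit()
print("estimated allometric exponent:", round(float(fit.params[1]), 2))  # ~0.75
```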

    The French press: a repeatable and high-throughput approach to exercising zebrafish (Danio rerio)

    Zebrafish are increasingly used as a vertebrate model organism for various traits including swimming performance, obesity and metabolism, necessitating high-throughput protocols to generate standardized phenotypic information. Here, we propose a novel and cost-effective method for exercising zebrafish, using a coffee plunger and magnetic stirrer. To demonstrate the use of this method, we conducted a pilot experiment to show that this simple system provides repeatable estimates of maximal swim performance (intra-class correlation [ICC] = 0.34–0.41) and observe that exercise training of zebrafish on this system significantly increases their maximum swimming speed. We propose this high-throughput and reproducible system as an alternative to traditional linear chamber systems for exercising zebrafish and similarly sized fishes.
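    Repeatability of this kind is usually estimated as an intra-class correlation from a random-intercept model (between-individual variance divided by total variance). Below is a minimal sketch on simulated swim-speed data; the variance components, sample sizes, and resulting ICC are illustrative, not the study's estimates.

```python
# Estimating repeatability (ICC) of maximal swim speed from repeated trials.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_fish, n_trials = 30, 3
fish_effect = rng.normal(0.0, 1.0, n_fish)       # stable between-fish differences
data = pd.DataFrame({
    "fish": np.repeat(np.arange(n_fish), n_trials),
    "speed": np.repeat(fish_effect, n_trials) + rng.normal(0.0, 1.5, n_fish * n_trials),
})

model = smf.mixedlm("speed ~ 1", data, groups=data["fish"]).fit()
between = float(model.cov_re.iloc[0, 0])          # between-fish variance
within = float(model.scale)                       # within-fish (residual) variance
print("repeatability (ICC):", round(between / (between + within), 2))
```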

    Improving quantitative synthesis to achieve generality in ecology

    Synthesis of primary ecological data is often assumed to achieve a notion of ‘generality’, through the quantification of overall effect sizes and consistency among studies, and has become a dominant research approach in ecology. Unfortunately, ecologists rarely define the generality of their findings, their estimand (the target of estimation), or the population of interest. Given that generality is fundamental to science, and the urgent need for scientific understanding to curb global-scale ecological breakdown, loose usage of the term ‘generality’ is problematic. In other disciplines, generality is defined as comprising both generalizability (extending an inference about an estimand from the sample to the population) and transferability (the validity of estimand predictions in a different sampling unit or population). We review current practice in ecological synthesis and demonstrate that, when researchers fail to define the assumptions underpinning generalizations and transfers of effect sizes, generality often misses its target. We provide guidance for communicating nuanced inferences and maximizing the impact of syntheses both within and beyond academia. We propose pathways to generality applicable to ecological syntheses, including the development of quantitative and qualitative criteria with which to license the transfer of estimands from both primary and synthetic studies.

    Methods for testing publication bias in ecological and evolutionary meta‐analyses

    Nakagawa S, Lagisz M, Jennions MD, et al. Methods for testing publication bias in ecological and evolutionary meta‐analyses. Methods in Ecology and Evolution. 2021

    Publication bias impacts on effect size, statistical power, and magnitude (Type M) and sign (Type S) errors in ecology and evolutionary biology

    Collaborative efforts to directly replicate empirical studies in the medical and social sciences have revealed alarmingly low rates of replicability, a phenomenon dubbed the ‘replication crisis’. Poor replicability has spurred cultural changes targeted at improving reliability in these disciplines. Given the absence of equivalent replication projects in ecology and evolutionary biology, two inter-related indicators offer the opportunity to retrospectively assess replicability: publication bias and statistical power. This registered report assesses the prevalence and severity of small-study effects (i.e., smaller studies reporting larger effect sizes) and decline effects (i.e., effect sizes decreasing over time) across ecology and evolutionary biology, using 87 meta-analyses comprising 4,250 primary studies and 17,638 effect sizes. Further, we estimate how publication bias might distort the estimation of effect sizes, statistical power, and errors in magnitude (Type M, or exaggeration ratio) and sign (Type S). We show strong evidence for the pervasiveness of both small-study and decline effects in ecology and evolution. There was widespread prevalence of publication bias that resulted in meta-analytic means being over-estimated by (at least) 0.12 standard deviations. The prevalence of publication bias distorted confidence in meta-analytic results, with 66% of initially statistically significant meta-analytic means becoming non-significant after correcting for publication bias. Ecological and evolutionary studies consistently had low statistical power (15%), with a 4-fold exaggeration of effects on average (Type M error rate = 4.4). Notably, publication bias reduced power from 23% to 15% and increased Type M error rates from 2.7 to 4.4 because it creates a non-random sample of effect-size evidence. The sign errors of effect sizes (Type S error) increased from 5% to 8% because of publication bias. Our research provides clear evidence that many published ecological and evolutionary findings are inflated. Our results highlight the importance of designing high-power empirical studies (e.g., via collaborative team science), promoting and encouraging replication studies, testing and correcting for publication bias in meta-analyses, and adopting open and transparent research practices, such as (pre)registration, data- and code-sharing, and transparent reporting.
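    The Type M and Type S quantities reported here can be estimated by simulation from an assumed true effect and standard error, in the spirit of Gelman and Carlin's retrodesign calculations. The sketch below uses illustrative inputs, not estimates from this registered report.

```python
# Simulating power, Type M (exaggeration ratio) and Type S (sign) error for a
# study with a given true effect and standard error (normal approximation).
import numpy as np

def retrodesign(true_effect, se, z_crit=1.96, n_sims=100_000, seed=0):
    rng = np.random.default_rng(seed)
    estimates = rng.normal(true_effect, se, n_sims)        # sampling distribution
    significant = np.abs(estimates / se) > z_crit          # two-sided test
    power = significant.mean()
    type_m = np.abs(estimates[significant]).mean() / abs(true_effect)
    type_s = (np.sign(estimates[significant]) != np.sign(true_effect)).mean()
    return power, type_m, type_s

power, type_m, type_s = retrodesign(true_effect=0.1, se=0.2)
print(f"power = {power:.2f}, Type M = {type_m:.1f}, Type S = {type_s:.1%}")
```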