
    Point process modelling of coordinate-based meta-analysis neuroimaging data

    Now over 25 years old, functional magnetic resonance imaging (fMRI) has made significant contributions to our understanding of human brain function. However, some limitations of fMRI studies, including the small sample sizes that are typically employed, raise concerns about the validity of the technique. Recently, there has been growing interest in combining the results of multiple fMRI studies in a meta-analysis. This can potentially address the limitations of single experiments and offer the opportunity to reach safer conclusions. Coordinate-based meta-analyses (CBMA) use the peak activation locations from multiple studies to find areas of consistent activation across experiments. CBMA presents statisticians with many interesting challenges. Several issues have been solved, but many open problems remain. In this thesis, we review the literature on the topic and, after describing the unsolved problems, attempt to address some of the most important. The first problem that we approach is the incorporation of study-specific characteristics in the meta-analysis model, known as meta-regression. We propose a novel meta-regression model based on log-Gaussian Cox processes and develop a parameter estimation algorithm using the Hamiltonian Monte Carlo method. The second problem that we address is the use of CBMA data as a prior in small, underpowered fMRI studies. Building on existing work on the topic, we develop a hierarchical model for fMRI studies that uses previous CBMA findings as a prior for the location of the effects. Finally, we discuss a classical problem of meta-analysis, the file drawer problem, where studies are suppressed from the literature because they fail to report any significant findings. We use truncated models to infer the total number of non-significant studies that are missing from a database. All our methods are tested on both simulated and real data.

    The coordinate-based meta-analysis of neuroimaging data

    Neuroimaging meta-analysis is an area of growing interest in statistics. The special characteristics of neuroimaging data render classical meta-analysis methods inapplicable, and therefore new methods have been developed. We review existing methodologies, explaining the benefits and drawbacks of each. A demonstration on a real dataset of emotion studies is included. We discuss some still-open problems in the field to highlight the need for future research.

    distinct: a novel approach to differential distribution analyses

    We present distinct, a general method for differential analysis of full distributions that is well suited to applications on single-cell data, such as single-cell RNA sequencing and high-dimensional flow or mass cytometry data. High-throughput single-cell data reveal an unprecedented view of cell identity and allow complex variations between conditions to be discovered; nonetheless, most methods for differential expression target differences in the mean and struggle to identify changes where the mean is only marginally affected. distinct is based on a hierarchical non-parametric permutation approach and, by comparing empirical cumulative distribution functions, identifies both differential patterns involving changes in the mean and more subtle variations that do not involve the mean. We performed extensive benchmarks across both simulated and experimental datasets from single-cell RNA sequencing and mass cytometry, where distinct shows favourable performance, identifies more differential patterns than competitors, and displays good control of false positive and false discovery rates. distinct is available as a Bioconductor R package.
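The core idea described above — detecting distributional differences beyond the mean by comparing empirical cumulative distribution functions under permutation — can be sketched in a few lines. This is an illustrative simplification, not the distinct package itself: the function names, and the choice of the two-sample Kolmogorov-Smirnov statistic as the ECDF distance, are assumptions made here.

```python
import numpy as np

def ecdf_distance(x, y):
    """Maximum absolute difference between the ECDFs of two samples
    (the two-sample Kolmogorov-Smirnov statistic)."""
    grid = np.sort(np.concatenate([x, y]))
    Fx = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    Fy = np.searchsorted(np.sort(y), grid, side="right") / len(y)
    return np.max(np.abs(Fx - Fy))

def permutation_pvalue(x, y, n_perm=999, seed=0):
    """Permutation p-value for a difference between full distributions:
    repeatedly relabel the pooled sample and compare ECDF distances."""
    rng = np.random.default_rng(seed)
    observed = ecdf_distance(x, y)
    pooled = np.concatenate([x, y])
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if ecdf_distance(pooled[:len(x)], pooled[len(x):]) >= observed:
            exceed += 1
    return (exceed + 1) / (n_perm + 1)
```

Because the test statistic looks at the whole ECDF, it can flag a variance-only change (e.g. N(0, 1) versus N(0, 4)) that a comparison of means would miss — which is the kind of pattern the abstract says mean-targeted methods struggle with.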

    Bayesian log-Gaussian Cox process regression: applications to meta-analysis of neuroimaging working memory studies

    Working memory (WM) was one of the first cognitive processes studied with functional magnetic resonance imaging. With now over 20 years of studies on WM, each typically with a small sample size, there is a need for meta-analysis to identify the brain regions that are consistently activated by WM tasks and to understand the interstudy variation in those activations. However, current methods in the field cannot fully account for the spatial nature of neuroimaging meta-analysis data or for the heterogeneity observed among WM studies. In this work, we propose a fully Bayesian random-effects meta-regression model based on log-Gaussian Cox processes, which can be used for meta-analysis of neuroimaging studies. An efficient Markov chain Monte Carlo scheme for posterior simulation is presented, which makes use of recent advances in parallel computing on graphics processing units. Application of the proposed model to a real dataset provides valuable insights regarding the function of WM.
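As an illustration of the building block the model above rests on, a log-Gaussian Cox process can be simulated by drawing a Gaussian-process log-intensity and then generating Poisson counts from it. This is a minimal one-dimensional sketch with hypothetical names and parameter values, not the paper's model or its MCMC scheme.

```python
import numpy as np

def simulate_lgcp_1d(n_grid=100, length=1.0, mu=2.0, sigma=1.0, ell=0.1, seed=0):
    """Simulate a log-Gaussian Cox process on [0, length]:
    draw a Gaussian-process log-intensity on a regular grid, then draw
    Poisson counts per cell with mean cell_width * exp(log_intensity)."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, length, n_grid)
    # Squared-exponential covariance; a small jitter on the diagonal
    # keeps the Cholesky factorisation numerically stable.
    d = x[:, None] - x[None, :]
    K = sigma**2 * np.exp(-0.5 * (d / ell) ** 2) + 1e-6 * np.eye(n_grid)
    log_intensity = mu + np.linalg.cholesky(K) @ rng.standard_normal(n_grid)
    cell_width = length / n_grid
    counts = rng.poisson(cell_width * np.exp(log_intensity))
    return x, log_intensity, counts
```

Inference runs in the opposite direction: given observed peak locations, the posterior over the latent log-intensity surface is explored by MCMC — in the paper's case, with GPU-accelerated sampling.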

    Estimating the prevalence of missing experiments in a neuroimaging meta-analysis.

    Coordinate-based meta-analyses (CBMA) allow researchers to combine the results from multiple functional magnetic resonance imaging experiments with the goal of obtaining results that are more likely to generalize. However, the interpretation of CBMA findings can be impaired by the file drawer problem, a type of publication bias that refers to experiments that are carried out but are not published. Using foci per contrast count data from the BrainMap database, we propose a zero-truncated modeling approach that allows us to estimate the prevalence of nonsignificant experiments. We validate our method with simulations and real coordinate data generated from the Human Connectome Project. Application of our method to the data from BrainMap provides evidence for the existence of a file drawer effect, with the rate of missing experiments estimated as at least 6 per 100 reported. The R code that we used is available at https://osf.io/ayhfv/
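A minimal sketch of the zero-truncated idea: fit a zero-truncated Poisson to the observed nonzero counts, then use the fitted rate to estimate how many zero-count (nonsignificant) experiments are missing. The names and the plain-Poisson assumption are simplifications made here; real foci-per-contrast counts are typically overdispersed and would call for a richer model, as in the paper.

```python
import numpy as np

def fit_truncated_poisson(counts):
    """MLE for a zero-truncated Poisson: the truncated mean equals
    lambda / (1 - exp(-lambda)), which is increasing in lambda, so
    solve it against the sample mean by bisection."""
    xbar = float(np.mean(counts))
    lo, hi = 1e-8, 1e3
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid / (1.0 - np.exp(-mid)) < xbar:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def estimate_missing(counts):
    """Estimate the number of unobserved zero-count experiments:
    n_observed * P(X = 0) / P(X > 0) under the fitted Poisson."""
    lam = fit_truncated_poisson(counts)
    p0 = np.exp(-lam)
    return lam, len(counts) * p0 / (1.0 - p0)
```

For example, drawing Poisson(3) counts and discarding the zeros, the fitted rate recovers roughly 3, and the estimated number of missing zero-count draws is close to the number actually discarded.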

    Modelling the impact of the tier system on SARS-CoV-2 transmission in the UK between the first and second national lockdowns.

    Funder: Community Jameel. OBJECTIVE: To measure the effects of the tier system on the COVID-19 pandemic in the UK between the first and second national lockdowns, before the emergence of the B.1.1.7 variant of concern. DESIGN: This is a modelling study combining estimates of the real-time reproduction number Rt (derived from UK case, death and serological survey data) with publicly available data on regional non-pharmaceutical interventions. We fit a Bayesian hierarchical model with latent factors using these quantities to account for broader national trends in addition to subnational effects from tiers. SETTING: The UK at lower tier local authority (LTLA) level. 310 LTLAs were included in the analysis. PRIMARY AND SECONDARY OUTCOME MEASURES: Reduction in the real-time reproduction number Rt. RESULTS: Nationally, transmission increased between July and late September, regional differences notwithstanding. Immediately prior to the introduction of the tier system, Rt averaged 1.3 (0.9-1.6) across LTLAs, but declined to an average of 1.1 (0.86-1.42) 2 weeks later. The decline in transmission was not solely attributable to tiers. Tier 1 had negligible effects. Tiers 2 and 3, respectively, reduced transmission by 6% (5%-7%) and 23% (21%-25%). 288 LTLAs (93%) would have begun to suppress their epidemics if every LTLA had gone into tier 3 by the second national lockdown, whereas only 90 (29%) did so in reality. CONCLUSIONS: The relatively small effect sizes found in this analysis demonstrate that interventions at least as stringent as tier 3 are required to suppress transmission, especially considering more transmissible variants, at least until effective vaccination is widespread or much greater population immunity has amassed.

    Evaluating the population impact of hepatitis C direct acting antiviral treatment as prevention for people who inject drugs (EPIToPe) – a natural experiment (protocol)

    Hepatitis C virus (HCV) is the second largest contributor to liver disease in the UK, with injecting drug use as the main risk factor among the estimated 200 000 people currently infected. Despite effective prevention interventions, chronic HCV prevalence remains around 40% among people who inject drugs (PWID). New direct-acting antiviral (DAA) HCV therapies combine high cure rates (>90%) and short treatment duration (8 to 12 weeks). Theoretical mathematical modelling evidence suggests that HCV treatment scale-up can prevent transmission and substantially reduce HCV prevalence/incidence among PWID. Our primary aim is to generate empirical evidence on the effectiveness of HCV ‘Treatment as Prevention’ (TasP) in PWID.

    HCV peers evaluation


    Errors-in-variables CIM


    Factor models for causal inference with panel data
