
    An effective likelihood-free approximate computing method with statistical inferential guarantees

    Approximate Bayesian computing is a powerful likelihood-free method that has grown increasingly popular since early applications in population genetics. However, complications arise in the theoretical justification for Bayesian inference conducted from this method with a non-sufficient summary statistic. In this paper, we seek to re-frame approximate Bayesian computing within a frequentist context and justify its performance by standards set on the frequency coverage rate. In doing so, we develop a new computational technique called approximate confidence distribution computing, yielding theoretical support for the use of non-sufficient summary statistics in likelihood-free methods. Furthermore, we demonstrate that approximate confidence distribution computing extends the scope of approximate Bayesian computing to include data-dependent priors without damaging inferential integrity. This data-dependent prior can be viewed as an initial 'distribution estimate' of the target parameter, which is updated with the results of the approximate confidence distribution computing method. A general strategy for constructing an appropriate data-dependent prior is also discussed and is shown to often increase computing speed while maintaining statistical inferential guarantees. We supplement the theory with simulation studies illustrating the benefits of the proposed method, namely the potential for broader applications and the increased computing speed compared to standard approximate Bayesian computing methods.
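    The likelihood-free mechanism this abstract builds on can be illustrated with a minimal sketch of a basic approximate Bayesian computing rejection sampler, assuming a toy normal-mean model with the sample mean as summary statistic. All names, the prior, and the tolerance below are illustrative choices, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy observed data: 100 draws from N(theta_true = 2, 1); theta is unknown.
theta_true = 2.0
x_obs = rng.normal(theta_true, 1.0, size=100)
s_obs = x_obs.mean()  # summary statistic (sample mean)

def abc_rejection(s_obs, n_sims=20000, tol=0.05):
    """Basic ABC rejection: draw theta from a prior, simulate data
    (no likelihood evaluation), and keep theta when the simulated
    summary statistic falls within tol of the observed one."""
    accepted = []
    for _ in range(n_sims):
        theta = rng.normal(0.0, 5.0)              # prior draw
        x_sim = rng.normal(theta, 1.0, size=100)  # forward simulation only
        if abs(x_sim.mean() - s_obs) < tol:
            accepted.append(theta)
    return np.array(accepted)

post = abc_rejection(s_obs)
print(post.size, post.mean())
```

    Tightening `tol` trades acceptance rate for accuracy; the paper's concern is what inferential guarantees survive when the chosen summary statistic is not sufficient.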

    An Exploration Of Parameter Duality In Statistical Inference

    Well-known debates among statistical inferential paradigms emerge from conflicting views on the notion of probability. One dominant view understands probability as a representation of sampling variability; another prominent view understands probability as a measure of belief. The former generally describes model parameters as fixed values, in contrast to the latter. We propose that there are actually two versions of a parameter within both paradigms: a fixed unknown value that generated the data, and a random version that describes the uncertainty in estimating that unknown value. An inferential approach based on confidence distributions (CDs) deciphers these seemingly conflicting perspectives on parameters and probabilities.

    Bridging Bayesian, frequentist and fiducial (BFF) inferences using confidence distribution

    Bayesian, frequentist and fiducial (BFF) inferences are much more congruous than they have been perceived historically in the scientific community (cf., Reid and Cox 2015; Kass 2011; Efron 1998). Most practitioners are probably more familiar with the two dominant statistical inferential paradigms, Bayesian inference and frequentist inference. The third, lesser known fiducial inference paradigm was pioneered by R.A. Fisher in an attempt to define an inversion procedure for inference as an alternative to Bayes' theorem. Although each paradigm has its own strengths and limitations subject to their different philosophical underpinnings, this article intends to bridge these different inferential methodologies through the lenses of confidence distribution theory and Monte Carlo simulation procedures. This article attempts to understand how these three distinct paradigms, Bayesian, frequentist, and fiducial inference, can be unified and compared on a foundational level, thereby increasing the range of possible techniques available to both statistical theorists and practitioners across all fields.
    Comment: 30 pages, 5 figures, Handbook on Bayesian Fiducial and Frequentist (BFF) Inference

    Barriers and Enablers of Interdisciplinary Research at Academic Institutions

    This research study examines the factors that motivate and lead to the success of faculty members who conduct interdisciplinary research. Because a comprehensive study of the research patterns of interdisciplinary researchers has not been conducted, the main intent of this research project was to create an instrument that would measure research habits and attitudes. It is important that such research be conducted using individuals who are interdisciplinary researchers as well as disciplinary researchers. One intent of the research study was to provide comparisons between disciplinary researchers and interdisciplinary researchers. Another was to provide university administrators with a better understanding of the factors that motivate and lead to the success of interdisciplinary researchers, so that they could make policies that support and encourage interdisciplinary research at their institutions. A national survey was conducted to test the reliability and validity of a research instrument designed to examine factors that were illuminated in a literature review and focus group study: administrative financial support, graduate training, teamwork and disciplinary affinity. Demographic data were also examined to determine whether there were specific characteristics of interdisciplinary researchers that administrators would benefit from understanding. Purposeful sampling was conducted so that both interdisciplinary and disciplinary researchers were surveyed, allowing comparisons between the two groups. No differences were found between the two types of researchers on the factors that motivate faculty to conduct interdisciplinary research or lead to its success. An important finding is that there were no significant differences between the demographic characteristics of individuals who conduct interdisciplinary research and those who do not. This finding is contrary to the literature. Administrators therefore cannot assume that an individual faculty member will conduct interdisciplinary research based on presumed demographic characteristics such as race, ethnicity, age or gender. An additional important finding is that there was no correlation between whether individuals identified themselves as conducting applied or basic research and how interdisciplinary their research was. This is important because, like demographic characteristics, the literature suggests that interdisciplinary researchers tend to be more applied in their research focus than disciplinary researchers.

    Approximate Confidence Distribution Computing

    Approximate confidence distribution computing (ACDC) offers a new take on the rapidly developing field of likelihood-free inference from within a frequentist framework. The appeal of this computational method for statistical inference hinges upon the concept of a confidence distribution, a special type of estimator defined with respect to the repeated sampling principle. An ACDC method provides frequentist validation for computational inference in problems with unknown or intractable likelihoods. The main theoretical contribution of this work is the identification of a matching condition necessary for frequentist validity of inference from this method. In addition to providing an example of how a modern understanding of confidence distribution theory can be used to connect Bayesian and frequentist inferential paradigms, we present a case to expand the current scope of so-called approximate Bayesian inference to include non-Bayesian inference by targeting a confidence distribution rather than a posterior. The main practical contribution of this work is the development of a data-driven approach to ACDC in both Bayesian and frequentist contexts. The ACDC algorithm is data-driven through the selection of a data-dependent proposal function, the structure of which is quite general and adaptable to many settings. We explore three numerical examples that both verify the theoretical arguments in the development of ACDC and suggest instances in which ACDC outperforms approximate Bayesian computing methods computationally.
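    The computational gain from a data-dependent proposal can be sketched in a toy normal-mean example: candidates are drawn near a crude data-driven estimate instead of from a wide prior, so far fewer simulations are wasted. This is only an assumption-laden illustration of the proposal idea; it omits the matching condition and the adjustments the paper analyzes, so it is not the ACDC algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy observed data and summary statistic (sample mean).
x_obs = rng.normal(2.0, 1.0, size=100)
s_obs = x_obs.mean()

def data_dependent_rejection(s_obs, n_sims=5000, tol=0.05):
    """Rejection sampling with a data-dependent proposal: candidate
    parameters are drawn from a distribution centered at a crude
    estimate (here, the observed sample mean), concentrating the
    simulation effort where acceptance is likely."""
    accepted = []
    for _ in range(n_sims):
        theta = rng.normal(s_obs, 0.5)            # data-dependent proposal
        x_sim = rng.normal(theta, 1.0, size=100)  # forward simulation only
        if abs(x_sim.mean() - s_obs) < tol:
            accepted.append(theta)
    return np.array(accepted)

draws = data_dependent_rejection(s_obs)
print(draws.size)
```

    Compared with drawing candidates from a diffuse prior, the acceptance rate here is far higher for the same tolerance, which is the practical speed-up the abstract refers to.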

    The Mastery Rubric for Statistics and Data Science: promoting coherence and consistency in data science education and training

    Consensus-based publications of both competencies and undergraduate curriculum guidance documents targeting data science instruction for higher education have recently been published. However, recommendations for curriculum features from diverse sources may not result in consistent training across programs. A Mastery Rubric (MR) was developed that prioritizes the promotion and documentation of formal growth, as well as the development of independence, for the 13 knowledge, skills, and abilities requisite for professional practice in statistics and data science (SDS). An MR-driven curriculum can emphasize computation, statistics, or a third discipline in which either would be deployed, or all three can be featured. The MR-SDS supports each of these program structures while promoting consistency with international, consensus-based curricular recommendations for statistics and data science, and allows 'statistics', 'data science', and 'statistics and data science' curricula to consistently educate students with a focus on increasing learners' independence. The Mastery Rubric construct integrates findings from the learning sciences and cognitive and educational psychology to support teachers and students through the learning enterprise. The MR-SDS will support higher education as well as the interests of business, government, and academic workforce development, bringing a consistent framework to address challenges that exist for a domain that is claimed to be both an independent discipline and part of other disciplines, including computer science, engineering, and statistics. The MR-SDS can be used for the development or revision of an evaluable curriculum that will reliably support the preparation of early (e.g., undergraduate degree programs), middle (e.g., upskilling and training programs), and late (e.g., doctoral-level training) practitioners.
    Comment: 40 pages; 2 Tables; 4 Figures. Presented at the Symposium on Data Science & Statistics (SDSS) 202

    A Report on the Application of the European Convention on Human Rights Act 2003 and the European Charter of Fundamental Rights: Evaluation and Review

    This project explores the extent to which the European Convention on Human Rights (the Convention), the European Convention on Human Rights Act 2003 (the ECHR Act), and the European Charter of Fundamental Rights (the Charter) have been utilised before Irish courts and specified tribunals. The report examines rights under these instruments that have been: utilised in argument before the Irish Superior Courts and specified tribunals, with a clear identification of the areas of law at issue and the precise right under the ECHR Act, the Convention and the Charter that has been argued and/or considered; relied upon by domestic courts and tribunals in coming to their decisions; and interpreted in light of Ireland's constitutional framework.

    Towards Statistical Best Practices For Gender And Sex Data

    Suzanne Thornton, Dooti Roy, Stephen Parry, Donna LaLonde, Wendy Martinez, Renee Ellis and David Corliss call for a more inclusive – and informative – approach to collecting data on human gender and sex.

    The effect of intellectual ability on functional activation in a neurodevelopmental disorder: preliminary evidence from multiple fMRI studies in Williams syndrome

    BACKGROUND: Williams syndrome (WS) is a rare genetic disorder caused by the deletion of approximately 25 genes at 7q11.23 that involves mild to moderate intellectual disability (ID). When using functional magnetic resonance imaging (fMRI) to compare individuals with ID to typically developing individuals, there is a possibility that differences in IQ contribute to between-group differences in BOLD signal. If IQ is correlated with BOLD signal, then group-level analyses should adjust for IQ, or else IQ should be matched between groups. If, however, IQ is not correlated with BOLD signal, no such adjustment or criterion for matching (and exclusion) based on IQ is necessary. METHODS: In this study, we aimed to test this hypothesis systematically using four extant fMRI datasets in WS. Participants included 29 adult subjects with WS (17 men) demonstrating a wide range of standardized IQ scores (composite IQ mean = 67, SD = 17.2). We extracted average BOLD activation for both cognitive and task-specific anatomically defined regions of interest (ROIs) in each individual and correlated BOLD with composite IQ scores, verbal IQ scores and non-verbal IQ scores using Spearman rank correlation tests. RESULTS: Of the 312 correlations performed, only six correlations (2%) in four ROIs reached statistical significance at a P value < 0.01, but none survived correction for multiple testing. All six correlations were positive. Therefore, none supports the hypothesis that IQ is negatively correlated with BOLD response. CONCLUSIONS: These data suggest that the inclusion of subjects with below-normal IQ does not introduce a confounding factor, at least for some types of fMRI studies with low cognitive load. By including subjects who are representative of the IQ range for the targeted disorder, findings are more likely to generalize to that population.
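    The analysis pattern described in this abstract (Spearman rank correlations between ROI-averaged BOLD and IQ scores, followed by a multiple-testing check) can be sketched with synthetic stand-in data. The sample size, IQ mean/SD, and test count mirror the abstract, but the BOLD values below are simulated under the null hypothesis of no IQ-BOLD relationship, so this is an illustrative assumption, not the study's data:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

n_subjects = 29   # sample size reported in the abstract
n_tests = 312     # number of ROI x IQ-score correlations reported

# Stand-in composite IQ scores (mean 67, SD 17.2, per the abstract).
iq = rng.normal(67, 17.2, size=n_subjects)

p_values = []
for _ in range(n_tests):
    # Stand-in ROI-averaged BOLD, simulated with no true IQ relationship.
    bold = rng.normal(size=n_subjects)
    rho, p = spearmanr(iq, bold)
    p_values.append(p)
p_values = np.array(p_values)

# Count nominal significance at P < 0.01, then apply the simplest
# (Bonferroni) multiple-testing correction to the smallest p-value.
n_nominal = int((p_values < 0.01).sum())
p_bonferroni = min(1.0, p_values.min() * n_tests)
print(n_nominal, p_bonferroni)
```

    Even under the null, a few of 312 tests are expected to fall below P < 0.01 by chance, which is why the abstract's check that no result survives multiple-testing correction is the decisive step.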