
    Real-time prediction with U.K. monetary aggregates in the presence of model uncertainty

    A popular account of the demise of the U.K.'s monetary targeting regime in the 1980s blames fluctuating predictive relationships between broad money and both inflation and real output growth. Yet ex post policy analysis based on heavily revised data suggests no fluctuations in the predictive content of money. In this paper, we investigate the predictive relationships for inflation and output growth using both real-time and heavily revised data. We consider a large set of recursively estimated vector autoregressive (VAR) and vector error correction (VECM) models. These models differ in terms of lag length and the number of cointegrating relationships. We use Bayesian model averaging (BMA) to demonstrate that real-time monetary policymakers faced considerable model uncertainty. The in-sample predictive content of money fluctuated during the 1980s as a result of data revisions in the presence of model uncertainty. This feature is only apparent with real-time data, as heavily revised data obscure these fluctuations. Out-of-sample predictive evaluations rarely suggest that money matters for either inflation or real output. We conclude that both data revisions and model uncertainty contributed to the demise of the U.K.'s monetary targeting regime.
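The mechanics of Bayesian model averaging over a model set like the one this abstract describes can be illustrated with a minimal sketch. This is not the paper's actual VAR/VECM system; it assumes hypothetical forecasts and BIC values for three candidate specifications, and uses the standard exp(-BIC/2) approximation to the marginal likelihood with equal prior model probabilities:

```python
import math

def bma_weights(bics):
    """Approximate posterior model probabilities from BIC values
    (assumes equal prior probability for each model)."""
    # exp(-0.5 * BIC) approximates each model's marginal likelihood
    logs = [-0.5 * b for b in bics]
    m = max(logs)                        # subtract max for numerical stability
    unnorm = [math.exp(l - m) for l in logs]
    z = sum(unnorm)
    return [u / z for u in unnorm]

def bma_forecast(forecasts, bics):
    """Posterior-probability-weighted average of the models' forecasts."""
    w = bma_weights(bics)
    return sum(wi * fi for wi, fi in zip(w, forecasts))

# Hypothetical inflation forecasts from three candidate VAR/VECM specifications
forecasts = [2.1, 2.6, 3.0]
bics = [100.0, 102.0, 110.0]             # lower BIC -> higher posterior weight
print(bma_weights(bics))
print(bma_forecast(forecasts, bics))
```

The averaged forecast sits between the individual forecasts, pulled toward the better-supported models; when a data revision changes which specification the weights favour, the averaged prediction can shift even though no single model changed, which is the mechanism the abstract attributes to real-time policymaking.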

    Statistical Modelling of Marine Fish Populations and Communities

    Sustainable fisheries management requires an understanding of the relationship between the adult population and the number of juveniles successfully added to that population each year. The process driving larval survival into a given stage of a fish population is highly variable, and this pattern of variability reflects the strength of density-dependent mortality. Marine ecosystems are broadly threatened by climate change and overfishing; the coupling of these two stressors has encouraged scientists to develop end-to-end ecosystem models to study the interactions of organisms at different trophic levels and to understand their behaviours in response to climate change. Our understanding of this important and massively complex system has historically been constrained by the limited amount of data available. Recent technological advances are beginning to address this lack of data, but there is an urgent need for careful statistical methodology to synthesise this information and to make reliable predictions based upon it. In this thesis I developed methodologies specifically designed to interpret the patterns of variability in recruitment by accurately estimating the degree of heteroscedasticity in 90 published stock-recruitment datasets. To better estimate the accuracy of model parameters, I employed a Bayesian hierarchical modelling framework and applied it to multiple sets of fish populations with different model structures. Finally, I developed an end-to-end ecological model that takes into account biotic and abiotic factors, together with data on the fish communities, to assess the organisation of the marine ecosystem and to investigate the potential effects of weather or climate changes. The work developed within this thesis highlights the importance of statistical methods in estimating the patterns of variability and community structure in fish populations, as well as in describing the way organisms and environmental factors interact within an ecosystem.
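A common starting point for the stock-recruitment analysis this abstract describes is the Ricker model, R = aS·exp(-bS) with multiplicative lognormal error, which becomes linear after taking log(R/S). The sketch below is an illustrative fit on simulated data, not the thesis's hierarchical model; the parameter values and noise level are assumptions:

```python
import math, random

def fit_ricker(stock, recruits):
    """Fit log(R/S) = log(a) - b*S by ordinary least squares."""
    x = stock
    y = [math.log(r / s) for r, s in zip(recruits, stock)]
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    b = -sxy / sxx                      # regression slope equals -b
    log_a = ybar + b * xbar             # intercept recovers log(a)
    return math.exp(log_a), b

# Simulate spawning stock and recruits from known parameters (a=4, b=0.002)
random.seed(1)
a_true, b_true = 4.0, 0.002
stock = [random.uniform(100, 1000) for _ in range(200)]
recruits = [a_true * s * math.exp(-b_true * s) * math.exp(random.gauss(0, 0.1))
            for s in stock]
a_hat, b_hat = fit_ricker(stock, recruits)
print(a_hat, b_hat)
```

In a hierarchical extension of the kind the thesis applies across many populations, each stock would get its own (a, b) drawn from shared distributions, and the residual variance itself could be modelled as a function of stock size to capture the heteroscedasticity the abstract emphasises.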

    Analysis of Heterogeneous Data Sources for Veterinary Syndromic Surveillance to Improve Public Health Response and Aid Decision Making

    The standard technique for implementing veterinary syndromic surveillance (VSyS) is the detection of temporal or spatial anomalies in the occurrence of health incidents above a set threshold in an observed population, using a frequentist modelling approach. Most implementations of this technique also require the removal of historical outbreaks from the datasets to construct baselines. Unfortunately, challenges such as data scarcity, delayed reporting of health incidents, and variable data availability from sources make VSyS implementation and alarm interpretation difficult, particularly when quantifying surveillance risk with associated uncertainties. This indicates that alternate or improved techniques are required to interpret alarms while incorporating uncertainties and previous knowledge of health incidents into the model to inform decision-making. Such methods must be capable of retaining historical outbreaks when assessing surveillance risk. In this research, the Stochastic Quantitative Risk Assessment (SQRA) model was proposed and developed for detecting and quantifying the risk of disease outbreaks with associated uncertainties, using a Bayesian probabilistic approach in PyMC3. A systematic and comparative evaluation of the available techniques was used to select the most appropriate method and software packages, based on flexibility, efficiency, usability, the ability to retain historical outbreaks, and the ease of developing a model in Python. Social media (Twitter) datasets were first used to infer a possible disease outbreak incident with associated uncertainties. The inferences were then updated using datasets from clinical and other healthcare sources to reduce uncertainties in the model and to validate the outbreak.
    The proposed SQRA model therefore demonstrates an approach that uses successive refinement of analyses of different data streams to define a changepoint signalling a disease outbreak. The SQRA model was tested and validated to show the method's effectiveness and reliability for differentiating and identifying risk regions, with corresponding changepoints, when interpreting an ongoing disease outbreak incident. This demonstrates that a technique such as the SQRA method may help overcome some of the difficulties identified in VSyS, such as data scarcity, delayed reporting, and variable availability of data from sources, ultimately contributing to science and practice.
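The core changepoint idea can be sketched without PyMC3. The toy model below, an assumption for illustration rather than the SQRA implementation, treats daily report counts as Poisson with a rate that jumps at an unknown day tau; with a conjugate Gamma prior on each segment's rate, the posterior over tau can be computed exactly by enumeration:

```python
import math

def seg_logml(counts, alpha=1.0, beta=1.0):
    """Log marginal likelihood of Poisson counts under a Gamma(alpha, beta)
    prior on the rate (factorial terms dropped; they are constant across splits)."""
    n, s = len(counts), sum(counts)
    return (alpha * math.log(beta) - math.lgamma(alpha)
            + math.lgamma(alpha + s) - (alpha + s) * math.log(beta + n))

def changepoint_posterior(counts):
    """Posterior over the changepoint index tau, with a uniform prior on tau."""
    logp = [seg_logml(counts[:t]) + seg_logml(counts[t:])
            for t in range(1, len(counts))]
    m = max(logp)                        # stabilise before exponentiating
    unnorm = [math.exp(l - m) for l in logp]
    z = sum(unnorm)
    return [u / z for u in unnorm]       # element t-1 is P(tau = t | data)

# Simulated daily report counts: baseline near 2/day jumping to ~8/day at day 30
counts = [2, 1, 3, 2, 2] * 6 + [8, 7, 9, 8, 8] * 4
post = changepoint_posterior(counts)
tau_hat = post.index(max(post)) + 1
print(tau_hat)
```

Because the output is a full posterior rather than a binary alarm, historical outbreaks can stay in the data and the uncertainty around tau carries directly into risk statements, which mirrors the motivation given above for moving away from threshold-based baselines.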

    Good, great, or lucky? Screening for firms with sustained superior performance using heavy-tailed priors

    This paper examines historical patterns of ROA (return on assets) for a cohort of 53,038 publicly traded firms across 93 countries, measured over the past 45 years. Our goal is to screen for firms whose ROA trajectories suggest that they have systematically outperformed their peer groups over time. Such a project faces at least three statistical difficulties: adjustment for relevant covariates, massive multiplicity, and longitudinal dependence. We conclude that, once these difficulties are taken into account, demonstrably superior performance appears to be quite rare. We compare our findings with other recent management studies on the same subject, and with the popular literature on corporate success. Our methodological contribution is to propose a new class of priors for use in large-scale simultaneous testing. These priors are based on the hypergeometric inverted-beta family, and have two main attractive features: heavy tails and computational tractability. The family is a four-parameter generalization of the normal/inverted-beta prior, and is the natural conjugate prior for shrinkage coefficients in a hierarchical normal model. Our results emphasize the usefulness of these heavy-tailed priors in large multiple-testing problems, as they have a mild rate of tail decay in the marginal likelihood m(y), a property long recognized to be important in testing. Published in the Annals of Applied Statistics (http://dx.doi.org/10.1214/11-AOAS512) by the Institute of Mathematical Statistics (http://www.imstat.org/aoas/).
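Why tail decay of the marginal likelihood matters can be shown numerically. The sketch below compares m(y) under a light-tailed normal prior, which has a closed form, against a heavy-tailed Cauchy prior computed by quadrature; it is a generic illustration of the heavy-tails principle, not the paper's hypergeometric inverted-beta family, and the prior scales are assumptions:

```python
import math

def normal_pdf(x, mu=0.0, sd=1.0):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def cauchy_pdf(x, scale=1.0):
    return 1.0 / (math.pi * scale * (1 + (x / scale) ** 2))

def m_normal(y, tau=1.0):
    """Closed-form marginal m(y) = N(y; 0, 1 + tau^2) under a normal prior."""
    return normal_pdf(y, 0.0, math.sqrt(1 + tau ** 2))

def m_cauchy(y, lo=-60.0, hi=60.0, n=20001):
    """Marginal m(y) under a standard Cauchy prior, by trapezoidal quadrature."""
    h = (hi - lo) / (n - 1)
    total = 0.0
    for i in range(n):
        theta = lo + i * h
        w = 0.5 if i in (0, n - 1) else 1.0
        total += w * normal_pdf(y, theta) * cauchy_pdf(theta)
    return total * h

for y in (2.0, 5.0, 10.0):
    print(y, m_cauchy(y) / m_normal(y))
```

The ratio m_cauchy(y)/m_normal(y) grows rapidly with |y|: the normal prior's marginal decays exponentially while the Cauchy's decays only polynomially, so a large observed effect is not reflexively shrunk toward zero. This is the behaviour that makes heavy-tailed priors attractive for screening rare genuine outperformers among tens of thousands of firms.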

    Bayesian paired comparison with the bpcs package

    This article introduces the bpcs R package (Bayesian Paired Comparison in Stan) and the statistical models implemented in the package. The package aims to facilitate the use of Bayesian models for paired comparison data in behavioral research. Bayesian analysis of paired comparison data allows parameter estimation even in conditions where the maximum likelihood estimate does not exist, allows easy extension of paired comparison models, provides straightforward interpretation of the results with credible intervals, has better control of type I error, provides more robust evidence towards the null hypothesis, allows propagation of uncertainties, incorporates prior information, and performs well when handling models with many parameters and latent variables. The bpcs package provides a consistent interface for R users and several functions to evaluate the posterior distribution of all parameters, to estimate the posterior distribution of any contest between items, and to obtain the posterior distribution of the ranks. Three reanalyses of recent studies that used the frequentist Bradley–Terry model are presented. These reanalyses are conducted with the Bayesian models of the bpcs package, and all the code used to fit the models and to generate the figures and tables is available in the online appendix.
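The Bayesian Bradley–Terry idea underlying bpcs can be sketched outside of R and Stan. The toy example below, an assumption for illustration rather than the bpcs implementation, puts a normal prior on the ability difference d between two items and computes its posterior on a grid, including the point illustrated by the abstract: a posterior exists even in regimes where a maximum likelihood fit can be degenerate:

```python
import math

def bt_posterior(wins, losses, sigma=2.0):
    """Grid posterior for the ability difference d = lambda_1 - lambda_2 under
    a Bradley-Terry likelihood and a N(0, sigma^2) prior on d."""
    grid = [i * 0.01 for i in range(-600, 601)]
    logpost = []
    for d in grid:
        p = 1.0 / (1.0 + math.exp(-d))       # P(item 1 wins one contest)
        loglik = wins * math.log(p) + losses * math.log(1.0 - p)
        logprior = -0.5 * (d / sigma) ** 2
        logpost.append(loglik + logprior)
    m = max(logpost)                          # stabilise before exponentiating
    w = [math.exp(l - m) for l in logpost]
    z = sum(w)
    return grid, [wi / z for wi in w]

# Hypothetical data: item 1 beats item 2 in 14 of 20 paired comparisons
grid, post = bt_posterior(wins=14, losses=6)
p_stronger = sum(p for d, p in zip(grid, post) if d > 0)  # P(item 1 stronger)
d_mean = sum(d * p for d, p in zip(grid, post))           # posterior mean of d
print(round(p_stronger, 3), round(d_mean, 3))
```

With wins=20, losses=0 the MLE of d diverges to infinity, but the prior keeps the posterior proper, which is the "estimation even when the maximum likelihood estimate does not exist" property the abstract highlights; posterior rank probabilities follow the same pattern with more than two items.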

    Drivers of divergent assessments of bisphenol-A hazards to semen quality by various European agencies, regulators and scientists

    The downward revision of the bisphenol A (BPA) Health-based Guidance Value (HBGV) by the European Food Safety Authority (EFSA) has led to disagreements with other regulatory agencies, among them the German Federal Institute for Risk Assessment (BfR). The BfR has recently published an alternative Tolerable Daily Intake (TDI), 1000 times higher than the EFSA HBGV of 0.2 ng/kg/d. While the EFSA value is defined in relation to immunotoxicity, the BfR alternative TDI is based on declines in sperm counts resulting from exposures in adulthood. Earlier, we had used semen quality deteriorations to estimate a BPA Reference Dose (RfD) of 3 ng/kg/d for use in mixture risk assessments of male reproductive health. We derived this estimate from animal studies of gestational BPA exposures, which both EFSA and BfR viewed as irrelevant for human hazard characterisations. Here, we identify factors that drive these diverging views. We find that the fragmented, endpoint-oriented study evaluation system used by EFSA and BfR, with its emphasis on data that can support dose-response analyses, has obscured the overall BPA effect pattern relevant to male reproductive effects. This has led to a disregard for the effects of gestational BPA exposures. We also identify problems with the study evaluation schemes used by EFSA and BfR which lead to the omission of entire streams of evidence from consideration. The main driver of the diverging views of EFSA and BfR is the refusal by BfR to accept immunotoxic effects as the basis for establishing an HBGV. We find that switching from immunotoxicity to declines in semen quality as the basis for deriving a BPA TDI, by deterministic or probabilistic approaches, produces values in the range of 2.4-6.6 ng/kg/d, closer to the present EFSA HBGV of 0.2 ng/kg/d than to the BfR TDI of 200 ng/kg/d. The proposed alternative BfR value is the result of value judgements which erred on the side of disregarding evidence that could have supported a lower TDI. The choices made in selecting key studies and methods for dose-response analyses produced a TDI that comes close to doses shown to produce effects on semen quality in animal studies and in human studies of adult BPA exposures.
