
    Discussion of Likelihood Inference for Models with Unobservables: Another View

    Discussion of "Likelihood Inference for Models with Unobservables: Another View" by Youngjo Lee and John A. Nelder [arXiv:1010.0303]Comment: Published in at http://dx.doi.org/10.1214/09-STS277A the Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org

    Estimating Stellar Parameters from Spectra using a Hierarchical Bayesian Approach

    A method is developed for fitting theoretically predicted astronomical spectra to an observed spectrum. Using a hierarchical Bayesian principle, the method takes both systematic and statistical measurement errors into account, which has not been done before in the astronomical literature. The goal is to estimate fundamental stellar parameters and their associated uncertainties. The non-availability of a convenient deterministic relation between stellar parameters and the observed spectrum, combined with the computational complexities this entails, necessitates the curtailment of the continuous Bayesian model to a reduced model based on a grid of synthetic spectra. A criterion for model selection based on the so-called predictive squared error loss function is proposed, together with a measure for the goodness-of-fit between observed and synthetic spectra. The proposed method is applied to the infrared 2.38-2.60 μm ISO-SWS data (Infrared Space Observatory - Short Wavelength Spectrometer) of the star α Bootis, yielding estimates for the stellar parameters: effective temperature Teff = 4230 ± 83 K, gravity log g = 1.50 ± 0.15 dex, and metallicity [Fe/H] = -0.30 ± 0.21 dex. Comment: 15 pages, 8 figures, 5 tables. Accepted for publication in MNRAS.
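    The reduced model works with a discrete grid of synthetic spectra. The following is a minimal sketch of that idea, assuming a hypothetical pre-computed grid, a Gaussian measurement-error model and a flat prior over the grid; it is not the authors' implementation.

```python
# Minimal sketch (not the authors' code): grid-based Bayesian estimation of
# stellar parameters, assuming a hypothetical grid of synthetic spectra, a
# Gaussian error model, and a flat prior over the grid points.
import numpy as np

def grid_posterior(obs_flux, obs_sigma, grid_params, grid_fluxes):
    """obs_flux, obs_sigma : (n_wavelengths,) observed spectrum and errors
    grid_params            : (n_models, 3) columns Teff, log g, [Fe/H]
    grid_fluxes            : (n_models, n_wavelengths) synthetic spectra"""
    # Gaussian log-likelihood of each synthetic spectrum given the observation
    resid = (grid_fluxes - obs_flux) / obs_sigma
    loglike = -0.5 * np.sum(resid**2, axis=1)
    # Flat prior over the grid: posterior weights proportional to the likelihood
    w = np.exp(loglike - loglike.max())
    w /= w.sum()
    # Posterior mean and standard deviation of each stellar parameter
    mean = w @ grid_params
    sd = np.sqrt(w @ (grid_params - mean) ** 2)
    return mean, sd
```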

    Robust benchmark dose determination based on profile score methods.

    We investigate several methods commonly used to obtain a benchmark dose and show that those based on full-likelihood or profile-likelihood methods can have severe shortcomings. We propose two new profile-likelihood-based approaches which overcome these problems. Another contribution is the extension of benchmark dose determination to non-full-likelihood models, such as quasi-likelihood and generalized estimating equations, which are widely used in settings such as developmental toxicity where clustered data are encountered. This widening of the scope of application is made possible by the use of (robust) score statistics. The benchmark dose methods are applied to a data set from a developmental toxicity study. Keywords: clustered binary data; generalized estimating equations; likelihood ratio; profile likelihood; score statistic; toxicology; quantitative risk assessment; longitudinal data analysis; generalized linear models; developmental toxicity; likelihood; tests; misspecification; outcomes.
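    As an illustration of the profile-likelihood side of this problem, the sketch below computes a benchmark dose (BMD) and a profile-likelihood lower bound (BMDL) for a simple logistic dose-response model on hypothetical quantal data; it is an assumption-laden simplification, not the paper's robust score procedure.

```python
# Minimal sketch (hypothetical data, not the paper's method): BMD for a logistic
# dose-response model and a profile-likelihood BMDL, using the extra-risk
# definition with benchmark response BMR.
import numpy as np
from scipy.optimize import minimize, minimize_scalar
from scipy.stats import chi2

def neg_loglik(params, dose, y, n):
    a, b = params
    p = 1.0 / (1.0 + np.exp(-(a + b * dose)))
    p = np.clip(p, 1e-10, 1 - 1e-10)
    return -np.sum(y * np.log(p) + (n - y) * np.log(1 - p))

def neg_profile(bmd, dose, y, n, bmr):
    # For a fixed BMD, the slope is determined by the background a and the
    # extra-risk constraint, so we only maximise the likelihood over a.
    def nll_a(a):
        p0 = 1.0 / (1.0 + np.exp(-a))
        p_star = p0 + bmr * (1 - p0)                 # extra risk BMR at dose = BMD
        b = (np.log(p_star / (1 - p_star)) - a) / bmd
        return neg_loglik((a, b), dose, y, n)
    return minimize_scalar(nll_a, bounds=(-10, 10), method="bounded").fun

# Hypothetical quantal data: responders y out of n animals per dose group
dose = np.array([0.0, 25.0, 50.0, 100.0])
y    = np.array([1, 4, 10, 19])
n    = np.array([20, 20, 20, 20])
bmr  = 0.10

fit = minimize(neg_loglik, x0=(-2.0, 0.02), args=(dose, y, n), method="Nelder-Mead")
a_hat, b_hat = fit.x
p0 = 1 / (1 + np.exp(-a_hat))
p_star = p0 + bmr * (1 - p0)
bmd_hat = (np.log(p_star / (1 - p_star)) - a_hat) / b_hat

# BMDL: smallest dose whose profile likelihood stays within the chi-square cutoff
cut = fit.fun + 0.5 * chi2.ppf(0.90, df=1)           # one-sided 95% bound
grid = np.linspace(0.1 * bmd_hat, bmd_hat, 200)
bmdl = next(d for d in grid if neg_profile(d, dose, y, n, bmr) <= cut)
print(f"BMD = {bmd_hat:.1f}, BMDL = {bmdl:.1f}")
```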

    Hierarchical models with normal and conjugate random effects : a review

    Molenberghs, Verbeke, and Demétrio (2007) and Molenberghs et al. (2010) proposed a general framework to model hierarchical data subject to within-unit correlation and/or overdispersion. The framework extends classical overdispersion models as well as generalized linear mixed models. Subsequent work has examined various aspects that lead to the formulation of several extensions. A unified treatment of the model framework and key extensions is provided. Particular extensions discussed are: explicit calculation of correlation and other moment-based functions, joint modelling of several hierarchical sequences, versions with direct marginally interpretable parameters, zero-inflation in the count case, and influence diagnostics. The basic models and several extensions are illustrated using a set of key examples, one per data type (count, binary, multinomial, ordinal, and time-to-event).
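    A minimal simulation sketch of the combined-model idea for counts follows, with a gamma (conjugate) overdispersion effect and a normal random intercept acting together; all parameter values are illustrative assumptions, not taken from the review.

```python
# Minimal simulation sketch of a combined model for counts: a Poisson outcome
# with a gamma (conjugate) overdispersion random effect and a normal random
# intercept shared within each cluster. Parameter values are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n_clusters, n_per = 100, 5
beta0, beta1 = 1.0, 0.5            # fixed effects
sigma_b = 0.7                      # sd of the normal random intercept
alpha = 2.0                        # gamma shape; scale 1/alpha keeps the mean at 1

x = rng.uniform(0, 1, size=(n_clusters, n_per))
b = rng.normal(0, sigma_b, size=(n_clusters, 1))               # normal random effect
theta = rng.gamma(alpha, 1 / alpha, size=(n_clusters, n_per))  # conjugate effect
mu = theta * np.exp(beta0 + beta1 * x + b)                     # combined conditional mean
y = rng.poisson(mu)

# Overdispersion relative to a plain Poisson shows up in the variance/mean ratio
print(y.var() / y.mean())
```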

    Bayesian models for weighted data with missing values: a bootstrap approach

    Many data sets, especially from surveys, are made available to users with weights. Where the derivation of such weights is known, this information can often be incorporated in the user's substantive model (model of interest). When the derivation is unknown, the established procedure is to carry out a weighted analysis. However, with non-trivial proportions of missing data this is inefficient and may be biased when data are not missing at random. Bayesian methods provide a natural framework for the imputation of missing data, but it is unclear how to handle the weights. We propose a weighted bootstrap Markov chain Monte Carlo algorithm for estimation and inference. A simulation study shows that it has good inferential properties. We illustrate its utility with an analysis of data from the Millennium Cohort Study.
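    The sketch below gives one possible reading of the weighted bootstrap idea on synthetic data: draw bootstrap replicates with selection probabilities proportional to the weights, impute within each replicate, and pool the replicate estimates. It is not the authors' algorithm, and the imputation step here is a simple stochastic regression rather than a full Bayesian model.

```python
# Minimal sketch (illustrative, not the proposed algorithm): weighted bootstrap
# replicates combined with within-replicate imputation of a missing outcome.
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)
w = rng.uniform(0.5, 2.0, size=n)              # hypothetical survey weights
y[rng.random(n) < 0.2] = np.nan                # impose 20% missing outcomes

B, slopes = 200, []
for _ in range(B):
    idx = rng.choice(n, size=n, replace=True, p=w / w.sum())   # weighted bootstrap draw
    xb, yb = x[idx], y[idx].copy()
    miss = np.isnan(yb)
    # Simple stochastic regression imputation within the replicate
    coef = np.polyfit(xb[~miss], yb[~miss], 1)
    resid_sd = np.std(yb[~miss] - np.polyval(coef, xb[~miss]))
    yb[miss] = np.polyval(coef, xb[miss]) + rng.normal(0, resid_sd, miss.sum())
    slopes.append(np.polyfit(xb, yb, 1)[0])

print(np.mean(slopes), np.std(slopes))         # pooled point estimate and spread
```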

    Data Representativeness: Issues and Solutions

    In its control programmes on maximum residue level compliance and exposure assessment, EFSA requires the participating countries to submit results from specified numbers of food item samples analyzed in those countries. These data are used to obtain estimates such as the proportion of samples exceeding the maximum residue limits, and the mean and maximum residue concentration per food item for exposure assessment. An important consideration is the design and analysis of the programmes. In this report, we combine elements of survey sampling methodology and statistical modeling into a benchmark framework for the programmes, starting from the translation of research questions into statistical problems through to the statistical analysis and interpretation. Particular focus is placed on issues that could affect the representativeness of the data, and remedial procedures are proposed. For example, in the absence of information on the sampling design, a sensitivity analysis across a range of designs is proposed. In addition, weighted generalized linear mixed models, and generalized linear mixed models combining both conjugate and normal random effects, are proposed to address selection bias. Likelihood-based analysis methods are also proposed to address missing and censored data problems. Suggestions for improvements in the design and analysis of the programmes are also identified and discussed; for instance, stratified sampling methodology is proposed for determining both the total number of samples and their allocation to the participating countries (see the allocation sketch below). Throughout the report, we propose statistical analysis models that properly take into account the hierarchical (and thus correlated) structure in which the data are collected.
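    A minimal sketch of proportional stratified allocation follows, as one simple way to split a fixed number of samples over participating countries; the population figures are purely illustrative and not EFSA data.

```python
# Minimal sketch of proportional stratified allocation: sample sizes per country
# proportional to stratum size, with the rounding remainder given to one stratum.
import numpy as np

population = {"A": 8_000_000, "B": 60_000_000, "C": 17_000_000}   # hypothetical
total_samples = 1_000

sizes = np.array(list(population.values()), dtype=float)
alloc = np.floor(total_samples * sizes / sizes.sum()).astype(int)
alloc[0] += total_samples - alloc.sum()        # assign the rounding remainder
print(dict(zip(population, alloc)))
```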

    Modelling the neonatal system: A joint analysis of length of stay and patient pathways

    © 2019 John Wiley & Sons, Ltd. This is the peer-reviewed version of the following article: Modelling the neonatal system: A joint analysis of length of stay and patient pathways, which was published on 27/11/2019 in final form at https://doi.org/10.1002/hpm.2928. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for Use of Self-Archived Versions. In the United Kingdom, one in seven babies requires specialist neonatal care after birth, with a noticeable increase in demand. Coupled with budgeting constraints and a lack of investment, this means that neonatal units are struggling, which will inevitably have an impact on babies' length of stay (LoS) and the performance of the service. Models have previously been developed to capture individual babies' pathways in order to investigate the longitudinal cycle of care. However, no models have been developed for the joint analysis of LoS and babies' pathways. LoS at each stage of care is a critical driver of both the clinical outcomes and the economic performance of the neonatal system. Using the generalized linear mixed modelling approach, extended to accommodate multiple outcomes, the association between a neonate's pathway to discharge and LoS is examined. Using data on 1002 neonates, we find a strong positive association between a baby's pathway and total LoS, suggesting that discharge policies need to be looked at more carefully. A novel statistical approach is developed that examines the association between key outcomes and how it evolves over time. Its applicability can be extended to other types of long-term care or diseases, such as heart failure and stroke.
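    To illustrate the kind of joint structure involved, the sketch below simulates a shared baby-level random effect that induces association between a continuous log length of stay and a binary pathway indicator; the parameter values and two-outcome setup are illustrative assumptions, not the fitted model from the paper.

```python
# Minimal sketch (illustrative assumptions only): a shared random effect induces
# association between two outcomes per baby, log total LoS and a binary
# indicator for a more complex pathway to discharge.
import numpy as np

rng = np.random.default_rng(7)
n = 1002
u = rng.normal(0, 1, n)                          # shared baby-level random effect
log_los = 2.0 + 0.8 * u + rng.normal(0, 0.5, n)  # log total length of stay
p_complex = 1 / (1 + np.exp(-(-0.5 + 1.2 * u)))  # probability of a complex pathway
pathway = rng.binomial(1, p_complex)

# Positive association between pathway and LoS, driven entirely by u
print(np.corrcoef(log_los, pathway)[0, 1])
```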

    Investigating the missing data mechanism in quality of life outcomes: a comparison of approaches

    Background: Missing data are classified as missing completely at random (MCAR), missing at random (MAR) or missing not at random (MNAR). Knowing the mechanism is useful in identifying the most appropriate analysis. The first aim was to compare different methods for identifying the missing data mechanism to determine whether they gave consistent conclusions. The second aim was to investigate whether the reminder-response data can be utilised to help identify the missing data mechanism. Methods: Five clinical trial datasets that employed a reminder system at follow-up were used. Some quality of life questionnaires were initially missing, but were later recovered through reminders. Four methods of determining the missing data mechanism were applied. Two response data scenarios were considered: firstly, immediate data only; secondly, all observed responses (including reminder-response). Results: In three of the five trials the hypothesis tests found evidence against the MCAR assumption. Logistic regression suggested MAR, but was able to use the reminder-collected data to highlight potential MNAR data in two trials. Conclusion: The four methods were consistent in determining the missingness mechanism. One hypothesis test was preferred as it is applicable with intermittent missingness. Some inconsistencies between the two data scenarios were found; ignoring the reminder data could potentially give a distorted view of the missingness mechanism, while utilising reminder data allowed the possibility of MNAR to be considered. Funding: The Chief Scientist Office of the Scottish Government Health Directorate, Research Training Fellowship (CZF/1/31).
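    One of the simpler checks described here can be sketched directly: regress a missingness indicator on observed baseline covariates, where significant predictors argue against MCAR and are consistent with MAR (MNAR cannot be ruled out from observed data alone). The data and variable names below are hypothetical, not from the five trials.

```python
# Minimal sketch (hypothetical data): logistic regression of the missingness
# indicator on observed covariates as an informal check of the MCAR assumption.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 400
age = rng.normal(50, 10, n)
baseline_qol = rng.normal(60, 15, n)
# Missingness here depends on an observed covariate, i.e. MAR rather than MCAR
missing = rng.binomial(1, 1 / (1 + np.exp(-(-4 + 0.05 * age))))

X = sm.add_constant(np.column_stack([age, baseline_qol]))
print(sm.Logit(missing, X).fit(disp=0).summary())
```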

    Detection of gravity modes in the massive binary V380 Cyg from Kepler space-based photometry and high-resolution spectroscopy

    We report the discovery of low-amplitude gravity-mode oscillations in the massive binary star V380 Cyg, from 180 d of Kepler custom-aperture space photometry and 5 months of high-resolution, high signal-to-noise spectroscopy. The new data are of unprecedented quality and allowed us to improve the orbital and fundamental parameters for this binary. The orbital solution was subtracted from the photometric data and led to the detection of periodic intrinsic variability, with some frequencies that are multiples of the orbital frequency and others that are not. Spectral disentangling allowed the detection of line-profile variability in the primary. With our discovery of intrinsic variability interpreted as gravity-mode oscillations, V380 Cyg becomes an important laboratory for future seismic tuning of the near-core physics in massive B-type stars. Comment: 5 pages, 4 figures, 2 tables. Accepted for publication in MNRAS Letters.
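    The analysis step described here, subtracting the orbital solution and searching the residuals for periodic variability, can be sketched on synthetic data as follows; the time base, frequencies and amplitudes are invented for illustration and this is not the Kepler reduction pipeline.

```python
# Minimal sketch (synthetic data): remove an orbital model from a light curve and
# search the residuals for periodic intrinsic variability with a Lomb-Scargle
# periodogram, checking whether the strongest peak is an orbital harmonic.
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(4)
t = np.sort(rng.uniform(0, 180, 5000))              # 180 d of photometry (days)
f_orb = 1 / 12.4                                    # hypothetical orbital frequency (1/d)
orbital = 0.02 * np.sin(2 * np.pi * f_orb * t)
gmode = 0.001 * np.sin(2 * np.pi * 0.85 * t)        # low-amplitude g-mode signal
flux = orbital + gmode + rng.normal(0, 0.0005, t.size)

residuals = flux - orbital                          # orbital solution subtracted
freq, power = LombScargle(t, residuals).autopower(maximum_frequency=5.0)
peak = freq[np.argmax(power)]
harmonic = np.isclose(peak % f_orb, 0, atol=0.01) or np.isclose(peak % f_orb, f_orb, atol=0.01)
print(f"strongest residual frequency: {peak:.3f} / d (orbital harmonic: {harmonic})")
```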