
    Coherent Predictions of Low Count Time Series

    The application of traditional forecasting methods to discrete count data yields forecasts that are non-coherent. That is, such methods produce non-integer point and interval predictions, which violate the restrictions on the sample space of the integer variable. This paper presents a methodology for producing coherent forecasts of low count time series. The forecasts are based on estimates of the p-step-ahead predictive mass functions for a family of distributions nested in the integer-valued first-order autoregressive (INAR(1)) class. The predictive mass functions are constructed from convolutions of the unobserved components of the model, with uncertainty associated with both parameter values and model specification fully incorporated. The methodology is used to analyse two sets of Canadian wage loss claims data.

    Keywords: Forecasting; Discrete Time Series; INAR(1); Bayesian Prediction; Bayesian Model Averaging.
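    The coherence idea in this abstract can be illustrated with a minimal sketch: an INAR(1) process replaces scalar multiplication with binomial thinning, so simulated paths (and hence forecasts) stay on the non-negative integers. The Poisson innovations and the Monte Carlo approximation of the p-step-ahead predictive mass below are illustrative assumptions, not the paper's exact convolution-based construction, and they condition on fixed parameters rather than averaging over parameter and model uncertainty as the paper does.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_inar1(n, alpha, lam, x0=0):
        """Simulate a Poisson INAR(1) series X_t = alpha ∘ X_{t-1} + eps_t,
        where ∘ denotes binomial thinning and eps_t ~ Poisson(lam)."""
        x = np.empty(n, dtype=int)
        prev = x0
        for t in range(n):
            # Binomial thinning keeps the state integer-valued.
            prev = rng.binomial(prev, alpha) + rng.poisson(lam)
            x[t] = prev
        return x

    def predictive_mass(x_last, alpha, lam, steps, draws=100_000, support=20):
        """Monte Carlo estimate of the p-step-ahead predictive mass function
        conditional on the last observed count: P(X_{T+steps} = k) for
        k = 0, ..., support-1."""
        cur = np.full(draws, x_last)
        for _ in range(steps):
            cur = rng.binomial(cur, alpha) + rng.poisson(lam, size=draws)
        return np.bincount(cur, minlength=support)[:support] / draws

    series = simulate_inar1(200, alpha=0.5, lam=1.0)
    pmf = predictive_mass(series[-1], alpha=0.5, lam=1.0, steps=3)
    # pmf is a coherent forecast: a probability distribution over the
    # integers, so any point or interval summary derived from it respects
    # the count-valued sample space.
    ```

    Because the forecast is a full mass function rather than a real-valued point prediction, integer-valued point forecasts (e.g. the predictive mode) and exact-coverage prediction intervals follow directly.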

    Relaxation Penalties and Priors for Plausible Modeling of Nonidentified Bias Sources

    In designed experiments and surveys, known laws or design features provide checks on the most relevant aspects of a model and identify the target parameters. In contrast, in most observational studies in the health and social sciences, the primary study data do not identify and may not even bound target parameters. Discrepancies between target and analogous identified parameters (biases) are then of paramount concern, which forces a major shift in modeling strategies. Conventional approaches are based on conditional testing of equality constraints, which correspond to implausible point-mass priors. When these constraints are not identified by available data, however, no such testing is possible. In response, implausible constraints can be relaxed into penalty functions derived from plausible prior distributions. The resulting models can be fit within familiar full or partial likelihood frameworks. The absence of identification renders all analyses part of a sensitivity analysis. In this view, results from single models are merely examples of what might be plausibly inferred. Nonetheless, just one plausible inference may suffice to demonstrate inherent limitations of the data. Points are illustrated with misclassified data from a study of sudden infant death syndrome. Extensions to confounding, selection bias and more complex data structures are outlined.

    Comment: Published at http://dx.doi.org/10.1214/09-STS291 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org)
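    The contrast between point-mass priors and plausible priors can be sketched with a standard misclassification correction (the general setting the abstract mentions; the numbers and Beta priors below are illustrative assumptions, not the study's data or the paper's specific penalty functions). Fixing sensitivity and specificity at single values is a point-mass prior; drawing them from plausible priors and propagating the draws turns the analysis into the kind of sensitivity analysis the abstract describes.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def corrected_prevalence(p_obs, se, sp):
        """Back-correct an observed prevalence for misclassification:
        p_obs = se*p + (1 - sp)*(1 - p)  =>  p = (p_obs + sp - 1)/(se + sp - 1)."""
        return (p_obs + sp - 1.0) / (se + sp - 1.0)

    # A point-mass prior fixes se and sp exactly; relaxing it means drawing
    # them from plausible priors instead. These Beta hyperparameters are
    # hypothetical choices for illustration only.
    draws = 50_000
    se = rng.beta(40, 4, draws)   # sensitivity centred near 0.91
    sp = rng.beta(60, 3, draws)   # specificity centred near 0.95
    p = corrected_prevalence(0.10, se, sp)
    p = p[(p > 0) & (p < 1)]      # drop draws implying impossible prevalences

    # The spread of the corrected prevalence across prior draws is the
    # sensitivity-analysis summary; a single corrected value would hide it.
    lo, hi = np.percentile(p, [2.5, 97.5])
    ```

    The resulting interval reflects prior uncertainty about the nonidentified bias parameters, whereas the conventional approach (conditioning on fixed se and sp) would report a single corrected value with misleadingly complete confidence.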