
    Probabilistic methods for seasonal forecasting in a changing climate: Cox-type regression models

    For climate risk management, cumulative distribution functions (CDFs) are an important source of information. They are ideally suited to comparing probabilistic forecasts of primary (e.g. rainfall) or secondary data (e.g. crop yields). Summarised as CDFs, such forecasts allow an easy quantitative assessment of possible alternative actions. Although the degree of uncertainty associated with CDF estimation could influence decisions, such information is rarely provided. Hence, we propose Cox-type regression models (CRMs) as a statistical framework for making inferences on CDFs in climate science. CRMs were designed for modelling probability distributions rather than just mean or median values. This makes the approach appealing for risk assessments, where probabilities of extremes are often more informative than measures of central tendency. CRMs are semi-parametric approaches originally designed for modelling risks arising from time-to-event data. Here we extend this original concept to other positive variables of interest beyond the time domain. We also provide tools for estimating CDFs and surrounding uncertainty envelopes from empirical data. These statistical techniques intrinsically account for non-stationarities in time series that might result from climate change. This feature makes CRMs attractive candidates for investigating the feasibility of developing rigorous global circulation model (GCM)-CRM interfaces for the provision of user-relevant forecasts. To demonstrate the applicability of CRMs, we present two examples of El Niño/Southern Oscillation (ENSO)-based forecasts: the onset date of the wet season (Cairns, Australia) and total wet season rainfall (Quixeramobim, Brazil). This study emphasises the methodological aspects of CRMs rather than discussing the merits or limitations of the ENSO-based predictor.
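
    The estimation step can be sketched in Python. The snippet below uses the lifelines library's Cox proportional hazards fitter as a stand-in for the authors' implementation; the column names, the coding of ENSO phase and the synthetic onset dates are illustrative assumptions, not the study's data.

```python
# Sketch: estimate the CDF of wet-season onset date under a Cox model,
# with ENSO phase as covariate. Data and column names are hypothetical.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(42)
n = 200
enso = rng.choice([-1, 0, 1], size=n)                     # La Nina / neutral / El Nino (coded)
onset = rng.weibull(2.0, size=n) * 60 + 10 * enso + 90    # days after 1 Oct (synthetic)
df = pd.DataFrame({
    "onset_day": onset,
    "enso_phase": enso,
    "observed": 1,                                        # all onsets observed (no censoring)
})

cph = CoxPHFitter()
cph.fit(df, duration_col="onset_day", event_col="observed")

# Survival function S(t|x) -> CDF F(t|x) = 1 - S(t|x) for each ENSO phase
profiles = pd.DataFrame({"enso_phase": [-1, 0, 1]})
cdf = 1.0 - cph.predict_survival_function(profiles)       # rows: onset day, cols: phase
print(cdf.head())
```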

    Psychiatric illness predicts poor outcome after surgery for hip fracture: a prospective cohort study

    Background. Hip fracture is common in the elderly. Previous studies suggest that psychiatric illness is common and predicts poor outcome, but they have methodological weaknesses. Further studies are required to address this important issue. Methods. We prospectively recruited 731 elderly participants with hip fracture in two Leeds hospitals. Psychiatric diagnosis was made within 5 days of surgery using the Geriatric Mental State schedule and other standardized instruments, and data on confounding factors were collected. The main study outcomes were length of hospital stay and mortality over the 6 months after fracture. Results. Fifty-five per cent of participants had cognitive impairment (dementia in 40% and delirium in 15%), 13% had a depressive disorder, 2% had alcohol misuse and 2% had other psychiatric diagnoses. Participants were likely to remain in hospital longer if they suffered from dementia, delirium or depression. The relative risks of mortality over the 6 months after hip fracture were increased in dementia and delirium, but not in depression. Conclusions. Psychiatric illness is common after hip fracture and has significant effects on important outcomes. This suggests a need for randomized controlled trials of psychiatric interventions in the elderly hip fracture population.
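
    As a rough illustration of the kind of effect estimate reported, the sketch below computes a relative risk of 6-month mortality with a 95% confidence interval from a 2x2 table; the counts are hypothetical, not the study's data.

```python
# Sketch: relative risk of 6-month mortality (exposed = delirium) with a 95% CI.
# The 2x2 counts below are hypothetical, not taken from the study.
import numpy as np

a, b = 30, 80    # delirium: died, survived (hypothetical)
c, d = 60, 400   # no delirium: died, survived (hypothetical)

rr = (a / (a + b)) / (c / (c + d))
se_log_rr = np.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))   # delta-method SE of log RR
lo, hi = np.exp(np.log(rr) + np.array([-1.96, 1.96]) * se_log_rr)
print(f"RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```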

    A Comprehensive Analysis of Proportional Intensity-based Software Reliability Models with Covariates (New Developments on Mathematical Decision Making Under Uncertainty)

    The black-box approach based on stochastic software reliability models is a simple methodology that uses only software fault data to describe the temporal behavior of fault-detection processes, but it fails to incorporate significant development metrics data observed during the development process. In this paper we develop proportional intensity-based software reliability models with time-dependent metrics, and propose a statistical framework to assess software reliability using the time-dependent covariates as well as the software fault data. The resulting models are similar to the usual proportional hazards model, but possess a somewhat different covariate structure from the existing one. We compare these metrics-based software reliability models with eleven well-known non-homogeneous Poisson process models, which are special cases of our models, and quantitatively evaluate their goodness-of-fit and predictive performance. As an important result, the accuracy of reliability assessment depends strongly on the kind of software metrics used for analysis and can be improved by incorporating time-dependent metrics data into the modeling.
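
    A minimal sketch of fitting a proportional intensity model of this kind by maximum likelihood, assuming a Goel-Okumoto baseline intensity and a synthetic time-dependent metrics covariate (the fault data and parameter values are illustrative, not the paper's):

```python
# Sketch: MLE for lambda(t) = a*b*exp(-b*t) * exp(beta * x(t))
# (Goel-Okumoto baseline with a time-dependent metrics covariate).
import numpy as np
from scipy.optimize import minimize

T = 100.0
fault_times = np.sort(np.random.default_rng(1).uniform(0, T, 40))  # synthetic fault data
x = lambda t: np.sin(2 * np.pi * t / T)          # synthetic time-dependent metric

grid = np.linspace(0.0, T, 2001)                 # grid for numerical integration
dt = grid[1] - grid[0]

def neg_log_lik(theta):
    log_a, log_b, beta = theta                   # log-parameterise a, b > 0
    a, b = np.exp(log_a), np.exp(log_b)
    lam = lambda t: a * b * np.exp(-b * t) * np.exp(beta * x(t))
    lam_grid = lam(grid)
    integral = np.sum((lam_grid[1:] + lam_grid[:-1]) / 2) * dt   # trapezoid rule
    # NHPP log-likelihood: sum_i log lambda(t_i) - integral_0^T lambda(t) dt
    return -(np.sum(np.log(lam(fault_times))) - integral)

res = minimize(neg_log_lik, x0=[np.log(50.0), np.log(0.05), 0.0], method="Nelder-Mead")
a_hat, b_hat = np.exp(res.x[:2])
print(f"a = {a_hat:.1f}, b = {b_hat:.4f}, beta = {res.x[2]:.3f}")
```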

    Hazard rate models for early warranty issue detection using upstream supply chain information

    This research presents a statistical methodology for constructing an early automotive warranty issue detection model based on upstream supply chain information. This contrasts with extant methods, which are mostly reactive and rely only on data available from the OEMs (original equipment manufacturers). For upstream supply chain information with a direct history of warranty claims, the research proposes hazard rate models that link the upstream supply chain information as explanatory covariates for early detection of warranty issues. For upstream supply chain information without a direct warranty claims history, we introduce Bayesian hazard rate models to account for uncertainties in the explanatory covariates. In doing so, the methodology improves both the accuracy of warranty issue detection and the lead time for detection. The proposed methodology is illustrated and validated using real-world data from a leading global Tier-one automotive supplier.
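
    A minimal sketch of the non-Bayesian variant: a Weibull proportional hazards model fitted by maximum likelihood, with a hypothetical supplier-quality score as the upstream covariate (the data, covariate and censoring scheme are assumptions, not the paper's).

```python
# Sketch: Weibull proportional hazards with an upstream supply-chain covariate
# (e.g. a supplier test score) as an early-warning predictor of warranty claims.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
n = 500
x = rng.normal(size=n)                                  # hypothetical supplier quality score
t = rng.weibull(1.5, size=n) * 24 * np.exp(-0.3 * x)    # months to claim (synthetic)
event = (t < 36).astype(float)                          # claims beyond 36 months censored
t = np.minimum(t, 36.0)

def neg_log_lik(theta):
    log_k, log_lam, beta = theta
    k, lam = np.exp(log_k), np.exp(log_lam)
    # hazard h(t|x) = (k/lam)*(t/lam)**(k-1) * exp(beta*x)
    log_h = np.log(k / lam) + (k - 1) * np.log(t / lam) + beta * x
    H = (t / lam) ** k * np.exp(beta * x)               # cumulative hazard
    # censored log-likelihood: events contribute log h, everyone contributes -H
    return -(np.sum(event * log_h) - np.sum(H))

res = minimize(neg_log_lik, x0=[0.0, np.log(24.0), 0.0], method="Nelder-Mead")
k_hat, lam_hat = np.exp(res.x[:2])
print(f"shape = {k_hat:.2f}, scale = {lam_hat:.1f} months, beta = {res.x[2]:.2f}")
```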

    Software reliability: Repetitive run experimentation and modeling

    A software experiment conducted with repetitive run sampling is reported. Independently generated input data were used to verify that interfailure times are very nearly exponentially distributed, to obtain good estimates of the failure rates of individual errors, and to demonstrate how widely those rates vary. This fact invalidates many of the popular software reliability models now in use. The logarithm of the program failure rate was a nearly linear function of the number of errors corrected. A new model of software reliability is proposed that incorporates these observations.
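
    The reported log-linearity can be illustrated with a small simulation: interfailure times are drawn from exponential distributions whose rate decays geometrically with each corrected error, and the log of the estimated rate is regressed on the error count (all values below are synthetic, not the experiment's data).

```python
# Sketch: check log-linearity of the failure rate in the number of errors
# corrected, i.e. lambda_i = lambda_0 * k**i. All rates are synthetic.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
lambda0, k, n_errors, runs = 1.0, 0.8, 20, 50

# For each error count i, pool interfailure times from repeated runs and
# estimate the rate as 1 / mean (the MLE for exponential data).
est_rates = []
for i in range(n_errors):
    times = rng.exponential(1.0 / (lambda0 * k**i), size=runs)
    est_rates.append(1.0 / times.mean())

fit = linregress(np.arange(n_errors), np.log(est_rates))
print(f"slope = {fit.slope:.3f} (true log k = {np.log(k):.3f}), "
      f"R^2 = {fit.rvalue**2:.3f}")
```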

    Dynamic artificial neural network-based reliability considering operational context of assets

    Asset reliability is a key issue in maintenance management policy, and given its importance, several estimation methods and models have been proposed within the reliability engineering discipline. However, these models involve certain assumptions which are the source of different uncertainties inherent to the estimations. An important source of uncertainty is the operational context in which the assets operate and how it affects the different failures. This paper therefore contributes to reducing the uncertainty arising from the operational context by proposing a novel method and validating it through a case study. The proposed model specifically addresses changes in the operational context by implementing dynamic capabilities in a new conception of the Proportional Hazards Model. It also allows the modelling of interactions among working-environment variables, as well as hidden phenomena, by integrating an artificial neural network into the model.
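
    A minimal sketch of the core idea, assuming a DeepSurv-style setup in which a small neural network replaces the linear covariate term of a Cox proportional hazards model and is trained on the partial likelihood (the data, architecture and training loop are illustrative assumptions, not the paper's model):

```python
# Sketch: proportional hazards with a neural-network covariate link,
# h(t|x) = h0(t) * exp(g(x)), trained on the Cox partial likelihood.
import torch
import torch.nn as nn

torch.manual_seed(0)
n, p = 300, 4
X = torch.randn(n, p)                        # operational-context covariates (synthetic)
time = torch.rand(n) * 100                   # time to failure / censoring (synthetic)
event = (torch.rand(n) < 0.7).float()        # 1 = failure observed, 0 = censored

net = nn.Sequential(nn.Linear(p, 16), nn.ReLU(), nn.Linear(16, 1))

def neg_partial_log_lik(risk, time, event):
    # Sort by descending time so the risk set of subject i is indices 0..i.
    order = torch.argsort(time, descending=True)
    risk, event = risk[order].squeeze(-1), event[order]
    log_risk_set = torch.logcumsumexp(risk, dim=0)   # log sum_{j in R(t_i)} e^{risk_j}
    return -((risk - log_risk_set) * event).sum() / event.sum()

opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for epoch in range(200):
    opt.zero_grad()
    loss = neg_partial_log_lik(net(X), time, event)
    loss.backward()
    opt.step()
print(f"final negative partial log-likelihood: {loss.item():.3f}")
```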