1,174 research outputs found

    A bivariate count model with discrete Weibull margins

    Multivariate discrete data arise in many fields (statistical quality control, epidemiology, failure and reliability analysis, etc.), and modelling such data is a relevant task. Here we consider the construction of a bivariate model with discrete Weibull margins, based on the Farlie-Gumbel-Morgenstern copula, analyse its properties, especially in terms of attainable correlation, and propose several methods for the point estimation of its parameters. Two of them are the standard one-step and two-step maximum likelihood procedures; the other two are based on an approximate method of moments and on the method of proportion, which represent intuitive alternatives for estimating the dependence parameter. A Monte Carlo simulation study is presented, comprising more than one hundred artificial settings, which empirically assesses the performance of the different estimation techniques in terms of statistical properties and computational cost. For illustrative purposes, the model and related inferential procedures are fitted and applied to two datasets taken from the literature, concerning failure data and presenting either positive or negative correlation between the two observed variables. The applications show that the proposed bivariate discrete Weibull distribution can model correlated counts even better than existing and well-established joint distributions.
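    The construction described above, discrete Weibull margins coupled through an FGM copula, can be sketched as follows. The function names and the type I discrete Weibull parameterisation P(X >= x) = q^(x^beta) are assumptions for illustration, not the authors' code:

```python
import math
import random

def dw_quantile(u, q, beta):
    """Inverse CDF of the (type I) discrete Weibull distribution,
    assumed here with survival function P(X >= x) = q**(x**beta)."""
    if u <= 0.0:
        return 0
    x = math.ceil((math.log(1.0 - u) / math.log(q)) ** (1.0 / beta)) - 1
    return max(x, 0)

def fgm_pair(theta, rng):
    """Draw (u, v) from the Farlie-Gumbel-Morgenstern copula
    C(u, v) = u*v*(1 + theta*(1-u)*(1-v)), theta in [-1, 1],
    by inverting the conditional distribution of V given U = u."""
    u, w = rng.random(), rng.random()
    a = theta * (1.0 - 2.0 * u)
    if abs(a) < 1e-12:
        return u, w  # conditional is uniform when a ~ 0
    # Solve w = (1+a)*v - a*v**2 for v in [0, 1] (take the root that
    # reduces to v = w as a -> 0).
    v = ((1.0 + a) - math.sqrt((1.0 + a) ** 2 - 4.0 * a * w)) / (2.0 * a)
    return u, v

def bdw_sample(n, q1, beta1, q2, beta2, theta, seed=0):
    """Sample n pairs from the bivariate discrete Weibull model."""
    rng = random.Random(seed)
    return [(dw_quantile(u, q1, beta1), dw_quantile(v, q2, beta2))
            for u, v in (fgm_pair(theta, rng) for _ in range(n))]
```

    Note that the FGM copula only admits moderate dependence, so even theta near plus or minus 1 yields limited correlation; this is the kind of attainable-correlation property the abstract refers to.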

    New extensions of Rayleigh distribution based on inverted-Weibull and Weibull distributions

    The Rayleigh distribution was proposed in the fields of acoustics and optics by Lord Rayleigh. It has wide applications in communication theory, such as describing the instantaneous peak power of received radio signals, and in the study of vibrations and waves. It has also been used for modeling wave propagation, radiation, synthetic aperture radar images, and lifetime data in engineering and clinical studies. This work proposes two new extensions of the Rayleigh distribution, namely the Rayleigh inverted-Weibull (RIW) and the Rayleigh Weibull (RW) distributions. Several fundamental properties are derived in this study, including the reliability and hazard functions, moments, the quantile function, random number generation, skewness, and kurtosis. The maximum likelihood estimators for the parameters of the two proposed models are also derived, along with asymptotic confidence intervals. Two real data sets, from communication systems and clinical trials, are analyzed to illustrate the proposed extensions. The results demonstrate that the proposed extensions fit better than other extensions and competing models.
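    The RIW and RW constructions themselves are not specified in the abstract, but the baseline Rayleigh quantities that such extensions build on can be sketched as follows (the single-scale-parameter form is assumed for illustration):

```python
import math

def rayleigh_pdf(x, sigma):
    """Density f(x) = (x / sigma^2) * exp(-x^2 / (2 sigma^2)), x >= 0."""
    return (x / sigma**2) * math.exp(-x**2 / (2.0 * sigma**2))

def rayleigh_reliability(x, sigma):
    """Survival (reliability) function R(x) = exp(-x^2 / (2 sigma^2))."""
    return math.exp(-x**2 / (2.0 * sigma**2))

def rayleigh_hazard(x, sigma):
    """Hazard h(x) = f(x) / R(x) = x / sigma^2, linearly increasing in x."""
    return x / sigma**2

def rayleigh_quantile(u, sigma):
    """Inverse CDF, F^{-1}(u) = sigma * sqrt(-2 ln(1 - u)); also the basis
    for random number generation by inversion."""
    return sigma * math.sqrt(-2.0 * math.log(1.0 - u))
```

    The linearly increasing hazard is exactly what extensions such as RIW/RW aim to generalise, since many lifetime data sets exhibit non-monotone hazard shapes.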

    Warranty Data Analysis: A Review

    Warranty claims and supplementary data contain useful information about product quality and reliability. Analysing such data can therefore benefit manufacturers in identifying early warnings of abnormalities in their products, providing useful information about failure modes to aid design modification, estimating product reliability to decide on warranty policy, and forecasting future warranty claims needed for preparing fiscal plans. In the last two decades, considerable research has been conducted in warranty data analysis (WDA) from several different perspectives. This article attempts to summarise and review the research and developments in WDA with emphasis on models, methods and applications. It concludes with a brief discussion of current practices and possible future trends in WDA.

    Symmetric and Asymmetric Distributions

    In recent years, advances in computer software have substantially increased the number of scientific publications that seek to introduce new probabilistic modelling frameworks, including continuous and discrete approaches, and univariate and multivariate models. Many of these theoretical and applied statistical works are related to distributions that try to break the symmetry of the normal distribution and other similar symmetric models, mainly using Azzalini's scheme. This strategy uses a symmetric distribution as a baseline case; an extra parameter is then added to the parent model to control the skewness of the new family of probability distributions. The most widespread and popular model is the one based on the normal distribution, which produces the skew-normal distribution. In this Special Issue on symmetric and asymmetric distributions, works related to this topic are presented, as well as theoretical and applied proposals that have connections with and implications for this topic. Immediate applications of this line of work include different scenarios such as economics, environmental sciences, biometrics, engineering, health, etc. This Special Issue comprises nine works that follow this methodology, derived using a simple process while retaining the rigor that the subject deserves. Readers of this Issue will surely find future lines of work that will enable them to achieve fruitful research results.
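    Azzalini's scheme mentioned above can be made concrete for the normal baseline: the skew-normal density multiplies the symmetric pdf by a cdf factor controlled by a shape parameter. A minimal sketch, assuming a standard normal baseline:

```python
import math

def skew_normal_pdf(x, alpha):
    """Azzalini's skew-normal density f(x) = 2 * phi(x) * Phi(alpha * x),
    where phi and Phi are the standard normal pdf and cdf.
    alpha = 0 recovers the symmetric N(0, 1) baseline."""
    phi = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(alpha * x / math.sqrt(2.0)))
    return 2.0 * phi * Phi

def integrate_density(alpha, lo=-10.0, hi=10.0, n=4000):
    """Trapezoidal check that the density integrates to 1 for any alpha."""
    h = (hi - lo) / n
    s = 0.5 * (skew_normal_pdf(lo, alpha) + skew_normal_pdf(hi, alpha))
    s += sum(skew_normal_pdf(lo + i * h, alpha) for i in range(1, n))
    return s * h
```

    The same perturbation-of-symmetry construction applies to other symmetric baselines (e.g. Student's t), which is the general pattern this Special Issue revolves around.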

    Validation of three new measure-correlate-predict models for the long-term prospection of the wind resource

    The estimation of the long-term wind resource at a prospective site based on a relatively short on-site measurement campaign is an indispensable task in the development of a commercial wind farm. The typical industry approach is based on the measure-correlate-predict (MCP) method, where a relational model between the site wind velocity data and the data obtained from a suitable reference site is built from concurrent records. In a subsequent step, a long-term prediction for the prospective site is obtained from a combination of the relational model and the historic reference data. In the present paper, a systematic study is presented where three new MCP models, together with two published reference models (a simple linear regression and the variance ratio method), have been evaluated based on concurrent synthetic wind speed time series for two sites, simulating the prospective and the reference site. The synthetic method has the advantage of generating time series with the desired statistical properties, including Weibull scale and shape factors, required to evaluate the five methods under all plausible conditions. In this work, first a systematic discussion of the statistical fundamentals behind MCP methods is provided, and three new models, one based on a nonlinear regression and two (termed kernel methods) derived from the use of conditional probability density functions, are proposed. All models are evaluated using five metrics under a wide range of values of the correlation coefficient, the Weibull scale, and the Weibull shape factor. Only one of the models, a kernel method based on bivariate Weibull probability functions, is capable of accurately predicting all performance metrics studied.
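    As a concrete reference point, the variance ratio method cited above (one of the two published baselines) can be sketched as follows; this is a generic plain-Python illustration, not the paper's implementation:

```python
import math

def variance_ratio_mcp(site, ref):
    """Fit the variance-ratio MCP model from concurrent site/reference
    wind-speed records.  The prediction
        y_hat = my + (sy / sx) * (x - mx)
    matches both the mean and the variance of the site data, unlike
    plain OLS regression, whose predictions are variance-deflated."""
    n = len(site)
    mx = sum(ref) / n
    my = sum(site) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in ref) / n)
    sy = math.sqrt(sum((y - my) ** 2 for y in site) / n)
    slope = sy / sx
    return lambda x: my + slope * (x - mx)
```

    Usage follows the two-step MCP pattern described above: fit on the concurrent records, then apply the returned predictor to the long historic reference series to reconstruct the site's long-term wind speeds.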

    A Review of Probabilistic Methods of Assessment of Load Effects in Bridges

    This paper reviews a range of statistical approaches to illustrate the influence of data quality and quantity on the probabilistic modelling of traffic load effects. It also aims to demonstrate the importance of long-run simulations in calculating characteristic traffic load effects. The popular methods of Peaks Over Threshold and Generalized Extreme Value are considered, alongside other methods including the Box-Cox approach, fitting to a Normal distribution, and the Rice formula. For these five methods, curves are fitted to the tails of the daily maximum data. Bayesian Updating and Predictive Likelihood are also assessed, which require the entire data set for fitting. The accuracy of each method in calculating 75-year characteristic values and probability of failure, using different quantities of data, is assessed. The nature of the problem is first introduced by a simple numerical example with a known theoretical answer. It is then extended to more realistic problems, where long-run simulations are used to provide benchmark results against which each method is compared. Increasing the amount of data in the sample results in higher accuracy of approximation, but cannot completely eliminate the uncertainty associated with the extrapolation. Results also show that the accuracy of estimates of characteristic values and probabilities of failure is more a function of data quality than of extrapolation technique. This highlights the importance of long-run simulations as a means of reducing the errors associated with the extrapolation process.
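    As one illustration of the extrapolation step discussed above, a Gumbel distribution (the light-tailed member of the GEV family) can be fitted to daily maxima by the method of moments and extrapolated to a long return period. This is a generic sketch, not the paper's exact procedure; in particular, how the 75-year horizon maps to a number of daily blocks (e.g. assuming ~250 heavy-traffic days per year) is an assumption:

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def fit_gumbel_moments(daily_maxima):
    """Method-of-moments fit of a Gumbel distribution to block maxima:
    scale = sqrt(6) * sd / pi,  loc = mean - gamma * scale."""
    n = len(daily_maxima)
    mean = sum(daily_maxima) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in daily_maxima) / n)
    scale = math.sqrt(6.0) * sd / math.pi
    loc = mean - EULER_GAMMA * scale
    return loc, scale

def return_level(loc, scale, n_blocks):
    """Level exceeded on average once every n_blocks blocks, i.e. the
    (1 - 1/n_blocks) quantile of the fitted Gumbel distribution."""
    p = 1.0 - 1.0 / n_blocks
    return loc - scale * math.log(-math.log(p))
```

    The further the target return period lies beyond the observed record, the more the answer depends on the fitted tail rather than the data, which is precisely the uncertainty the long-run simulations in the paper are used to benchmark.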

    Asymmetric multivariate normal mixture GARCH

    An asymmetric multivariate generalization of the recently proposed class of normal mixture GARCH models is developed. Issues of parametrization and estimation are discussed. Conditions for covariance stationarity and the existence of the fourth moment are derived, and expressions for the dynamic correlation structure of the process are provided. In an application to stock market returns, it is shown that the disaggregation of the conditional (co)variance process generated by the model provides substantial intuition. Moreover, the model exhibits a strong performance in calculating out-of-sample Value-at-Risk measures.
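    One quantity mentioned above, the Value-at-Risk of a normal mixture, has no closed-form quantile, but it can be computed numerically from the mixture CDF. A minimal sketch, assuming a static mixture for illustration (the paper's model makes the component variances time-varying via GARCH dynamics):

```python
import math

def mixture_cdf(x, weights, mus, sigmas):
    """CDF of a finite normal mixture with the given component
    weights, means, and standard deviations."""
    return sum(w * 0.5 * (1.0 + math.erf((x - m) / (s * math.sqrt(2.0))))
               for w, m, s in zip(weights, mus, sigmas))

def mixture_var(alpha, weights, mus, sigmas, lo=-50.0, hi=50.0):
    """alpha-quantile of the mixture (the Value-at-Risk level for a
    return distribution), found by bisection on the monotone CDF."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mixture_cdf(mid, weights, mus, sigmas) < alpha:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    Because one mixture component can carry a small weight but a large variance, the lower quantiles can be far heavier than those of a single normal with the same overall variance, which is what makes mixture models attractive for VaR.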