
    On the average uncertainty for systems with nonlinear coupling

    The increased uncertainty and complexity of nonlinear systems have motivated investigators to consider generalized approaches to defining an entropy function. New insights are achieved by defining the average uncertainty in the probability domain as a transformation of entropy functions. The Shannon entropy, when transformed to the probability domain, is the weighted geometric mean of the probabilities. For the exponential and Gaussian distributions, we show that the weighted geometric mean of the distribution is equal to the density of the distribution at the location plus the scale, i.e., at the width of the distribution. The average uncertainty is generalized via the weighted generalized mean, in which the moment is a function of the nonlinear source. Both the Renyi and Tsallis entropies transform to this definition of the generalized average uncertainty in the probability domain. For the generalized Pareto and Student's t-distributions, which are the maximum entropy distributions for these generalized entropies, the appropriate weighted generalized mean also equals the density of the distribution at the location plus scale. A coupled entropy function is proposed, which is equal to the normalized Tsallis entropy divided by one plus the coupling. Comment: 24 pages, including 4 figures and 1 table
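    The exponential case of the identity above is easy to check numerically: the weighted geometric mean of the density, $\exp(\int f \ln f\,dx) = e^{-H}$, should equal the density evaluated at the location plus the scale. A minimal sketch in plain NumPy (the scale value is an illustrative choice, not from the paper):

```python
import numpy as np

# Exponential density with location 0 and scale sigma (illustrative value)
sigma = 2.0
x = np.linspace(1e-9, 60.0, 2_000_000)
dx = x[1] - x[0]
f = np.exp(-x / sigma) / sigma

# Weighted geometric mean of the density: exp(E[ln f(X)]) = exp(-H),
# where H = 1 + ln(sigma) is the Shannon entropy of the exponential
wgm = np.exp(np.sum(f * np.log(f)) * dx)

# Density at location + scale, i.e. at x = 0 + sigma
density_at_width = np.exp(-sigma / sigma) / sigma

print(wgm, density_at_width)  # the two values should agree closely
```

    Both quantities reduce analytically to $e^{-1}/\sigma$, which is what the numerical integral recovers.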

    Weighted Version of Generalized Inverse Weibull Distribution

    Weighted distributions are used in many fields, such as medicine, ecology, and reliability. A weighted version of the generalized inverse Weibull distribution, known as the weighted generalized inverse Weibull distribution (WGIWD), is proposed. Basic properties, including the mode, moments, moment generating function, skewness, kurtosis, and Shannon's entropy, are studied. The usefulness of the new model is demonstrated by applying it to a real-life data set, where the WGIWD fits better than its submodels, such as the length-biased generalized inverse Weibull (LGIW), generalized inverse Weibull (GIW), inverse Weibull (IW), and inverse exponential (IE) distributions.
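    In general, a weighted distribution re-weights a baseline density $f$ by a weight function $w$ and renormalizes: $f_w(x) = w(x)f(x)/E[w(X)]$. The sketch below applies a hypothetical weight $w(x)=\sqrt{x}$ to a generalized inverse Weibull density; the parametrization, weight, and parameter values are assumptions for illustration, not the paper's construction:

```python
import numpy as np

def giw_pdf(x, gamma=1.5, alpha=1.0, beta=2.0):
    """Generalized inverse Weibull density in the common parametrization
    F(x) = exp(-gamma * (alpha / x)**beta)."""
    return gamma * beta * alpha ** beta * x ** (-(beta + 1)) \
        * np.exp(-gamma * (alpha / x) ** beta)

# Weighted version: f_w(x) = w(x) * f(x) / E[w(X)]
x = np.linspace(1e-3, 500.0, 2_000_000)
dx = x[1] - x[0]
w = np.sqrt(x)                     # hypothetical weight w(x) = x**0.5
f = giw_pdf(x)
f_w = w * f / np.sum(w * f * dx)   # renormalize by E[w(X)]

print(np.sum(f_w * dx))  # integrates to 1 by construction
```

    Because the weight is increasing in x, the weighted density shifts probability mass toward larger values, which is the mechanism behind length-biased sampling.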

    Ergodic properties of quasi-Markovian generalized Langevin equations with configuration dependent noise and non-conservative force

    We discuss the ergodic properties of quasi-Markovian stochastic differential equations, providing general conditions that ensure the existence and uniqueness of a smooth invariant distribution and exponential convergence of the evolution operator in suitably weighted $L^\infty$ spaces, which implies the validity of a central limit theorem for the respective solution processes. The main new result is an ergodicity condition for the generalized Langevin equation with configuration-dependent noise and (non-)conservative force.
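    A common quasi-Markovian form of the generalized Langevin equation augments $(q, p)$ with an Ornstein--Uhlenbeck auxiliary variable $z$, giving an exponential memory kernel. The Euler--Maruyama sketch below samples a harmonic potential with configuration-independent noise, which is the simplest member of this class; all parameter values are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Harmonic potential V(q) = q**2/2; inverse temperature, coupling, damping
beta, lam, alpha = 1.0, 1.0, 1.0
dt, n_steps, burn_in = 0.01, 400_000, 50_000

q, p, z = 0.0, 0.0, 0.0
qs = np.empty(n_steps)
noise = np.sqrt(2.0 * alpha / beta * dt) * rng.standard_normal(n_steps)
for k in range(n_steps):
    q += p * dt                                  # position update
    p += (-q + lam * z) * dt                     # force -V'(q) plus coupling to z
    z += (-lam * p - alpha * z) * dt + noise[k]  # OU auxiliary variable
    qs[k] = q

print(np.var(qs[burn_in:]))  # should be close to 1/beta = 1.0
```

    The extended system has the Gibbs measure $\propto \exp(-\beta(q^2/2 + p^2/2 + z^2/2))$ as its invariant distribution, so the long-run variance of q tests whether the sampler equilibrates.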

    Study on the Problem of Estimation of Parameters of Generalized Exponential Distribution

    Mudholkar, Srivastava and Freimer (1995) proposed the three-parameter exponentiated Weibull distribution. The two-parameter exponentiated exponential, or generalized exponential (GE), distribution is a particular member of the exponentiated Weibull family. We study the problem of estimating the unknown parameters of the GE distribution and describe several estimation techniques that are useful for this purpose. We consider the maximum likelihood, moments, percentile, least squares, and weighted least squares estimators.
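    As a concrete illustration of the maximum likelihood approach, the sketch below fits the GE distribution, $F(x) = (1 - e^{-\lambda x})^\alpha$, by direct numerical maximization of the log-likelihood. The simulated data, parameter values, and the use of SciPy's Nelder--Mead optimizer are illustrative choices, not taken from the paper:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Simulate from GE(alpha, lam): F(x) = (1 - exp(-lam*x))**alpha,
# using the inverse CDF x = -log(1 - u**(1/alpha)) / lam
alpha_true, lam_true, n = 2.0, 1.5, 5_000
u = rng.uniform(size=n)
x = -np.log(1.0 - u ** (1.0 / alpha_true)) / lam_true

def neg_loglik(theta):
    a, lam = np.exp(theta)  # log-parametrization keeps both positive
    return -(n * np.log(a) + n * np.log(lam)
             + (a - 1.0) * np.sum(np.log1p(-np.exp(-lam * x)))
             - lam * np.sum(x))

res = minimize(neg_loglik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
alpha_hat, lam_hat = np.exp(res.x)
print(alpha_hat, lam_hat)  # should be near (2.0, 1.5)
```

    The same simulated sample could be reused to compare the moment, percentile, and least squares estimators against the MLE.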

    Generalized Weibull and Inverse Weibull Distributions with Applications

    In this thesis, new classes of Weibull and inverse Weibull distributions, including the generalized new modified Weibull (GNMW), gamma-generalized inverse Weibull (GGIW), weighted proportional inverse Weibull (WPIW) and inverse new modified Weibull (INMW) distributions, are introduced. The GNMW contains several sub-models, including the new modified Weibull (NMW), generalized modified Weibull (GMW), modified Weibull (MW), Weibull (W) and exponential (E) distributions, to mention a few. The class of WPIW distributions contains several models as special cases, such as the length-biased, hazard and reverse hazard proportional inverse Weibull, proportional inverse Weibull, inverse Weibull, inverse exponential, inverse Rayleigh, and Frechet distributions. Included in the GGIW distribution are the sub-models: gamma-generalized Frechet, gamma-generalized inverse Rayleigh, gamma-generalized inverse exponential, inverse Weibull, inverse Rayleigh, inverse exponential, and Frechet distributions. The INMW distribution contains several sub-models, including the inverse Weibull, inverse new modified exponential, inverse new modified Rayleigh, new modified Frechet, inverse modified Weibull, inverse Rayleigh and inverse exponential distributions, as special cases. Properties of these distributions, including the behavior of the hazard function, moments, coefficients of variation, skewness, kurtosis, s-entropy, distributions of order statistics and Fisher information, are presented. Estimates of the model parameters via the method of maximum likelihood (ML) are presented. An extensive simulation study is conducted and numerical examples are given.
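    Among the properties listed above, the behavior of the hazard function is what distinguishes many of these inverse Weibull variants: even the plain inverse Weibull has a unimodal (upside-down bathtub) hazard. A quick numerical check, with illustrative parameter values:

```python
import numpy as np

# Inverse Weibull: F(x) = exp(-(alpha / x)**beta); illustrative parameters
alpha, beta = 1.0, 2.0
x = np.linspace(0.05, 10.0, 5_000)
F = np.exp(-(alpha / x) ** beta)
f = beta * alpha ** beta * x ** (-(beta + 1)) * F
hazard = f / (1.0 - F)

peak = int(np.argmax(hazard))
print(x[peak])  # the hazard rises to an interior maximum, then falls
```

    The interior maximum confirms the upside-down bathtub shape, which makes these models attractive for lifetimes where risk first increases and then declines.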

    Improved inference for the generalized Pareto distribution under linear, power and exponential normalization

    We discuss three estimation methods: the method of moments, probability weighted moments, and L-moments for the scale parameter and the extreme value index of the generalized Pareto distribution under linear normalization. Moreover, we adapt these methods for use with the generalized Pareto distribution under power and exponential normalizations. A simulation study is conducted to compare the three methods across the three models; probability weighted moments turned out to perform best. A new computational technique for improving fitting quality is proposed and tested on two real-world data sets using probability weighted moments. We revisit various maxima data sets that had previously been addressed in the literature and that the generalized extreme value distribution under linear normalization had failed to explain adequately, and we use the suggested procedure to find good fits.
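    For the linear-normalization case, the probability weighted moments estimators admit closed forms. The sketch below simulates from the GPD $F(x) = 1 - (1 + \xi x/\sigma)^{-1/\xi}$ and recovers $(\sigma, \xi)$ from the sample PWMs $b_0 = E[X]$ and $a_1 = E[X(1-F(X))]$; the parameter values are illustrative, and the relations follow the standard Hosking--Wallis development rather than this paper's adapted versions:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate GPD(sigma, xi) via the inverse CDF x = sigma/xi * ((1-u)**(-xi) - 1)
sigma_true, xi_true, n = 2.0, 0.2, 50_000
u = rng.uniform(size=n)
x = np.sort(sigma_true / xi_true * ((1.0 - u) ** (-xi_true) - 1.0))

# Sample probability weighted moments: b0 estimates E[X],
# a1 estimates E[X * (1 - F(X))] using plotting-position weights
i = np.arange(1, n + 1)
b0 = x.mean()
a1 = np.mean((n - i) / (n - 1) * x)

# PWM estimators, from b0 = sigma/(1 - xi) and a1 = sigma/(2*(2 - xi))
xi_hat = (b0 - 4.0 * a1) / (b0 - 2.0 * a1)
sigma_hat = b0 * (1.0 - xi_hat)
print(xi_hat, sigma_hat)  # should be near (0.2, 2.0)
```

    Solving the two moment equations for the two unknowns gives the closed-form estimators above; they require xi < 1/2 so that the relevant moments exist.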

    Stochastic Weighted Graphs: Flexible Model Specification and Simulation

    In most domains of network analysis, researchers consider networks that arise in nature with weighted edges. Such networks are routinely dichotomized in the interest of using available methods for statistical inference with networks. The generalized exponential random graph model (GERGM) is a recently proposed method for simulating and modeling the edges of a weighted graph. The GERGM specifies a joint distribution for an exponential family of graphs with continuous-valued edge weights. However, current estimation algorithms for the GERGM only allow inference on a restricted family of model specifications. To address this issue, we develop a Metropolis--Hastings method that can be used to estimate any GERGM specification, thereby significantly extending the family of weighted graphs that can be modeled with the GERGM. We show that the new flexible model specifications are capable of avoiding likelihood degeneracy and efficiently capturing network structure in applications where such models were not previously available. We demonstrate the utility of this new class of GERGMs through application to two real network data sets, and we further assess the effectiveness of the proposed methodology by simulating non-degenerate model specifications from the well-studied two-stars model. A working R version of the GERGM code is available in the supplement and will be incorporated into the gergm CRAN package. Comment: 33 pages, 6 figures. To appear in Social Networks
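    The estimation machinery at the core of this approach is standard Metropolis--Hastings sampling. The toy sketch below shows the random-walk variant on a one-dimensional target; in the GERGM setting the log-target would be the weighted-graph log-likelihood, so this code is purely illustrative and is not the authors' R implementation:

```python
import numpy as np

rng = np.random.default_rng(3)

def log_target(theta):
    # Toy unnormalized log-density (a standard normal); in the GERGM
    # setting this would be the weighted-graph log-likelihood
    return -0.5 * theta ** 2

def metropolis_hastings(n_samples, step=1.0, theta0=0.0):
    theta = theta0
    samples = np.empty(n_samples)
    for t in range(n_samples):
        prop = theta + step * rng.standard_normal()  # random-walk proposal
        # Accept with probability min(1, target(prop) / target(theta))
        if np.log(rng.uniform()) < log_target(prop) - log_target(theta):
            theta = prop
        samples[t] = theta
    return samples

draws = metropolis_hastings(50_000)
print(draws.mean(), draws.var())  # should be near (0, 1)
```

    Because the acceptance ratio only involves differences of log-densities, the normalizing constant of the target never needs to be computed, which is exactly what makes the method usable for otherwise intractable model specifications.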

    Partial Generalized Probability Weighted Moments for Exponentiated Exponential Distribution


    EP-GIG Priors and Applications in Bayesian Sparse Learning

    In this paper we propose a novel framework for the construction of sparsity-inducing priors. In particular, we define such priors as a mixture of exponential power distributions with a generalized inverse Gaussian density (EP-GIG). EP-GIG is a variant of the generalized hyperbolic distributions, and its special cases include Gaussian scale mixtures and Laplace scale mixtures. Furthermore, Laplace scale mixtures can subserve a Bayesian framework for sparse learning with nonconvex penalization. The densities of EP-GIG can be explicitly expressed. Moreover, the corresponding posterior distribution also follows a generalized inverse Gaussian distribution. These properties lead us to EM algorithms for Bayesian sparse learning. We show that these algorithms bear an interesting resemblance to iteratively reweighted $\ell_2$ or $\ell_1$ methods. In addition, we present two extensions for grouped variable selection and logistic regression. Comment: 33 pages, 10 figures
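    The resemblance to iteratively reweighted $\ell_2$ methods can be illustrated with an adaptive-ridge sketch: each step solves a weighted ridge regression whose per-coordinate weights $1/(|\beta_j| + \epsilon)$ mimic an $\ell_1$ penalty. The data, penalty value, and update rule below are illustrative assumptions, not the paper's EP-GIG EM algorithm:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic sparse regression problem (illustrative)
n, p = 200, 5
beta_true = np.array([3.0, 0.0, 0.0, -2.0, 0.0])
X = rng.standard_normal((n, p))
y = X @ beta_true + 0.1 * rng.standard_normal(n)

# Iteratively reweighted l2: each step solves a ridge system whose
# per-coordinate weights 1/(|beta_j| + eps) approximate an l1 penalty
lam, eps = 1.0, 1e-6
beta = np.ones(p)
for _ in range(100):
    W = np.diag(lam / (np.abs(beta) + eps))
    beta = np.linalg.solve(X.T @ X + W, X.T @ y)

print(np.round(beta, 3))  # near-zero entries where beta_true is zero
```

    Coordinates near zero receive ever-larger penalties and are driven toward zero, while large coordinates are barely shrunk; this is the mechanism the EM algorithms exploit, with the reweighting supplied by the posterior expectations of the GIG mixing variables.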