
    Jeffreys-prior penalty, finiteness and shrinkage in binomial-response generalized linear models

    Penalization of the likelihood by Jeffreys' invariant prior, or by a positive power thereof, is shown to produce finite-valued maximum penalized likelihood estimates in a broad class of binomial generalized linear models. The class includes logistic regression, where the Jeffreys-prior penalty is known additionally to reduce the asymptotic bias of the maximum likelihood estimator, as well as models with other commonly used link functions such as probit and log-log. Shrinkage towards equiprobability across observations, relative to the maximum likelihood estimator, is established theoretically and studied through illustrative examples. Some implications of finiteness and shrinkage for inference are discussed, particularly when inference is based on Wald-type procedures. A widely applicable procedure is developed for computing maximum penalized likelihood estimates, using repeated maximum likelihood fits with iteratively adjusted binomial responses and totals. These theoretical results and methods underpin the increasingly widespread use of reduced-bias and similarly penalized binomial regression models in many applied fields.
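    The computational recipe in the abstract can be made concrete for binary logistic regression, where the Jeffreys-prior penalty coincides with Firth's bias reduction: each outer step adjusts the responses by half the hat-value and the totals by the full hat-value, then takes a maximum likelihood (Newton) step on the adjusted data. Below is a minimal sketch under those assumptions; the toy data, iteration cap, and tolerance are illustrative, not the paper's implementation.

```python
# Jeffreys-prior (Firth-type) penalized logistic regression via repeated
# ML steps on iteratively adjusted binomial responses and totals.
# Toy data; iteration cap and tolerance are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + covariate
beta_true = np.array([-0.5, 2.0])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))

beta = np.zeros(X.shape[1])
for _ in range(100):
    mu = 1.0 / (1.0 + np.exp(-X @ beta))     # fitted probabilities
    w = mu * (1.0 - mu)                      # binomial working weights
    XW = X * w[:, None]
    # Hat values h_i = w_i * x_i' (X'WX)^{-1} x_i
    h = np.einsum("ij,ij->i", X @ np.linalg.inv(X.T @ XW), XW)
    y_adj = y + h / 2.0                      # adjusted responses
    m_adj = 1.0 + h                          # adjusted totals
    # One ML (Newton) step on the adjusted data:
    score = X.T @ (y_adj - m_adj * mu)
    info = X.T @ (X * (m_adj * w)[:, None])
    step = np.linalg.solve(info, score)
    beta += step
    if np.max(np.abs(step)) < 1e-10:
        break

print("penalized MLE:", beta)  # stays finite even under data separation
```

    At the fixed point the adjusted-data score equals Firth's modified score, so the iteration's limit is the maximum penalized likelihood estimate.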

    Generalized Perceptual Linear Prediction (gPLP) Features for Animal Vocalization Analysis

    A new feature extraction model, generalized perceptual linear prediction (gPLP), is developed to calculate a set of perceptually relevant features for digital signal analysis of animal vocalizations. The gPLP model is a generalized adaptation of the perceptual linear prediction (PLP) model, popular in human speech processing, which incorporates perceptual information such as frequency warping and equal-loudness normalization into the feature extraction process. Since such perceptual information is available for a number of animal species, this new approach integrates that information into a generalized model to extract perceptually relevant features for a particular species. To illustrate, qualitative and quantitative comparisons are made between the species-specific gPLP model and the original PLP model using a set of vocalizations collected from captive African elephants (Loxodonta africana) and wild beluga whales (Delphinapterus leucas). The models that incorporate perceptual information outperform the original human-based models in both visualization and classification tasks.
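    The overall pipeline can be sketched for a single frame: power spectrum, species-specific frequency warping, equal-loudness weighting, cubic-root intensity-to-loudness compression, and an all-pole (LPC) fit. In the sketch below, the log warp and the classical PLP equal-loudness curve are placeholders standing in for curves fitted to a particular species' hearing data; all parameters are illustrative assumptions, not the paper's fitted models.

```python
# Illustrative gPLP-style front end for one windowed frame.
import numpy as np
from scipy.linalg import solve_toeplitz

def gplp_features(frame, fs, n_filters=21, order=12):
    spec = np.abs(np.fft.rfft(frame * np.hamming(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)

    # 1) Frequency warping: placeholder log warp standing in for a
    #    species-specific warp fitted to audiogram data.
    warp = lambda f: np.log1p(f / 300.0)
    centers = np.linspace(warp(50.0), warp(fs / 2.0), n_filters)
    width = centers[1] - centers[0]
    fbank = np.maximum(0.0, 1.0 - np.abs(warp(freqs)[None, :] - centers[:, None]) / width)
    energies = fbank @ spec

    # 2) Equal-loudness weighting at the filter centres (classical PLP
    #    curve, as a stand-in for a species-specific loudness curve).
    fc = 300.0 * np.expm1(centers)               # centres back in Hz
    w2 = (2.0 * np.pi * fc) ** 2
    eql = ((w2 + 56.8e6) * w2**2) / ((w2 + 6.3e6) ** 2 * (w2 + 0.38e9))

    # 3) Cubic-root compression, then an all-pole fit: the inverse DFT of
    #    the auditory spectrum gives autocorrelations, and the Toeplitz
    #    normal (Yule-Walker) equations give the predictor coefficients.
    aud = (energies * eql) ** (1.0 / 3.0)
    r = np.fft.irfft(aud)[: order + 1]
    return solve_toeplitz(r[:order], r[1 : order + 1])

fs = 2000.0
frame = np.random.default_rng(0).normal(size=512)  # stand-in for a real frame
print(gplp_features(frame, fs))
```

    A full front end would apply this per frame and convert the predictor coefficients to cepstral coefficients via the standard LPC-to-cepstrum recursion before classification.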

    Generalized linear mixing model accounting for endmember variability

    Endmember variability is an important factor for accurately unveiling vital information about the pure materials and their distribution in hyperspectral images. Recently, the extended linear mixing model (ELMM) was proposed as a modification of the linear mixing model (LMM) that accounts for endmember variability effects resulting mainly from illumination changes. In this paper, we further generalize the ELMM, leading to a new model (GLMM) that accounts for more complex spectral distortions in which different wavelength intervals can be affected unevenly. We also extend the existing methodology to jointly estimate the variability and the abundances under the GLMM. Simulations with real and synthetic data show that the unmixing process can benefit from the extra flexibility introduced by the GLMM.
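    To make the extra flexibility concrete: where the LMM writes a pixel as y ≈ Ma and the ELMM scales each endmember by a single factor, a GLMM-style model lets the scaling vary with wavelength, y ≈ (M ⊙ Ψ)a. The sketch below alternates nonnegative least squares for the abundances with a closed-form ridge update pulling the per-band scalings toward 1; the alternating scheme and regularizer weight are illustrative, not the paper's estimation algorithm, and the sum-to-one abundance constraint is omitted.

```python
# Toy unmixing of one pixel under a GLMM-style model y ≈ (M * psi) @ a.
import numpy as np
from scipy.optimize import nnls

def unmix_glmm(y, M, n_iter=30, lam=10.0):
    L, R = M.shape                 # bands x endmembers
    psi = np.ones((L, R))          # wavelength-dependent scaling factors
    a = np.zeros(R)
    for _ in range(n_iter):
        a, _ = nnls(M * psi, y)    # abundances with scalings fixed
        for l in range(L):         # scalings with abundances fixed:
            g = M[l] * a           # argmin (y_l - g.psi_l)^2 + lam*||psi_l - 1||^2
            psi[l] = 1.0 + g * (y[l] - g.sum()) / (lam + g @ g)
    return a, psi

# Synthetic example: 50 bands, 3 endmembers, smooth band-dependent distortion.
rng = np.random.default_rng(0)
M = np.abs(rng.normal(0.5, 0.2, size=(50, 3)))
a_true = np.array([0.6, 0.3, 0.1])
psi_true = 1.0 + 0.3 * np.sin(np.linspace(0, np.pi, 50))[:, None]
y = (M * psi_true) @ a_true
a_hat, psi_hat = unmix_glmm(y, M)
print("abundance estimate:", a_hat)
```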

    A Note on the Identifiability of Generalized Linear Mixed Models

    I present here a simple proof that, under general regularity conditions, the standard parametrization of the generalized linear mixed model is identifiable. The proof rests only on the first- and second-order moment assumptions of generalized linear mixed models, together with mild regularity conditions, and is therefore extensible to quasi-likelihood-based generalized linear models. In particular, binomial and Poisson mixed models with a dispersion parameter are identifiable under the standard parametrization.
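    The flavor of such a moment-based argument can be seen in a simple special case (an illustration, not the note's proof): for a Poisson mixed model with log link and a normal random intercept, the first two marginal moments already pin down both the fixed effects and the variance component.

```latex
% Poisson GLMM with log link: y_i | u ~ Poisson(exp(x_i' beta + u)),
% random intercept u ~ N(0, sigma^2). Marginal moments:
\begin{align*}
  \mathbb{E}[y_i]         &= \exp\!\bigl(x_i^\top \beta + \tfrac{1}{2}\sigma^2\bigr),\\
  \operatorname{Var}(y_i) &= \mathbb{E}[y_i] + \bigl(e^{\sigma^2} - 1\bigr)\,\mathbb{E}[y_i]^2 .
\end{align*}
% Given mean mu_i and variance v_i, sigma^2 = log(1 + (v_i - mu_i)/mu_i^2)
% is recovered first; beta then follows from the means whenever the design
% matrix has full column rank, so the parametrization is identifiable.
```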

    Properties of linear integral equations related to the six-vertex model with disorder parameter

    One of the key steps in recent work on the correlation functions of the XXZ chain was to regularize the underlying six-vertex model by a disorder parameter α. For the regularized model it was shown that all static correlation functions are polynomials in only two functions. It was further shown that these two functions can be written as contour integrals involving the solutions of a certain type of linear and non-linear integral equations. The linear integral equations depend parametrically on α and generalize linear integral equations known from the study of the bulk thermodynamic properties of the model. In this note we consider the generalized dressed charge and a generalized magnetization density. We express the generalized dressed charge as a linear combination of two quotients of Q-functions, the solutions of Baxter's t-Q equation. With this result we give a new proof of a lemma on the asymptotics of the generalized magnetization density as a function of the spectral parameter.
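    Linear integral equations of this dressed-charge type are convenient numerically: on a finite contour they reduce, via quadrature, to a dense linear system. The sketch below solves a generic equation Z(x) = 1 + (1/2π) ∫_{-B}^{B} K(x−y) Z(y) dy by Nyström discretization with Gauss-Legendre quadrature; the kernel, sign convention, and endpoint B are illustrative placeholders, not the α-dependent equations of the paper.

```python
# Nyström solution of a dressed-charge-type linear integral equation.
import numpy as np

def dressed_charge(kernel, B=1.0, n=200):
    x, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    x, w = B * x, B * w                         # rescale to [-B, B]
    K = kernel(x[:, None] - x[None, :])
    A = np.eye(n) - K * w[None, :] / (2.0 * np.pi)
    Z = np.linalg.solve(A, np.ones(n))          # Z at the quadrature nodes
    return x, Z

# Example with a smooth, even placeholder kernel:
gamma = 0.7
k = lambda t: np.sin(2 * gamma) / (np.cosh(2 * t) - np.cos(2 * gamma))
nodes, Z = dressed_charge(k, B=2.0, n=300)
print(Z[:5])
```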

    Vector Approximate Message Passing for the Generalized Linear Model

    The generalized linear model (GLM), where a random vector x is observed through a noisy, possibly nonlinear, function of a linear transform output z = Ax, arises in a range of applications such as robust regression, binary classification, quantized compressed sensing, phase retrieval, photon-limited imaging, and inference from neural spike trains. When A is large and i.i.d. Gaussian, the generalized approximate message passing (GAMP) algorithm is an efficient means of MAP or marginal inference, and its performance can be rigorously characterized by a scalar state evolution. For general A, though, GAMP can misbehave. Damping and sequential updating help to robustify GAMP, but their effects are limited. Recently, a "vector AMP" (VAMP) algorithm was proposed for additive white Gaussian noise channels. VAMP extends AMP's guarantees from i.i.d. Gaussian A to the larger class of rotationally invariant A. In this paper, we show how VAMP can be extended to the GLM. Numerical experiments show that the proposed GLM-VAMP is much more robust to ill-conditioning in A than damped GAMP.
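    For reference, the AWGN-channel VAMP that GLM-VAMP builds on alternates an MMSE denoising step with an LMMSE step, exchanging "extrinsic" means and precisions between the two halves. Below is a minimal sketch with a Bernoulli-Gaussian prior on x; problem sizes, prior parameters, initialization, and iteration count are all illustrative.

```python
# VAMP for y = Ax + w with a Bernoulli-Gaussian prior (illustrative sketch).
import numpy as np

rng = np.random.default_rng(1)
N, M, rho, sig_x, gam_w = 400, 250, 0.1, 1.0, 1e4   # gam_w = 1/noise variance

x = rng.binomial(1, rho, N) * rng.normal(0.0, sig_x, N)
A = rng.normal(size=(M, N)) / np.sqrt(N)
y = A @ x + rng.normal(0.0, np.sqrt(1.0 / gam_w), M)

def denoise(r, gam):
    """MMSE denoiser for the Bernoulli-Gaussian prior and its mean slope."""
    v = 1.0 / gam                                   # variance of r around x
    c = sig_x**2 / (sig_x**2 + v)                   # slab shrinkage factor
    logit = (np.log(rho / (1.0 - rho))
             + 0.5 * np.log(v / (sig_x**2 + v))
             + 0.5 * r**2 * (1.0 / v - 1.0 / (sig_x**2 + v)))
    pi = 1.0 / (1.0 + np.exp(-np.clip(logit, -30, 30)))  # P(x_i != 0 | r_i)
    xhat = pi * c * r
    dlogit = r * (1.0 / v - 1.0 / (sig_x**2 + v))
    slope = pi * c + c * r * pi * (1.0 - pi) * dlogit    # d xhat / d r
    return xhat, float(np.mean(slope))

r1, gam1 = np.zeros(N), 1.0
AtA, Aty, I = A.T @ A, A.T @ y, np.eye(N)
for _ in range(30):
    x1, alpha1 = denoise(r1, gam1)              # denoising half
    eta1 = gam1 / alpha1
    gam2 = max(eta1 - gam1, 1e-9)               # extrinsic precision out
    r2 = (eta1 * x1 - gam1 * r1) / gam2
    C = np.linalg.inv(gam_w * AtA + gam2 * I)   # LMMSE half
    x2 = C @ (gam_w * Aty + gam2 * r2)
    alpha2 = gam2 * np.trace(C) / N
    eta2 = gam2 / alpha2
    gam1 = max(eta2 - gam2, 1e-9)               # extrinsic precision back
    r1 = (eta2 * x2 - gam2 * r2) / gam1

print("NMSE (dB):", 10 * np.log10(np.sum((x2 - x) ** 2) / np.sum(x**2)))
```

    The GLM extension replaces the Gaussian-likelihood LMMSE half with an inference step matched to the nonlinear output channel, which is what the paper develops.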