    The use of mixtures for dealing with non-normal regression errors

    In many situations, the distribution of the error terms of a linear regression model departs significantly from normality. It is shown, through a simulation study, that an effective strategy for dealing with these situations is to fit a regression model based on the assumption that the error terms follow a mixture of normal distributions. The main advantage, with respect to the usual approach based on the least-squares method, is greater precision of the parameter estimates and confidence intervals. For parameter estimation we make use of the EM algorithm, while confidence intervals are constructed through a bootstrap method.
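
    As a rough illustration of this approach, the following sketch fits a linear model whose errors follow a K-component zero-mean normal mixture via EM. Function names, initialization choices, and the weighted-least-squares M-step are our own reading of the standard EM recipe, not the paper's exact procedure; the bootstrap confidence intervals are omitted.

```python
import numpy as np
from scipy.stats import norm

def em_mixture_regression(X, y, K=2, n_iter=200, seed=0):
    """Fit y = X @ beta + e, where e follows a K-component zero-mean
    normal mixture, via EM (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]       # OLS starting point
    resid = y - X @ beta
    pi = np.full(K, 1.0 / K)
    sigma = np.std(resid) * rng.uniform(0.5, 1.5, K)  # perturbed scale starts
    for _ in range(n_iter):
        # E-step: responsibility of each mixture component for each residual
        dens = np.stack([pi[k] * norm.pdf(resid, 0.0, sigma[k])
                         for k in range(K)], axis=1)
        gamma = dens / dens.sum(axis=1, keepdims=True)
        # M-step: mixing weights and variances, then weighted least squares
        pi = gamma.mean(axis=0)
        sigma = np.sqrt((gamma * resid[:, None] ** 2).sum(axis=0)
                        / gamma.sum(axis=0))
        w = (gamma / sigma ** 2).sum(axis=1)          # per-point precision
        Xw = X * w[:, None]
        beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)
        resid = y - X @ beta
    return beta, pi, sigma
```

    Confidence intervals in the spirit of the abstract could then be obtained by resampling the data (or the residuals), refitting with this routine, and taking percentiles of the resulting estimates.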

    Modelling Background Noise in Finite Mixtures of Generalized Linear Regression Models

    In this paper we show how only a few outliers can completely break down EM estimation of mixtures of regression models. A simple yet very effective way of dealing with this problem is to use a component in which all regression parameters are fixed to zero to model the background noise. This noise component can easily be defined for different types of generalized linear models, has a familiar interpretation as the empty regression model, and is not very sensitive with respect to its own parameters.
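
    A minimal sketch of the idea, assuming Gaussian responses: an EM fit of K linear regression components plus one extra "noise" component whose regression coefficients are pinned at zero, so only its mixing weight enters the updates. The fixed noise scale `noise_sd` is a tuning choice we introduce here for illustration, not a quantity from the paper.

```python
import numpy as np
from scipy.stats import norm

def em_regressions_with_noise(X, y, K=2, noise_sd=None, n_iter=200, seed=0):
    """EM for a K-component mixture of linear regressions plus one
    background-noise component with all coefficients fixed at zero
    (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    betas = rng.normal(size=(K, p))
    sigmas = np.full(K, np.std(y))
    if noise_sd is None:
        noise_sd = 3.0 * np.std(y)   # assumption: wide, fixed noise scale
    pi = np.full(K + 1, 1.0 / (K + 1))
    for _ in range(n_iter):
        # E-step: K regression densities plus the zero-mean noise density
        dens = np.column_stack(
            [pi[k] * norm.pdf(y, X @ betas[k], sigmas[k]) for k in range(K)]
            + [pi[K] * norm.pdf(y, 0.0, noise_sd)])
        gamma = dens / dens.sum(axis=1, keepdims=True)
        pi = gamma.mean(axis=0)
        # M-step: weighted least squares per regression component only;
        # the noise component's coefficients stay fixed at zero.
        for k in range(K):
            w = gamma[:, k]
            Xw = X * w[:, None]
            betas[k] = np.linalg.solve(X.T @ Xw, Xw.T @ y)
            r = y - X @ betas[k]
            sigmas[k] = np.sqrt((w * r ** 2).sum() / w.sum())
    return betas, sigmas, pi
```

    Because outliers are absorbed by the zero-coefficient component, they receive little weight in the weighted least-squares updates of the genuine regression components, which is the breakdown protection the abstract describes.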

    High-Dimensional Regression with Gaussian Mixtures and Partially-Latent Response Variables

    In this work we address the problem of approximating high-dimensional data with a low-dimensional representation. We make the following contributions. We propose an inverse regression method which exchanges the roles of input and response, such that the low-dimensional variable becomes the regressor, and which is tractable. We introduce a mixture of locally-linear probabilistic mappings that starts by estimating the parameters of the inverse regression and then infers closed-form solutions for the forward parameters of the high-dimensional regression problem of interest. Moreover, we introduce a partially-latent paradigm, in which the vector-valued response variable is composed of both observed and latent entries, and which can therefore deal with data contaminated by experimental artifacts that cannot be explained with noise models. The proposed probabilistic formulation can be viewed as a latent-variable augmentation of regression. We devise expectation-maximization (EM) procedures based on a data-augmentation strategy that facilitates the maximum-likelihood search over the model parameters. We propose two augmentation schemes and describe in detail the associated EM inference procedures, which may be viewed as generalizations of a number of EM regression, dimension-reduction, and factor-analysis algorithms. The proposed framework is validated with both synthetic and real data. We provide experimental evidence that our method outperforms several existing regression techniques.
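
    The closed-form forward step can be illustrated with plain Gaussian conditioning: fit a joint mixture on the concatenated (low-dimensional Z, high-dimensional Y) and derive E[Z | Y] per component. This sketch uses a full-covariance GMM rather than the paper's parameter-efficient inverse-regression parameterization, and it omits the partially-latent entries; all names are illustrative.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

def fit_joint_gmm(Z, Y, K=5, seed=0):
    """Fit a K-component Gaussian mixture on the joint (Z, Y) space.
    Gaussian conditioning then yields the forward predictor E[Z | Y]
    in closed form (sketch, not the paper's exact EM)."""
    gmm = GaussianMixture(n_components=K, covariance_type='full',
                          random_state=seed)
    gmm.fit(np.hstack([Z, Y]))
    return gmm

def predict_forward(gmm, Ynew, dz):
    """Forward prediction: mixture of per-component conditional means
    E[Z | Y, k], weighted by the posterior p(k | y)."""
    means, covs, w = gmm.means_, gmm.covariances_, gmm.weights_
    K, n = len(w), Ynew.shape[0]
    log_resp = np.empty((n, K))
    cond_means = np.empty((K, n, dz))
    for k in range(K):
        mu_z, mu_y = means[k, :dz], means[k, dz:]
        S_zy = covs[k][:dz, dz:]
        S_yy = covs[k][dz:, dz:]
        # responsibilities from the marginal of Y under component k
        log_resp[:, k] = np.log(w[k]) + multivariate_normal.logpdf(
            Ynew, mu_y, S_yy)
        # Gaussian conditioning: E[z | y, k] = mu_z + S_zy S_yy^{-1} (y - mu_y)
        cond_means[k] = mu_z + (Ynew - mu_y) @ np.linalg.solve(S_yy, S_zy.T)
    resp = np.exp(log_resp - log_resp.max(axis=1, keepdims=True))
    resp /= resp.sum(axis=1, keepdims=True)
    return np.einsum('nk,knd->nd', resp, cond_means)
```

    The point of the paper's inverse parameterization is precisely to avoid the large full covariances over high-dimensional Y that this naive sketch estimates; it learns the cheap low-to-high mapping first and recovers the forward conditional analytically.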