
    ESTIMATION AND ASYMPTOTIC THEORY FOR A NEW CLASS OF MIXTURE MODELS

    In this paper a new mixture-of-distributions model is proposed, in which the mixing structure is determined by a smooth transition tree architecture. Models based on mixtures of distributions are useful for approximating unknown conditional distributions of multivariate data. The tree structure yields a model that is simpler, and in some cases more interpretable, than previous proposals in the literature. A quasi-maximum likelihood estimator based on the Expectation-Maximization (EM) algorithm is derived, and its asymptotic properties are established under mild regularity conditions. In addition, a specific-to-general model building strategy is proposed in order to avoid possible identification problems. Both the estimation procedure and the model building strategy are evaluated in a Monte Carlo experiment, which gives strong support for the theory developed in small samples. The approximation capabilities of the model are also analyzed in a simulation experiment. Finally, two applications with real datasets are considered.
    KEYWORDS: mixture models, smooth transition, EM algorithm, asymptotic properties, time series, conditional distribution.
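    The abstract above centres on estimation via the Expectation-Maximization (EM) algorithm. The following minimal Python sketch illustrates the alternation between an E-step and an M-step for a plain two-component normal mixture; the paper's smooth transition tree gating and its quasi-maximum likelihood theory are not reproduced, and all function and variable names are illustrative assumptions.

    import numpy as np

    def em_two_component_normal_mixture(y, n_iter=100):
        # Illustrative EM for a two-component univariate normal mixture;
        # not the smooth-transition-tree model of the paper.
        pi = 0.5                                          # mixing weight of component 1
        mu = np.array([y.min(), y.max()], dtype=float)    # crude initial means
        sigma = np.array([y.std(), y.std()])              # crude initial standard deviations

        def normal_pdf(x, m, s):
            return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

        for _ in range(n_iter):
            # E-step: posterior probability that each observation belongs to component 1
            p1 = pi * normal_pdf(y, mu[0], sigma[0])
            p2 = (1.0 - pi) * normal_pdf(y, mu[1], sigma[1])
            r = p1 / (p1 + p2)

            # M-step: responsibility-weighted updates of the parameters
            pi = r.mean()
            mu[0] = np.sum(r * y) / np.sum(r)
            mu[1] = np.sum((1.0 - r) * y) / np.sum(1.0 - r)
            sigma[0] = np.sqrt(np.sum(r * (y - mu[0]) ** 2) / np.sum(r))
            sigma[1] = np.sqrt(np.sum((1.0 - r) * (y - mu[1]) ** 2) / np.sum(1.0 - r))
        return pi, mu, sigma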

    Local-global neural networks: a new approach for nonlinear time series modelling

    In this paper, the Local-Global Neural Networks model is proposed within the context of time series models. This formulation encompasses some existing nonlinear models and also admits the mixture-of-experts approach. We place emphasis on the linear expert case and extensively discuss the theoretical aspects of the model: stationarity conditions; existence, consistency, and asymptotic normality of the parameter estimates; and model identifiability. A model building strategy is also considered, and the whole procedure is illustrated with two real time series.
    Keywords: neural networks, nonlinear models, time series, model identifiability, parameter estimation, model building, sunspot number.
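    The model above combines linear experts through a smooth gating mechanism. As a rough illustration of the general mixture-of-experts forecasting idea (not the exact local-global specification), the sketch below produces a one-step point forecast by weighting linear experts with softmax gating weights; the function name and the gating form are assumptions made for illustration.

    import numpy as np

    def moe_linear_forecast(x_lags, expert_coefs, gate_coefs):
        # x_lags       : (p,) vector of lagged values of the series
        # expert_coefs : (K, p + 1) array, intercept plus p slopes per linear expert
        # gate_coefs   : (K, p + 1) array of gating parameters
        z = np.concatenate(([1.0], x_lags))   # regressor vector with intercept
        experts = expert_coefs @ z            # each expert's linear forecast
        scores = gate_coefs @ z
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()              # normalised (softmax) gating weights
        return float(weights @ experts)       # mixture forecast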

    Mixture of linear experts model for censored data: A novel approach with scale-mixture of normal distributions

    The classical mixture of linear experts (MoE) model is one of the most widespread statistical frameworks for modeling, classification, and clustering of data. Built on the assumption of normal error terms for mathematical and computational convenience, the classical MoE model faces two challenges: 1) it is sensitive to atypical observations and outliers, and 2) it can produce misleading inferential results for censored data. This paper aims to resolve both challenges simultaneously by proposing a novel robust MoE model for model-based clustering and discriminant analysis of censored data, with the scale-mixture of normals class of distributions for the unobserved error terms. Based on this model, we develop an analytical expectation-maximization (EM) type algorithm to obtain the maximum likelihood parameter estimates. Simulation studies are carried out to examine the performance, effectiveness, and robustness of the proposed methodology. Finally, real data are used to illustrate the superiority of the new model.
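    The robustness in the model above comes from replacing normal errors with members of the scale-mixture of normals class. As a small illustration of that representation (not of the paper's censored-data EM algorithm), the sketch below draws Student-t errors as normals with gamma-distributed precision; the function name and interface are assumptions.

    import numpy as np

    def student_t_errors_via_scale_mixture(n, df, seed=None):
        # If w ~ Gamma(df/2, rate = df/2) and e | w ~ N(0, 1/w), then e ~ t_df.
        # Heavy-tailed members of the scale-mixture of normals class arise this way.
        rng = np.random.default_rng(seed)
        w = rng.gamma(shape=df / 2.0, scale=2.0 / df, size=n)  # latent mixing weights
        return rng.normal(loc=0.0, scale=1.0 / np.sqrt(w))     # heavy-tailed errors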

    Modeling nonlinearities with mixtures-of-experts of time series models

    We discuss a class of nonlinear models based on mixtures-of-experts of exponential-family time series regression models, where the covariates include functions of lags of the dependent variable as well as external covariates. The discussion covers results on model identifiability, stochastic stability, parameter estimation via maximum likelihood, and model selection via standard information criteria. Applications using real and simulated data are presented to illustrate how mixtures-of-experts of time series models can be employed both for data description, where the usual mixture structure based on an unobserved latent variable may be particularly important, and for prediction, where only the mixtures-of-experts flexibility matters.
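    Since the abstract mentions model selection via standard information criteria, the short sketch below ranks candidate mixture-of-experts specifications by BIC given their maximised log-likelihoods; it is a generic helper under assumed inputs, not code from the paper, and the names are illustrative.

    import numpy as np

    def select_by_bic(logliks, n_params, n_obs):
        # logliks  : maximised log-likelihood of each candidate model
        # n_params : number of free parameters of each candidate model
        # n_obs    : effective sample size
        bic = [-2.0 * ll + k * np.log(n_obs) for ll, k in zip(logliks, n_params)]
        return int(np.argmin(bic)), bic      # index of the preferred model and all BIC values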