
    Linear Mixed Models with Marginally Symmetric Nonparametric Random Effects

    Linear mixed models (LMMs) are an important tool in the data analysis of repeated measures and longitudinal studies. The most common form of LMM uses a normal distribution to model the random effects. This assumption can lead to misspecification errors when the random effects are not normal. One approach to remedying such errors is to model the random effects with a point-mass distribution; this is known as the nonparametric maximum likelihood (NPML) model. The NPML model is flexible but requires a large number of parameters to characterize the random-effects distribution. It is often natural to assume that the random-effects distribution is at least marginally symmetric. The marginally symmetric NPML (MSNPML) random-effects model is introduced, which assumes a marginally symmetric point-mass distribution for the random effects. Under the symmetry assumption, the MSNPML model requires half as many parameters as the NPML model to characterize the same number of point masses, and thus confers an advantage in economy and parsimony. An EM-type algorithm is presented for the maximum likelihood (ML) estimation of LMMs with MSNPML random effects; the algorithm is shown to monotonically increase the log-likelihood and, when convergent, to converge to a stationary point of the log-likelihood function. Furthermore, the ML estimator is shown to be consistent and asymptotically normal under certain conditions, and the estimation of quantities such as the random-effects covariance matrix and individual a posteriori expectations is demonstrated.
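
    As a concrete illustration of the symmetry device (not the paper's EM estimator), the sketch below evaluates the marginal log-likelihood of a random-intercept LMM whose random effect follows a point-mass distribution symmetric about zero: each positive mass point is mirrored at its negative with the same weight, so only half of the support points need to be stored. The model form, function, and variable names are assumptions for illustration.

        # Minimal sketch, assuming a random-intercept LMM with a marginally
        # symmetric point-mass random-effects distribution: mass points xi_k
        # with weights pi_k are mirrored at -xi_k with the same weights.
        import numpy as np
        from scipy.stats import multivariate_normal

        def msnpml_loglik(y_list, X_list, beta, sigma2, xi, pi):
            """y_list[i], X_list[i]: response vector and design matrix of subject i;
            xi, pi: positive mass points and their weights (each shared with -xi)."""
            pi = np.asarray(pi) / (2.0 * np.sum(pi))   # split each weight over +/- xi
            ll = 0.0
            for y, X in zip(y_list, X_list):
                mean = X @ beta
                cov = sigma2 * np.eye(len(y))
                dens = sum(w * (multivariate_normal.pdf(y, mean + b, cov) +
                                multivariate_normal.pdf(y, mean - b, cov))
                           for b, w in zip(xi, pi))
                ll += np.log(dens)
            return ll

        # Toy usage with two subjects and two symmetric mass-point pairs.
        rng = np.random.default_rng(0)
        X_list = [rng.normal(size=(5, 2)) for _ in range(2)]
        y_list = [X @ np.array([1.0, -0.5]) + rng.normal(scale=0.3, size=5) for X in X_list]
        print(msnpml_loglik(y_list, X_list, np.array([1.0, -0.5]), 0.09,
                            xi=[0.5, 1.5], pi=[0.7, 0.3]))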

    Iteratively-Reweighted Least-Squares Fitting of Support Vector Machines: A Majorization--Minimization Algorithm Approach

    Support vector machines (SVMs) are an important tool in modern data analysis. Traditionally, SVMs have been fitted via quadratic programming, using either purpose-built or off-the-shelf algorithms. We present an alternative approach to SVM fitting via the majorization--minimization (MM) paradigm. Algorithms derived via MM constructions can be shown to monotonically decrease their objectives at each iteration and to be globally convergent to stationary points. We demonstrate the construction of iteratively-reweighted least-squares (IRLS) algorithms, via the MM paradigm, for SVM risk minimization problems involving the hinge, least-squares, squared-hinge, and logistic losses, and 1-norm, 2-norm, and elastic net penalizations. Successful implementations of our algorithms are presented via numerical examples. An illustrative sketch of one such IRLS scheme follows below.
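
    A minimal sketch of an IRLS scheme of this kind, assuming the hinge loss with a 2-norm (ridge) penalty and the standard quadratic majorizer of the absolute value; it illustrates the flavour of the approach rather than reproducing the paper's exact algorithm, and all names are hypothetical.

        # Sketch: IRLS fitting of a linear SVM (hinge loss + 2-norm penalty) via an MM
        # majorization of the hinge h(v) = max(0, 1 - v) = (|1 - v| + (1 - v)) / 2,
        # using |z| <= z^2 / (2|z0|) + |z0| / 2. Not the paper's exact algorithm.
        import numpy as np

        def svm_irls(X, y, lam=1.0, n_iter=100, eps=1e-6):
            """X: (n, p) features, y: labels in {-1, +1}, lam: ridge penalty weight."""
            n, p = X.shape
            beta = np.zeros(p)
            for _ in range(n_iter):
                margins = y * (X @ beta)
                r = np.maximum(np.abs(1.0 - margins), eps)   # guard against division by zero
                w = 1.0 / (2.0 * r)                          # IRLS weights from the majorizer
                A = X.T @ (w[:, None] * X) + 2.0 * lam * np.eye(p)
                b = X.T @ ((w + 0.5) * y)
                beta = np.linalg.solve(A, b)                 # weighted least-squares update
            return beta

        # Toy usage on a nearly separable problem.
        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 3))
        y = np.sign(X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=200))
        beta_hat = svm_irls(X, y, lam=0.1)
        print("training accuracy:", np.mean(np.sign(X @ beta_hat) == y))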

    Maximum Likelihood Estimation of Triangular and Polygonal Distributions

    Triangular distributions are a well-known class of distributions that are often used as an elementary example of a probability model. In the past, enumeration and order statistic-based methods have been suggested for the maximum likelihood (ML) estimation of such distributions. A novel parametrization of triangular distributions is presented. The parametrization allows for the construction of an MM (minorization--maximization) algorithm for the ML estimation of triangular distributions. The algorithm is shown both to monotonically increase the likelihood evaluations and to be globally convergent. The parametrization is then applied to construct an MM algorithm for the ML estimation of polygonal distributions; this algorithm is shown to have the same numerical properties as that for the triangular distribution. Numerical simulations are provided to demonstrate the performance of the new algorithms against established enumeration and order statistic-based methods, a sketch of which is given below.
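
    For context, a minimal sketch of the established enumeration approach mentioned above (not the paper's MM algorithm): for a triangular density on [0, 1] with unknown mode c, the likelihood is maximized at one of the order statistics, so the ML estimate of c can be found by evaluating the log-likelihood at each sample point. Names are hypothetical.

        # Classical enumeration / order-statistic approach to ML estimation of a
        # triangular distribution on [0, 1] with unknown mode c: evaluate the
        # log-likelihood at each sample point and keep the maximizer.
        import numpy as np

        def triangular_loglik(x, c):
            """Log-likelihood of mode c for data x in (0, 1)."""
            left, right = x[x <= c], x[x > c]
            ll = len(x) * np.log(2.0)
            if len(left):
                ll += np.sum(np.log(left)) - len(left) * np.log(c)
            if len(right):
                ll += np.sum(np.log1p(-right)) - len(right) * np.log1p(-c)
            return ll

        def triangular_mle(x):
            """Enumerate the order statistics and return the maximizing mode."""
            candidates = np.unique(x)
            return max(candidates, key=lambda c: triangular_loglik(x, c))

        # Toy usage: data simulated from a triangular distribution with mode 0.3.
        rng = np.random.default_rng(2)
        x = rng.triangular(left=0.0, mode=0.3, right=1.0, size=2000)
        print("estimated mode:", triangular_mle(x))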

    Higher Order Effects in the Dielectric Constant of Percolative Metal-Insulator Systems above the Critical Point

    The dielectric constant of a conductor-insulator mixture shows a pronounced maximum above the critical volume concentration. Further experimental evidence is presented, as well as a theoretical consideration based on a phenomenological equation. Explicit expressions are given for the position of the maximum in terms of scaling parameters and the (complex) conductances of the conductor and insulator. In order to fit some of the data, a volume-fraction-dependent expression for the conductivity of the more highly conductive component is introduced.

    Mixtures of Spatial Spline Regressions

    We present an extension of the functional data analysis framework for univariate functions to the analysis of surfaces: functions of two variables. The spatial spline regression (SSR) approach developed can be used to model surfaces that are sampled over a rectangular domain. Furthermore, combining SSR with linear mixed effects models (LMMs) allows for the analysis of populations of surfaces, and combining the joint SSR-LMM method with finite mixture models allows for the analysis of populations of surfaces with sub-family structures. Through the mixtures of spatial spline regressions (MSSR) approach developed, we present methodologies for clustering surfaces into sub-families and for performing surface-based discriminant analysis. The effectiveness of our methodologies, as well as the modeling capabilities of the SSR model, is assessed through an application to handwritten character recognition.
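
    As an illustrative stand-in for the surface-fitting step only (not the SSR estimator, and without the LMM or mixture layers), the sketch below fits a smooth surface to noisy observations scattered over a rectangular domain using SciPy's bivariate smoothing spline; the data and smoothing level are assumptions for illustration.

        # Sketch: smoothing a noisy surface sampled over a rectangular domain,
        # using SciPy's SmoothBivariateSpline as a simple stand-in for the
        # spatial spline regression (SSR) smoother described in the abstract.
        import numpy as np
        from scipy.interpolate import SmoothBivariateSpline

        rng = np.random.default_rng(3)
        x = rng.uniform(0.0, 1.0, size=500)
        y = rng.uniform(0.0, 1.0, size=500)
        true_surface = np.sin(2 * np.pi * x) * np.cos(np.pi * y)   # hypothetical target surface
        z = true_surface + rng.normal(scale=0.1, size=500)

        surf = SmoothBivariateSpline(x, y, z, s=len(z) * 0.01)     # penalized least-squares fit
        rmse = np.sqrt(np.mean((surf.ev(x, y) - true_surface) ** 2))
        print("RMSE at the sampled locations:", rmse)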

    A Block Minorization--Maximization Algorithm for Heteroscedastic Regression

    The computation of the maximum likelihood (ML) estimator for heteroscedastic regression models is considered. Traditional Newton algorithms for the problem require matrix multiplications and inversions, which are bottlenecks in modern Big Data contexts. A new Big Data-appropriate minorization--maximization (MM) algorithm is considered for the computation of the ML estimator. The MM algorithm is proved to generate monotonically increasing sequences of likelihood values and to converge to a stationary point of the log-likelihood function. A distributed and parallel implementation of the MM algorithm is presented, and the MM algorithm is shown to have a different time complexity from the Newton algorithm. Simulation studies demonstrate that the MM algorithm improves upon the computation time of the Newton algorithm in some practical scenarios where the number of observations is large.
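
    To fix ideas (this is not the paper's MM algorithm), a minimal sketch of ML estimation in an assumed heteroscedastic linear model y_i ~ N(x_i'beta, exp(z_i'gamma)), alternating a weighted least-squares update for beta with a step-halved Fisher-scoring update for gamma so that the log-likelihood never decreases. The model form and all names are assumptions.

        # Sketch: block-wise ML estimation for a heteroscedastic linear model with
        # a log-linear variance, y_i ~ N(x_i' beta, exp(z_i' gamma)).
        import numpy as np

        def hetero_ml(X, Z, y, n_iter=100):
            beta = np.linalg.lstsq(X, y, rcond=None)[0]
            gamma = np.zeros(Z.shape[1])

            def loglik(beta, gamma):
                lv = Z @ gamma
                r2 = (y - X @ beta) ** 2
                return -0.5 * np.sum(lv + r2 * np.exp(-lv))

            for _ in range(n_iter):
                # Exact weighted least-squares update for beta given gamma.
                w = np.exp(-Z @ gamma)
                beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
                # Fisher-scoring step for gamma given beta, with step halving.
                r2 = (y - X @ beta) ** 2
                score = Z.T @ (r2 * np.exp(-Z @ gamma) - 1.0) / 2.0
                step = np.linalg.solve(Z.T @ Z / 2.0, score)
                t, base = 1.0, loglik(beta, gamma)
                while loglik(beta, gamma + t * step) < base and t > 1e-8:
                    t *= 0.5                       # halve until the likelihood does not decrease
                gamma = gamma + t * step
            return beta, gamma

        # Toy usage: the error variance increases with the second covariate.
        rng = np.random.default_rng(4)
        n = 5000
        X = np.column_stack([np.ones(n), rng.normal(size=n)])
        Z = X.copy()
        y = X @ np.array([1.0, 2.0]) + rng.normal(size=n) * np.exp(0.5 * (Z @ np.array([-1.0, 0.8])))
        print(hetero_ml(X, Z, y))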

    Approximation by finite mixtures of continuous density functions that vanish at infinity

    Given sufficiently many components, it is often cited that finite mixture models can approximate any other probability density function (pdf) to an arbitrary degree of accuracy. Unfortunately, the nature of this approximation result is often left unclear. We prove that finite mixture models constructed from pdfs in $\mathcal{C}_{0}$ can be used to conduct approximation of various classes of approximands in a number of different modes. That is, we prove that approximands in $\mathcal{C}_{0}$ can be uniformly approximated, approximands in $\mathcal{C}_{b}$ can be uniformly approximated on compact sets, and approximands in $\mathcal{L}_{p}$ can be approximated with respect to the $\mathcal{L}_{p}$ norm, for $p\in\left[1,\infty\right)$. Furthermore, we also prove that measurable functions can be approximated almost everywhere.
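
    A small numerical illustration of the flavour of this result (not of its proof): a target density in $\mathcal{C}_{0}$ is approximated by an m-component Gaussian mixture whose components sit on a grid with weights proportional to the target's values there, and the numerical $\mathcal{L}_{1}$ error shrinks as m grows. The construction and names are assumptions for illustration only.

        # Approximating a Laplace density (a member of C0) by an m-component
        # Gaussian mixture centred on a grid; the L1 error shrinks as m grows.
        import numpy as np
        from scipy.stats import norm

        def target(x):
            return 0.5 * np.exp(-np.abs(x))            # standard Laplace density

        x_fine = np.linspace(-12.0, 12.0, 20001)       # fine grid for numerical integration
        dx = x_fine[1] - x_fine[0]

        for m in (10, 50, 250):
            centres = np.linspace(-8.0, 8.0, m)
            h = centres[1] - centres[0]                # bandwidth tied to the grid spacing
            weights = target(centres)
            weights = weights / weights.sum()
            mixture = (weights[:, None] *
                       norm.pdf(x_fine[None, :], loc=centres[:, None], scale=h)).sum(axis=0)
            l1 = np.sum(np.abs(mixture - target(x_fine))) * dx
            print(f"m = {m:4d}  approximate L1 error = {l1:.3f}")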