Asymptotic Normality of the Maximum Pseudolikelihood Estimator for Fully Visible Boltzmann Machines
Boltzmann machines (BMs) are a class of binary neural networks for which
numerous estimation methods have been proposed. Recently, it has been
shown that in the fully visible case of the BM, the method of maximum
pseudolikelihood estimation (MPLE) results in parameter estimates that are
consistent in the probabilistic sense. In this article, we further
investigate the properties of MPLE for fully visible BMs and prove that MPLE
also yields an asymptotically normal parameter estimator. These results can
be used to construct confidence intervals and to test statistical hypotheses.
We support our theoretical results by showing in a simulation study that the
estimator behaves as expected.
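To make the estimator concrete, the following is a minimal sketch of MPLE for a fully visible BM on spins in {-1, +1}^d, under the common parametrization p(x) proportional to exp(x'Ax/2 + b'x) with A symmetric and zero-diagonal; the conditional of each spin given the others is then logistic, so the log-pseudolikelihood is a sum of logistic log-likelihoods. The parametrization, optimizer choice, and all names here are illustrative assumptions, not necessarily the article's exact setup.

```python
# Minimal sketch of maximum pseudolikelihood estimation (MPLE) for a fully
# visible Boltzmann machine on spins x in {-1, +1}^d, assuming the common
# parametrization p(x) ~ exp(x' A x / 2 + b' x) with A symmetric and
# zero-diagonal.  Each conditional P(x_i | x_-i) is logistic, so the negative
# log-pseudolikelihood is a sum of logistic losses.  Names are illustrative.
import numpy as np
from scipy.optimize import minimize

def neg_log_pseudolikelihood(theta, X):
    """Average negative log-pseudolikelihood of (A, b) given data X (n x d)."""
    n, d = X.shape
    b = theta[:d]
    A = np.zeros((d, d))
    iu = np.triu_indices(d, k=1)
    A[iu] = theta[d:]
    A = A + A.T                       # symmetric with zero diagonal
    field = X @ A + b                 # local field b_i + sum_j A_ij x_j
    # -log P(x_i | x_-i) = log(1 + exp(-2 x_i field_i)); log1p for stability
    return np.sum(np.log1p(np.exp(-2.0 * X * field))) / n

def fit_mple(X):
    d = X.shape[1]
    theta0 = np.zeros(d + d * (d - 1) // 2)
    res = minimize(neg_log_pseudolikelihood, theta0, args=(X,),
                   method="L-BFGS-B")
    return res.x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.choice([-1.0, 1.0], size=(500, 4))  # placeholder spin data
    print(fit_mple(X)[:4])                      # estimated bias terms b
```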
Linear Mixed Models with Marginally Symmetric Nonparametric Random Effects
Linear mixed models (LMMs) are an important tool in the analysis of
repeated-measures and longitudinal data. The most common form of LMM
utilizes a normal distribution to model the random effects. Such assumptions can
often lead to misspecification errors when the random effects are not normal.
One approach to remedy these errors is to utilize a point-mass
distribution to model the random effects; this is known as the nonparametric
maximum likelihood (NPML) model. The NPML model is flexible but requires
a large number of parameters to characterize the random-effects distribution.
It is often natural to assume that the random-effects distribution is at least
marginally symmetric. The marginally symmetric NPML (MSNPML) random-effects
model is introduced, which assumes a marginally symmetric point-mass
distribution for the random effects. Under the symmetry assumption, the MSNPML
model utilizes half the number of parameters to characterize the same number of
point masses as the NPML model; thus the model confers an advantage in economy
and parsimony. An EM-type algorithm is presented for the maximum likelihood
(ML) estimation of LMMs with MSNPML random effects; the algorithm is shown to
monotonically increase the log-likelihood and, in the case of convergence, the
limit of its iterates is proven to be a stationary point of the log-likelihood
function.
Furthermore, it is shown that the ML estimator is consistent and asymptotically
normal under certain conditions, and the estimation of quantities such as the
random-effects covariance matrix and individual a posteriori expectations is
demonstrated.
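As an illustration of the symmetry constraint, the sketch below evaluates the marginal likelihood of one subject in a random-intercept LMM whose random effect sits on paired mass points +/-xi_k that share a single weight pi_k, which is where the halving of the parameter count comes from. The single-random-intercept setting and all names are simplifying assumptions, not the paper's general formulation.

```python
# Minimal sketch of the per-subject marginal likelihood under an
# MSNPML-style marginally symmetric point-mass random effect: the random
# intercept u takes values +xi_k and -xi_k, each with probability pi_k / 2,
# so each symmetric pair is characterized by one location and one weight.
# The model y = X beta + u 1 + e with normal errors is an assumed special case.
import numpy as np
from scipy.stats import norm

def marginal_loglik(y, X, beta, sigma, xis, pis):
    """Log marginal likelihood of one subject's responses y (n_i,)."""
    resid = y - X @ beta
    total = 0.0
    for xi, pi in zip(xis, pis):
        for u in (xi, -xi):          # the symmetric pair shares weight pi
            total += (pi / 2) * np.prod(norm.pdf(resid - u, scale=sigma))
    return np.log(total)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 2))                       # placeholder design
    y = X @ np.array([1.0, -1.0]) + 0.5 + rng.normal(scale=0.3, size=5)
    print(marginal_loglik(y, X, np.array([1.0, -1.0]), 0.3,
                          xis=[0.5, 1.5], pis=[0.6, 0.4]))
```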
Maximum Likelihood Estimation of Triangular and Polygonal Distributions
Triangular distributions are a well-known class of distributions that are
often used as an elementary example of a probability model. In the past,
enumeration and order statistic-based methods have been suggested for the
maximum likelihood (ML) estimation of such distributions. A novel
parametrization of triangular distributions is presented. The parametrization
allows for the construction of an MM (minorization--maximization) algorithm for
the ML estimation of triangular distributions. The algorithm is shown both to
monotonically increase the likelihood evaluations and to be globally
convergent. The parametrization is then applied to construct an MM algorithm
for the ML estimation of polygonal distributions. This algorithm is shown to
have the same numerical properties as the algorithm for triangular
distributions. Numerical simulations are provided to demonstrate the
performance of the new algorithms against established enumeration and
order-statistic-based methods.
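For reference, the enumeration/order-statistic baseline mentioned above is straightforward to sketch: for a triangular distribution on [0, 1] with mode c, the ML estimate of c is known to occur at one of the sample order statistics, so it suffices to evaluate the log-likelihood at each data point. The sketch below implements that baseline only (not the paper's MM updates), with illustrative names.

```python
# Minimal sketch of order-statistic enumeration for the ML estimate of the
# mode c of a triangular distribution on [0, 1], with density 2x/c for x <= c
# and 2(1 - x)/(1 - c) for x > c.  The MLE lies at a sample order statistic,
# so we evaluate the log-likelihood at every data point and take the best.
import numpy as np

def loglik(c, x):
    left = np.log(2 * x[x <= c]) - np.log(c)
    right = np.log(2 * (1 - x[x > c])) - np.log(1 - c)
    return left.sum() + right.sum()

def mle_mode(x):
    """Return the order statistic maximizing the triangular log-likelihood."""
    candidates = np.sort(x)
    return max(candidates, key=lambda c: loglik(c, x))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = rng.triangular(0.0, 0.3, 1.0, size=200)  # true mode 0.3
    print(mle_mode(x))                           # estimate should be near 0.3
```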
Iteratively-Reweighted Least-Squares Fitting of Support Vector Machines: A Majorization--Minimization Algorithm Approach
Support vector machines (SVMs) are an important tool in modern data analysis.
Traditionally, SVMs have been fitted via quadratic programming, using either
purpose-built or off-the-shelf algorithms. We present an alternative approach
to SVM fitting via the majorization--minimization (MM) paradigm. Algorithms
derived via MM constructions can be shown to monotonically decrease their
objectives at each iteration, as well as to be globally convergent to
stationary points. We demonstrate the construction of
iteratively-reweighted least-squares (IRLS) algorithms, via the MM paradigm,
for SVM risk minimization problems involving the hinge, least-square,
squared-hinge, and logistic losses, and 1-norm, 2-norm, and elastic net
penalizations. Successful implementations of our algorithms are presented via
some numerical examples.
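As a flavor of the approach, here is a minimal sketch of an IRLS fit of a linear SVM with the hinge loss and a 2-norm penalty, built from the standard quadratic majorizer of the hinge, max(0, 1-u) <= (1-u)/2 + (1-u)^2/(4c) + c/4 with c = |1 - u_current|; each MM iteration then reduces to a weighted ridge regression, so the surrogate objective decreases monotonically. The epsilon floor on c and all names are implementation assumptions, not necessarily the paper's exact construction.

```python
# Minimal IRLS sketch for a linear SVM: minimize
#   sum_i max(0, 1 - y_i x_i' theta) + lam * ||theta||^2
# via the quadratic majorizer of the hinge at the current iterate, which
# turns every MM step into a weighted ridge regression in closed form.
import numpy as np

def svm_irls(X, y, lam=1.0, iters=50, eps=1e-6):
    """Fit theta for labels y in {-1, +1} and design X (n x d)."""
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(iters):
        c = np.maximum(np.abs(1.0 - y * (X @ theta)), eps)  # majorization point
        w = 1.0 / (2.0 * c)                                  # IRLS weights
        z = y * (c + 1.0)                                    # working response
        A = X.T @ (w[:, None] * X) + 2.0 * lam * np.eye(d)
        theta = np.linalg.solve(A, X.T @ (w * z))            # weighted ridge
    return theta

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    X = rng.normal(size=(200, 3))
    y = np.sign(X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200))
    print(svm_irls(X, y))
```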