Linear and Parallel Learning of Markov Random Fields
We introduce a new embarrassingly parallel parameter learning algorithm for
Markov random fields with untied parameters which is efficient for a large
class of practical models. Our algorithm parallelizes naturally over cliques
and, for graphs of bounded degree, its complexity is linear in the number of
cliques. Unlike its competitors, our algorithm is fully parallel and for
log-linear models it is also data efficient, requiring only the local
sufficient statistics of the data to estimate parameters.
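As a toy illustration (not the authors' algorithm itself), the "local sufficient statistics" idea can be sketched by computing each clique's empirical configuration frequencies independently, parallelizing over cliques; every function and variable name below is hypothetical:

```python
import numpy as np
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def clique_statistics(data, clique):
    # Empirical frequency of each joint configuration on one clique.
    counts = Counter(tuple(row[list(clique)]) for row in data)
    n = len(data)
    return {cfg: c / n for cfg, c in counts.items()}

def parallel_clique_statistics(data, cliques, workers=4):
    # Each clique's statistics depend only on its own columns of the
    # data, so the computation is embarrassingly parallel over cliques.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        stats = pool.map(lambda c: clique_statistics(data, c), cliques)
    return dict(zip(cliques, stats))

data = np.array([[0, 1, 1],
                 [1, 1, 0],
                 [0, 1, 0],
                 [0, 0, 1]])
cliques = [(0, 1), (1, 2)]
stats = parallel_clique_statistics(data, cliques)
```

For bounded-degree graphs each clique touches O(1) variables, so the total work here is linear in the number of cliques, matching the complexity claim in the abstract.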
Newton based Stochastic Optimization using q-Gaussian Smoothed Functional Algorithms
We present the first q-Gaussian smoothed functional (SF) estimator of the
Hessian and the first Newton-based stochastic optimization algorithm that
estimates both the Hessian and the gradient of the objective function using
q-Gaussian perturbations. Our algorithm requires only two system simulations
(regardless of the parameter dimension) and estimates both the gradient and the
Hessian at each update epoch using these. We also present a proof of
convergence of the proposed algorithm. In a related recent work (Ghoshdastidar
et al., 2013), we presented gradient SF algorithms based on the q-Gaussian
perturbations. Our work extends prior work on smoothed functional algorithms
by generalizing the class of perturbation distributions: most distributions
reported in the literature for which SF algorithms are known to work turn
out to be special cases of the q-Gaussian distribution. Besides studying the
convergence properties of our algorithm analytically, we also show the results
of several numerical simulations on a model of a queuing network that
illustrate the significance of the proposed method. In particular, we observe
that our algorithm performs better in most cases, over a wide range of
q-values, in comparison to Newton SF algorithms with the Gaussian (Bhatnagar,
2007) and Cauchy perturbations, as well as the gradient q-Gaussian SF
algorithms (Ghoshdastidar et al., 2013).
Comment: This is a longer version of the paper with the same title accepted
in Automatic
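A minimal sketch of the two-simulation smoothed-functional gradient estimator in the Gaussian special case (the q -> 1 limit of the q-Gaussian family); this is an illustration of the general SF idea, not the paper's Hessian estimator, and all names are hypothetical:

```python
import numpy as np

def sf_gradient(f, theta, beta=0.1, n_samples=20000, seed=0):
    # Two-sided smoothed-functional gradient estimate with Gaussian
    # perturbations.  Each sample costs two function evaluations,
    # independent of the dimension of theta.
    rng = np.random.default_rng(seed)
    d = len(theta)
    g = np.zeros(d)
    for _ in range(n_samples):
        eta = rng.standard_normal(d)
        g += eta * (f(theta + beta * eta) - f(theta - beta * eta)) / (2 * beta)
    return g / n_samples

f = lambda th: float(th @ th)      # toy objective; true gradient is 2*theta
theta = np.array([1.0, -2.0])
grad = sf_gradient(f, theta)
```

The paper's Newton-based scheme additionally builds a Hessian estimate from the same two simulations, using q-Gaussian rather than Gaussian perturbations.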
Inference via low-dimensional couplings
We investigate the low-dimensional structure of deterministic transformations
between random variables, i.e., transport maps between probability measures. In
the context of statistics and machine learning, these transformations can be
used to couple a tractable "reference" measure (e.g., a standard Gaussian) with
a target measure of interest. Direct simulation from the desired measure can
then be achieved by pushing forward reference samples through the map. Yet
characterizing such a map---e.g., representing and evaluating it---grows
challenging in high dimensions. The central contribution of this paper is to
establish a link between the Markov properties of the target measure and the
existence of low-dimensional couplings, induced by transport maps that are
sparse and/or decomposable. Our analysis not only facilitates the construction
of transformations in high-dimensional settings, but also suggests new
inference methodologies for continuous non-Gaussian graphical models. For
instance, in the context of nonlinear state-space models, we describe new
variational algorithms for filtering, smoothing, and sequential parameter
inference. These algorithms can be understood as the natural
generalization---to the non-Gaussian case---of the square-root
Rauch-Tung-Striebel Gaussian smoother.
Comment: 78 pages, 25 figures
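The "pushforward" mechanism can be illustrated in the simplest case, where both reference and target are Gaussian and the transport map is linear and lower triangular (a Knothe-Rosenblatt-style ordering). This toy sketch is not the paper's construction; the variable names are hypothetical:

```python
import numpy as np

# A lower-triangular map pushing a standard Gaussian reference to a
# correlated Gaussian target: T(z) = m + L z, with L the Cholesky
# factor of the target covariance.  Sparsity in L mirrors Markov
# (conditional-independence) structure in the target measure.
m = np.array([1.0, -1.0])
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])
L = np.linalg.cholesky(Sigma)

rng = np.random.default_rng(0)
z = rng.standard_normal((100000, 2))   # samples from the reference
x = m + z @ L.T                        # pushforward through the map

emp_mean = x.mean(axis=0)
emp_cov = np.cov(x.T)
```

In the non-Gaussian setting studied in the paper, the map is nonlinear, but the same principle applies: simulate cheap reference samples, then evaluate the map to obtain target samples.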
On Graphical Models via Univariate Exponential Family Distributions
Undirected graphical models, or Markov networks, are a popular class of
statistical models, used in a wide variety of applications. Popular instances
of this class include Gaussian graphical models and Ising models. In many
settings, however, it might not be clear which subclass of graphical models to
use, particularly for non-Gaussian and non-categorical data. In this paper, we
consider a general sub-class of graphical models where the node-wise
conditional distributions arise from exponential families. This allows us to
derive multivariate graphical model distributions from univariate exponential
family distributions, such as the Poisson, negative binomial, and exponential
distributions. Our key contributions include a class of M-estimators to fit
these graphical model distributions; and rigorous statistical analysis showing
that these M-estimators recover the true graphical model structure exactly,
with high probability. We provide examples of genomic and proteomic networks
learned via instances of our class of graphical models derived from Poisson and
exponential distributions.
Comment: Journal of Machine Learning Research
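A minimal sketch of the node-wise idea for the Poisson case: regress one node on all others with an L1-penalized Poisson likelihood, and read off the estimated neighbours from the nonzero coefficients. This uses a plain proximal-gradient (ISTA) solver as a stand-in for the paper's M-estimators; the function name and tuning constants are assumptions:

```python
import numpy as np

def poisson_neighborhood(X, node, lam=0.05, steps=500, lr=0.05):
    # L1-penalized Poisson regression of one node on the rest,
    # solved by proximal gradient descent.  Nonzero entries of the
    # returned weights are the estimated neighbours of `node`.
    y = X[:, node]
    Z = np.delete(X, node, axis=1)
    n = len(y)
    w = np.zeros(Z.shape[1])
    for _ in range(steps):
        grad = Z.T @ (np.exp(Z @ w) - y) / n          # Poisson NLL gradient
        w = w - lr * grad
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft-threshold
    return w

rng = np.random.default_rng(1)
x1 = rng.integers(0, 2, 2000).astype(float)
x2 = rng.integers(0, 2, 2000).astype(float)
y = rng.poisson(np.exp(0.8 * x1)).astype(float)       # node 0 depends on x1 only
X = np.column_stack([y, x1, x2])
w = poisson_neighborhood(X, node=0)                   # large w[0], near-zero w[1]
```

Repeating this fit for every node and combining the recovered neighbourhoods yields the graph estimate.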
Random Finite Set Theory and Optimal Control of Large Collaborative Swarms
Controlling large swarms of robotic agents has many challenges including, but
not limited to, computational complexity due to the number of agents,
uncertainty in the functionality of each agent in the swarm, and uncertainty in
the swarm's configuration. This work generalizes the swarm state using Random
Finite Set (RFS) theory and solves the control problem using Model Predictive
Control (MPC) to overcome the aforementioned challenges. Computationally
efficient solutions are obtained via the Iterative Linear Quadratic Regulator
(ILQR). Information divergence is used to define the distance between the swarm
RFS and the desired swarm configuration. Then, a stochastic optimal control
problem is formulated using a modified L2^2 distance. Simulation results using
MPC and ILQR show that swarm intensities converge to a target destination, and
the RFS control formulation can vary in the number of target destinations. ILQR
also provides a more computationally efficient solution to the RFS swarm
problem when compared to the MPC solution. Lastly, the RFS control solution is
applied to a spacecraft relative motion problem showing the viability for this
real-world scenario.
Comment: arXiv admin note: text overlap with arXiv:1801.0731
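When swarm intensities are represented as Gaussian mixtures, L2-type distances between them admit a closed form via the Gaussian product identity, which is the ingredient behind distances like the modified L2^2 used above. The sketch below illustrates that identity only (it is not the paper's modified distance), and all names are hypothetical:

```python
import numpy as np

def gauss(x, m, P):
    # Multivariate normal density N(x; m, P).
    d = len(m)
    diff = np.asarray(x) - np.asarray(m)
    return float(np.exp(-0.5 * diff @ np.linalg.solve(P, diff))
                 / np.sqrt((2 * np.pi) ** d * np.linalg.det(P)))

def l2_sq(w_a, m_a, P_a, w_b, m_b, P_b):
    # Squared L2 distance between two Gaussian-mixture intensities,
    # using the identity: integral of N(x;a,A) N(x;b,B) dx = N(a; b, A+B).
    def cross(w1, m1, P1, w2, m2, P2):
        return sum(w1[i] * w2[j] * gauss(m1[i], m2[j], P1[i] + P2[j])
                   for i in range(len(w1)) for j in range(len(w2)))
    return (cross(w_a, m_a, P_a, w_a, m_a, P_a)
            - 2 * cross(w_a, m_a, P_a, w_b, m_b, P_b)
            + cross(w_b, m_b, P_b, w_b, m_b, P_b))

# Two single-component intensities in 1-D: N(0, 1) and N(1, 1).
w1, m1, P1 = [1.0], [np.array([0.0])], [np.eye(1)]
w2, m2, P2 = [1.0], [np.array([1.0])], [np.eye(1)]
d_same = l2_sq(w1, m1, P1, w1, m1, P1)   # distance of a mixture to itself: 0
d_diff = l2_sq(w1, m1, P1, w2, m2, P2)   # positive for distinct mixtures
```

Because the distance is available in closed form, it can be differentiated and embedded directly in an MPC or ILQR objective.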