Revisiting maximum-a-posteriori estimation in log-concave models
Maximum-a-posteriori (MAP) estimation is the main Bayesian estimation
methodology in imaging sciences, where high dimensionality is often addressed
by using Bayesian models that are log-concave and whose posterior mode can be
computed efficiently by convex optimisation. Despite its success and wide
adoption, MAP estimation is not theoretically well understood yet. The
prevalent view in the community is that MAP estimation is not proper Bayesian
estimation in a decision-theoretic sense because it does not minimise a
meaningful expected loss function (unlike the minimum mean squared error (MMSE)
estimator that minimises the mean squared loss). This paper addresses this
theoretical gap by presenting a decision-theoretic derivation of MAP estimation
in Bayesian models that are log-concave. A main novelty is that our analysis is
based on differential geometry, and proceeds as follows. First, we use the
underlying convex geometry of the Bayesian model to induce a Riemannian
geometry on the parameter space. We then use differential geometry to identify
the so-called natural or canonical loss function to perform Bayesian point
estimation in that Riemannian manifold. For log-concave models, this canonical
loss is the Bregman divergence associated with the negative log posterior
density. We then show that the MAP estimator is the only Bayesian estimator
that minimises the expected canonical loss, and that the posterior mean or MMSE
estimator minimises the dual canonical loss. We also study the performance of MAP and MMSE estimation at large scale and establish a universal bound on the expected canonical error as a function of dimension, offering new insights into the good performance observed in convex problems. These results provide a new understanding of MAP and MMSE estimation in log-concave settings, and of the multiple roles that convex geometry plays in imaging problems.
Comment: Accepted for publication in SIAM Journal on Imaging Sciences
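To make the canonical loss concrete, here is a brief sketch in one common notation (the symbols phi, u, x, y and the exact conditioning are our assumptions, not the paper's): taking phi(x) = -log p(x|y), which is convex for a log-concave posterior, the Bregman divergence and the two characterisations claimed in the abstract read:

```latex
% With \phi(x) = -\log p(x \mid y) convex (log-concave posterior), the
% Bregman divergence associated with \phi is
\[
  D_\phi(u, x) = \phi(u) - \phi(x) - \langle \nabla \phi(x),\, u - x \rangle .
\]
% MAP minimises the expected canonical loss; the posterior mean (MMSE)
% minimises the dual loss, obtained by swapping the arguments:
\[
  \hat{x}_{\mathrm{MAP}} = \operatorname*{arg\,min}_{u}\,
    \mathbb{E}\big[ D_\phi(u, x) \,\big|\, y \big],
  \qquad
  \hat{x}_{\mathrm{MMSE}} = \operatorname*{arg\,min}_{u}\,
    \mathbb{E}\big[ D_\phi(x, u) \,\big|\, y \big].
\]
```

Intuitively, the stationarity condition for the first problem is E[grad phi(x) | y] = grad phi(u), and since the posterior expectation of the score vanishes under mild regularity, the minimiser is the posterior mode.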
Characterizing Distances of Networks on the Tensor Manifold
At the core of understanding dynamical systems is the ability to maintain and
control the system's behavior, which includes notions of robustness,
heterogeneity, or regime-shift detection. Recently, to explore such functional
properties, a convenient representation has been to model such dynamical
systems as a weighted graph consisting of a finite, but very large number of
interacting agents. That said, there exists very limited relevant statistical theory able to cope with real-life data, i.e., how does one perform analysis and/or statistics over a family of networks, as opposed to a specific network or
network-to-network variation. Here, we are interested in the analysis of
network families whereby each network represents a point on an underlying
statistical manifold. To do so, we explore the Riemannian structure of the
tensor manifold developed by Pennec, previously applied to Diffusion Tensor Imaging (DTI), towards the problem of network analysis. In particular, while this note focuses on Pennec's definition of geodesics amongst a family of
networks, we show how it lays the foundation for future work for developing
measures of network robustness for regime-shift detection. We conclude with
experiments highlighting the proposed distance on synthetic networks and an
application towards biological (stem-cell) systems.
Comment: This paper is accepted at the 8th International Conference on Complex Networks 201
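For concreteness, below is a minimal Python sketch of the affine-invariant (Pennec) geodesic distance between symmetric positive-definite matrices; the regularised-Laplacian step that turns a network into an SPD matrix is an illustrative assumption on our part, not necessarily the paper's construction.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power, logm

def pennec_distance(A, B):
    """Affine-invariant distance d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F."""
    A_inv_sqrt = fractional_matrix_power(A, -0.5)
    M = A_inv_sqrt @ B @ A_inv_sqrt          # SPD whenever A, B are SPD
    return np.linalg.norm(np.real(logm(M)), ord="fro")

def network_to_spd(adjacency, eps=1e-3):
    """Illustrative assumption: regularised graph Laplacian as SPD surrogate."""
    L = np.diag(adjacency.sum(axis=1)) - adjacency
    return L + eps * np.eye(L.shape[0])       # shift makes the PSD Laplacian PD

# Toy usage: distance between two small weighted networks.
rng = np.random.default_rng(0)
W1 = rng.random((5, 5)); W1 = (W1 + W1.T) / 2; np.fill_diagonal(W1, 0)
W2 = np.abs(W1 + 0.05 * rng.standard_normal((5, 5)))
W2 = (W2 + W2.T) / 2; np.fill_diagonal(W2, 0)
print(pennec_distance(network_to_spd(W1), network_to_spd(W2)))
```

The affine invariance d(CAC^T, CBC^T) = d(A, B) for any invertible C is what makes this metric attractive for comparing networks up to a common re-weighting.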
Entropy Transformer Networks: A Learning Approach via Tangent Bundle Data Manifold
This paper focuses on an accurate and fast interpolation approach for image
transformation employed in the design of CNN architectures. Standard Spatial Transformer Networks (STNs) rely on bilinear or linear interpolation, which makes unrealistic assumptions about the underlying data distribution and leads to poor performance under scale variations.
Moreover, STNs do not preserve the norm of gradients in propagation due to
their dependency on sparse neighboring pixels. To address this problem, a novel
Entropy STN (ESTN) is proposed that interpolates on the data manifold
distributions. In particular, random samples are generated for each pixel in association with the tangent space of the data manifold, and a linear approximation of their intensity values is constructed with an entropy regularizer to compute the transformer parameters. A simple yet effective technique is also proposed to normalize the non-zero values of the convolution operation, fine-tuning the layers for gradient-norm regularization during training. Experiments on
challenging benchmarks show that the proposed ESTN can improve predictive
accuracy over a range of computer vision tasks, including image reconstruction and classification, while reducing the computational cost.
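The abstract does not spell out the entropy-regularised interpolation, so the following toy Python sketch is an assumption-laden stand-in for the idea (the names entropy_weights and soft_interpolate are hypothetical): weights over sampled neighbours minimise a distance-plus-entropy objective, whose closed-form solution is a softmax over negative scaled distances.

```python
import numpy as np

def entropy_weights(distances, lam=0.1):
    """Minimise sum_i w_i d_i + lam * sum_i w_i log w_i  s.t.  sum_i w_i = 1.

    The closed-form solution is w_i proportional to exp(-d_i / lam)."""
    logits = -np.asarray(distances) / lam
    logits -= logits.max()                 # numerical stability
    w = np.exp(logits)
    return w / w.sum()

def soft_interpolate(query, sample_coords, sample_values, lam=0.1):
    """Interpolate an intensity at `query` from sampled neighbours."""
    d = np.sum((sample_coords - query) ** 2, axis=1)
    return entropy_weights(d, lam) @ sample_values

# Toy usage: four pixel samples around a sub-pixel query point.
coords = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
values = np.array([0.2, 0.4, 0.6, 0.8])
print(soft_interpolate(np.array([0.3, 0.7]), coords, values))
```

Unlike hard bilinear weights, the soft weights vary smoothly with the query and with lam, which is consistent with the abstract's concern about preserving gradient norms.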
Fast computation of multi-scale combustion systems
In the present work, we illustrate the process of constructing a simplified model for complex multi-scale combustion systems. To this end, reduced models of homogeneous ideal gas mixtures of methane and air are first obtained by the novel Relaxation Redistribution Method (RRM) and thereafter used for the extraction of all the missing variables in a reactive flow simulation with a global reaction model.
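The RRM construction itself is not described in the abstract, so the Python sketch below only illustrates, under heavy assumptions, the downstream step: a pre-computed reduced model, tabulated against a single hypothetical progress variable, supplies the missing thermochemical variables that a flow solver would query. All numbers and variable names are illustrative placeholders.

```python
import numpy as np

# Hypothetical pre-computed table: reduced coordinate -> full state
# (e.g. temperature [K], fuel mass fraction, product mass fraction).
progress_grid = np.linspace(0.0, 1.0, 6)
state_table = np.column_stack([
    300.0 + 1900.0 * progress_grid,     # temperature rises with progress
    0.055 * (1.0 - progress_grid),      # CH4 mass fraction is consumed
    0.15 * progress_grid,               # CO2 mass fraction is produced
])

def reconstruct_state(c):
    """Interpolate the full state from the reduced variable c in [0, 1]."""
    return np.array([np.interp(c, progress_grid, col) for col in state_table.T])

# The CFD solver transports only c and looks up everything else.
print(reconstruct_state(0.42))
```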