Symmetric measures via moments
Algebraic tools in statistics have recently been receiving special attention,
and a number of interactions between algebraic geometry and computational
statistics have been rapidly developing. This paper presents another such
connection, namely, one between probabilistic models invariant under a finite
group of (non-singular) linear transformations and polynomials invariant under
the same group. Two specific aspects of the connection are discussed:
generalization of the (uniqueness part of the multivariate) problem of moments
and log-linear, or toric, modeling by expansion of invariant terms. A
distribution of minuscule subimages extracted from a large database of natural
images is analyzed to illustrate the above concepts.
Comment: Published at http://dx.doi.org/10.3150/07-BEJ6144 in the Bernoulli
(http://isi.cbs.nl/bernoulli/) by the International Statistical
Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm).
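A minimal numerical sketch of the invariance idea (not from the paper): when a distribution is made invariant under a finite group of linear maps, only the moments corresponding to polynomials invariant under that group survive. The group (the four sign-flip matrices on R^2), the base sample, and the moment helper below are illustrative choices, not the paper's examples.

```python
# Symmetrize an arbitrary sample under a finite group of linear maps and
# check which empirical moments survive.
import itertools
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=0.5, size=(10_000, 2))      # arbitrary base sample

# Finite group G = {diag(+-1, +-1)}: apply every element to every point.
group = [np.diag(s) for s in itertools.product([1, -1], repeat=2)]
x_sym = np.vstack([x @ g.T for g in group])                # G-invariant empirical measure

def moment(sample, a, b):
    """Empirical moment E[X1^a * X2^b]."""
    return np.mean(sample[:, 0] ** a * sample[:, 1] ** b)

print(moment(x_sym, 1, 0))   # ~0: x1 is not a G-invariant polynomial
print(moment(x_sym, 1, 1))   # ~0: x1*x2 is not G-invariant either
print(moment(x_sym, 2, 0))   # nonzero: x1^2 is G-invariant
```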
Minimum Description Length Principle for Maximum Entropy Model Selection
Model selection is central to statistics, and many learning problems can be
formulated as model selection problems. In this paper, we treat the problem of
selecting a maximum entropy model given various feature subsets and their
moments as a model selection problem, and present a minimum description length
(MDL) formulation to solve this problem. For this, we derive normalized maximum
likelihood (NML) codelength for these models. Furthermore, we prove that the
minimax entropy principle is a special case of maximum entropy model selection,
where one assumes that the complexities of all the models are equal. We apply
our approach to the gene selection problem and present simulation results.
Comment: 9 pages, 3 figures, 4 tables, submitted to Uncertainty in Artificial Intelligence
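A hedged sketch of the selection task described above, not the authors' derivation: fit exponential-family maximum entropy models for a few candidate feature subsets by convex optimization, then rank the subsets with an MDL-style score. The toy alphabet, the feature functions, and the (k/2) log n penalty used here as a stand-in for the NML codelength are all assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

states = np.arange(6)                                   # toy alphabet {0,...,5}
rng = np.random.default_rng(1)
data = rng.choice(states, size=200, p=[.05, .1, .2, .3, .2, .15])

features = {                                            # candidate feature functions
    "mean":   lambda x: x.astype(float),
    "square": lambda x: x.astype(float) ** 2,
    "parity": lambda x: (x % 2).astype(float),
}

def maxent_nll(subset):
    """Total negative log-likelihood of the maximum entropy (exponential-family)
    model matching the empirical moments of the chosen features."""
    F_states = np.stack([features[name](states) for name in subset])   # k x |X|
    F_data = np.stack([features[name](data) for name in subset])       # k x n

    def nll(lam):                      # per-sample negative log-likelihood
        return logsumexp(lam @ F_states) - np.mean(lam @ F_data)

    res = minimize(nll, np.zeros(len(subset)), method="BFGS")
    return len(data) * res.fun

n = len(data)
for subset in (["mean"], ["mean", "square"], ["mean", "square", "parity"]):
    # Crude two-part MDL score; the paper derives the NML codelength instead.
    score = maxent_nll(subset) + 0.5 * len(subset) * np.log(n)
    print(subset, round(score, 2))
```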
Wasserstein Introspective Neural Networks
We present Wasserstein introspective neural networks (WINN) that are both a
generator and a discriminator within a single model. WINN provides a
significant improvement over the recent introspective neural networks (INN)
method by enhancing INN's generative modeling capability. WINN has three
interesting properties: (1) A mathematical connection between the formulation
of the INN algorithm and that of Wasserstein generative adversarial networks
(WGAN) is made. (2) The explicit adoption of the Wasserstein distance into INN
results in a large enhancement to INN, achieving compelling results even with a
single classifier, e.g., providing nearly a 20-fold reduction in model size
over INN for unsupervised generative modeling. (3) When applied to supervised
classification, WINN also gives rise to improved robustness against adversarial
examples in terms of error reduction. In the experiments, we report
encouraging results on unsupervised learning problems including texture, face,
and object modeling, as well as a supervised classification task against
adversarial attacks.
Comment: Accepted to CVPR 2018 (Oral).
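The following sketch illustrates, under heavy simplification, the introspective mechanism the abstract refers to: a single network acts as a WGAN-style critic and is turned into a generator by gradient ascent on its own score, with pseudo-negatives fed back into the critic update. The two-layer network, the 2-D toy data, the step sizes, and the gradient-penalty weight are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

critic = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def synthesize(n_samples, steps=30, step_size=0.1):
    """Turn the critic into a generator: ascend its score starting from noise."""
    x = torch.randn(n_samples, 2, requires_grad=True)
    for _ in range(steps):
        grad, = torch.autograd.grad(critic(x).sum(), x)
        x = (x + step_size * grad).detach().requires_grad_(True)
    return x.detach()

def gradient_penalty(real, fake):
    """WGAN-GP penalty keeping the critic approximately 1-Lipschitz."""
    eps = torch.rand(real.size(0), 1)
    mix = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad, = torch.autograd.grad(critic(mix).sum(), mix, create_graph=True)
    return ((grad.norm(2, dim=1) - 1) ** 2).mean()

real = torch.randn(256, 2) * 0.5 + 2.0          # stand-in "data" distribution
for it in range(100):
    fake = synthesize(256)                      # pseudo-negatives from the critic itself
    loss = (critic(fake).mean() - critic(real).mean()
            + 10.0 * gradient_penalty(real, fake))
    opt.zero_grad()
    loss.backward()
    opt.step()
```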
GRADE: Gibbs Reaction and Diffusion Equations
Recently there has been increasing interest in using nonlinear PDEs for applications in computer vision and image processing. In this paper, we propose a general statistical framework for designing a new class of PDEs. For a given application, a Markov random field model is learned according to the minimax entropy principle so that it characterizes the ensemble of images arising in that application. The learned model is a Gibbs distribution whose energy terms can be divided into two categories. The partial differential equations given by gradient descent on the Gibbs potential are then essentially reaction-diffusion equations: the energy terms in one category produce anisotropic diffusion, while the inverted energy terms in the second category produce reaction associated with pattern formation. We call this new class of PDEs the Gibbs Reaction And Diffusion Equations (GRADE), and we demonstrate experiments where GRADE are used for texture pattern formation, denoising, image enhancement, and clutter removal.
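A minimal sketch of the gradient-descent-on-a-Gibbs-potential idea, showing only the diffusion half with a hand-picked potential rather than one learned by minimax entropy: the heavy-tailed potential phi(r) = log(1 + r^2) on image gradients yields an anisotropic-diffusion update, and a quadratic data term keeps the result near the noisy input. The filters, potential, and constants here are assumptions.

```python
import numpy as np

def grad_x(I): return np.roll(I, -1, axis=1) - I      # forward differences
def grad_y(I): return np.roll(I, -1, axis=0) - I
def div(px, py):                                      # negative adjoint of the gradient
    return (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))

def phi_prime(r):                                     # d/dr log(1 + r^2)
    return 2.0 * r / (1.0 + r ** 2)

rng = np.random.default_rng(0)
noisy = rng.normal(size=(64, 64))                     # stand-in noisy "image"
I = noisy.copy()

dt, lam = 0.1, 0.05
for _ in range(200):
    # dI/dt = div(phi'(grad I)) - lam * (I - noisy): anisotropic diffusion + data fidelity
    I = I + dt * (div(phi_prime(grad_x(I)), phi_prime(grad_y(I))) - lam * (I - noisy))
```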
A survey of exemplar-based texture synthesis
Exemplar-based texture synthesis is the process of generating, from an input
sample, new texture images of arbitrary size and which are perceptually
equivalent to the sample. The two main approaches are statistics-based methods
and patch re-arrangement methods. In the first class, a texture is
characterized by a statistical signature; then, a random sampling conditioned
to this signature produces genuinely different texture images. The second class
boils down to a clever "copy-paste" procedure, which stitches together large
regions of the sample. Hybrid methods try to combine ideas from both approaches
to avoid their hurdles. The recent approaches using convolutional neural
networks fit into this classification, some being statistical and others
performing patch re-arrangement in the feature space. They produce impressive
synthesis on various kinds of textures. Nevertheless, we found that most real
textures are organized at multiple scales, with global structures revealed at
coarse scales and highly varying details at finer ones. Thus, when confronted
with large natural images of textures, the results of state-of-the-art methods
degrade rapidly, and the problem of modeling them remains wide open.
Comment: v2: Added comments and typo fixes. New section added to describe FRAME. New method presented: CNNMR
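A small illustration of the statistics-based class described above, assuming the simplest possible signature: the exemplar's Fourier amplitude spectrum (its second-order statistics). Sampling a random-phase image with the same amplitude spectrum yields a genuinely different texture matching that signature; real statistics-based methods use far richer signatures such as wavelet or CNN feature statistics.

```python
import numpy as np

def random_phase_texture(exemplar, rng=None):
    """Synthesize a texture sharing the exemplar's amplitude spectrum,
    with phases taken from a white-noise image (so the output stays real)."""
    rng = rng or np.random.default_rng(0)
    spectrum = np.fft.fft2(exemplar - exemplar.mean())
    phase = np.angle(np.fft.fft2(rng.standard_normal(exemplar.shape)))
    out = np.fft.ifft2(np.abs(spectrum) * np.exp(1j * phase)).real
    return out + exemplar.mean()

exemplar = np.random.default_rng(1).random((128, 128))   # stand-in for a texture image
synthesized = random_phase_texture(exemplar)
```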