High-Dimensional Regression with Gaussian Mixtures and Partially-Latent Response Variables
In this work we address the problem of approximating high-dimensional data
with a low-dimensional representation. We make the following contributions. We
propose an inverse regression method which exchanges the roles of input and
response, such that the low-dimensional variable becomes the regressor, and
which is tractable. We introduce a mixture of locally-linear probabilistic
mapping model that starts with estimating the parameters of inverse regression,
and follows with inferring closed-form solutions for the forward parameters of
the high-dimensional regression problem of interest. Moreover, we introduce a
partially-latent paradigm, such that the vector-valued response variable is
composed of both observed and latent entries, thus being able to deal with data
contaminated by experimental artifacts that cannot be explained with noise
models. The proposed probabilistic formulation could be viewed as a
latent-variable augmentation of regression. We devise expectation-maximization
(EM) procedures based on a data augmentation strategy which facilitates the
maximum-likelihood search over the model parameters. We propose two
augmentation schemes and we describe in detail the associated EM inference
procedures that may well be viewed as generalizations of a number of EM
regression, dimension reduction, and factor analysis algorithms. The proposed
framework is validated with both synthetic and real data. We provide
experimental evidence that our method outperforms several existing regression
techniques.
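The inverse-regression idea above can be sketched with a plain joint Gaussian mixture: fit a GMM on the concatenated vector [y, x] (with the low-dimensional y playing the regressor role) and recover the forward conditional mean E[y | x] in closed form by Gaussian conditioning within each component. This is only a minimal illustration of the closed-form forward step, not the paper's partially-latent EM; the toy data and all names below are invented for the sketch.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy data: a low-dimensional response y drives a high-dimensional observation x.
n, d_y, d_x = 500, 1, 10
y = rng.uniform(-1, 1, size=(n, d_y))
A = rng.normal(size=(d_y, d_x))
x = np.tanh(y) @ A + 0.05 * rng.normal(size=(n, d_x))

# Fit a GMM on the joint vector [y, x]; each component acts as a locally
# linear mapping between the two blocks.
gmm = GaussianMixture(n_components=5, covariance_type="full", random_state=0)
gmm.fit(np.hstack([y, x]))

def predict_y(x_new):
    """Forward conditional mean E[y | x] from the joint GMM, in closed form."""
    preds, weights = [], []
    for k in range(gmm.n_components):
        mu, S = gmm.means_[k], gmm.covariances_[k]
        mu_y, mu_x = mu[:d_y], mu[d_y:]
        S_yx = S[:d_y, d_y:]
        S_xx = S[d_y:, d_y:]
        # Per-component conditional mean of y given x (Gaussian conditioning).
        cond = mu_y + S_yx @ np.linalg.solve(S_xx, x_new - mu_x)
        # Responsibility of component k for x_new (marginal over x).
        w = gmm.weights_[k] * multivariate_normal.pdf(x_new, mu_x, S_xx)
        preds.append(cond)
        weights.append(w)
    weights = np.array(weights) / np.sum(weights)
    return np.sum([w * p for w, p in zip(weights, preds)], axis=0)

y_hat = predict_y(x[0])
```

The key point is that the expensive estimation happens in the tractable inverse direction (low-dimensional regressor), while the forward high-dimensional prediction falls out analytically.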
Conditional Density Estimation with Dimensionality Reduction via Squared-Loss Conditional Entropy Minimization
Regression aims at estimating the conditional mean of output given input.
However, regression is not informative enough if the conditional density is
multimodal, heteroscedastic, and asymmetric. In such a case, estimating the
conditional density itself is preferable, but conditional density estimation
(CDE) is challenging in high-dimensional space. A naive approach to coping with
high-dimensionality is to first perform dimensionality reduction (DR) and then
execute CDE. However, such a two-step process does not perform well in practice
because the error incurred in the first DR step can be magnified in the second
CDE step. In this paper, we propose a novel single-shot procedure that performs
CDE and DR simultaneously in an integrated way. Our key idea is to formulate DR
as the problem of minimizing a squared-loss variant of conditional entropy, and
this is solved via CDE. Thus, an additional CDE step is not needed after DR. We
demonstrate the usefulness of the proposed method through extensive experiments
on various datasets including humanoid robot transition and computer art.
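The multimodality argument above is easy to demonstrate with a generic kernel-based conditional density estimate p(y | x) = p(x, y) / p(x). Note this ratio-of-KDEs sketch is a stand-in illustration of CDE in general, not the paper's squared-loss conditional entropy method; the toy data are invented.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
# Toy bimodal conditional density: y = +/- x plus noise, so the conditional
# mean E[y | x] is near zero and regression alone is misleading.
n = 1000
x = rng.uniform(-1, 1, n)
sign = rng.choice([-1.0, 1.0], n)
y = sign * x + 0.1 * rng.normal(size=n)

# Kernel-based CDE: estimate the joint and marginal densities, then divide.
kde_joint = gaussian_kde(np.vstack([x, y]))
kde_x = gaussian_kde(x)

def cond_density(y_grid, x0):
    """Approximate p(y | x = x0) on a grid of y values."""
    joint = kde_joint(np.vstack([np.full_like(y_grid, x0), y_grid]))
    return joint / kde_x(x0)

y_grid = np.linspace(-2, 2, 401)
p = cond_density(y_grid, 0.8)
# p has modes near y = -0.8 and y = +0.8, while the conditional mean is
# near 0 -- exactly the situation where CDE beats plain regression.
```

In high dimensions this naive approach degrades quickly, which is the motivation for combining dimensionality reduction with CDE in a single objective as the paper proposes.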
Improving "bag-of-keypoints" image categorisation: Generative Models and PDF-Kernels
In this paper we propose two distinct enhancements to the basic
"bag-of-keypoints" image categorisation scheme proposed in [4]. In this
approach images are represented as a variable sized set of local image
features (keypoints). Thus, we require machine learning tools which
can operate on sets of vectors. In [4] this is achieved by representing
the set as a histogram over bins found by k-means. We show how this
approach can be improved and generalised using Gaussian Mixture Models
(GMMs). Alternatively, the set of keypoints can be represented directly
as a probability density function, over which a kernel can be defined. This
approach is shown to give state-of-the-art categorisation performance.
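The contrast between the hard k-means histogram of [4] and the GMM generalisation can be sketched in a few lines: the histogram counts nearest-centroid assignments, while the GMM replaces them with averaged posterior responsibilities (soft assignments). The random descriptors below are a stand-in for real local image features such as SIFT keypoints.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Stand-in for the variable-sized set of local descriptors from one image.
descriptors = rng.normal(size=(200, 16))

# Hard assignment: histogram over k-means visual words (the baseline of [4]).
k = 8
kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(descriptors)
hard_hist = np.bincount(kmeans.labels_, minlength=k).astype(float)
hard_hist /= hard_hist.sum()

# Soft assignment: replace bin counts with GMM posterior responsibilities,
# averaged over the image's keypoints.
gmm = GaussianMixture(n_components=k, covariance_type="diag", random_state=0)
gmm.fit(descriptors)
soft_hist = gmm.predict_proba(descriptors).mean(axis=0)
```

Both vectors are fixed-length image representations suitable for a standard classifier, but the soft version keeps information that hard binning discards near cluster boundaries.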
Aggregated Deep Local Features for Remote Sensing Image Retrieval
Remote Sensing Image Retrieval remains a challenging topic due to the special
nature of Remote Sensing Imagery. Such images contain many different
semantic objects, which clearly complicates the retrieval task. In this paper,
we present an image retrieval pipeline that uses attentive, local convolutional
features and aggregates them using the Vector of Locally Aggregated Descriptors
(VLAD) to produce a global descriptor. We study various system parameters such
as the multiplicative and additive attention mechanisms and descriptor
dimensionality. We propose a query expansion method that requires no external
inputs. Experiments demonstrate that even without training, the local
convolutional features and global representation outperform other systems.
After system tuning, we can achieve state-of-the-art or competitive results.
Furthermore, we observe that our query expansion method increases overall
system performance by about 3%, using only the top-three retrieved images.
Finally, we show how dimensionality reduction produces compact descriptors with
increased retrieval performance and fast retrieval computation times, e.g. 50%
faster than current systems.
Comment: Published in Remote Sensing. The first two authors have equal contribution.
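The VLAD aggregation step at the core of this pipeline can be sketched as follows: assign each local feature to its nearest cluster centre, accumulate the residuals per centre, and concatenate and normalise the result. This is a generic VLAD sketch (with the common signed square-root normalisation), not the paper's attentive pipeline; the random features stand in for deep local features.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
local_feats = rng.normal(size=(300, 32))  # stand-in for deep local features

k = 4
kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(local_feats)

def vlad(features, kmeans):
    """Aggregate a set of local features into a single VLAD descriptor."""
    centers = kmeans.cluster_centers_
    labels = kmeans.predict(features)
    v = np.zeros_like(centers)
    for i in range(centers.shape[0]):
        if np.any(labels == i):
            # Sum of residuals of assigned features to their centre.
            v[i] = (features[labels == i] - centers[i]).sum(axis=0)
    v = v.ravel()
    # Signed square-root (power) normalisation, then L2 normalisation.
    v = np.sign(v) * np.sqrt(np.abs(v))
    return v / (np.linalg.norm(v) + 1e-12)

desc = vlad(local_feats, kmeans)  # global descriptor of size k * d
```

The output is a fixed-length global descriptor of dimension k x d, which can then be compressed further, e.g. by PCA, as in the dimensionality-reduction experiments the abstract mentions.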
Dimension Reduction by Mutual Information Discriminant Analysis
In the past few decades, researchers have proposed many discriminant analysis
(DA) algorithms for the study of high-dimensional data in a variety of
problems. Most DA algorithms for feature extraction are based on
transformations that simultaneously maximize the between-class scatter and
minimize the within-class scatter matrices. This paper presents a novel DA
algorithm for feature extraction using mutual information (MI). However, it is
not always easy to obtain an accurate estimation for high-dimensional MI. In
this paper, we propose an efficient method for feature extraction that is based
on one-dimensional MI estimations. We will refer to this algorithm as mutual
information discriminant analysis (MIDA). The performance of this proposed
method was evaluated using UCI databases. The results indicate that MIDA
provides robust performance over different data sets with different
characteristics and that MIDA always performs better than, or at least
comparable to, the best-performing algorithms.
Comment: 13 pages, 3 tables, International Journal of Artificial Intelligence & Applications
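The estimation primitive the abstract relies on, one-dimensional mutual information between a single feature and the class label, is cheap and reliable where high-dimensional MI is not. The sketch below only ranks raw features by 1-D MI on a standard dataset; MIDA itself learns a transformation, which this illustration does not attempt.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.feature_selection import mutual_info_classif

# One-dimensional MI between each feature and the class label: easy to
# estimate accurately, unlike MI in high-dimensional feature space.
X, y = load_iris(return_X_y=True)
mi = mutual_info_classif(X, y, random_state=0)

# Features ordered by relevance to the class label (most informative first).
ranking = np.argsort(mi)[::-1]
```

A DA method in the spirit of MIDA would use such 1-D estimates to score candidate projection directions rather than the raw input features.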
On the Renormalization Group Explanation of Universality
It is commonly claimed that the universality of critical phenomena is explained through particular applications of the renormalization group. This article has three aims: to clarify the structure of the explanation of universality, to discuss the physics of such RG explanations, and to examine the extent to which universality is thus explained. The derivation of critical exponents proceeds via a real-space or a field-theoretic approach to the RG. Building on work by Mainwood, this article argues that these approaches ought to be distinguished: while the field-theoretic approach explains universality, the real-space approach fails to provide an adequate explanation.