Noisy Optimization Complexity Under Locality Assumption
In spite of various recent publications on the subject, there are still gaps between the upper and lower bounds for evolutionary optimization of noisy objective functions. In this paper we reduce this gap and obtain bounds that are tight within logarithmic factors in the case of small noise and no long-distance influence on the objective function.
Semi-Supervised Sound Source Localization Based on Manifold Regularization
Conventional speaker localization algorithms, based merely on the received microphone signals, are often sensitive to adverse conditions such as high reverberation or a low signal-to-noise ratio (SNR). In some scenarios, e.g. in
meeting rooms or cars, it can be assumed that the source position is confined
to a predefined area, and the acoustic parameters of the environment are
approximately fixed. Such scenarios give rise to the assumption that the
acoustic samples from the region of interest have a distinct geometrical
structure. In this paper, we show that the high dimensional acoustic samples
indeed lie on a low dimensional manifold and can be embedded into a low
dimensional space. Motivated by this result, we propose a semi-supervised
source localization algorithm which recovers the inverse mapping between the
acoustic samples and their corresponding locations. The idea is to use an optimization framework based on manifold regularization, which imposes smoothness constraints on possible solutions with respect to the manifold. The
proposed algorithm, termed Manifold Regularization for Localization (MRL), is
implemented in an adaptive manner. The initialization is conducted with only a few labelled samples, each attached to its respective source location, and the system is then gradually adapted as new unlabelled samples (with unknown source locations) are received. Experimental results show superior localization performance compared with a recently presented algorithm based on a manifold learning approach and with the generalized cross-correlation (GCC) algorithm as a baseline.
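To make the optimization framework above concrete, here is a minimal sketch of semi-supervised, manifold-regularized regression in the same spirit: a kernel regressor from high-dimensional acoustic feature vectors to source coordinates, penalized for roughness along a graph built from all samples. This is the standard Laplacian-regularized least-squares construction, not the authors' MRL implementation; the RBF kernel, the affinity choice, and every parameter value are illustrative assumptions.

    # Sketch of Laplacian-regularized least squares for source localization.
    # NOT the MRL algorithm from the paper; an illustration of the idea only.
    import numpy as np

    def rbf(X, Y, gamma=1.0):
        # squared Euclidean distances between rows of X and rows of Y
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def laprls_fit(X, y_lab, n_lab, lam=1e-2, mu=1e-2, gamma=1.0):
        # X: (n, d) acoustic feature vectors, the n_lab labelled ones first
        # y_lab: (n_lab, 2) known source coordinates of the labelled samples
        n = X.shape[0]
        K = rbf(X, X, gamma)              # kernel matrix over all samples
        W = rbf(X, X, gamma)              # graph affinities (same RBF here)
        L = np.diag(W.sum(axis=1)) - W    # unnormalized graph Laplacian
        J = np.zeros((n, n))
        J[:n_lab, :n_lab] = np.eye(n_lab) # selects the labelled samples
        Y = np.zeros((n, y_lab.shape[1]))
        Y[:n_lab] = y_lab
        # LapRLS normal equations (up to scaling conventions):
        # (J K + lam*n*I + mu*n*L K) alpha = Y
        A = J @ K + lam * n * np.eye(n) + mu * n * (L @ K)
        return np.linalg.solve(A, Y)      # expansion coefficients alpha

    def laprls_predict(X_train, alpha, X_new, gamma=1.0):
        # f(x) = sum_i alpha_i k(x_i, x), mapping new samples to positions
        return rbf(X_new, X_train, gamma) @ alpha

The unlabelled rows of X enter only through the Laplacian term, which is what lets unlabelled acoustic samples sharpen the regressor; appending new unlabelled samples and re-solving mimics, crudely, the gradual adaptation described above.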
How Many Pairwise Preferences Do We Need to Rank A Graph Consistently?
We consider the problem of optimal recovery of the true ranking of $n$ items from a randomly chosen subset of their pairwise preferences. It is well known that without any further assumption, one requires a sample size of $\Omega(n \log n)$ for the purpose. We analyze the problem under the additional structure of a relational graph $G([n], E)$ over the items, together with an assumption of \emph{locality}: neighboring items are similar in their rankings. Noting the
preferential nature of the data, we choose to embed not the graph but its
\emph{strong product} to capture the pairwise node relationships. Furthermore,
unlike existing literature that uses Laplacian embedding for graph based
learning problems, we use a richer class of graph
embeddings---\emph{orthonormal representations}---that includes (normalized)
Laplacian as its special case. Our proposed algorithm, {\it Pref-Rank},
predicts the underlying ranking using an SVM based approach over the chosen
embedding of the product graph, and is the first to provide \emph{statistical
consistency} on two ranking losses: \emph{Kendall's tau} and \emph{Spearman's
footrule}, with a required sample complexity of $O\big((n^2 \chi(\bar{G}))^{2/3}\big)$ pairs, $\chi(\bar{G})$ being the \emph{chromatic number} of the complement graph $\bar{G}$. Clearly, our sample complexity is smaller for dense graphs, with $\chi(\bar{G})$ characterizing the degree of node connectivity, which is also intuitive due to the locality assumption: e.g. $O(n^{4/3})$ for a union of $k$-cliques, or $O(n^{5/3})$ for random and power law graphs, etc.---a quantity much smaller than the fundamental limit of $\Theta(n^2)$ for large $n$. This, for the first time, relates ranking
complexity to structural properties of the graph. We also report experimental
evaluations on different synthetic and real datasets, where our algorithm is
shown to outperform the state-of-the-art methods. Comment: In the Thirty-Third AAAI Conference on Artificial Intelligence, 2019.
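A toy sketch of the pipeline just described may help: embed the strong product of the item graph (using the normalized-Laplacian embedding, which the abstract notes is a special case of an orthonormal representation) and train an SVM on the observed pairwise preferences. This is an illustration, not the authors' Pref-Rank code; items are assumed to be labelled 0..n-1, and the embedding dimension and SVM settings are arbitrary.

    # Toy version of the Pref-Rank pipeline; hyperparameters are placeholders.
    import numpy as np
    import networkx as nx
    from sklearn.svm import LinearSVC

    def product_embedding(G, dim=8):
        # The strong product captures pairwise node relationships.
        P = nx.strong_product(G, G)
        nodes = list(P.nodes())
        L = nx.normalized_laplacian_matrix(P, nodelist=nodes).toarray()
        vals, vecs = np.linalg.eigh(L)
        emb = vecs[:, 1:dim + 1]          # drop the trivial bottom eigenvector
        return {u: emb[i] for i, u in enumerate(nodes)}

    def fit_pref_svm(G, prefs, dim=8):
        # prefs: list of (i, j) meaning item i was preferred over item j
        emb = product_embedding(G, dim)
        X, y = [], []
        for i, j in prefs:                # train on both orientations of a pair
            X += [emb[(i, j)], emb[(j, i)]]
            y += [1, -1]
        clf = LinearSVC(C=1.0).fit(np.array(X), np.array(y))
        return emb, clf

    def rank_items(G, emb, clf):
        # Aggregate the SVM's pairwise scores per item, then sort.
        score = np.zeros(G.number_of_nodes())
        for i in G.nodes():
            for j in G.nodes():
                if i != j:
                    score[i] += clf.decision_function(emb[(i, j)].reshape(1, -1))[0]
        return np.argsort(-score)         # best-ranked item first

The eigendecomposition of the n^2-node product graph is the bottleneck, so this only scales to toy graphs; the paper's guarantees concern general orthonormal representations, of which this Laplacian embedding is one special case.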
Safe Exploration for Optimization with Gaussian Processes
We consider sequential decision problems under uncertainty, where we seek to optimize an unknown function from noisy samples. This requires balancing exploration (learning about the objective) and exploitation (localizing the maximum), a problem well studied in the multi-armed bandit literature. In many applications, however, we require that the sampled function values exceed some prespecified "safety" threshold, a requirement that existing algorithms fail to meet. Examples include medical applications, where patient comfort must be guaranteed; recommender systems, which aim to avoid user dissatisfaction; and robotic control, where one seeks to avoid controls that cause physical harm to the platform. We tackle this novel, yet rich, set of problems under the assumption that the unknown function satisfies regularity conditions expressed via a Gaussian process prior. We develop an efficient algorithm called SafeOpt, and theoretically guarantee its convergence to a natural notion of optimum reachable under safety constraints. We evaluate SafeOpt on synthetic data, as well as on two real applications: movie recommendation and therapeutic spinal cord stimulation.
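To illustrate the mechanics, here is a condensed SafeOpt-style loop on a one-dimensional grid: maintain a Gaussian process posterior, certify a point safe once its lower confidence bound clears the threshold, and only ever sample certified points. This is a simplified sketch rather than the published algorithm, which further splits the safe set into potential maximizers and expanders; the kernel, the confidence parameter beta, and the toy objective are assumptions for the demo.

    # Condensed SafeOpt-flavoured loop; NOT the authors' implementation.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def safe_step(X_obs, y_obs, grid, threshold, beta=2.0):
        # Refit the GP posterior; optimizer=None keeps the assumed
        # length scale fixed instead of re-estimating it each step.
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2),
                                      alpha=1e-3, optimizer=None)
        gp.fit(X_obs, y_obs)
        mu, sd = gp.predict(grid, return_std=True)
        lcb, ucb = mu - beta * sd, mu + beta * sd
        safe = lcb >= threshold           # points currently certified safe
        if not safe.any():
            raise RuntimeError("no certified-safe point; need a safe seed")
        # Among certified points, sample where the GP is most uncertain.
        idx = np.flatnonzero(safe)[np.argmax(ucb[safe] - lcb[safe])]
        return grid[idx]

    # Toy run: f is hidden from the algorithm; start from a known safe seed.
    f = lambda x: np.sin(3 * x).ravel()
    grid = np.linspace(0.0, 2.0, 200).reshape(-1, 1)
    X = np.array([[0.4]]); y = f(X)       # f(0.4) > 0, so the seed is safe
    for _ in range(15):
        x_next = safe_step(X, y, grid, threshold=0.0)
        X = np.vstack([X, x_next.reshape(1, -1)]); y = np.append(y, f(x_next))
    print("best safe sample:", X[np.argmax(y)], "value:", y.max())

Because sin(3x) dips below the threshold past x ≈ 1.05, the certified region never expands across the unsafe valley, which is exactly the behaviour the safety requirement dictates.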