Semi-Supervised Sound Source Localization Based on Manifold Regularization
Conventional speaker localization algorithms, based merely on the received
microphone signals, are often sensitive to adverse conditions such as high
reverberation or a low signal-to-noise ratio (SNR). In some scenarios, e.g. in
meeting rooms or cars, it can be assumed that the source position is confined
to a predefined area, and the acoustic parameters of the environment are
approximately fixed. Such scenarios give rise to the assumption that the
acoustic samples from the region of interest have a distinct geometrical
structure. In this paper, we show that the high dimensional acoustic samples
indeed lie on a low dimensional manifold and can be embedded into a low
dimensional space. Motivated by this result, we propose a semi-supervised
source localization algorithm which recovers the inverse mapping between the
acoustic samples and their corresponding locations. The idea is to use an
optimization framework based on manifold regularization, that involves
smoothness constraints of possible solutions with respect to the manifold. The
proposed algorithm, termed Manifold Regularization for Localization (MRL), is
implemented in an adaptive manner. The initialization is conducted with only a
few labelled samples attached to their respective source locations, and then
the system is gradually adapted as new unlabelled samples (with unknown source
locations) are received. Experimental results show superior localization
performance when compared with a recently presented algorithm based on a
manifold learning approach and with the generalized cross-correlation (GCC)
algorithm as a baseline.
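The manifold-regularization framework the abstract refers to is in the spirit of Laplacian-regularized least squares: a data-fit term on the few labelled samples plus a smoothness penalty over a neighbourhood graph built from all (labelled and unlabelled) samples. The following is a minimal batch sketch of that idea, not the paper's adaptive MRL algorithm; the kernel, graph construction, and regularization weights are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, sigma=0.5):
    """Gaussian kernel matrix between two sets of row vectors."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_laprls(X, y_labeled, gamma_a=1e-3, gamma_i=1e-2, sigma=0.5, k=5):
    """Laplacian-regularized least squares (LapRLS) sketch.

    X: all acoustic samples; the first len(y_labeled) rows are labelled
    with source locations y_labeled, the rest are unlabelled.
    Returns a function mapping new samples to predicted locations.
    """
    n, l = len(X), len(y_labeled)
    K = rbf_kernel(X, X, sigma)
    # k-nearest-neighbour graph over labelled + unlabelled samples
    W = np.zeros((n, n))
    for i in range(n):
        nn = np.argsort(((X - X[i]) ** 2).sum(-1))[1:k + 1]
        W[i, nn] = 1.0
    W = np.maximum(W, W.T)                 # symmetrize the adjacency
    L = np.diag(W.sum(axis=1)) - W         # unnormalized graph Laplacian
    J = np.zeros((n, n))
    J[:l, :l] = np.eye(l)                  # selects the labelled rows
    Y = np.zeros(n)
    Y[:l] = y_labeled
    # Solve (J K + gamma_a * l * I + gamma_i * L K) alpha = Y
    A = J @ K + gamma_a * l * np.eye(n) + gamma_i * (L @ K)
    alpha = np.linalg.solve(A, Y)
    return lambda Xq: rbf_kernel(Xq, X, sigma) @ alpha
```

The Laplacian term penalizes mappings that vary rapidly between neighbouring samples on the graph, which is how the unlabelled samples shape the solution even though they carry no source locations.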
Seeing into Darkness: Scotopic Visual Recognition
Images are formed by counting how many photons traveling from a given set of
directions hit an image sensor during a given time interval. When photons are
few and far between, the concept of `image' breaks down and it is best to
consider directly the flow of photons. Computer vision in this regime, which we
call `scotopic', is radically different from the classical image-based paradigm
in that visual computations (classification, control, search) have to take
place while the stream of photons is captured and decisions may be taken as
soon as enough information is available. The scotopic regime is important for
biomedical imaging, security, astronomy and many other fields. Here we develop
a framework that allows a machine to classify objects with as few photons as
possible, while maintaining the error rate below an acceptable threshold. A
dynamic and asymptotically optimal speed-accuracy tradeoff is a key feature of
this framework. We propose and study an algorithm to optimize the tradeoff of a
convolutional network directly from low-light images, and we evaluate it on
simulated images from standard datasets. Surprisingly, scotopic systems can
achieve classification performance comparable to that of traditional vision
systems while using less than 0.1% of the photons in a conventional image. In
addition, we
demonstrate that our algorithms work even when the illuminance of the
environment is unknown and varying. Finally, we outline a spiking neural
network coupled with photon-counting sensors as a power-efficient hardware
realization of scotopic algorithms.
Comment: 23 pages, 6 figures