Noise Variance Estimation In Signal Processing
We present a new method of estimating noise variance. The method is applicable to both 1D and 2D signal processing. Its essence is the estimation of the scatter of normally distributed data with a high level of outliers; it is applicable to data in which the majority of data points contain no signal. The method is based on the shortest half sample: the mean of the shortest half sample (the shorth) and the location of the least median of squares are among the most robust measures of the location of the mode, and the length of the shortest half sample has been used as a measure of the scatter of uncontaminated data. We show that computing the lengths of several sub-samples of varying sizes provides the information needed to estimate both the scatter and the number of uncontaminated data points in a sample. We derive the system of equations to solve for the data scatter and the number of uncontaminated data points for the Gaussian distribution; the data scatter serves as the measure of the noise variance. The method can be extended to other distributions.
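The shortest-half-sample scale estimate that the abstract builds on can be sketched in a few lines. This is only the basic building block (the paper's joint estimation of scatter and the number of uncontaminated points from several sub-sample sizes is not reproduced here); the constant 2 × 0.6745 comes from the central 50% of a Gaussian spanning 2 × 0.6745 × σ.

```python
import random

def shortest_half_length(data, h=None):
    """Length of the shortest window containing h consecutive sorted points.

    With h = n // 2 + 1 this is the classic shortest-half ("shorth")
    window; its length is a robust measure of scatter."""
    x = sorted(data)
    n = len(x)
    if h is None:
        h = n // 2 + 1
    return min(x[i + h - 1] - x[i] for i in range(n - h + 1))

def robust_sigma(data):
    """Scatter estimate: for a Gaussian, the shortest half spans the
    central 50% of the probability mass, whose width is 2 * 0.6745 * sigma."""
    return shortest_half_length(data) / (2 * 0.6745)

random.seed(0)
sample = [random.gauss(0.0, 1.0) for _ in range(10000)]
print(round(robust_sigma(sample), 2))  # close to 1.0 for unit-variance data
```

Because only the tightest half of the sample is used, the estimate is insensitive to a large fraction of outliers; for contaminated data, however, the naive estimate is biased, which is precisely what motivates the paper's joint solution for scatter and the uncontaminated count.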
Simultaneous Codeword Optimization (SimCO) for Dictionary Update and Learning
We consider the data-driven dictionary learning problem. The goal is to seek
an over-complete dictionary from which every training signal can be best
approximated by a linear combination of only a few codewords. This task is
often achieved by iteratively executing two operations: sparse coding and
dictionary update. In the literature, there are two benchmark mechanisms to
update a dictionary. The first approach, such as the MOD algorithm, is
characterized by searching for the optimal codewords while fixing the sparse
coefficients. In the second approach, represented by the K-SVD method, one
codeword and the related sparse coefficients are simultaneously updated while
all other codewords and coefficients remain unchanged. We propose a novel
framework that generalizes the aforementioned two methods. The unique feature
of our approach is that one can update an arbitrary set of codewords and the
corresponding sparse coefficients simultaneously: when sparse coefficients are
fixed, the underlying optimization problem is similar to that in the MOD
algorithm; when only one codeword is selected for update, it can be proved that
the proposed algorithm is equivalent to the K-SVD method; and more importantly,
our method allows us to update all codewords and all sparse coefficients
simultaneously, hence the term simultaneous codeword optimization (SimCO).
Under the proposed framework, we design two algorithms, namely, primitive and
regularized SimCO. We implement these two algorithms based on a simple gradient
descent mechanism. Simulations are provided to demonstrate the performance of
the proposed algorithms compared with the two baseline algorithms, MOD and K-SVD. Results show that regularized SimCO is particularly appealing in terms of both learning performance and running speed.
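The core mechanical idea (gradient descent on the approximation error over an arbitrary subset of codewords) can be illustrated with a minimal sketch. This is a hedged simplification, not the paper's algorithm: actual SimCO also updates the corresponding sparse coefficients and optimizes over the manifold of unit-norm codewords, whereas here the coefficients stay fixed (the MOD-like limit case) and columns are simply renormalized after each step.

```python
import numpy as np

def update_codewords(Y, D, X, idx, iters=100):
    """Sketch of a SimCO-style sub-step: gradient descent on
    ||Y - D X||_F^2 over a chosen subset of codewords (columns of D,
    indexed by idx) with the sparse coefficients X held fixed.
    Columns are renormalized to unit norm after each step."""
    D = D.copy()
    # Conservative step size from the Lipschitz constant of the gradient.
    step = 1.0 / (2.0 * np.linalg.norm(X @ X.T, 2) + 1e-12)
    for _ in range(iters):
        G = -2.0 * (Y - D @ X) @ X.T           # gradient w.r.t. D
        D[:, idx] -= step * G[:, idx]          # move only the selected codewords
        D[:, idx] /= np.linalg.norm(D[:, idx], axis=0, keepdims=True)
    return D
```

Choosing `idx` as a single column mimics the K-SVD-style one-codeword update, while `idx = np.arange(D.shape[1])` updates all codewords simultaneously, which is the regime the abstract highlights.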
Pyroomacoustics: A Python package for audio room simulations and array processing algorithms
We present pyroomacoustics, a software package aimed at the rapid development
and testing of audio array processing algorithms. The content of the package
can be divided into three main components: an intuitive Python object-oriented
interface to quickly construct different simulation scenarios involving
multiple sound sources and microphones in 2D and 3D rooms; a fast C
implementation of the image source model for general polyhedral rooms to
efficiently generate room impulse responses and simulate the propagation
between sources and receivers; and finally, reference implementations of
popular algorithms for beamforming, direction finding, and adaptive filtering.
Together, they form a package with the potential to speed up the time to market
of new algorithms by significantly reducing the implementation overhead in the
performance evaluation step.
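The image source model at the heart of the simulator can be illustrated with a toy 1D version. This sketch shows the principle only, not the pyroomacoustics API or its fast C implementation for general polyhedral rooms: the source is mirrored across the two walls of a segment, and each image contributes a delayed impulse attenuated by the wall reflection coefficient per bounce and by distance.

```python
def image_source_rir_1d(L, src, mic, fs=16000, c=343.0, beta=0.8,
                        max_order=20, length=4096):
    """Toy 1D image-source room impulse response on the segment [0, L].

    Image positions are 2*k*L + src (|2k| reflections) and
    2*k*L - src (|2k - 1| reflections); each image adds an impulse
    delayed by distance/c and scaled by beta**bounces / distance."""
    rir = [0.0] * length
    for k in range(-max_order, max_order + 1):
        for pos, bounces in ((2 * k * L + src, abs(2 * k)),
                             (2 * k * L - src, abs(2 * k - 1))):
            dist = abs(pos - mic)
            n = int(round(dist / c * fs))
            if 0 <= n < length and dist > 0:
                rir[n] += beta ** bounces / dist
    return rir
```

The 3D polyhedral case enumerated by the package follows the same recipe with mirrored sources in three dimensions and per-wall visibility checks, which is where the C implementation earns its keep.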
Measuring Blood Glucose Concentrations in Photometric Glucometers Requiring Very Small Sample Volumes
Glucometers present an important self-monitoring tool for diabetes patients
and therefore must exhibit high accuracy as well as good usability features.
Based on an invasive, photometric measurement principle that drastically
reduces the volume of the blood sample needed from the patient, we present a
framework that is capable of dealing with small blood samples, while
maintaining the required accuracy. The framework consists of two major parts:
1) image segmentation; and 2) convergence detection. Step 1) is based on
iterative mode-seeking methods to estimate the intensity value of the region of
interest. We present several variations of these methods and give theoretical
proofs of their convergence. Our approach is able to deal with changes in the
number and position of clusters without any prior knowledge. Furthermore, we
propose a method based on sparse approximation to decrease the computational
load, while maintaining accuracy. Step 2) is achieved by employing temporal
tracking and prediction, herewith decreasing the measurement time, and, thus,
improving usability. Our framework is validated on several real data sets with
different characteristics. We show that we are able to estimate the underlying
glucose concentration from much smaller blood samples than the current state of the art, with sufficient accuracy according to the most recent ISO standards, and that measurement time is reduced significantly compared to state-of-the-art methods.
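The iterative mode-seeking in step 1) can be illustrated with plain 1D mean shift, a standard mode-seeking iteration in the same spirit as (but not identical to) the paper's variants: the estimate is repeatedly moved to the mean of the samples inside a window, converging to a local mode of the intensity distribution without any prior knowledge of the number of clusters.

```python
def mean_shift_mode(values, start, bandwidth=5.0, tol=1e-3, max_iter=200):
    """Simple 1D mean shift with a flat kernel: repeatedly replace the
    estimate with the mean of the samples within `bandwidth`, stopping
    once the shift falls below `tol`.  Converges to a local mode of the
    underlying intensity histogram."""
    x = float(start)
    for _ in range(max_iter):
        window = [v for v in values if abs(v - x) <= bandwidth]
        if not window:
            break
        new_x = sum(window) / len(window)
        if abs(new_x - x) < tol:
            return new_x
        x = new_x
    return x
```

Starting the iteration near the region of interest recovers that region's intensity mode even when a much larger background cluster is present, which is the property the segmentation step relies on.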
Source localization and denoising: a perspective from the TDOA space
In this manuscript, we formulate the problem of denoising Time Differences of
Arrival (TDOAs) in the TDOA space, i.e. the Euclidean space spanned by TDOA
measurements. The method consists of pre-processing the TDOAs with the purpose
of reducing the measurement noise. The complete set of TDOAs (i.e., TDOAs
computed at all microphone pairs) is known to form a redundant set, which lies
on a linear subspace in the TDOA space. Noise, however, prevents TDOAs from
lying exactly on this subspace. We therefore show that TDOA denoising can be
seen as a projection operation that suppresses the component of the noise that
is orthogonal to that linear subspace. We then generalize the projection
operator also to the cases where the set of TDOAs is incomplete. We
analytically show that this operator improves the localization accuracy, and we
further confirm this via simulation.
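For the complete-TDOA case, the denoising projection described in the abstract can be sketched directly. With arrival times t in R^n, the full TDOA vector is tau = C t, where each row of C holds a -1/+1 pair for one microphone pair; feasible TDOA sets therefore lie on the column space of C, and projecting noisy measurements onto it removes the orthogonal noise component. (The paper's generalization to incomplete TDOA sets is not reproduced here.)

```python
import numpy as np
from itertools import combinations

def tdoa_projector(n_mics):
    """Orthogonal projector onto the subspace of feasible complete TDOA
    vectors, i.e. the column space of the pairwise-difference matrix C."""
    pairs = list(combinations(range(n_mics), 2))
    C = np.zeros((len(pairs), n_mics))
    for row, (i, j) in enumerate(pairs):
        C[row, i], C[row, j] = -1.0, 1.0
    return C @ np.linalg.pinv(C)

def denoise_tdoas(tau_noisy, n_mics):
    """Project a noisy complete TDOA vector onto the feasible subspace."""
    return tdoa_projector(n_mics) @ tau_noisy
```

Because the projection is orthogonal, a noiseless TDOA vector is left unchanged, and the denoised error is never larger than the original measurement error, consistent with the accuracy improvement the manuscript proves analytically.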
Robust correlated and individual component analysis
Recovering correlated and individual components of two, possibly temporally misaligned, sets of data is a fundamental task in disciplines such as image, vision, and behavior computing, with application to problems such as multi-modal fusion (via correlated components), predictive analysis, and clustering (via the individual ones). Here, we study the extraction of correlated and individual components under real-world conditions, namely i) the presence of gross non-Gaussian noise and ii) temporally misaligned data. In this light, we propose a method for the Robust Correlated and Individual Component Analysis (RCICA) of two sets of data in the presence of gross, sparse errors. We furthermore extend RCICA to handle temporal incongruities arising in the data. To this end, two suitable optimization problems are solved. The generality of the proposed methods is demonstrated by applying them to 4 applications, namely i) heterogeneous face recognition, ii) multi-modal feature fusion for human behavior analysis (i.e., audio-visual prediction of interest and conflict), iii) face clustering, and iv) the temporal alignment of facial expressions. Experimental results on 2 synthetic and 7 real-world datasets indicate the robustness and effectiveness of the proposed methods on these application domains, outperforming other state-of-the-art methods in the field.
A Unified Framework for Sparse Non-Negative Least Squares using Multiplicative Updates and the Non-Negative Matrix Factorization Problem
We study the sparse non-negative least squares (S-NNLS) problem. S-NNLS
occurs naturally in a wide variety of applications where an unknown,
non-negative quantity must be recovered from linear measurements. We present a
unified framework for S-NNLS based on a rectified power exponential scale
mixture prior on the sparse codes. We show that the proposed framework
encompasses a large class of S-NNLS algorithms and provide a computationally
efficient inference procedure based on multiplicative update rules. Such update
rules are convenient for solving large sets of S-NNLS problems simultaneously,
which is required in contexts like sparse non-negative matrix factorization
(S-NMF). We provide theoretical justification for the proposed approach by
showing that the local minima of the objective function being optimized are
sparse and the S-NNLS algorithms presented are guaranteed to converge to a set
of stationary points of the objective function. We then extend our framework to
S-NMF, showing that our framework leads to many well known S-NMF algorithms
under specific choices of prior and providing a guarantee that a popular
subclass of the proposed algorithms converges to a set of stationary points of
the objective function. Finally, we study the performance of the proposed
approaches on synthetic and real-world data.
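The simplest member of the multiplicative-update family the framework encompasses is the classic Lee–Seung-style rule for non-negative least squares, sketched below. This is only an illustrative special case, not the paper's general inference procedure derived from the rectified power exponential scale mixture prior.

```python
import numpy as np

def nnls_multiplicative(A, b, iters=500, eps=1e-12):
    """Multiplicative update for min ||b - A x||^2 subject to x >= 0:
        x <- x * (A^T b) / (A^T A x).
    Assuming A and b are non-negative, every factor is non-negative, so
    non-negativity of x is preserved automatically; fixed points are
    stationary points of the objective."""
    x = np.ones(A.shape[1])
    AtA = A.T @ A
    Atb = A.T @ b
    for _ in range(iters):
        x *= Atb / (AtA @ x + eps)   # elementwise multiplicative step
    return x
```

Because the update is elementwise and matrix-free per iteration (the Gram matrix is precomputed), many S-NNLS problems sharing the same dictionary A can be solved in parallel by stacking the vectors x into a matrix, which is exactly the situation that arises in sparse NMF.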