6,429 research outputs found
Sparsity-Cognizant Total Least-Squares for Perturbed Compressive Sampling
Solving linear regression problems based on the total least-squares (TLS)
criterion has well-documented merits in various applications, where
perturbations appear both in the data vector as well as in the regression
matrix. However, existing TLS approaches do not account for sparsity possibly
present in the unknown vector of regression coefficients. On the other hand,
sparsity is the key attribute exploited by modern compressive sampling and
variable selection approaches to linear regression, which include noise in the
data, but do not account for perturbations in the regression matrix. The
present paper fills this gap by formulating and solving TLS optimization
problems under sparsity constraints. Near-optimum and reduced-complexity
suboptimum sparse (S-) TLS algorithms are developed to address the perturbed
compressive sampling (and the related dictionary learning) challenge, when
there is a mismatch between the true and adopted bases over which the unknown
vector is sparse. The novel S-TLS schemes also allow for perturbations in the
regression matrix of the least-absolute shrinkage and selection operator
(Lasso), and endow TLS approaches with the ability to cope with sparse,
under-determined "errors-in-variables" models. Interesting generalizations can
further exploit prior knowledge on the perturbations to obtain novel weighted
and structured S-TLS solvers. Analysis and simulations demonstrate the
practical impact of S-TLS in calibrating the mismatch effects of contemporary
grid-based approaches to cognitive radio sensing, and robust
direction-of-arrival estimation using antenna arrays.
Comment: 30 pages, 10 figures, submitted to IEEE Transactions on Signal Processing
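The alternating structure sketched in the abstract (a sparse-regression step coupled with a closed-form perturbation update) can be illustrated generically. The snippet below is a minimal sketch of one natural S-TLS-style variant, not the paper's near-optimum algorithm: it alternates a closed-form update of the matrix perturbation E (which for fixed x minimizes the Frobenius-regularized residual) with an ISTA-based Lasso step on the perturbed matrix A + E. All function names, the ISTA inner solver, and the default parameters are my own choices.

```python
import numpy as np

def soft_threshold(v, t):
    """Element-wise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_tls(A, y, lam=0.1, n_outer=50, n_ista=200):
    """Alternating minimization for a sparse-TLS-style objective:
    min over (x, E) of ||y - (A + E) x||^2 + ||E||_F^2 + lam * ||x||_1."""
    m, n = A.shape
    x = np.zeros(n)
    E = np.zeros_like(A)
    for _ in range(n_outer):
        # E-update: closed form for fixed x, E = r x^T / (1 + ||x||^2)
        r = y - A @ x
        E = np.outer(r, x) / (1.0 + x @ x)
        # x-update: Lasso on the perturbed matrix, solved by ISTA
        B = A + E
        L = np.linalg.norm(B, 2) ** 2 + 1e-12  # Lipschitz constant of the gradient
        for _ in range(n_ista):
            grad = B.T @ (B @ x - y)
            x = soft_threshold(x - grad / L, lam / L)
    return x, E
```

On noiseless data with a sparse ground truth, the recovered x concentrates on the true support, with the usual small l1 shrinkage bias on the nonzero entries.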
Induction of Interpretable Possibilistic Logic Theories from Relational Data
The field of Statistical Relational Learning (SRL) is concerned with learning
probabilistic models from relational data. Learned SRL models are typically
represented using some kind of weighted logical formulas, which make them
considerably more interpretable than those obtained by e.g. neural networks. In
practice, however, these models are often still difficult to interpret
correctly, as they can contain many formulas that interact in non-trivial ways
and weights do not always have an intuitive meaning. To address this, we
propose a new SRL method which uses possibilistic logic to encode relational
models. Learned models are then essentially stratified classical theories,
which explicitly encode what can be derived with a given level of certainty.
Compared to Markov Logic Networks (MLNs), our method is faster and produces
considerably more interpretable models.
Comment: Longer version of a paper appearing in IJCAI 201
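To illustrate what "derivable with a given level of certainty" means in a stratified theory, here is a toy propositional sketch of possibilistic entailment: the certainty of a query is the highest weight a such that the a-cut (all formulas with weight >= a) is consistent and classically entails the query. This brute-force, propositional version is a didactic illustration only, not the paper's relational learning method; representing formulas as Python predicates over truth assignments is my own device.

```python
import itertools

def possibilistic_entails(theory, query, atoms):
    """Return the highest certainty level at which `query` follows from
    `theory`, a list of (weight, formula) pairs. Formulas and the query are
    predicates over a dict mapping atom names to booleans; entailment is
    checked by brute force over all truth assignments."""
    worlds = [dict(zip(atoms, vals))
              for vals in itertools.product([False, True], repeat=len(atoms))]
    # Scan cuts from the most certain stratum downward; larger (lower) cuts
    # entail more, so the first success is the maximal certainty level.
    for a in sorted({w for w, _ in theory}, reverse=True):
        cut = [f for w, f in theory if w >= a]
        models = [wd for wd in worlds if all(f(wd) for f in cut)]
        if models and all(query(wd) for wd in models):
            return a
    return 0.0
```

For example, from the facts "bird" (certainty 1.0) and "bird implies flies" (certainty 0.8), "flies" is derivable exactly at level 0.8: the 1.0-cut alone does not entail it.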
Robust Near-Field 3D Localization of an Unaligned Single-Coil Agent Using Unobtrusive Anchors
The magnetic near-field provides a suitable means for indoor localization,
due to its insensitivity to the environment and strong spatial gradients. We
consider indoor localization setups consisting of flat coils, allowing for
convenient integration of the agent coil into a mobile device (e.g., a smart
phone or wristband) and flush mounting of the anchor coils to walls. In order
to study such setups systematically, we first express the Cram\'er-Rao lower
bound (CRLB) on the position error for unknown orientation and evaluate its
distribution within a square room of variable size, using 15 x 10 cm anchor
coils and a commercial NFC antenna at the agent. We thereby find that
cm-accuracy is achievable in a room of 10 x 10 x 3 m with 12 flat wall-mounted
anchors and 10 mW used for the generation of magnetic fields. Practically
achieving such estimation performance is, however, difficult because of the
non-convex 5D likelihood function. To that end, we propose a fast and accurate
weighted least squares (WLS) algorithm which is insensitive to initialization.
This is enabled by effectively eliminating the orientation nuisance parameter
in a rigorous fashion and scaling the individual anchor observations, leading
to a smoothed 3D cost function. Using WLS estimates to initialize a
maximum-likelihood (ML) solver yields accuracy near the theoretical limit in up
to 98% of cases, thus enabling robust indoor localization with unobtrusive
infrastructure, with a computational efficiency suitable for real-time
processing.
Comment: 7 pages, to be presented at IEEE PIMRC 201
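The paper's orientation-eliminating WLS algorithm is specific to magnetic near-field models, but the weighted-least-squares position step it feeds into can be illustrated generically. Below is a minimal Gauss-Newton multilateration sketch that assumes per-anchor distance estimates (e.g., inverted from near-field magnitude decay) are already available; the function name, the uniform default weights, and the mean-of-anchors initialization are illustrative choices, not the paper's method.

```python
import numpy as np

def wls_position(anchors, d, w=None, n_iter=50):
    """Gauss-Newton weighted least squares for a 3D position from
    per-anchor distance estimates d, given anchor positions (K x 3)."""
    p = anchors.mean(axis=0)                   # crude but robust initialization
    w = np.ones(len(d)) if w is None else w
    for _ in range(n_iter):
        diff = p - anchors                     # (K, 3)
        dist = np.linalg.norm(diff, axis=1)
        J = diff / dist[:, None]               # Jacobian of distances w.r.t. p
        r = d - dist                           # distance residuals
        W = np.diag(w)
        step, *_ = np.linalg.lstsq(J.T @ W @ J, J.T @ W @ r, rcond=None)
        p = p + step
    return p
```

With at least four non-coplanar anchors and exact distances, the iteration converges to the true position from the centroid initialization.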
Toeplitz Inverse Covariance-Based Clustering of Multivariate Time Series Data
Subsequence clustering of multivariate time series is a useful tool for
discovering repeated patterns in temporal data. Once these patterns have been
discovered, seemingly complicated datasets can be interpreted as a temporal
sequence of only a small number of states, or clusters. For example, raw sensor
data from a fitness-tracking application can be expressed as a timeline of a
select few actions (e.g., walking, sitting, running). However, discovering
these patterns is challenging because it requires simultaneous segmentation and
clustering of the time series. Furthermore, interpreting the resulting clusters
is difficult, especially when the data is high-dimensional. Here we propose a
new method of model-based clustering, which we call Toeplitz Inverse
Covariance-based Clustering (TICC). Each cluster in the TICC method is defined
by a correlation network, or Markov random field (MRF), characterizing the
interdependencies between different observations in a typical subsequence of
that cluster. Based on this graphical representation, TICC simultaneously
segments and clusters the time series data. We solve the TICC problem through
alternating minimization, using a variation of the expectation maximization
(EM) algorithm. We derive closed-form solutions to efficiently solve the two
resulting subproblems in a scalable way, through dynamic programming and the
alternating direction method of multipliers (ADMM), respectively. We validate
our approach by comparing TICC to several state-of-the-art baselines in a
series of synthetic experiments, and we then demonstrate on an automobile
sensor dataset how TICC can be used to learn interpretable clusters in
real-world scenarios.
Comment: This revised version fixes two small typos in the published version
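The dynamic-programming subproblem mentioned above (assigning each time point to a cluster while paying a penalty for every cluster switch) admits a compact Viterbi-style sketch. In the snippet below, `ll[t, k]` is assumed to hold the log-likelihood of the subsequence at time t under cluster k's MRF, and `beta` is the switching penalty; these names are mine, not the paper's notation, and the MRF-fitting (graphical lasso / ADMM) half of TICC is omitted.

```python
import numpy as np

def assign_segments(ll, beta):
    """Viterbi-style DP: choose one cluster per time step, maximizing the
    total log-likelihood minus `beta` for each cluster switch."""
    T, K = ll.shape
    cost = ll[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        best_prev = int(cost.argmax())
        switch = cost[best_prev] - beta       # switch from the best cluster
        new_cost = np.empty(K)
        for k in range(K):
            if cost[k] >= switch:             # staying beats switching
                back[t, k] = k
                new_cost[k] = cost[k]
            else:
                back[t, k] = best_prev
                new_cost[k] = switch
        cost = new_cost + ll[t]
    path = np.empty(T, dtype=int)              # backtrack the best sequence
    path[-1] = int(cost.argmax())
    for t in range(T - 1, 0, -1):
        path[t - 1] = back[t, path[t]]
    return path
```

A small `beta` merely discourages spurious one-step switches; a large `beta` forces long homogeneous segments, which is what makes the resulting timeline interpretable.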
Network Inference via the Time-Varying Graphical Lasso
Many important problems can be modeled as a system of interconnected
entities, where each entity is recording time-dependent observations or
measurements. In order to spot trends, detect anomalies, and interpret the
temporal dynamics of such data, it is essential to understand the relationships
between the different entities and how these relationships evolve over time. In
this paper, we introduce the time-varying graphical lasso (TVGL), a method of
inferring time-varying networks from raw time series data. We cast the problem
in terms of estimating a sparse time-varying inverse covariance matrix, which
reveals a dynamic network of interdependencies between the entities. Since
dynamic network inference is a computationally expensive task, we derive a
scalable message-passing algorithm based on the Alternating Direction Method of
Multipliers (ADMM) to solve this problem in an efficient way. We also discuss
several extensions, including a streaming algorithm to update the model and
incorporate new observations in real time. Finally, we evaluate our TVGL
algorithm on both real and synthetic datasets, obtaining interpretable results
and outperforming state-of-the-art baselines in terms of both accuracy and
scalability.
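As a concrete building block, here is a hedged sketch of the static graphical-lasso ADMM iteration that TVGL extends with temporal coupling terms between consecutive timesteps. The eigendecomposition step is the standard closed-form proximal operator for the log-determinant term; the function name, penalty parameter, and iteration counts are illustrative, and the temporal penalties of the actual TVGL algorithm are not included.

```python
import numpy as np

def graphical_lasso_admm(S, lam, rho=1.0, n_iter=200):
    """ADMM for sparse inverse covariance estimation:
    min over Theta of -logdet(Theta) + tr(S Theta) + lam * ||offdiag(Theta)||_1."""
    n = S.shape[0]
    Z = np.eye(n)
    U = np.zeros((n, n))
    for _ in range(n_iter):
        # Theta-update: closed-form proximal of the log-det term
        d, Q = np.linalg.eigh(rho * (Z - U) - S)
        theta_eig = (d + np.sqrt(d ** 2 + 4 * rho)) / (2 * rho)
        Theta = Q @ np.diag(theta_eig) @ Q.T
        # Z-update: element-wise soft-thresholding of the off-diagonals
        V = Theta + U
        Z = np.sign(V) * np.maximum(np.abs(V) - lam / rho, 0.0)
        np.fill_diagonal(Z, np.diag(V))       # diagonal is not penalized
        # dual update
        U = U + Theta - Z
    return Z
```

TVGL runs one such subproblem per timestep and couples the Theta variables across time with an additional penalty (e.g., element-wise or group deviations), which is what the message-passing ADMM scheme distributes.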
A Geometric Approach to Sound Source Localization from Time-Delay Estimates
This paper addresses the problem of sound-source localization from time-delay
estimates using arbitrarily-shaped non-coplanar microphone arrays. A novel
geometric formulation is proposed, together with a thorough algebraic analysis
and a global optimization solver. The proposed model is thoroughly described
and evaluated. The geometric analysis, stemming from the direct acoustic
propagation model, leads to necessary and sufficient conditions for a set of
time delays to correspond to a unique position in the source space. Such sets
of time delays are referred to as feasible sets. We formally prove that every
feasible set corresponds to exactly one position in the source space, whose
value can be recovered using a closed-form localization mapping. Therefore, we
seek the optimal feasible set of time delays given, as input, the received
microphone signals. This time delay estimation problem is naturally cast into a
programming task, constrained by the feasibility conditions derived from the
geometric analysis. A global branch-and-bound optimization technique is
proposed to solve the problem at hand, hence estimating the best set of
feasible time delays and, subsequently, localizing the sound source. Extensive
experiments with both simulated and real data are reported; we compare our
methodology to four state-of-the-art techniques. This comparison clearly shows
that the proposed method combined with the branch-and-bound algorithm
outperforms existing methods. This in-depth geometric understanding, together
with practical algorithms and encouraging results, opens several opportunities
for future work.
Comment: 13 pages, 2 figures, 3 tables, journal
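The idea of a closed-form localization mapping from a feasible set of time delays can be illustrated with the classical spherical-interpolation linearization: exact range differences to a reference microphone turn localization into a linear least-squares problem in the source position and its range to that reference. This is a textbook construction offered as an illustration, not the paper's specific mapping; it assumes at least five non-coplanar microphones and noise-free TDOAs, and the names below are mine.

```python
import numpy as np

def tdoa_localize(mics, tdoa, c=343.0):
    """Closed-form least-squares source position from TDOAs measured
    relative to mics[0]. Unknowns: source position x and its range r0 to
    mics[0]; each non-reference mic contributes one linear equation."""
    m0 = mics[0]
    d = c * np.asarray(tdoa)                  # range differences to mics[0]
    rows, rhs = [], []
    for mi, di in zip(mics[1:], d):
        # 2 (m0 - mi) . x - 2 di r0 = di^2 - ||mi||^2 + ||m0||^2
        rows.append(np.append(2 * (m0 - mi), -2 * di))
        rhs.append(di ** 2 - mi @ mi + m0 @ m0)
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return sol[:3]                            # sol[3] is the range r0
```

With noisy delays this linearization is only a first estimate; approaches like the branch-and-bound scheme described above instead search the feasible set directly.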