Distributed, simple and stable network localization
We propose a simple, stable, and distributed algorithm that directly
optimizes the nonconvex maximum-likelihood criterion for sensor network
localization, with no free parameters to tune. We reformulate the
problem to obtain a cost function with a Lipschitz-continuous gradient; this
reformulation enables a Majorization-Minimization (MM) approach based on quadratic upper
bounds that decouple across nodes, and the resulting algorithm is naturally
distributed, with all nodes working in parallel. Our method inherits the
stability of MM: each communication round decreases the cost function. Numerical
simulations indicate that the proposed approach outperforms the
state-of-the-art algorithm in both accuracy and communication cost.
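As an illustrative, hedged sketch (not the paper's exact surrogate), one parallel MM-style iteration with a node-decoupled quadratic majorizer of the range-based maximum-likelihood cost could look as follows; the function names, the averaging update, and the absence of anchor handling are assumptions made only for illustration.

# Minimal sketch, assuming a standard quadratic majorizer that decouples across nodes:
# each node moves to the average of targets placed at the measured range from its neighbors.
import numpy as np

def mm_step(x, edges, d):
    """x: (n, 2) current positions; edges: dict node -> list of neighbor nodes;
    d: dict (i, j) -> measured range (stored for both orderings)."""
    x_new = x.copy()
    for i, nbrs in edges.items():
        targets = []
        for j in nbrs:
            diff = x[i] - x[j]
            dist = np.linalg.norm(diff) + 1e-12             # guard against division by zero
            targets.append(x[j] + d[(i, j)] * diff / dist)  # point at range d_ij from x_j toward x_i
        x_new[i] = np.mean(targets, axis=0)                  # node-local minimizer of the quadratic bound
    return x_new

All node updates depend only on the previous iterate of single-hop neighbors, so the loop over nodes can run in parallel, matching the distributed structure described above.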
DCOOL-NET: Distributed cooperative localization for sensor networks
We present DCOOL-NET, a scalable distributed in-network algorithm for sensor
network localization based on noisy range measurements. DCOOL-NET operates by
parallel, collaborative message passing between single-hop neighbor sensors,
and involves simple computations at each node. It stems from an application of
the majorization-minimization (MM) framework to the nonconvex optimization
problem at hand, and capitalizes on a novel convex majorizer. The proposed
majorizer is endowed with several desirable properties and represents a key
contribution of this work. It is a more accurate match to the underlying
nonconvex cost function than popular MM quadratic majorizers, and is readily
amenable to distributed minimization via the alternating direction method of
multipliers (ADMM). Moreover, it allows for low-complexity, fast Nesterov
gradient methods to tackle the ADMM subproblems induced at each node. Computer
simulations show that DCOOL-NET achieves comparable or better sensor position
accuracy than a state-of-the-art method which, moreover, is not parallel.
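The abstract states that the per-node ADMM subproblems are tackled with fast Nesterov gradient methods; the following is a generic sketch of such an inner solver, assuming a smooth convex subproblem whose gradient f_grad and gradient Lipschitz constant L are supplied by the node. The interface is hypothetical and is not DCOOL-NET's actual code.

# Minimal sketch of a Nesterov fast-gradient inner solver for a smooth convex subproblem.
import numpy as np

def nesterov_solve(f_grad, x0, L, iters=100):
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_next = y - f_grad(y) / L                             # gradient step from the extrapolated point
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)       # momentum extrapolation
        x, t = x_next, t_next
    return x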
Simple and fast convex relaxation method for cooperative localization in sensor networks using range measurements
We address the sensor network localization problem given noisy range
measurements between pairs of nodes. We approach the non-convex
maximum-likelihood formulation via a known simple convex relaxation. We fully
exploit its favorable optimization properties to obtain an approach that
is completely distributed, has a simple implementation at each node, and
capitalizes on an optimal gradient method to attain fast convergence. We offer
both a parallel and an asynchronous variant, each with theoretical convergence
guarantees and iteration complexity analysis. Experimental results establish
leading performance: our algorithms improve on the accuracy of a comparable
state-of-the-art method by one order of magnitude, while using one order of
magnitude fewer communications.
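To illustrate the asynchronous flavor, here is a minimal sketch in which a single node wakes up per tick and takes a gradient step on its local term of a convex surrogate; the hinge-type range penalty, the step size, and the function names below are illustrative assumptions, not the paper's exact relaxation or update rule.

# Minimal sketch, assuming a convex surrogate that penalizes only edges stretched
# beyond the measured range: sum over edges of max(||x_i - x_j|| - d_ij, 0)^2.
import numpy as np

def local_grad(i, x, nbrs, d):
    g = np.zeros_like(x[i])
    for j in nbrs:
        diff = x[i] - x[j]
        dist = np.linalg.norm(diff) + 1e-12
        excess = max(dist - d[(i, j)], 0.0)      # convex hinge on the range violation
        g += 2.0 * excess * diff / dist
    return g

def async_localize(x, edges, d, steps=10000, lr=0.05, rng=np.random.default_rng(0)):
    nodes = list(edges.keys())
    for _ in range(steps):
        i = rng.choice(nodes)                    # only one node is active per tick
        x[i] -= lr * local_grad(i, x, edges[i], d)
    return x

In the parallel variant all nodes would apply local_grad simultaneously to the previous iterate; in the asynchronous one above, each activation uses whatever neighbor positions are currently available.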
Intrinsic Isometric Manifold Learning with Application to Localization
Data living on manifolds arise in many applications. Often this
results from an inherently latent low-dimensional system being observed through
higher-dimensional measurements. We show that under certain conditions, it is
possible to construct an intrinsic and isometric data representation, which
respects an underlying latent intrinsic geometry. Namely, we view the observed
data only as a proxy and learn the structure of a latent unobserved intrinsic
manifold, whereas common practice is to learn the manifold of the observed
data. For this purpose, we build a new metric and propose a method for its
robust estimation by assuming mild statistical priors and by using artificial
neural networks as a mechanism for metric regularization and parametrization.
We show successful application to unsupervised indoor localization in ad-hoc
sensor networks. Specifically, we show that our proposed method facilitates
accurate localization of a moving agent from imaging data it collects.
Importantly, our method is applied in the same way to two different imaging
modalities, thereby demonstrating its intrinsic and modality-invariant
capabilities.
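A minimal sketch of the intrinsic-metric idea follows, assuming the latent metric is approximated by inverses of local covariances of observed neighborhoods and then used as a Mahalanobis-like pairwise distance; the paper additionally regularizes and parametrizes the metric with a neural network, which this sketch omits, and all names and parameters are illustrative.

# Minimal sketch: estimate a local metric from observed neighborhoods and use it
# for pairwise distances that approximate the latent intrinsic geometry.
import numpy as np

def local_inverse_covariances(X, k=10, eps=1e-6):
    """X: (n, D) observations; returns an (n, D, D) stack of inverse local covariances."""
    n, D = X.shape
    inv_covs = np.empty((n, D, D))
    for i in range(n):
        idx = np.argsort(np.linalg.norm(X - X[i], axis=1))[:k]   # k nearest neighbors of sample i
        C = np.cov(X[idx].T) + eps * np.eye(D)                   # regularized local covariance
        inv_covs[i] = np.linalg.inv(C)
    return inv_covs

def intrinsic_distance(X, inv_covs, i, j):
    # symmetrized Mahalanobis-like distance between samples i and j
    diff = X[i] - X[j]
    M = 0.5 * (inv_covs[i] + inv_covs[j])
    return np.sqrt(diff @ M @ diff)

These pairwise distances can then be fed to any standard embedding method (e.g., multidimensional scaling) to obtain an approximately isometric low-dimensional representation.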