Distributed Maximum Likelihood Sensor Network Localization
We propose a class of convex relaxations to solve the sensor network
localization problem, based on a maximum likelihood (ML) formulation. This
class, as well as the tightness of the relaxations, depends on the noise
probability density function (PDF) of the collected measurements. We derive a
computationally efficient edge-based version of this ML convex relaxation class
and we design a distributed algorithm that enables the sensor nodes to solve
these edge-based convex programs locally by communicating only with their close
neighbors. The algorithm relies on the alternating direction method of
multipliers (ADMM); it converges to the centralized solution, can run
asynchronously, and is resilient to computation errors. Finally, we compare our
proposed distributed scheme with other available methods, both analytically and
numerically, and we argue for the added value of ADMM, especially for
large-scale networks.
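
As a rough illustration of the communication pattern only (not the paper's ML
relaxation), the sketch below runs edge-based consensus ADMM on a toy graph:
each node keeps a scalar estimate, each edge carries a consensus variable and
duals, and nodes exchange values only with one-hop neighbors. The quadratic
local term and the choices of rho and n_iters are illustrative assumptions.

```python
# Edge-based consensus ADMM sketch. Each node i minimizes a stand-in local
# cost 0.5*(x_i - t_i)^2 subject to x_i = z_e on every incident edge e.
# This is NOT the paper's ML relaxation; it only shows the decentralized
# x-update / z-update / dual-update pattern.
import numpy as np

def edge_admm(t, edges, rho=1.0, n_iters=200):
    n = len(t)
    x = np.array(t, dtype=float)                    # per-node primal variables
    z = {e: 0.0 for e in edges}                     # per-edge consensus variables
    lam = {(e, v): 0.0 for e in edges for v in e}   # one dual per (edge, endpoint)
    deg = [sum(1 for e in edges if i in e) for i in range(n)]
    for _ in range(n_iters):
        # x-update: closed form for the quadratic local objective; each node
        # only needs z and lam on its own incident edges.
        for i in range(n):
            msg = sum(rho * z[e] - lam[(e, i)] for e in edges if i in e)
            x[i] = (t[i] + msg) / (1.0 + rho * deg[i])
        # z-update: each edge averages its two endpoint estimates (plus duals).
        for e in edges:
            i, j = e
            z[e] = 0.5 * (x[i] + x[j]) + (lam[(e, i)] + lam[(e, j)]) / (2 * rho)
        # dual ascent on the consensus constraints x_v = z_e.
        for e in edges:
            for v in e:
                lam[(e, v)] += rho * (x[v] - z[e])
    return x

# Path graph 0-1-2: all estimates converge to the network-wide minimizer
# (here the mean of t), mirroring convergence to the centralized solution.
print(edge_admm([0.0, 1.0, 5.0], [(0, 1), (1, 2)]))
```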
Lifted Regression/Reconstruction Networks
In this work we propose lifted regression/reconstruction networks (LRRNs),
which combine lifted neural networks with a guaranteed Lipschitz continuity
property for the output layer. Lifted neural networks explicitly optimize an
energy model to infer the unit activations and therefore---in contrast to
standard feed-forward neural networks---allow bidirectional feedback between
layers. So far lifted neural networks have been modelled around standard
feed-forward architectures. We propose to take further advantage of the
feedback property by letting the layers simultaneously perform regression and
reconstruction. The resulting lifted network architecture makes it possible to
control the desired amount of Lipschitz continuity, an important feature for
obtaining adversarially robust regression and classification methods. We
analyse and numerically demonstrate applications for unsupervised and
supervised learning.
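
To make the "inference as energy minimization" idea concrete, here is a hedged
sketch of lifted inference for a two-layer network: the activations z1 and z2
are treated as free variables and found by projected gradient descent on a
layer-wise quadratic energy, with a nonnegativity constraint standing in for
ReLU. The energy, step size, and iteration count are assumptions for
illustration, not the paper's exact LRRN model. Note that the update for z1
depends on z2, which is exactly the bidirectional feedback a plain
feed-forward pass lacks.

```python
# Lifted inference sketch: minimize a layer-wise energy over the activations
# rather than computing them by a single forward pass.
import numpy as np

def lifted_inference(x, W1, W2, lr=0.02, n_iters=1000):
    """Minimize E = 0.5||z1 - W1 x||^2 + 0.5||z2 - W2 z1||^2 with z1, z2 >= 0."""
    z1 = np.maximum(W1 @ x, 0.0)   # feed-forward pass only as initialization
    z2 = np.maximum(W2 @ z1, 0.0)
    for _ in range(n_iters):
        # Gradient w.r.t. z1 couples both layers: the second term feeds the
        # top layer's residual back into the lower layer.
        g1 = (z1 - W1 @ x) - W2.T @ (z2 - W2 @ z1)
        g2 = z2 - W2 @ z1
        z1 = np.maximum(z1 - lr * g1, 0.0)  # projected gradient step (z1 >= 0)
        z2 = np.maximum(z2 - lr * g2, 0.0)
    return z1, z2

rng = np.random.default_rng(0)
x = rng.normal(size=4)
W1, W2 = rng.normal(size=(6, 4)), rng.normal(size=(3, 6))
z1, z2 = lifted_inference(x, W1, W2)
```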
MINA: Convex Mixed-Integer Programming for Non-Rigid Shape Alignment
We present a convex mixed-integer programming formulation for non-rigid shape matching. To this end, we propose a novel shape deformation model based on an efficient low-dimensional discrete model, so that finding a globally optimal solution is tractable in (most) practical cases. Our approach combines several favourable properties: it is independent of the initialisation, it is much more efficient to solve to global optimality than analogous quadratic assignment problem formulations, and it is highly flexible in terms of the variants of matching problems it can handle. Experimentally, we demonstrate that our approach outperforms existing methods for sparse shape matching and that it can be used to initialise dense shape matching methods, and we showcase its flexibility on several examples.
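
As a loose sketch of the mixed-integer matching skeleton (emphatically not the
paper's MINA formulation, which adds a convex low-dimensional deformation
model on top), the toy program below casts point correspondence as a small
integer linear program with scipy.optimize.milp: binary variables y[i, j]
select matches and squared distances serve as costs. The point sets are made
up for illustration.

```python
# Toy correspondence ILP: match each source point to exactly one target point
# (and vice versa) at minimum total squared distance.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
tgt = np.array([[1.1, 0.1], [0.1, 0.9], [-0.1, 0.0]])
n = len(src)

# cost[i*n + j] = squared distance between source i and target j.
cost = ((src[:, None, :] - tgt[None, :, :]) ** 2).sum(-1).ravel()

# Row constraints: sum_j y[i, j] = 1; column constraints: sum_i y[i, j] = 1.
A_rows = np.kron(np.eye(n), np.ones(n))
A_cols = np.kron(np.ones(n), np.eye(n))
cons = LinearConstraint(np.vstack([A_rows, A_cols]), lb=1.0, ub=1.0)

res = milp(c=cost, constraints=cons,
           integrality=np.ones(n * n), bounds=Bounds(0, 1))
match = res.x.reshape(n, n).round().astype(int)
print(match)   # permutation matrix giving the optimal correspondence
```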