
    Eigenvector Synchronization, Graph Rigidity and the Molecule Problem

    The graph realization problem has received a great deal of attention in recent years, due to its importance in applications such as wireless sensor networks and structural biology. In this paper, we extend previous work and propose the 3D-ASAP algorithm for the graph realization problem in $\mathbb{R}^3$, given a sparse and noisy set of distance measurements. 3D-ASAP is a divide-and-conquer, non-incremental and non-iterative algorithm that integrates local distance information into a global structure determination. Our approach starts by identifying, for every node, a subgraph of its 1-hop neighborhood graph that can be accurately embedded in its own coordinate system. In the noise-free case, the computed coordinates of the sensors in each patch must agree with their global positioning up to some unknown rigid motion, that is, up to translation, rotation and possibly reflection. In other words, to every patch there corresponds an element of the Euclidean group Euc(3) of rigid transformations in $\mathbb{R}^3$, and the goal is to estimate the group elements that will properly align all the patches in a globally consistent way. Furthermore, 3D-ASAP successfully incorporates information specific to the molecule problem in structural biology, in particular information on known substructures and their orientation. In addition, we propose 3D-SP-ASAP, a faster version of 3D-ASAP, which uses a spectral partitioning algorithm as a preprocessing step for dividing the initial graph into smaller subgraphs. Our extensive numerical simulations show that 3D-ASAP and 3D-SP-ASAP are very robust to high levels of noise in the measured distances and to sparse connectivity in the measurement graph, and compare favorably to similar state-of-the-art localization algorithms.
    Comment: 49 pages, 8 figures
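    The patch-alignment step lends itself to a compact numerical illustration. Below is a minimal sketch, not the 3D-ASAP pipeline itself, of eigenvector synchronization over O(3): given (here noise-free, complete) pairwise alignments R_i R_j^T between patches, the top three eigenvectors of the block alignment matrix recover the patch orientations up to a single global orthogonal transform. The setup is an illustrative assumption; 3D-ASAP additionally handles noisy and missing pairs, translations, and molecule-specific constraints.

```python
# Minimal sketch of eigenvector synchronization over O(3): recover absolute
# patch orientations (up to one global orthogonal transform) from pairwise
# alignments. Illustrative only, not the full 3D-ASAP algorithm.
import numpy as np

def random_orthogonal(rng):
    """Random 3x3 orthogonal matrix (rotation or reflection)."""
    q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    return q

def synchronize(pairwise, n):
    """pairwise[(i, j)] holds the measured alignment R_i R_j^T for i < j."""
    H = np.zeros((3 * n, 3 * n))
    for (i, j), Rij in pairwise.items():
        H[3*i:3*i+3, 3*j:3*j+3] = Rij
        H[3*j:3*j+3, 3*i:3*i+3] = Rij.T
    # Top 3 eigenvectors of the block alignment matrix span the column
    # space of the stacked true orientations.
    _, vecs = np.linalg.eigh(H)
    V = vecs[:, -3:]                                 # shape (3n, 3)
    estimates = []
    for i in range(n):
        # Project each 3x3 block onto the nearest orthogonal matrix.
        u, _, vt = np.linalg.svd(V[3*i:3*i+3, :])
        estimates.append(u @ vt)
    return estimates

rng = np.random.default_rng(0)
n = 10
truth = [random_orthogonal(rng) for _ in range(n)]
pairwise = {(i, j): truth[i] @ truth[j].T            # noise-free for brevity
            for i in range(n) for j in range(i + 1, n)}
est = synchronize(pairwise, n)
# Up to one global orthogonal transform G, est[i] should equal truth[i] @ G.
G = truth[0].T @ est[0]
print(max(np.linalg.norm(truth[i] @ G - est[i]) for i in range(n)))
```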

    Distributed Maximum Likelihood Sensor Network Localization

    We propose a class of convex relaxations to solve the sensor network localization problem, based on a maximum likelihood (ML) formulation. This class, as well as the tightness of the relaxations, depends on the noise probability density function (PDF) of the collected measurements. We derive a computationally efficient edge-based version of this ML convex relaxation class and design a distributed algorithm that enables the sensor nodes to solve these edge-based convex programs locally by communicating only with their close neighbors. This algorithm relies on the alternating direction method of multipliers (ADMM); it converges to the centralized solution, can run asynchronously, and is resilient to computation errors. Finally, we compare our proposed distributed scheme with other available methods, both analytically and numerically, and we argue for the added value of ADMM, especially for large-scale networks.
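    For intuition about the kind of convex programs involved, the sketch below sets up a generic (Biswas-Ye style) semidefinite relaxation of range-based localization and solves it centrally with cvxpy. This is not the paper's ML-derived, edge-based relaxation class, nor its distributed ADMM solver; the anchor layout, noise level and L1 residual penalty are illustrative assumptions.

```python
# Hypothetical, centralized illustration of a convex (SDP) relaxation for
# range-based sensor network localization. The paper's edge-based ML
# relaxations and distributed ADMM solver are not reproduced here.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
m = 5                                            # unknown sensors in the plane
anchors = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
truth = rng.uniform(0.2, 0.8, size=(m, 2))       # ground-truth positions
sigma = 0.02                                     # assumed Gaussian range noise

def meas(p, q):
    return np.linalg.norm(p - q) + sigma * rng.standard_normal()

# Lifted variable Z = [[I_2, X], [X^T, Y]], with Y relaxing X^T X.
Z = cp.Variable((2 + m, 2 + m), PSD=True)
X, Y = Z[:2, 2:], Z[2:, 2:]

residuals = []
for i in range(m):                               # anchor-to-sensor ranges
    for a in anchors:
        d = meas(a, truth[i])
        residuals.append(a @ a - 2 * (X[:, i] @ a) + Y[i, i] - d**2)
for i in range(m):                               # sensor-to-sensor ranges
    for j in range(i + 1, m):
        d = meas(truth[i], truth[j])
        residuals.append(Y[i, i] - 2 * Y[i, j] + Y[j, j] - d**2)

prob = cp.Problem(cp.Minimize(sum(cp.abs(r) for r in residuals)),
                  [Z[:2, :2] == np.eye(2)])
prob.solve(solver=cp.SCS)
estimate = Z.value[:2, 2:].T                     # estimated sensor positions
print(np.linalg.norm(estimate - truth, axis=1))  # per-sensor localization error
```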

    Gain control of saccadic eye movements is probabilistic

    Saccades are rapid eye movements that orient the visual axis toward objects of interest to allow their processing by the central, high-acuity retina. Our ability to collect visual information efficiently relies on saccadic accuracy, which is limited by a combination of uncertainty in the location of the target and motor noise. It has been observed that saccades have a systematic tendency to fall short of their intended targets, and it has been suggested that this bias originates from a cost function that overly penalizes hypermetric errors. Here we tested this hypothesis by systematically manipulating the positional uncertainty of saccadic targets. We found that increasing uncertainty produced not only a larger spread of the saccadic endpoints but also more hypometric errors and a systematic bias toward the average of target locations in a given block, revealing that prior knowledge was integrated into saccadic planning. Moreover, by examining how variability and bias co-varied across conditions, we estimated the asymmetry of the cost function and found that it was related to individual differences in the additional time needed to program secondary saccades for correcting hypermetric errors, relative to hypometric ones. Taken together, these findings reveal that the saccadic system uses a probabilistic, Bayesian control strategy to compensate for uncertainty in a statistically principled way and to minimize the expected cost of saccadic errors.
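    The link between an asymmetric cost function and hypometric saccades can be made concrete with a toy calculation. The sketch below, with assumed parameter values, draws Gaussian endpoint noise around a candidate aim point, weights overshoot (hypermetric) errors more heavily than undershoots, and searches for the aim point with minimal expected cost; the optimum lands short of the target, reproducing the undershoot bias qualitatively.

```python
# Toy sketch (assumed parameters) of why an asymmetric endpoint cost makes
# undershooting the optimal aiming strategy under Gaussian motor noise.
import numpy as np

target = 10.0           # target eccentricity (deg); illustrative value
sigma = 1.0             # endpoint noise std (deg); illustrative value
over_penalty = 2.0      # assumed extra weight on hypermetric (overshoot) errors

rng = np.random.default_rng(0)
noise = sigma * rng.standard_normal(200_000)   # common noise sample for all aims

def expected_cost(aim):
    """Monte Carlo estimate of the expected asymmetric quadratic cost."""
    err = aim + noise - target                 # signed endpoint error
    weights = np.where(err > 0, over_penalty, 1.0)
    return np.mean(weights * err**2)

aims = np.linspace(8.0, 11.0, 121)
costs = [expected_cost(a) for a in aims]
best = aims[int(np.argmin(costs))]
print(f"optimal aim ~ {best:.2f} deg for a {target:.0f} deg target, "
      f"i.e. an undershoot of about {target - best:.2f} deg")
```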