
    On the beneficial effect of noise in vertex localization

    A theoretical and experimental analysis of the effect of noise on the task of vertex identification in unknown shapes is presented. Shapes are seen as real functions of their closed boundary. An alternative global perspective of curvature is examined, providing insight into the process of noise-enabled vertex localization. The analysis reveals that noise facilitates the localization of certain vertices. The concept of noising is thus considered, and a relevant global method for localizing Global Vertices is investigated in relation to local methods in the presence of increasing noise. Theoretical analysis reveals that induced noise can indeed help localize certain vertices if combined with global descriptors. Experiments with noise and a comparison to local methods validate the theoretical results.
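    As a rough illustration of the setting only (not the paper's descriptor or its theoretical argument), the sketch below treats a boundary as a real function, injects Gaussian noise, and localizes the corners of a square from the peaks of a simple global descriptor. The square test shape, noise level, centroid-distance descriptor, and peak-picking window are all assumptions made for this sketch.

```python
# Toy illustration only: the centroid-distance descriptor and the square
# test shape are assumptions for this sketch, not the method of the paper.
import numpy as np

def square_boundary(samples_per_side=100):
    """Sample the boundary of the unit square, counter-clockwise."""
    t = np.linspace(0.0, 1.0, samples_per_side, endpoint=False)
    sides = [np.column_stack([t, np.zeros_like(t)]),        # bottom
             np.column_stack([np.ones_like(t), t]),         # right
             np.column_stack([1.0 - t, np.ones_like(t)]),   # top
             np.column_stack([np.zeros_like(t), 1.0 - t])]  # left
    return np.vstack(sides)

rng = np.random.default_rng(0)
boundary = square_boundary()
noisy = boundary + rng.normal(0.0, 0.01, boundary.shape)    # induced noise

# Global descriptor: distance of each boundary sample from the centroid,
# i.e. the shape viewed as a real function of its closed boundary.
centroid = noisy.mean(axis=0)
descriptor = np.linalg.norm(noisy - centroid, axis=1)

# Keep indices that are maxima of the descriptor within a +/- 25 sample
# window (simple non-maximum suppression over the closed boundary).
n = len(descriptor)
peaks = [i for i in range(n)
         if descriptor[i] == descriptor[[(i + k) % n for k in range(-25, 26)]].max()]
print(peaks)   # expected near 0, 100, 200, 300, i.e. the four corners
```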

    Learning parametric dictionaries for graph signals

    In sparse signal representation, the choice of a dictionary often involves a tradeoff between two desirable properties: the ability to adapt to specific signal data and a fast implementation of the dictionary. To sparsely represent signals residing on weighted graphs, an additional design challenge is to incorporate the intrinsic geometric structure of the irregular data domain into the atoms of the dictionary. In this work, we propose a parametric dictionary learning algorithm to design data-adapted, structured dictionaries that sparsely represent graph signals. In particular, we model graph signals as combinations of overlapping local patterns. We impose the constraint that each dictionary is a concatenation of subdictionaries, with each subdictionary being a polynomial of the graph Laplacian matrix, representing a single pattern translated to different areas of the graph. The learning algorithm adapts the patterns to a training set of graph signals. Experimental results on both synthetic and real datasets demonstrate that the dictionaries learned by the proposed algorithm are competitive with and often better than unstructured dictionaries learned by state-of-the-art numerical learning algorithms in terms of sparse approximation of graph signals. In contrast to the unstructured dictionaries, however, the dictionaries learned by the proposed algorithm feature localized atoms and can be implemented in a computationally efficient manner in signal processing tasks such as compression, denoising, and classification.
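    As a minimal sketch of the dictionary structure described above, the snippet below builds a concatenation of subdictionaries, each a polynomial of the graph Laplacian, so that every atom is a localized pattern translated across the graph. The toy path graph, polynomial degree, and random coefficients are assumptions for illustration; no learning step is shown.

```python
# Sketch of a parametric dictionary D = [D_1 ... D_S], where each
# subdictionary D_s = sum_k coeffs[s, k] * L^k is a polynomial of the
# graph Laplacian L. Graph, degree, and coefficients are placeholders.
import numpy as np

def polynomial_dictionary(L, coeffs):
    """Build D = [D_1 ... D_S] with D_s = sum_k coeffs[s, k] * L^k."""
    N = L.shape[0]
    K = coeffs.shape[1] - 1
    # Precompute powers of the Laplacian: L^0, L^1, ..., L^K.
    powers = [np.eye(N)]
    for _ in range(K):
        powers.append(powers[-1] @ L)
    subdicts = [sum(c * P for c, P in zip(row, powers)) for row in coeffs]
    return np.hstack(subdicts)          # N x (S*N) dictionary

# Toy example: path graph on 6 nodes, 2 subdictionaries of degree 3.
N, S, K = 6, 2, 3
A = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
L = np.diag(A.sum(axis=1)) - A          # combinatorial graph Laplacian
rng = np.random.default_rng(0)
D = polynomial_dictionary(L, rng.standard_normal((S, K + 1)))
print(D.shape)                          # (6, 12)
```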

    Rightsizing LISA

    The LISA science requirements and conceptual design have been fairly stable for over a decade. In the interest of reducing costs, the LISA Project at NASA has looked for simplifications of the architecture, at downsizing of subsystems, and at descopes of the entire mission. This is a natural activity of the formulation phase, and one that is particularly timely in the current NASA budgetary context. There is, and will continue to be, enormous pressure for cost reduction from both ESA and NASA, from reviewers, and from the broader research community. Here, the rationale for the baseline architecture is reviewed, and recent efforts to find simplifications and other reductions that might lead to savings are reported. A few possible simplifications have been found in the LISA baseline architecture. In the interest of exploring cost sensitivity, one moderate and one aggressive descope have been evaluated; the cost savings are modest and the loss of science is not. Comment: To be published in Classical and Quantum Gravity; Proceedings of the Seventh International LISA Symposium, Barcelona, Spain, 16-20 Jun. 2008; 10 pages, 1 figure, 3 tables.

    Cooperative Synchronization in Wireless Networks

    Synchronization is a key functionality in wireless networks, enabling a wide variety of services. We consider a Bayesian inference framework whereby network nodes can achieve phase and skew synchronization in a fully distributed way. In particular, under the assumption of Gaussian measurement noise, we derive two message passing methods (belief propagation and mean field), analyze their convergence behavior, and perform a qualitative and quantitative comparison with a number of competing algorithms. We also show that both methods can be applied in networks with and without master nodes. Our performance results are complemented by, and compared with, the relevant Bayesian Cramér-Rao bounds.
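    As a simplified, illustrative sketch (not the paper's algorithm), the snippet below estimates clock offsets from pairwise measurements corrupted by Gaussian noise, with a fixed master node, using iterative local averaging in the spirit of the distributed updates described above. The topology, noise level, and iteration count are assumptions; skew estimation, the exact message schedules, and the Bayesian Cramér-Rao comparison are omitted.

```python
# Simplified sketch of distributed clock-offset estimation under Gaussian
# measurement noise. Only phase (offset) is estimated; the network, noise
# level, and update rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
N = 5
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]   # assumed topology
true_offset = rng.normal(0.0, 1.0, N)
true_offset[0] = 0.0                     # node 0 acts as the master clock
sigma = 0.05                             # measurement noise std (assumed)

# Pairwise noisy offset-difference measurements d_ij ~ theta_i - theta_j.
meas = {(i, j): true_offset[i] - true_offset[j] + rng.normal(0.0, sigma)
        for i, j in edges}

est = np.zeros(N)
for _ in range(50):                      # iterative local averaging
    new = est.copy()
    for i in range(1, N):                # master keeps its reference value
        terms = []
        for (a, b), d in meas.items():
            if a == i:
                terms.append(est[b] + d)     # theta_i ~ theta_j + d_ij
            elif b == i:
                terms.append(est[a] - d)     # theta_i ~ theta_j - d_ji
        new[i] = np.mean(terms)
    est = new

print(np.max(np.abs(est - true_offset)))     # error on the order of sigma
```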