
    Signals on Graphs: Uncertainty Principle and Sampling

    In many applications, the observations can be represented as a signal defined over the vertices of a graph. The analysis of such signals requires the extension of standard signal processing tools. In this work, first, we provide a class of graph signals that are maximally concentrated on the graph domain and on its dual. Then, building on this framework, we derive an uncertainty principle for graph signals and illustrate the conditions for the recovery of band-limited signals from a subset of samples. We show an interesting link between the uncertainty principle and sampling and propose alternative signal recovery algorithms, including a generalization to frame-based reconstruction methods. After showing that the performance of signal recovery algorithms is significantly affected by the location of samples, we suggest and compare a few alternative sampling strategies. Finally, we provide the conditions for perfect recovery of a useful signal corrupted by sparse noise, showing that this problem is also intrinsically related to vertex-frequency localization properties. Comment: This article is the revised version submitted to the IEEE Transactions on Signal Processing in May 2016; the first revision was submitted in January 2016; the original manuscript was submitted in July 2015. The work includes 16 pages, 8 figures.
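    The recovery step described in this abstract can be illustrated with a short, hedged sketch. Assuming the graph Fourier basis is taken to be the eigenvectors of the combinatorial Laplacian and the signal lies in the span of the first K of them (one common convention; the paper's framework, frame-based methods, and sampling strategies are more general), band-limited recovery from a vertex subset reduces to a least-squares fit:

```python
import numpy as np

def laplacian(A):
    """Combinatorial Laplacian L = D - A of an undirected graph."""
    return np.diag(A.sum(axis=1)) - A

def recover_bandlimited(A, samples, y, K):
    """Least-squares recovery of a K-band-limited graph signal from samples.

    A        : (N, N) adjacency matrix
    samples  : indices of the observed vertices
    y        : observed values on those vertices
    K        : assumed bandwidth (number of low-frequency eigenvectors)
    """
    _, U = np.linalg.eigh(laplacian(A))   # graph Fourier basis (columns of U)
    Uk = U[:, :K]                         # low-frequency subspace
    # Solve min_c ||Uk[samples] c - y||_2; recovery is stable only when
    # Uk[samples] has full column rank (the sampling-set condition).
    c, *_ = np.linalg.lstsq(Uk[samples, :], y, rcond=None)
    return Uk @ c                         # reconstruction on all vertices

# Toy usage: a 12-vertex cycle, a 3-band-limited signal, 6 random samples.
rng = np.random.default_rng(0)
N, K = 12, 3
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1
_, U = np.linalg.eigh(laplacian(A))
x_true = U[:, :K] @ rng.standard_normal(K)
idx = rng.choice(N, size=6, replace=False)
x_hat = recover_bandlimited(A, idx, x_true[idx], K)
print(np.max(np.abs(x_hat - x_true)))     # ~0 when the sampling set is valid
```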

    Constructing minimum deflection fixture arrangements using frame invariant norms

    This paper describes a fixture planning method that minimizes object deflection under external loads. The method takes into account the natural compliance of the contacting bodies and applies to two-dimensional and three-dimensional quasirigid bodies. The fixturing method is based on a quality measure that characterizes the deflection of a fixtured object in response to unit-magnitude wrenches. The object deflection measure is defined in terms of frame-invariant rigid body velocity and wrench norms and is therefore frame invariant. The object deflection measure is applied to the planning of optimal fixture arrangements of polygonal objects. We describe minimum-deflection fixturing algorithms for these objects and make qualitative observations on the optimal arrangements generated by the algorithms. Concrete examples illustrate the minimum deflection fixturing method. Note to Practitioners: During fixturing, a workpiece must not only remain stable against external perturbations but must also stay within a specified tolerance in response to machining or assembly forces. This paper describes a fixture planning approach that minimizes object deflection under applied work loads. The paper describes how to take local material deformation effects into account, using a generic quasirigid contact model. Practical algorithms that compute the optimal fixturing arrangements of polygonal workpieces are described and examples are then presented.
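    A minimal sketch of the kind of worst-case deflection score described above, under simplifying assumptions: the contacts are summarized by a single aggregate planar stiffness matrix K mapping twists to wrenches, and frame invariance is handled by user-supplied symmetric positive-definite weighting matrices M_t and M_w for the twist and wrench norms. The paper's quasirigid contact model and frame-invariant norms are richer than this illustration.

```python
import numpy as np

def worst_case_deflection(K, M_t, M_w):
    """Largest weighted deflection ||delta||_{M_t} over unit wrenches ||w||_{M_w} = 1.

    With delta = K^{-1} w, this ratio is the largest generalized singular
    value of K^{-1} under the two weighted norms.
    """
    Lt = np.linalg.cholesky(M_t)          # ||delta||_{M_t} = ||Lt.T @ delta||_2
    Lw = np.linalg.cholesky(M_w)          # ||w||_{M_w}     = ||Lw.T @ w||_2
    T = Lt.T @ np.linalg.inv(K) @ np.linalg.inv(Lw.T)
    return np.linalg.norm(T, 2)           # spectral norm = worst-case ratio

# Toy usage: compare two planar fixture stiffnesses under identity weights.
K_a = np.diag([50.0, 50.0, 20.0])         # stiffer arrangement
K_b = np.diag([50.0, 10.0, 20.0])         # compliant in one direction
I3 = np.eye(3)
print(worst_case_deflection(K_a, I3, I3), worst_case_deflection(K_b, I3, I3))
```

    A fixture planner of this kind would evaluate such a score for each candidate contact arrangement and keep the arrangement with the smallest worst-case deflection.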

    Dynamics of continuous-time quantum walks in restricted geometries

    We study quantum transport on finite discrete structures and we model the process by means of continuous-time quantum walks. A direct and effective comparison between quantum and classical walks can be attained based on the average displacement of the walker as a function of time. Indeed, a fast growth of the average displacement can be advantageously exploited to build up efficient search algorithms. By means of analytical and numerical investigations, we show that the finiteness and the inhomogeneity of the substrate jointly weaken the quantum walk performance. We further highlight the interplay between the quantum-walk dynamics and the underlying topology by studying the temporal evolution of the transfer probability distribution and the lower bound of long time averages. Comment: 25 pages, 13 figures.
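    As an illustration of the quantities compared in this study, the following sketch simulates a continuous-time quantum walk on a small graph, assuming the Hamiltonian is the graph Laplacian (a standard choice) and the walker starts on a single vertex; it computes the transfer probability distribution and the average displacement as a function of time:

```python
import numpy as np
from scipy.linalg import expm
from scipy.sparse.csgraph import shortest_path

def ctqw_snapshot(A, start, t):
    """Transfer probability distribution of a CTQW at time t."""
    L = np.diag(A.sum(axis=1)) - A            # Laplacian as Hamiltonian
    psi0 = np.zeros(A.shape[0], dtype=complex)
    psi0[start] = 1.0
    psi_t = expm(-1j * L * t) @ psi0          # unitary evolution exp(-iLt)
    return np.abs(psi_t) ** 2                 # probability on each vertex

def average_displacement(A, start, t):
    """Mean graph distance of the walker from its starting vertex at time t."""
    d = shortest_path(A, unweighted=True)[start]   # hop distances from start
    return float(ctqw_snapshot(A, start, t) @ d)

# Toy usage: a 9-vertex path graph, walker launched from one end.
N = 9
A = np.zeros((N, N))
for i in range(N - 1):
    A[i, i + 1] = A[i + 1, i] = 1
for t in (0.5, 1.0, 2.0):
    print(t, average_displacement(A, 0, t))
```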

    A Spectral Graph Uncertainty Principle

    The spectral theory of graphs provides a bridge between classical signal processing and the nascent field of graph signal processing. In this paper, a spectral graph analogy to Heisenberg's celebrated uncertainty principle is developed. Just as the classical result provides a tradeoff between signal localization in time and frequency, this result provides a fundamental tradeoff between a signal's localization on a graph and in its spectral domain. Using the eigenvectors of the graph Laplacian as a surrogate Fourier basis, quantitative definitions of graph and spectral "spreads" are given, and a complete characterization of the feasibility region of these two quantities is developed. In particular, the lower boundary of the region, referred to as the uncertainty curve, is shown to be achieved by eigenvectors associated with the smallest eigenvalues of an affine family of matrices. The convexity of the uncertainty curve allows it to be found to within ε by a fast approximation algorithm requiring O(ε^(-1/2)) typically sparse eigenvalue evaluations. Closed-form expressions for the uncertainty curves for some special classes of graphs are derived, and an accurate analytical approximation for the expected uncertainty curve of Erdős-Rényi random graphs is developed. These theoretical results are validated by numerical experiments, which also reveal an intriguing connection between diffusion processes on graphs and the uncertainty bounds. Comment: 40 pages, 8 figures.
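    A hedged sketch of how points on such an uncertainty curve can be traced numerically, assuming (as in one common formulation close to the one described) that for a unit-norm signal x the spectral spread is x^T L x, the graph spread about a root vertex u0 is x^T P^2 x with P the diagonal matrix of hop distances to u0, and that boundary points come from minimal eigenvectors of the affine family M(alpha) = L - alpha * P^2:

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def uncertainty_curve(A, u0, alphas):
    """Sample (graph spread, spectral spread) points on the lower boundary."""
    L = np.diag(A.sum(axis=1)) - A                 # combinatorial Laplacian
    P2 = np.diag(shortest_path(A, unweighted=True)[u0] ** 2)
    points = []
    for a in alphas:
        w, V = np.linalg.eigh(L - a * P2)          # affine family M(alpha)
        x = V[:, 0]                                # minimal eigenvector (unit norm)
        points.append((x @ P2 @ x, x @ L @ x))     # (graph, spectral) spreads
    return points

# Toy usage: a 10-vertex cycle, rooted at vertex 0.
N = 10
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1
for g, s in uncertainty_curve(A, 0, alphas=np.linspace(0.0, 2.0, 5)):
    print(round(g, 3), round(s, 3))
```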

    Structural Variability from Noisy Tomographic Projections

    In cryo-electron microscopy, the 3D electric potentials of an ensemble of molecules are projected along arbitrary viewing directions to yield noisy 2D images. The volume maps representing these potentials typically exhibit a great deal of structural variability, which is described by their 3D covariance matrix. Typically, this covariance matrix is approximately low-rank and can be used to cluster the volumes or estimate the intrinsic geometry of the conformation space. We formulate the estimation of this covariance matrix as a linear inverse problem, yielding a consistent least-squares estimator. For n images of size N-by-N pixels, we propose an algorithm for calculating this covariance estimator with computational complexity O(nN^4 + √κ N^6 log N), where the condition number κ is empirically in the range 10-200. Its efficiency relies on the observation that the normal equations are equivalent to a deconvolution problem in 6D. This is then solved by the conjugate gradient method with an appropriate circulant preconditioner. The result is the first computationally efficient algorithm for consistent estimation of 3D covariance from noisy projections. It also compares favorably in runtime with respect to previously proposed non-consistent estimators. Motivated by the recent success of eigenvalue shrinkage procedures for high-dimensional covariance matrices, we introduce a shrinkage procedure that improves accuracy at lower signal-to-noise ratios. We evaluate our methods on simulated datasets and achieve classification results comparable to state-of-the-art methods in shorter running time. We also present results on clustering volumes in an experimental dataset, illustrating the power of the proposed algorithm for practical determination of structural variability. Comment: 52 pages, 11 figures.
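    The computational pattern described here (a consistent least-squares estimator obtained by running preconditioned conjugate gradients on the normal equations, with the forward operator applied only through matrix-vector products) can be sketched on a small generic problem. The diagonal preconditioner below merely stands in for the circulant preconditioner and 6D deconvolution structure of the actual covariance algorithm.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
m, p = 200, 50
A = rng.standard_normal((m, p))                  # generic forward (projection) operator
x_true = rng.standard_normal(p)
b = A @ x_true + 0.05 * rng.standard_normal(m)   # noisy observations

# Normal-equations operator x -> A^T A x, applied matrix-free.
AtA = LinearOperator((p, p), matvec=lambda x: A.T @ (A @ x))
rhs = A.T @ b

# Simple diagonal (Jacobi) preconditioner; in the paper a circulant
# preconditioner plays the analogous role of clustering the spectrum
# so that conjugate gradients converges quickly.
d = np.sum(A * A, axis=0)
M = LinearOperator((p, p), matvec=lambda x: x / d)

x_hat, info = cg(AtA, rhs, M=M, atol=1e-10, maxiter=500)
print(info, np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```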