
    Orthogonally Constrained Sparse Approximations with Applications to Geometry Processing

    Compressed manifold modes are solutions to an optimisation problem involving the $\ell_1$ norm and the orthogonality condition $X^T M X = I$. Such functions can be used in geometry processing as a basis for the function space of a mesh and are related to the Laplacian eigenfunctions. Compressed manifold modes and other alternatives to the Laplacian eigenfunctions are all special cases of generalised manifold harmonics, introduced here as solutions to a more general problem. An important property of the Laplacian eigenfunctions is that they commute with isometry. A definition of isometry between meshes is given and it is proved that compressed manifold modes also commute with isometry. The requirements for generalised manifold harmonics to commute with isometry are explored. A variety of alternative basis functions are tested for their ability to reconstruct specific functions; it is observed that the function type has more impact than the basis type. The bases are also tested for their ability to reconstruct functions transformed by a functional map; it is observed that some bases work better for different shape collections. The Stiefel manifold is given by the set of matrices $X \in \mathbb{R}^{n \times k}$ such that $X^T M X = I$, with $M = I$. Properties and results are generalised to the $M \neq I$ case. A sequential algorithm for optimisation on the generalised Stiefel manifold is given and applied to the calculation of compressed manifold modes. This involves a smoothing of the $\ell_1$ norm. Laplacian eigenfunctions can be approximated by solving an eigenproblem restricted to a subspace. It is proved that these restricted eigenfunctions also commute with isometry. Finally, a method for the approximation of compressed manifold modes is given. This combines the method of fast approximation of Laplacian eigenfunctions with the ADMM solution to the compressed manifold mode problem. A significant improvement is made to the speed of calculation.
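    As a concrete illustration, the sketch below sets up the two ingredients named in the abstract: a smoothed $\ell_1$ penalty combined with the Dirichlet energy, and a re-orthonormalisation step enforcing $X^T M X = I$ on the generalised Stiefel manifold. The function names, the smoothing parameter eps, and the Cholesky-based projection are illustrative assumptions, not the sequential algorithm from the thesis.

```python
# Minimal sketch (assumed names/parameters): the compressed manifold mode
# objective with a smoothed l1 term, plus a projection back onto the
# generalised Stiefel manifold {X : X^T M X = I}.
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def smoothed_l1(X, eps=1e-4):
    # Differentiable surrogate for ||X||_1: sum of sqrt(x^2 + eps).
    return np.sum(np.sqrt(X ** 2 + eps))

def cmm_objective(X, L, mu, eps=1e-4):
    # Dirichlet energy plus sparsity: tr(X^T L X) + mu * ||X||_1 (smoothed).
    return np.trace(X.T @ L @ X) + mu * smoothed_l1(X, eps)

def project_generalised_stiefel(X, M):
    # Re-orthonormalise the columns of X so that X^T M X = I, using the
    # Cholesky factor R of X^T M X (then X @ inv(R) satisfies the constraint).
    R = cholesky(X.T @ M @ X, lower=False)   # X^T M X = R^T R
    return solve_triangular(R, X.T, trans='T', lower=False).T
```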

    Localized Manifold Harmonics for Spectral Shape Analysis

    The use of Laplacian eigenfunctions is ubiquitous in a wide range of computer graphics and geometry processing applications. In particular, Laplacian eigenbases allow generalizing classical Fourier analysis to manifolds. A key drawback of such bases is their inherently global nature, as the Laplacian eigenfunctions carry the geometric and topological structure of the entire manifold. In this paper, we introduce a new framework for local spectral shape analysis. We show how to efficiently construct localized orthogonal bases by solving an optimization problem that in turn can be posed as the eigendecomposition of a new operator obtained by a modification of the standard Laplacian. We study the theoretical and computational aspects of the proposed framework and showcase our new construction on the classical problems of shape approximation and correspondence. We obtain significant improvement compared to classical Laplacian eigenbases as well as other alternatives for constructing localized bases.
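    The construction can be pictured with a short sketch: adding a large diagonal penalty outside a region of interest to the Laplacian and taking the lowest eigenvectors of the modified operator yields basis functions concentrated in that region. The penalty form mu * diag(1 - u) and the solver settings below are assumptions for illustration, not the exact operator derived in the paper.

```python
# Minimal sketch (assumed penalty form): localized basis functions as the
# lowest eigenvectors of a Laplacian modified by a region penalty.
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

def localized_basis(L, u, k=10, mu=100.0):
    # L: sparse n x n Laplacian; u: indicator vector (1 inside the region).
    L_mod = L + diags(mu * (1.0 - u))        # penalise energy outside the region
    vals, vecs = eigsh(L_mod, k=k, sigma=0)  # k smallest eigenpairs (shift-invert)
    return vecs                              # columns: localized harmonics
```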

    Eyeriss v2: A Flexible Accelerator for Emerging Deep Neural Networks on Mobile Devices

    A recent trend in DNN development is to extend the reach of deep learning applications to platforms that are more resource- and energy-constrained, e.g., mobile devices. These endeavors aim to reduce the DNN model size and improve the hardware processing efficiency, and have resulted in DNNs that are much more compact in their structures and/or have high data sparsity. These compact or sparse models differ from traditional large ones in that there is much more variation in their layer shapes and sizes, and they often require specialized hardware to exploit sparsity for performance improvement. Thus, many DNN accelerators designed for large DNNs do not perform well on these models. In this work, we present Eyeriss v2, a DNN accelerator architecture designed for running compact and sparse DNNs. To deal with the widely varying layer shapes and sizes, it introduces a highly flexible on-chip network, called hierarchical mesh, that can adapt to the different amounts of data reuse and bandwidth requirements of different data types, which improves the utilization of the computation resources. Furthermore, Eyeriss v2 can process sparse data directly in the compressed domain for both weights and activations, and therefore is able to improve both processing speed and energy efficiency with sparse models. Overall, with sparse MobileNet, Eyeriss v2 in a 65nm CMOS process achieves a throughput of 1470.6 inferences/sec and 2560.3 inferences/J at a batch size of 1, which is 12.6x faster and 2.5x more energy efficient than the original Eyeriss running MobileNet. We also present an analysis methodology called Eyexam that provides a systematic way of understanding the performance limits for DNN processors as a function of specific characteristics of the DNN model and accelerator design; it applies these characteristics as sequential steps to increasingly tighten the bound on the performance limits.
    Comment: accepted for publication in IEEE Journal on Emerging and Selected Topics in Circuits and Systems. This extended version on arXiv also includes Eyexam in the appendix.
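    The benefit of operating directly on compressed data can be seen in a toy software analogue: with both operands stored as (index, value) pairs, multiply-accumulates happen only where nonzeros align, and zeros are skipped rather than decompressed. This is a conceptual sketch only; it does not model Eyeriss v2's actual PE datapath, compression format, or hierarchical mesh network.

```python
# Toy sketch: dot product of two sparse vectors kept in compressed
# (sorted index, value) form; zero operands are skipped, never expanded.
def sparse_dot(idx_a, val_a, idx_b, val_b):
    i = j = 0
    acc = 0.0
    while i < len(idx_a) and j < len(idx_b):
        if idx_a[i] == idx_b[j]:
            acc += val_a[i] * val_b[j]   # MAC only on matching nonzeros
            i += 1
            j += 1
        elif idx_a[i] < idx_b[j]:
            i += 1                       # implicit zero in b: skip
        else:
            j += 1                       # implicit zero in a: skip
    return acc
```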

    Preconditioned low-rank Riemannian optimization for linear systems with tensor product structure

    The numerical solution of partial differential equations on high-dimensional domains gives rise to computationally challenging linear systems. When using standard discretization techniques, the size of the linear system grows exponentially with the number of dimensions, making the use of classic iterative solvers infeasible. During the last few years, low-rank tensor approaches have been developed that make it possible to mitigate this curse of dimensionality by exploiting the underlying structure of the linear operator. In this work, we focus on tensors represented in the Tucker and tensor train formats. We propose two preconditioned gradient methods on the corresponding low-rank tensor manifolds: a Riemannian version of the preconditioned Richardson method as well as an approximate Newton scheme based on the Riemannian Hessian. For the latter, considerable attention is given to the efficient solution of the resulting Newton equation. In numerical experiments, we compare the efficiency of our Riemannian algorithms with other established tensor-based approaches such as a truncated preconditioned Richardson method and the alternating linear scheme. The results show that our approximate Riemannian Newton scheme is significantly faster in cases where the application of the linear operator is expensive.
    Comment: 24 pages, 8 figures
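    For intuition, the truncated preconditioned Richardson idea can be written down in a few lines for the simplest low-rank setting, a matrix (order-2 tensor) with a truncated-SVD retraction; the operator, preconditioner, step size, and rank below are placeholders, and the Tucker/TT versions studied in the paper use tensor truncation instead.

```python
# Minimal sketch (matrix analogue): truncated preconditioned Richardson,
# X <- T_r(X + omega * P^{-1} (B - A(X))), with T_r a rank-r SVD truncation.
import numpy as np

def truncate(X, r):
    # Retract onto the set of rank-r matrices via truncated SVD.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

def truncated_richardson(apply_A, apply_Pinv, B, r, omega=1.0, iters=50):
    X = np.zeros_like(B)
    for _ in range(iters):
        R = B - apply_A(X)                          # residual
        X = truncate(X + omega * apply_Pinv(R), r)  # step, then truncate
    return X
```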

    Linear Shape Deformation Models with Local Support Using Graph-based Structured Matrix Factorisation

    Representing 3D shape deformations by linear models in high-dimensional space has many applications in computer vision and medical imaging, such as shape-based interpolation or segmentation. Commonly, a low-dimensional (affine) subspace of the high-dimensional shape space is determined using Principal Components Analysis. However, the resulting factors (the most dominant eigenvectors of the covariance matrix) have global support, i.e. changing the coefficient of a single factor deforms the entire shape. In this paper, a method to obtain deformation factors with local support is presented. The benefits of such models include better flexibility and interpretability as well as the possibility of interactively deforming shapes locally. To this end, based on a well-grounded theoretical motivation, we formulate a matrix factorisation problem employing sparsity and graph-based regularisation terms. We demonstrate that for brain shapes our method outperforms the state of the art in local support models with respect to generalisation ability and sparse shape reconstruction, whereas for human body shapes our method gives more realistic deformations.
    Comment: Please cite CVPR 2016 version
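    A compact way to see how sparsity and graph regularisation interact is a single proximal-gradient step on a factorisation X ≈ D C: the graph term tr(D^T L D) smooths each factor over the shape graph, while the l1 prox zeroes small entries and thus yields locally supported factors. The objective, weights, and step size below are illustrative assumptions, not the structured factorisation actually solved in the paper.

```python
# Minimal sketch (assumed objective): one proximal-gradient step in D for
# 0.5 * ||X - D C||_F^2 + gamma * tr(D^T L D) + lam * ||D||_1.
import numpy as np

def soft_threshold(A, t):
    # Proximal operator of t * ||A||_1.
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def prox_grad_step_D(D, C, X, L, lam=0.1, gamma=0.1, step=1e-2):
    # Smooth part: data fit plus graph regulariser (L is the graph Laplacian).
    grad = (D @ C - X) @ C.T + 2.0 * gamma * (L @ D)
    # l1 prox encourages zeros, i.e. locally supported deformation factors.
    return soft_threshold(D - step * grad, step * lam)
```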