
    Tensor Decompositions for Signal Processing Applications From Two-way to Multiway Component Analysis

    The widespread use of multi-sensor technology and the emergence of big datasets have highlighted the limitations of standard flat-view matrix models and the necessity to move towards more versatile data analysis tools. We show that higher-order tensors (i.e., multiway arrays) enable such a fundamental paradigm shift towards models that are essentially polynomial and whose uniqueness, unlike that of matrix methods, is guaranteed under very mild and natural conditions. Benefiting from the power of multilinear algebra as their mathematical backbone, data analysis techniques using tensor decompositions are shown to have great flexibility in the choice of constraints that match data properties, and to find more general latent components in the data than matrix-based methods. A comprehensive introduction to tensor decompositions is provided from a signal processing perspective, starting from the algebraic foundations, via basic Canonical Polyadic and Tucker models, through to advanced cause-effect and multi-view data analysis schemes. We show that tensor decompositions enable natural generalizations of some commonly used signal processing paradigms, such as canonical correlation and subspace techniques, signal separation, linear regression, feature extraction and classification. We also cover computational aspects, and point out how ideas from compressed sensing and scientific computing may be used for addressing the otherwise unmanageable storage and manipulation problems associated with big datasets. The concepts are supported by illustrative real-world case studies illuminating the benefits of the tensor framework, as efficient and promising tools for modern signal processing, data analysis and machine learning applications; these benefits also extend to vector/matrix data through tensorization.
    Keywords: ICA, NMF, CPD, Tucker decomposition, HOSVD, tensor networks, Tensor Train
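    As a minimal illustration of the Canonical Polyadic (CP) model mentioned in the abstract (a NumPy sketch under standard conventions, not the authors' implementation), the classic alternating least squares (ALS) fit recovers the factor matrices of a low-rank tensor:

    ```python
    import numpy as np

    def unfold(T, mode):
        """Mode-n unfolding: fibers of `mode` become rows (C-order columns)."""
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    def khatri_rao(A, B):
        """Column-wise Kronecker product of two factor matrices."""
        r = A.shape[1]
        return np.einsum('ir,jr->ijr', A, B).reshape(-1, r)

    def cp_als(T, rank, n_iter=200, seed=0):
        """Fit a rank-R CP model to a 3-way tensor by alternating least squares."""
        rng = np.random.default_rng(seed)
        factors = [rng.standard_normal((s, rank)) for s in T.shape]
        for _ in range(n_iter):
            for mode in range(3):
                others = [factors[m] for m in range(3) if m != mode]
                kr = khatri_rao(*others)  # order matches the C-order unfolding
                gram = (others[0].T @ others[0]) * (others[1].T @ others[1])
                factors[mode] = unfold(T, mode) @ kr @ np.linalg.pinv(gram)
        return factors

    # Recover the factors of a synthetic, exactly rank-3 tensor.
    rng = np.random.default_rng(1)
    A, B, C = (rng.standard_normal((n, 3)) for n in (6, 7, 8))
    T = np.einsum('ir,jr,kr->ijk', A, B, C)
    Ah, Bh, Ch = cp_als(T, rank=3)
    T_hat = np.einsum('ir,jr,kr->ijk', Ah, Bh, Ch)
    rel_err = np.linalg.norm(T - T_hat) / np.linalg.norm(T)
    ```

    On noiseless data of exact rank the relative error drives towards machine precision; the uniqueness guarantees discussed in the abstract are what make the recovered factors interpretable.
    
    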

    Techniques for efficient segmentation and visualization of 3D medical images: exploiting the GPU architecture

    The objective of the work carried out in this doctoral thesis is to propose efficient solutions for medical image processing on graphics cards (GPUs). In particular, the work focuses on the tasks of segmentation and visualization. Both tasks are quite broad, and the literature offers a multitude of different solutions for each of them. For this reason we selected a series of algorithms whose effectiveness is already established and applied various techniques to implement them on the GPU, aiming to maximize performance.

    TAMRESH: Tensor Approximation Multiresolution Hierarchy for Interactive Volume Visualization

    Interactive visual analysis of large and complex volume datasets is an ongoing and challenging problem. We tackle this challenge in the context of state-of-the-art out-of-core multiresolution volume rendering by introducing a novel hierarchical tensor approximation (TA) volume visualization approach. The TA framework allows us (a) to use a rank-truncated basis for compact volume representation, (b) to visualize features at multiple scales, and (c) to visualize the data at multiple resolutions. In this paper, we exploit the special properties of the TA factor matrix bases and define a novel multiscale and multiresolution volume rendering hierarchy. Different from previous approaches, to represent one volume dataset we use but one set of global bases (TA factor matrices) to reconstruct at all resolution levels and feature scales. In particular, we propose a coupling of multiscale feature visualization and multiresolution DVR through the properties of global TA bases. We demonstrate our novel TA-multiresolution-hierarchy-based volume representation and visualization on a number of micro-CT (mCT) volume datasets.
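    The core idea above, one set of global bases that can be rank-truncated to reconstruct at any resolution, can be sketched with a Tucker/HOSVD decomposition in NumPy (an illustrative sketch, not the paper's out-of-core pipeline):

    ```python
    import numpy as np

    def mode_product(T, M, mode):
        """Multiply tensor T by matrix M along `mode` (mode-n product)."""
        return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

    def hosvd(T):
        """One set of global bases (factor matrices) plus a core tensor."""
        U = [np.linalg.svd(np.moveaxis(T, m, 0).reshape(T.shape[m], -1),
                           full_matrices=False)[0] for m in range(T.ndim)]
        G = T
        for m, Um in enumerate(U):
            G = mode_product(G, Um.T, m)
        return G, U

    def reconstruct(G, U, ranks):
        """Reconstruct at a chosen resolution by truncating the SAME bases."""
        T = G[tuple(slice(r) for r in ranks)]
        for m, Um in enumerate(U):
            T = mode_product(T, Um[:, :ranks[m]], m)
        return T

    # A volume with exact multilinear rank (2, 2, 2): truncating to those
    # ranks reconstructs it exactly; smaller ranks give coarser previews.
    rng = np.random.default_rng(0)
    T = rng.standard_normal((2, 2, 2))
    for m in range(3):
        T = mode_product(T, rng.standard_normal((8, 2)), m)
    G, U = hosvd(T)
    exact = reconstruct(G, U, (2, 2, 2))
    ```

    Because every resolution level reuses the same factor matrices, only the truncation ranks change between levels, which is what makes the single global basis set attractive for a multiresolution hierarchy.
    
    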

    Improving Efficiency for CUDA-based Volume Rendering by Combining Segmentation and Modified Sampling Strategies

    The objective of this paper is to present a speed-up method that improves the rendering speed of ray casting while obtaining high-quality images. Ray casting is the most commonly used volume rendering algorithm and is well suited to parallel processing. To improve the efficiency of parallel processing, the Compute Unified Device Architecture (CUDA) platform is used. The speed-up method uses improved workload allocation and sampling strategies tailored to CUDA features. To implement this method, the optimal number of segments of each ray is dynamically selected based on the change of the corresponding visual angle, and each segment is processed by a distinct thread processor. In addition, for each segment, we apply a different sampling quantity and density according to the distance weight. Rendering speed results show that our method achieves an average 70% speed improvement, and up to a 145% increase in some special cases, compared to conventional ray casting on the Graphics Processing Unit (GPU). The speed-up ratio shows that this method can effectively improve the factors that influence rendering efficiency. This rendering performance makes the method a contribution towards real-time 3-D reconstruction.
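    The inner loop that each CUDA thread would execute per ray (or per ray segment) can be sketched on the CPU as follows; the nearest-neighbour sampling and the opacity-proportional-to-value transfer function are illustrative assumptions, not the paper's kernel:

    ```python
    import numpy as np

    def cast_ray(volume, origin, direction, n_samples, step):
        """Front-to-back emission-absorption compositing along a single ray."""
        direction = direction / np.linalg.norm(direction)
        color, alpha = 0.0, 0.0
        for i in range(n_samples):
            p = origin + (i * step) * direction
            idx = np.round(p).astype(int)        # nearest-neighbour sample
            if np.any(idx < 0) or np.any(idx >= volume.shape):
                continue                         # sample point outside the volume
            s = float(volume[tuple(idx)])
            a = 0.1 * s                          # toy transfer function (assumed)
            color += (1.0 - alpha) * a * s       # accumulate emission
            alpha += (1.0 - alpha) * a           # accumulate absorption
            if alpha > 0.99:                     # early ray termination
                break
        return color, alpha

    # One ray through a uniform 8x8x8 volume along the x axis.
    vol = np.ones((8, 8, 8))
    color, alpha = cast_ray(vol, np.array([0.0, 4.0, 4.0]),
                            np.array([1.0, 0.0, 0.0]), n_samples=32, step=0.5)
    ```

    The paper's distance-weighted strategy would vary `step` and `n_samples` per segment (denser sampling near the viewer); splitting the loop across segments is what lets separate thread processors work on one ray in parallel.
    
    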