
    Structured low-rank methods for robust 3D multi-shot EPI

    Magnetic resonance imaging (MRI) has an inherently slow acquisition speed, and Echo-Planar Imaging (EPI), as an efficient acquisition scheme, has been widely used in functional magnetic resonance imaging (fMRI), where an image series with high temporal resolution is needed to measure neuronal activity. Recently, 3D multi-shot EPI, which samples an entire 3D volume with repeated shots, has drawn growing interest for fMRI owing to its high isotropic spatial resolution, particularly at ultra-high fields. However, compared to single-shot EPI, multi-shot EPI is sensitive to inter-shot instabilities, e.g., subject movement and even physiologically induced field fluctuations. These inter-shot inconsistencies can greatly negate the theoretical benefits of 3D multi-shot EPI over conventional 2D multi-slice acquisitions. Structured low-rank image reconstruction, which regularises under-sampled image reconstruction by exploiting linear dependencies in MRI data, has been successfully demonstrated in a variety of applications. In this thesis, a structured low-rank reconstruction method is optimised for 3D multi-shot EPI together with a dedicated sampling pattern termed seg-CAIPI, in order to enhance robustness to physiological fluctuations and improve the temporal stability of 3D multi-shot EPI for fMRI at 7T. Moreover, a motion-compensated structured low-rank reconstruction framework is also presented for robust 3D multi-shot EPI, which further accounts for inter-shot instabilities due to bulk motion. Lastly, the thesis investigates the improvement of structured low-rank reconstruction from an algorithmic perspective and presents a locally structured low-rank reconstruction scheme.
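
    For intuition, below is a minimal NumPy sketch of the core mechanism such methods share, not the thesis's actual algorithm: signals from several shots are lifted into a joint Hankel-structured matrix, whose linear dependencies make it low-rank, and a rank projection suppresses inter-shot inconsistencies. The 1D toy signals, window width, and rank cap are illustrative assumptions.

```python
import numpy as np

def hankel_lift(x, w):
    """Lift a 1D signal into a Hankel matrix of sliding length-w windows."""
    cols = x.shape[0] - w + 1
    return np.stack([x[i:i + w] for i in range(cols)], axis=1)  # (w, cols)

def hankel_unlift(H, n):
    """Adjoint of hankel_lift: average overlapping windows back to a signal."""
    w, cols = H.shape
    x = np.zeros(n, dtype=H.dtype)
    count = np.zeros(n)
    for i in range(cols):
        x[i:i + w] += H[:, i]
        count[i:i + w] += 1
    return x / count

def rank_project(M, rank):
    """Keep only the `rank` largest singular values of M."""
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vh[:rank]

# One illustrative iteration on toy data: shots concatenated side by side
# share linear dependencies, so inter-shot instabilities inflate the rank.
rng = np.random.default_rng(0)
shots = rng.standard_normal((4, 64)) + 1j * rng.standard_normal((4, 64))
w, cols = 8, 64 - 8 + 1
H = np.hstack([hankel_lift(s, w) for s in shots])   # joint Hankel matrix
H_lr = rank_project(H, rank=6)                      # enforce low rank
recon = [hankel_unlift(H_lr[:, i * cols:(i + 1) * cols], 64) for i in range(4)]
```

    In a full reconstruction, this projection would typically alternate with a data-consistency step that re-imposes the acquired samples.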

    Generative Modeling in Structural-Hankel Domain for Color Image Inpainting

    In recent years, some researchers have focused on using a single image to obtain a large number of samples through multi-scale features. This study introduces a brand-new idea that requires only ten or even fewer samples to construct a low-rank, structural-Hankel-matrix-assisted score-based generative model (SHGM) for the color image inpainting task. During prior learning, a number of internal-middle patches are first extracted from several images, and structural-Hankel matrices are constructed from these patches. To better apply the score-based generative model to learning the internal statistical distribution within patches, the large-scale Hankel matrices are then folded into higher-dimensional tensors for prior learning. During iterative inpainting, SHGM views the inpainting problem as a conditional generation procedure in a low-rank environment. The intermediate restored image is acquired by alternately performing stochastic differential equation solver, alternating direction method of multipliers, and data consistency steps. Experimental results demonstrate the remarkable performance and diversity of SHGM.
    Comment: 11 pages, 10 figures
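
    To make the preprocessing concrete, the sketch below extracts a few internal patches from a stand-in image, lifts each into a structural-Hankel matrix of sliding windows, and folds the stack into a higher-order tensor of the kind a score network could learn a prior on. The patch size, window size, and folding shape are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def patch_to_hankel(p, win):
    """Lift a 2D patch into a Hankel-structured matrix of sliding win x win windows."""
    rows, cols = p.shape[0] - win + 1, p.shape[1] - win + 1
    blocks = [p[i:i + win, j:j + win].ravel()
              for i in range(rows) for j in range(cols)]
    return np.stack(blocks, axis=1)                 # (win*win, rows*cols)

rng = np.random.default_rng(0)
image = rng.random((32, 32, 3))                     # stand-in for a color image
patches = [image[i:i + 16, j:j + 16, c]             # a few internal patches
           for i in (0, 8) for j in (0, 8) for c in range(3)]
hankels = np.stack([patch_to_hankel(p, win=4) for p in patches])
# Fold the large Hankel matrices into a higher-order tensor for prior learning.
tensor = hankels.reshape(len(patches), 16, 13, 13)  # 169 columns -> 13 x 13
print(tensor.shape)                                 # (12, 16, 13, 13)
```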

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2: Applications and Future Perspectives

    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. Particular emphasis is placed on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and on their physically meaningful interpretations, which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks can perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated across a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher-order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone texts or as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions.
    Comment: 232 pages
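
    To make the tensor-train format concrete, here is a minimal NumPy sketch of the standard TT-SVD procedure (a sequence of reshapes and truncated SVDs), one of the decompositions the monograph covers; the tensor shape and rank cap are illustrative assumptions.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose `tensor` into TT cores G_k of shape (r_{k-1}, n_k, r_k)."""
    dims = tensor.shape
    cores, r_prev = [], 1
    mat = tensor.reshape(dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vh = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(s))                    # truncate the TT rank
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        mat = (s[:r, None] * Vh[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))
    return cores

# Contracting the cores in sequence reproduces the tensor (exactly here,
# since max_rank is not binding; a small max_rank gives a compression).
rng = np.random.default_rng(0)
T = rng.random((4, 5, 6, 7))
cores = tt_svd(T, max_rank=50)
rec = cores[0]
for G in cores[1:]:
    rec = np.tensordot(rec, G, axes=(rec.ndim - 1, 0))
assert np.allclose(rec.squeeze(), T)
```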