14 research outputs found

    Tensor Self-Organizing Map

    Doctoral dissertation, Kyushu Institute of Technology. Degree number: 生工博甲第272号; degree conferred: June 30, 2016 (Heisei 28). Chapter 1: Introduction | Chapter 2: Background | Chapter 3: Tensor SOM (TSOM) | Chapter 4: Validation of TSOM on artificial data | Chapter 5: Analysis of questionnaire data with TSOM | Chapter 6: Variations of TSOM | Chapter 7: Discussion | Chapter 8: Conclusions. Kyushu Institute of Technology, 2016.

    Kurtosis analysis of neural diffusion organization

    A computational framework is presented for relating the kurtosis tensor for water diffusion in the brain to tissue models of brain microstructure. The tissue models are assumed to be composed of non-exchanging compartments that may be associated with various microstructural spaces separated by cell membranes. Within each compartment the water diffusion is regarded as Gaussian, although the diffusion for the full system would typically be non-Gaussian. The model parameters are determined so as to minimize the Frobenius norm of the difference between the measured kurtosis tensor and the model kurtosis tensor. This framework, referred to as kurtosis analysis of neural diffusion organization (KANDO), may be used to help provide a biophysical interpretation of the information provided by the kurtosis tensor. In addition, KANDO combined with diffusional kurtosis imaging can furnish a practical approach for developing candidate biomarkers for neuropathologies that involve alterations in tissue microstructure. KANDO is illustrated for simple tissue models of white and gray matter using data obtained from healthy human subjects.
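    To make the fitting step concrete, here is a minimal sketch of a KANDO-style fit under illustrative assumptions: two non-exchanging Gaussian compartments with axially symmetric diffusion tensors, and the standard multi-Gaussian kurtosis expression W = (9/D̄²)·sym(⟨D⊗D⟩ − D⊗D) from the general diffusional-kurtosis literature. The function names and parameterization are hypothetical, not the paper's implementation.

```python
# Sketch of a KANDO-style fit (assumptions: two non-exchanging Gaussian
# compartments with axially symmetric diffusion tensors; the multi-Gaussian
# kurtosis expression is taken from the general DKI literature, not this abstract).
import numpy as np
from itertools import permutations
from scipy.optimize import minimize

def sym4(T):
    """Symmetrize a rank-4 tensor over all 24 index permutations."""
    return sum(np.transpose(T, p) for p in permutations(range(4))) / 24.0

def axisym_D(d_par, d_perp, axis=(0.0, 0.0, 1.0)):
    """Axially symmetric diffusion tensor with principal axis `axis`."""
    n = np.asarray(axis) / np.linalg.norm(axis)
    return d_perp * np.eye(3) + (d_par - d_perp) * np.outer(n, n)

def model_D_W(f, D1, D2):
    """Total diffusion and kurtosis tensors for two Gaussian compartments."""
    D = f * D1 + (1.0 - f) * D2
    dbar = np.trace(D) / 3.0
    DD = f * np.einsum('ij,kl->ijkl', D1, D1) + (1.0 - f) * np.einsum('ij,kl->ijkl', D2, D2)
    W = (9.0 / dbar ** 2) * sym4(DD - np.einsum('ij,kl->ijkl', D, D))
    return D, W

def kando_fit(W_meas, x0=(0.5, 1.5e-3, 0.5e-3, 2.5e-3, 2.5e-3)):
    """Fit (f, d1_par, d1_perp, d2_par, d2_perp) by minimizing the Frobenius
    norm of the difference between measured and model kurtosis tensors."""
    def cost(x):
        f, d1a, d1r, d2a, d2r = x
        _, W_mod = model_D_W(f, axisym_D(d1a, d1r), axisym_D(d2a, d2r))
        return np.linalg.norm(W_mod - W_meas)
    bounds = [(0.0, 1.0)] + [(1e-5, 4e-3)] * 4
    return minimize(cost, x0, bounds=bounds, method='L-BFGS-B')

# Usage: recover parameters from a synthetic "measured" kurtosis tensor.
_, W_true = model_D_W(0.3, axisym_D(1.7e-3, 0.3e-3), axisym_D(2.8e-3, 2.8e-3))
res = kando_fit(W_true)
print(res.x, res.fun)
```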

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2: Applications and Future Perspectives

    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and their physically meaningful interpretations, which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated across a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone separate texts, or indeed as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions.
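    Since the abstract singles out the tensor train (TT) format as a key super-compressed representation, the sketch below shows one standard way to obtain it: sequential truncated SVDs (often called TT-SVD). The tolerance-based rank truncation and the helper names are illustrative assumptions, not code from the monograph.

```python
# Sketch of a tensor-train (TT) decomposition via sequential truncated SVDs
# (the classic TT-SVD scheme; tolerance and helper names are illustrative).
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """Decompose `tensor` into TT cores G_k of shape (r_{k-1}, n_k, r_k)."""
    shape = tensor.shape
    cores, rank = [], 1
    mat = tensor.reshape(rank * shape[0], -1)
    for k in range(len(shape) - 1):
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        # Keep only singular values above the relative tolerance `eps`.
        new_rank = max(1, int(np.sum(S > eps * S[0])))
        cores.append(U[:, :new_rank].reshape(rank, shape[k], new_rank))
        mat = (S[:new_rank, None] * Vt[:new_rank]).reshape(new_rank * shape[k + 1], -1)
        rank = new_rank
    cores.append(mat.reshape(rank, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into a full tensor (to check the approximation)."""
    out = cores[0]
    for core in cores[1:]:
        out = np.einsum('...a,aib->...ib', out, core)
    return out.reshape([c.shape[1] for c in cores])

# Usage: a rank-1 4th-order tensor compresses to TT ranks of 1.
full = np.einsum('i,j,k,l->ijkl', *(np.random.rand(6) for _ in range(4)))
cores = tt_svd(full)
print([c.shape for c in cores])                  # e.g. [(1, 6, 1), (1, 6, 1), (1, 6, 1), (1, 6, 1)]
print(np.allclose(tt_reconstruct(cores), full))  # True up to truncation error
```

    Truncating each SVD at a fixed relative tolerance is what keeps the core sizes, and hence the storage cost, small whenever the underlying tensor is approximately low-rank; this is the mechanism behind the relief from the curse of dimensionality described in the abstract.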