
    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives

    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and their physically meaningful interpretations, which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone separate texts, or indeed as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions.
    Comment: 232 pages
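    The tensor train decomposition emphasized in this abstract can be illustrated with a minimal TT-SVD sketch: the tensor is unfolded and compressed core by core through sequential truncated SVDs. This is a generic textbook construction, not code from the monograph; the function names and the `max_rank` truncation scheme are illustrative.

    ```python
    import numpy as np

    def tt_svd(tensor, max_rank):
        """Decompose a d-way array into tensor-train (TT) cores via sequential truncated SVDs."""
        shape = tensor.shape
        d = tensor.ndim
        cores = []
        r_prev = 1
        mat = tensor.reshape(shape[0], -1)
        for k in range(d - 1):
            # Unfold: rows combine the previous TT rank with the current mode.
            mat = mat.reshape(r_prev * shape[k], -1)
            U, S, Vt = np.linalg.svd(mat, full_matrices=False)
            r = min(max_rank, S.size)  # truncate to the allowed TT rank
            cores.append(U[:, :r].reshape(r_prev, shape[k], r))
            mat = S[:r, None] * Vt[:r]  # carry the remainder to the next step
            r_prev = r
        cores.append(mat.reshape(r_prev, shape[-1], 1))
        return cores

    def tt_to_full(cores):
        """Contract TT cores back into the full tensor (for small sizes only)."""
        full = cores[0]
        for core in cores[1:]:
            full = np.tensordot(full, core, axes=(-1, 0))
        return full.reshape(full.shape[1:-1])
    ```

    With `max_rank` large enough the reconstruction is exact; lowering it yields the low-rank approximations on which the distributed contractions described above rely.
    
    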

    Statistical and spatial analysis of landslide susceptibility maps with different classification systems

    The final publication is available at Springer via http://dx.doi.org/10.1007/s12665-016-6124-1
    A landslide susceptibility map is an essential tool for land-use spatial planning and management in mountain areas. However, the classification system used for readability determines the final appearance of the map and may therefore influence decision-making. The present paper addresses the spatial comparison and accuracy assessment of several well-known classification methods applied to a susceptibility map based on a discriminant statistical model of an area in the Eastern Pyrenees. A number of statistical approaches (Spearman's correlation, the kappa index, factorial and cluster analyses, and the landslide density index) were applied for map comparison to quantify the information provided by the usual image analysis. The results showed the reliability and consistency of the kappa index over Spearman's correlation as an accuracy measure of the spatial agreement between maps. Inferential tests between unweighted and linear weighted kappa results showed that all the maps were more reliable in classifying areas of highest susceptibility and less reliable in classifying areas of low to moderate susceptibility. The spatial variability detected and quantified by factorial and cluster analyses showed that the maps classified by the quantile and natural break methods were the closest, whereas those classified by the landslide percentage and equal interval methods displayed the greatest differences. The difference image analysis showed that the five classified maps matched on only 9% of the area, corresponding to the steepest slopes and watershed angles with forestless, sunny slopes at low altitudes. This means that the five maps coincide in identifying and classifying the most dangerous areas. The equal interval map overestimated the susceptibility of the study area, and the landslide percentage map was considered a very optimistic model. The spatial patterns of the quantile and natural break maps were very similar, but the latter was more consistent and predicted potential landslides more efficiently and reliably in the study area.
    Peer Reviewed. Preprint.
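    The unweighted versus linear weighted kappa comparison described above can be sketched with a generic Cohen's kappa in its weighted form. This is a standard formulation, not the paper's code; the function and variable names are illustrative.

    ```python
    import numpy as np

    def cohen_kappa(a, b, n_classes, weights=None):
        """Agreement between two class maps a and b (integer labels 0..n_classes-1).

        weights=None gives the unweighted kappa (all disagreements count equally);
        weights='linear' penalizes disagreements by how many classes apart they are.
        """
        a, b = np.asarray(a), np.asarray(b)
        obs = np.zeros((n_classes, n_classes))
        for i, j in zip(a, b):
            obs[i, j] += 1
        obs /= obs.sum()                              # observed joint distribution
        exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))  # chance-expected distribution
        if weights == "linear":
            idx = np.arange(n_classes)
            w = np.abs(idx[:, None] - idx[None, :])   # penalty grows with class distance
        else:
            w = 1.0 - np.eye(n_classes)               # any disagreement costs 1
        return 1.0 - (w * obs).sum() / (w * exp).sum()
    ```

    For example, `cohen_kappa([0, 1, 0, 1], [0, 1, 1, 1], 2)` returns 0.5: three of four cells agree (0.75 observed agreement) against 0.5 expected by chance.
    
    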

    Highly Efficient Regression for Scalable Person Re-Identification

    Existing person re-identification models scale poorly to the large data volumes required in real-world applications due to: (1) Complexity: they employ complex models for optimal performance, resulting in high computational cost for training at large scale; (2) Inadaptability: once trained, they are unsuitable for incremental updates to incorporate newly available data. This work proposes a truly scalable solution to re-id by addressing both problems. Specifically, a Highly Efficient Regression (HER) model is formulated by embedding Fisher's criterion into a ridge regression model for very fast re-id model learning with scalable memory/storage usage. Importantly, this new HER model supports faster than real-time incremental model updates, making real-time active learning with a human in the loop feasible in re-id. Extensive experiments show that such a simple and fast model not only notably outperforms the state-of-the-art re-id methods, but also scales better to large data, with the additional benefit that active learning reduces human labelling effort in re-id deployment.
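    The incremental-update idea behind HER can be illustrated with a minimal sketch: a ridge regression whose sufficient statistics are accumulated, so new data folds in without retraining from scratch. This is a deliberate simplification under stated assumptions; the actual HER model additionally embeds Fisher's criterion, which is omitted here, and the class and method names are illustrative.

    ```python
    import numpy as np

    class IncrementalRidge:
        """Ridge regression updated by accumulating X^T X and X^T Y.

        Each batch costs one d x d solve, independent of how much data
        has already been seen -- the property that makes fast incremental
        updates (and hence human-in-the-loop active learning) practical.
        """
        def __init__(self, dim, lam=1.0):
            self.lam = lam
            self.xtx = np.zeros((dim, dim))
            self.xty = np.zeros((dim, 0))
            self.w = None

        def partial_fit(self, X, Y):
            X, Y = np.atleast_2d(X), np.atleast_2d(Y)
            if self.xty.shape[1] == 0:
                self.xty = np.zeros((X.shape[1], Y.shape[1]))
            # Fold the new batch into the sufficient statistics.
            self.xtx += X.T @ X
            self.xty += X.T @ Y
            d = self.xtx.shape[0]
            self.w = np.linalg.solve(self.xtx + self.lam * np.eye(d), self.xty)
            return self

        def predict(self, X):
            return np.atleast_2d(X) @ self.w
    ```

    Because only `X^T X` and `X^T Y` are stored, fitting all the data at once and fitting it batch by batch yield identical weights.
    
    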