Tensor-based regression models and applications
With the advancement of modern technologies, high-order tensors have become widespread, abounding in a broad range of applications such as computational neuroscience, computer vision, and signal processing. The primary reason that classical regression methods fail to handle high-order tensors appropriately is that these data contain multiway structural information which cannot be captured directly by conventional vector-based or matrix-based regression models, causing substantial information loss during regression. Furthermore, the ultrahigh dimensionality of tensorial input produces a huge number of parameters, which breaks the theoretical guarantees of classical regression approaches. Additionally, classical regression models have been shown to be limited in terms of difficulty of interpretation, sensitivity to noise, and absence of uniqueness. To deal with these challenges, we investigate a novel class of regression models, called tensor-variate regression models, in which the independent predictors and/or dependent responses take the form of high-order tensorial representations. We also apply them in numerous real-world applications to verify their efficiency and effectiveness.
Concretely, we first introduce hierarchical Tucker tensor regression, a generalized linear tensor regression model that is able to handle potentially much higher-order tensor input. Then, we develop an online local Gaussian process for tensor-variate regression, an efficient nonlinear GP-based approach that can process large data sets sequentially in constant time. Next, we present a computationally efficient online tensor regression algorithm with general tensorial input and output, called incremental higher-order partial least squares, for the setting of infinite time-dependent tensor streams. Thereafter, we propose a super-fast sequential tensor regression framework for general tensor sequences, namely recursive higher-order partial least squares, which addresses the limited storage space and fast processing times imposed by dynamic environments.
Finally, we introduce kernel-based multiblock tensor partial least squares, a new generalized nonlinear framework that predicts a set of tensor blocks by merging tensor blocks from different sources for boosted predictive power.
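To make the core idea concrete, here is a minimal sketch of low-rank tensor-variate regression in its simplest form (a rank-1 coefficient for matrix predictors), fitted by alternating least squares. It illustrates the general principle only; it is not the hierarchical Tucker or higher-order PLS algorithms from the thesis, and all names and sizes are hypothetical.

```python
import numpy as np

def rank1_tensor_regression(X, y, n_iter=50, seed=0):
    """Fit y_i ~ <u v^T, X_i> for matrix predictors X_i by alternating
    least squares: a rank-1 special case of low-rank tensor regression."""
    rng = np.random.default_rng(seed)
    n, p, q = X.shape
    u = rng.standard_normal(p)
    v = rng.standard_normal(q)
    for _ in range(n_iter):
        # With v fixed, y_i = u . (X_i v): an ordinary least-squares problem in u.
        Z = X @ v                          # shape (n, p)
        u, *_ = np.linalg.lstsq(Z, y, rcond=None)
        # With u fixed, y_i = v . (X_i^T u): least squares in v.
        Z = np.einsum('ipq,p->iq', X, u)   # shape (n, q)
        v, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return u, v

# Toy demo: recover a planted rank-1 coefficient matrix from noisy responses.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10, 8))
W_true = np.outer(rng.standard_normal(10), rng.standard_normal(8))
y = np.einsum('ipq,pq->i', X, W_true) + 0.01 * rng.standard_normal(200)
u, v = rank1_tensor_regression(X, y)
print(np.linalg.norm(np.outer(u, v) - W_true) / np.linalg.norm(W_true))
```

The key point the sketch demonstrates is parameter economy: the rank-1 coefficient has p + q free parameters instead of the p * q needed by vectorized regression, which is what preserves theoretical guarantees as the tensor order grows.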
Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives
Part 2 of this monograph builds on the introduction to tensor networks and
their operations presented in Part 1. It focuses on tensor network models for
super-compressed higher-order representation of data/parameters and related
cost functions, while providing an outline of their applications in machine
learning and data analytics. A particular emphasis is on the tensor train (TT)
and Hierarchical Tucker (HT) decompositions, and their physically meaningful
interpretations which reflect the scalability of the tensor network approach.
Through a graphical approach, we also elucidate how, by virtue of the
underlying low-rank tensor approximations and sophisticated contractions of
core tensors, tensor networks have the ability to perform distributed
computations on otherwise prohibitively large volumes of data/parameters,
thereby alleviating or even eliminating the curse of dimensionality. The
usefulness of this concept is illustrated over a number of applied areas,
including generalized regression and classification (support tensor machines,
canonical correlation analysis, higher order partial least squares),
generalized eigenvalue decomposition, Riemannian optimization, and in the
optimization of deep neural networks. Part 1 and Part 2 of this work can be
used either as stand-alone separate texts, or indeed as a conjoint
comprehensive review of the exciting field of low-rank tensor networks and
tensor decompositions. (232 pages)
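As a concrete illustration of one decomposition emphasized above, the following sketch implements the standard TT-SVD scheme, which builds a tensor train by sequential truncated SVDs. The ranks are a user choice and the example is illustrative, not code from the monograph.

```python
import numpy as np

def tt_svd(tensor, ranks):
    """Decompose a d-way array into tensor-train (TT) cores by
    sequential truncated SVDs; `ranks` lists the d-1 internal TT ranks."""
    dims = tensor.shape
    d = len(dims)
    cores = []
    r_prev = 1
    mat = tensor.reshape(r_prev * dims[0], -1)
    for k in range(d - 1):
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(ranks[k], len(S))                    # truncate to requested rank
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        # Carry the remainder forward and fold in the next mode.
        mat = (S[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))
    return cores

# Reconstruct by contracting the cores; with full ranks this is exact.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 5, 6))
cores = tt_svd(A, ranks=[4, 6])
rec = cores[0]
for G in cores[1:]:
    rec = np.tensordot(rec, G, axes=([-1], [0]))
print(np.linalg.norm(rec.reshape(A.shape) - A))      # ~1e-14
```

Storing the cores costs O(d n r^2) numbers instead of the n^d of the full array, which is exactly the super-compression the text refers to when ranks stay small.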
Tensor Analysis and the Dynamics of Motor Cortex
Neural data often span multiple indices, such as neuron, experimental condition, trial, and time, resulting in a tensor or multidimensional array. Standard approaches to neural data analysis often rely on matrix factorization techniques, such as principal component analysis or nonnegative matrix factorization; any inherent tensor structure is lost when the data are flattened into a matrix. Here, we analyze datasets from primary motor cortex from the perspective of tensor analysis, and develop a theory for how tensor structure relates to certain computational properties of the underlying system. Applied to the motor cortex datasets, we reveal that neural activity is best described by condition-independent dynamics, as opposed to condition-dependent relations to external movement variables. Motivated by this result, we pursue one further tensor-related analysis and two further dynamical-systems-related analyses. First, we show how tensor decompositions can be used to denoise neural signals. Second, we apply system identification to the cortex-to-muscle transformation to reveal the intermediate spinal dynamics. Third, we fit recurrent neural networks to muscle activations and show that the geometric properties observed in motor cortex are naturally recapitulated in the network model. Taken together, these results emphasize (on the data analysis side) the role of tensor structure in data and (on the theoretical side) the role of motor cortex as a dynamical system.
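One simple instance of the tensor-decomposition denoising mentioned above is multilinear truncation via the higher-order SVD. The sketch below assumes a (neurons x conditions x time) array and illustrative ranks; it is a generic technique, not the authors' exact procedure.

```python
import numpy as np

def hosvd_denoise(data, ranks):
    """Project a multiway array onto its leading multilinear subspaces
    (truncated higher-order SVD), discarding high-rank noise components."""
    factors = []
    for mode, r in enumerate(ranks):
        # Unfold along `mode` and keep the top-r left singular vectors.
        unfolded = np.moveaxis(data, mode, 0).reshape(data.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolded, full_matrices=False)
        factors.append(U[:, :r])
    # Compress each mode with U^T, then expand back with U.
    core = data
    for mode, U in enumerate(factors):
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    denoised = core
    for mode, U in enumerate(factors):
        denoised = np.moveaxis(np.tensordot(U, np.moveaxis(denoised, mode, 0), axes=1), 0, mode)
    return denoised

# Demo: a planted rank-1 "signal" tensor plus noise is recovered more
# accurately after multilinear truncation.
rng = np.random.default_rng(0)
clean = np.einsum('i,j,k->ijk', *[rng.standard_normal(n) for n in (30, 10, 50)])
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
den = hosvd_denoise(noisy, ranks=(1, 1, 1))
print(np.linalg.norm(den - clean) < np.linalg.norm(noisy - clean))  # True
```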
Decoding ECoG signal into 3D hand translation using deep learning
Motor brain-computer interfaces (BCIs) are a promising technology that may
enable motor-impaired people to interact with their environment. Designing
real-time, accurate BCIs is crucial to making such devices useful, safe, and
easy for patients to use in a real-life environment. Electrocorticography
(ECoG)-based BCIs offer a good compromise between the invasiveness of the
recording device and the spatial and temporal resolution of the recorded
signal. However, most ECoG signal decoders used to predict continuous hand
movements are linear models. These models have a limited representational
capacity and may fail to capture the relationship between ECoG signal and
continuous hand movements. Deep learning (DL) models, which are
state-of-the-art in many problems, could be a solution to better capture this
relationship. In this study, we tested several DL-based architectures to
predict imagined 3D continuous hand translation using time-frequency features
extracted from ECoG signals. The dataset used in the analysis is a part of a
long-term clinical trial (ClinicalTrials.gov identifier: NCT02550522) and was
acquired during a closed-loop experiment with a tetraplegic subject. The
proposed architectures include multilayer perceptron (MLP), convolutional
neural networks (CNN), and long short-term memory networks (LSTM). The accuracy
of the DL-based and multilinear models was compared offline using cosine
similarity. Our results show that CNN-based architectures outperform the
current state-of-the-art multilinear model. The best architecture exploited the
spatial correlation between neighboring electrodes with CNN and benefited from
the sequential character of the desired hand trajectory by using LSTMs.
Overall, DL increased the average cosine similarity, compared to the
multilinear model, by up to 60%, from 0.189 to 0.302 and from 0.157 to 0.249
for the left and right hand, respectively.
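A minimal sketch of the kind of CNN+LSTM decoder described above, assuming PyTorch and time-frequency features shaped (batch, bands, electrodes, time); layer sizes and kernel shapes are illustrative, not the architecture evaluated in the study.

```python
import torch
import torch.nn as nn

class ConvLSTMDecoder(nn.Module):
    """Sketch: CNN mixes frequency bands and neighboring electrodes,
    an LSTM models the temporal sequence, and a linear head outputs
    a 3D hand translation per window."""
    def __init__(self, n_bands=10, n_electrodes=64, hidden=128):
        super().__init__()
        self.spatial = nn.Sequential(
            # Convolve over neighboring electrodes (kernel width 1 in time).
            nn.Conv2d(n_bands, 32, kernel_size=(3, 1), padding=(1, 0)),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=(n_electrodes, 1)),  # collapse electrodes
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3)   # x, y, z translation

    def forward(self, x):                  # x: (B, bands, electrodes, T)
        f = self.spatial(x)                # (B, 64, 1, T)
        f = f.squeeze(2).transpose(1, 2)   # (B, T, 64) for the LSTM
        out, _ = self.lstm(f)
        return self.head(out[:, -1])       # predict from the last time step

model = ConvLSTMDecoder()
demo = torch.randn(4, 10, 64, 50)          # 4 windows of synthetic features
print(model(demo).shape)                   # torch.Size([4, 3])
```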
EXplainable Artificial Intelligence: enabling AI in neurosciences and beyond
The adoption of AI models in medicine and neurosciences has the potential to play a significant role not only in scientific advancement but also in clinical decision-making. However, concerns mount over the potential biases of AI, which could have far-reaching consequences, particularly in a critical field like biomedicine. Achieving usable intelligence is challenging: it is fundamental not only to learn from prior data, extract knowledge, and guarantee generalization capabilities, but also to disentangle the underlying explanatory factors in order to deeply understand the variables leading to the final decisions. There has hence been a call for approaches that open the AI 'black box' to increase trust in the decision-making capabilities of AI algorithms. Such approaches are commonly referred to as XAI and are starting to be applied in medical fields, even if not yet fully exploited. With this thesis we aim to contribute to enabling the use of AI in medicine and neurosciences by taking two fundamental steps: (i) practically pervading AI models with XAI, and (ii) strongly validating XAI models. The first step was achieved, on the one hand, by focusing on the XAI taxonomy and proposing guidelines specific to AI and XAI applications in the neuroscience domain. On the other hand, we addressed concrete issues by proposing XAI solutions to decode brain modulations in neurodegeneration, relying on the morphological, microstructural, and functional changes occurring at different disease stages, as well as their connections with the genotype substrate. The second step was achieved by first defining four attributes related to XAI validation, namely stability, consistency, understandability, and plausibility. Each attribute refers to a different aspect of XAI, ranging from assessing the stability of explanations across different XAI methods, or across highly collinear inputs, to the alignment of the obtained explanations with the state-of-the-art literature. We then proposed different validation techniques aimed at practically fulfilling these requirements. With this thesis, we contribute to the advancement of research into XAI, aiming to increase awareness and critical use of AI methods and to open the way to real-life applications that enable the development of personalized medicine and treatment through a data-driven, objective approach to healthcare.
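As one hypothetical reading of the 'stability' attribute defined above, the sketch below scores rank agreement between feature attributions produced by two XAI methods (or by one method on collinear input variants); the thesis's actual validation metrics are not specified in the abstract.

```python
import numpy as np
from scipy.stats import spearmanr

def attribution_stability(attr_a, attr_b):
    """Rank correlation between the importance magnitudes assigned to
    the same features by two explanation methods: one simple way to
    quantify whether explanations agree."""
    rho, _ = spearmanr(np.abs(attr_a), np.abs(attr_b))
    return rho

# Hypothetical attributions for the same model and sample from two methods.
rng = np.random.default_rng(0)
base = rng.standard_normal(50)
print(attribution_stability(base, base + 0.1 * rng.standard_normal(50)))
```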
White Matter Integrity as a Biomarker for Stroke Recovery: Implications for TMS Treatment
White matter consists of myelinated axons which integrate information across remote brain regions. Following stroke, white matter integrity is often compromised, leading to functional impairment and disability. Despite its prevalence among stroke patients, the role of white matter in the development of post-stroke rehabilitation has been largely ignored. Rehabilitation interventions like repetitive transcranial magnetic stimulation (rTMS) are promising, but reports on their efficacy have been conflicting. By understanding the role of white matter integrity in post-stroke motor recovery, brain reorganization, and TMS efficacy, we may be able to improve the development of future interventions. In this dissertation we set out to answer these questions by investigating the relationship between white matter integrity and 1) bimanual motor performance following stroke, 2) cortical laterality following stroke, and 3) TMS signal propagation (in a group of cocaine users without stroke). We identified the corpus callosum as a key structure whose white matter integrity influences bimanual performance, using kinematic measures of hand symmetry (Chapter 2). Second, we found that reduced white matter integrity of the corpus callosum was correlated with loss of functional laterality of the primary motor cortex during movement of the affected hand (Chapter 3). Lastly, we found that reduced integrity of the white matter tract from the site of stimulation to a downstream subcortical target was correlated with the ability to modulate that target (Chapter 4). Taken together, these studies support white matter integrity as a valuable biomarker for future rTMS trials in stroke. To emphasize the implications of these findings, we provide an example of how to incorporate white matter integrity at multiple levels of rTMS study design.
Tensor Regression
Regression analysis is a key area of data analysis and machine learning,
devoted to exploring the dependencies between variables, often represented as
vectors. The emergence of high-dimensional data in technologies such as
neuroimaging, computer vision, climatology, and social networks has brought
challenges to traditional data representation methods. Tensors, as
high-dimensional extensions of vectors, are natural representations of such
data. In this book, the authors provide a systematic study and analysis of
tensor-based regression models and their applications in recent years. The
book groups and illustrates the existing tensor-based regression methods,
covering their basics, core ideas, and theoretical characteristics. In
addition, readers can learn how to use existing tensor-based regression methods
to solve specific regression tasks with multiway data, what datasets can be
selected, and what software packages are available to start related work as
soon as possible. Tensor Regression is the first thorough overview of the
fundamentals, motivations, popular algorithms, strategies for efficient
implementation, related applications, available datasets, and software
resources for tensor-based regression analysis. It is essential reading for all
students, researchers, and practitioners working on high-dimensional data. (187 pages, 32 figures, 10 tables)
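As a pointer of the kind the book promises, the sketch below shows how a low-rank tensor regression might be run with TensorLy, one widely used Python package. The CPRegressor name and weight_rank argument are assumed from recent TensorLy releases and may differ in your installed version; check its documentation.

```python
import numpy as np
# Assumption: recent TensorLy versions export CPRegressor (formerly
# KruskalRegressor) from tensorly.regression; verify against your release.
from tensorly.regression import CPRegressor

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 8, 8))        # 100 samples of 8x8 matrix predictors
W = np.outer(rng.standard_normal(8), rng.standard_normal(8))
y = np.einsum('ipq,pq->i', X, W)            # responses from a rank-1 coefficient

model = CPRegressor(weight_rank=1)          # low-rank coefficient tensor
model.fit(X, y)
print(model.predict(X[:5]))
```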