Parallel and Distributed Approaches for Semi-Supervised Learning
Two approaches for graph-based semi-supervised learning are proposed. The first approach is based on iteration of an affine map. A key element of the affine-map iteration is sparse matrix-vector multiplication, which has several very efficient parallel implementations. The second approach belongs to the class of Markov chain Monte Carlo (MCMC) algorithms. It is based on sampling nodes by performing a random walk on the similarity graph. The latter approach is distributed by nature and can easily be implemented on several processors or over a network. Both theoretical and practical evaluations are provided. It is found that the nodes are classified into their classes with very small error. The sampling algorithm's ability to track newly arriving nodes online and to classify them is also demonstrated.
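As a concrete illustration of the first approach, the sketch below iterates an affine map Y ← αPY + (1−α)Y0 on a toy similarity graph, where P is a sparse row-normalized weight matrix and each step costs one sparse matrix-vector product per class. The graph, the damping factor α, and the iteration budget are illustrative assumptions, not values from the thesis.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Toy similarity graph: two 3-node clusters joined by one weak bridge edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
n = 6
rows, cols, vals = [], [], []
for i, j in edges:
    w = 0.1 if (i, j) == (2, 3) else 1.0  # weak bridge between clusters
    rows += [i, j]; cols += [j, i]; vals += [w, w]
W = csr_matrix((vals, (rows, cols)), shape=(n, n))

# Row-normalize to a random-walk matrix P = D^{-1} W (kept sparse).
d = np.asarray(W.sum(axis=1)).ravel()
P = csr_matrix(W.multiply(1.0 / d[:, None]))

# One-hot seed labels: node 0 is class 0, node 5 is class 1.
Y0 = np.zeros((n, 2))
Y0[0, 0] = 1.0
Y0[5, 1] = 1.0

alpha = 0.9
Y = Y0.copy()
for _ in range(200):
    # Affine map: sparse matvec plus a fixed offset toward the seed labels.
    Y = alpha * (P @ Y) + (1 - alpha) * Y0

labels = Y.argmax(axis=1)
print(labels)  # nodes 0-2 take class 0, nodes 3-5 take class 1
```

The parallelism claimed in the abstract lives entirely in the `P @ Y` product, which is a standard sparse matrix-vector kernel.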
Distributed and Asynchronous Methods for Semi-supervised Learning
We propose two asynchronously distributed approaches for graph-based semi-supervised learning. The first approach is based on stochastic approximation, whereas the second is based on the randomized Kaczmarz algorithm. In addition to admitting distributed implementations, both approaches can be naturally applied online to streaming data. We analyse both approaches theoretically and experimentally. It appears that there is no clear winner, and we provide indications about the cases in which each approach is superior.
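The randomized Kaczmarz algorithm mentioned above solves a consistent linear system by projecting the current iterate onto the hyperplane of one randomly sampled equation per step, which is what makes it naturally asynchronous and streamable. The minimal sketch below runs it on a small generic system; the paper applies it to the semi-supervised learning system, and the system, sampling scheme, and iteration count here are illustrative assumptions.

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=2000, seed=0):
    """Solve a consistent system Ax = b one random row at a time:
    x <- x + (b_i - a_i.x) / ||a_i||^2 * a_i,
    with rows sampled proportionally to ||a_i||^2 (Strohmer-Vershynin)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms = np.einsum('ij,ij->i', A, A)
    probs = row_norms / row_norms.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# Small consistent system with a known solution.
A = np.array([[3.0, 1.0], [1.0, 2.0], [2.0, -1.0]])
x_true = np.array([1.0, -2.0])
b = A @ x_true
x = randomized_kaczmarz(A, b)
print(np.round(x, 4))
```

Each update touches a single row of A, so independent workers can apply updates for different rows with minimal coordination, and new rows (streaming data points) can simply join the sampling pool.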
An Open-Access Model for Parkinson's Disease Progression
Using machine learning, we developed a statistical progression model of early Parkinson's disease that accounts for medication effects and for variability within and between subjects. The resulting personalized model can be used to quantitatively describe clinical visits and will be made public, enabling replication and reproducibility.
Human Verbal Memory Encoding Is Hierarchically Distributed in a Continuous Processing Stream.
Processing of memory is supported by coordinated activity in a network of sensory, association, and motor brain regions. It remains a major challenge to determine where memory is encoded for later retrieval. Here, we used direct intracranial brain recordings from epilepsy patients performing free recall tasks to determine the temporal pattern and anatomical distribution of verbal memory encoding across the entire human cortex. High γ frequency activity (65-115 Hz) showed consistent power responses during encoding of subsequently recalled and forgotten words on a subset of electrodes localized in 16 distinct cortical areas activated in the tasks. Greater high γ power during word encoding, and lower power before and after word presentation, was characteristic of successful recall and was observed across multiple brain regions. Latencies of the induced power changes and of this subsequent memory effect (SME) between recalled and forgotten words followed an anatomical sequence from visual to prefrontal cortical areas. Finally, the magnitude of the memory effect was unexpectedly found to be largest in selected brain regions at both the top and the bottom of the processing stream: the language-processing areas of the prefrontal cortex and the early visual areas at the junction of the occipital and temporal lobes. Our results provide evidence for distributed encoding of verbal memory organized along a hierarchical posterior-to-anterior processing stream.
A Computationally Efficient Model for Predicting Successful Memory Encoding Using Machine-Learning-based EEG Channel Selection
Computational cost is an important consideration for memory encoding prediction models that use data from dozens of implanted electrodes. We propose a method to reduce computational expense by selecting a subset of all the electrodes to build the prediction model. Electrodes were selected based on the likelihood that the brain activity they measure predicts memory encoding better than chance (in terms of AUC). A logistic regression prediction model was built using spectral features of intracranial electroencephalography (iEEG) from the selected electrodes. We demonstrate our method on iEEG data from 37 human subjects performing free-recall verbal short-term memory tasks. The method achieves a 36.3% reduction in the number of electrodes used for prediction, resulting in a 64.9% reduction in inference computation time with just a 0.3% loss in prediction performance compared to the case when all electrodes were used. The electrodes selected by our method provided better prediction performance than the electrodes that were not selected in 31 out of 37 patients. Building on this observation, we also developed a method to identify the subjects for whom the proposed electrode selection method would be beneficial.
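The pipeline described above, screening electrodes by single-electrode AUC and then fitting a logistic regression on the survivors, can be sketched on synthetic data. Everything below is an illustrative stand-in: the features, electrode counts, effect sizes, the 0.55 AUC threshold, and the plain-gradient-descent logistic fit are assumptions, not the paper's actual data or thresholds.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for iEEG spectral features: trials x electrodes, where
# each column is one electrode's encoding-period power. Only the first 6
# electrodes carry information about later recall (label y = 1).
n_trials, n_elec = 400, 20
y = rng.integers(0, 2, n_trials)          # 1 = word subsequently recalled
X = rng.normal(size=(n_trials, n_elec))
X[:, :6] += 0.8 * y[:, None]              # informative electrodes

def auc(scores, labels):
    """Rank-based AUC: Mann-Whitney U statistic over n_pos * n_neg."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Step 1: keep electrodes whose single-electrode AUC clears a
# better-than-chance bar (threshold is an assumption).
aucs = np.array([auc(X[:, j], y) for j in range(n_elec)])
selected = np.where(aucs > 0.55)[0]

# Step 2: logistic regression on the selected electrodes only,
# fit here with plain gradient descent to stay dependency-free.
Xs = X[:, selected]
w = np.zeros(Xs.shape[1])
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(Xs @ w + b)))
    w -= 0.1 * Xs.T @ (p - y) / n_trials
    b -= 0.1 * (p - y).mean()

pred_auc = auc(Xs @ w + b, y)
print(len(selected), round(pred_auc, 3))
```

Inference cost then scales with the number of selected electrodes rather than all implanted electrodes, which is the source of the reported speed-up.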