
    Infinite Feature Selection on SHORE-Based Biomarkers Reveals Connectivity Modulation after Stroke

    Connectomics is gaining increasing interest in the scientific and clinical communities. It consists of deriving models of structural or functional brain connections based on local measures. Here we focus on structural connectivity as detected by diffusion MRI. Connectivity matrices are derived from microstructural indices obtained by 3D-SHORE. Typically, graphs are derived from connectivity matrices and used for inferring node properties that allow identifying the nodes playing a prominent role in the network. This information can then be used to detect network modulations induced by disease. In this paper we take a complementary approach and focus on link rather than node properties. We hypothesize that network modulation can be better described by measuring the connectivity alteration directly, in the form of modulation of the properties of the white matter fiber bundles constituting the network's communication backbone. The goal of this paper is to detect the paths most altered by the pathology by exploiting a feature selection paradigm. Temporal changes in connection weights are treated as features, and those playing a leading role in a patients versus healthy controls classification task are detected by the Infinite Feature Selection (Inf-FS) method. Results show that connection paths with high discriminative power, shared by the considered microstructural descriptors, can be identified, allowing a classification accuracy between 83% and 89%.
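The Inf-FS idea summarized above, scoring each feature by its energy over paths of every length on a feature affinity graph, can be sketched as follows. This is a minimal sketch under stated assumptions: the affinity built from correlation plus dispersion, the `alpha` parameter, and all names are simplifications for illustration, not the exact weighting used in the paper.

```python
import numpy as np

def inf_fs_scores(X, alpha=0.9):
    """Inf-FS-style energy score: sum contributions of paths of every
    length on a feature affinity graph (assumed simplified affinity)."""
    n_feat = X.shape[1]
    corr = np.abs(np.corrcoef(X, rowvar=False))   # feature-feature correlation
    std = X.std(axis=0)
    disp = np.maximum.outer(std, std)
    disp = disp / (disp.max() + 1e-12)            # pairwise dispersion term
    A = 0.5 * corr + 0.5 * disp                   # affinity graph over features
    rho = np.max(np.abs(np.linalg.eigvalsh(A)))   # spectral radius
    r = alpha / (rho + 1e-12)                     # keep the series convergent
    # Sum over paths of every length: S = sum_{l>=1} (rA)^l = (I - rA)^-1 - I
    S = np.linalg.inv(np.eye(n_feat) - r * A) - np.eye(n_feat)
    return S.sum(axis=1)                          # energy score per feature

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 6))                      # e.g. connection-weight changes
scores = inf_fs_scores(X)
top = np.argsort(scores)[::-1]                    # most relevant features first
```

Features with the highest energy scores would then feed the patient-versus-control classifier.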

    Studying brain connectivity: a new multimodal approach for structure and function integration

    The brain is a complex system whose anatomical and functional organization is both segregated and integrated.
A longstanding question for the neuroscience community has been to elucidate the mutual influences between structure and function. To that aim, structural and functional connectivity first need to be explored individually. Structural connectivity can be measured by the diffusion Magnetic Resonance signal followed by successive computational steps up to virtual tractography. Functional connectivity can be established by correlation between brain activity time courses measured by different modalities, such as functional Magnetic Resonance Imaging or Electro-/Magneto-Encephalography (M/EEG). Recently, the Graph Signal Processing (GSP) framework has provided a new way to jointly analyse structure and function. In particular, this framework extends and generalizes many classical signal-processing operations to graphs (e.g., spectral analysis, filtering, and so on). The graph here is built from the structural connectome, i.e., the anatomical backbone of the brain, where nodes represent brain regions and edge weights the strength of structural connectivity. The functional signals are considered as time-dependent graph signals, i.e., measures associated with the nodes of the graph. The Graph Fourier Transform then allows decomposing regional functional signals into, on one side, a portion that is strongly aligned with the underlying structural network ("aligned") and, on the other side, a portion that is not well aligned with structure ("liberal"). The proportion of aligned-vs-liberal energy in functional signals has been associated with cognitive flexibility. However, the interpretation of these multimodal relationships is still limited and unexplored for functional signals with higher temporal resolution such as M/EEG. Moreover, the construction of the structural connectome itself using tractography is still a challenging topic: many new advanced models were proposed in the last decade, but their impact on the connectome remains unclear.
In the first part of this thesis, I disentangled the variability of tractograms derived from different tractography methods, comparing them within a test-retest paradigm that allows defining the specificity and sensitivity of each model. I sought the best trade-off between specificity and sensitivity to define the model best suited for the analysis of functional signals. Moreover, I addressed the issue of weighting the graph by comparing several estimates, highlighting the sufficiency of binary connectivity and the power of latest-generation microstructural properties in clinical applications. In the second part, I developed a GSP method that allows applying the aligned and liberal filters to M/EEG signals. The model extends the structural constraints to consider indirect connections, which have recently been shown to be powerful in the structure/function link. I then show that it is possible to identify dynamic changes in aligned-vs-liberal energy, highlighting fluctuations present in both motor task and resting state. This model opens the perspective of novel biomarkers. Indeed, M/EEG are often used in clinical applications; e.g., multimodal integration of data from Parkinson's disease or stroke could combine changes of both structural and functional connectivity.
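The aligned/liberal split described above can be sketched with a Graph Fourier Transform built from the structural graph's Laplacian. This is an illustrative sketch under assumptions: a combinatorial Laplacian, a random symmetric weight matrix standing in for the connectome, and a hard cutoff `n_low` separating "aligned" (low graph frequency) from "liberal" (high graph frequency) modes.

```python
import numpy as np

def gft_split(W, x, n_low):
    """Decompose graph signal x into structure-aligned and liberal parts.

    W: symmetric structural connectivity (n x n); x: signal on the n nodes;
    n_low: number of low-frequency Laplacian modes treated as 'aligned'."""
    d = W.sum(axis=1)
    L = np.diag(d) - W                       # combinatorial graph Laplacian
    evals, U = np.linalg.eigh(L)             # eigenvectors = graph Fourier basis
    x_hat = U.T @ x                          # Graph Fourier Transform of x
    aligned = U[:, :n_low] @ x_hat[:n_low]   # smooth, structure-aligned portion
    liberal = U[:, n_low:] @ x_hat[n_low:]   # high-frequency, liberal portion
    return aligned, liberal

rng = np.random.default_rng(1)
W = rng.random((8, 8))
W = (W + W.T) / 2                            # toy symmetric "connectome"
np.fill_diagonal(W, 0)
x = rng.normal(size=8)                       # e.g. one M/EEG time sample per region
aligned, liberal = gft_split(W, x, n_low=4)
```

Because the Fourier basis is orthonormal, the two portions sum back to the original signal and their energies partition the signal energy, which is what makes the aligned-vs-liberal energy proportion well defined.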

    Ranking to Learn: Feature Ranking and Selection via Eigenvector Centrality

    In an era where accumulating data is easy and storing it inexpensive, feature selection plays a central role in helping to reduce the high dimensionality of huge amounts of otherwise meaningless data. In this paper, we propose a graph-based method for feature selection that ranks features by identifying the most important ones within an arbitrary set of cues. Mapping the problem onto an affinity graph, where features are the nodes, the solution is given by assessing the importance of nodes through indicators of centrality, in particular Eigenvector Centrality (EC). The gist of EC is to estimate the importance of a feature as a function of the importance of its neighbors. Ranking central nodes identifies candidate features that turn out to be effective from a classification point of view, as demonstrated by a thorough experimental section. Our approach has been tested on 7 diverse datasets from recent literature (e.g., biological data and object recognition, among others) and compared against filter, embedded, and wrapper methods. The results are remarkable in terms of accuracy, stability, and low execution time.
    Comment: Preprint version, Lecture Notes in Computer Science, Springer 201
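The EC recipe above, build an affinity graph over features and rank them by the leading eigenvector, can be sketched as below. The affinity mixing feature correlation with a Fisher-style class relevance is an assumed simplification for illustration, not the exact kernel from the paper.

```python
import numpy as np

def ec_feature_ranking(X, y):
    """Rank features by eigenvector centrality on a feature affinity graph
    (sketch; the affinity construction is an assumed simplification)."""
    Xs = (X - X.mean(0)) / (X.std(0) + 1e-12)
    corr = np.abs(np.corrcoef(Xs, rowvar=False))  # feature-feature similarity
    # Fisher-style class relevance per feature (assumed scoring term)
    mu0, mu1 = X[y == 0].mean(0), X[y == 1].mean(0)
    s0, s1 = X[y == 0].var(0), X[y == 1].var(0)
    fisher = (mu0 - mu1) ** 2 / (s0 + s1 + 1e-12)
    rel = np.sqrt(np.outer(fisher, fisher))
    A = 0.5 * corr + 0.5 * rel / (rel.max() + 1e-12)  # affinity graph
    evals, evecs = np.linalg.eigh(A)
    v = np.abs(evecs[:, -1])                 # leading eigenvector = centrality
    return np.argsort(v)[::-1]               # features, most central first

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 5))
y = np.repeat([0, 1], 30)
X[:, 2] += 3.0 * y                           # feature 2 separates the classes
order = ec_feature_ranking(X, y)
```

In this toy example the class-separating feature should come out as the most central node of the affinity graph.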

    Brain Microstructure: Impact of the Permeability on Diffusion MRI

    Diffusion Magnetic Resonance Imaging (dMRI) enables a non-invasive in-vivo characterization of brain tissue. Disentangling each microstructural property reflected in the total dMRI signal is one of the hottest topics in the field. dMRI reconstruction techniques rest on assumptions about the signal model and consider neuronal axons as impermeable cylinders. Nevertheless, interaction with the environment is characteristic of biological life, and diffusional water exchange takes place through cell membranes. Myelin, which wraps axons in multiple layers, constitutes a barrier modulating exchange between the axon and the extracellular tissue. Due to the short transverse relaxation time (T2) of water trapped between myelin sheets, the myelin contribution to the diffusion signal is often neglected. This thesis aims to explore how exchange influences the dMRI signal and how this can be informative about myelin structure. We also aimed to explore how recent dMRI signal reconstruction techniques could be applied in clinics, proposing a strategy for investigating the potential of the derived tissue descriptors as biomarkers. The first goal of the thesis was addressed by performing Monte Carlo simulations of a system with three compartments: intra-axonal, spiraling myelin, and extra-axonal. The experiments showed that the exchange time between intra- and extra-axonal compartments was on the sub-second level (and thus possibly observable) for geometries with small axon diameter and a low number of wraps, such as in the infant brain and in demyelinating diseases. The second goal was reached by assessing the indices derived from three-dimensional simple harmonic oscillator-based reconstruction and estimation (3D-SHORE) in ischemic stroke. A tract-based analysis involving motor networks and a region-based analysis in grey matter (GM) were performed.
    3D-SHORE indices proved to be sensitive to plasticity in both white matter (WM) and GM, highlighting their viability as biomarkers in ischemic stroke. The overall study can be considered the starting point for a future investigation of the interdependence of phenomena such as exchange and relaxation related to the established dMRI indices. This is valuable for accurate dMRI data interpretation in heterogeneous tissues and different physiological conditions.
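A drastically simplified, hypothetical version of such a Monte Carlo exchange experiment can be sketched in one dimension: walkers diffuse inside an "axon" and cross the membrane with a fixed permeation probability per collision. The thesis' three-compartment spiraling-myelin geometry is far richer than this toy; all parameters below are illustrative.

```python
import numpy as np

def simulate_exchange(n_walkers=2000, n_steps=400, step=0.1, R=1.0,
                      p_cross=0.05, seed=0):
    """1D random walk with a semi-permeable membrane at |x| = R.

    Returns the fraction of walkers still intra-axonal at the end,
    a crude proxy for how fast intra/extra exchange happens."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-R, R, size=n_walkers)    # start inside the "axon"
    inside = np.ones(n_walkers, dtype=bool)
    for _ in range(n_steps):
        dx = rng.choice([-step, step], size=n_walkers)
        nx = x + dx
        hit = inside & (np.abs(nx) > R)       # walker collides with membrane
        cross = hit & (rng.random(n_walkers) < p_cross)
        reflect = hit & ~cross
        nx[reflect] = x[reflect]              # elastic reflection at the wall
        inside = inside & ~cross              # crossed walkers have exchanged
        x = nx
    return inside.mean()

frac_tight = simulate_exchange(p_cross=0.0)   # impermeable membrane
frac_leaky = simulate_exchange(p_cross=0.2)   # permeable membrane
```

With zero permeability every walker stays intra-axonal, while a permeable membrane drains the compartment over time; sweeping `p_cross` and geometry against the signal decay is the spirit of the full three-compartment experiments.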

    Towards an Understanding of Tinnitus Heterogeneity


    The Developmental Trajectory of Contour Integration in Autism Spectrum Disorders

    Sensory input is inherently ambiguous and complex, so perception is believed to be achieved by combining incoming sensory information with prior knowledge. One model envisions the grouping of sensory features (the local dimensions of stimuli) as the outcome of a predictive process relying on prior experience (the global dimension of stimuli) to disambiguate the possible configurations those elements could take. Contour integration, the linking of aligned but separate visual elements, is one example of perceptual grouping. Kanizsa-type illusory contour (IC) stimuli have been widely used to explore contour integration processing. These stimuli comprise two conditions that differ only in the alignment of their inducing elements: one induces the experience of a shape apparently defined by a contour and the other does not. This contour has no counterpart in actual visual space – it is the visual system that fills in the gap between inducing elements. A well-tested electrophysiological index associated with this process (the IC-effect) provided us with a metric of the visual system's contribution to contour integration. Using visually evoked potentials (VEPs), we began by probing the sensitivity of this metric to three manipulations of contour parameters previously shown to impact the subjective experience of illusion strength. Next, we detailed the developmental trajectory of contour integration processes over childhood and adolescence. Finally, because persons with autism spectrum disorders (ASDs) have demonstrated an altered balance of global and local processing, we hypothesized that contour integration may be atypical. We compared typical development to development in persons with ASDs to reveal possible mechanisms underlying this processing difference. Our manipulations resulted in no differences in the strength of the IC-effect in adults or children in either group.
    However, the timing of the IC-effect was delayed in two instances: 1) peak latency was delayed by increasing the extent of contour to be filled in relative to overall IC size, and 2) onset latency was delayed in participants with ASDs relative to their neurotypical counterparts.

    Ranking to Learn and Learning to Rank: On the Role of Ranking in Pattern Recognition Applications

    The last decade has seen a revolution in the theory and application of machine learning and pattern recognition. Through these advancements, variable ranking has emerged as an active and growing research area, and it is now beginning to be applied to many new problems. The rationale behind this is that many pattern recognition problems are by nature ranking problems. The main objective of a ranking algorithm is to sort objects according to some criteria so that the most relevant items appear early in the produced result list. Ranking methods can be analyzed from two methodological perspectives: ranking to learn and learning to rank. The former studies methods and techniques to sort objects in order to improve the accuracy of a machine learning model. Enhancing a model's performance can be challenging at times. For example, in pattern classification tasks, different data representations can complicate and hide the explanatory factors of variation behind the data. In particular, hand-crafted features contain many cues that are either redundant or irrelevant, which reduce the overall accuracy of the classifier. In such cases feature selection is used: by producing ranked lists of features, it helps to filter out the unwanted information. Moreover, in real-time systems (e.g., visual trackers), ranking approaches are used as optimization procedures that improve the robustness of a system dealing with the high variability of image streams that change over time. Conversely, learning to rank is necessary in the construction of ranking models for information retrieval, biometric authentication, re-identification, and recommender systems. In this context, the ranking model's purpose is to sort objects according to their degree of relevance, importance, or preference as defined in the specific application.
    Comment: European PhD Thesis. arXiv admin note: text overlap with arXiv:1601.06615, arXiv:1505.06821, arXiv:1704.02665 by other authors
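The learning-to-rank setting mentioned above can be illustrated with a minimal pairwise approach: learn a linear scoring function so that preferred items score higher. The perceptron-style update and the toy data below are illustrative assumptions, not a method from the thesis.

```python
import numpy as np

def rank_perceptron(items, pairs, lr=0.1, epochs=50):
    """Learn a linear score w from preference pairs (i preferred over j)
    with a perceptron-style update on violated pairs."""
    w = np.zeros(items.shape[1])
    for _ in range(epochs):
        for i, j in pairs:
            if items[i] @ w <= items[j] @ w:     # preference violated
                w += lr * (items[i] - items[j])  # push i above j
    return w

items = np.array([[3.0, 1.0], [2.0, 1.0], [1.0, 1.0]])  # toy feature vectors
pairs = [(0, 1), (1, 2), (0, 2)]                         # preference: 0 > 1 > 2
w = rank_perceptron(items, pairs)
order = np.argsort(items @ w)[::-1]              # items, best-scored first
```

Sorting by the learned score reproduces the preference order, which is exactly the "sort objects by relevance" objective the abstract describes.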