13 research outputs found

    Impact of Visual Repetition Rate on Intrinsic Properties of Low Frequency Fluctuations in the Visual Network

    BACKGROUND: The visual processing network is one of the functional networks that has been reliably and consistently identified in the resting human brain. In this work we focused on this network and investigated the intrinsic properties of low-frequency (0.01-0.08 Hz) fluctuations (LFFs) as visual stimuli changed. Two main questions were addressed: the intrinsic properties of LFFs with respect to (1) the interaction between visual stimulation and the resting state, and (2) the impact of the repetition rate of visual stimuli. METHODOLOGY/PRINCIPAL FINDINGS: We analyzed scanning sessions containing rest and visual stimuli at various repetition rates with a novel method. The method combined three numerical approaches, ICA (Independent Component Analysis), fALFF (fractional Amplitude of Low-Frequency Fluctuation), and coherence, to investigate, respectively, the modulation of the visual network pattern, low-frequency fluctuation power, and interregional functional connectivity as the visual stimuli changed. We found that when the resting state was replaced by visual stimulation, more areas were involved in visual processing, and both stronger low-frequency fluctuations and higher interregional functional connectivity occurred in the visual network. As the visual repetition rate changed, the number of areas involved in visual processing, the low-frequency fluctuation power, and the interregional functional connectivity in this network were also modulated. CONCLUSIONS/SIGNIFICANCE: Combining the results of prior literature with our findings, the intrinsic properties of LFFs in the visual network are altered not only by endogenous factors (eyes-open versus eyes-closed condition; alcohol administration) and disordered conditions (early blindness), but also by exogenous sensory stimuli (visual stimuli at various repetition rates). This demonstrates that the intrinsic properties of LFFs are valuable for characterizing the physiological states of the human brain.
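    As an illustration of the fALFF measure used above, the sketch below computes the fractional amplitude of low-frequency fluctuations for a single simulated voxel time series. The band limits, TR and data are placeholders; this is a minimal sketch of the measure, not the authors' actual pipeline, which also involved ICA and coherence analysis.

```python
import numpy as np

def falff(ts, tr, band=(0.01, 0.08)):
    """Fractional ALFF: amplitude of the spectrum inside the low-frequency
    band divided by the amplitude over the whole detectable range."""
    ts = np.asarray(ts, dtype=float)
    ts = ts - ts.mean()                        # remove the DC component
    freqs = np.fft.rfftfreq(len(ts), d=tr)     # TR is the sampling period in seconds
    amp = np.abs(np.fft.rfft(ts))              # amplitude spectrum
    low = (freqs >= band[0]) & (freqs <= band[1])
    return amp[low].sum() / amp[1:].sum()      # skip the zero-frequency bin

# Example: a simulated voxel time series, 240 volumes at TR = 2 s
rng = np.random.default_rng(0)
t = np.arange(240) * 2.0
ts = rng.standard_normal(240) + np.sin(2 * np.pi * 0.05 * t)   # 0.05 Hz component
print(falff(ts, tr=2.0))
```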

    Investigating the Use of Support Vector Machine Classification on Structural Brain Images of Preterm–Born Teenagers as a Biological Marker

    Preterm birth has been shown to induce an altered developmental trajectory of brain structure and function. With the aid of support vector machine (SVM) classification methods, we aimed to investigate whether MRI data collected in adolescence could be used to predict whether an individual had been born preterm or at term. To this end we collected T1-weighted anatomical MRI data from 143 individuals (69 controls, mean age 14.6 y). The inclusion criteria for those born preterm were birth weight ≤ 1500 g and gestational age < 37 w. A linear SVM was trained on the grey matter segment of the MR images in two different ways. First, all individuals were used for training and classification was performed with the leave-one-out method, yielding 93% correct classification (sensitivity = 0.905, specificity = 0.942). Separately, a random half of the available data was used for training twice, and each time the other, unseen half of the data was classified, resulting in 86% and 91% correct classification. Both gestational age (R = -0.24, p < 0.04) and birth weight (R = -0.51, p < 0.001) correlated with the distance to the decision boundary within the group of individuals born preterm. A statistically significant correlation was also found between IQ (R = -0.30, p < 0.001) and the distance to the decision boundary. Those born small for gestational age did not form a separate subgroup in these analyses. The high rate of correct classification by the SVM motivates further investigation. The long-term goal is to automatically and non-invasively predict the outcome of preterm-born individuals on an individual basis using as early a scan as possible.
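    The first analysis described above (a linear SVM on grey-matter features with leave-one-out validation) can be sketched roughly as follows with scikit-learn; the feature matrix, labels and parameters are synthetic placeholders, not the study's data or settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Placeholder data: one row per subject of vectorised grey-matter values;
# in practice these would come from the segmented T1-weighted images.
rng = np.random.default_rng(42)
X = rng.standard_normal((143, 5000))
y = rng.integers(0, 2, size=143)             # 0 = term-born control, 1 = preterm-born

clf = SVC(kernel="linear", C=1.0)
pred = cross_val_predict(clf, X, y, cv=LeaveOneOut())

accuracy = (pred == y).mean()
sensitivity = pred[y == 1].mean()            # preterm subjects correctly flagged
specificity = 1.0 - pred[y == 0].mean()      # controls correctly flagged
print(accuracy, sensitivity, specificity)
```

    The reported correlations with gestational age, birth weight and IQ would presumably use the signed distance to the decision boundary (available via `decision_function`) rather than the hard class labels.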

    Brain responses to biological motion predict treatment outcome in young children with autism

    Autism spectrum disorders (ASDs) are common yet complex neurodevelopmental disorders characterized by social, communication and behavioral deficits. Behavioral interventions have shown favorable results; however, the promise of precision medicine in ASD is hampered by a lack of sensitive, objective neurobiological markers (neurobiomarkers) to identify subgroups of young children likely to respond to specific treatments. Such neurobiomarkers are essential because early childhood provides a sensitive window of opportunity for intervention, while unsuccessful intervention is costly to children, families and society. In young children with ASD, we show that functional magnetic resonance imaging-based stratification neurobiomarkers accurately predict responses to an evidence-based behavioral treatment, pivotal response treatment. Neural predictors were identified in the pretreatment levels of activity in response to biological vs scrambled motion in the neural circuits that support social information processing (superior temporal sulcus, fusiform gyrus, amygdala, inferior parietal cortex and superior parietal lobule) and social motivation/reward (orbitofrontal cortex, insula, putamen, pallidum and ventral striatum). The predictive value of our findings for individual children with ASD was supported by a multivariate pattern analysis with cross-validation. In predicting who will respond to a particular treatment for ASD, we believe the current findings mark the first evidence of prediction/stratification biomarkers in young children with ASD. The implications of these findings are far-reaching and should greatly accelerate progress toward more precise and effective treatments for the core deficits of ASD.
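    A rough sketch of the general logic (cross-validated multivariate prediction of treatment response from pretreatment ROI activity) is given below; the ROI list, feature values, labels and classifier are illustrative assumptions and do not reproduce the study's actual analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Hypothetical per-subject features: pretreatment activation for the
# biological-minus-scrambled-motion contrast, averaged within each
# social-perception / social-motivation ROI named in the abstract.
rois = ["STS", "fusiform", "amygdala", "IPL", "SPL", "OFC", "insula", "putamen"]
rng = np.random.default_rng(1)
X = rng.standard_normal((20, len(rois)))     # 20 children x 8 ROI contrast values
y = np.array([1] * 10 + [0] * 10)            # placeholder labels: 1 = good treatment response

clf = LogisticRegression(max_iter=1000)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
print(cross_val_score(clf, X, y, cv=cv).mean())
```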

    Movies and meaning: from low-level features to mind reading

    When dealing with movies, closing the tremendous gap between low-level features and the richness of semantics in viewers' cognitive processes requires a variety of approaches and different perspectives. For instance, when attempting to relate movie content to users' affective responses, previous work suggests that a direct mapping of audio-visual properties onto elicited emotions is difficult, owing to the high variability of individual reactions. To reduce the gap between the objective level of features and the subjective sphere of emotions, we exploit the intermediate representation of the connotative properties of movies: the set of shooting and editing conventions that help transmit meaning to the audience. One of these stylistic features, the shot scale, i.e. the distance of the camera from the subject, effectively regulates theory of mind: increasing spatial proximity to the character triggers a higher occurrence of mental-state references in viewers' story descriptions. Movies are also becoming an important stimulus in neural decoding, an ambitious line of research within contemporary neuroscience aiming at "mind reading". In this field we address the challenge of producing decoding models for the reconstruction of perceptual content by combining fMRI data and deep features in a hybrid model able to predict specific video object classes.
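    One common way to set up such a hybrid decoding model is to map fMRI activity into a deep feature space with a linear regression and then decode by similarity to class prototypes. The sketch below illustrates that idea with random placeholder data and should not be read as the authors' exact architecture.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Hypothetical data: per-time-point fMRI patterns and deep (CNN) features of
# the movie frames shown at the corresponding time points.
rng = np.random.default_rng(7)
n, n_voxels, n_deep = 600, 2000, 512
fmri = rng.standard_normal((n, n_voxels))
deep = rng.standard_normal((n, n_deep))

X_tr, X_te, y_tr, y_te = train_test_split(fmri, deep, test_size=0.2, random_state=0)

# Learn a linear mapping from brain activity into the deep feature space...
decoder = Ridge(alpha=10.0).fit(X_tr, y_tr)
pred = decoder.predict(X_te)

# ...then decode by similarity to class prototypes (placeholder prototypes:
# mean deep features over random frame subsets for a few object classes).
prototypes = {c: deep[rng.choice(n, 50)].mean(axis=0) for c in ["person", "car", "animal"]}
best = max(prototypes, key=lambda c: np.corrcoef(pred[0], prototypes[c])[0, 1])
print(best)
```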

    Evolving Spatio-temporal Data Machines Based on the NeuCube Neuromorphic Framework: Design Methodology and Selected Applications

    The paper describes a new type of evolving connectionist system (ECOS) called evolving spatio-temporal data machines, based on neuromorphic, brain-like information processing principles (eSTDM). These are multi-modular computer systems designed to deal with large and fast spatio/spectro-temporal data using spiking neural networks (SNN) as their major processing modules. ECOS, and eSTDM in particular, can learn incrementally from data streams, can include ‘on the fly’ new input variables and new output class labels or regression outputs, can continuously adapt their structure and functionality, and can be visualised and interpreted for new knowledge discovery and for a better understanding of the data and the processes that generated them. eSTDM can be used for early event prediction owing to the ability of the SNN to spike early, before the whole input vectors (on which they were trained) have been presented. A framework for building eSTDM, called NeuCube, is presented along with a design methodology for building eSTDM with it. The implementation of this framework in MATLAB, Java, and PyNN (Python) is described; the latter facilitates the use of neuromorphic hardware platforms to run eSTDM. Selected examples of eSTDM are given for pattern recognition and early event prediction on EEG data, fMRI data, multisensory seismic data, ecological data, climate data, and audio-visual data. Future directions are discussed, including extension of the NeuCube framework for building neurogenetic eSTDM and new applications of eSTDM.
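    A central ingredient of eSTDM is converting continuous spatio-temporal signals into spike trains before they reach the SNN modules. The snippet below shows a generic delta/threshold spike encoder as a minimal sketch of that step; NeuCube's own encoders and its MATLAB/Java/PyNN implementations may differ in detail.

```python
import numpy as np

def threshold_encode(signal, threshold):
    """Generic delta/threshold spike encoding: emit a +1 (or -1) spike when
    the signal rises (or falls) by more than `threshold` since the last
    spike. A minimal sketch; NeuCube's own encoders may differ in detail."""
    spikes = np.zeros(len(signal), dtype=int)
    baseline = signal[0]
    for t in range(1, len(signal)):
        if signal[t] - baseline > threshold:
            spikes[t], baseline = 1, signal[t]
        elif baseline - signal[t] > threshold:
            spikes[t], baseline = -1, signal[t]
    return spikes

# Example: encode one simulated EEG channel into a spike train
rng = np.random.default_rng(4)
x = np.sin(np.linspace(0, 6 * np.pi, 200)) + 0.05 * rng.standard_normal(200)
print(threshold_encode(x, threshold=0.1)[:20])
```

    Because spikes are emitted as soon as the signal changes enough, a trained SNN can begin responding before the whole input sequence has been presented, which is what enables the early event prediction mentioned above.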

    Support vector machine-based classification of neuroimages in Alzheimer’s disease: direct comparison of FDG-PET, rCBF-SPECT and MRI data acquired from the same individuals

    OBJECTIVE: To conduct the first support vector machine (SVM)-based study comparing the diagnostic accuracy of T1-weighted magnetic resonance imaging (T1-MRI), 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) and regional cerebral blood flow single-photon emission computed tomography (rCBF-SPECT) in Alzheimer's disease (AD). METHOD: Brain T1-MRI, FDG-PET and rCBF-SPECT scans were acquired from a sample of mild AD patients (n = 20) and healthy elderly controls (n = 18). SVM-based diagnostic accuracy indices were calculated using whole-brain information and leave-one-out cross-validation. RESULTS: The accuracies obtained using PET and SPECT data were similar: PET accuracy was 68-71% with an area under the curve (AUC) of 0.77-0.81, and SPECT accuracy was 68-74% with an AUC of 0.75-0.79; both performed better than the analysis of T1-MRI data (accuracy of 58%, AUC 0.67). The addition of PET or SPECT to MRI produced higher accuracy indices (68-74%; AUC 0.74-0.82) than T1-MRI alone, but these were not clearly superior to the isolated neurofunctional modalities. CONCLUSION: In line with previous evidence, FDG-PET and rCBF-SPECT identified patients with AD more accurately than T1-MRI, and the addition of either PET or SPECT to T1-MRI data yielded increased accuracy. The comparable SPECT and PET performances, directly demonstrated for the first time in the present study, support the view that rCBF-SPECT still has a role to play in AD diagnosis.
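    The comparison described above (per-modality accuracy and AUC from whole-brain SVMs with leave-one-out cross-validation, plus a combined MRI+PET model built by concatenating features) might be sketched as follows; the feature matrices and labels are synthetic stand-ins for the real scans.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import roc_auc_score

# Hypothetical whole-brain feature matrices for the same 38 subjects
rng = np.random.default_rng(3)
mri = rng.standard_normal((38, 4000))
pet = rng.standard_normal((38, 4000))
y = np.array([1] * 20 + [0] * 18)            # 1 = mild AD, 0 = healthy control

def loo_accuracy_auc(X, y):
    # Decision values from leave-one-out cross-validation
    scores = cross_val_predict(SVC(kernel="linear"), X, y,
                               cv=LeaveOneOut(), method="decision_function")
    acc = (np.sign(scores) == np.sign(2 * y - 1)).mean()
    return acc, roc_auc_score(y, scores)

print("T1-MRI alone:", loo_accuracy_auc(mri, y))
print("T1-MRI + PET:", loo_accuracy_auc(np.hstack([mri, pet]), y))
```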

    Generative Embedding for Model-Based Classification of fMRI Data

    Decoding models, such as those underlying multivariate classification algorithms, have been increasingly used to infer cognitive or clinical brain states from measures of brain activity obtained by functional magnetic resonance imaging (fMRI). The practicality of current classifiers, however, is restricted by two major challenges. First, due to the high data dimensionality and low sample size, algorithms struggle to separate informative from uninformative features, resulting in poor generalization performance. Second, popular discriminative methods such as support vector machines (SVMs) rarely afford mechanistic interpretability. In this paper, we address these issues by proposing a novel generative-embedding approach that incorporates neurobiologically interpretable generative models into discriminative classifiers. Our approach extends previous work on trial-by-trial classification for electrophysiological recordings to subject-by-subject classification for fMRI and offers two key advantages over conventional methods: it may provide more accurate predictions by exploiting discriminative information encoded in ‘hidden’ physiological quantities such as synaptic connection strengths; and it affords mechanistic interpretability of clinical classifications. Here, we introduce generative embedding for fMRI using a combination of dynamic causal models (DCMs) and SVMs. We propose a general procedure of DCM-based generative embedding for subject-wise classification, provide a concrete implementation, and suggest good-practice guidelines for unbiased application of generative embedding in the context of fMRI. We illustrate the utility of our approach with a clinical example in which we classify moderately aphasic patients and healthy controls using a DCM of thalamo-temporal regions during speech processing. Generative embedding achieves a near-perfect balanced classification accuracy of 98% and significantly outperforms conventional activation-based and correlation-based methods. This example demonstrates how disease states can be detected with very high accuracy and, at the same time, be interpreted mechanistically in terms of abnormalities in connectivity. We envisage that future applications of generative embedding may provide crucial advances in dissecting spectrum disorders into physiologically better-defined subgroups.
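    The core idea of generative embedding is to classify subjects in the space of fitted model parameters rather than raw activations. Assuming per-subject DCM connection strengths have already been estimated (e.g., with SPM; that step is not shown here), the downstream classification step might look like the sketch below; the dimensions and labels are placeholders.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import balanced_accuracy_score

# Hypothetical generative embedding: each subject is represented by the
# parameters of a generative model fitted to their fMRI data (e.g., DCM
# connection strengths estimated elsewhere), not by raw voxel values.
rng = np.random.default_rng(5)
n_subjects, n_params = 40, 23                        # placeholder dimensions
theta = rng.standard_normal((n_subjects, n_params))  # per-subject parameter estimates
y = np.array([1] * 20 + [0] * 20)                    # 1 = patient, 0 = control (placeholder)

pred = cross_val_predict(SVC(kernel="linear"), theta, y, cv=LeaveOneOut())
print(balanced_accuracy_score(y, pred))
```

    Because the feature space is a small set of neurobiologically meaningful parameters, the weights of the resulting classifier can be read back as statements about connectivity, which is where the mechanistic interpretability claimed above comes from.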

    A comparison of various MRI feature types for characterizing whole brain anatomical differences using linear pattern recognition methods

    There is widespread interest in applying pattern recognition methods to anatomical neuroimaging data, but so far there has been relatively little investigation into how best to derive image features in order to make the most accurate predictions. In this work, a Gaussian Process machine learning approach was used to predict the age, gender and body mass index (BMI) of subjects in the IXI dataset, as well as age, gender and diagnostic status using the ABIDE and COBRE datasets. MRI data were segmented and aligned using SPM12, and a variety of feature representations were derived from this preprocessing. We compared classification and regression accuracy across the different types of features and across various degrees of spatial smoothing. Results suggested that feature sets that did not ignore the implicit background tissue class tended to give better overall performance, whereas some of the most commonly used feature sets performed relatively poorly.
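    A minimal sketch of the kind of Gaussian Process prediction described above, assuming vectorised (and possibly smoothed) grey-matter maps as features; the data, kernel choice and cross-validation scheme are illustrative, not the paper's exact setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import DotProduct, WhiteKernel
from sklearn.model_selection import cross_val_score

# Hypothetical features: vectorised, smoothed grey-matter maps, one row per
# subject; the target here is age, but gender or BMI work the same way.
rng = np.random.default_rng(11)
X = rng.standard_normal((100, 3000))
age = 20.0 + 2.0 * X[:, :5].sum(axis=1) + rng.normal(0, 1, 100)   # synthetic ages

# A linear (dot-product) kernel plus a noise term is a common choice when the
# number of features greatly exceeds the number of subjects.
gpr = GaussianProcessRegressor(kernel=DotProduct() + WhiteKernel(), normalize_y=True)
print(cross_val_score(gpr, X, age, cv=5, scoring="r2").mean())
```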

    Modeling and Automatic Recognition of Cognitive Processes from Brain Data Using Local Voxel Networks (Local Mesh Model)

    TÜBİTAK EEEAG project, 01.10.2015. Researchers in artificial intelligence draw inspiration from human intelligence and try to develop artificial systems that resemble it; the main goal is to build machines that think, learn and solve problems as humans do. Numerous mathematical methods and algorithms have been developed for the software and hardware of such systems, yielding mathematical models of important cognitive processes such as perceiving, recognizing, classifying and learning objects. In this project we turned these methods around: using techniques originally developed for artificial intelligence by imitating human intelligence, we modeled cognitive processes (e.g., memory, learning, emotion), in effect reverse-engineering artificial intelligence to model natural intelligence. Our models were built from functional magnetic resonance imaging (fMRI) data recorded during cognitive processes. To this end we designed a series of cognitive experiments and recorded fMRI signals while participants performed them; the resulting labeled data were used to train the machine learning algorithms developed in the project. One of the most important outcomes of the project is that the resulting models and fMRI data are made available to all researchers in a web-based environment, so that neuroscientists can use our methodology and programs to analyze and model their own fMRI data. We named the mathematical brain model developed in the project the Local Mesh Model (Yerel Voksel Ağları, YVA). This model, which has the potential to provide a scientific basis for human-computer interaction technologies, produced better results than other Multi-Voxel Pattern Analysis (MVPA) methods widely used in the literature in experiments on two different datasets. The Local Mesh Model describes the relationships between the time series of voxels, the smallest units of the fMRI signal, with linear equations. In the human brain, neurons that are close to each other are known to show similar activity, and this local similarity indicates a linear relationship between nearby voxels. Voxels that are far apart can also show similar activations through direct pathways in the brain; such voxels can be regarded as neighbors functionally rather than spatially. We call this new neighborhood system functional neighborhood and model the linear relationships among voxels within the same functional neighborhood. We showed experimentally that the YVA model labels cognitive processes more successfully than raw fMRI intensity features.

    Within the project we first designed a set of experiments to collect fMRI data and carried them out at the UMRAM center of Bilkent University. Three different experiments were conducted to understand how the human brain stores information and recognizes objects, yielding labeled fMRI data recorded while participants discriminated different objects. The raw data were then denoised with image processing techniques and prepared for our models; these techniques improved the labeling performance of the YVA method. Many different features can be derived from the brain network obtained with the YVA method. In this work we used the estimated arc (edge) weights of these networks as features to train several classifiers, including artificial neural networks, support vector machines and k-nearest neighbors, and measured the performance of the most successful ones. Because the method proved successful, we used GPU programming techniques to speed up the extraction of the linear-relationship features. Finally, we developed a user interface that presents the extracted features on a brain model, so that the brain networks we construct can be made available to other scientists.
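    Consistent with the description above, a minimal sketch of local-mesh feature extraction is given below: each voxel's time series is regressed on its p functionally nearest neighbours and the estimated arc weights are concatenated into a feature vector. The neighbourhood definition, the value of p and the data are assumptions for illustration, not the project's actual implementation.

```python
import numpy as np

def mesh_weights(data, p=5):
    """Sketch of local-mesh feature extraction: regress each voxel's time
    series on its p functionally nearest neighbours (here, the most
    correlated voxels) and keep the estimated arc weights.
    data: (n_timepoints, n_voxels) array for one trial or time window."""
    corr = np.corrcoef(data.T)              # functional closeness between voxels
    np.fill_diagonal(corr, -np.inf)         # a voxel is not its own neighbour
    weights = []
    for i in range(data.shape[1]):
        nbrs = np.argsort(corr[i])[-p:]     # indices of the p nearest functional neighbours
        A, b = data[:, nbrs], data[:, i]
        w, *_ = np.linalg.lstsq(A, b, rcond=None)   # least-squares arc weights
        weights.append(w)
    return np.concatenate(weights)          # one feature vector per trial

# Example on simulated data: 30 time points x 50 voxels
rng = np.random.default_rng(2)
print(mesh_weights(rng.standard_normal((30, 50)), p=5).shape)   # (250,)
```

    These arc-weight vectors would then serve as the input to the neural network, support vector machine, or k-nearest-neighbor classifiers mentioned above.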

    Neural Coding for Effective Rehabilitation
