Through their eyes: multi-subject Brain Decoding with simple alignment techniques
Previous brain decoding research primarily involves single-subject studies,
reconstructing stimuli via fMRI activity from the same subject. Our study aims
to introduce a generalization technique for cross-subject brain decoding,
facilitated by exploring data alignment methods. We utilized the NSD dataset, a
comprehensive 7T fMRI vision experiment involving multiple subjects exposed to
9841 images, 982 of which were viewed by all. Our approach involved training a
decoding model on one subject, aligning others' data to this space, and testing
the decoding on the second subject. We compared ridge regression,
hyperalignment, and anatomical alignment techniques for fMRI data alignment. We
established that cross-subject brain decoding is feasible, even using around
10% of the total data, or 982 common images, with comparable performance to
single-subject decoding. Ridge regression was the best method for functional
alignment. Through subject alignment, we achieved superior brain decoding and a
potential 90% reduction in scan time. This could pave the way for more
efficient experiments and further advances in a field where data collection
typically requires an exorbitant 20 hours of scan time per subject.
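The functional alignment step described above can be sketched in a few lines. This is a minimal illustration with synthetic data, assuming NumPy and scikit-learn are available; every dimension except the 982 shared stimuli is invented for the example:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Invented dimensions: 982 shared stimuli, synthetic "voxel" spaces.
n_shared, vox_a, vox_b, latent = 982, 120, 150, 16

# Simulate a latent stimulus representation and two subjects' responses to it.
Z = rng.normal(size=(n_shared, latent))
W_a = rng.normal(size=(latent, vox_a))
W_b = rng.normal(size=(latent, vox_b))
fmri_a = Z @ W_a + 0.1 * rng.normal(size=(n_shared, vox_a))
fmri_b = Z @ W_b + 0.1 * rng.normal(size=(n_shared, vox_b))

# Functional alignment: ridge regression mapping subject B's voxel space
# onto subject A's, fitted only on the stimuli both subjects saw.
aligner = Ridge(alpha=1.0)
aligner.fit(fmri_b, fmri_a)
fmri_b_aligned = aligner.predict(fmri_b)

# Sanity check: the aligned responses should track subject A's responses.
corr = np.corrcoef(fmri_b_aligned.ravel(), fmri_a.ravel())[0, 1]
print(round(corr, 2))
```

In the study itself, a decoder trained on subject A would then be applied to `fmri_b_aligned`; here the correlation with subject A's responses simply confirms that the mapping was learned.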
Decoding visual brain representations from electroencephalography through Knowledge Distillation and latent diffusion models
Decoding visual representations from human brain activity has emerged as a
thriving research domain, particularly in the context of brain-computer
interfaces. Our study presents an innovative method to classify and
reconstruct images from the ImageNet dataset using electroencephalography
(EEG) data from subjects who had viewed those images (i.e., "brain
decoding"). We analyzed EEG recordings from 6 participants, each exposed to 50
images spanning 40 unique semantic categories. These EEG readings were
converted into spectrograms, which were then used to train a convolutional
neural network (CNN), integrated with a knowledge distillation procedure based
on a pre-trained Contrastive Language-Image Pre-Training (CLIP)-based image
classification teacher network. This strategy allowed our model to attain a
top-5 accuracy of 80%, significantly outperforming a standard CNN and various
RNN-based benchmarks. Additionally, we incorporated an image reconstruction
mechanism based on pre-trained latent diffusion models, which allowed us to
generate an estimate of the images that had elicited the EEG activity. Therefore,
our architecture not only decodes images from neural activity but also offers a
credible image reconstruction from EEG only, paving the way for e.g. swift,
individualized feedback experiments. Our research represents a significant step
forward in connecting neural signals with visual cognition.
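The knowledge distillation objective mentioned above can be illustrated with a toy loss. This is a generic sketch of distillation (hard-label cross-entropy blended with a temperature-softened KL term toward the teacher), not the authors' exact training code; the logits below are random stand-ins for the CNN student and CLIP-based teacher outputs:

```python
import numpy as np

def softmax(x, T=1.0):
    """Temperature-scaled softmax, computed stably."""
    z = x / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Blend hard-label cross-entropy with a softened KL term to the teacher."""
    p_student = softmax(student_logits)
    hard = -np.log(p_student[np.arange(len(labels)), labels] + 1e-12).mean()
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    # KL(teacher || student) on temperature-softened distributions,
    # rescaled by T^2 as is conventional in distillation.
    soft = (p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12))).sum(axis=-1).mean() * T**2
    return alpha * hard + (1 - alpha) * soft

rng = np.random.default_rng(0)
student = rng.normal(size=(8, 40))                   # 40 semantic categories
teacher = student + 0.1 * rng.normal(size=(8, 40))   # teacher close to student
labels = student.argmax(axis=1)
loss = distillation_loss(student, teacher, labels)
print(round(loss, 3))
```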
4Ward: a Relayering Strategy for Efficient Training of Arbitrarily Complex Directed Acyclic Graphs
Thanks to their ease of implementation, multilayer perceptrons (MLPs) have
become ubiquitous in deep learning applications. The graph underlying an MLP is
indeed multipartite, i.e. each layer of neurons only connects to neurons
belonging to the adjacent layer. In contrast, in vivo brain connectomes at the
level of individual synapses suggest that biological neuronal networks are
characterized by scale-free degree distributions or exponentially truncated
power law strength distributions, hinting at potentially novel avenues for the
exploitation of evolution-derived neuronal networks. In this paper, we present
``4Ward'', a method and Python library capable of generating flexible and
efficient neural networks (NNs) from arbitrarily complex directed acyclic
graphs. 4Ward is inspired by layering algorithms drawn from the graph drawing
discipline to implement efficient forward passes, and provides significant time
gains in computational experiments with various Erd\H{o}s-R\'enyi graphs. 4Ward
not only overcomes the sequential nature of the learning matrix method, by
parallelizing the computation of activations, but also addresses the
scalability issues encountered in the current state-of-the-art and provides the
designer with freedom to customize weight initialization and activation
functions. Our algorithm can aid any investigator seeking to exploit
complex topologies in an NN design framework at the microscale.
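The core idea of relayering a DAG for efficient forward passes can be sketched with a longest-path layering, one of the layering strategies used in graph drawing. This is an illustrative stdlib-only reimplementation of the idea, not the 4Ward library's actual API:

```python
from collections import defaultdict

def longest_path_layers(nodes, edges):
    """Assign each node the length of the longest path reaching it, then
    group nodes by that depth so each layer can be computed in parallel."""
    preds = defaultdict(list)
    for u, v in edges:
        preds[v].append(u)

    layer = {}
    def depth(v):
        if v not in layer:
            layer[v] = 1 + max((depth(u) for u in preds[v]), default=-1)
        return layer[v]

    for v in nodes:
        depth(v)
    layers = defaultdict(list)
    for v, k in layer.items():
        layers[k].append(v)
    return [sorted(layers[k]) for k in sorted(layers)]

# A small DAG with a "skip" edge (0 -> 3) that a plain MLP could not express.
nodes = [0, 1, 2, 3, 4]
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (0, 3), (3, 4)]
print(longest_path_layers(nodes, edges))  # -> [[0], [1, 2], [3], [4]]
```

No two nodes in the same layer are connected by a path, so their activations can be computed together in one batched operation, which is where the parallelism over the sequential learning matrix method comes from.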
Beyond Multilayer Perceptrons: Investigating Complex Topologies in Neural Networks
In this study, we explore the impact of network topology on the approximation
capabilities of artificial neural networks (ANNs), with a particular focus on
complex topologies. We propose a novel methodology for constructing complex
ANNs based on various topologies, including Barab\'asi-Albert,
Erd\H{o}s-R\'enyi, Watts-Strogatz, and multilayer perceptrons (MLPs). The
constructed networks are evaluated on synthetic datasets generated from
manifold learning generators, with varying levels of task difficulty and noise.
Our findings reveal that complex topologies lead to superior performance in
high-difficulty regimes compared to traditional MLPs. This performance
advantage is attributed to the ability of complex networks to exploit the
compositionality of the underlying target function. However, this benefit comes
at the cost of increased forward-pass computation time and reduced robustness
to graph damage. Additionally, we investigate the relationship between various
topological attributes and model performance. Our analysis shows that no single
attribute can account for the observed performance differences, suggesting that
the influence of network topology on approximation capabilities may be more
intricate than a simple correlation with individual topological attributes. Our
study sheds light on the potential of complex topologies for enhancing the
performance of ANNs and provides a foundation for future research exploring the
interplay between multiple topological attributes and their impact on model
performance.
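One simple way to obtain such topologies is to sample a random graph and orient every edge along a fixed node ordering, which guarantees acyclicity before the graph is turned into a network. A minimal stdlib-only sketch for the Erdős-Rényi case, with an arbitrary node count and edge probability:

```python
import random

def erdos_renyi_dag(n, p, seed=0):
    """Sample an Erdos-Renyi graph on n nodes and orient each edge from the
    lower-indexed to the higher-indexed node, so the result is always a DAG."""
    rng = random.Random(seed)
    return [(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < p]

edges = erdos_renyi_dag(10, 0.3)
print(len(edges))
```

Barabási-Albert or Watts-Strogatz generators can be substituted for the sampling step; the index-based orientation trick works unchanged for any undirected graph.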
Comparing indirect encodings by evolutionary attractor analysis in the trait space of modular robots
In evolutionary robotics, the representation of the robot is of primary importance. Often, indirect encodings are used, whereby a complex developmental process grows a body and a brain from a genotype. In this work, we aim to improve the interpretability of robot morphologies and behaviours resulting from indirect encoding. We develop and use a methodology that focuses on the analysis of evolutionary attractors, represented in what we call the trait space: using trait descriptors defined in the literature, we define morphological and behavioural Cartesian planes onto which we project the phenotypes of the final population. In our experiments we show that, using this analysis method, we are able to better discern the effect of encodings that differ only in minor details.
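The trait-space projection can be sketched schematically as follows. The trait descriptors here ("size" and "symmetry") are invented placeholders, not the descriptors used in the paper; the evolutionary attractor is summarized as the centroid of the final population in a morphological plane:

```python
import numpy as np

# Hypothetical trait descriptors for a final population of 30 robot phenotypes.
rng = np.random.default_rng(0)
pop = {"size": rng.uniform(0, 1, 30), "symmetry": rng.uniform(0, 1, 30)}

# Project the population onto a morphological Cartesian plane and summarize
# the evolutionary attractor as the population centroid in trait space.
points = np.column_stack([pop["size"], pop["symmetry"]])
attractor = points.mean(axis=0)
print(attractor.round(2))
```

Comparing the attractors (and the spread around them) obtained under two encodings is what makes small differences between the encodings visible.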
Longitudinal profile of a set of biomarkers in predicting Covid-19 mortality using joint models
In survival analysis, time-varying covariates are endogenous when their measurements are directly related to the event status and incomplete information occurs at random points during the follow-up. Consequently, the time-dependent Cox model leads to biased estimates. Joint models (JMs) allow these associations to be estimated correctly by combining a survival and a longitudinal sub-model by means of a shared parameter (i.e., the random effects of the longitudinal sub-model are inserted in the survival one). This study aims to show the use of JMs to evaluate the association between a set of inflammatory biomarkers and Covid-19 mortality. During the Covid-19 pandemic, physicians at the Istituto Clinico di Città Studi in Milan collected biomarkers (endogenous time-varying covariates) to understand what might be used as prognostic factors for mortality. Furthermore, in the first epidemic outbreak, physicians did not have standard clinical protocols for the management of Covid-19, and measurements of biomarkers were highly incomplete, especially at baseline. Between February and March 2020, a total of 403 Covid-19 patients were admitted. Baseline characteristics included sex and age, whereas biomarker measurements during the hospital stay included log-ferritin, log-lymphocytes, log-neutrophil granulocytes, log-C-reactive protein, glucose and LDH. A Bayesian approach using a Markov chain Monte Carlo algorithm was used to fit the JMs. Independent and non-informative priors were used for the fixed effects (age and sex) and for the shared parameters. Hazard ratios (HRs) from a (biased) time-dependent Cox model and from the joint model for log-ferritin levels were 2.10 (1.67-2.64) and 1.73 (1.38-2.20), respectively. In the multivariable JM, a doubling of biomarker levels resulted in a significant increase in mortality risk for log-neutrophil granulocytes, HR=1.78 (1.16-2.69); for log-C-reactive protein, HR=1.44 (1.13-1.83); and for LDH, HR=1.28 (1.09-1.49).
An increase of 100 mg/dl in glucose resulted in an HR=2.44 (1.28-4.26). Age, however, showed the strongest effect, with mortality risk starting to rise from 60 years of age.
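Because the biomarkers enter the model on a log scale, a hazard ratio reported "for a doubling" maps directly to the underlying regression coefficient. A small sketch of that arithmetic, assuming a natural-log covariate (the abstract does not specify the log base):

```python
import math

def hr_for_doubling(beta):
    """Hazard ratio for a doubling of the biomarker when the model uses
    the natural log of the biomarker as covariate: doubling adds ln(2)."""
    return math.exp(beta * math.log(2.0))

# Recover the log-hazard coefficient implied by the reported joint-model
# HR of 1.73 for log-ferritin, then round-trip it through the formula.
beta_ferritin = math.log(1.73) / math.log(2.0)
print(round(hr_for_doubling(beta_ferritin), 2))  # -> 1.73
```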
Video-based Goniometer Applications for Measuring Knee Joint Angles during Walking in Neurological Patients: A Validity, Reliability and Usability Study
Easy-to-use evaluation of Range of Motion (ROM) during walking is necessary to make decisions during neurological rehabilitation programs and during follow-up visits in clinical and remote settings. This study discussed goniometer applications (DrGoniometer and Angles - Video Goniometer) that measure knee joint ROM during walking through smartphone cameras. The primary aim of the study is to test the inter-rater and intra-rater reliability of the collected measurements, as well as their concurrent validity with an electro-goniometer. The secondary aim is to evaluate the usability of the two mobile applications. A total of 22 patients with Parkinson's disease (18 males, age 72 (8) years), 22 post-stroke patients (17 males, age 61 (13) years), and as many healthy volunteers (8 males, age 45 (5) years) underwent knee joint ROM evaluations during walking. Clinicians and inexperienced examiners used the two mobile applications to calculate the ROM, and then rated their perceived usability through the System Usability Scale (SUS). Intraclass correlation coefficients (ICC) and correlation coefficients (corr) were calculated. Both applications showed good reliability (ICC > 0.69), good validity (corr > 0.61), and acceptable usability (SUS > 68). Smartphone-based video goniometers can be used to assess knee ROM during walking in neurological patients, given their acceptable degree of reliability, validity and usability.
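The basic measurement behind such video goniometers is the angle at the knee between the thigh and shank segments. A minimal stdlib-only sketch from 2-D keypoints (the coordinates are invented; real apps additionally track the markers across video frames):

```python
import math

def joint_angle(hip, knee, ankle):
    """Angle at the knee (degrees) between thigh and shank segments,
    computed from 2-D image coordinates such as video-tracked markers."""
    v1 = (hip[0] - knee[0], hip[1] - knee[1])     # knee -> hip (thigh)
    v2 = (ankle[0] - knee[0], ankle[1] - knee[1]) # knee -> ankle (shank)
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))

# Fully extended leg: hip, knee, and ankle collinear -> 180 degrees.
print(round(joint_angle((0, 2), (0, 1), (0, 0)), 1))  # -> 180.0
```

The ROM over a gait cycle would then be the difference between the maximum and minimum of this angle across frames.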
Investigating Visual Perception Impairments through Serious Games and Eye Tracking to Anticipate Handwriting Difficulties
Dysgraphia is a learning disability that causes handwriting production below expectations. Its diagnosis is delayed until the completion of handwriting development. To allow a preventive training program, abilities not directly related to handwriting should be evaluated, one of which is visual perception. To investigate the role of visual perception in handwriting skills, we gamified standard clinical visual perception tests to be played at three difficulty levels while wearing an eye tracker. Then, we identified children at risk of dysgraphia by means of a handwriting speed test. Five machine learning models were constructed to predict whether a child was at risk, using the CatBoost algorithm with nested cross-validation and combinations of game performance, eye-tracking, and drawing data as predictors. A total of 53 children participated in the study. The machine learning models obtained good results, particularly with game performances as predictors (F1 score: 0.77 train, 0.71 test). The SHAP explainer was used to identify the most impactful features. The game reached an excellent usability score (89.4 +/- 9.6). These results are promising and suggest a new tool for early dysgraphia screening based on visual perception skills.
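The nested cross-validation scheme can be sketched with scikit-learn. This substitutes `GradientBoostingClassifier` for CatBoost and a synthetic 53-sample table for the real game-performance features; the hyperparameter grid is also invented:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score

# Small synthetic stand-in for the 53-child game-performance feature table.
X, y = make_classification(n_samples=53, n_features=6, random_state=0)

# Inner loop tunes hyperparameters; outer loop estimates generalization,
# so no test fold ever influences model selection.
inner = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_grid={"n_estimators": [25, 50], "max_depth": [2, 3]},
    cv=3,
    scoring="f1",
)
outer_scores = cross_val_score(inner, X, y, cv=5, scoring="f1")
print(outer_scores.mean().round(2))
```

The mean of the outer-fold F1 scores plays the role of the reported test F1, while the inner grid search mirrors the model selection done on the training folds.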
Identification and characterization of learning weakness from drawing analysis at the pre-literacy stage
Handwriting learning delays should be addressed early to prevent their exacerbation and long-lasting consequences on children's whole lives. Ideally, proper training should start even before learning how to write. This work presents a novel method to disclose potential handwriting problems from the pre-literacy stage, based on the analysis of drawings instead of written production. Two hundred forty-one kindergartners drew on a tablet, and we computed, from symbol drawings, features known to be distinctive of poor handwriting. We verified that abnormal feature patterns reflected abnormal drawings, and found correspondence with experts' evaluation of the potential risk of developing a learning delay in the graphical sphere. A machine learning model was able to discriminate children at risk with 0.75 sensitivity and 0.76 specificity. Finally, we explained why the algorithm considered children at risk, to inform teachers of the specific weaknesses that need training. Thanks to this system, early intervention to address specific learning delays will finally be possible.