22,851 research outputs found
Learning and comparing functional connectomes across subjects
Functional connectomes capture brain interactions via synchronized
fluctuations in the functional magnetic resonance imaging signal. If measured
during rest, they map the intrinsic functional architecture of the brain. With
task-driven experiments they represent integration mechanisms between
specialized brain areas. Analyzing their variability across subjects and
conditions can reveal markers of brain pathologies and mechanisms underlying
cognition. Methods of estimating functional connectomes from the imaging signal
have undergone rapid development, and the literature is full of diverse
strategies for comparing them. This review aims to clarify the links between
functional-connectivity methods and to lay out the different steps needed to
perform a group study of functional connectomes
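In practice, the most common connectome estimate is the region-by-region correlation matrix of the imaging signal. The following is a minimal NumPy sketch; the random data, its dimensions, and the Fisher z-transform step are illustrative assumptions, not prescriptions from the review:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical data: 200 fMRI time points for 5 brain regions (one subject)
ts = rng.standard_normal((200, 5))

# Estimate the functional connectome as the region-by-region Pearson
# correlation matrix of the imaging signal
connectome = np.corrcoef(ts, rowvar=False)

# Fisher z-transform (zeroing the diagonal first) so edge weights can be
# averaged and compared across subjects in a group study
np.fill_diagonal(connectome, 0.0)
z_connectome = np.arctanh(connectome)
```

In a group study, the z-transformed edge weights of each subject would then be stacked and compared with standard statistical tools.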
Sparse Predictive Structure of Deconvolved Functional Brain Networks
The functional and structural representation of the brain as a complex
network is marked by the fact that comparing noisy, intrinsically
correlated, high-dimensional structures between experimental conditions or
groups rules out typical mass-univariate methods. Furthermore, most network
estimation methods cannot distinguish real from spurious correlations
arising from the convolution of nodes' interactions, which introduces
additional noise into the data. We propose a machine learning pipeline aimed
at identifying multivariate differences between brain networks associated
with different experimental conditions. The pipeline (1) leverages the
deconvolved individual contribution of each edge and (2) maps the task into
a sparse classification problem in order to construct the associated "sparse
deconvolved predictive network", i.e., a graph with the same nodes as those
compared but whose edge weights are defined by their relevance for
out-of-sample predictions in classification.
in classification. We present an application of the proposed method by decoding
the covert attention direction (left or right) based on the single-trial
functional connectivity matrix extracted from high-frequency
magnetoencephalography (MEG) data. Our results demonstrate how network
deconvolution combined with sparse classification methods outperforms
typical approaches for MEG decoding
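Step (1) can be sketched with the closed-form, eigendecomposition-based variant of network deconvolution (in the spirit of Feizi et al.); the toy symmetric matrix standing in for a single-trial MEG connectivity matrix, the 0.9 scaling factor, and the edge vectorization are illustrative assumptions, not the authors' actual pipeline:

```python
import numpy as np

def network_deconvolution(g_obs):
    """Remove indirect (transitive) effects from an observed symmetric
    dependency matrix, closed-form network-deconvolution style."""
    # Scale so all eigenvalues lie in (-1, 1), which makes the series
    # g_obs = g_dir + g_dir^2 + ... converge
    g = g_obs / (np.abs(np.linalg.eigvalsh(g_obs)).max() + 1e-12) * 0.9
    eigvals, eigvecs = np.linalg.eigh(g)
    # Closed form: g = g_dir @ inv(I - g_dir) maps each eigenvalue as
    # lam_dir = lam / (1 + lam)
    lam_dir = eigvals / (1.0 + eigvals)
    return eigvecs @ np.diag(lam_dir) @ eigvecs.T

rng = np.random.default_rng(1)
# Hypothetical single-trial connectivity matrix: symmetric, 6 nodes
a = rng.random((6, 6))
g_obs = (a + a.T) / 2
np.fill_diagonal(g_obs, 0.0)

g_dir = network_deconvolution(g_obs)

# Edge features for step (2): the upper triangle of the deconvolved
# matrix, one feature vector per trial
iu = np.triu_indices(6, k=1)
edge_features = g_dir[iu]
```

The per-trial `edge_features` vectors would then feed a sparse (e.g., L1-penalized) classifier, whose nonzero coefficients define the edge weights of the sparse deconvolved predictive network.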
Neuroprediction and A.I. in Forensic Psychiatry and Criminal Justice: A Neurolaw Perspective
Advances in the use of neuroimaging in combination with A.I., and specifically the use of machine learning techniques, have led to the development of brain-reading technologies which, in the near future, could have many applications, such as lie detection, neuromarketing or brain-computer interfaces. Some of these could, in principle, also be used in forensic psychiatry. The application of these methods in forensic psychiatry could, for instance, help to increase the accuracy of risk assessment and to identify possible interventions. This technique could be referred to as ‘A.I. neuroprediction,’ and involves identifying potential neurocognitive markers for the prediction of recidivism. However, the future implications of this technique and the role of neuroscience and A.I. in violence risk assessment remain to be established. In this paper, we review and analyze the literature concerning the use of brain-reading A.I. for neuroprediction of violence and rearrest, to identify possibilities and challenges in the future use of these techniques in the fields of forensic psychiatry and criminal justice, considering legal implications and ethical issues. The analysis suggests that additional research is required on A.I. neuroprediction techniques, and there is still a great need to understand how they can be implemented in risk assessment in the field of forensic psychiatry. Despite the alluring potential of A.I. neuroprediction, we argue that its use in criminal justice and forensic psychiatry should be subjected to thorough harms/benefits analyses, not only once these technologies are fully available, but also while they are being researched and developed
A new tool for the evaluation of the rehabilitation outcomes in older persons: a machine learning model to predict functional status 1 year ahead
Purpose To date, the assessment of disability in older people is obtained using a Comprehensive Geriatric Assessment (CGA). However, it is often difficult to understand which areas of the CGA are most predictive of disability. The aim of this study is to evaluate the possibility of predicting, 1 year ahead, the disability level of a patient using machine learning models.
Methods Community-dwelling older people were enrolled in this study. The CGA was administered at baseline and at 1-year follow-up. After collecting the input/independent variables (i.e., age, gender, schooling, body mass index, smoking status, polypharmacy, functional status, cognitive performance, depression, nutritional status), we trained two distinct Support Vector Machine (SVM) models to predict functional status 1 year ahead. To validate the choice of the model, the results achieved with the SVMs were compared with the output of simple linear regression models.
Results 218 patients (mean age = 78.01; SD = 7.85; male = 39%) were recruited. The combination of the two SVMs achieves a higher prediction accuracy (exceeding 80% of instances correctly classified, vs 67% of instances correctly classified by the combination of the two linear regression models). Furthermore, the SVMs are able to classify all three categories (self-sufficiency, disability risk, and disability), while the linear regression models separate the population into only two groups (self-sufficiency and disability) without identifying the intermediate category (disability risk), which turns out to be the most critical one.
Conclusions The development of such a model can contribute to the early detection of patients at risk of losing self-sufficiency
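The classification setup described above can be sketched with scikit-learn; the feature names, class labels, and synthetic data below are illustrative stand-ins, not the study's actual CGA variables or results:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(42)

# Hypothetical CGA-style features (illustrative names, not the study's
# actual variables): age, BMI, baseline functional score, cognition, mood
n = 120
X = rng.standard_normal((n, 5))

# Synthetic 3-class outcome driven mostly by the baseline functional score:
# 0 = self-sufficiency, 1 = disability risk, 2 = disability
y = np.digitize(X[:, 2] + 0.3 * rng.standard_normal(n), [-0.5, 0.5])

# SVC handles the three classes via one-vs-one voting by default
clf = SVC(kernel="rbf", C=1.0)
clf.fit(X[:80], y[:80])

# Held-out accuracy plays the role of "instances correctly classified"
acc = clf.score(X[80:], y[80:])
```

A linear baseline (as in the study's comparison) would replace `SVC` with a linear model and round or threshold its continuous output into classes.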
Topological Performance Measures as Surrogates for Physical Flow Models for Risk and Vulnerability Analysis for Electric Power Systems
Critical infrastructure systems must be both robust and resilient in order to
ensure the functioning of society. To improve the performance of such systems,
we often use risk and vulnerability analysis to find and address system
weaknesses. A critical component of such analyses is the ability to accurately
determine the negative consequences of various types of failures in the system.
Numerous mathematical and simulation models exist which can be used to this
end. However, there are relatively few studies comparing the implications of
using different modeling approaches in the context of comprehensive risk
analysis of critical infrastructures. Thus in this paper, we suggest a
classification of these models, which span from simple topologically-oriented
models to advanced physical flow-based models. Here, we focus on electric power
systems and present a study aimed at understanding the tradeoffs between
simplicity and fidelity in models used in the context of risk analysis.
Specifically, the purpose of this paper is to compare performance measures
obtained with a spectrum of approaches typically used for risk and
vulnerability analysis of electric power systems, and to evaluate whether
simpler topological measures can be combined using statistical methods to
serve as a surrogate for physical flow models. The results of our work
provide guidance on appropriate models, or combinations of models, to use
when analyzing large-scale critical infrastructure systems, where simulation
times quickly become prohibitive with more advanced models, severely
limiting the extent of the analyses that can be performed
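The surrogate idea can be illustrated with a toy N-1 contingency analysis: compute a topological performance measure for each single-line outage and regress a flow-model consequence on it. Everything below (the 6-bus network, the load-shed numbers, the linear fit) is an illustrative assumption, not data or a model from the paper:

```python
import numpy as np

def largest_component_fraction(n_nodes, edges):
    """Fraction of nodes in the largest connected component: a simple
    topological performance measure for a damaged network."""
    adj = {i: [] for i in range(n_nodes)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, best = set(), 0
    for s in range(n_nodes):
        if s in seen:
            continue
        stack, comp = [s], 0
        seen.add(s)
        while stack:                     # iterative DFS over one component
            u = stack.pop()
            comp += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        best = max(best, comp)
    return best / n_nodes

# Hypothetical 6-bus network (illustrative topology)
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (1, 4)]

# N-1 analysis: topological consequence of each single-line outage
topo = [largest_component_fraction(6, [e for e in edges if e != out])
        for out in edges]

# Stand-in for load shed from a physical flow model (made-up numbers, not
# real power-flow results); fit a linear surrogate topo -> flow consequence
flow = np.array([0.10, 0.12, 0.09, 0.11, 0.10, 0.13])
A = np.vstack([topo, np.ones(len(topo))]).T
coef, *_ = np.linalg.lstsq(A, flow, rcond=None)
```

In the statistical-surrogate setting, several such topological measures would be combined as regressors, and the fit's predictive accuracy on held-out contingencies indicates how well topology substitutes for the flow model.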