Sparse Predictive Structure of Deconvolved Functional Brain Networks
The functional and structural representation of the brain as a complex
network is marked by the fact that comparing noisy, intrinsically
correlated, high-dimensional structures between experimental conditions or
groups defies typical mass-univariate methods. Furthermore, most network
estimation methods cannot distinguish real from spurious correlations
arising from the convolution due to nodes' interactions, which introduces
additional noise into the data. We propose a machine learning pipeline aimed at
identifying multivariate differences between brain networks associated with
different experimental conditions. The pipeline (1) leverages the deconvolved
individual contribution of each edge and (2) maps the task onto a sparse
classification problem in order to construct the associated "sparse deconvolved
predictive network", i.e., a graph with the same nodes as those compared but
whose edge weights are defined by their relevance for out-of-sample predictions
in classification. We present an application of the proposed method by decoding
the covert attention direction (left or right) from the single-trial
functional connectivity matrix extracted from high-frequency
magnetoencephalography (MEG) data. Our results demonstrate how network
deconvolution combined with sparse classification methods outperforms typical
approaches for MEG decoding.
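A minimal sketch of the sparse-classification step described in this abstract, using L1-penalised logistic regression on vectorised connectivity edges. The data are synthetic stand-ins (the deconvolution step and the MEG features are not reproduced); the surviving nonzero coefficients play the role of the "sparse predictive network" edge weights.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_nodes = 120, 20
n_edges = n_nodes * (n_nodes - 1) // 2            # upper-triangle edge count

# Synthetic single-trial edge weights; a handful of edges carry the signal.
X = rng.normal(size=(n_trials, n_edges))
y = rng.integers(0, 2, size=n_trials)             # covert attention: left/right
informative = rng.choice(n_edges, size=5, replace=False)
X[np.ix_(y == 1, informative)] += 1.0             # condition-dependent edges

# L1 penalty drives most edge coefficients to exactly zero.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
acc = cross_val_score(clf, X, y, cv=5).mean()     # out-of-sample relevance

# Nonzero coefficients define the edges of the sparse predictive network.
clf.fit(X, y)
nonzero_edges = np.flatnonzero(clf.coef_[0])
print(round(acc, 2), len(nonzero_edges))
```

The point of the sparsity is interpretability: only the edges that survive the penalty receive a weight in the resulting graph.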
Identification of functionally related enzymes by learning-to-rank methods
Enzyme sequences and structures are routinely used in the biological sciences
as queries to search for functionally related enzymes in online databases. To
this end, one usually departs from some notion of similarity, comparing two
enzymes by looking for correspondences in their sequences, structures or
surfaces. For a given query, the search operation results in a ranking of the
enzymes in the database, from very similar to dissimilar enzymes, while
information about the biological function of annotated database enzymes is
ignored.
In this work we show that rankings of that kind can be substantially improved
by applying kernel-based learning algorithms. This approach enables the
detection of statistical dependencies between similarities of the active cleft
and the biological function of annotated enzymes. This is in contrast to
search-based approaches, which do not take annotated training data into
account. Similarity measures based on the active cleft are known to outperform
sequence-based or structure-based measures under certain conditions. We
consider the Enzyme Commission (EC) classification hierarchy for obtaining
annotated enzymes during the training phase. The results of a set of sizeable
experiments indicate a consistent and significant improvement for a set of
similarity measures that exploit information about small cavities in the
surface of enzymes.
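A hedged sketch of the pairwise learning-to-rank idea: a RankSVM-style model trained on difference vectors so that enzymes sharing the query's function score above the rest. The active-cleft kernel and EC annotations of the paper are not reproduced; generic feature vectors and a toy relevance label stand in for them.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n, d = 60, 8
X = rng.normal(size=(n, d))                  # stand-in enzyme descriptors
w_true = rng.normal(size=d)
relevance = (X @ w_true > 0).astype(int)     # toy "same EC class as query" label

# Pairwise transform: for each (relevant, irrelevant) pair, learn from the
# difference vector with label +1 (and the reverse with label -1).
pos, neg = X[relevance == 1], X[relevance == 0]
diffs, labels = [], []
for p in pos:
    for q in neg:
        diffs.append(p - q); labels.append(1)
        diffs.append(q - p); labels.append(-1)
diffs, labels = np.asarray(diffs), np.asarray(labels)

rank_svm = LinearSVC(C=1.0).fit(diffs, labels)
scores = X @ rank_svm.coef_[0]               # ranking score per enzyme
order = np.argsort(-scores)                  # database ranking for the query
top10_precision = relevance[order[:10]].mean()
print(top10_precision)
```

Unlike a pure search-based ranking, the learned weight vector incorporates the annotated training labels, which is the contrast the abstract draws.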
Home detection of freezing of gait using Support Vector Machines through a single waist-worn triaxial accelerometer
Among Parkinson’s disease (PD) symptoms, freezing of gait (FoG) is one of the most debilitating. To assess FoG, current clinical practice mostly employs repeated evaluations over weeks and months based on questionnaires, which may not accurately map the severity of this symptom. The use of a non-invasive system to monitor the activities of daily living (ADL) and the PD symptoms experienced by patients throughout the day could provide a more accurate and objective evaluation of FoG, in order to better understand the evolution of the disease and allow for more informed decision-making when adjusting the patient’s treatment plan. This paper presents a new algorithm to detect FoG with a machine learning approach based on Support Vector Machines (SVM) and a single tri-axial accelerometer worn at the waist. The method is evaluated on acceleration signals gathered in an outpatient setting from 21 PD patients at their homes, under two different conditions: first, a generic model is tested using a leave-one-out approach and, second, a personalised model that also uses part of the dataset from each patient. Results show a significant improvement in the accuracy of the personalised model compared to the generic model, with a 7.2% enhancement in the geometric mean (GM) of specificity and sensitivity. Furthermore, the SVM approach adopted has been compared to the most comprehensive FoG detection method currently in use (referred to as MBFA in this paper). Our novel generic method provides an enhancement of 11.2% in the GM compared to the MBFA generic model and, in the case of the personalised model, a 10% improvement with respect to the MBFA personalised model. Thus, our results show that a machine learning approach can be used to monitor FoG during the daily life of PD patients and, furthermore, that personalised models for FoG detection can be used to improve monitoring accuracy.
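A minimal sketch of the evaluation scheme described above: an SVM detector scored with the geometric mean (GM) of sensitivity and specificity under leave-one-subject-out cross-validation (the "generic" model). The features and labels are synthetic stand-ins for the waist-accelerometer windows used in the paper.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n_subjects, windows_per_subject, n_feat = 6, 40, 10

subj = np.repeat(np.arange(n_subjects), windows_per_subject)
y = rng.integers(0, 2, size=subj.size)            # 1 = freezing-of-gait window
X = rng.normal(size=(subj.size, n_feat)) + y[:, None] * 1.2

gms = []
for s in range(n_subjects):                       # leave one subject out
    train, test = subj != s, subj == s
    pred = SVC(kernel="rbf", C=1.0).fit(X[train], y[train]).predict(X[test])
    tp = np.sum((pred == 1) & (y[test] == 1)); fn = np.sum((pred == 0) & (y[test] == 1))
    tn = np.sum((pred == 0) & (y[test] == 0)); fp = np.sum((pred == 1) & (y[test] == 0))
    sens, spec = tp / (tp + fn), tn / (tn + fp)
    gms.append(np.sqrt(sens * spec))              # GM of sensitivity and specificity

mean_gm = float(np.mean(gms))
print(round(mean_gm, 2))
```

The personalised variant of the paper would additionally move a portion of the held-out subject's windows into the training split.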
Inside the brain of an elite athlete: The neural processes that support high achievement in sports
Events like the World Championships in athletics and the Olympic Games raise the public profile of competitive sports. They may also leave us wondering what sets the competitors in these events apart from those of us who simply watch. Here we attempt to link neural and cognitive processes that have been found to be important for elite performance with computational and physiological theories inspired by much simpler laboratory tasks. In this way we hope to inspire neuroscientists to consider how their basic research might help to explain sporting skill at the highest levels of performance.
AI/ML Algorithms and Applications in VLSI Design and Technology
An evident challenge ahead for the integrated circuit (IC) industry in the
nanometer regime is the investigation and development of methods that can
reduce the design complexity ensuing from growing process variations and
curtail the turnaround time of chip manufacturing. Conventional methodologies
employed for such tasks are largely manual; thus, time-consuming and
resource-intensive. In contrast, the unique learning strategies of artificial
intelligence (AI) provide numerous exciting automated approaches for handling
complex and data-intensive tasks in very-large-scale integration (VLSI) design
and testing. Employing AI and machine learning (ML) algorithms in VLSI design
and manufacturing reduces the time and effort for understanding and processing
the data within and across different abstraction levels via automated learning
algorithms. It, in turn, improves the IC yield and reduces the manufacturing
turnaround time. This paper thoroughly reviews the AI/ML automated approaches
introduced in the past towards VLSI design and manufacturing. Moreover, we
discuss the scope of AI/ML applications in the future at various abstraction
levels to revolutionize the field of VLSI design, aiming for high-speed, highly
intelligent, and efficient implementations.
Classification of Spatio-Temporal fMRI Data in the Spiking Neural Network
Deep learning that employs Spiking Neural Networks (SNN) is currently one of the main techniques in computational intelligence for discovering knowledge, and it has been applied in many areas, including health, engineering, finance, the environment, and others. This paper addresses a classification problem based on a functional Magnetic Resonance Imaging (fMRI) brain data experiment involving a subject who reads a sentence or looks at a picture. In the experiment, the Signal-to-Noise Ratio (SNR) is used to select the most relevant features (voxels) before they are propagated through an SNN-based learning architecture. The spatiotemporal relationships within the Spatio-Temporal Brain Data (STBD) are then learned and classified accordingly. All brain regions are taken from the data labelled star plus-04847-v7.mat. The overall results of this experiment show that the SNR method helps to extract the most relevant features from the data, producing higher accuracy for reading a sentence than for looking at a picture.
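A hedged sketch of the SNR-based voxel selection step described above: rank voxels by a signal-to-noise ratio between the two conditions and keep the top k. The data are synthetic; the StarPlus fMRI file itself is not loaded, and this particular SNR definition (mean difference over summed standard deviations) is one common choice, assumed here rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_voxels, k = 80, 500, 20
y = rng.integers(0, 2, size=n_trials)             # 0 = picture, 1 = sentence
X = rng.normal(size=(n_trials, n_voxels))
informative = rng.choice(n_voxels, size=k, replace=False)
X[np.ix_(y == 1, informative)] += 1.5             # condition-dependent voxels

# SNR per voxel: difference of class means over the summed class spreads.
mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
sd0, sd1 = X[y == 0].std(axis=0), X[y == 1].std(axis=0)
snr = np.abs(mu0 - mu1) / (sd0 + sd1)

selected = np.argsort(-snr)[:k]                   # top-k most relevant voxels
recovered = len(set(selected) & set(informative))
print(recovered, "of", k)
```

The selected voxels would then be fed into the SNN-based learning architecture in place of the full voxel set.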
An Adaptive Design Methodology for Reduction of Product Development Risk
Embedded systems' interaction with the environment inherently complicates
the understanding of requirements and their correct implementation, and
product uncertainty is highest during the early stages of development. Design
verification is an essential step in the development of any system, especially
an embedded system. This paper introduces a novel adaptive design methodology,
which incorporates step-wise prototyping and verification. With each adaptive
step, the product-realization level is enhanced while the level of
product uncertainty decreases, thereby reducing overall costs. The backbone of this
framework is the development of a Domain Specific Operational (DOP) Model and
the associated Verification Instrumentation for Test and Evaluation, developed
based on the DOP model. Together they generate functionally valid test sequences
for carrying out prototype evaluation. The application of this method is sketched
with the help of a case study, a 'Multimode Detection Subsystem'. Design
methodologies can be compared by defining and computing a generic performance
criterion such as Average design-cycle Risk. For the case study, computing the
Average design-cycle Risk shows that the adaptive method reduces the
product development risk for a small increase in the total design-cycle time.