MEG Decoding Across Subjects
Brain decoding is a data analysis paradigm for neuroimaging experiments that
is based on predicting the stimulus presented to the subject from the
concurrent brain activity. To make inferences at the group level, a
straightforward but sometimes unsuccessful approach is to train a classifier on
the trials of a group of subjects and then to test it on unseen trials from new
subjects. The chief difficulty lies in the structural and functional
variability across subjects. We call this approach "decoding across
subjects". In this work, we address the problem of decoding across subjects for
magnetoencephalographic (MEG) experiments and we provide the following
contributions: first, we formally describe the problem and show that it belongs
to a machine learning sub-field called transductive transfer learning (TTL).
Second, we propose to use a simple TTL technique that accounts for the
differences between train data and test data. Third, we propose the use of
ensemble learning, and specifically of stacked generalization, to address the
variability across subjects within train data, with the aim of producing more
stable classifiers. On a face vs. scramble task MEG dataset of 16 subjects, we
compare the standard approach of not modelling the differences across subjects,
to the proposed one of combining TTL and ensemble learning. We show that the
proposed approach is consistently more accurate than the standard one.
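The combination of per-subject base classifiers with stacked generalization can be sketched on synthetic data. Everything below is an illustrative assumption rather than the paper's pipeline: subjects are simulated with a shared latent signal seen through subject-specific sensor mixings, and plain ridge-regularized linear classifiers stand in for the actual models.

```python
import numpy as np

rng = np.random.default_rng(0)
n_feat, n_trials = 20, 300
w = rng.normal(size=n_feat)  # shared latent discriminative direction

def make_subject():
    # each simulated subject observes the latent sources through its own mixing
    A = np.eye(n_feat) + 0.2 * rng.normal(size=(n_feat, n_feat))
    Z = rng.normal(size=(n_trials, n_feat))            # latent sources
    y = np.sign(Z @ w + 0.3 * rng.normal(size=n_trials))
    return Z @ A.T, y                                  # subject-specific sensor space

def fit_linear(X, y, lam=1.0):
    # ridge-regularized least squares: b = (X'X + lam*I)^-1 X'y
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

train = [make_subject() for _ in range(4)]   # level-0: one classifier per subject
stack_X, stack_y = make_subject()            # held-out subject for the meta-learner
test_X, test_y = make_subject()              # unseen "new subject"

base = [fit_linear(X, y) for X, y in train]

def meta_feats(X):
    # level-1 features: each base classifier's score on every trial
    return np.column_stack([X @ b for b in base])

w_meta = fit_linear(meta_feats(stack_X), stack_y)
pred = np.sign(meta_feats(test_X) @ w_meta)
acc = (pred == test_y).mean()
print(f"cross-subject accuracy: {acc:.2f}")
```

Stacking the base classifiers this way lets the meta-learner down-weight subjects whose decision rules transfer poorly, which is the stability argument made above.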
Stacked Denoising Autoencoders and Transfer Learning for Immunogold Particles Detection and Recognition
In this paper we present a system for the detection of immunogold particles
and a Transfer Learning (TL) framework for the recognition of these immunogold
particles. Immunogold particles are part of a high-magnification method for the
selective localization of biological molecules at the subcellular level only
visible through Electron Microscopy. The number of immunogold particles in the
cell walls allows the assessment of differences in their composition,
providing a tool to analyse the quality of different plants. Quantifying them,
however, requires laborious manual labeling (annotation) of images containing
hundreds of particles. The system proposed in this paper can significantly
reduce the burden of this manual task.
For particle detection we use a Laplacian of Gaussian (LoG) filter coupled with a Stacked Denoising Autoencoder (SDA). In order to
improve the recognition, we also study the applicability of TL settings for
immunogold recognition. TL reuses the learning model of a source problem on
other datasets (target problems) containing particles of different sizes. The
proposed system was developed to solve a particular problem on maize cells,
namely to determine the composition of cell wall ingrowths in endosperm
transfer cells. This novel dataset as well as the code for reproducing our
experiments is made publicly available.
We determined that the LoG detector alone attained an F-measure above 84\%.
Developing immunogold recognition with TL also provided superior performance
compared with the baseline models, improving accuracy rates by 10\%.
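The LoG detection step can be sketched in plain NumPy: a sign-flipped Laplacian-of-Gaussian kernel turns blob-like particles into positive response peaks, and local maxima above a threshold become detections. This is a toy stand-in for the paper's detector; the kernel width, threshold, and synthetic test image are all illustrative assumptions.

```python
import numpy as np

def log_kernel(sigma):
    # sign-flipped discrete Laplacian-of-Gaussian: bright blobs -> positive peaks
    size = int(6 * sigma) | 1                    # odd kernel width ~ 6 sigma
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    g = np.exp(-r2 / (2 * sigma ** 2))
    log = (r2 - 2 * sigma ** 2) / sigma ** 4 * g
    return -(log - log.mean())                   # zero-mean so flat regions give 0

def detect_blobs(image, sigma, rel_thresh=0.5):
    """Return (row, col) local maxima of the LoG response above
    rel_thresh * max response."""
    k = log_kernel(sigma)
    c = k.shape[0] // 2
    # circular 'same' convolution via FFT (fine away from image borders)
    resp = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(k, image.shape)))
    resp = np.roll(resp, (-c, -c), axis=(0, 1))
    thresh = rel_thresh * resp.max()
    peaks = []
    for i in range(1, image.shape[0] - 1):
        for j in range(1, image.shape[1] - 1):
            if resp[i, j] > thresh and resp[i, j] == resp[i-1:i+2, j-1:j+2].max():
                peaks.append((i, j))
    return peaks

# synthetic "micrograph": two Gaussian particles on a flat background
yy, xx = np.mgrid[0:64, 0:64]
img = (np.exp(-((xx - 16) ** 2 + (yy - 16) ** 2) / (2 * 3 ** 2))
       + np.exp(-((xx - 48) ** 2 + (yy - 40) ** 2) / (2 * 3 ** 2)))
found = detect_blobs(img, sigma=3)
print(found)
```

In the paper's setting the SDA then classifies candidate windows around such peaks; here only the LoG filtering stage is shown.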
AffinityNet: semi-supervised few-shot learning for disease type prediction
While deep learning has achieved great success in computer vision and many
other fields, currently it does not work very well on patient genomic data with
the "big p, small N" problem (i.e., a relatively small number of samples with
high-dimensional features). In order to make deep learning work with a small
amount of training data, we have to design new models that facilitate few-shot
learning. Here we present the Affinity Network Model (AffinityNet), a
data-efficient deep learning model that can learn from a limited number of training
examples and generalize well. The backbone of the AffinityNet model consists of
stacked k-Nearest-Neighbor (kNN) attention pooling layers. The kNN attention
pooling layer is a generalization of the Graph Attention Model (GAM), and can
be applied to not only graphs but also any set of objects regardless of whether
a graph is given or not. As a new deep learning module, kNN attention pooling
layers can be plugged into any neural network model just like convolutional
layers. As a simple special case of the kNN attention pooling layer, the
feature attention layer can directly select important features that are useful for
classification tasks. Experiments on both synthetic data and cancer genomic
data from TCGA projects show that our AffinityNet model has better
generalization power than conventional neural network models when little
training data is available. The code is freely available at
https://github.com/BeautyOfWeb/AffinityNet .
Comment: 14 pages, 6 figures
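The kNN attention pooling idea can be sketched in a few lines of NumPy: each object attends only to its k most similar objects, and its new representation is the attention-weighted average of their projected features. The single linear projection and cosine-similarity attention below are illustrative assumptions; the paper's exact parameterization may differ.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def knn_attention_pool(X, W, k):
    """One kNN attention pooling layer over a set of objects.
    X: (n, d) input features; W: (d, d_out) projection; k: neighbors kept."""
    H = X @ W
    Hn = H / np.linalg.norm(H, axis=1, keepdims=True)
    S = Hn @ Hn.T                                # pairwise cosine similarity
    rows = np.arange(S.shape[0])[:, None]
    idx = np.argsort(-S, axis=1)[:, :k]          # k most similar (incl. self)
    masked = np.full_like(S, -np.inf)            # non-neighbors get zero attention
    masked[rows, idx] = S[rows, idx]
    A = softmax(masked, axis=1)                  # attention weights, rows sum to 1
    return A @ H, A

rng = np.random.default_rng(0)
out, A = knn_attention_pool(rng.normal(size=(10, 5)), rng.normal(size=(5, 4)), k=3)
print(out.shape)  # (10, 4)
```

Because the layer needs no explicit graph, only pairwise similarities, it applies to any set of objects, which is the generalization over the Graph Attention Model claimed above.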