How to use the Kohonen algorithm to simultaneously analyse individuals in a survey
The Kohonen algorithm (SOM; Kohonen, 1984, 1995) is a very powerful tool for
data analysis. It was originally designed to model organized connections
between some biological neural networks. It was also immediately recognized as
a very good algorithm for vector quantization that, at the same time, yields a
pertinent classification, with nice properties for visualization. If the
individuals are described by quantitative variables (ratios, frequencies,
measurements, amounts, etc.), the straightforward application of the original
algorithm builds code vectors and associates with each of them the class of
all the individuals that are more similar to this code vector than to the
others. But when individuals are described by categorical (qualitative)
variables with a finite number of modalities (as in a survey), a specific
algorithm must be defined. In this paper, we present a new algorithm inspired
by the SOM algorithm, which provides a simultaneous classification of the
individuals and of their modalities.
Comment: Special issue ESANN 0
MildInt: Deep Learning-Based Multimodal Longitudinal Data Integration Framework
As large amounts of heterogeneous biomedical data become available, numerous methods for integrating such datasets have been developed to extract complementary knowledge from multiple domains. Recently, deep learning approaches have shown promising results in a variety of research areas. However, applying deep learning requires expertise in constructing a deep architecture that can take multimodal longitudinal data. Thus, in this paper, a deep learning-based Python package for data integration is developed. The Python package, the deep learning-based multimodal longitudinal data integration framework (MildInt), provides a preconstructed deep learning architecture for a classification task. MildInt comprises two learning phases: learning a feature representation from each modality of data, and training a classifier for the final decision. Adopting a deep architecture in the first phase leads to more task-relevant feature representations than a linear model. In the second phase, a linear regression classifier is used for detecting and investigating biomarkers in the multimodal data. Thus, by combining the linear model and the deep learning model, higher accuracy and better interpretability can be achieved. We validated the performance of our package using simulated and real data. For the real data, as a pilot study, we used clinical and multimodal neuroimaging datasets in Alzheimer's disease to predict disease progression. MildInt is capable of integrating multiple forms of numerical data, including time-series and non-time-series data, for extracting complementary features from the multimodal dataset.
ModDrop: adaptive multi-modal gesture recognition
We present a method for gesture detection and localisation based on
multi-scale and multi-modal deep learning. Each visual modality captures
spatial information at a particular spatial scale (such as the motion of the
upper body or a hand), and the whole system operates at three temporal scales.
Key to our technique is a training strategy that exploits: i) careful
initialization of the individual modalities; and ii) gradual fusion involving
random dropping of separate channels (dubbed ModDrop) for learning
cross-modality correlations while preserving the uniqueness of each
modality-specific representation. We present experiments on the ChaLearn 2014
Looking at People Challenge gesture recognition track, in which we placed
first out of 17 teams. Fusing multiple modalities at several spatial and
temporal scales leads to a significant increase in recognition rates, allowing
the model to compensate for errors of the individual classifiers as well as
for noise in the separate channels. Furthermore, the proposed ModDrop training
technique ensures robustness of the classifier to missing signals in one or
several channels, so that it produces meaningful predictions from any number
of available modalities. In addition, we demonstrate the applicability of the
proposed fusion scheme to modalities of arbitrary nature by experiments on the
same dataset augmented with audio.
Comment: 14 pages, 7 figures
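The channel-dropping idea at the heart of ModDrop can be sketched in a few lines. This is only the dropping step, applied per training example during fusion; the per-modality pretraining and the full multi-scale network from the paper are omitted, and the drop probability is an illustrative choice:

```python
import numpy as np

def moddrop_batch(modalities, p_drop=0.3, rng=None):
    """ModDrop-style channel dropping (sketch): during fusion training, each
    modality's entire feature block is zeroed for a given example with
    probability p_drop, forcing the fused classifier to remain predictive
    when one or several input signals are missing.

    modalities: list of arrays, each of shape (n_examples, n_features_i).
    Returns the fused (concatenated) features with whole channels dropped.
    """
    rng = rng or np.random.default_rng()
    n = len(modalities[0])
    dropped = []
    for X in modalities:
        keep = (rng.random(n) > p_drop).astype(float)  # keep with prob 1 - p_drop
        dropped.append(X * keep[:, None])              # zero the whole channel
    return np.hstack(dropped)
```

Training the fusion layers on such batches exposes the classifier to every subset of available modalities, which is what lets the model still emit meaningful predictions when a channel is absent at test time.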