Spatio-Temporal Facial Expression Recognition Using Convolutional Neural Networks and Conditional Random Fields
Automated Facial Expression Recognition (FER) has been a challenging task for
decades. Many of the existing works use hand-crafted features such as LBP, HOG,
LPQ, and Histogram of Optical Flow (HOF) combined with classifiers such as
Support Vector Machines for expression recognition. These methods often require
rigorous hyperparameter tuning to achieve good results. Recently, Deep Neural
Networks (DNNs) have been shown to outperform traditional methods in visual
object recognition. In this paper, we propose a two-part network consisting of a
DNN-based architecture followed by a Conditional Random Field (CRF) module for
facial expression recognition in videos. The first part captures the spatial
relation within facial images using convolutional layers followed by three
Inception-ResNet modules and two fully-connected layers. To capture the
temporal relation between the image frames, we use linear chain CRF in the
second part of our network. We evaluate our proposed network on three publicly
available databases, viz. CK+, MMI, and FERA. Experiments are performed in
subject-independent and cross-database manners. Our experimental results show
that cascading the deep network architecture with the CRF module considerably
improves the recognition of facial expressions in videos; in particular, it
outperforms the state-of-the-art methods in the cross-database experiments and
yields comparable results in the subject-independent experiments.
Comment: To appear in 12th IEEE Conference on Automatic Face and Gesture Recognition Workshop
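The temporal stage described above can be illustrated with a small sketch: given per-frame expression scores (standing in for the CNN outputs) and pairwise transition scores, a linear-chain CRF picks the most likely label sequence by Viterbi decoding. This is a minimal illustration under assumed inputs, not the paper's implementation; the label set and all scores are hypothetical.

```python
def viterbi(emissions, transitions):
    """Most likely label sequence for a linear-chain CRF.

    emissions: list of dicts, one per frame, mapping label -> unary score
               (e.g. per-frame CNN logits)
    transitions: dict mapping (prev_label, label) -> pairwise score
    """
    labels = list(emissions[0])
    # best cumulative score per label at frame 0
    score = {y: emissions[0][y] for y in labels}
    back = []  # backpointers, one dict per later frame
    for em in emissions[1:]:
        prev, score, ptr = score, {}, {}
        for y in labels:
            best_prev = max(prev, key=lambda p: prev[p] + transitions[(p, y)])
            score[y] = prev[best_prev] + transitions[(best_prev, y)] + em[y]
            ptr[y] = best_prev
        back.append(ptr)
    # trace back from the best final label
    y = max(score, key=score.get)
    path = [y]
    for ptr in reversed(back):
        y = ptr[y]
        path.append(y)
    return path[::-1]
```

With transition scores that reward label persistence, an isolated noisy frame is smoothed toward its neighbours, which is the effect the CRF module contributes on top of per-frame classification.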
Adversarial Learning on Incomplete and Imbalanced Medical Data for Robust Survival Prediction of Liver Transplant Patients
The scarcity of liver transplants necessitates prioritizing patients based on their health condition to minimize deaths on the waiting list. Recently, machine learning methods have gained popularity for automating liver transplant allocation systems, enabling prompt and suitable selection of recipients. Nevertheless, raw medical data often contain complexities such as missing values and class imbalance that reduce the reliability of the constructed model. This paper aims at eliminating these challenges to ensure the reliability of the decision-making process. To this aim, we first propose a novel deep learning method to simultaneously eliminate these challenges and predict the patients' survival chance. Secondly, a hybrid framework is designed that contains three main modules for missing data imputation, class imbalance learning, and classification, each of which employs multiple advanced techniques for the given task. Furthermore, these two approaches are compared and evaluated using a real clinical case study. The experimental results indicate the robust and superior performance of the proposed deep learning method in terms of F-measure and area under the receiver operating characteristic curve (AUC).
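The two data problems the abstract names, missing values and class imbalance, are commonly handled by imputation and resampling before classification. Below is a minimal sketch of such baselines (mean imputation and random minority oversampling); it is not the adversarial deep learning method or the hybrid framework proposed in the paper, and all data are hypothetical.

```python
import random

def mean_impute(rows):
    """Replace None entries with the column mean (a simple baseline imputer)."""
    cols = list(zip(*rows))
    means = [sum(v for v in c if v is not None) / sum(v is not None for v in c)
             for c in cols]
    return [[m if v is None else v for v, m in zip(r, means)] for r in rows]

def oversample(rows, labels, seed=0):
    """Duplicate minority-class rows at random until classes are balanced."""
    rng = random.Random(seed)
    by_cls = {}
    for r, y in zip(rows, labels):
        by_cls.setdefault(y, []).append(r)
    n = max(len(v) for v in by_cls.values())
    out_rows, out_labels = [], []
    for y, rs in by_cls.items():
        extra = [rng.choice(rs) for _ in range(n - len(rs))]
        for r in rs + extra:
            out_rows.append(r)
            out_labels.append(y)
    return out_rows, out_labels
```

A classifier trained on the imputed, rebalanced data then serves as the third module of such a pipeline; the paper's contribution is doing all three steps jointly rather than sequentially.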
Modeling Lyman-α spectra of the MUSE-Wide survey
We compare Lyman-α (Lyα) spectra of the "MUSE-Wide survey" (Herenz et al. 2017) to a suite of radiative transfer simulations consisting of a central luminous source within a concentric, moving shell of neutral gas and dust. This six-parameter shell model has been used in numerous previous studies, however, on significantly smaller data sets. We find that the shell model can reproduce the observed spectral shapes very well, better than the also common `Gaussian-minus-Gaussian' model, which we also fitted to the dataset. Specifically, we find that of the fits possess a goodness-of-fit value of . The large number of spectra allows us to robustly characterize the shell-model parameter range and, consequently, the spectral shapes typical of realistic spectra. We find that the vast majority of the Lyα spectral shapes require an outflow and only are well fitted by an inflowing shell. In addition, we find of the spectra to be consistent with a neutral hydrogen column density , suggestive of a non-negligible fraction of continuum leakers in the MUSE-Wide sample. Furthermore, we correlate the spectral and Lyα halo properties against each other but do not find any strong correlation.
Comment: 10 pages, 7 figures; data can be downloaded at http://bit.ly/a-spectra-of-M
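The `Gaussian-minus-Gaussian' comparison model mentioned above is just a broad emission Gaussian with a narrower absorption Gaussian subtracted, and model selection is typically done with a goodness-of-fit statistic such as a reduced chi-squared. The sketch below shows both ingredients under that common convention; parameter names and values are illustrative, not those of the paper.

```python
import math

def gauss(x, amp, mu, sigma):
    """Single Gaussian profile."""
    return amp * math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def gmg(x, a_em, mu_em, s_em, a_abs, mu_abs, s_abs):
    """'Gaussian-minus-Gaussian' line profile: emission minus absorption."""
    return gauss(x, a_em, mu_em, s_em) - gauss(x, a_abs, mu_abs, s_abs)

def reduced_chi2(model, data, sigma, n_params):
    """Reduced chi-squared: sum of squared, noise-weighted residuals per
    degree of freedom, used to rank fits to the observed spectra."""
    chi2 = sum(((m - d) / s) ** 2 for m, d, s in zip(model, data, sigma))
    return chi2 / (len(data) - n_params)
```

Evaluating `gmg` on a wavelength grid and comparing `reduced_chi2` across models is the generic version of the comparison the abstract describes; the shell model itself requires a radiative transfer code and is not reproduced here.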
Data mining and accelerated electronic structure theory as a tool in the search for new functional materials
Data mining is a recognized predictive tool in a variety of areas ranging
from bioinformatics and drug design to crystal structure prediction. In the
present study, an electronic structure implementation has been combined with
structural data from the Inorganic Crystal Structure Database to generate
results for highly accelerated electronic structure calculations of about
22,000 inorganic compounds. It is shown how data mining algorithms employed on
the database can identify new functional materials with desired materials
properties, resulting in a prediction of 136 novel materials with potential for
use as detector materials for ionizing radiation. The methodology behind the
automated ab-initio approach is presented, results are tabulated, and a
version of the complete database is made available at
http://gurka.fysik.uu.se/ESP/ (Ref. 1).
Comment: Project homepage: http://gurka.fysik.uu.se/ESP
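At its simplest, the screening step described above is a filter over computed property records. The sketch below illustrates that idea for detector materials, where one typically wants a sizeable band gap (low thermal noise) and high density (good stopping power); the record keys and thresholds are hypothetical, not the criteria used in the study.

```python
def screen_detector_candidates(compounds, min_gap=1.5, max_gap=3.0,
                               min_density=5.0):
    """Filter computed compound records by properties relevant to
    radiation detectors.

    `compounds` is a list of dicts with assumed keys 'formula',
    'band_gap_eV' and 'density_g_cm3'; the thresholds are illustrative
    defaults, not the paper's selection criteria.
    """
    return [c['formula'] for c in compounds
            if min_gap <= c['band_gap_eV'] <= max_gap
            and c['density_g_cm3'] >= min_density]
```

Running such a filter over ~22,000 computed entries is cheap; the expensive part, which the paper accelerates, is generating the electronic structure data in the first place.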
Reciprocity Calibration for Massive MIMO: Proposal, Modeling and Validation
This paper presents a mutual coupling based calibration method for
time-division-duplex massive MIMO systems, which enables downlink precoding
based on uplink channel estimates. The entire calibration procedure is carried
out solely at the base station (BS) side by sounding all BS antenna pairs. An
Expectation-Maximization (EM) algorithm is derived, which processes the
measured channels in order to estimate calibration coefficients. The EM
algorithm outperforms current state-of-the-art narrow-band calibration schemes
in a mean squared error (MSE) and sum-rate capacity sense. Like its
predecessors, the EM algorithm is general in the sense that it is suitable
not only for calibrating a co-located massive MIMO BS but also for
calibrating multiple BSs in distributed MIMO systems.
The proposed method is validated with experimental evidence obtained from a
massive MIMO testbed. In addition, we model the estimated narrow-band
calibration coefficients as a stochastic process across frequency, and study
the subspace of this process based on measurement data. With the insights of
this study, we propose an estimator which exploits the structure of the process
in order to reduce the calibration error across frequency. A model for the
calibration error is also proposed based on the asymptotic properties of the
estimator, and is validated with measurement results.
Comment: Submitted to IEEE Transactions on Wireless Communications, 21/Feb/201
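To make the reciprocity idea concrete, here is a much simpler baseline than the paper's EM algorithm: sounding each antenna against a single reference antenna and taking the ratio of the two directions, which yields relative calibration coefficients up to a common complex scale (assuming the over-the-air channel between a pair is reciprocal). This is a generic illustration, not the proposed mutual-coupling pairwise scheme.

```python
def relative_calibration(y_ref_to_i, y_i_to_ref):
    """Relative calibration coefficients from sounding each antenna
    against a reference antenna.

    y_ref_to_i[i]: channel measured from the reference to antenna i+1
    y_i_to_ref[i]: channel measured in the reverse direction

    Since the propagation channel cancels in the ratio, each coefficient
    reduces to the antenna's transmit/receive RF-chain mismatch relative
    to the reference, which by convention gets coefficient 1.
    """
    return [1.0 + 0j] + [b / f for f, b in zip(y_ref_to_i, y_i_to_ref)]
```

Applying these coefficients to the uplink channel estimates gives a downlink estimate suitable for precoding; the paper's EM estimator instead fuses measurements from all antenna pairs, which is what improves the MSE over reference-based schemes.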
A Process to Implement an Artificial Neural Network and Association Rules Techniques to Improve Asset Performance and Energy Efficiency
In this paper, we address the problem of asset performance monitoring, with the intention of both detecting any potential reliability problem and predicting any loss of energy consumption efficiency. This is an important concern for many industries and utilities with very intensive capitalization in very long-lasting assets. To overcome this problem, we propose an approach that combines an Artificial Neural Network (ANN) with Data Mining (DM) tools, specifically with Association Rule (AR) Mining. The combination of these two techniques can now be done using software that can handle large volumes of data (big data), but the process still needs to ensure that the required amount of data will be available during the assets' life cycle and that its quality is acceptable. The combination of these two techniques in the proposed sequence differs from previous works found in the literature, giving researchers new options to face the problem. Practical implementation of the proposed approach may lead to novel predictive maintenance models (emerging predictive analytics) that may detect with unprecedented precision any asset's lack of performance and help manage assets' O&M accordingly. The approach is illustrated using specific examples where asset performance monitoring is rather complex under normal operational conditions.
Ministerio de Economía y Competitividad DPI2015-70842-
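The Association Rule Mining component mentioned above rests on two statistics: the support of an itemset (how often it occurs) and the confidence of a rule A -> B (how often B occurs given A). The sketch below computes single-antecedent rules from a set of transactions; it is a minimal illustration of AR mining, not the paper's pipeline, and the example item names are hypothetical condition-monitoring events.

```python
from itertools import combinations

def association_rules(transactions, min_support=0.5, min_confidence=0.7):
    """Mine single-antecedent rules A -> B above support and confidence
    thresholds. `transactions` is a list of sets of items."""
    n = len(transactions)
    items = {i for t in transactions for i in t}
    # support of every 1- and 2-itemset: fraction of transactions containing it
    support = {s: sum(s <= t for t in transactions) / n
               for k in (1, 2)
               for s in map(frozenset, combinations(sorted(items), k))}
    rules = []
    for a, b in combinations(sorted(items), 2):
        pair = frozenset((a, b))
        if support[pair] < min_support:
            continue
        for ant, cons in ((a, b), (b, a)):
            conf = support[pair] / support[frozenset((ant,))]
            if conf >= min_confidence:
                rules.append((ant, cons, support[pair], conf))
    return rules
```

In an asset-monitoring setting, each transaction would be the set of events observed in a time window, so a mined rule can link, say, a sensor anomaly to a subsequent efficiency loss, complementing the ANN's numeric predictions.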
Use of A Network Enabled Server System for a Sparse Linear Algebra Grid Application
Solving sparse systems of linear equations is one of the key operations in linear algebra. Many different algorithms are available for that purpose. These algorithms require very accurate tuning to minimise runtime and memory consumption. The TLSE project provides, on the one hand, a scenario-driven expert site to help users choose the right algorithm for their problem and tune it accurately, and, on the other hand, a test bed for experts to compare algorithms and define scenarios for the expert site. Both features require running the available solvers a large number of times with many different values of the control parameters (and possibly on many different machine architectures). Currently, only the grid can provide enough computing power for this kind of application. The DIET middleware is the grid backbone for TLSE. It manages the solver services and their scheduling in a scalable way.
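The workload the abstract describes is a parameter sweep: run each solver once per combination of control-parameter values. A sequential sketch of that sweep is below; the parameter names are hypothetical, and in the actual system DIET would dispatch these runs over the grid in parallel rather than looping locally.

```python
from itertools import product

def sweep(solver, grid):
    """Call `solver` for every combination of control-parameter values.

    `grid` maps parameter names to lists of candidate values; each run is
    recorded as (params, result) so scenarios can compare configurations.
    """
    keys = sorted(grid)
    results = []
    for values in product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        results.append((params, solver(**params)))
    return results
```

The number of runs is the product of the per-parameter choices, which is why only a grid platform provides enough computing power once several parameters, solvers, and architectures are crossed.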