Multimodal Machine Learning-based Knee Osteoarthritis Progression Prediction from Plain Radiographs and Clinical Data
Knee osteoarthritis (OA) is the most common musculoskeletal disease without a
cure, and current treatment options are limited to symptomatic relief.
Prediction of OA progression is a challenging and timely problem; if solved, it
could accelerate disease-modifying drug development and ultimately help prevent
the millions of total joint replacement surgeries performed annually. Here, we
present a multimodal machine learning-based OA progression prediction model
that utilizes raw radiographic data, clinical examination results, and the
patient's previous medical history. We validated
this approach on an independent test set of 3,918 knee images from 2,129
subjects. Our method yielded area under the ROC curve (AUC) of 0.79 (0.78-0.81)
and Average Precision (AP) of 0.68 (0.66-0.70). In contrast, a reference
approach, based on logistic regression, yielded AUC of 0.75 (0.74-0.77) and AP
of 0.62 (0.60-0.64). The proposed method could significantly improve the
subject selection process for OA drug-development trials and support the
development of personalized therapeutic plans.
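The multimodal fusion described above can be sketched as a simple late-fusion head over a precomputed image embedding and a clinical feature vector. This is a minimal illustrative sketch, not the paper's actual architecture; all names, dimensions, and weights are assumptions, and training is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical precomputed inputs: a CNN embedding of the knee radiograph
# and a vector of clinical variables (age, BMI, medical history, ...).
image_embedding = rng.normal(size=128)   # e.g. from a pretrained CNN
clinical_vector = rng.normal(size=10)    # tabular clinical/history data

def fuse_and_score(img_feat, clin_feat, w_img, w_clin, bias):
    """Late fusion: apply a linear head to both modality features and
    squash the combined logit into a progression probability."""
    logit = img_feat @ w_img + clin_feat @ w_clin + bias
    return 1.0 / (1.0 + np.exp(-logit))

# Randomly initialised head weights stand in for a trained classifier.
w_img = rng.normal(size=128) * 0.01
w_clin = rng.normal(size=10) * 0.1

p = fuse_and_score(image_embedding, clinical_vector, w_img, w_clin, 0.0)
print(f"predicted progression probability: {p:.3f}")
```

A logistic-regression reference model, as in the paper's baseline, would use only the clinical term of this head and drop the image embedding.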
Cross-modal Recurrent Models for Weight Objective Prediction from Multimodal Time-series Data
We analyse multimodal time-series data corresponding to weight, sleep and
steps measurements. We focus on predicting whether a user will successfully
achieve his/her weight objective. For this, we design several deep long
short-term memory (LSTM) architectures, including a novel cross-modal LSTM
(X-LSTM), and demonstrate their superiority over baseline approaches. The
X-LSTM improves parameter efficiency by processing each modality separately and
allowing for information flow between them by way of recurrent
cross-connections. We present a general hyperparameter optimisation technique
for X-LSTMs, which allows us to significantly improve on the LSTM and a prior
state-of-the-art cross-modal approach, using a comparable number of parameters.
Finally, we visualise the model's predictions, revealing implications about
latent variables in this task.
Comment: To appear in NIPS ML4H 2017 and NIPS TSW 2017.
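The recurrent cross-connections described above might be sketched as two per-modality streams whose updates also read each other's previous hidden state. This is a simplified tanh recurrence for illustration, not the paper's actual X-LSTM cell; all shapes and weights are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def step(x, h_self, h_other, W_x, W_h, W_cross):
    """One recurrent step for a single modality stream. Besides its own
    previous hidden state, the update reads the *other* modality's previous
    hidden state through a cross-connection weight matrix."""
    return np.tanh(x @ W_x + h_self @ W_h + h_other @ W_cross)

T, d_in, d_h = 5, 3, 8
weight_seq = rng.normal(size=(T, d_in))  # e.g. daily weight features
sleep_seq = rng.normal(size=(T, d_in))   # e.g. daily sleep features

# Separate (smaller) parameter sets per modality, rather than one
# monolithic recurrence over concatenated inputs.
def make_params():
    return (rng.normal(size=(d_in, d_h)) * 0.3,
            rng.normal(size=(d_h, d_h)) * 0.3,
            rng.normal(size=(d_h, d_h)) * 0.3)

pw, ps = make_params(), make_params()

h_w = h_s = np.zeros(d_h)
for t in range(T):
    # Both updates use the states from the previous time step.
    h_w, h_s = (step(weight_seq[t], h_w, h_s, *pw),
                step(sleep_seq[t], h_s, h_w, *ps))

# A prediction head would combine both streams' final hidden states.
fused = np.concatenate([h_w, h_s])
print(fused.shape)
```

Processing each modality in its own smaller stream is what gives the parameter-efficiency benefit the abstract mentions: two d_h-sized recurrences with cross-connections use fewer weights than one 2*d_h recurrence over concatenated inputs.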
Polarimetric Thermal to Visible Face Verification via Self-Attention Guided Synthesis
Polarimetric thermal to visible face verification entails matching two images
that contain significant domain differences. Several recent approaches have
attempted to synthesize visible faces from thermal images for cross-modal
matching. In this paper, we take a different approach: rather than focusing
only on synthesizing visible faces from thermal faces, we also propose to
synthesize thermal faces from visible faces. Our intuition is based on the
fact that thermal images also contain some discriminative information about the
person for verification. Deep features from a pre-trained Convolutional Neural
Network (CNN) are extracted from the original as well as the synthesized
images. These features are then fused into a template, which is used
for verification. The proposed synthesis network is based on the
self-attention generative adversarial network (SAGAN), which enables efficient
attention-guided image synthesis. Extensive experiments on the ARL polarimetric
thermal face dataset demonstrate that the proposed method achieves
state-of-the-art performance.
Comment: This work is accepted at the 12th IAPR International Conference on
Biometrics (ICB 2019).
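The template-and-verify step described above can be sketched as averaging L2-normalized deep features from the original and synthesized images, then thresholding cosine similarity. This is a common simple fusion rule used here for illustration; the paper's exact fusion and features are not reproduced, and all inputs below are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(2)

def l2_normalize(v):
    """Scale a vector to unit Euclidean norm."""
    return v / np.linalg.norm(v)

def make_template(feats):
    """Fuse deep features (e.g. from the real image and the synthesized
    cross-modal image) into one template by averaging normalized
    embeddings, then re-normalizing."""
    return l2_normalize(np.mean([l2_normalize(f) for f in feats], axis=0))

def verify(t1, t2, threshold=0.5):
    """Accept the pair if the cosine similarity of the two unit-norm
    templates exceeds the operating threshold."""
    score = float(t1 @ t2)
    return score, score > threshold

# Hypothetical 512-d CNN features: a probe (thermal + synthesized visible)
# and a gallery face (visible + synthesized thermal).
probe = make_template([rng.normal(size=512), rng.normal(size=512)])
gallery = make_template([rng.normal(size=512), rng.normal(size=512)])

score, accepted = verify(probe, gallery)
print(f"cosine score: {score:.3f}, accepted: {accepted}")
```

In practice the threshold is chosen on a validation set to hit a target false-accept rate; unrelated random features, as here, should score near zero.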