Latent Patient Network Learning for Automatic Diagnosis
Recently, Graph Convolutional Networks (GCNs) have proven to be a powerful
machine learning tool for Computer Aided Diagnosis (CADx) and disease
prediction. A key component in these models is to build a population graph,
where the graph adjacency matrix represents pair-wise patient similarities.
Until now, the similarity metrics have been defined manually, usually based on
meta-features like demographics or clinical scores. The definition of the
metric, however, needs careful tuning, as GCNs are very sensitive to the graph
structure. In this paper, we demonstrate for the first time in the CADx domain
that it is possible to learn a single, optimal graph towards the GCN's
downstream task of disease classification. To this end, we propose a novel,
end-to-end trainable graph learning architecture for dynamic and localized
graph pruning. Unlike commonly employed spectral GCN approaches, our GCN is
spatial and inductive, and can thus generalize to previously unseen patients.
We demonstrate significant classification improvements with our learned graph
on two CADx problems in medicine. We further explain and visualize this result
using an artificial dataset, underlining the importance of graph learning for
more accurate and robust inference with GCNs in medical applications.
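The population-graph idea above can be sketched in a few lines. This is an illustrative assumption, not the paper's architecture: the Gaussian kernel on meta-features, the fixed pruning threshold, and the single propagation layer stand in for components the paper learns end-to-end.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy population: 6 patients, 4 node features, 2 meta-features each.
feats = rng.normal(size=(6, 4))        # per-patient features fed to the GCN
meta = rng.normal(size=(6, 2))         # demographics / clinical scores

# Pairwise similarity from meta-features (Gaussian kernel), then a hard
# threshold that "prunes" weak edges.  In the paper this pruning is dynamic,
# localized, and trainable; here the cutoff is a fixed hypothetical value.
d2 = ((meta[:, None, :] - meta[None, :, :]) ** 2).sum(-1)
adj = np.exp(-d2)                      # dense similarity graph
adj = np.where(adj > 0.5, adj, 0.0)    # localized pruning (hypothetical cutoff)

# Symmetric normalization: A_hat = D^{-1/2} (A + I) D^{-1/2}
a_hat = adj + np.eye(6)
d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(1))
a_hat = d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]

# One spatial GCN propagation step: H = ReLU(A_hat X W).
w = rng.normal(size=(4, 3))
h = np.maximum(a_hat @ feats @ w, 0.0)
print(h.shape)  # (6, 3)
```

Because the aggregation only uses each node's local neighborhood, the same weights `w` can be applied to a graph containing new patients, which is what makes the model inductive.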
On the Generation of Medical Question-Answer Pairs
Question answering (QA) has achieved promising progress recently. However,
answering a question in real-world scenarios like the medical domain is still
challenging, due to the requirement of external knowledge and the insufficient
quantity of high-quality training data. In the light of these challenges, we
study the task of generating medical QA pairs in this paper. With the insight
that each medical question can be considered as a sample from the latent
distribution of questions given answers, we propose an automated medical QA
pair generation framework, consisting of an unsupervised key phrase detector
that explores unstructured material for validity, and a generator that involves
a multi-pass decoder to integrate structural knowledge for diversity. A series
of experiments have been conducted on a real-world dataset collected from the
National Medical Licensing Examination of China. Both automatic evaluation and
human annotation demonstrate the effectiveness of the proposed method. Further
investigation shows that, by incorporating the generated QA pairs for training,
significant improvement in terms of accuracy can be achieved for the
examination QA system.
Comment: AAAI 202
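The abstract does not specify how its unsupervised key phrase detector works; a common unsupervised baseline for that step is TF-IDF scoring over the unstructured material, sketched below as a stand-in (the function name and scoring are assumptions, not the paper's model).

```python
import math
import re
from collections import Counter

# Tiny corpus standing in for unstructured medical material.
docs = [
    "aspirin inhibits platelet aggregation and reduces fever",
    "metformin lowers blood glucose in type 2 diabetes",
    "aspirin is contraindicated with active peptic ulcer disease",
]

def tfidf_keyphrases(corpus, top_k=2):
    """Rank unigrams per document by TF-IDF; a generic unsupervised
    key-phrase baseline, not the detector described in the paper."""
    tokenized = [re.findall(r"[a-z0-9]+", d.lower()) for d in corpus]
    df = Counter(t for toks in tokenized for t in set(toks))
    n = len(corpus)
    out = []
    for toks in tokenized:
        tf = Counter(toks)
        scores = {t: tf[t] / len(toks) * math.log((1 + n) / (1 + df[t]))
                  for t in tf}
        out.append(sorted(scores, key=scores.get, reverse=True)[:top_k])
    return out

keyphrases = tfidf_keyphrases(docs)
print(keyphrases)
```

Detected phrases like these would then seed the generator, which the paper pairs with a multi-pass decoder to produce diverse questions conditioned on answers.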
Deep learning cardiac motion analysis for human survival prediction
Motion analysis is used in computer vision to understand the behaviour of
moving objects in sequences of images. Optimising the interpretation of dynamic
biological systems requires accurate and precise motion tracking as well as
efficient representations of high-dimensional motion trajectories so that these
can be used for prediction tasks. Here we use image sequences of the heart,
acquired using cardiac magnetic resonance imaging, to create time-resolved
three-dimensional segmentations using a fully convolutional network trained on
anatomical shape priors. This dense motion model formed the input to a
supervised denoising autoencoder (4Dsurvival), which is a hybrid network
consisting of an autoencoder that learns a task-specific latent code
representation trained on observed outcome data, yielding a latent
representation optimised for survival prediction. To handle right-censored
survival outcomes, our network used a Cox partial likelihood loss function. In
a study of 302 patients, the predictive accuracy (quantified by Harrell's
C-index) was significantly higher (p < 0.0001) for our model, C = 0.73 (95% CI:
0.68 - 0.78), than for the human benchmark of C = 0.59 (95% CI: 0.53 - 0.65).
This work demonstrates how a complex computer vision task using
high-dimensional medical image data can efficiently predict human survival.
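The Cox partial likelihood loss mentioned above can be written compactly. The sketch below is a minimal Breslow-style version for right-censored outcomes (ties are not handled specially, and the function name is illustrative); the paper attaches such a loss to the autoencoder's latent code.

```python
import numpy as np

def cox_partial_likelihood_loss(risk, time, event):
    """Negative Cox partial log-likelihood for right-censored outcomes.

    risk  : predicted log-risk scores (higher = higher hazard)
    time  : follow-up times
    event : 1 if the event was observed, 0 if right-censored

    Breslow approximation; a minimal sketch, not 4Dsurvival's exact code.
    """
    order = np.argsort(-time)             # sort patients by descending time
    risk, event = risk[order], event[order]
    # Running log-sum-exp over the risk set (everyone still under follow-up).
    log_risk_set = np.logaddexp.accumulate(risk)
    # Only observed events contribute terms to the partial likelihood.
    return -np.sum((risk - log_risk_set) * event) / max(event.sum(), 1)

scores = np.array([1.2, 0.3, -0.5, 0.8])
times = np.array([2.0, 5.0, 3.0, 7.0])
events = np.array([1, 0, 1, 1])
loss = cox_partial_likelihood_loss(scores, times, events)
print(float(loss))
```

Because the loss depends only on the ordering of predicted risks within each risk set, minimizing it drives the latent representation toward good concordance, which is exactly what Harrell's C-index measures.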
Informative sample generation using class aware generative adversarial networks for classification of chest Xrays
Training robust deep learning (DL) systems for disease detection from medical
images is challenging due to limited images covering different disease types
and severity. The problem is especially acute where there is severe class
imbalance. We propose an active learning (AL) framework that uses a Bayesian
neural network to select the most informative samples for training our model.
Informative samples are then used within a novel class aware generative
adversarial network (CAGAN) to generate realistic chest X-ray images for data
augmentation by transferring characteristics from one class label to another.
Experiments show our proposed AL framework is able to achieve state-of-the-art
performance using only a fraction of the full dataset, saving significant
time and effort over conventional methods.
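A common way to score "informative" samples with a Bayesian neural network is the BALD mutual-information criterion estimated from stochastic forward passes (e.g. MC dropout). The sketch below is a generic version of that selection step; the paper's exact acquisition function is not specified in the abstract and may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

def bald_scores(mc_probs):
    """BALD acquisition: mutual information between the prediction and the
    model parameters, estimated from Monte Carlo samples.

    mc_probs : array (T, N, C) of class probabilities from T stochastic
               forward passes over N candidate images with C classes.
    """
    eps = 1e-12
    mean = mc_probs.mean(0)                                   # (N, C)
    pred_entropy = -(mean * np.log(mean + eps)).sum(-1)       # H[y | x]
    exp_entropy = -(mc_probs * np.log(mc_probs + eps)).sum(-1).mean(0)
    return pred_entropy - exp_entropy                         # mutual info

# Fake MC-dropout output: 20 passes, 5 candidate X-rays, 3 classes.
logits = rng.normal(size=(20, 5, 3))
probs = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)
scores = bald_scores(probs)
most_informative = np.argsort(-scores)[:2]   # top-2 candidates to label
print(most_informative)
```

Samples with high BALD scores are those the model is uncertain about for parameter-related reasons; labelling them (or feeding them to the CAGAN-style augmentation) yields the largest expected benefit per annotation.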