2 research outputs found
Internal-transfer Weighting of Multi-task Learning for Lung Cancer Detection
Recently, multi-task networks have been shown both to offer additional
estimation capabilities and, perhaps more importantly, to achieve higher
performance than single-task networks on a "main"/primary task. However,
balancing the
optimization criteria of multi-task networks across different tasks is an area
of active exploration. Here, we extend a previously proposed 3D attention-based
network with four additional multi-task subnetworks for the detection of lung
cancer and four auxiliary tasks (diagnosis of asthma, chronic bronchitis,
chronic obstructive pulmonary disease, and emphysema). We introduce and
evaluate a learning policy, Periodic Focusing Learning Policy (PFLP), that
alternates the dominance of tasks throughout the training. To improve
performance on the primary task, we propose an Internal-Transfer Weighting
(ITW) strategy to suppress the loss functions on auxiliary tasks for the final
stages of training. To evaluate this approach, we examined 3386 patients
(single scan per patient) from the National Lung Screening Trial (NLST) and
de-identified data from the Vanderbilt Lung Screening Program, with a
2517/277/592 (scans) split for training, validation, and testing. The
baseline networks use a single-task strategy and a multi-task strategy
without adaptive weighting (i.e., neither PFLP nor ITW), while the primary
experiments are multi-task trials with PFLP, ITW, or both. On the test set
for lung cancer prediction,
the baseline single-task network achieved prediction AUC of 0.8080 and the
multi-task baseline failed to converge (AUC 0.6720). However, applying PFLP
helped the multi-task network converge and achieved a test-set lung cancer
prediction AUC of 0.8402. Furthermore, our ITW technique boosted the
PFLP-enabled multi-task network further, achieving an AUC of 0.8462 (McNemar
test, p < 0.01).
Comment: Accepted by Medical Imaging, SPIE202
Deep Multi-path Network Integrating Incomplete Biomarker and Chest CT Data for Evaluating Lung Cancer Risk
Clinical data elements (CDEs) (e.g., age, smoking history), blood markers and
chest computed tomography (CT) structural features have been regarded as
effective means for assessing lung cancer risk. These independent variables can
provide complementary information and we hypothesize that combining them will
improve prediction accuracy. In practice, not all patients have all of these
variables available. In this paper, we propose a new network design, termed
the multi-path multi-modal missing network (M3Net), which integrates
multi-modal data (i.e., CDEs, biomarkers, and CT images) while accounting
for missing modalities via a multi-path neural network. Each path learns
discriminative features of one
modality, and different modalities are fused in a second stage for an
integrated prediction. The network can be trained end-to-end with both medical
image features and CDEs/biomarkers, or can make a prediction from a single
modality.
We evaluate M3Net with datasets including three sites from the Consortium for
Molecular and Cellular Characterization of Screen-Detected Lesions (MCL)
project. Our method is cross validated within a cohort of 1291 subjects (383
subjects with complete CDEs/biomarkers and CT images), and externally validated
with a cohort of 99 subjects (99 with complete CDEs/biomarkers and CT images).
Both cross-validation and external-validation results show that combining
multiple modalities significantly improves prediction performance over any
single modality. The results also suggest that integrating subjects missing
either CDEs/biomarkers or CT imaging features contributes to the
discriminatory power of our model (p < 0.05, two-tailed bootstrap test). In
summary, the
proposed M3Net framework provides an effective way to integrate image and
non-image data in the context of missing information.
Comment: RFW all-conference best paper finalist, SPIE2021 Medical Imaging
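The multi-path idea above can be sketched as follows: each modality (CDEs, biomarkers, CT features) has its own path, and the second-stage fusion uses only the paths whose inputs are present. The placeholder linear "paths" and the simple average fusion below are illustrative assumptions; the actual M3Net learns deep per-path features and a trained fusion stage.

```python
def linear_path(weights, bias):
    """Build a placeholder per-modality path: a linear map standing in
    for the learned feature extractor of one modality."""
    def path(x):
        return sum(w * xi for w, xi in zip(weights, x)) + bias
    return path

def m3net_predict(inputs, paths):
    """Fuse risk scores from the available modalities only.

    `inputs` is one feature vector per modality, with None marking a
    missing modality; `paths` holds the corresponding per-modality models.
    """
    scores = [path(x) for x, path in zip(inputs, paths) if x is not None]
    if not scores:
        raise ValueError("at least one modality is required")
    return sum(scores) / len(scores)  # simple average fusion (sketch)
```

A usage example: with paths for CDEs, biomarkers, and CT features, a patient missing the biomarker panel is scored from the remaining two paths, so no subject has to be discarded for incomplete data.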