
    Automated estimation of total lung volume using chest radiographs and deep learning

    BACKGROUND: Total lung volume is an important quantitative biomarker and is used for the assessment of restrictive lung diseases. PURPOSE: In this study, we investigate the performance of several deep-learning approaches for automated measurement of total lung volume from chest radiographs. METHODS: A total of 7621 posteroanterior and lateral view chest radiographs (CXR) were collected from patients with chest CT available. Similarly, 928 CXR studies were chosen from patients with pulmonary function test (PFT) results. The reference total lung volume was calculated from lung segmentation on CT or from PFT data, respectively. This dataset was used to train deep-learning architectures to predict total lung volume from chest radiographs. The experiments were constructed in a stepwise fashion with increasing complexity to demonstrate the effect of training with CT-derived labels only and the sources of error. The optimal models were tested on 291 CXR studies with reference lung volume obtained from PFT. Mean absolute error (MAE), mean absolute percentage error (MAPE), and Pearson correlation coefficient (Pearson's r) were computed. RESULTS: The optimal deep-learning regression model showed an MAE of 408 ml and an MAPE of 8.1% using both frontal and lateral chest radiographs as input. The predictions were highly correlated with the reference standard (Pearson's r = 0.92). CT-derived labels were useful for pretraining, but the optimal performance was obtained by fine-tuning the network with PFT-derived labels. CONCLUSION: We demonstrate, for the first time, that state-of-the-art deep-learning solutions can accurately measure total lung volume from plain chest radiographs. The proposed model is made publicly available and can be used to obtain total lung volume from routinely acquired chest radiographs at no additional cost. This deep-learning system can be a useful tool to identify trends over time in patients referred regularly for chest X-ray.
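    For readers who want to reproduce the reported error measures, the sketch below shows how MAE, MAPE, and Pearson's r could be computed from paired predicted and reference lung volumes; the array names and example values are illustrative placeholders, not data from the study.

    ```python
    import numpy as np
    from scipy.stats import pearsonr

    # Illustrative values only (millilitres); not the study's data.
    reference = np.array([5200.0, 4300.0, 6100.0, 3800.0])   # PFT-derived total lung volume
    predicted = np.array([5050.0, 4520.0, 5900.0, 4010.0])   # model output from frontal + lateral CXR

    mae = np.mean(np.abs(predicted - reference))                     # mean absolute error (ml)
    mape = np.mean(np.abs(predicted - reference) / reference) * 100  # mean absolute percentage error (%)
    r, _ = pearsonr(predicted, reference)                            # Pearson correlation coefficient

    print(f"MAE  = {mae:.0f} ml")
    print(f"MAPE = {mape:.1f} %")
    print(f"Pearson's r = {r:.2f}")
    ```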

    Knowledge distillation with ensembles of convolutional neural networks for medical image segmentation

    Purpose: Ensembles of convolutional neural networks (CNNs) often outperform a single CNN in medical image segmentation tasks, but their inference is computationally more expensive, which makes ensembles unattractive for some applications. We compared the performance of differently constructed ensembles with the performance of CNNs derived from these ensembles using knowledge distillation, a technique for reducing the footprint of large models such as ensembles. Approach: We investigated two types of ensembles: diverse ensembles of networks with three different architectures and two different loss functions, and uniform ensembles of networks with the same architecture but initialized with different random seeds. For each ensemble, a single student network was additionally trained to mimic the class probabilities predicted by the teacher model, the ensemble. We evaluated the performance of each network, the ensembles, and the corresponding distilled networks across three publicly available datasets. These included chest computed tomography scans with four annotated organs of interest, brain magnetic resonance imaging (MRI) with six annotated brain structures, and cardiac cine-MRI with three annotated heart structures. Results: Both uniform and diverse ensembles obtained better results than any of the individual networks in the ensemble. Furthermore, applying knowledge distillation resulted in a single network that was smaller and faster without compromising performance compared with the ensemble it learned from. The distilled networks significantly outperformed the same networks trained with reference segmentations instead of knowledge distillation. Conclusion: Knowledge distillation can compress segmentation ensembles of uniform or diverse composition into a single CNN while maintaining the performance of the ensemble.
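    As a rough illustration of the distillation step described above, the sketch below trains a student segmentation network against the soft per-pixel class probabilities obtained by averaging an ensemble's predictions. The function names, loss formulation, and the commented training loop are placeholder assumptions for demonstration, not the authors' implementation.

    ```python
    import torch
    import torch.nn.functional as F

    def ensemble_soft_labels(teachers, images):
        """Average the per-pixel class probabilities of all ensemble members (the teacher)."""
        with torch.no_grad():
            probs = torch.stack([F.softmax(t(images), dim=1) for t in teachers])
            return probs.mean(dim=0)  # shape: (batch, classes, H, W)

    def distillation_loss(student_logits, teacher_probs):
        """Cross-entropy between the teacher's soft labels and the student's predictions."""
        log_probs = F.log_softmax(student_logits, dim=1)
        return -(teacher_probs * log_probs).sum(dim=1).mean()

    # Hypothetical training loop: `student`, `teachers`, and `loader` are assumed to exist.
    # optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)
    # for images, _ in loader:
    #     targets = ensemble_soft_labels(teachers, images)
    #     loss = distillation_loss(student(images), targets)
    #     optimizer.zero_grad(); loss.backward(); optimizer.step()
    ```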

    Removal of Cr(VI) from aqueous solution by a highly efficient chelating resin

    The poly([(2-methacryloyloxy)ethyl]trimethylammonium chloride) [P(MOTA)]-based chelating resin was synthesized by radical polymerization and employed for Cr(VI) removal. The sorption capacity of this resin was very high, with a fast sorption rate for Cr(VI) obeying a pseudo-second-order kinetic model. In agreement with diffusion model equations, the rate-determining step was film diffusion according to the infinite solution volume (ISV) model and the reacted layer in accordance with the unreacted core (UC) model. In a column-mode sorption study, the breakthrough capacity obtained was 24.3 mg Cr/mL-resin. The elution of Cr(VI) from the resin was achieved using a mixture of 1.0 mol/L NaOH and 1.0 mol/L NaCl with an elution efficiency of about 100%. Based on FT-IR measurements, it was clear that Cr(VI) was sorbed by the resin through the quaternary amine functional groups. The authors thank the CHILTURPOL2 (PIRSESGA-2009 Project, Grant Number 269153) 7FP-MC Actions Grant, FONDECYT (Grant No. 1150510), and REDOC (MINEDUC Project UCO1202 at the University of Concepcion) for financial support.
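    As a small illustration of the pseudo-second-order kinetic model mentioned in the abstract, the sketch below fits q_t = k2 * qe^2 * t / (1 + k2 * qe * t) to uptake data with SciPy; the contact times, sorbed amounts, and initial guesses are made-up values for demonstration, not data from the paper.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def pseudo_second_order(t, qe, k2):
        """Pseudo-second-order uptake: q_t = k2 * qe^2 * t / (1 + k2 * qe * t)."""
        return (k2 * qe**2 * t) / (1.0 + k2 * qe * t)

    # Hypothetical contact times (min) and sorbed amounts (mg/g).
    t = np.array([5, 10, 20, 40, 60, 120], dtype=float)
    q = np.array([60.0, 95.0, 130.0, 155.0, 165.0, 175.0])

    (qe_fit, k2_fit), _ = curve_fit(pseudo_second_order, t, q, p0=[180.0, 0.001])
    print(f"qe = {qe_fit:.1f} mg/g, k2 = {k2_fit:.4f} g/(mg*min)")
    ```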