
    Monitoring Achilles tendon healing progress in ultrasound imaging with convolutional neural networks

    Achilles tendon rupture is a debilitating injury, typically treated with surgical repair and long-term rehabilitation. Recovery, however, is protracted and often incomplete. Diagnosis, as well as assessment of healing progress, is largely based on ultrasound and magnetic resonance imaging. In this paper, we propose an automatic method based on deep learning for analysis of Achilles tendon condition and estimation of its healing progress from ultrasound images. We develop custom convolutional neural networks for classification, regression on the healing score, and feature extraction. Our models are trained and validated on an acquired dataset of over 250,000 sagittal and over 450,000 axial ultrasound slices. The obtained estimates show a high correlation with the assessments of expert radiologists with respect to all key parameters describing healing progress. We also observe that parameters associated with, inter alia, intratendinous healing processes are better modeled with sagittal slices. We show that ultrasound imaging is quantitatively useful for clinical assessment of the Achilles tendon healing process and should be viewed as complementary to magnetic resonance imaging.
    Comment: Paper accepted to the MICCAI'19 SUSI workshop.
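The reported agreement between model estimates and expert assessment is typically quantified with a correlation coefficient. A minimal sketch of such a check, using hypothetical healing scores rather than the paper's data:

```python
import numpy as np

# Hypothetical healing scores (0 = freshly ruptured, 1 = fully healed):
# expert radiologist assessment vs. model estimate for seven studies
expert = np.array([0.10, 0.25, 0.40, 0.55, 0.70, 0.85, 0.95])
model = np.array([0.12, 0.22, 0.43, 0.50, 0.74, 0.80, 0.97])

# Pearson correlation coefficient between the two assessments
r = np.corrcoef(expert, model)[0, 1]
print(f"Pearson r = {r:.3f}")
```

A value of r close to 1 indicates the model's estimates track the expert's scoring closely; in practice one would also report a confidence interval over many held-out studies.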

    AutoRadiomics: a framework for reproducible radiomics research

    Purpose: Machine learning based on radiomics features has seen huge success in a variety of clinical applications. However, the need for standardization and reproducibility is increasingly recognized as a necessary step toward clinical translation. We developed a novel, intuitive open-source framework that facilitates all data analysis steps of a radiomics workflow in an easy and reproducible manner, and evaluated it by reproducing classification results on eight available open-source datasets from different clinical entities.
    Methods: The framework performs image preprocessing, feature extraction, feature selection, modeling, and model evaluation, and can automatically choose the optimal parameters for a given task. All analysis steps can be reproduced with a web application, which offers an interactive user interface and does not require programming skills. We evaluated our method in seven different clinical applications using eight public datasets: six datasets from the recently published WORC database, and two prostate MRI datasets, Prostate MRI and Ultrasound With Pathology and Coordinates of Tracked Biopsy (Prostate-UCLA) and PROSTATEx.
    Results: In the analyzed datasets, AutoRadiomics successfully created and optimized models using radiomics features. For the WORC datasets, we achieved AUCs ranging from 0.56 for lung melanoma metastasis detection to 0.93 for liposarcoma detection, thereby replicating the previously reported results. No significant overfitting between training and test sets was observed. For the prostate cancer detection task, results were better on the PROSTATEx dataset (AUC = 0.73 for the prostate mask and 0.72 for the lesion mask) than on the Prostate-UCLA dataset (AUC = 0.61 for the prostate mask and 0.65 for the lesion mask), with external validation results varying from AUC = 0.51 to AUC = 0.77.
    Conclusion: AutoRadiomics is a robust tool for radiomics studies, which can be used as a comprehensive solution, as one of the analysis steps, or as an exploratory tool. Its wide applicability was confirmed by the results obtained on the diverse analyzed datasets. The framework, as well as the code for this analysis, is publicly available at https://github.com/pwoznicki/AutoRadiomics.
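The feature-extraction step of a radiomics workflow like the one AutoRadiomics automates can be sketched in miniature: compute first-order statistics over the voxels inside a segmentation mask and assemble them into a feature vector. This is a hand-rolled illustration on synthetic data, not the framework's actual API, which builds on dedicated extraction libraries:

```python
import numpy as np

def first_order_features(image: np.ndarray, mask: np.ndarray) -> dict:
    """Compute a few first-order radiomics features inside a binary mask."""
    voxels = image[mask > 0].astype(float)
    mean = voxels.mean()
    std = voxels.std()
    # Shannon entropy of a 32-bin intensity histogram
    hist, _ = np.histogram(voxels, bins=32)
    p = hist[hist > 0] / hist.sum()
    return {
        "mean": float(mean),
        "std": float(std),
        # Fisher skewness: third standardized moment
        "skewness": float(((voxels - mean) ** 3).mean() / std**3) if std > 0 else 0.0,
        "energy": float((voxels**2).sum()),
        "entropy": float(-(p * np.log2(p)).sum()),
    }

# Synthetic 3-D "scan" with a cubic region of interest as the mask
rng = np.random.default_rng(0)
image = rng.normal(100.0, 15.0, size=(16, 16, 16))
mask = np.zeros(image.shape, dtype=int)
mask[4:12, 4:12, 4:12] = 1
features = first_order_features(image, mask)
```

In a full pipeline, vectors like this one, extracted per patient, would feed the feature-selection and modeling stages described above.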

    Addressing challenges in radiomics research: systematic review and repository of open-access cancer imaging datasets

    Objectives: Open-access cancer imaging datasets have become integral for evaluating novel AI approaches in radiology. However, their use in quantitative analysis with radiomics features presents unique challenges, such as incomplete documentation, low visibility, non-uniform data formats, data inhomogeneity, and complex preprocessing. These issues may cause problems with reproducibility and standardization in radiomics studies.
    Methods: We systematically reviewed imaging datasets with public copyright licenses, published up to March 2023 across four large online cancer imaging archives. We included only datasets with tomographic images (CT, MRI, or PET), segmentations, and clinical annotations, specifically identifying those suitable for radiomics research. Reproducible preprocessing and feature extraction were performed for each dataset to enable easy reuse.
    Results: We identified 29 datasets with corresponding segmentations and labels in the form of health outcomes, tumor pathology, staging, imaging-based scores, genetic markers, or repeated imaging. We compiled a repository encompassing 10,354 patients and 49,515 scans. Of the 29 datasets, 15 were licensed under Creative Commons licenses, allowing both non-commercial and commercial usage and redistribution, while the others featured custom or restricted licenses. Studies spanned from the early 1990s to 2021, with the majority concluding after 2013. Seven different formats were used for the imaging data. Preprocessing and feature extraction were successfully performed for each dataset.
    Conclusion: RadiomicsHub is a comprehensive public repository with radiomics features derived from a systematic review of public cancer imaging datasets. By converting all datasets to a standardized format and ensuring reproducible and traceable processing, RadiomicsHub addresses key reproducibility and standardization challenges in radiomics.
    Critical relevance statement: This study addresses the challenges associated with locating, preprocessing, and extracting quantitative features from open-access datasets, to facilitate more robust and reliable evaluations of radiomics models.
    Key points:
    - Through a systematic review, we identified 29 cancer imaging datasets suitable for radiomics research.
    - A public repository with a collection overview and radiomics features, encompassing 10,354 patients and 49,515 scans, was compiled.
    - Most datasets can be shared, used, and built upon freely under a Creative Commons license.
    - All 29 identified datasets have been converted into a common format to enable reproducible radiomics feature extraction.

    Transfer-learning deep radiomics and hand-crafted radiomics for classifying lymph nodes from contrast-enhanced computed tomography in lung cancer

    Objectives: Positron emission tomography (PET) is currently considered the non-invasive reference standard for lymph node (N-)staging in lung cancer. However, not all patients can undergo this diagnostic procedure due to high costs, limited availability, and additional radiation exposure. The purpose of this study was to predict the PET result from conventional contrast-enhanced computed tomography (CT) and to test different feature extraction strategies.
    Methods: In this study, 100 lung cancer patients underwent a contrast-enhanced 18F-fluorodeoxyglucose (FDG) PET/CT scan between August 2012 and December 2019. We trained machine learning models to predict FDG uptake in the subsequent PET scan. Model inputs were composed of (i) traditional “hand-crafted” radiomics features from the segmented lymph nodes, (ii) deep features derived from a pretrained EfficientNet CNN, and (iii) a hybrid approach combining (i) and (ii).
    Results: In total, 2734 lymph nodes [555 (20.3%) PET-positive] from 100 patients (49% female; mean age 65 years, SD 14) with lung cancer (60% adenocarcinoma, 21% squamous cell carcinoma, 8% small-cell lung cancer) were included in this study. The area under the receiver operating characteristic curve (AUC) ranged from 0.79 to 0.87, and the scaled Brier score (SBS) ranged from 16% to 36%. The random forest model (iii) yielded the best results [AUC 0.871 (0.865-0.878), SBS 35.8 (34.2-37.2)] and had significantly higher model performance than either approach alone (AUC: p < 0.001, z = 8.8 and z = 22.4; SBS: p < 0.001, z = 11.4 and z = 26.6, against (i) and (ii), respectively).
    Conclusion: Both traditional radiomics features and transfer-learning deep radiomics features provide relevant and complementary information for non-invasive N-staging in lung cancer.
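The scaled Brier score reported above rescales the Brier score against a non-informative reference model that predicts the event prevalence for every case: 0% means no improvement over the reference, 100% a perfect model. A minimal sketch with made-up PET-positivity labels and predicted probabilities, not the study's data:

```python
import numpy as np

def scaled_brier_score(y_true, y_prob) -> float:
    """SBS = 1 - Brier / Brier_ref, where the reference model
    predicts the overall event prevalence for every case."""
    y_true = np.asarray(y_true, dtype=float)
    y_prob = np.asarray(y_prob, dtype=float)
    brier = np.mean((y_prob - y_true) ** 2)
    prevalence = y_true.mean()
    brier_ref = np.mean((prevalence - y_true) ** 2)
    return 1.0 - brier / brier_ref

# Hypothetical lymph-node labels (1 = PET-positive) and model probabilities
y = [0, 0, 1, 0, 1, 1, 0, 0]
p = [0.1, 0.2, 0.8, 0.3, 0.7, 0.9, 0.1, 0.2]
sbs = scaled_brier_score(y, p)
print(f"SBS = {sbs:.1%}")
```

Unlike the AUC, which only measures ranking, the SBS also rewards well-calibrated probabilities, which is why the study reports both.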

    Chemosensor for Selective Determination of 2,4,6-Trinitrophenol Using a Custom Designed Imprinted Polymer Recognition Unit Cross-Linked to a Fluorophore Transducer

    A 3-D molecularly imprinted polymer (MIP) film, comprising a unit for recognition of 2,4,6-trinitrophenol (TNP) embedded with a fluorophore for signal transduction and quantification, is newly fabricated and shown to be selective and sensitive to the target TNP analyte in solution. The limit of detection of this chemosensor reached the subnanogram-per-liter level of TNP concentration. Moreover, this MIP film was fabricated in a single electropolymerization step from a prepolymerization solution; the procedure is therefore readily extendable to selective determination of other nitroaromatic explosives.
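Detection limits for chemosensors of this kind are commonly estimated from the calibration curve as LOD = 3.3·σ/S, where σ is the standard deviation of the blank or of the regression residuals and S is the calibration slope. A sketch of that calculation on an illustrative, invented calibration series, not the paper's measurements:

```python
import numpy as np

# Hypothetical calibration: fluorescence response vs. TNP concentration (ng/L)
conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
signal = np.array([0.02, 0.55, 1.04, 2.11, 3.98, 8.05])

# Linear least-squares fit: signal = slope * conc + intercept
slope, intercept = np.polyfit(conc, signal, 1)
residuals = signal - (slope * conc + intercept)
sigma = residuals.std(ddof=2)  # residual standard deviation of the fit

lod = 3.3 * sigma / slope  # IUPAC-style limit of detection
print(f"LOD = {lod:.2f} ng/L")
```

With a clean, near-linear calibration the residual σ is small and the LOD lands well below the lowest non-zero standard, consistent with the subnanogram-per-liter sensitivity claimed above.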

    Automated Kidney and Liver Segmentation in MR Images in Patients with Autosomal Dominant Polycystic Kidney Disease: A Multicenter Study

    Background: Imaging-based total kidney volume (TKV) and total liver volume (TLV) are major prognostic factors in autosomal dominant polycystic kidney disease (ADPKD) and end points for clinical trials. However, volumetry is time consuming and reader dependent in clinical practice. Our aim was to develop a fully automated method for joint kidney and liver segmentation in magnetic resonance imaging (MRI) and to evaluate its performance in a multisequence, multicenter setting.
    Methods: The convolutional neural network was trained on a large multicenter dataset consisting of 992 MRI scans of 327 patients. Manual segmentation delivered ground-truth labels. The model's performance was evaluated on a separate test dataset of 93 patients (350 MRI scans) as well as a heterogeneous external dataset of 831 MRI scans from 323 patients.
    Results: The segmentation model yielded excellent performance, achieving a median per-study Dice coefficient of 0.92-0.97 for the kidneys and 0.96 for the liver. Automatically computed TKV correlated highly with manual measurements (intraclass correlation coefficient [ICC]: 0.996-0.999) with low bias and high precision (-0.2% ± 4% for axial and 0.5% ± 4% for coronal images). TLV estimation showed an ICC of 0.999 and bias/precision of -0.5% ± 3%. On the external dataset, automated TKV estimation demonstrated a bias and precision of -1% ± 7%.
    Conclusions: Our deep learning model enabled accurate segmentation of the kidneys and liver and objective assessment of TKV and TLV. Importantly, this approach was validated on axial and coronal MRI scans from 40 different scanners, making implementation in routine clinical care feasible.
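The per-study Dice coefficients reported above measure voxel overlap between automated and manual segmentations, Dice = 2|A ∩ B| / (|A| + |B|). A minimal sketch on synthetic binary masks (not the study's segmentations):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Two synthetic cubic "kidney" masks, offset by one voxel along one axis
a = np.zeros((10, 10, 10), dtype=bool)
a[2:8, 2:8, 2:8] = True
b = np.zeros((10, 10, 10), dtype=bool)
b[3:9, 2:8, 2:8] = True
d = dice_coefficient(a, b)
print(f"Dice = {d:.3f}")
```

Values near 0.95, as reported for the kidneys and liver here, indicate near-complete overlap between automated and manual contours.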