7 research outputs found

    Neural network based ensemble model to predict radiation induced lymphopenia after concurrent chemo-radiotherapy for non-small cell lung cancer from two institutions

    No full text
    The use of adjuvant immune checkpoint inhibitors (ICIs) after concurrent chemo-radiation therapy (CCRT) has become the standard of care for locally advanced non-small cell lung cancer (LA-NSCLC). However, prolonged radiotherapy regimens are known to cause radiation-induced lymphopenia (RIL), a long-neglected toxicity that has been shown to correlate with response to ICIs and with survival of patients treated with adjuvant ICI after CCRT. In this study, we aim to develop a novel neural network (NN) approach that integrates patient characteristics, treatment-related variables, and differential dose-volume histograms (dDVHs) of the lung and heart to predict the incidence of RIL at the end of treatment. Multi-institutional data from 139 LA-NSCLC patients treated at two hospitals were collected for training and validation of the proposed model. Ensemble learning was combined with a bootstrap strategy to stabilize the model, which was evaluated internally using repeated cross-validation. The performance of the proposed model was compared to that of conventional models with the same input features, namely logistic regression (LR) and random forests (RF), using the area under the curve (AUC) of receiver operating characteristic (ROC) curves. Our model (AUC = 0.77) outperformed both comparison models (AUC = 0.72 and 0.74), indicating that the convolutional structure of the network successfully abstracts additional information from the differential DVHs, which we examined using Gradient-weighted Class Activation Mapping (Grad-CAM). This study shows that clinical factors combined with dDVHs can be used to predict the risk of RIL for an individual patient, and it shows a path toward preventing lymphopenia through patient-specific modifications of the radiotherapy plan.
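    The abstract describes a bootstrap-stabilized ensemble of neural networks evaluated by ROC AUC. Below is a minimal sketch of that general recipe on synthetic stand-in data; the hidden-layer sizes, the ensemble size of 25, and the use of scikit-learn's MLPClassifier are illustrative assumptions, not the authors' convolutional architecture over dDVH curves.

```python
# Sketch of a bootstrap-ensemble classifier, assuming synthetic features.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in for clinical features + dDVH bins (139 patients, as in the paper).
X = rng.normal(size=(139, 40))
y = (X[:, :5].sum(axis=1) + rng.normal(scale=2.0, size=139) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

n_members = 25  # ensemble size (illustrative choice)
probs = np.zeros(len(X_test))

for seed in range(n_members):
    # Each member trains on a bootstrap resample, which stabilizes the ensemble.
    idx = rng.integers(0, len(X_train), size=len(X_train))
    member = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                           random_state=seed)
    member.fit(X_train[idx], y_train[idx])
    probs += member.predict_proba(X_test)[:, 1]

probs /= n_members  # average the members' predicted risks
print(f"ensemble ROC AUC: {roc_auc_score(y_test, probs):.2f}")
```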

    The role of computational methods for automating and improving clinical target volume definition

    No full text
    Treatment planning in radiotherapy distinguishes three target volume concepts: the gross tumor volume (GTV), the clinical target volume (CTV), and the planning target volume (PTV). Over time, GTV definition and PTV margins have improved through the development of novel imaging techniques and better image guidance, respectively. CTV definition is sometimes considered the weakest element in the planning process, and it is particularly complex because the extent of microscopic disease cannot be seen using currently available in vivo imaging techniques. Instead, CTV definition has to incorporate knowledge of the patterns of tumor progression. While CTV delineation has largely been considered the domain of radiation oncologists, this paper, arising from a 2019 ESTRO Physics research workshop, discusses the contributions that medical physics and computer science can make by developing computational methods to support CTV definition. First, we review the role of image segmentation algorithms, which may in part automate CTV delineation through segmentation of lymph node stations or of normal tissues that represent anatomical boundaries of microscopic tumor progression; the recent success of deep convolutional neural networks has also enabled learning entire CTV delineations from examples. Second, we discuss the use of mathematical models of tumor progression for CTV definition, using as an example the application of glioma growth models to facilitate GTV-to-CTV expansion for glioblastoma that is consistent with neuroanatomy. We further consider statistical machine learning models that quantify lymphatic metastatic progression of tumors, which may eventually improve elective CTV definition. Lastly, we discuss approaches to incorporate uncertainty in CTV definition into treatment plan optimization, as well as general limitations of the CTV concept for infiltrating tumors without natural boundaries.
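    As a concrete illustration of one computational ingredient mentioned above, the sketch below implements the simplest geometric baseline for GTV-to-CTV expansion: a uniform margin computed with a Euclidean distance transform, cropped at an anatomical barrier. The toy volume, 1 mm voxel spacing, and 15 mm margin are assumptions; the glioma growth models discussed in the paper are far more sophisticated than this.

```python
# Minimal sketch: uniform GTV-to-CTV expansion cropped at an anatomical barrier.
import numpy as np
from scipy.ndimage import distance_transform_edt

# Toy 3D volume with 1 mm isotropic voxels (assumption).
shape = (60, 60, 60)
gtv = np.zeros(shape, dtype=bool)
gtv[28:33, 28:33, 28:33] = True        # small cubic "tumor"

barrier = np.zeros(shape, dtype=bool)  # e.g., bone or falx as a no-grow region
barrier[:, 40, :] = True

# Distance (in mm) from every voxel to the nearest GTV voxel.
dist_to_gtv = distance_transform_edt(~gtv)

margin_mm = 15.0
ctv = (dist_to_gtv <= margin_mm) & ~barrier
print(f"GTV voxels: {gtv.sum()}, CTV voxels: {ctv.sum()}")
```

    Note that this only crops the expansion at the barrier; a true geodesic expansion would also exclude tissue reachable only by passing through it, which is one motivation for the anatomy-aware growth models the paper discusses.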

    MedShapeNet -- A Large-Scale Dataset of 3D Medical Shapes for Computer Vision

    No full text
    Prior to the deep learning era, shape was commonly used to describe objects. Nowadays, state-of-the-art (SOTA) algorithms in medical imaging are predominantly diverging from computer vision, where voxel grids, meshes, point clouds, and implicit surface models are used. This is seen from numerous shape-related publications in premier vision conferences as well as the growing popularity of ShapeNet (about 51,300 models) and Princeton ModelNet (127,915 models). For the medical domain, we present a large collection of anatomical shapes (e.g., bones, organs, vessels) and 3D models of surgical instruments, called MedShapeNet, created to facilitate the translation of data-driven vision algorithms to medical applications and to adapt SOTA vision algorithms to medical problems. As a unique feature, we directly model the majority of shapes on the imaging data of real patients. As of today, MedShapeNet includes 23 datasets with more than 100,000 shapes that are paired with annotations (ground truth). Our data is freely accessible via a web interface and a Python application programming interface (API) and can be used for discriminative, reconstructive, and variational benchmarks as well as various applications in virtual, augmented, or mixed reality, and 3D printing. As examples, we present use cases in the fields of brain tumor classification, facial and skull reconstruction, multi-class anatomy completion, education, and 3D printing. In the future, we will extend the data and improve the interfaces. The project pages are: https://medshapenet.ikim.nrw/ and https://github.com/Jianningli/medshapenet-feedbac
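    As a hedged sketch of how a shape from such a collection might feed a point-cloud classification benchmark, the snippet below uses the generic trimesh library to sample and normalize a surface point cloud. The file name in the comment is a hypothetical placeholder, and this is not the MedShapeNet Python API itself (see the project pages above for the actual interface).

```python
# Sketch: turn a surface mesh into a normalized point cloud for a classifier.
import numpy as np
import trimesh

# In practice you would load a shape downloaded from the dataset, e.g.:
#   mesh = trimesh.load("liver_0001.stl")   # hypothetical file name
# Here a unit icosphere stands in so the sketch runs without any download.
mesh = trimesh.creation.icosphere(subdivisions=3)

# Sample a fixed-size point cloud from the surface -- a common input format
# for point-cloud classifiers (PointNet-style networks).
points, _ = trimesh.sample.sample_surface(mesh, count=2048)

# Center and scale into the unit sphere so different anatomies are comparable.
points = points - points.mean(axis=0)
points = points / np.linalg.norm(points, axis=1).max()

print(points.shape)  # (2048, 3) -> ready as network input
```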
