
    An Entire Renal Anatomy Extraction Network for Advanced CAD During Partial Nephrectomy

    Partial nephrectomy (PN) is a common surgery in urology. Digitization of renal anatomy benefits many computer-aided diagnosis (CAD) techniques used during PN. However, the manual delineation of the kidney vascular system and tumor on each slice is time-consuming, error-prone, and inconsistent. Therefore, we propose a method for extracting the entire renal anatomy from computed tomography angiography (CTA) images, based fully on deep learning. We adopt a coarse-to-fine workflow to extract the target tissues: first, we roughly locate the kidney region, and then crop it for more detailed extraction. The network used in our workflow is based on the 3D U-Net. To deal with the imbalance of class contributions to the loss, we combine the Dice loss with the focal loss, adding an extra weight to prevent excessive attention to any one class. We also improve the manual annotations of vessels by merging a semi-trained model's predictions with the original annotations under supervision. We performed several experiments to find the best-fitting combination of training variables. We trained and evaluated the models on our 60-case dataset drawn from 3 different sources. The average Dice similarity coefficients (DSC) for kidney, tumor, cyst, artery, and vein were 90.9%, 90.0%, 89.2%, 80.1%, and 82.2%, respectively. Our modulating weight and hybrid loss strategy increased the average DSC of all tissues by about 8-20%. Our optimization of the vessel annotations improved the average DSC by about 1-5%. These results demonstrate the effectiveness of our network for renal anatomy segmentation. Its high accuracy and full automation make it possible to quickly digitize a patient's renal anatomy, which greatly increases the feasibility and practicability of CAD applications in urologic surgery.
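
    The hybrid loss described above lends itself to a compact implementation. Below is a minimal PyTorch sketch of a per-class-weighted Dice loss combined with a focal term; the function name, weighting scheme, and hyperparameters are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def dice_focal_loss(logits, target, class_weights, gamma=2.0, eps=1e-6):
    """Hybrid Dice + focal loss with per-class weights (illustrative sketch).

    logits: (N, C, D, H, W) raw network outputs
    target: (N, D, H, W) integer class labels
    class_weights: (C,) tensor damping over-represented classes (assumed scheme)
    """
    num_classes = logits.shape[1]
    probs = torch.softmax(logits, dim=1)
    onehot = F.one_hot(target, num_classes).permute(0, 4, 1, 2, 3).float()

    # Soft Dice term, averaged over classes with the modulating weights
    dims = (0, 2, 3, 4)
    intersection = (probs * onehot).sum(dims)
    cardinality = probs.sum(dims) + onehot.sum(dims)
    dice = (2.0 * intersection + eps) / (cardinality + eps)
    dice_loss = (class_weights * (1.0 - dice)).sum() / class_weights.sum()

    # Focal term down-weights easy voxels so rare classes contribute more
    logp = torch.log(probs.clamp_min(eps))
    focal = -(onehot * (1.0 - probs) ** gamma * logp).sum(dim=1)
    focal_loss = focal.mean()

    return dice_loss + focal_loss
```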

    Patient-specific simulation environment for surgical planning and preoperative rehearsal

    Surgical simulation is common practice in the fields of surgical education and training. Numerous surgical simulators are available from commercial and academic organisations for the generic modelling of surgical tasks. However, a simulation platform has yet to be found that fulfils the key requirements for patient-specific surgical simulation of soft tissue with an effective translation into clinical practice. Patient-specific modelling is possible, but to date has been time-consuming, and consequently costly, because data preparation can be technically demanding. This motivated the research developed herein, which addresses the main challenges of biomechanical modelling for patient-specific surgical simulation. A novel implementation of soft tissue deformation and estimation of the patient-specific intraoperative environment is achieved using a position-based dynamics approach. This modelling approach overcomes the limitations of traditional physically-based approaches by providing a simulation of patient-specific models with visual and physical accuracy, stability, and real-time interaction. As the method is geometrically based, a calibration of the simulation parameters is performed, and the simulation framework is successfully validated through experimental studies. The capabilities of the simulation platform are demonstrated by the integration of different surgical planning applications relevant in the context of kidney cancer surgery. The simulation of pneumoperitoneum facilitates trocar placement planning and intraoperative surgical navigation. The implementation of deformable ultrasound simulation can assist surgeons in improving their scanning technique and defining an optimal procedural strategy. Furthermore, the simulation framework has the potential to support the development and assessment of hypotheses that cannot be tested in vivo. Specifically, the evaluation of feedback modalities, as a response to user-model interaction, demonstrates improved performance and justifies the need to integrate a feedback framework in the robot-assisted surgical setting. Open Access
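
    The core idea of position-based dynamics is that, instead of integrating forces, predicted positions are projected onto geometric constraints each frame. The sketch below shows one PBD step with distance constraints on a tissue mesh, assuming a simple Gauss-Seidel solver; it is a generic illustration of the method, not the thesis implementation.

```python
import numpy as np

def pbd_step(x, v, inv_mass, edges, rest_len, dt=1/60, iters=10, stiffness=0.9):
    """One position-based dynamics step for a soft-tissue mesh (illustrative).

    x: (N, 3) vertex positions; v: (N, 3) velocities
    inv_mass: (N,) inverse masses (0 pins a vertex)
    edges: (M, 2) index pairs; rest_len: (M,) rest lengths
    """
    gravity = np.array([0.0, -9.81, 0.0])
    v = v + dt * gravity * (inv_mass > 0)[:, None]   # external forces
    p = x + dt * v                                    # predicted positions

    for _ in range(iters):                            # Gauss-Seidel projection
        for (i, j), r in zip(edges, rest_len):
            d = p[j] - p[i]
            dist = np.linalg.norm(d)
            w = inv_mass[i] + inv_mass[j]
            if dist < 1e-9 or w == 0:
                continue
            # Project the pair onto the distance constraint |p_j - p_i| = r
            corr = stiffness * (dist - r) / (dist * w) * d
            p[i] += inv_mass[i] * corr
            p[j] -= inv_mass[j] * corr

    v = (p - x) / dt                                  # velocities from positions
    return p, v
```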

    Development of a sensorized 3D-printed realistic phantom to scale for surgical training with a daVinci robot

    The increase in surgical procedures using robotic technology over the last decade demands a high number of surgeons capable of teleoperating advanced and complex systems while safely and effectively taking advantage of the benefits of Robot-Assisted Surgery. Currently, training plans rely on Virtual Reality and simulated environments to achieve a scalable, cost-effective, and comprehensive establishment of robotic surgical skills. This work focuses on the development of a clinical scenario with sensors that assist the surgeon during training with the daVinci® system, implemented in a 3D-printed physical environment. This research aims to obtain a segmented model, to 3D print the model to simulate the real clinical scenario and thus familiarize the surgeon with the interaction of organs and tissues with the robot, and to implement sensors that assist the surgeon during training. To demonstrate the effectiveness of the assistance during the training sessions, as well as the validity of the exercises in the simulated operation, a study was conducted with twelve volunteers. Both the visual assistance and the use of 3D phantoms prove to be an optimal alternative for learning the skills required in robotic surgery, representing a significant step forward towards personalized training for each surgeon. Castillo Rosique, P. (2023). Development sensorized 3D-printed realistic phantom to scale for surgical training with a daVinci robot. Universitat Politècnica de València. http://hdl.handle.net/10251/19804

    Machine learning approaches for lung cancer diagnosis.

    The enormity of change and development in the field of medical imaging technology is hard to fathom: it not only encompasses the techniques and processes for constructing visual representations of the inside of the body for medical analysis, revealing the internal structure of different organs, but also provides a noninvasive way to diagnose various diseases and suggests efficient ways to treat them. While data from all areas of our lives are stored and collected, ready for analysis by data scientists, medical images are a particularly rich source of data. They carry a huge amount of valuable information that cannot easily be read by physicians and radiologists, yet could be used in smart ways to discover new knowledge. Therefore, the design of a computer-aided diagnostic (CAD) system that can be approved for use in clinical practice, aiding radiologists in diagnosing and detecting potential abnormalities, is of great importance. This dissertation deals with the development of a CAD system for the diagnosis of lung cancer, the second most common cancer in men after prostate cancer and in women after breast cancer. Moreover, lung cancer is the leading cause of cancer death among both genders in the USA. Recently, the number of lung cancer patients has increased dramatically worldwide, and early detection doubles a patient's chance of survival. Histological examination through biopsy is considered the gold standard for the final diagnosis of pulmonary nodules. Even though resection of pulmonary nodules is the ideal and most reliable way to reach a diagnosis, many other methods are often used to avoid the risks associated with the surgical procedure. Lung nodules are approximately spherical regions of primarily high-density tissue that are visible in computed tomography (CT) images of the lung. A pulmonary nodule is the first indication to start diagnosing lung cancer. Lung nodules can be benign (normal subjects) or malignant (cancerous subjects). Large malignant nodules (generally defined as greater than 2 cm in diameter) can be easily detected with traditional CT scanning techniques. However, the diagnostic options for small indeterminate nodules are limited, owing to the problems associated with accessing small tumors. Therefore, additional diagnostic and imaging techniques that depend on the nodules' shape and appearance are needed. The ultimate goal of this dissertation is to develop a fast, noninvasive diagnostic system that can improve the accuracy of early lung cancer diagnosis, based on the well-known hypothesis that malignant nodules differ in shape and appearance from benign nodules because of their high growth rate. The proposed methodologies introduce new shape and appearance features that can distinguish between benign and malignant nodules. To achieve this goal, a CAD system is implemented and validated using different datasets. This CAD system integrates two types of features to give a full description of a pulmonary nodule: appearance features and shape features.
    For the appearance features, several texture descriptors are developed, namely the 3D histogram of oriented gradients, the 3D spherical sector isosurface histogram of oriented gradients, the 3D adjusted local binary pattern, the 3D resolved-ambiguity local binary pattern, the multi-view analytical local binary pattern, and the Markov-Gibbs random field. Each of these descriptors gives a good description of the nodule texture and the level of its signal homogeneity, which is a distinguishing feature between benign and malignant nodules. For the shape features, the multi-view peripheral sum curvature scale space, spherical harmonics expansions, and a group of fundamental geometric features are utilized to describe the complexity of the nodule shape. Finally, a two-stage fusion of different combinations of these features is introduced. The first stage generates a primary estimate from every descriptor. The second stage consists of a single-layer autoencoder augmented with a softmax classifier that provides the final classification of the nodule. These combinations of descriptors are assembled into different frameworks that are evaluated using different datasets. The first dataset is the Lung Image Database Consortium, a publicly available benchmark dataset for lung nodule detection and diagnosis. The second dataset is locally acquired computed tomography data collected from the University of Louisville hospital under a research protocol approved by the Institutional Review Board at the University of Louisville (IRB number 10.0642). The accuracy of these frameworks was about 94%, demonstrating their promise as a valuable tool for the detection of lung cancer.
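
    To make the two-stage fusion concrete, a hedged PyTorch sketch follows: each descriptor's primary estimate enters a single-layer autoencoder whose bottleneck code feeds a softmax classifier. The class layout, layer sizes, and the training loss shown in the comment are assumptions for illustration, not the dissertation's exact architecture.

```python
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    """Second-stage fusion (illustrative): a single-layer autoencoder whose
    bottleneck code feeds a softmax classifier over benign/malignant."""

    def __init__(self, n_descriptors, hidden=16):
        super().__init__()
        # Input: one primary malignancy estimate per descriptor (first stage)
        self.encoder = nn.Linear(n_descriptors, hidden)
        self.decoder = nn.Linear(hidden, n_descriptors)
        self.classifier = nn.Linear(hidden, 2)

    def forward(self, scores):
        code = torch.sigmoid(self.encoder(scores))
        recon = self.decoder(code)        # reconstruction branch
        logits = self.classifier(code)    # benign vs. malignant logits
        return recon, logits

# Training would combine reconstruction and classification objectives, e.g.
#   loss = mse(recon, scores) + cross_entropy(logits, labels)
```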

    Advancements and Breakthroughs in Ultrasound Imaging

    Ultrasonic imaging is a powerful diagnostic tool available to medical practitioners, engineers, and researchers today. Due to its relative safety and non-invasive nature, ultrasonic imaging has become one of the most rapidly advancing technologies. These rapid advances are directly related to parallel advances in electronics, computing, and transducer technology, together with sophisticated signal processing techniques. This book focuses on state-of-the-art developments in ultrasonic imaging applications and their underlying technologies, presented by leading practitioners and researchers from many parts of the world.

    A Robust Deep Learning Method with Uncertainty Estimation for the Pathological Classification of Renal Cell Carcinoma based on CT Images

    Objectives: To develop and validate a deep learning-based diagnostic model incorporating uncertainty estimation, in order to assist radiologists in the preoperative differentiation of the pathological subtypes of renal cell carcinoma (RCC) based on CT images. Methods: Data from 668 consecutive patients with pathologically proven RCC were retrospectively collected from Center 1. Using five-fold cross-validation, a deep learning model incorporating uncertainty estimation was developed to classify RCC subtypes into clear cell RCC (ccRCC), papillary RCC (pRCC), and chromophobe RCC (chRCC). An external validation set of 78 patients from Center 2 further evaluated the model's performance. Results: In the five-fold cross-validation, the model's area under the receiver operating characteristic curve (AUC) for the classification of ccRCC, pRCC, and chRCC was 0.868 (95% CI: 0.826-0.923), 0.846 (95% CI: 0.812-0.886), and 0.839 (95% CI: 0.802-0.880), respectively. In the external validation set, the AUCs were 0.856 (95% CI: 0.838-0.882), 0.787 (95% CI: 0.757-0.818), and 0.793 (95% CI: 0.758-0.831) for ccRCC, pRCC, and chRCC, respectively. Conclusions: The developed deep learning model demonstrated robust performance in predicting the pathological subtypes of RCC, while the incorporated uncertainty estimate emphasized the importance of understanding model confidence, which is crucial for assisting clinical decision-making for patients with renal tumors. Clinical relevance statement: Our deep learning approach, integrated with uncertainty estimation, offers clinicians a dual advantage: accurate RCC subtype predictions complemented by diagnostic confidence references, promoting informed decision-making for patients with RCC. Comment: 16 pages, 6 figures
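
    The abstract does not state how the uncertainty is estimated; Monte Carlo dropout is one common choice and is sketched below purely as an illustration. The model interface, sample count, and entropy-based uncertainty measure are all assumptions, not the authors' confirmed method.

```python
import torch

def predict_with_uncertainty(model, volume, n_samples=20):
    """MC-dropout inference (illustrative): returns mean subtype probabilities
    over ccRCC/pRCC/chRCC plus a predictive-entropy uncertainty score."""
    model.train()                      # keep dropout layers active at test time
    with torch.no_grad():
        probs = torch.stack([
            torch.softmax(model(volume), dim=1) for _ in range(n_samples)
        ])                             # (n_samples, N, 3)
    mean_probs = probs.mean(dim=0)
    # Predictive entropy: high values flag cases needing radiologist review
    entropy = -(mean_probs * mean_probs.clamp_min(1e-9).log()).sum(dim=1)
    return mean_probs, entropy
```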

    Computational methods for the analysis of functional 4D-CT chest images.

    Medical imaging is an important, rapidly developing technology that has been used intensively over the last few decades for disease diagnosis and monitoring, as well as for assessing treatment effectiveness. Medical images provide a very large amount of valuable information, too much to be fully exploited by radiologists and physicians. Therefore, the design of a computer-aided diagnostic (CAD) system, which can be used as an assistive tool for the medical community, is of great importance. This dissertation deals with the development of a complete CAD system for lung cancer patients; lung cancer remains the leading cause of cancer-related death in the USA. In 2014, there were approximately 224,210 new cases of lung cancer and 159,260 related deaths. The process begins with the detection of lung cancer through the diagnosis of lung nodules (a manifestation of lung cancer). These nodules are approximately spherical regions of primarily high-density tissue that are visible in computed tomography (CT) images of the lung. The treatment of lung cancer is complex: nearly 70% of lung cancer patients require radiation therapy as part of their treatment. Radiation-induced lung injury is a limiting toxicity that may decrease cure rates and increase treatment morbidity and mortality. Finding ways to accurately detect lung injury at an early stage, and hence prevent it, would have significant positive consequences for lung cancer patients. The ultimate goal of this dissertation is to develop a clinically usable CAD system that can improve the sensitivity and specificity of early detection of radiation-induced lung injury, based on the hypothesis that irradiated lung tissues may be affected and suffer a decrease in functionality as a side effect of radiation therapy. This hypothesis has been validated by demonstrating that automatic segmentation of the lung regions and registration of consecutive respiratory phases yield elasticity, ventilation, and texture features that provide discriminatory descriptors for the early detection of radiation-induced lung injury. The proposed methodologies lead to novel indexes for distinguishing normal/healthy and injured lung tissues in clinical decision-making. To achieve this goal, a CAD system for the accurate detection of radiation-induced lung injury has been developed from three basic components: lung field segmentation, lung registration, and feature extraction with tissue classification. This dissertation starts with an exploration of the available medical imaging modalities to present the importance of medical imaging in today's clinical applications. Secondly, the methodologies, challenges, and limitations of recent CAD systems for lung cancer detection are covered. This is followed by an accurate segmentation methodology for the lung parenchyma, with a focus on pathological lungs, to extract the volume of interest (VOI) to be analyzed for the potential existence of lung injuries stemming from radiation therapy. After the segmentation of the VOI, a lung registration framework is introduced to perform the crucial step of co-aligning the intra-patient scans. This step eliminates the effects of orientation differences, motion, breathing, heartbeats, and differences in scanning parameters, so that the functionality features of the lung fields can be extracted accurately.
    The developed registration framework also helps in the evaluation and gated control of the radiotherapy through motion estimation analysis before and after the therapy dose. Finally, the radiation-induced lung injury detection framework is introduced, which combines the previous two medical image processing and analysis steps with feature estimation and classification. This framework estimates and combines both texture and functionality features. The texture features are modeled using a novel 7th-order Markov-Gibbs random field (MGRF) model that can accurately model the texture of healthy and injured lung tissues by simultaneously accounting for both vertical and horizontal relative dependencies between voxel-wise signals. The functionality features are calculated from the deformation fields obtained from the 4D-CT lung registration, which map lung voxels between successive CT scans in the respiratory cycle. These features describe the ventilation (air flow rate) of the lung tissues using the Jacobian of the deformation field, and the tissues' elasticity using the strain components calculated from the gradient of the deformation field. Finally, these features are combined in a classification model that detects the injured parts of the lung at an early stage, enabling earlier intervention.
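
    The ventilation and elasticity descriptors follow directly from the deformation field. A NumPy sketch of both computations is given below, assuming a dense displacement field on a regular voxel grid; the array layout, function name, and choice of Green-Lagrange strain are illustrative assumptions consistent with the description above.

```python
import numpy as np

def ventilation_and_strain(u, spacing=(1.0, 1.0, 1.0)):
    """Functionality maps from a 4D-CT deformation field (illustrative).

    u: (3, Z, Y, X) displacement field mapping one respiratory phase to the next.
    Returns the Jacobian determinant (local volume change, a ventilation
    surrogate) and a per-voxel strain tensor (an elasticity descriptor).
    """
    grads = np.stack([np.stack(np.gradient(u[i], *spacing), axis=0)
                      for i in range(3)])          # grads[i, j] = du_i/dx_j
    grads = np.moveaxis(grads, (0, 1), (-2, -1))   # (..., 3, 3) per voxel
    F = grads + np.eye(3)                          # deformation gradient F = I + grad(u)

    jacobian = np.linalg.det(F)                    # >1 expansion, <1 compression
    # Green-Lagrange strain E = (F^T F - I) / 2 from the deformation gradient
    strain = 0.5 * (np.einsum('...ki,...kj->...ij', F, F) - np.eye(3))
    return jacobian, strain
```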

    Image Processing and Analysis for Preclinical and Clinical Applications

    Radiomics is one of the most successful branches of research in the field of image processing and analysis, as it provides valuable quantitative information for personalized medicine. It has the potential to discover features of disease that cannot be appreciated with the naked eye, in both preclinical and clinical studies. In general, all quantitative approaches based on biomedical images, such as positron emission tomography (PET), computed tomography (CT), and magnetic resonance imaging (MRI), have a positive clinical impact on the detection of biological processes and diseases, as well as on predicting response to treatment. This Special Issue, "Image Processing and Analysis for Preclinical and Clinical Applications", addresses some gaps in this field to improve the quality of research in the clinical and preclinical environment. It consists of fourteen peer-reviewed papers covering a range of topics and applications related to biomedical image processing and analysis.

    Automated Decision Support System for Traumatic Injuries

    With trauma being one of the leading causes of death in the U.S., automated decision support systems that can accurately detect traumatic injuries and predict their outcomes are crucial for preventing secondary injuries and guiding care management. My dissertation research incorporates machine learning and image processing techniques to extract knowledge from structured (e.g., electronic health records) and unstructured (e.g., computed tomography images) data to generate real-time, robust, quantitative trauma diagnoses and prognoses. This work addresses two challenges: 1) incorporating clinical domain knowledge into deep convolutional neural networks using classical image processing techniques, and 2) using post-hoc explainers to align black-box predictive machine learning models with clinical domain knowledge. Addressing these challenges is necessary for developing trustworthy clinical decision-support systems that can be generalized across the healthcare system. Motivated by this goal, we introduce an explainable and expert-guided machine learning framework to predict the outcome of traumatic brain injury. We also propose image processing approaches to automatically assess trauma from computed tomography scans. This research comprises four projects. In the first project, we propose an explainable hierarchical machine learning framework to predict the long-term functional outcome of traumatic brain injury using information available in electronic health records. This information includes demographic data, baseline features, radiology reports, laboratory values, injury severity scores, and medical history. To build such a framework, we peer inside the black-box machine learning models to explain their rationale for each predicted risk score. Accordingly, additional layers of statistical inference and human expert validation are added to the model, which ensures the predicted risk score's trustworthiness. We demonstrate that imposing statistical and domain-knowledge "checks and balances" not only does not adversely affect the performance of the machine learning classifier but also makes it more reliable. In the second project, we introduce a framework for detecting and assessing the severity of brain subdural hematomas. First, the hematoma is segmented using a combination of hand-crafted and deep learning features. Next, we calculate the volume of the injured region to quantitatively assess its severity. We show that the combination of classical image processing and deep learning can outperform deep-learning-only methods, achieving improved average performance and robustness. In the third project, we develop a framework to identify and assess liver trauma by calculating the percentage of the liver parenchyma disrupted by trauma. First, liver parenchyma and trauma masks are segmented by a deep learning backbone. Next, these segmented regions are refined with respect to domain knowledge about the location and intensity distribution of liver trauma. This framework accurately estimated the severity of liver parenchyma trauma. In the final project, we propose a kidney segmentation method for patients with blunt abdominal trauma. This model incorporates machine learning and active contour modeling to generate kidney masks on abdominal CT images. The resulting masks can provide a region of interest for screening kidney trauma in future studies.
    Together, the four projects discussed in this thesis contribute to the diagnosis and prognosis of trauma across multiple body regions. They provide a quantitative assessment of trauma that measures the risk of adverse health outcomes more accurately than current qualitative, and sometimes subjective, clinical practice. PhD, Bioinformatics, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/168065/1/negarf_1.pd
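
    The volumetric severity scores used in the second and third projects reduce to simple mask arithmetic. The sketch below, with assumed mask names and spacing conventions, shows how an injury volume and a disrupted-parenchyma percentage might be computed from the segmentation outputs; it is an illustration, not the dissertation's code.

```python
import numpy as np

def trauma_severity(organ_mask, injury_mask, voxel_spacing_mm):
    """Quantitative severity from segmentation masks (illustrative sketch).

    organ_mask, injury_mask: boolean 3D arrays from the segmentation stage
    voxel_spacing_mm: (dz, dy, dx) CT voxel dimensions in millimetres
    """
    voxel_volume_ml = np.prod(voxel_spacing_mm) / 1000.0   # mm^3 -> mL
    injured = np.logical_and(organ_mask, injury_mask)

    injury_volume_ml = injured.sum() * voxel_volume_ml      # e.g. hematoma volume
    disrupted_pct = 100.0 * injured.sum() / max(organ_mask.sum(), 1)
    return injury_volume_ml, disrupted_pct
```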

    Nephroblastoma in MRI Data

    The main objective of this work is the mathematical analysis of nephroblastoma in MRI sequences. We begin by providing two different datasets for segmentation and classification. Based on the first dataset, we analyze the current clinical practice of therapy planning on the basis of the annotations of a single radiologist. Our benchmark shows that this approach is not optimal and that there may be significant differences between human annotators, even radiologists. In addition, we demonstrate that the approximation of the tumor shape currently used is too coarse-grained and thus prone to errors. We address this problem and develop a method for interactive segmentation that allows an intuitive and accurate annotation of the tumor. While the first part of this thesis is mainly concerned with the segmentation of Wilms' tumors, the second part deals with the reliability of diagnosis and the planning of the course of therapy. The second dataset we compiled allows us to develop a method that dramatically improves the differential diagnosis between nephroblastoma and its precursor lesion, nephroblastomatosis. Finally, we show that even the standard MRI modality for Wilms' tumors is sufficient to estimate the developmental tendencies of nephroblastoma under chemotherapy.
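
    The annotator-variability benchmark mentioned above rests on overlap measures between annotations. A minimal sketch of the Dice overlap between two annotators' tumor masks follows; this is the standard metric for such comparisons, not code from the thesis.

```python
import numpy as np

def dice_overlap(mask_a, mask_b):
    """Dice overlap between two annotators' tumor masks: 1.0 means identical
    annotations; low scores between radiologists indicate the variability
    the benchmark above reports."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom
```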