14 research outputs found

    Kidney and Kidney-tumor Segmentation Using Cascaded V-Nets

    Kidney cancer is the seventh most common cancer worldwide, accounting for an estimated 140,000 deaths annually. Kidney segmentation in volumetric medical images plays an important role in clinical diagnosis, radiotherapy planning, interventional guidance and patient follow-up; however, to our knowledge, no automatic kidney-tumor segmentation method is present in the literature. In this paper, we address the challenge of simultaneous semantic segmentation of kidney and tumor by adopting a cascaded V-Net framework. The first V-Net in our pipeline produces a region of interest around the probable location of the kidney and tumor, which allows the unwanted region of the CT volume to be discarded. A second set of V-Nets is trained separately for the kidney and the tumor, producing the kidney and tumor masks respectively. The final segmentation is obtained by combining the two masks. Our method is trained and validated on 190 and 20 patient scans, respectively, accessed from the 2019 Kidney Tumor Segmentation Challenge database. We achieved a validation accuracy, in terms of the Sørensen Dice coefficient, of about 97%.
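    The two-stage cascade described above can be sketched as follows. This is an illustrative skeleton only: the margin size and label values are assumptions, and the V-Nets themselves are replaced by placeholder masks.

```python
import numpy as np

def crop_roi(volume, coarse_mask, margin=8):
    """Stage 1: crop the CT volume to a bounding box around the coarse
    kidney+tumor prediction, discarding the unwanted region."""
    coords = np.argwhere(coarse_mask)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + 1 + margin, volume.shape)
    box = tuple(slice(l, h) for l, h in zip(lo, hi))
    return volume[box], box

def fuse_masks(kidney_mask, tumor_mask):
    """Stage 2: merge the separately predicted masks into one label map
    (0 = background, 1 = kidney, 2 = tumor; the label scheme is assumed)."""
    labels = np.zeros(kidney_mask.shape, dtype=np.uint8)
    labels[kidney_mask > 0] = 1
    labels[tumor_mask > 0] = 2  # tumor wins where the two masks overlap
    return labels
```

    In a real pipeline, the second-stage V-Nets would run on the cropped sub-volume, and their outputs would be mapped back into the full volume before fusion.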

    Hyper Vision Net: Kidney Tumor Segmentation Using Coordinate Convolutional Layer and Attention Unit

    The KiTS19 challenge paves the way to accelerate the development of solid kidney tumor semantic segmentation methodologies. Accurate segmentation of kidney tumors in computed tomography (CT) images is a challenging task due to non-uniform motion, similar appearance and varying shapes. Motivated by this, we present in this manuscript a novel kidney tumor segmentation method using a deep learning network termed the Hyper Vision Net model. Most existing methods use a modified version of U-Net to segment the kidney tumor region. In the proposed architecture, we introduce supervision layers in the decoder part, which refine even minimal regions in the output. A dataset of real arterial-phase abdominal CT scans of 300 patients, comprising 45,964 images, was provided by KiTS19 for training and validation of the proposed model. Compared with state-of-the-art segmentation methods, the results demonstrate the superiority of our approach, with training Dice scores of 0.9552 and 0.9633 for the tumor and kidney regions, respectively.
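    The supervision layers in the decoder amount to deep supervision: each decoder stage emits its own prediction and contributes to the training loss. A minimal sketch of that idea follows; the soft Dice loss and the per-stage weights are assumptions, not the paper's exact formulation.

```python
import numpy as np

def dice_coeff(pred, target, eps=1e-6):
    """Soft Dice coefficient between a predicted mask and the ground truth."""
    inter = (pred * target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def deep_supervision_loss(stage_outputs, target, weights=(0.5, 0.3, 0.2)):
    """Weighted sum of per-stage Dice losses: every decoder stage receives a
    direct gradient signal, which helps refine small regions in the output."""
    return sum(w * (1.0 - dice_coeff(p, target))
               for w, p in zip(weights, stage_outputs))
```

    With all stages predicting the target perfectly the loss is zero; each poorly predicting stage adds its weighted share of error.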

    Segmentation of kidney and renal collecting system on 3D computed tomography images

    Surgical training for minimally invasive kidney interventions (MIKI) is of great importance within the urology field. Simulating MIKI in a patient-specific virtual environment can be used for pre-operative planning with the real patient's anatomy, possibly resulting in a reduction of intra-operative medical complications. However, currently validated VR simulators perform training on a set of standard models and do not allow patient-specific training. For patient-specific training, the standard simulator would need to be adapted using personalized models, which can be extracted from pre-operative images using segmentation strategies. To date, several methods have been proposed to accurately segment the kidney in computed tomography (CT) images. However, most of these works focused on kidney segmentation only, neglecting the extraction of its internal compartments. In this work, we propose to adapt a coupled formulation of the B-Spline Explicit Active Surfaces (BEAS) framework to simultaneously segment the kidney and the renal collecting system (CS) from CT images. Moreover, from the difference between the kidney and CS segmentations, the renal parenchyma can also be extracted. The segmentation process is guided by a new energy functional that combines both gradient- and region-based energies. The method was evaluated on 10 kidneys from 5 CT datasets with different image properties.
Overall, the results demonstrate the accuracy of the proposed strategy, with Dice overlaps of 92.5%, 86.9% and 63.5%, and point-to-surface errors of around 1.6 mm, 1.9 mm and 4 mm for the kidney, renal parenchyma and CS, respectively. This work was supported by grants NORTE-01-0145-FEDER0000I3 and NORTE-01-0145-FEDER-024300 of the Northern Portugal Regional Operational Programme (Norte2020), under the Portugal 2020 Partnership Agreement, through the European Regional Development Fund (FEDER); by FEDER funds through the Competitiveness Factors Operational Programme (COMPETE); and by national funds through the FCT-Fundação para a Ciência e a Tecnologia, under the scope of project POCI-01-0145-FEDER-007038. The authors acknowledge FCT-Fundação para a Ciência e a Tecnologia, Portugal, and the European Social Fund, European Union, for funding support through the Programa Operacional Capital Humano (POCH).
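A hybrid energy of the kind described, combining a region term with a gradient (edge) term, can be sketched as follows. This is a generic Chan-Vese-style illustration with an assumed mixing weight `alpha` and a crude boundary proxy; it is not the paper's actual BEAS functional.

```python
import numpy as np

def hybrid_energy(image, mask, alpha=0.5):
    """Illustrative hybrid energy: a region term that rewards piecewise-constant
    intensity fit inside/outside the mask, plus a gradient term that lowers the
    energy when the mask boundary sits on strong edges."""
    inside, outside = image[mask], image[~mask]
    region = (((inside - inside.mean()) ** 2).sum()
              + ((outside - outside.mean()) ** 2).sum())
    gy, gx = np.gradient(image.astype(float))
    grad_mag = np.hypot(gx, gy)
    boundary = mask ^ np.roll(mask, 1, axis=0)  # crude boundary proxy
    gradient_term = -grad_mag[boundary].sum()   # lower energy on strong edges
    return alpha * region + (1 - alpha) * gradient_term
```

    A segmentation aligned with an intensity step yields a lower energy than a misaligned one, which is the property an evolving active surface exploits.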

    Real-time computer-aided diagnosis of focal pancreatic masses from endoscopic ultrasound imaging based on a hybrid convolutional and long short-term memory neural network model

    Differential diagnosis of focal pancreatic masses is based on endoscopic ultrasound (EUS) guided fine needle aspiration biopsy (EUS-FNA/FNB). Several imaging techniques (i.e. gray-scale, color Doppler, contrast enhancement and elastography) are used for differential diagnosis; however, diagnosis remains highly operator dependent. To address this problem, machine learning algorithms (MLA) can generate an automatic computer-aided diagnosis (CAD) by analyzing a large number of clinical images in real time. We aimed to develop an MLA to characterize focal pancreatic masses during the EUS procedure. The study included 65 patients with focal pancreatic masses, with 20 EUS images selected from each patient (gray-scale, color Doppler, arterial- and venous-phase contrast enhancement, and elastography). Images were classified based on the cytopathology exam as chronic pseudotumoral pancreatitis (CPP), neuroendocrine tumor (PNET) or ductal adenocarcinoma (PDAC). The MLA is based on a deep learning method that combines convolutional (CNN) and long short-term memory (LSTM) neural networks. 2688 images were used for training and 672 images for testing the deep learning models. The CNN was developed to identify the discriminative features of images, while the LSTM network was used to extract the dependencies between images. The model predicted the clinical diagnosis with an area under the curve (AUC) of 0.98 and an overall accuracy of 98.26%. The negative (NPV) and positive (PPV) predictive values, with their corresponding 95% confidence intervals (CI), are 96.7% [94.5, 98.9] and 98.1% [96.81, 99.4] for PDAC; 96.5% [94.1, 98.8] and 99.7% [99.3, 100] for CPP; and 98.9% [97.5, 100] and 98.3% [97.1, 99.4] for PNET. Following further validation on an independent test cohort, this method could become an efficient CAD tool to differentiate focal pancreatic masses in real time.
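    The hybrid architecture can be sketched as below: a CNN-style extractor produces one feature vector per image, and an LSTM aggregates the patient's image sequence into a final classification. The feature extractor, dimensions and randomly initialized weights are purely illustrative stand-ins for the trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def cnn_features(image):
    """Stand-in for the CNN branch: a pooled feature vector per EUS image.
    (A real model would use a trained convolutional network.)"""
    return np.array([image.mean(), image.std(),
                     np.abs(np.diff(image, axis=0)).mean()])

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; gates are stacked as [input, forget, cell, output]."""
    z = W @ x + U @ h + b
    i, f, g, o = np.split(z, 4)
    i, f, o = 1/(1+np.exp(-i)), 1/(1+np.exp(-f)), 1/(1+np.exp(-o))
    c = f * c + i * np.tanh(g)
    h = o * np.tanh(c)
    return h, c

def classify_sequence(images, hidden=8, classes=3):
    """Run the feature extractor on each image of one patient, then an LSTM
    over the feature sequence; the final hidden state feeds a linear head."""
    feats = [cnn_features(im) for im in images]
    d = len(feats[0])
    W = rng.standard_normal((4 * hidden, d)) * 0.1
    U = rng.standard_normal((4 * hidden, hidden)) * 0.1
    b = np.zeros(4 * hidden)
    Wo = rng.standard_normal((classes, hidden)) * 0.1
    h = c = np.zeros(hidden)
    for x in feats:
        h, c = lstm_step(x, h, c, W, U, b)
    return Wo @ h  # one logit per class (e.g. CPP / PNET / PDAC)
```

    The design point is that the CNN captures per-image appearance while the LSTM captures dependencies across the 20 images acquired per patient.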

    A Survey on Deep Learning in Medical Image Analysis

    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed. Comment: Revised survey includes expanded discussion section and reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 201