85 research outputs found

    FPGA-Based Portable Ultrasound Scanning System with Automatic Kidney Detection

    Bedside diagnosis using portable ultrasound scanning (PUS) offers comfortable point-of-care diagnosis with various clinical advantages. In general, however, ultrasound scanners suffer from a poor signal-to-noise ratio, and physicians who operate the device at the point of care may not be adequately trained to perform high-level diagnosis. Such scenarios can be mitigated by incorporating ambient intelligence in PUS. In this paper, we propose an architecture for a PUS system whose abilities include automated kidney detection in real time. Automated kidney detection is performed by training the Viola–Jones algorithm on a kidney dataset covering diversified shapes and sizes. The kidney detection algorithm is observed to deliver very good detection accuracy. The proposed PUS with the kidney detection algorithm is implemented on a single Xilinx Kintex-7 FPGA, integrated with a Raspberry Pi ARM processor running at 900 MHz
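
The speed of the Viola–Jones detector mentioned above comes from integral images, which make any rectangular (Haar-like) feature computable in four table lookups. A minimal illustrative numpy sketch of that core step (not the authors' FPGA implementation):

```python
import numpy as np

def integral_image(img):
    # Zero-padded summed-area table: ii[y, x] = sum of img[:y, :x]
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, y, x, h, w):
    # Sum of pixels in img[y:y+h, x:x+w] via four table lookups, O(1)
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect(ii, y, x, h, w):
    # Two-rectangle Haar-like feature: left half minus right half
    half = w // 2
    return rect_sum(ii, y, x, h, half) - rect_sum(ii, y, x + half, h, half)
```

In the full cascade, many such features, selected by AdaBoost, are evaluated over a sliding window; the integral image keeps each evaluation constant-time regardless of window size.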

    Segmentation of kidney and renal collecting system on 3D computed tomography images

    Surgical training for minimally invasive kidney interventions (MIKI) has huge importance within the urology field. Simulating MIKI in a patient-specific virtual environment can be used for pre-operative planning with the real patient's anatomy, possibly reducing intra-operative medical complications. However, the validated VR simulators perform training on a set of standard models and do not allow patient-specific training. For patient-specific training, the standard simulator would need to be adapted with personalized models, which can be extracted from pre-operative images using segmentation strategies. To date, several methods have been proposed to accurately segment the kidney in computed tomography (CT) images. However, most of these works focused on kidney segmentation only, neglecting the extraction of its internal compartments. In this work, we propose to adapt a coupled formulation of the B-Spline Explicit Active Surfaces (BEAS) framework to simultaneously segment the kidney and the renal collecting system (CS) from CT images. Moreover, from the difference of the kidney and CS segmentations, one can also extract the renal parenchyma. The segmentation process is guided by a new energy functional that combines gradient- and region-based energies. The method was evaluated on 10 kidneys from 5 CT datasets with different image properties.
Overall, the results demonstrate the accuracy of the proposed strategy, with a Dice overlap of 92.5%, 86.9% and 63.5%, and a point-to-surface error of around 1.6 mm, 1.9 mm and 4 mm for the kidney, renal parenchyma and CS, respectively. This work was supported by projects NORTE-01-0145-FEDER-000013 and NORTE-01-0145-FEDER-024300, funded by the Northern Portugal Regional Operational Programme (Norte2020) under the Portugal 2020 Partnership Agreement through the European Regional Development Fund (FEDER); by FEDER funds through the Competitiveness Factors Operational Programme (COMPETE); and by national funds through FCT - Fundação para a Ciência e a Tecnologia, under project POCI-01-0145-FEDER-007038. The authors acknowledge FCT and the European Social Fund, European Union, for funding support through the Programa Operacional Capital Humano (POCH).
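
The Dice overlap and point-to-surface error reported above are standard segmentation metrics; a generic numpy sketch of both (not the BEAS method itself), with the point-to-surface distance computed by brute-force nearest neighbor:

```python
import numpy as np

def dice(a, b):
    # Dice overlap between two binary masks: 2|A∩B| / (|A|+|B|)
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def point_to_surface(points, surface):
    # Mean distance from each contour point to its nearest surface point
    d = np.linalg.norm(points[:, None, :] - surface[None, :, :], axis=2)
    return d.min(axis=1).mean()
```

For large meshes a k-d tree would replace the brute-force pairwise distance, but the definition of the metric is the same.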

    Identification of space-occupying lesions in medical imaging of the kidney: A review.

    The kidneys can be affected by renal masses or space-occupying lesions. The term renal mass covers all benign and malignant processes that occupy, distort or affect the renal parenchyma and its environment, regardless of etiology, shape and volume. Renal masses therefore include all cystic formations (abscesses), calculi, pseudotumors, neoplasms, inflammatory diseases and traumatic lesions. For the evaluation of cystic renal masses in medical imaging, based on characteristics such as the wall (thin, irregular, thickened), septa (thin, irregular, thickened), borders (defined or not) and size, classifications such as the Bosniak classification shown in Table 1 are used; it sorts renal cysts into five categories based on imaging appearance, helping predict whether a tumor is benign or malignant

    Segmentation of Kidney and Renal Tumor in CT Scans Using Convolutional Networks

    Accurate segmentation of the kidney and renal tumor in CT images is a prerequisite for surgery planning. However, this task remains a challenge. In this report, we use convolutional networks (ConvNets) to automatically segment the kidney and renal tumor. Specifically, we adopt a 2D ConvNet to select the range of slices to be segmented in the inference phase, accelerating segmentation, while a 3D ConvNet is trained to segment regions of interest within this narrow range. In the localization phase, CT images from several publicly available datasets were used to learn the localizer. This localizer, fine-tuned from AlexNet pre-trained on ImageNet, filters out slices unlikely to contain the kidney or renal tumor. In the segmentation phase, a simple U-Net with a large patch size (160×160×80) was trained to delineate the contours of the kidney and renal tumor. In the 2019 MICCAI Kidney Tumor Segmentation (KiTS19) Challenge, 5-fold cross-validation was performed on the training set: 168 (80%) CT scans were used for training and the remaining 42 (20%) cases for validation. The resulting average Dice similarity coefficients are 0.9662 and 0.7905 for the kidney and renal tumor, respectively
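
The localizer's job, narrowing the volume to a contiguous slice range for the 3D network, can be sketched as simple post-processing of per-slice scores (the `scores` list here is a hypothetical stand-in for the fine-tuned AlexNet's outputs):

```python
def slice_range(scores, threshold=0.5):
    # Indices of slices the 2D localizer deems likely to contain kidney/tumor;
    # returns the enclosing half-open [start, stop) range so the 3D net
    # only segments one contiguous block of slices
    idx = [i for i, s in enumerate(scores) if s >= threshold]
    if not idx:
        return (0, 0)
    return (idx[0], idx[-1] + 1)
```

Restricting 3D segmentation to this range is what accelerates inference: the expensive volumetric network never sees slices the cheap 2D classifier has ruled out.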

    Combining Shape and Learning for Medical Image Analysis

    Automatic methods with the ability to make accurate, fast and robust assessments of medical images are highly requested in medical research and clinical care. Excellent automatic algorithms are characterized by speed, allowing for scalability, and an accuracy comparable to an expert radiologist. They should produce morphologically and physiologically plausible results while generalizing well to unseen and rare anatomies. Still, there are few, if any, applications where today's automatic methods succeed to meet these requirements. The focus of this thesis is two tasks essential for enabling automatic medical image assessment, medical image segmentation and medical image registration. Medical image registration, i.e. aligning two separate medical images, is used as an important sub-routine in many image analysis tools as well as in image fusion, disease progress tracking and population statistics. Medical image segmentation, i.e. delineating anatomically or physiologically meaningful boundaries, is used for both diagnostic and visualization purposes in a wide range of applications, e.g. in computer-aided diagnosis and surgery. The thesis comprises five papers addressing medical image registration and/or segmentation for a diverse set of applications and modalities, i.e. pericardium segmentation in cardiac CTA, brain region parcellation in MRI, multi-organ segmentation in CT, heart ventricle segmentation in cardiac ultrasound and tau PET registration. The five papers propose competitive registration and segmentation methods enabled by machine learning techniques, e.g. random decision forests and convolutional neural networks, as well as by shape modelling, e.g. multi-atlas segmentation and conditional random fields

    A non-invasive image based system for early diagnosis of prostate cancer.

    Prostate cancer is the second most fatal cancer experienced by American males. The average American male has a 16.15% chance of developing prostate cancer, which is 8.38% higher than lung cancer, the next most likely cancer. The current in-vitro techniques based on analyzing a patient's blood and urine have several limitations concerning their accuracy. In addition, the Prostate-Specific Antigen (PSA) blood-based test has a high chance of false positive diagnosis, ranging from 28%-58%. Biopsy remains the gold standard for the assessment of prostate cancer, but only as a last resort because of its invasive nature, high cost, and potential morbidity rates. The major limitation of the relatively small needle biopsy samples is the higher possibility of producing a false positive diagnosis. Moreover, the visual inspection system (e.g., the Gleason grading system) is not a quantitative technique, and different observers may classify a sample differently, leading to discrepancies in diagnosis. As reported in the literature, early detection of prostate cancer is a crucial step in decreasing prostate cancer related deaths. Thus, there is an urgent need for an objective, non-invasive image-based technology for early detection of prostate cancer. The objective of this dissertation is to develop a computer vision methodology, later translated into a clinically usable software tool, that can improve the sensitivity and specificity of early prostate cancer diagnosis, based on the well-known hypothesis that malignant tumors are better connected with the blood vessels than benign tumors.
Therefore, using either Diffusion-Weighted Magnetic Resonance Imaging (DW-MRI) or Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI), we relate the blood supply of the detected prostate tumors to their malignancy by estimating either the Apparent Diffusion Coefficient (ADC) in the prostate or perfusion parameters. We intend to validate this hypothesis by demonstrating that automatic segmentation of the prostate from either DW-MRI or DCE-MRI, after handling its local motion, provides discriminatory features for early prostate cancer diagnosis. The proposed CAD system consists of three major components, the first two of which constitute new research contributions to a challenging computer vision problem: (1) a novel shape-based segmentation approach to segment the prostate from either low-contrast DW-MRI or DCE-MRI data; (2) a novel iso-contours-based non-rigid registration approach to ensure voxel-on-voxel matches of all data, which may otherwise be hindered by gross patient motion, transmitted respiratory effects, and intrinsic and transmitted pulsatile effects; and (3) probabilistic models of the estimated diffusion and perfusion features for both malignant and benign tumors. Our results showed a 98% classification accuracy using a Leave-One-Subject-Out (LOSO) approach based on the estimated ADC for 30 patients (12 diagnosed as malignant; 18 as benign). These results show the promise of the proposed image-based diagnostic technique as a supplement to current technologies for diagnosing prostate cancer
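
Leave-One-Subject-Out cross-validation, as used for the 30-patient evaluation above, holds out all data from one subject per fold so that no subject appears in both training and test sets. A minimal generic sketch:

```python
def leave_one_subject_out(subject_ids):
    # Yield (train_indices, test_indices) pairs, holding out one subject per fold.
    # subject_ids: per-sample subject labels, e.g. ['p1', 'p1', 'p2', ...]
    subjects = sorted(set(subject_ids))
    for s in subjects:
        test = [i for i, sid in enumerate(subject_ids) if sid == s]
        train = [i for i, sid in enumerate(subject_ids) if sid != s]
        yield train, test
```

Splitting by subject rather than by sample avoids the optimistic bias that occurs when multiple scans of the same patient leak across the train/test boundary.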

    Analysis of contrast-enhanced medical images.

    Early detection of human organ diseases is of great importance for accurate diagnosis and the institution of appropriate therapies, and can potentially prevent progression to end-stage disease by detecting precursors of impaired organ functionality. It also assists clinicians in therapy evaluation, disease progression tracking, and surgical operations. Advances in functional and contrast-enhanced (CE) medical imaging have enabled accurate noninvasive evaluation of organ functionality thanks to their ability to provide superior anatomical and functional information about the tissue of interest. The main objective of this dissertation is to develop a computer-aided diagnostic (CAD) system for analyzing complex data from CE magnetic resonance imaging (MRI). The developed CAD system has been tested in three case studies: (i) early detection of acute renal transplant rejection; (ii) evaluation of myocardial perfusion in patients with ischemic heart disease after heart attack; and (iii) early detection of prostate cancer. However, developing a noninvasive CAD system for the analysis of CE medical images is subject to multiple challenges, including, but not limited to, image noise and inhomogeneity, nonlinear signal intensity changes over the time course of data acquisition, appearance and shape changes (deformations) of the organ of interest during data acquisition, and determination of the best features (indexes) that describe the perfusion of a contrast agent (CA) into the tissue.
To address these challenges, this dissertation focuses on building new mathematical models and learning techniques that facilitate accurate analysis of CA perfusion in living organs, including: (i) accurate mathematical models for the segmentation of the object of interest, which integrate object shape and appearance features in terms of pixel/voxel-wise image intensities and their spatial interactions; (ii) motion correction techniques that combine global and local models and exploit geometric features, rather than image intensities, to avoid problems associated with nonlinear intensity variations of the CE images; and (iii) fusion of multiple features using a genetic algorithm. The proposed techniques have been integrated into CAD systems that have been tested in, but are not limited to, three clinical studies. First, a noninvasive CAD system is proposed for the early and accurate diagnosis of acute renal transplant rejection using dynamic contrast-enhanced MRI (DCE-MRI). Acute rejection, the immunological response of the human immune system to a foreign kidney, is the most severe cause of renal dysfunction among other diagnostic possibilities, including acute tubular necrosis and immune drug toxicity. In the U.S., approximately 17,736 renal transplants are performed annually, and given the limited number of donors, transplanted kidney salvage is an important medical concern. Thus far, biopsy remains the gold standard for the assessment of renal transplant dysfunction, but only as a last resort because of its invasive nature, high cost, and potential morbidity rates. The proposed CAD system, evaluated on 50 independent in-vivo cases, achieved a diagnostic accuracy of 96% with a 95% confidence interval. These results clearly demonstrate the promise of the proposed image-based diagnostic CAD system as a supplement to current technologies, such as nuclear imaging and ultrasonography, for determining the type of kidney dysfunction.
Second, a comprehensive CAD system is developed for the characterization of myocardial perfusion and clinical status in heart failure and novel myoregeneration therapy using cardiac first-pass MRI (FP-MRI). Heart failure is considered the most important cause of morbidity and mortality in cardiovascular disease, affecting approximately 6 million U.S. patients annually. Ischemic heart disease is considered its most common underlying cause; therefore, detecting heart failure in its earliest forms is essential to prevent its relentless progression to premature death. While current medical studies focus on detecting pathological tissue and assessing the contractile function of the diseased heart, this dissertation addresses the key issue of the effects of myoregeneration therapy on the associated blood nutrient supply. Quantitative and qualitative assessment of a cohort of 24 perfusion data sets demonstrated the ability of the proposed framework to reveal regional perfusion improvements with therapy, and transmural perfusion differences across the myocardial wall; thus, it can aid in the follow-up of patients undergoing myoregeneration therapy. Finally, an image-based CAD system for early detection of prostate cancer using DCE-MRI is introduced. Prostate cancer is the most frequently diagnosed malignancy among men and remains the second leading cause of cancer-related death in the USA, with more than 238,000 new cases and a mortality of about 30,000 in 2013. Early diagnosis of prostate cancer can therefore improve the effectiveness of treatment and increase a patient's chance of survival. Currently, needle biopsy is the gold standard for the diagnosis of prostate cancer. However, it is an invasive procedure with high cost and potential morbidity rates, and it has a higher possibility of producing false positive diagnoses due to the relatively small needle biopsy samples.
Application of the proposed CAD yielded promising results in a cohort of 30 patients and could, in the near future, supplement current technologies for determining prostate cancer type. The developed techniques have been compared to state-of-the-art methods and demonstrated higher accuracy, as shown in this dissertation. The proposed models (higher-order spatial interaction models, shape models, motion correction models, and perfusion analysis models) can be used in many of today's CAD applications for early detection of a variety of diseases and medical conditions, and are expected to notably improve the accuracy of CAD decisions based on the automated analysis of CE images

    A novel NMF-based DWI CAD framework for prostate cancer.

    In this thesis, a computer-aided diagnostic (CAD) framework for detecting prostate cancer in DWI data is proposed. The proposed CAD method consists of two frameworks that use nonnegative matrix factorization (NMF) to learn meaningful features from sets of high-dimensional data. The first is a three-dimensional (3D) level-set DWI prostate segmentation algorithm guided by a novel probabilistic speed function. This speed function is driven by features learned by NMF from 3D appearance, shape, and spatial data. The second is a probabilistic classifier that labels a prostate segmented from DWI data as either malignant, containing cancer, or benign, containing no cancer. This approach uses NMF-based feature fusion to create a feature space in which the data classes are clustered. In addition, the use of DWI data acquired at a wide range of b-values (i.e., diffusion weightings) is investigated. Experimental analysis indicates that, for both frameworks, using NMF produces more accurate segmentation and classification results, respectively, and that combining information from DWI data at several b-values can assist in detecting prostate cancer
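
NMF factorizes a nonnegative data matrix V into nonnegative factors W and H, so each sample is expressed as an additive combination of learned parts. A minimal sketch using the standard Lee–Seung multiplicative updates (an illustration of the general technique, not the thesis's exact formulation):

```python
import numpy as np

def nmf(V, k, iters=200, seed=0):
    # Approximate V (m×n, nonnegative) ≈ W (m×k) @ H (k×n), both nonnegative,
    # by alternating multiplicative updates that minimize the Frobenius error
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + 1e-6
    H = rng.random((k, n)) + 1e-6
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)   # update coefficients
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)   # update basis (learned features)
    return W, H
```

The columns of W act as the learned features; projecting new data onto them (the rows of H) yields the low-dimensional representation used for segmentation guidance or classification.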

    U-Net based deep convolutional neural network models for liver segmentation from CT scan images

    Liver segmentation is a critical task for the diagnosis, treatment and follow-up of liver cancer. Computed Tomography (CT) is the common imaging modality for this segmentation task, which is considered very hard for several reasons. Medical images are of limited availability to researchers. Liver shape changes with the patient's position during the CT scan and varies from one patient to another depending on health conditions. The liver and other organs, for example the heart, stomach, and pancreas, share a similar gray-scale range in CT images. Surgical treatment of the liver is very critical because the liver contains a significant amount of blood and lies very close to critical organs such as the heart, lungs and stomach, and to major blood vessels. Segmentation accuracy is therefore critical for defining the shape and position of the liver and tumors, especially when the treatment surgery is conducted using radio-frequency heating or cryoablation needles. In the literature, convolutional neural networks (CNNs) have achieved very high accuracy on liver segmentation, and the U-Net model is considered the state of the art for medical image segmentation. Many researchers have developed CNN models based on U-Net and stacked U-Nets with or without bridged connections. However, CNN models need a significant number of labeled samples for training and validation, which are not commonly available for liver CT images, and generating manually annotated masks for the training samples is time consuming and requires expert clinical doctors. Data augmentation has thus been widely used to boost the sample size for model training. Using rotation in steps of 15° and horizontal and vertical flipping as augmentation techniques, the shortage of training samples is addressed.
Rotation and flipping were chosen because, in real-life situations, most CT scans are recorded while the patient lies face down, or at 45°, 60° or 90° on the right side, depending on the location of the tumor. Nonetheless, this process introduced a new issue for liver segmentation: due to the rotation and flipping augmentations, the trained model detected part of the heart as liver when it appeared on the wrong side of the body. The first part of this research conducted an extensive experimental study of U-Net based models, in terms of deeper and wider variants with varied bridging and skip connections, in order to give recommendations for using U-Net based models. Top-down and bottom-up approaches were used to construct variations of deeper models, whilst two, three, and four stacked U-Nets were applied to construct the wider U-Net models. The variation of the skip connections between two and three U-Nets is the key factor in the study. The proposed model uses 2 bridged U-Nets with three extra skip connections between the U-Nets to overcome the flipping issue. A new loss function based on minimizing the distance between the centers of mass of the predicted blobs further enhanced the liver segmentation accuracy. Finally, the deep-supervision concept was integrated with the new loss functions, with the total loss calculated as the sum of weighted loss functions over each deeply supervised output. The proposed model of 2 bridged U-Nets with compound skip connections and a specific number of levels, layers, filters, and image size increased the accuracy of liver segmentation to ~90%, whereas the original U-Net and bridged nets recorded a segmentation accuracy of ~85%.
Although applying extra deeply supervised layers and a weighted compound of Dice-coefficient and centroid loss functions solved the flipping issue, at ~93% accuracy, there is still room to improve the accuracy by applying image enhancement as a pre-processing stage
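
The centroid term of the compound loss described above can be sketched as the distance between the centers of mass of the predicted and target masks, which penalizes a spatially displaced (e.g. mirrored) blob even when its shape overlap looks plausible. A generic numpy illustration, not the exact training loss:

```python
import numpy as np

def centroid_loss(pred, target):
    # Euclidean distance between the centers of mass of a (soft) prediction
    # mask and a binary target mask; works for 2D or 3D arrays
    def com(m):
        grids = np.indices(m.shape).reshape(m.ndim, -1)  # coordinate grids
        w = m.reshape(-1)                                # per-voxel weights
        return (grids * w).sum(axis=1) / (w.sum() + 1e-9)
    return float(np.linalg.norm(com(pred) - com(target)))
```

In training, a differentiable tensor version of this term would be weighted and summed with the Dice loss over each deeply supervised output.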

    A study of statistical methods for the segmentation of multiple objects in abdominal CT images

    Computer-aided diagnosis (CAD) is the use of computer-generated output as an auxiliary tool to assist efficient interpretation and accurate diagnosis. Medical image segmentation has an essential role in CAD in clinical applications. Generally, the task of medical image segmentation involves multiple objects, such as organs or diffuse tumor regions. Moreover, it is very difficult to segment these regions from abdominal Computed Tomography (CT) images because of the overlap in intensity and the variability in position and shape of soft tissues. In this thesis, a progressive segmentation framework is proposed to extract liver and tumor regions from CT images more efficiently, comprising the steps of coarse segmentation of multiple organs, fine segmentation, and liver tumor segmentation. Benefiting from prior knowledge of the shape and its deformation, the statistical shape model (SSM) method is first utilized to segment multiple organ regions robustly. In the process of building an SSM, the correspondence of landmarks is crucial to the quality of the model. To generate a more representative prototype of the organ surface, a k-means clustering method is proposed. The quality of the SSMs, measured by generalization ability, specificity, and compactness, was improved. We further extend the shape correspondence to multiple objects. A non-rigid iterative closest point surface registration process is proposed to seek more properly corresponded landmarks across the multi-organ surfaces. The accuracy of surface registration was improved, as was the model quality. Moreover, to localize the abdominal organs simultaneously, we propose a random forest regressor using intensity features to predict the positions of multiple organs in the CT image. The regions of the organs are substantially constrained using the trained shape models.
The accuracy of coarse segmentation using SSMs was increased by the initial information on organ positions. Consequently, a pixel-wise segmentation based on the classification of supervoxels is applied for the fine segmentation of multiple organs. Intensity and spatial features are extracted from each supervoxel and classified by a trained random forest. The boundaries of the supervoxels are closer to the real organs than the previous coarse segmentation. Finally, we developed a hybrid framework for liver tumor segmentation in multiphase images. To distinguish and delineate tumor regions from peripheral tissues, this task is accomplished in two steps: a cascade region-based convolutional neural network (R-CNN) with a refined head is trained to locate the bounding boxes that contain tumors, and phase-sensitive noise filtering is introduced to refine the subsequent segmentation of tumor regions conducted by a level-set-based framework. The tumor detection results show that adjacent tumors are successfully separated by the improved cascaded R-CNN. The accuracy of tumor segmentation is also improved by our proposed method. 26 cases of multi-phase CT images were used to validate the proposed method for the segmentation of liver tumors. The average precision and recall rates for tumor detection are 76.8% and 84.4%, respectively. The intersection over union, true positive rate, and false positive rate for tumor segmentation are 72.7%, 76.2%, and 4.75%, respectively. Doctoral dissertation, Kyushu Institute of Technology (degree no. 工博甲第546号, awarded March 25, 2022). Contents: 1 Introduction | 2 Literature Review | 3 Statistical Shape Model Building | 4 Multi-organ Segmentation | 5 Liver Tumors Segmentation | 6 Summary and Outlook.
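
The statistical shape model at the core of the coarse segmentation step is a point-distribution model: a mean shape plus principal modes of variation learned from corresponded landmarks. A minimal PCA-based sketch of model building and shape synthesis (generic, not the thesis's multi-organ formulation):

```python
import numpy as np

def build_ssm(shapes):
    # shapes: (n_samples, n_points * dims) matrix of corresponded landmarks.
    # Returns the mean shape, the modes of variation, and their eigenvalues.
    mean = shapes.mean(axis=0)
    X = shapes - mean
    # SVD of the centered data gives the eigenvectors of the covariance matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    eigvals = (s ** 2) / (len(shapes) - 1)
    return mean, Vt, eigvals

def reconstruct(mean, modes, b):
    # Synthesize a shape instance from mode coefficients b (len(b) leading modes)
    return mean + b @ modes[: len(b)]
```

Constraining each coefficient in `b` to a few standard deviations (±3·sqrt(eigval) is a common choice) keeps segmentations within the space of plausible anatomies, which is what makes the SSM robust against the intensity overlap of abdominal soft tissues.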