
    Segmentation of kidney and renal collecting system on 3D computed tomography images

    Surgical training for minimally invasive kidney interventions (MIKI) is of huge importance within the urology field. Within this topic, simulating MIKI in a patient-specific virtual environment can be used for pre-operative planning with the real patient's anatomy, possibly reducing intra-operative medical complications. However, the validated VR simulators perform the training on a group of standard models and do not allow patient-specific training. For patient-specific training, the standard simulator would need to be adapted with personalized models, which can be extracted from pre-operative images using segmentation strategies. To date, several methods have been proposed to accurately segment the kidney in computed tomography (CT) images. However, most of these works focused on kidney segmentation only, neglecting the extraction of its internal compartments. In this work, we propose to adapt a coupled formulation of the B-Spline Explicit Active Surfaces (BEAS) framework to simultaneously segment the kidney and the renal collecting system (CS) from CT images. Moreover, from the difference between the kidney and CS segmentations, one is also able to extract the renal parenchyma. The segmentation process is guided by a new energy functional that combines both gradient- and region-based energies. The method was evaluated on 10 kidneys from 5 CT datasets with different image properties.
Overall, the results demonstrate the accuracy of the proposed strategy, with a Dice overlap of 92.5%, 86.9% and 63.5%, and a point-to-surface error of around 1.6 mm, 1.9 mm and 4 mm for the kidney, renal parenchyma and CS, respectively. This work was funded by projects NORTE-01-0145-FEDER0000I3 and NORTE-01-0145-FEDER-024300, supported by the Northern Portugal Regional Operational Programme (Norte2020) under the Portugal 2020 Partnership Agreement, through the European Regional Development Fund (FEDER); by FEDER funds through the Competitiveness Factors Operational Programme (COMPETE); and by national funds through the FCT - Fundação para a Ciência e a Tecnologia, under the scope of project POCI-01-0145-FEDER-007038. The authors acknowledge FCT - Fundação para a Ciência e a Tecnologia, Portugal, and the European Social Fund, European Union, for funding support through the Programa Operacional Capital Humano (POCH).
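The Dice overlap and point-to-surface error reported above are standard segmentation-quality metrics. As a minimal sketch (illustrative only, not the authors' implementation), the Dice coefficient of two binary masks can be computed as:

```python
import numpy as np

def dice_overlap(seg_a, seg_b):
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Toy 2D example: two overlapping 6x6 squares on a 10x10 grid.
auto = np.zeros((10, 10), dtype=bool)
manual = np.zeros((10, 10), dtype=bool)
auto[2:8, 2:8] = True     # 36 pixels
manual[3:9, 3:9] = True   # 36 pixels, 25 of them shared
print(round(dice_overlap(auto, manual), 3))  # 2*25/72 ≈ 0.694
```

A Dice of 1.0 means perfect overlap, so the 92.5% kidney figure above indicates near-complete agreement with the reference contours, while the 63.5% for the thin, branching CS reflects a much harder target.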

    A non-invasive diagnostic system for early assessment of acute renal transplant rejection.

    Early diagnosis of acute renal transplant rejection (ARTR) is of immense importance for administering the appropriate therapeutic treatment. Although the current diagnostic technique is based on renal biopsy, it is not preferred due to its invasiveness, recovery time (1-2 weeks), and potential for complications, e.g., bleeding and/or infection. In this thesis, a computer-aided diagnostic (CAD) system for early detection of ARTR from 4D (3D + b-value) diffusion-weighted (DW) MRI data is developed. The CAD process starts with a 3D B-spline-based data alignment (to handle local deviations due to breathing and heartbeat) and kidney tissue segmentation with an evolving geometric (level-set-based) deformable model. The latter is guided by a voxel-wise stochastic speed function, which follows from a joint kidney-background Markov-Gibbs random field model accounting for an adaptive kidney shape prior and for ongoing visual kidney-background appearances. A cumulative empirical distribution of the apparent diffusion coefficient (ADC) at different b-values of the segmented DW-MRI is considered a discriminatory transplant-status feature. Finally, a classifier based on deep learning of a non-negatively constrained stacked auto-encoder is employed to distinguish between rejected and non-rejected renal transplants. In “leave-one-subject-out” experiments on 53 subjects, 98% of the subjects were correctly classified (namely, 36 out of 37 rejected transplants and 16 out of 16 non-rejected ones). Additionally, a four-fold cross-validation experiment was performed, and an average accuracy of 96% was obtained. These experimental results demonstrate the promise of the proposed CAD system as a reliable non-invasive diagnostic tool.
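The ADC feature above follows the standard mono-exponential diffusion model, ADC = ln(S0/Sb)/b. The sketch below (a simplified illustration with synthetic signal values, not the thesis pipeline) builds a voxel-wise ADC map and its cumulative empirical distribution at a few thresholds:

```python
import numpy as np

def adc_map(s0, sb, b):
    """Voxel-wise ADC under the mono-exponential model: ADC = ln(S0/Sb) / b."""
    return np.log(np.asarray(s0, float) / np.asarray(sb, float)) / b

def empirical_cdf(values, thresholds):
    """Fraction of voxels with ADC <= each threshold: the CDF feature vector."""
    v = np.ravel(values)
    return np.array([(v <= t).mean() for t in thresholds])

# Synthetic signals for a 4-voxel "kidney" at b = 0 and b = 800 s/mm^2.
s0 = np.array([1000.0, 1000.0, 1000.0, 1000.0])
s800 = np.array([300.0, 400.0, 500.0, 600.0])
adc = adc_map(s0, s800, b=800.0)               # units: mm^2/s
cdf = empirical_cdf(adc, thresholds=[1.0e-3, 1.5e-3, 2.0e-3])
print(cdf)
```

Evaluating the CDF at a fixed grid of thresholds yields a fixed-length feature vector regardless of kidney size, which is what makes it usable as a classifier input.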

    Analysis of contrast-enhanced medical images.

    Early detection of human organ diseases is of great importance for accurate diagnosis and the institution of appropriate therapies, and can potentially prevent progression to end-stage disease by detecting precursors that reflect organ functionality. In addition, it assists clinicians in therapy evaluation, tracking disease progression, and planning surgical operations. Advances in functional and contrast-enhanced (CE) medical imaging have enabled accurate noninvasive evaluation of organ functionality, owing to its ability to provide superior anatomical and functional information about the tissue of interest. The main objective of this dissertation is to develop a computer-aided diagnostic (CAD) system for analyzing complex data from CE magnetic resonance imaging (MRI). The developed CAD system has been tested in three case studies: (i) early detection of acute renal transplant rejection; (ii) evaluation of myocardial perfusion in patients with ischemic heart disease after heart attack; and (iii) early detection of prostate cancer. However, developing a noninvasive CAD system for the analysis of CE medical images is subject to multiple challenges, including, but not limited to: image noise and inhomogeneity; nonlinear signal-intensity changes of the images over the time course of data acquisition; appearance and shape changes (deformations) of the organ of interest during data acquisition; and determination of the best features (indexes) that describe the perfusion of a contrast agent (CA) into the tissue.
To address these challenges, this dissertation focuses on building new mathematical models and learning techniques that facilitate accurate analysis of CA perfusion in living organs, including: (i) accurate mathematical models for the segmentation of the object of interest, which integrate object shape and appearance features in terms of pixel/voxel-wise image intensities and their spatial interactions; (ii) motion correction techniques that combine both global and local models and exploit geometric features, rather than image intensities, to avoid problems associated with nonlinear intensity variations of the CE images; and (iii) fusion of multiple features using a genetic algorithm. The proposed techniques have been integrated into CAD systems that have been tested in, but are not limited to, three clinical studies. First, a noninvasive CAD system is proposed for the early and accurate diagnosis of acute renal transplant rejection using dynamic contrast-enhanced MRI (DCE-MRI). Acute rejection, the immunological response of the human immune system to a foreign kidney, is the most severe cause of renal dysfunction among the diagnostic possibilities, which also include acute tubular necrosis and immune drug toxicity. In the U.S., approximately 17,736 renal transplants are performed annually, and given the limited number of donors, salvage of the transplanted kidney is an important medical concern. Thus far, biopsy remains the gold standard for the assessment of renal transplant dysfunction, but only as a last resort because of its invasive nature, high cost, and potential morbidity. The diagnostic accuracy of the proposed CAD system, based on the analysis of 50 independent in-vivo cases, was 96% with a 95% confidence interval. These results clearly demonstrate the promise of the proposed image-based diagnostic CAD system as a supplement to current technologies, such as nuclear imaging and ultrasonography, for determining the type of kidney dysfunction.
Second, a comprehensive CAD system is developed for the characterization of myocardial perfusion and clinical status in heart failure and novel myoregeneration therapy using cardiac first-pass MRI (FP-MRI). Heart failure is considered the most important cause of morbidity and mortality in cardiovascular disease, affecting approximately 6 million U.S. patients annually. Ischemic heart disease is considered the most common underlying cause of heart failure. Therefore, detection of heart failure in its earliest forms is essential to prevent its relentless progression to premature death. While current medical studies focus on detecting pathological tissue and assessing the contractile function of the diseased heart, this dissertation addresses the key issue of the effects of myoregeneration therapy on the associated blood nutrient supply. Quantitative and qualitative assessment in a cohort of 24 perfusion data sets demonstrated the ability of the proposed framework to reveal regional perfusion improvements with therapy, and transmural perfusion differences across the myocardial wall; thus, it can aid in the follow-up of patients undergoing myoregeneration therapy. Finally, an image-based CAD system for early detection of prostate cancer using DCE-MRI is introduced. Prostate cancer is the most frequently diagnosed malignancy among men and remains the second leading cause of cancer-related death in the USA, with more than 238,000 new cases and about 30,000 deaths in 2013. Early diagnosis of prostate cancer can therefore improve the effectiveness of treatment and increase the patient's chance of survival. Currently, needle biopsy is the gold standard for the diagnosis of prostate cancer. However, it is an invasive procedure with high costs and potential morbidity, and it has a higher possibility of producing false-positive diagnoses due to the relatively small needle biopsy samples.
Application of the proposed CAD system yielded promising results in a cohort of 30 patients and could, in the near future, supplement current technologies for determining prostate cancer type. The developed techniques have been compared to state-of-the-art methods and demonstrated higher accuracy, as shown in this dissertation. The proposed models (higher-order spatial interaction models, shape models, motion correction models, and perfusion analysis models) can be used in many of today's CAD applications for early detection of a variety of diseases and medical conditions, and are expected to notably improve the accuracy of CAD decisions based on the automated analysis of CE images.

    Computational methods for the analysis of functional 4D-CT chest images.

    Medical imaging is an important technology that has been used intensively over the last few decades for disease diagnosis and monitoring, as well as for the assessment of treatment effectiveness. Medical images provide a very large amount of valuable information, too vast for radiologists and physicians to exploit fully. Therefore, the design of a computer-aided diagnostic (CAD) system, which can be used as an assistive tool for the medical community, is of great importance. This dissertation deals with the development of a complete CAD system for lung cancer patients; lung cancer remains the leading cause of cancer-related death in the USA. In 2014, there were approximately 224,210 new cases of lung cancer and 159,260 related deaths. The process begins with the detection of lung cancer through the diagnosis of lung nodules (a manifestation of lung cancer). These nodules are approximately spherical regions of primarily high-density tissue that are visible in computed tomography (CT) images of the lung. The treatment of these lung cancer nodules is complex: nearly 70% of lung cancer patients require radiation therapy as part of their treatment. Radiation-induced lung injury is a limiting toxicity that may decrease cure rates and increase treatment-related morbidity and mortality. Finding ways to accurately detect lung injury at an early stage, and hence prevent it, would have significant positive consequences for lung cancer patients. The ultimate goal of this dissertation is to develop a clinically usable CAD system that can improve the sensitivity and specificity of early detection of radiation-induced lung injury, based on the hypothesis that irradiated lung tissues may be affected and suffer decreased functionality as a side effect of radiation therapy.
This hypothesis has been validated by demonstrating that automatic segmentation of the lung regions and registration of consecutive respiratory phases yield elasticity, ventilation, and texture features that provide discriminatory descriptors for early detection of radiation-induced lung injury. The proposed methodologies lead to novel indexes for distinguishing normal/healthy and injured lung tissues in clinical decision-making. To achieve this goal, a CAD system for accurate detection of radiation-induced lung injury has been developed from three basic components: lung field segmentation, lung registration, and feature extraction with tissue classification. This dissertation starts with an exploration of the available medical imaging modalities to present the importance of medical imaging in today's clinical applications. Secondly, the methodologies, challenges, and limitations of recent CAD systems for lung cancer detection are covered. This is followed by an accurate segmentation methodology for the lung parenchyma, with a focus on pathological lungs, to extract the volume of interest (VOI) to be analyzed for potential lung injuries stemming from the radiation therapy. After the segmentation of the VOI, a lung registration framework is introduced to perform the crucial step of ensuring the co-alignment of the intra-patient scans. This step eliminates the effects of orientation differences, motion, breathing, heartbeat, and differences in scanning parameters, so that the functionality features of the lung fields can be extracted accurately. The developed registration framework also helps in the evaluation and gated control of the radiotherapy through motion estimation analysis before and after the therapy dose.
Finally, the radiation-induced lung injury detection framework is introduced, which combines the previous two medical image processing and analysis steps with a feature estimation and classification step. This framework estimates and combines both texture and functional features. The texture features are modeled using a novel 7th-order Markov-Gibbs random field (MGRF) model that can accurately model the texture of healthy and injured lung tissues by simultaneously accounting for both vertical and horizontal relative dependencies between voxel-wise signals. The functionality feature calculations are based on the deformation fields, obtained from the 4D-CT lung registration, that map lung voxels between successive CT scans in the respiratory cycle. These functionality features describe the ventilation (the air flow rate) of the lung tissues using the Jacobian of the deformation field, and the tissues' elasticity using the strain components calculated from the gradient of the deformation field. Finally, these features are combined in the classification model to detect the injured parts of the lung at an early stage and enable earlier intervention.
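Ventilation from a deformation field, as described above, is conventionally measured by the Jacobian determinant of the transform: values above 1 indicate local expansion, below 1 local contraction. A minimal numpy sketch under that convention (not the dissertation's registration code) is:

```python
import numpy as np

def jacobian_determinant(disp):
    """
    Voxel-wise Jacobian determinant of the mapping x -> x + u(x) for a 3D
    displacement field disp of shape (3, Z, Y, X): det(I + grad u).
    Values > 1 mark local expansion (inhalation), < 1 local contraction.
    """
    J = np.empty(disp.shape[1:] + (3, 3))
    for i in range(3):
        grads = np.gradient(disp[i], axis=(0, 1, 2))  # du_i/dz, du_i/dy, du_i/dx
        for j in range(3):
            J[..., i, j] = grads[j] + (1.0 if i == j else 0.0)
    return np.linalg.det(J)

# Toy field: a uniform 10% expansion along each axis, so det = 1.1**3.
shape = (8, 8, 8)
zz, yy, xx = np.meshgrid(*(np.arange(s, dtype=float) for s in shape), indexing="ij")
disp = np.stack([0.1 * zz, 0.1 * yy, 0.1 * xx])
detJ = jacobian_determinant(disp)
print(round(float(detJ.mean()), 4))  # ≈ 1.331
```

The strain-based elasticity features mentioned above come from the same gradients: the symmetric part of grad u is the strain tensor.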

    Sparse feature learning for image analysis in segmentation, classification, and disease diagnosis.

    The success of machine learning algorithms generally depends on an intermediate data representation, called features, that disentangles the hidden factors of variation in the data. Moreover, machine learning models are required to generalize, in order to reduce specificity or bias toward the training dataset. Unsupervised feature learning is useful for taking advantage of the large amounts of unlabeled data available to capture these variations. However, the learned features must capture the variational patterns in the data space. In this dissertation, unsupervised feature learning with sparsity is investigated for sparse and local feature extraction, with applications to lung segmentation, interpretable deep models, and Alzheimer's disease classification. Nonnegative Matrix Factorization, Autoencoder, and 3D Convolutional Autoencoder are used as architectures or models for unsupervised feature learning. They are investigated along with nonnegativity, sparsity, and part-based representation constraints for generalized and transferable feature extraction.
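As a rough sketch of the part-based, nonnegative factorization mentioned above (generic Lee-Seung multiplicative updates on synthetic data, not the dissertation's implementation):

```python
import numpy as np

def nmf(X, rank, n_iter=500, seed=0):
    """Basic NMF via Lee-Seung multiplicative updates: X ≈ W @ H, W, H >= 0."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, rank)) + 1e-3
    H = rng.random((rank, m)) + 1e-3
    eps = 1e-9                                 # avoids division by zero
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy data: two nonnegative "parts" mixed into 6-dimensional samples.
rng = np.random.default_rng(1)
parts = np.array([[1.0, 1, 1, 0, 0, 0],
                  [0.0, 0, 0, 1, 1, 1]]).T    # shape (6, 2)
codes = rng.random((2, 40))                   # nonnegative encodings
X = parts @ codes                             # exactly rank-2, nonnegative
W, H = nmf(X, rank=2)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
print(f"relative reconstruction error: {err:.3f}")
```

The columns of W recover the two parts up to scale and permutation, which is the "part-based representation" property that motivates the nonnegativity constraint.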

    A novel NMF-based DWI CAD framework for prostate cancer.

    In this thesis, a computer-aided diagnostic (CAD) framework for detecting prostate cancer in DWI data is proposed. The proposed CAD method consists of two frameworks that use nonnegative matrix factorization (NMF) to learn meaningful features from sets of high-dimensional data. The first technique is a three-dimensional (3D) level-set DWI prostate segmentation algorithm guided by a novel probabilistic speed function. This speed function is driven by features learned by NMF from 3D appearance, shape, and spatial data. The second technique is a probabilistic classifier that seeks to label a prostate segmented from DWI data as either malignant, containing cancer, or benign, containing no cancer. This approach uses NMF-based feature fusion to create a feature space in which the data classes are clustered. In addition, the use of DWI data acquired at a wide range of b-values (i.e., diffusion weightings) is investigated. Experimental analysis indicates that, for both of these frameworks, using NMF produces more accurate segmentation and classification results, respectively, and that combining information from DWI data at several b-values can assist in detecting prostate cancer.

    A CAD system for early diagnosis of autism using different imaging modalities.

    The term “autism spectrum disorder” (ASD) refers to a collection of neuro-developmental disorders that affect linguistic, behavioral, and social skills. Autism has many symptoms, most prominently social impairment and repetitive behaviors. It is crucial to diagnose autism at an early stage for better assessment and investigation of this complex syndrome. There have been many efforts to diagnose ASD using different techniques, such as imaging modalities, genetic techniques, and behavior reports. Imaging modalities have been extensively exploited for ASD diagnosis, and one of the most successful is magnetic resonance imaging (MRI), which has shown particular promise for the early diagnosis of ASD-related abnormalities. Since their advent in the 1980s, MRI modalities have emerged as powerful means of non-invasive clinical diagnosis of various diseases and abnormalities. Along with the main advantages of no radiation exposure, high contrast, and high spatial resolution, recent advances in MRI modalities have notably increased diagnostic certainty. Multiple MRI modalities, such as structural MRI (sMRI), which examines anatomical changes, and functional MRI (fMRI), which examines brain activity by monitoring blood-flow changes, have been employed to investigate facets of ASD in order to better understand this complex syndrome. This work aims at developing a new computer-aided diagnostic (CAD) system for autism diagnosis using different imaging modalities. It mainly relies on structural magnetic resonance images for extracting notable shape features from parts of the brain that previous neuropathological studies have shown to correlate with ASD.
Shape features from both the cerebral cortex (Cx) and cerebral white matter (CWM) are extracted. Fusion of features from these two structures is conducted based on recent findings suggesting that Cx changes in autism are related to CWM abnormalities; moreover, fusing features from more than one structure increases the robustness of the CAD system. fMRI experiments are also conducted and analyzed to find task-related areas of activation in the brains of autistic and typically developing individuals. All sMRI findings are fused with those of fMRI to better understand ASD in terms of both anatomy and functionality, and thus to better classify the two groups. This is one aspect of the novelty of this CAD system: sMRI and fMRI studies are both applied to subjects of different ages to diagnose ASD. Building such a CAD system requires three main blocks. First, 3D brain segmentation is performed using a novel hybrid model that combines shape, intensity, and spatial information. Second, shape features from both Cx and CWM are extracted, and an fMRI reward experiment is conducted from which task-related areas of activation are identified. These features are extracted from local areas of the brain to provide an accurate analysis of ASD and to correlate it with certain anatomical areas. Third and last, all the extracted features are fused using a deep-fusion classification network to perform classification and obtain the diagnosis report. Fusing features from all modalities achieved a classification accuracy of 94.7%, which emphasizes the significance of combining structures/modalities for ASD diagnosis. To conclude, this work could pave the way to a better understanding of the autism spectrum by finding local areas that correlate with the disease.
The idea of personalized medicine is emphasized in this work, where the proposed CAD system holds the promise to resolve autism endophenotypes and help clinicians deliver personalized treatment to individuals affected by this complex syndrome.
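As a toy stand-in for the feature-fusion step (the actual system uses a deep-fusion classification network; everything below, including the data, is synthetic), one can concatenate per-subject sMRI and fMRI feature vectors and fit a simple logistic-regression classifier:

```python
import numpy as np

def train_fusion_classifier(X, y, lr=0.1, n_iter=500):
    """Logistic regression by batch gradient descent on fused feature vectors."""
    X1 = np.hstack([X, np.ones((X.shape[0], 1))])  # append a bias column
    w = np.zeros(X1.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X1 @ w))          # predicted probabilities
        w -= lr * X1.T @ (p - y) / len(y)          # gradient of the log-loss
    return w

def predict(w, X):
    X1 = np.hstack([X, np.ones((X.shape[0], 1))])
    return (1.0 / (1.0 + np.exp(-X1 @ w)) > 0.5).astype(int)

# Synthetic "sMRI" and "fMRI" features for 30 controls and 30 ASD subjects.
rng = np.random.default_rng(0)
smri = np.vstack([rng.normal(0, 1, (30, 3)), rng.normal(2, 1, (30, 3))])
fmri = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(2, 1, (30, 2))])
X = np.hstack([smri, fmri])        # fused feature vector per subject
y = np.array([0] * 30 + [1] * 30)
w = train_fusion_classifier(X, y)
acc = (predict(w, X) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

The point illustrated is only the fusion-by-concatenation idea: features from different structures and modalities share one feature space, and the classifier weighs them jointly.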

    Improving daily clinical practice with abdominal patient-specific 3D models

    This thesis proposes methods and procedures to proficiently introduce patient-specific 3D models into daily clinical practice for the diagnosis and treatment of abdominal diseases. The objective of the work is to provide and visualize quantitative geometrical and topological information on the anatomy of interest, and to develop systems that improve radiology and surgery. 3D visualization drastically simplifies the interpretation of medical images and provides benefits in both the diagnostic and surgical planning phases. Further advantages can be obtained by registering virtual pre-operative information (3D models) with real intra-operative information (patient and surgical instruments). The surgeon can use mixed-reality systems that allow him/her to see covered structures before reaching them, surgical navigators to see the scene (anatomy and instruments) from different points of view, and smart mechatronic devices which, knowing the anatomy, assist him/her in an active way. All these aspects are valuable in terms of safety, efficiency, and financial resources for physicians, for the patient, and for the healthcare system as well. The entire process, from the acquisition of volumetric radiological images up to the use of 3D anatomical models inside the operating room, has been studied, and specific applications have been developed. A segmentation procedure has been designed taking into account acquisition protocols commonly used in radiology departments, and a software tool that produces efficient 3D models has been implemented and tested. The alignment problem has been investigated by examining the various sources of error during image acquisition in the radiology department and during the execution of the intervention. A rigid-body registration procedure compatible with the surgical environment has been defined and implemented.
The procedure has been integrated into a surgical navigation system and serves as an initial registration for more accurate alignment methods based on deformable approaches. Monoscopic and stereoscopic 3D localization machine-vision routines, using laparoscopic and/or generic camera images, have been implemented to obtain intra-operative information that can be used to model abdominal deformations. Furthermore, the use of this information for fusion and registration purposes enhances the potential of computer-assisted surgery. In particular, a precise alignment between virtual and real anatomies for mixed-reality purposes, and the development of tracker-free navigation systems, has been obtained by processing video images and analytically adapting the virtual camera to the real camera. Clinical tests demonstrating the usability of the proposed solutions are reported. The test results, and the appreciation of radiologists and surgeons for the proposed prototypes, encourage their integration into daily clinical practice and future developments.
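One common closed-form building block for rigid-body registration of paired fiducial points is the Kabsch/Procrustes solution; the following is a generic sketch under that assumption, not the thesis's specific procedure:

```python
import numpy as np

def rigid_register(P, Q):
    """
    Closed-form least-squares rigid alignment (Kabsch/Procrustes): find
    rotation R and translation t with R @ p_i + t ≈ q_i for paired 3D
    points given as rows of P and Q.
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Recover a known 30-degree rotation about z plus a translation.
rng = np.random.default_rng(0)
P = rng.random((10, 3))
th = np.deg2rad(30)
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -2.0, 1.0])
Q = P @ R_true.T + t_true
R, t = rigid_register(P, Q)
fre = np.linalg.norm(P @ R.T + t - Q)          # fiducial registration error
print(f"fiducial registration error: {fre:.2e}")
```

A rigid solution of this kind typically provides the starting alignment that deformable methods then refine, matching the role described above.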