
    Computational processing and analysis of ear images

    Master's thesis. Biomedical Engineering. Faculdade de Engenharia, Universidade do Porto. 201

    Robotic-assisted approaches for image-controlled ultrasound procedures

    Integrated master's thesis, Biomedical Engineering and Biophysics (Clinical Engineering and Medical Instrumentation), Universidade de Lisboa, Faculdade de Ciências, 2019

Ultrasound (US) image acquisition is currently one of the most widely used imaging modalities in medicine, for several reasons. Compared with modalities such as computed tomography (CT) and magnetic resonance imaging (MRI), the combination of portability and low cost with real-time acquisition gives it enormous flexibility in its medical applications. These range from routine diagnostics in gynaecology and obstetrics to tasks requiring high precision, such as image-guided surgery or brachytherapy in oncology. Unlike its counterparts, however, owing to the physical principle from which the images derive, image quality depends heavily on the operator's skill in placing and orienting the US probe over the correct region of interest (ROI), as well as on their ability to interpret the acquired images and spatially localize structures in the patient's body. To make diagnostic procedures less error-prone and image-guided procedures more precise, coupling this imaging modality to a robotic approach whose control is based on the acquired image is increasingly common. This makes it possible to create diagnostic and therapeutic systems that are semi-autonomous, fully autonomous, or cooperative with their user. The task requires knowledge and resources from multiple fields, including computer vision, image processing, and control theory.
In approaches of this kind, the US probe acts as a camera into the patient's body, and the control process is based on parameters such as the spatial information of a given target structure in the acquired image. This information, extracted through several image-processing stages, is used as feedback in the control loop of the robotic system. Both the extraction of spatial information and the control must be as autonomous and as fast as possible, so that the system can act in situations requiring real-time response. The goal of this project was therefore to develop, implement, and validate, in MATLAB, the foundations of a semi-autonomous, image-based control approach for a robotic US system, enabling the tracking of target structures and the automation of general diagnostic procedures with this imaging modality. To this end, a semi-autonomous program was implemented on this platform, capable of tracking contours in US images and producing information about their position and orientation in the image. The program was designed to be compatible with a real-time approach using a SONOSITE TITAN acquisition system, whose image acquisition rate is 25 fps. It relies heavily on computer-vision concepts such as moment computation and active contours, the latter being the core engine of the tracking tool. In general terms, the program can be described as a contour-tracking implementation based on active contours. Such contours benefit from an underlying physical model that lets them be attracted to, and converge on, particular image features, such as lines, boundaries, corners, or specific regions, through the minimization of an energy functional defined over their boundary.
To simplify and speed up the implementation, this dynamic model parameterizes contours with harmonic functions, so its system variables are Fourier descriptors. Because it rests on the principle of least energy, the system fits the Euler-Lagrange formulation of physical systems, from which systems of differential equations describing the evolution of a contour over time can be derived. This evolution depends not only on the internal energy of the contour itself, due to tension and cohesion forces between points, but also on external forces that guide it across the image. These external forces are chosen according to the contour's purpose and are generally derived from image information such as intensities, gradients, and higher-order derivatives. Finally, the system is implemented with an explicit Euler method, which discretizes it and yields an iterative expression for evolving the system from a previous state to a future state while accounting for external image effects. Once implemented, the performance of the semi-automatic tracking program was validated. Validation focused on two aspects: the robustness of contour tracking when coupled to a US probe, and the temporal efficiency of the program together with its compatibility with real-time image acquisition systems. Before validation, the acquisition system was first spatially calibrated in a simple way, using an acrylic N-wire phantom capable of producing recognizable patterns in the ultrasound image. Vertical, horizontal, and diagonal patterns were used to calibrate the image; the first two yield better estimates of the real pixel spacing in the US image.
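The contour evolution described above, a Fourier-parameterized contour with internal tension plus an image-derived external force, advanced by an explicit Euler step, can be sketched as follows. This is a minimal illustration in Python rather than the thesis's MATLAB code; the step size, elasticity weight, and force field are assumptions for demonstration only:

```python
import numpy as np

def euler_contour_step(coeffs, external_force, dt=0.1, alpha=0.01):
    """One explicit Euler step for a contour parameterized by Fourier
    descriptors. `coeffs` holds complex Fourier coefficients of the
    contour x(s) + i*y(s); the internal (tension) energy penalizes high
    harmonics, while `external_force` samples an image-derived force at
    each contour point."""
    n = len(coeffs)
    # Frequency associated with each Fourier coefficient.
    k = np.fft.fftfreq(n, d=1.0 / n)
    # Internal force: tension acts like -alpha * k^2 in Fourier space,
    # damping high harmonics (ragged wiggles) faster than low ones.
    internal = -alpha * (k ** 2) * coeffs
    # Sample the external force at the current contour points, then
    # transform it into the descriptor (Fourier) domain.
    points = np.fft.ifft(coeffs)
    ext = np.fft.fft(external_force(points))
    return coeffs + dt * (internal + ext)
```

With a zero external force the update simply smooths the contour: high harmonics decay while the fundamental shape is almost preserved, which is the behaviour the internal energy is meant to produce.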
Finally, the robustness of the program was tested on 5% (m/m) agar-agar phantoms embedded with hypoechoic structures, simulated by water-filled balloons, built specifically for this purpose. With this setup the program demonstrated satisfactory stability and robustness for various in-plane translations and rotations of the US probe, and also showed promising results in responding to the elongation of structures caused by out-of-plane probe motion. The temporal performance of the program was validated in standalone operation, using videos acquired in the previous stage, for active contour models with different levels of detail. The computation time of the algorithm on each video frame was measured and averaged. This value lies within the expected range, easily compatible with the current probe setup, whose acquisition rate is 25 fps, and reaching 40 to 50 fps standalone. Despite its promising temporal performance and robustness, the approach still has some limitations for which it offers no solution yet. These include: support for simultaneous tracking of multiple contours for more complex target structures; the detection and handling of topological events such as contour merging, splitting, and self-intersection; the automatic adaptation of the parameters of the equation system to different image noise levels; and, finally, the specificity of the image potentials, so that the approach converges on image regions encoding specific tissue types.
Even though it could benefit from some improvements, this project achieved its stated goal, providing an efficient and robust contour-tracking implementation and laying the groundwork on which it will be possible to build toward a fully autonomous US diagnostic system. It also demonstrated the usefulness of an active contour approach for building tracking algorithms that are robust to the motion of target structures in the image and compatible with real-time approaches.

Ultrasound (US) systems are very popular in the medical field for several reasons. Compared to other imaging techniques such as CT or MRI, the combination of low-priced and portable hardware with real-time image acquisition enables great flexibility regarding medical applications, from simple diagnostic tasks to high-precision ones, including those with robotic assistance. Unlike other techniques, the image quality and procedure accuracy are highly dependent on user skill in positioning and orienting the ultrasound probe around a region of interest (ROI) for inspection. To make diagnostics less prone to error and guided procedures more precise, and consequently safer, the US approach can be coupled to a robotic system. The probe acts as a camera into the patient's body, and relevant imaging information can be used to control a robotic arm, enabling the creation of semi-autonomous, cooperative, and possibly fully autonomous diagnostics and therapeutics. In this project our aim is to develop a semi-autonomous tool for tracking defined structures of interest within US images that outputs meaningful spatial information about a target structure (location of the centre of mass [CM], main orientation, and elongation). Such a tool must meet real-time requirements for future use in autonomous image-guided robotic systems.
To this end, the concepts of moment-based visual servoing and active contours are fundamental. Active contours possess an underlying physical model allowing deformation according to image information, such as edges, image regions, and specific image features. Additionally, the mathematical framework of vision-based control enables us to establish the types of information necessary for controlling a future autonomous system and how such information can be transformed to specify a desired task. Once implemented in MATLAB, the tracking and temporal performance of this approach was tested on purpose-built agar-agar phantoms embedded with water-filled balloons, demonstrating stability, robustness to translational and rotational probe motion, and a promising capability to respond to target-structure deformations. The temporal performance of the developed framework is also within the expected levels, being compatible with a 25 frames per second image acquisition setup; as a standalone tool it can handle 50 fps. Thus, this work lays the foundation for US-guided procedures compatible with real-time approaches on moving and deforming targets.
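The spatial outputs named above (centre of mass, main orientation, and elongation) follow from image moments. A minimal sketch of that computation for a binary target mask, illustrative rather than the thesis code:

```python
import numpy as np

def moment_features(mask):
    """Centre of mass, principal orientation, and elongation of a binary
    region, via raw and second-order central image moments."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()                  # centre of mass
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    # Principal-axis orientation from the central moments.
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    # Eigenvalues of the covariance give squared axis lengths; the root
    # of their ratio is an elongation measure (1 = isotropic blob).
    common = np.sqrt(4 * mu11 ** 2 + (mu20 - mu02) ** 2)
    lam1 = (mu20 + mu02 + common) / 2
    lam2 = (mu20 + mu02 - common) / 2
    elongation = np.sqrt(lam1 / lam2) if lam2 > 0 else np.inf
    return (cx, cy), theta, elongation
```

For a horizontal rectangle, for example, the orientation comes out near zero and the elongation equals the aspect ratio of the principal axes.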

    Development, Implementation and Pre-clinical Evaluation of Medical Image Computing Tools in Support of Computer-aided Diagnosis: Respiratory, Orthopedic and Cardiac Applications

    Over the last decade, image processing tools have become crucial components of all clinical and research efforts involving medical imaging and associated applications. The growing volume of imaging data available to radiologists continues to increase their workload, raising the need for efficient identification and visualization of the image data required for clinical assessment. Computer-aided diagnosis (CAD) in medical imaging has evolved in response to the need for techniques that can assist radiologists to increase throughput while reducing human error and bias, without compromising the outcome of screening, diagnosis or disease assessment. More intelligent, yet simple, consistent and less time-consuming methods will become more widespread, reducing user variability while also revealing information in a clearer, more visual way. Several routine image processing approaches, including localization, segmentation, registration, and fusion, are critical for enhancing and enabling the development of CAD techniques. However, changes in clinical workflow require significant adjustments and re-training and, despite the efforts of the academic research community to develop state-of-the-art algorithms and high-performance techniques, their footprint often hampers their clinical use. Currently, the main challenge seems to be not the lack of tools and techniques for medical image processing, analysis, and computing, but rather the lack of clinically feasible solutions that leverage the already developed and existing tools and techniques, together with a demonstration of the potential clinical impact of such tools. Recently, more and more effort has been dedicated to devising new algorithms for localization, segmentation or registration, while their intended clinical use and actual utility are dwarfed by scientific, algorithmic and developmental novelty that results in only incremental improvements over existing algorithms.
In this thesis, we propose and demonstrate the implementation and evaluation of several methodological guidelines that ensure the development of image processing tools --- localization, segmentation and registration --- and illustrate their use across several medical imaging modalities --- X-ray, computed tomography, ultrasound and magnetic resonance imaging --- and several clinical applications: lung CT image registration in support of the assessment of pulmonary nodule growth rate and disease progression from thoracic CT images; automated reconstruction of standing X-ray panoramas from multi-sector X-ray images for assessment of long limb mechanical axis and knee misalignment; and left and right ventricle localization, segmentation, reconstruction, and ejection fraction measurement from cine cardiac MRI or multi-plane trans-esophageal ultrasound images for cardiac function assessment. When devising and evaluating our developed tools, we use clinical patient data to illustrate the inherent clinical challenges associated with highly variable imaging data that need to be addressed before potential pre-clinical validation and implementation. In an effort to provide plausible solutions to the selected applications, the proposed methodological guidelines ensure the development of image processing tools that achieve sufficiently reliable solutions, which not only have the potential to address the clinical needs but are sufficiently streamlined to be translated into eventual clinical tools given proper implementation. G1: Reduce the number of degrees of freedom (DOF) of the designed tool, a plausible example being avoiding the use of inefficient non-rigid image registration methods. This guideline addresses the risk of artificial deformation during registration and aims at reducing complexity.
G2: Use shape-based features to represent the image content most efficiently, employing edges instead of, or in addition to, intensities and motion where useful. Edges capture the most useful information in the image and can be used to identify the most important image features; as a result, this guideline ensures more robust performance when key image information is missing. G3: Implement efficiently. This guideline focuses on efficiency in terms of the minimum number of steps required and on avoiding the recalculation of terms that only need to be computed once in an iterative process. An efficient implementation leads to reduced computational effort and improved performance. G4: Commence the workflow by establishing an optimized initialization and gradually converge toward the final acceptable result. This guideline aims to ensure reasonable outcomes in consistent ways; it avoids convergence to local minima while gradually ensuring convergence to the global minimum solution. These guidelines lead to the development of interactive, semi-automated or fully automated approaches that still enable clinicians to perform final refinements, while reducing overall inter- and intra-observer variability and ambiguity, increasing accuracy and precision, and offering mechanisms that can help provide a more consistent diagnosis in a timely fashion.

    ADVANCED INTRAOPERATIVE IMAGE REGISTRATION FOR PLANNING AND GUIDANCE OF ROBOT-ASSISTED SURGERY

    Robot-assisted surgery offers improved accuracy, precision, safety, and workflow for a variety of surgical procedures spanning different surgical contexts (e.g., neurosurgery, pulmonary interventions, orthopaedics). These systems can assist with implant placement, drilling, bone resection, and biopsy while reducing human errors (e.g., hand tremors and limited dexterity) and easing the workflow of such tasks. Furthermore, such systems can reduce radiation dose to the clinician in fluoroscopically guided procedures, since many robots can perform their task in the imaging field-of-view (FOV) without the surgeon. Robot-assisted surgery requires (1) a preoperative plan defined relative to the patient that instructs the robot to perform a task, (2) intraoperative registration of the patient to transform the planning data into the intraoperative space, and (3) intraoperative registration of the robot to the patient to guide the robot to execute the plan. However, despite the operational improvements achieved using robot-assisted surgery, there are geometric inaccuracies and significant workflow challenges associated with (1-3) that impact widespread adoption. This thesis aims to address these challenges by using image registration to plan and guide robot-assisted surgical (RAS) systems, to encourage greater adoption of robotic assistance across surgical contexts (in this work, spinal neurosurgery, pulmonary interventions, and orthopaedic trauma). The proposed methods will also be compatible with diverse imaging and robotic platforms (including low-cost systems) to improve the accessibility of RAS systems for a wide range of hospital and use settings.
This dissertation advances important components of image-guided, robot-assisted surgery, including: (1) automatic target planning using statistical models and surgeon-specific atlases for application in spinal neurosurgery; (2) intraoperative registration and guidance of a robot to the planning data using 3D-2D image registration (i.e., an “image-guided robot”) for assisting pelvic orthopaedic trauma; (3) advanced methods for intraoperative registration of planning data in deformable anatomy for guiding pulmonary interventions; and (4) extension of image-guided robotics in a piecewise rigid, multi-body context in which the robot directly manipulates anatomy for assisting ankle orthopaedic trauma

    ADVANCED MOTION MODELS FOR RIGID AND DEFORMABLE REGISTRATION IN IMAGE-GUIDED INTERVENTIONS

    Image-guided surgery (IGS) has been a major area of interest in recent decades that continues to transform surgical interventions and enable safer, less invasive procedures. In the preoperative contexts, diagnostic imaging, including computed tomography (CT) and magnetic resonance (MR) imaging, offers a basis for surgical planning (e.g., definition of target, adjacent anatomy, and the surgical path or trajectory to the target). At the intraoperative stage, such preoperative images and the associated planning information are registered to intraoperative coordinates via a navigation system to enable visualization of (tracked) instrumentation relative to preoperative images. A major limitation to such an approach is that motions during surgery, either rigid motions of bones manipulated during orthopaedic surgery or brain soft-tissue deformation in neurosurgery, are not captured, diminishing the accuracy of navigation systems. This dissertation seeks to use intraoperative images (e.g., x-ray fluoroscopy and cone-beam CT) to provide more up-to-date anatomical context that properly reflects the state of the patient during interventions to improve the performance of IGS. Advanced motion models for inter-modality image registration are developed to improve the accuracy of both preoperative planning and intraoperative guidance for applications in orthopaedic pelvic trauma surgery and minimally invasive intracranial neurosurgery. Image registration algorithms are developed with increasing complexity of motion that can be accommodated (single-body rigid, multi-body rigid, and deformable) and increasing complexity of registration models (statistical models, physics-based models, and deep learning-based models). 
For orthopaedic pelvic trauma surgery, the dissertation includes work encompassing: (i) a series of statistical models of shape and pose variations of one or more pelvic bones, together with an atlas of trajectory annotations; (ii) frameworks for automatic segmentation via registration of the statistical models to preoperative CT and for planning of fixation trajectories and dislocation / fracture reduction; and (iii) 3D-2D guidance using intraoperative fluoroscopy. For intracranial neurosurgery, the dissertation includes three inter-modality deformable registration methods using physics-based Demons and deep learning models for CT-guided and CBCT-guided procedures.

    Doctor of Philosophy

    Congenital heart defects are classes of birth defects that affect the structure and function of the heart. These defects are attributed to the abnormal or incomplete development of a fetal heart during the first few weeks following conception. The overall detection rate of congenital heart defects during routine prenatal examination is low. This is attributed to the insufficient number of trained personnel in many local health centers, where many cases of congenital heart defects go undetected. This dissertation presents a system to identify congenital heart defects, to improve pregnancy outcomes and increase detection rates. The system was developed and its performance assessed in identifying the presence of ventricular defects (congenital heart defects that affect the size of the ventricles) using four-dimensional fetal echocardiographic images. The designed system consists of three components: 1) a fetal heart location estimation component, 2) a fetal heart chamber segmentation component, and 3) a detection component that detects congenital heart defects from the segmented chambers. The location estimation component is used to isolate a fetal heart in any four-dimensional fetal echocardiographic image. It uses a hybrid region of interest extraction method that is robust to the speckle noise degradation inherent in all ultrasound images. The location estimation method's performance was analyzed on 130 four-dimensional fetal echocardiographic images by comparison with manually identified fetal heart regions of interest. The location estimation method showed good agreement with the manually identified standard using four quantitative indexes: the Jaccard index, the Sørenson-Dice index, the Sensitivity index and the Specificity index. The average values of these indexes were measured at 80.70%, 89.19%, 91.04%, and 99.17%, respectively.
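The four agreement indexes used in this evaluation can be computed from binary masks as follows (a straightforward sketch; the function and variable names are illustrative):

```python
import numpy as np

def overlap_indexes(pred, truth):
    """Jaccard, Sørenson-Dice, sensitivity, and specificity between a
    predicted binary mask and a manually identified reference mask."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()    # overlap
    fp = np.logical_and(pred, ~truth).sum()   # over-segmentation
    fn = np.logical_and(~pred, truth).sum()   # under-segmentation
    tn = np.logical_and(~pred, ~truth).sum()  # agreed background
    return {
        "jaccard": tp / (tp + fp + fn),
        "dice": 2 * tp / (2 * tp + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }
```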
The fetal heart chamber segmentation component uses velocity vector field estimates computed on frames contained in a four-dimensional image to identify the fetal heart chambers. The velocity vector fields are computed using a histogram-based optical flow technique, formulated on local image characteristics to reduce the effect of speckle noise and nonuniform echogenicity on the velocity vector field estimates. Features based on the velocity vector field estimates, voxel brightness/intensity values, and voxel Cartesian coordinate positions were extracted and used with a kernel k-means algorithm to identify the individual chambers. The segmentation method's performance was evaluated on 130 images from 31 patients by comparing the segmentation results with manually identified fetal heart chambers. Evaluation was based on the Sørenson-Dice index, the absolute volume difference and the Hausdorff distance, with per-patient average values of 69.92%, 22.08%, and 2.82 mm, respectively. The detection component uses the volumes of the identified fetal heart chambers to flag the possible occurrence of hypoplastic left heart syndrome, a type of congenital heart defect. An empirical volume threshold, defined on the relative ratio of adjacent fetal heart chamber volumes obtained manually, is used in the detection process. The performance of the detection procedure was assessed by comparison with a set of images with confirmed diagnosis of hypoplastic left heart syndrome and a control group of normal fetal hearts. Of the 130 images considered, 18 of 20 (90%) fetal hearts were correctly detected as having hypoplastic left heart syndrome and 84 of 110 (76.36%) fetal hearts were correctly detected as normal in the control group. The results show that the detection system performs better than the overall detection rate for congenital heart defects, which is reported to be between 30% and 60%.
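The volume-ratio flagging step might look like the following sketch. The threshold value and function name here are assumptions for illustration; the dissertation derives its empirical threshold from manually measured chamber volumes:

```python
def flag_hlhs(lv_volume, rv_volume, ratio_threshold=0.6):
    """Flag a possible hypoplastic left heart when the left-ventricle
    volume is disproportionately small relative to the right ventricle.
    The 0.6 threshold is illustrative, not the dissertation's value."""
    if rv_volume <= 0:
        raise ValueError("right-ventricle volume must be positive")
    return (lv_volume / rv_volume) < ratio_threshold
```

A severely undersized left ventricle (say, a quarter of the right-ventricle volume) would be flagged, while roughly balanced chambers would not.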

    Prostate Segmentation and Regions of Interest Detection in Transrectal Ultrasound Images

    The early detection of prostate cancer plays a significant role in the success of treatment and outcome. To detect prostate cancer, imaging modalities such as TransRectal UltraSound (TRUS) and Magnetic Resonance Imaging (MRI) are relied on. MRI images are more comprehensible than TRUS images, which are corrupted by noise such as speckle and shadowing. However, MRI screening is costly, often unavailable in many community hospitals, time consuming, and requires more patient preparation time. Therefore, TRUS is more popular for screening and biopsy guidance for prostate cancer, and for these reasons TRUS images are chosen in this research. Radiologists first segment the prostate from the ultrasound image and then identify the hypoechoic regions which are more likely to exhibit cancer and should be considered for biopsy. In this thesis, the focus is on prostate segmentation and on Regions of Interest (ROI) segmentation. First, the extraneous tissues surrounding the prostate gland are eliminated. Consequently, the process of detecting the cancerous regions is focused on the prostate gland only, and the diagnostic process is significantly shortened. Segmentation techniques such as thresholding, region growing, classification, clustering, Markov random field models, artificial neural networks (ANNs), atlas-guided methods, and deformable models are investigated. In this dissertation, the deformable model technique is selected because it is capable of segmenting difficult images such as ultrasound images. Deformable models are classified as either parametric or geometric. For the prostate segmentation, one of the parametric deformable models, the Gradient Vector Flow (GVF) deformable contour, is adopted because it is capable of segmenting the prostate gland even if the initial contour is not close to the prostate boundary. The manual segmentation of ultrasound images not only consumes much time and effort, but also leads to operator-dependent results.
Therefore, a fully automatic prostate segmentation algorithm is proposed based on knowledge-based rules. The new algorithm's results are evaluated with respect to manual outlining using distance-based and area-based metrics. The novel technique is also compared with two well-known semi-automatic algorithms to illustrate its superiority; with hypothesis testing, the proposed algorithm is statistically superior to the other two. The newly developed algorithm is operator-independent and capable of accurately segmenting a prostate gland of any shape and orientation from the ultrasound image. The focus of the second part of the research is to locate the regions which are more prone to cancer. Although the parametric dynamic contour technique can readily segment a single region, it is not conducive to segmenting multiple regions, as required in the regions of interest (ROI) segmentation part. Since the number of regions is not known beforehand, the problem is stated as a 3D one by using a level set approach to handle topology changes such as the splitting and merging of contours. For the proposed ROI segmentation algorithm, one of the geometric deformable models, active contours without edges, is used. This technique is capable of segmenting regions with weak edges, or even no edges at all. The results of the proposed ROI segmentation algorithm are compared with those of two experts' manual markings. The results are also compared with the common regions manually marked by both experts and with the total regions marked by either expert. The proposed ROI segmentation algorithm is also evaluated using region-based and pixel-based strategies. The evaluation results indicate that the proposed algorithm produces results similar to those of the experts' manual markings, but with the added advantages of being fast and reliable. This novel algorithm also detects some regions that were missed by one expert but confirmed by the other.
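The "active contours without edges" idea can be illustrated with a stripped-down region-based update: the level set moves according to how far each pixel lies from the mean intensities inside and outside the contour, which is why no edge information is needed. Curvature and length terms are omitted for brevity, so this is a sketch of the model's data terms, not the full formulation used in the thesis:

```python
import numpy as np

def chan_vese_step(phi, img, dt=0.5, lam1=1.0, lam2=1.0):
    """One simplified 'active contours without edges' update. phi is a
    level set (inside where phi > 0); c1 and c2 are the current mean
    intensities inside and outside the contour."""
    inside = phi > 0
    c1 = img[inside].mean() if inside.any() else 0.0
    c2 = img[~inside].mean() if (~inside).any() else 0.0
    # Pixels closer to c1 push phi up (join the inside); pixels closer
    # to c2 push phi down (join the outside).
    force = lam2 * (img - c2) ** 2 - lam1 * (img - c1) ** 2
    return phi + dt * force
```

Seeded with a small circle inside a bright square on a dark background, repeated steps grow the inside region until it coincides with the square, with no gradient or edge map involved.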
In conclusion, the two newly devised algorithms can assist experts in segmenting the prostate image and detecting the suspicious abnormal regions that should be considered for biopsy. This leads to a reduction in the number of biopsies, early detection of diseased regions, proper management, and a possible reduction in deaths related to prostate cancer.

    Segmentation of striatal brain structures from high resolution pet images

    Dissertation presented at the Faculty of Science and Technology of the New University of Lisbon in fulfillment of the requirements for the Master's degree in Electrical and Computer Engineering

We propose and evaluate fully automatic segmentation methods for the extraction of striatal brain surfaces (caudate, putamen, ventral striatum and white matter) from high resolution positron emission tomography (PET) images. In the preprocessing steps, both the right and the left striata were segmented from the high resolution PET images. This segmentation was achieved by delineating the brain surface, finding the plane that maximizes the reflective symmetry of the brain (the mid-sagittal plane) and, finally, extracting the right and left striata from both hemisphere images. The delineation of the brain surface and the extraction of the striata were achieved using the DSM-OS (Surface Minimization – Outer Surface) algorithm. The segmentation of striatal brain surfaces from the striatal images can be separated into two sub-processes: the construction of a graph (named the "voxel affinity matrix") and the graph clustering. The voxel affinity matrix was built using a set of image features that accurately informs the clustering method about the relationships between image voxels. The features defining the similarity of pairwise voxels were spatial connectivity, intensity values, and Euclidean distances. The clustering process is treated as a graph partition problem using two methods, a spectral one (multiway normalized cuts) and a non-spectral one (weighted kernel k-means). The normalized cuts algorithm relies on the computation of the graph eigenvalues to partition the graph into connected regions. However, this method fails when applied to high resolution PET images due to the high computational requirements arising from the image size.
On the other hand, the weighted kernel k-means iteratively classifies, with the aid of the image features, a given data set into a predefined number of clusters. The weighted kernel k-means and the normalized cuts algorithm are mathematically similar. After finding the optimal initial parameters of the weighted kernel k-means for this type of image, no further tuning is necessary for subsequent images. Our results showed that the putamen and ventral striatum were accurately segmented, while the caudate and white matter appeared to be merged in the same cluster. The putamen was divided into anterior and posterior areas. All the experiments resulted in the same type of segmentation, validating the reproducibility of our results.
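A voxel affinity matrix of the kind described, combining intensity similarity, Euclidean distance, and a spatial-connectivity cutoff, might be built as follows. The parameter values and function name are illustrative assumptions, not those of the dissertation:

```python
import numpy as np

def voxel_affinity(coords, intensities, sigma_i=1.0, sigma_d=2.0, radius=2.0):
    """Pairwise voxel affinity: Gaussian similarity in intensity and in
    Euclidean distance, zeroed outside a spatial-connectivity radius so
    only nearby voxels are linked in the graph."""
    coords = np.asarray(coords, float)
    intens = np.asarray(intensities, float)
    # Squared pairwise Euclidean distances and intensity differences.
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    i2 = (intens[:, None] - intens[None, :]) ** 2
    w = np.exp(-i2 / (2 * sigma_i ** 2)) * np.exp(-d2 / (2 * sigma_d ** 2))
    w[d2 > radius ** 2] = 0.0   # enforce spatial connectivity
    return w
```

The resulting symmetric matrix is exactly the graph input that either normalized cuts or weighted kernel k-means would partition.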

    A non-invasive image based system for early diagnosis of prostate cancer.

    Prostate cancer is the second most fatal cancer experienced by American males. The average American male has a 16.15% chance of developing prostate cancer, which is 8.38% higher than lung cancer, the second most likely cancer. The current in-vitro techniques, based on analyzing a patient's blood and urine, have several limitations concerning their accuracy. In addition, the Prostate-Specific Antigen (PSA) blood-based test has a high chance of false positive diagnosis, ranging from 28%-58%. Biopsy remains the gold standard for the assessment of prostate cancer, but only as a last resort because of its invasive nature, high cost, and potential morbidity. The major limitation of the relatively small needle biopsy samples is the higher possibility of producing a false positive diagnosis. Moreover, the visual inspection system (e.g., the Gleason grading system) is not a quantitative technique, and different observers may classify a sample differently, leading to discrepancies in diagnosis. As reported in the literature, early detection of prostate cancer is a crucial step in decreasing prostate cancer related deaths. Thus, there is an urgent need to develop an objective, non-invasive, image-based technology for early detection of prostate cancer. The objective of this dissertation is to develop a computer vision methodology, later translated into a clinically usable software tool, which can improve the sensitivity and specificity of early prostate cancer diagnosis, based on the well-known hypothesis that malignant tumors are better connected to the blood vessels than benign tumors.
Therefore, using either Diffusion Weighted Magnetic Resonance Imaging (DW-MRI) or Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI), we will be able to relate the amount of blood in detected prostate tumors to their malignancy by estimating either the Apparent Diffusion Coefficient (ADC) in the prostate or perfusion parameters. We intend to validate this hypothesis by demonstrating that automatic segmentation of the prostate from either DW-MRI or DCE-MRI, after handling its local motion, provides discriminatory features for early prostate cancer diagnosis. The proposed CAD system consists of three major components, the first two of which constitute new research contributions to a challenging computer vision problem. The three main components are: (1) a novel shape-based segmentation approach to segment the prostate from either low contrast DW-MRI or DCE-MRI data; (2) a novel iso-contours-based non-rigid registration approach to ensure voxel-on-voxel matches of all data, which may be more difficult due to gross patient motion, transmitted respiratory effects, and intrinsic and transmitted pulsatile effects; and (3) probabilistic models for the estimated diffusion and perfusion features for both malignant and benign tumors. Our results showed 98% classification accuracy using a Leave-One-Subject-Out (LOSO) approach based on the estimated ADC for 30 patients (12 diagnosed as malignant; 18 diagnosed as benign). These results show the promise of the proposed image-based diagnostic technique as a supplement to current technologies for diagnosing prostate cancer.
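A leave-one-subject-out evaluation loop of the kind described can be sketched as follows, here with a simple threshold classifier on a scalar feature standing in for the dissertation's probabilistic models; the names and data are illustrative:

```python
import numpy as np

def loso_accuracy(features, labels):
    """Leave-one-subject-out evaluation: each subject is held out in
    turn, a threshold classifier is fit on the remaining subjects (the
    midpoint of the two class means of a scalar feature, e.g. ADC),
    and the held-out subject is scored."""
    features = np.asarray(features, float)
    labels = np.asarray(labels)
    correct = 0
    for i in range(len(features)):
        train = np.arange(len(features)) != i
        f, y = features[train], labels[train]
        thresh = (f[y == 0].mean() + f[y == 1].mean()) / 2
        low_is_zero = f[y == 0].mean() < f[y == 1].mean()
        # Predict class 1 when the held-out feature falls on the
        # class-1 side of the training threshold.
        pred = int((features[i] >= thresh) == low_is_zero)
        correct += int(pred == labels[i])
    return correct / len(features)
```

Because the held-out subject never influences its own threshold, the resulting accuracy is an honest per-subject estimate rather than a resubstitution score.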