
    Simultaneous Tracking of Multiple Objects Using Fast Level Set-Like Algorithm

    The topological flexibility of implicit active contours is of great benefit, since it allows simultaneous detection of several objects without any a priori knowledge about their number or shapes. However, tracking applications often require that the desired objects remain mutually separated while each object is still free to evolve on its own: different objects must not merge, but a single object may split into several regions that merge again later in time. The former can be achieved by applying topology-preserving constraints that exploit either various repelling forces or the simple-point concept from digital geometry; these, however, considerably increase the execution time and also rule out the latter. In this paper, we propose a more efficient and more flexible topology-preserving constraint based on a region indication function that can be easily integrated into a fast level set-like algorithm [Maska, Matula, Danek, Kozubek, LNCS 6455, 2010], yielding a fast and robust algorithm for simultaneous tracking of multiple objects. The potential of the modified algorithm is demonstrated on both synthetic and real image data.
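A minimal sketch of how a region indication function can keep tracked objects separated while still allowing splits: each object carries an integer label, and a background pixel may only join an object if doing so would not connect it to a different object. The label-map representation and function names below are illustrative, not the paper's implementation.

```python
import numpy as np

def can_switch(labels, y, x, obj_id):
    """Topology-preserving test in the spirit of the region indication
    function: a background pixel may join object `obj_id` only if none
    of its 4-neighbours already belongs to a *different* object. This
    keeps distinct objects mutually separated while still letting a
    single object split and re-merge with itself."""
    h, w = labels.shape
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ny, nx = y + dy, x + dx
        if 0 <= ny < h and 0 <= nx < w:
            neighbour = labels[ny, nx]
            if neighbour != 0 and neighbour != obj_id:
                return False  # would touch another object -> forbidden
    return True

# Two objects one pixel apart: the gap pixel may not join either one
labels = np.zeros((3, 5), dtype=int)
labels[1, 1] = 1   # object 1
labels[1, 3] = 2   # object 2
print(can_switch(labels, 1, 2, 1))  # False: pixel (1,2) touches object 2
print(can_switch(labels, 0, 1, 1))  # True: only object-1 neighbours
```

Because the check is purely local, it can be evaluated in constant time per candidate pixel inside a fast level set sweep, unlike repelling-force schemes that require extra field computations.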

    Initial contour generation approach in level set methods for dental image segmentation

    Segmentation is a challenging process in medical images, especially in dental x-ray images. Level set methods give effective results in medical and dental image segmentation. The Initial Contour (IC) is an essential step in level set segmentation methods, since it starts the process efficiently. The main issue with the IC, however, is how to generate it automatically, so that human interaction is reduced, and how to choose a suitable IC so that the result is accurate. In this paper, a new region-based technique for IC generation is proposed to overcome this issue. The idea is to generate the most suitable IC, since manual initialization of the level set function surface is a well-known drawback for accurate segmentation: the result depends on the selected IC, and a wrong selection degrades it. We utilize statistical and morphological information inside and outside the contour to establish a region-based map function. This function is able to find a suitable IC for images to be processed by level set methods. Experiments on dental x-ray images demonstrate the robustness of the segmentation process using the proposed method, even on noisy images and images with weak boundaries. Furthermore, the computational cost of the segmentation process is reduced.

    Morphological region-based initial contour algorithm for level set methods in image segmentation

    The Initial Contour (IC) is an essential step in level set image segmentation methods, since it starts the process efficiently. The main issue with the IC, however, is how to generate it automatically, so that human interaction is reduced, and how to choose a suitable IC so that the result is accurate. In this paper, a new technique, which we call Morphological Region-Based Initial Contour (MRBIC), is proposed to overcome this issue. The idea is to generate the most suitable IC, since manual initialization of the level set function surface is a well-known drawback for accurate segmentation: the result depends on the selected IC, and a wrong selection degrades it. We utilize statistical and morphological information inside and outside the contour to establish a region-based map function. This function is able to find a suitable IC for images to be processed by level set methods. Experiments on synthetic and real images demonstrate the robustness of the segmentation process using the MRBIC method, even on noisy images and images with weak boundaries. Furthermore, the computational cost of the segmentation process is reduced using MRBIC.
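A hypothetical sketch of a region-based initial-contour map in the spirit of MRBIC: threshold the image at a simple global statistic (here the mean) and then apply a morphological opening to suppress noise before the mask is handed to a level set solver. The statistic, structuring element, and function names are assumptions for illustration; the paper's actual map function differs.

```python
import numpy as np

def erode(mask):
    """4-connected binary erosion via shifted copies (False-padded borders)."""
    p = np.pad(mask, 1, constant_values=False)
    return (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
            & p[1:-1, :-2] & p[1:-1, 2:])

def dilate(mask):
    """4-connected binary dilation via shifted copies."""
    p = np.pad(mask, 1, constant_values=False)
    return (p[1:-1, 1:-1] | p[:-2, 1:-1] | p[2:, 1:-1]
            | p[1:-1, :-2] | p[1:-1, 2:])

def initial_contour(image):
    """Statistical threshold followed by a morphological opening
    (erosion then dilation) to produce a noise-robust initial mask."""
    mask = image > image.mean()
    return dilate(erode(mask))

# A bright 3x3 square plus one isolated noisy pixel: the opening
# removes the speck and keeps an opened (cross-shaped) core of the square
img = np.zeros((7, 7))
img[2:5, 2:5] = 1.0
img[0, 6] = 1.0          # isolated noise pixel
ic = initial_contour(img)
print(ic.sum())           # 5 pixels: the opened square; the speck is gone
```

With the cross-shaped structuring element implied by 4-connectivity, the 3x3 square opens to a 5-pixel plus shape, which is still a usable seed region for a level set evolution, while the single-pixel noise is eliminated entirely.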

    An object tracking approach based on level set curves

    Many approaches have been developed to solve the target tracking problem, among them the statistical approach based on the level set method. Two of the main problems in real-time target tracking are the computational cost and the robustness of the algorithm when tracking deformable objects. In this article we present two techniques to address these problems. The first is a fast method that reduces the computational cost of the standard algorithm, based on using the sign of the speed function instead of its value, and on restricting computation to a specific zone near the zero level set. The second technique is that, when computing the statistical force at each point, we consider the average effect of the neighboring points instead of the effect of the processed point alone.
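The two techniques above can be sketched together in a simplified form: visit only a one-pixel band around the zero level set, average the force over each pixel's 3x3 neighbourhood, and let only the *sign* of that averaged speed decide whether a band pixel joins or leaves the region. The function names and the band construction are illustrative simplifications, not the article's exact algorithm.

```python
import numpy as np

def box_mean(f):
    """3x3 neighbourhood average (the averaged statistical force)."""
    h, w = f.shape
    p = np.pad(f, 1, mode='edge')
    acc = np.zeros_like(f, dtype=float)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            acc += p[dy:dy + h, dx:dx + w]
    return acc / 9.0

def neighbours(mask):
    """Pixels having at least one True 4-neighbour in `mask`."""
    h, w = mask.shape
    p = np.pad(mask, 1, constant_values=False)
    nb = np.zeros_like(mask)
    for dy, dx in ((0, 1), (2, 1), (1, 0), (1, 2)):
        nb |= p[dy:dy + h, dx:dx + w]
    return nb

def evolve_step(mask, force):
    """One sign-based iteration on the narrow band only:
    outside-band pixels with positive averaged speed join the region,
    inside-band pixels with negative averaged speed leave it."""
    avg = box_mean(force)
    outside_band = neighbours(mask) & ~mask      # background touching front
    inside_band = neighbours(~mask) & mask       # region touching background
    new = mask.copy()
    new[outside_band & (avg > 0)] = True
    new[inside_band & (avg < 0)] = False
    return new

# A uniformly positive speed grows a single seed by its 4-neighbourhood
mask = np.zeros((5, 5), dtype=bool)
mask[2, 2] = True
grown = evolve_step(mask, np.ones((5, 5)))
print(grown.sum())  # 5: the seed plus its four neighbours
```

Because no partial differential equation is solved and no signed distance function is maintained, each iteration costs only a few boolean operations per band pixel, which is the source of the speed-up the article targets.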

    Design and validation of Segment - freely available software for cardiovascular image analysis

    Background: Commercially available software for cardiovascular image analysis often has limited functionality and frequently lacks the careful validation that is required for clinical studies. We have already implemented a cardiovascular image analysis software package and released it as freeware for the research community. However, it was distributed as a stand-alone application and other researchers could not extend it by writing their own custom image analysis algorithms. We believe that the work required to make a clinically applicable prototype can be reduced by making the software extensible, so that researchers can develop their own modules or improvements. Such an initiative might then serve as a bridge between image analysis research and cardiovascular research. The aim of this article is therefore to present the design and validation of a cardiovascular image analysis software package (Segment) and to announce its release in source code format.
    Results: Segment can be used for image analysis in magnetic resonance imaging (MRI), computed tomography (CT), single photon emission computed tomography (SPECT) and positron emission tomography (PET). Some of its main features include loading of DICOM images from all major scanner vendors, simultaneous display of multiple image stacks and plane intersections, automated segmentation of the left ventricle, quantification of MRI flow, tools for manual and general object segmentation, quantitative regional wall motion analysis, myocardial viability analysis and image fusion tools. Here we present an overview of the validation results and validation procedures for the functionality of the software. We describe a technique to ensure the continued accuracy and validity of the software by implementing and using a test script that tests the functionality of the software and validates the output. The software has been made freely available for research purposes in source code format on the project home page http://segment.heiberg.se.
    Conclusions: Segment is a well-validated comprehensive software package for cardiovascular image analysis. It is freely available for research purposes provided that relevant original research publications related to the software are cited.

    Improvements to Quantification Algorithms for Myocardial Infarction in CMR Images - Validation in Human and Animal Studies

    Cardiac magnetic resonance (CMR) images are used to investigate the heart for medical and research purposes. By injecting a contrast agent into the patient, myocardial infarctions (heart attacks) can be visualized in CMR image sets consisting of a number of image slices at different levels of the heart. Analysis of these images can detect an infarction, delineate it and estimate its size. This information is then processed by physicians in order to make a diagnosis and decide the course of treatment. Manual delineations are time consuming and observer dependent, which is why an automated algorithm is desired. Previous work presents a validated automatic segmentation algorithm that calculates a threshold, based on a fixed number of standard deviations, to separate healthy-tissue pixels from infarction pixels. Theoretically, algorithms based on standard deviations are known to be susceptible to noise. The aim of this thesis was therefore to investigate whether other techniques could be used to compute a threshold that is less noise sensitive, in both humans and animals. The study included 40 humans and 18 pigs. Two different techniques for threshold calculation, both based on an Expectation-Maximization algorithm, were developed and integrated into the previously presented method. One implementation analyses each image slice separately (the slice method), and one takes all slices into account at once (the set method). The algorithms were evaluated by comparing the computed infarction volumes to volumes computed from manual delineations. Both algorithms show good agreement and low bias with respect to the reference standard. The slice method yielded the best results on high-resolution animal data. The set method yielded the best results in human CMR images and shows improved robustness at increasing noise levels. Both implementations show potential for fully automatic quantification of myocardial infarction.
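A minimal sketch of the Expectation-Maximization idea described above, not the thesis's validated implementation: fit a two-component 1-D Gaussian mixture to the pixel intensities and take the intensity where the two weighted densities cross as the healthy/infarct threshold. The initialisation, iteration count, and crossing search are assumptions for illustration.

```python
import numpy as np

def em_threshold(pixels, iters=50):
    """Fit a two-component 1-D Gaussian mixture by EM and return the
    intensity between the two means where the weighted densities cross."""
    x = np.asarray(pixels, dtype=float)
    # crude initialisation from the intensity range
    mu = np.array([x.min(), x.max()])
    sd = np.array([x.std(), x.std()]) + 1e-6
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each pixel
        pdf = np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
        r = w * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, standard deviations
        w = r.mean(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / r.sum(axis=0)) + 1e-6
    # threshold: grid point between the means where the densities cross
    grid = np.linspace(mu.min(), mu.max(), 1000)
    d = w * np.exp(-0.5 * ((grid[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    return grid[np.argmin(np.abs(d[:, 0] - d[:, 1]))]

# Synthetic intensities: a healthy population and a hyper-enhanced one
rng = np.random.default_rng(0)
healthy = rng.normal(100, 10, 2000)
infarct = rng.normal(200, 15, 500)
t = em_threshold(np.concatenate([healthy, infarct]))
print(t)  # falls between the two population means
```

Unlike a fixed number of standard deviations above the healthy mean, the crossing point of the fitted densities adapts to both populations, which is why an EM-derived threshold can be less sensitive to noise.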

    Automatic segmentation in CMR - Development and validation of algorithms for left ventricular function, myocardium at risk and myocardial infarction

    In this thesis four new algorithms are presented for automatic segmentation in cardiovascular magnetic resonance (CMR): automatic segmentation of the left ventricle, of myocardial infarction, and of myocardium at risk in two different image types. All four algorithms were implemented in freely available image analysis software and were validated against reference delineations, showing a low bias and high regional agreement. CMR is the most accurate and reproducible method for assessment of left ventricular mass and volumes, and it is the reference standard for assessment of myocardial infarction. CMR has also been validated against single photon emission computed tomography (SPECT) for assessment of myocardium at risk up to one week after acute myocardial infarction. However, the clinical standard for quantification of left ventricular mass and volumes is manual delineation, which has been shown to have a large bias between observers from different sites; for myocardium at risk and myocardial infarction there is no clinical standard, owing to the varying results reported for previously suggested threshold methods. The new automatic algorithms were all based on intensity classification by Expectation Maximization (EM) and on incorporation of a priori information specific to each application. Validation was performed in large patient cohorts with regard to bias in clinical parameters and to regional agreement measured as the Dice Similarity Coefficient (DSC). Further, images with reference delineations of the left ventricle were made available for future benchmarking of left ventricular segmentation, and the new automatic algorithms for segmentation of myocardium at risk and myocardial infarction were compared directly to the previously suggested intensity threshold methods.
    Combining intensity classification by EM with a priori information, as in the new automatic algorithms, was shown to be superior to previous methods, and specifically to the previously suggested threshold methods for myocardium at risk and myocardial infarction. The added value of using a priori information and intensity correction was significant when measured by DSC, even though it was not significant for bias. The previously suggested methods for infarct quantification performed worse in the new multi-center, multi-vendor patient data than in the original validation on animal studies and single-center patient studies. The results in this thesis thus also show the importance of using both bias and DSC for validation, and of performing validation on images of representative quality, as in multi-center, multi-vendor patient studies.
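The Dice Similarity Coefficient used throughout the validation has a simple closed form, DSC = 2|A∩B| / (|A| + |B|), and is straightforward to compute for binary masks; the masks below are toy data for illustration.

```python
import numpy as np

def dice(a, b):
    """Dice Similarity Coefficient between two binary masks:
    DSC = 2 * |A intersect B| / (|A| + |B|), in [0, 1]."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0           # both masks empty: perfect agreement
    return 2.0 * (a & b).sum() / denom

# Automatic vs. manual delineation on a toy 4x4 grid
auto = np.zeros((4, 4), dtype=bool)
auto[1:3, 1:3] = True        # 4 pixels
manual = np.zeros((4, 4), dtype=bool)
manual[1:3, 1:4] = True      # 6 pixels, 4 shared with `auto`
print(dice(auto, manual))    # 2*4 / (4+6) = 0.8
```

DSC penalises both over- and under-segmentation of the region boundary, which is why it complements a volume-bias measure: two delineations can agree perfectly in volume while overlapping poorly, and only the DSC exposes that.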