
    Active Contours and Image Segmentation: The Current State of the Art

    Image segmentation is a fundamental task in image analysis responsible for partitioning an image into multiple sub-regions based on a desired feature. Active contours have been widely used as attractive image segmentation methods because they always produce sub-regions with continuous boundaries, whereas kernel-based edge detection methods, e.g. Sobel edge detectors, often produce discontinuous boundaries. The use of level set theory has provided more flexibility and convenience in the implementation of active contours. However, traditional edge-based active contour models have been applicable only to relatively simple images whose sub-regions are uniform and free of internal edges. In this paper we briefly review the taxonomy and current state of the art in image segmentation and the use of active contours.
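    To make the contrast concrete, the following is a minimal sketch (not taken from the paper) comparing a Sobel edge map with an edge-based, level-set-style active contour, using scikit-image's morphological geodesic active contour; the sample image, iteration count and balloon setting are illustrative assumptions.

```python
import numpy as np
from skimage import data, filters, segmentation

image = data.coins().astype(float) / 255.0     # sample grayscale image

# Kernel-based edge detection: responds only to local gradients, so the
# resulting edge map can contain gaps (discontinuous boundaries).
edges = filters.sobel(image)

# Edge-based active contour via a morphological level set: the contour is
# attracted to strong gradients but always remains a closed boundary.
gimage = segmentation.inverse_gaussian_gradient(image)
init_ls = np.zeros(image.shape, dtype=np.int8)
init_ls[10:-10, 10:-10] = 1                    # start from a large rectangle
level_set = segmentation.morphological_geodesic_active_contour(
    gimage, 200, init_level_set=init_ls, smoothing=1, balloon=-1)

print(edges.shape, int(level_set.sum()))       # raw edge map vs. closed-region mask
```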

    Automatic lumen segmentation in IVOCT images using binary morphological reconstruction

    Background: Atherosclerosis causes millions of deaths annually, yielding billions in expenses around the world. Intravascular Optical Coherence Tomography (IVOCT) is a medical imaging modality that displays high-resolution images of coronary cross-sections. Nonetheless, quantitative information can only be obtained with segmentation; consequently, more adequate diagnostics, therapies and interventions can be provided. Since it is a relatively new modality, many segmentation methods available in the literature for other modalities could be successfully applied to IVOCT images, improving accuracy and usefulness. Method: An automatic lumen segmentation approach, based on the Wavelet Transform and Mathematical Morphology, is presented. The methodology is divided into three main parts. First, the preprocessing stage attenuates undesirable information and enhances important information. Second, in the feature extraction block, the wavelet transform is combined with an adapted version of Otsu thresholding; hence, tissue information is discriminated and binarized. Finally, binary morphological reconstruction improves the binary information and constructs the binary lumen object. Results: The evaluation was carried out by segmenting 290 challenging images from human and pig coronaries and rabbit iliac arteries; the outcomes were compared with gold standards made by experts. The resulting accuracy was: True Positive (%) = 99.29 ± 2.96, False Positive (%) = 3.69 ± 2.88, False Negative (%) = 0.71 ± 2.96, Max False Positive Distance (mm) = 0.1 ± 0.07, Max False Negative Distance (mm) = 0.06 ± 0.1. Conclusions: By segmenting a number of IVOCT images with various features, the proposed technique proved to be robust and more accurate than published studies; in addition, the method is completely automatic, providing a new tool for IVOCT segmentation. Acknowledgements: São Paulo Research Foundation, Brazil (FAPESP, Process Number 2012/15721-2), National Council of Scientific and Technological Development, Brazil (CNPq), Heart Institute of São Paulo, Brazil (InCor), Biomedical Engineering Laboratory of the University of São Paulo, Brazil (LEB-USP). The anonymous reviewers, who have made important contributions to this work.
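    The following is a rough sketch of the kind of wavelet + Otsu + binary-morphological-reconstruction chain the abstract describes, not the authors' implementation: the wavelet family, decomposition level, structuring-element size and the final step (which stops at a cleaned binary tissue mask rather than the full lumen object) are simplifying assumptions.

```python
import numpy as np
import pywt
from skimage import filters, morphology

def segment_tissue_mask(image):
    # 1. Preprocessing / feature extraction: keep the wavelet approximation
    #    band to attenuate speckle while preserving coarse tissue structure.
    coeffs = pywt.wavedec2(image, 'haar', level=2)
    approx = coeffs[0]

    # 2. Binarize the tissue information with Otsu's threshold.
    binary = approx > filters.threshold_otsu(approx)

    # 3. Binary morphological reconstruction: erode to obtain a marker, then
    #    reconstruct under the binary mask so that small artifacts disappear
    #    while the shape of the main object is restored.
    marker = morphology.binary_erosion(binary, morphology.disk(3))
    reconstructed = morphology.reconstruction(marker.astype(np.uint8),
                                              binary.astype(np.uint8),
                                              method='dilation')
    return reconstructed.astype(bool)
```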

    Intravascular Ultrasound

    Intravascular ultrasound (IVUS) is a cardiovascular imaging technology using a specially designed catheter with a miniaturized ultrasound probe for the assessment of vascular anatomy, with detailed visualization of arterial layers. Over the past two decades, this technology has developed into an indispensable tool for research and clinical practice in cardiovascular medicine, offering the opportunity to gather diagnostic information about the process of atherosclerosis in vivo and to directly observe the effects of various interventions on the plaque and arterial wall. This book aims to give a comprehensive overview of this rapidly evolving technique, from basic principles and instrumentation to research and clinical applications, with future perspectives.

    Diabetic retinopathy diagnosis through multi-agent approaches

    Programa Doutoral em Engenharia Biomédica (Doctoral Programme in Biomedical Engineering). Diabetic retinopathy has emerged as a serious public health problem in the Western world, since it is the most common cause of vision impairment among people of working age. Early diagnosis and adequate treatment can prevent loss of vision. Thus, a regular screening program to detect diabetic retinopathy in its early stages could be effective in preventing blindness. Due to its characteristics, digital color fundus photography has been the preferred eye examination method adopted in these programs. Nevertheless, due to the growing incidence of diabetes in the population, ophthalmologists have to examine a huge number of images. Therefore, the development of computational tools that can assist the diagnosis is of major importance. Several works have been published in recent years for this purpose, but an automatic system suitable for clinical practice has yet to appear. In general, these algorithms are used to normalize, segment and extract information from images, which is then used by classifiers that aim to classify the regions of the fundus image. These methods are mostly based on global approaches that cannot be locally adapted to image properties; therefore, none of them performs as needed, because of the complexity of fundus images. This thesis focuses on the development of new tools based on multi-agent approaches to assist early diagnosis of diabetic retinopathy. Automatic segmentation of fundus images for diabetic retinopathy diagnosis should comprise both pathological features (dark and bright lesions) and anatomical features (optic disc, blood vessels and fovea). Accordingly, systems for optic disc detection, bright lesion segmentation, blood vessel segmentation and dark lesion segmentation were implemented and, when possible, compared to approaches already described in the literature. Two kinds of agent-based systems were investigated and applied to digital color fundus photographs: an ant colony system and a multi-agent system composed of reactive agents with interaction mechanisms between them. The ant colony system was used for optic disc detection and bright lesion segmentation. Multi-agent system models were developed for blood vessel segmentation and small dark lesion segmentation. The multi-agent system models created in this study are not image processing techniques on their own; rather, they are used as tools to improve the results of traditional algorithms at the micro level. The results of all the proposed approaches are very promising and reveal that the systems created perform better than other recent methods described in the literature. Therefore, the main scientific contribution of this thesis is to prove that multi-agent-based approaches can be effective in segmenting structures in retinal images. Such an approach overcomes classic image processing algorithms, which are limited to macro results and do not consider the local characteristics of images. Hence, multi-agent-based approaches could be a fundamental tool for developing a very efficient system to be used in screening programs for early diagnosis of diabetic retinopathy. Work supported by FEDER funds through the "Programa Operacional Factores de Competitividade – COMPETE" and by national funds through FCT – Fundação para a Ciência e a Tecnologia. C. Pereira thanks the FCT for grant SFRH/BD/61829/2009.
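    As a toy illustration of the ant colony idea applied to a retinal (or any grayscale) image, the sketch below has reactive agents ("ants") wander over the image, move preferentially toward high-gradient pixels, and deposit pheromone there, so that structures emerge in the accumulated pheromone map. It is not the thesis's implementation: the number of ants, evaporation rate, step count and deposit rule are all illustrative assumptions.

```python
import numpy as np
from skimage import filters

def ant_colony_map(image, n_ants=200, n_steps=500, evaporation=0.05, seed=0):
    rng = np.random.default_rng(seed)
    gradient = filters.sobel(image)              # local attractiveness (heuristic term)
    pheromone = np.full(image.shape, 1e-3)       # small initial pheromone everywhere
    ants = rng.integers(0, image.shape, size=(n_ants, 2))   # random start pixels

    offsets = np.array([(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                        if (dr, dc) != (0, 0)])
    for _ in range(n_steps):
        for i, (r, c) in enumerate(ants):
            nbrs = np.clip(np.array([r, c]) + offsets, 0,
                           np.array(image.shape) - 1)
            # transition probability proportional to pheromone * heuristic
            weights = (pheromone[nbrs[:, 0], nbrs[:, 1]]
                       * (gradient[nbrs[:, 0], nbrs[:, 1]] + 1e-6))
            probs = weights / weights.sum()
            r, c = nbrs[rng.choice(len(nbrs), p=probs)]
            ants[i] = (r, c)
            pheromone[r, c] += gradient[r, c]    # deposit proportional to edge strength
        pheromone *= (1.0 - evaporation)         # evaporation keeps the map sparse
    return pheromone                             # threshold this map to extract structures
```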

    A feasibility study on the use of agent-based image recognition on a desktop computer for the purpose of quality control in a production environment

    Thesis (M. Tech.) - Central University of Technology, Free State, 2006. A multi-threaded, multi-agent image recognition software application called RecMaster has been developed specifically for the purpose of quality control in a production environment. This entails using the system as a monitor to identify invalid objects moving on a conveyor belt and to pass the relevant information on to an attached device, such as a robotic arm, which will remove the invalid object. The main purpose of developing this system was to prove that a desktop computer could run an image recognition system efficiently, without the need for high-end, high-cost, specialised computer hardware. The programme operates by assigning each agent a task in the recognition process and then waiting for resources to become available. Tasks related to edge detection, colour inversion, image binarisation and perimeter determination were assigned to individual agents. Each agent is loaded onto its own processing thread, with some of the agents delegating their subtasks to other processing threads. This enables the application to utilise the available system resources more efficiently. The application is very limited in its scope, as it requires a uniform image background as well as little to no variance in camera zoom levels and object-to-lens distance. This study focused solely on the development of the application software, and not on the setting up of the actual imaging hardware. The imaging device on which the system was tested was a webcam capable of a 640 x 480 resolution. As such, all image capture and processing was done on images with a horizontal resolution of 640 pixels and a vertical resolution of 480 pixels, so as not to distort image quality. The application locates objects on an image feed - which can be in the format of a still image, a video file or a camera feed - and compares these objects to a previously created model of the target object. The coordinates of the object are calculated and translated into coordinates on the conveyor system. These coordinates are then passed on to an external recipient, such as a robotic arm, via a serial link. The system has been applied to the model of a DVD and tested against a variety of similar and dissimilar objects to determine its accuracy. The tests were run on both an AMD- and an Intel-based desktop computer system, with the results indicating that both systems are capable of efficiently running the application. On average, the AMD-based system tended to be 81% faster at matching objects in still images, and 100% faster at matching objects in moving images. The system made matches within an average time frame of 250 ms, making the process fast enough to be used on an actual conveyor system. On still images, the results showed an 87% success rate for the AMD-based system, and 73% for Intel. For moving images, however, both systems showed a 100% success rate.
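    The sketch below illustrates the "one agent per task, one thread per agent" structure described above, with worker threads passing frames downstream through queues. The stage implementations (colour inversion, Otsu binarisation, per-object perimeter measurement via scikit-image) are stand-ins for RecMaster's internals, and the synthetic 640 x 480 frame is an assumption for demonstration.

```python
import queue
import threading
import numpy as np
from skimage import filters, measure

def make_agent(task, inbox, outbox):
    """Run `task` on items from `inbox` in a dedicated thread; forward results to `outbox`."""
    def run():
        while True:
            item = inbox.get()
            if item is None:          # poison pill shuts the agent down
                outbox.put(None)
                break
            outbox.put(task(item))
    t = threading.Thread(target=run, daemon=True)
    t.start()
    return t

q_in, q_inv, q_bin, q_out = (queue.Queue() for _ in range(4))
make_agent(lambda img: 1.0 - img, q_in, q_inv)                           # colour inversion agent
make_agent(lambda img: img > filters.threshold_otsu(img), q_inv, q_bin)  # binarisation agent
make_agent(lambda b: [r.perimeter for r in measure.regionprops(measure.label(b))],
           q_bin, q_out)                                                 # perimeter agent

frame = np.ones((480, 640))           # bright conveyor background
frame[200:280, 280:360] = 0.2         # darker rectangular object (e.g. a DVD stand-in)
q_in.put(frame)
q_in.put(None)
print(q_out.get())                    # perimeter(s) of the detected object(s)
```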

    Methodology for extensive evaluation of semiautomatic and interactive segmentation algorithms using simulated interaction models

    The performance of semiautomatic and interactive segmentation (SIS) algorithms is usually evaluated by employing a small number of human operators to segment the images. The human operators typically provide the approximate location of objects of interest and their boundaries in an interactive phase, which is followed by an automatic phase where the segmentation is performed under the constraints of the operator-provided guidance. The segmentation results produced from this small set of interactions do not represent the true capability and potential of the algorithm being evaluated. For example, due to inter-operator variability, human operators may make choices that provide either overestimated or underestimated results. Moreover, their choices may not be realistic compared to how the algorithm is used in the field, since interaction may be influenced by operator fatigue and lapses in judgement. Other drawbacks to using human operators to assess SIS algorithms include human error, the lack of available expert users, and the expense. A methodology for evaluating segmentation performance is proposed here which uses simulated interaction models to programmatically generate large numbers of interactions, ensuring the presence of interactions throughout the object region. These interactions are used to segment the objects of interest, and the resulting segmentations are then analysed using statistical methods. The large number of interactions generated by simulated interaction models captures the variability in user interactions by treating every pixel inside the object region as an equally probable location for an interaction. Because of the practical limitation imposed by the enormous amount of computation that the full set of possible interactions would require, uniform sampling of interactions at regular intervals is used to generate a subset that still represents the diverse pattern of the entire set. Categorizing interactions into different groups, based on the position of the interaction inside the object region and the texture properties of the image region where the interaction is located, enables fine-grained algorithm performance analysis along these two criteria. Applying statistical hypothesis testing makes the analysis more accurate, scientific and reliable than conventional evaluation of semiautomatic segmentation algorithms. The proposed methodology has been demonstrated in two case studies, through the implementation of seven different algorithms with three different interaction modes, making a total of nine segmentation applications used to assess the efficacy of the methodology. Applying this methodology has revealed in-depth, fine-grained details about the performance of the segmentation algorithms that currently existing methods could not achieve, due to the absence of a large, unbiased set of interactions. Practical application of the methodology to a number of algorithms and diverse interaction modes has shown its feasibility and generality, establishing it as an appropriate methodology. Developing this methodology into a tool for automatic evaluation of SIS algorithm performance looks very promising for users of image segmentation.
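    The following is a minimal sketch of the interaction-simulation idea, not the thesis's evaluation framework: seed-point interactions are generated on a uniform grid inside a ground-truth object mask, each seed drives a seed-based segmentation (scikit-image's flood fill is used here as a stand-in for the algorithm under test), and per-interaction Dice scores are collected for later statistical analysis. The grid spacing, the flood-fill tolerance and the choice of Dice as the score are assumptions.

```python
import numpy as np
from skimage.segmentation import flood

def simulate_interactions(image, gt_mask, spacing=10, tolerance=0.1):
    """Return Dice scores for one simulated seed interaction per grid point inside gt_mask."""
    scores = []
    rows, cols = np.nonzero(gt_mask)
    for r in range(rows.min(), rows.max() + 1, spacing):       # uniform sampling at regular intervals
        for c in range(cols.min(), cols.max() + 1, spacing):
            if not gt_mask[r, c]:
                continue                                       # keep seeds inside the object region
            seg = flood(image, (r, c), tolerance=tolerance)    # segmentation driven by this interaction
            inter = np.logical_and(seg, gt_mask).sum()
            dice = 2.0 * inter / (seg.sum() + gt_mask.sum())
            scores.append(dice)
    return np.array(scores)   # score distribution over interactions, ready for hypothesis testing
```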

    Forschungsbericht Universität Mannheim 2006 / 2007

    In it you will find, on the one hand, summary presentations of the university's main research areas and research profiles and of their development in research. On the other hand, the research report gives an overview of the publications and research projects of the chairs, professorships and central research institutions. These are supplemented by information on the organization of research events, participation in research committees, an overview of third-party funds acquired for research purposes, doctoral degrees and habilitations, prizes and honours, and supporters of the Universität Mannheim. This demonstrates the breadth and diversity of the research activities and their success at the national and international level.