5 research outputs found

    Identification of Hemorrhages in Iris Using Hybrid Morphological Method

    Get PDF
    In ophthalmology, hemorrhages are encountered increasingly often because of the growing number of diabetic patients. Distinguishing hemorrhages from blood vessels is a persistent challenge for ophthalmologists and leads to various problems. Techniques employed in the past for hemorrhage detection were not very accurate and often misclassified hemorrhages as blood vessels or vice versa. Precise detection and classification of hemorrhages and blood vessels is very important for diagnosis. This paper presents an automated procedure for recognizing hemorrhages in fundus images; the recognition of hemorrhages is one of the critical factors in the early diagnosis of diabetic retinopathy. The algorithm proceeds through several steps, including image enhancement, image subtraction, and morphological operations such as thresholding, image strengthening, thinning, erosion, morphological closing, and image complement, to suppress blood vessels and highlight the hemorrhages.
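
    A minimal Python/OpenCV sketch of this kind of pipeline is given below, covering a subset of the listed steps (enhancement, subtraction, thresholding, closing, and an opening step standing in for vessel suppression). The input file name, kernel sizes, and thresholds are illustrative assumptions, not values from the paper.

    # Hedged sketch of a morphological pipeline for highlighting candidate
    # hemorrhages in a fundus image; all parameters are illustrative.
    import cv2

    img = cv2.imread("fundus.png")                      # assumed input file
    assert img is not None, "fundus.png not found"
    green = img[:, :, 1]                                # green channel: best lesion contrast

    # Image enhancement: contrast-limited adaptive histogram equalization
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(green)

    # Image subtraction: remove the slowly varying background (median estimate),
    # so that dark structures (vessels, hemorrhages) become bright
    background = cv2.medianBlur(enhanced, 51)
    subtracted = cv2.subtract(background, enhanced)

    # Thresholding (Otsu) to get a binary map of candidate dark structures
    _, binary = cv2.threshold(subtracted, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Morphological closing with a small disc consolidates fragmented blobs
    small_disc = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, small_disc)

    # Vessel suppression stand-in: opening with a larger disc erodes thin,
    # line-like vessel segments while roundish hemorrhage candidates survive
    large_disc = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
    candidates = cv2.morphologyEx(closed, cv2.MORPH_OPEN, large_disc)

    cv2.imwrite("hemorrhage_candidates.png", candidates)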

    Apreensão e discretização de ambientes tangíveis em sistemas de realidade aumentada (Capture and discretization of tangible environments in augmented reality systems)

    Get PDF
    Integrated master's dissertation in Informatics Engineering. Augmented Reality (AR) is characterized by the mixing of virtual elements into the real world, interactively and in real time. The concept of AR raises a wide variety of questions about the visual coherence between real and virtual objects in an environment. To improve the process of including these elements in the physical environment, a number of computer vision techniques and algorithms have been created which, through the mapping of physical spaces, the extraction of features and fiducial markers of objects, verification, detection, identification, classification, and more, make it possible to analyse and structure the content of a scene. The greatest challenge of this dissertation proposal lies in how the information obtained from the sensors that complement today's AR devices is extracted and processed, in order to represent and understand the surrounding environments as well as possible and to prepare a space suited to introducing and presenting virtual content as harmoniously as possible.
    This document presents the state of the art on the aforementioned topics in order to explore, improve and develop new techniques and paradigms that, from the information of the generic sensors found in many current mobile devices and augmented reality glasses, extract various features of the scene and surrounding objects in real time. The processing of this information has as its final goal the recognition and understanding of the scene and of the objects in the space surrounding these sensors. In parallel with this dissertation proposal, a framework called "Tangible Environments in Augmented Reality Systems" (TEARS) was developed to demonstrate everything discussed in this document, not only for scientific research purposes but also for use in, and in support of, a project and prototype carried out within the 5th-year curricular unit Projeto em Engenharia Informática (PEI) of the Mestrado Integrado em Engenharia Informática (MIEI), titled "Assistência Remota com Realidade Mista" (ARRM; Remote Assistance with Mixed Reality).
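
    As an illustration of the low-level scene capture the abstract refers to (fiducial markers and feature extraction from a single camera frame), here is a hedged Python/OpenCV sketch. It is not the TEARS framework itself; it assumes opencv-python >= 4.7 (for the current ArUco API) and an input file frame.png.

    # Illustrative sketch only: detect fiducial markers and natural features
    # in one camera frame, the kind of low-level input an AR pipeline can use
    # to anchor virtual content. File name and parameters are assumptions.
    import cv2

    frame = cv2.imread("frame.png")
    assert frame is not None, "frame.png not found"
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Fiducial markers: ArUco detection gives marker ids and corner positions
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())
    corners, ids, _rejected = detector.detectMarkers(gray)
    print("markers found:", [] if ids is None else ids.flatten().tolist())

    # Natural features: ORB keypoints/descriptors for markerless scene structure
    orb = cv2.ORB_create(nfeatures=500)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    print("ORB keypoints:", len(keypoints))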

    Glossary of Computer Vision Terms in Connection to Information Fusion

    No full text
    This glossary intends to extend the Haralick-Shapiro glossary of computer vision terms in the direction of information fusion in image understanding.

    Neural Network 'Surgery': Transplantation of Hidden Units

    No full text
    We present a novel method for combining the knowledge of several neural networks by replacement of hidden units. When neural networks are applied to digital image analysis, the underlying spatial structure of the image can be propagated into the network and used to visualize its weights (WV-diagrams). This visualization tool helps to interpret the behaviour of hidden units. We observe a process of specialization in certain hidden units, while others remain apparently useless. These useless units are cut out of one network and replaced by units taken from other networks trained for the same task with different parameters. The new, combined network achieves better prediction accuracy than either of the two original ones. This constitutes a special kind of information fusion in image understanding.
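
    The transplantation idea can be illustrated with a small sketch for two single-hidden-layer networks of identical shape: each hidden unit owns one column of the input-to-hidden weights, one bias entry, and one row of the hidden-to-output weights, so replacing a unit means copying those three slices. The choice of which units count as useless is an assumption here, not the paper's inspection-based selection.

    # Minimal sketch of transplanting hidden units between two networks of
    # identical architecture; the unit selection is an illustrative assumption.
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hidden, n_out = 16, 8, 2

    def make_net():
        return {
            "W1": rng.normal(size=(n_in, n_hidden)),   # input -> hidden weights
            "b1": rng.normal(size=n_hidden),           # hidden biases
            "W2": rng.normal(size=(n_hidden, n_out)),  # hidden -> output weights
        }

    net_a, net_b = make_net(), make_net()

    def transplant(dst, src, units):
        """Replace the listed hidden units of dst with the same units of src.

        A hidden unit j owns one column of W1, one bias entry, and one row of
        W2, so copying those three slices moves its incoming and outgoing weights.
        """
        for j in units:
            dst["W1"][:, j] = src["W1"][:, j]
            dst["b1"][j] = src["b1"][j]
            dst["W2"][j, :] = src["W2"][j, :]

    # Suppose inspection suggested units 2 and 5 of net_a are "useless";
    # replace them with the corresponding specialized units of net_b.
    transplant(net_a, net_b, units=[2, 5])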

    Mapping the Retina By Information Fusion of Multiple Medical Datasets

    No full text
    This paper reports results from interdisciplinary research: A framework for the integration of multiple information in image analysis called "information fusion in image understanding" is applied to provide a new visualization scheme for diagnosis and treatment of the human retina. This framework deals with representations and processes at all levels of abstraction. It is used to represent anatomical and pathological knowledge, to extract significant features from the input channels, and to obtain a complex diagnosis by means of fusion. Each patient is examined in six steps using a Scanning Laser Ophthalmoscope (SLO) providing several spectral channels and aperture settings, as well as static scotometry to measure scotoma (areas with a loss of visual function). Feature extraction processes yield dark (fovea) and bright (leakage) blobs at several scales, clusters of scotoma measures, tubes (blood vessels), and circular areas (optic disc) in six different image descriptions. By affine ma..
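
    One of the feature-extraction steps mentioned above, finding dark and bright blobs at several scales, can be sketched with a Laplacian-of-Gaussian detector (scikit-image's blob_log). The synthetic image and parameter values below are purely illustrative stand-ins for the SLO channels used in the paper.

    # Hedged sketch of multi-scale dark/bright blob detection; the synthetic
    # image and all parameters are illustrative, not from the paper.
    import numpy as np
    from skimage.feature import blob_log

    # Synthetic "retina-like" image: mid-grey background with one bright and
    # one dark gaussian blob standing in for a leakage and the fovea.
    yy, xx = np.mgrid[0:256, 0:256].astype(float)
    img = 0.5 * np.ones((256, 256))
    img += 0.4 * np.exp(-((yy - 80) ** 2 + (xx - 90) ** 2) / (2 * 8 ** 2))     # bright blob
    img -= 0.4 * np.exp(-((yy - 170) ** 2 + (xx - 160) ** 2) / (2 * 12 ** 2))  # dark blob

    # Bright blobs: run LoG on the image directly; dark blobs: run it on 1 - img.
    bright = blob_log(img, min_sigma=4, max_sigma=20, num_sigma=8, threshold=0.1)
    dark = blob_log(1.0 - img, min_sigma=4, max_sigma=20, num_sigma=8, threshold=0.1)

    print("bright blobs (y, x, sigma):", bright)
    print("dark blobs   (y, x, sigma):", dark)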