950 research outputs found

    A Survey on Ear Biometrics

    No full text
    Recognizing people by their ear has recently received significant attention in the literature. Several reasons account for this trend: first, ear recognition does not suffer from some problems associated with other non-contact biometrics, such as face recognition; second, it is the most promising candidate for combination with the face in the context of multi-pose face recognition; and third, the ear can be used for human recognition in surveillance videos where the face may be occluded completely or in part. Further, the ear appears to degrade little with age. Even though current ear detection and recognition systems have reached a certain level of maturity, their success is limited to controlled indoor conditions. In addition to variation in illumination, other open research problems include hair occlusion, earprint forensics, ear symmetry, ear classification, and ear individuality. This paper provides a detailed survey of research conducted in ear detection and recognition. It provides an up-to-date review of the existing literature, revealing the current state of the art not only for those who are working in this area but also for those who might exploit this new approach. Furthermore, it offers insights into some unsolved ear recognition problems as well as the ear databases available to researchers.

    Scene Segmentation and Object Classification for Place Recognition

    Get PDF
    This dissertation tries to solve the place recognition and loop closing problem in a way similar to the human visual system. First, a novel image segmentation algorithm is developed. The image segmentation algorithm is based on a Perceptual Organization model, which allows it to ‘perceive’ the special structural relations among the constituent parts of an unknown object and hence to group them together without object-specific knowledge. Then a new object recognition method is developed. Based on the fairly accurate segmentations generated by the image segmentation algorithm, an informative object description is built that includes not only appearance (colors and textures) but also part layout and shape information. Then a novel feature selection algorithm is developed. The feature selection method can select a subset of features that best describes the characteristics of an object class. Classifiers trained with the selected features can classify objects with high accuracy. In the next step, a subset of the salient objects in a scene is selected as landmark objects to label the place. The landmark objects are highly distinctive and widely visible. Each landmark object is represented by a list of SIFT descriptors extracted from the object surface. This object representation allows us to reliably recognize an object under certain viewpoint changes. To achieve efficient scene matching, an indexing structure is developed. Both the texture and color features of objects are used as indexing features; they are viewpoint-invariant and hence can be used to effectively find the candidate objects with surface characteristics similar to a query object. Experimental results show that the object-based place recognition and loop detection method can efficiently recognize a place in a large, complex outdoor environment.
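
    A minimal sketch of the kind of SIFT-based landmark matching described above, using OpenCV. The function names, the Lowe ratio threshold, and the overall structure are illustrative assumptions rather than the dissertation's actual implementation; in use, the stored landmark with the highest match count would serve as the best place hypothesis.

    import cv2

    def describe_object(gray_patch):
        # Extract SIFT descriptors from an object's (grayscale) image region.
        sift = cv2.SIFT_create()
        return sift.detectAndCompute(gray_patch, None)[1]

    def match_score(query_desc, landmark_desc, ratio=0.75):
        # Count Lowe-ratio-test matches between a query object and a stored landmark.
        if query_desc is None or landmark_desc is None:
            return 0
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        pairs = matcher.knnMatch(query_desc, landmark_desc, k=2)
        return sum(1 for p in pairs
                   if len(p) == 2 and p[0].distance < ratio * p[1].distance)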

    Reconhecimento automático de moedas medievais usando visão por computador (Automatic recognition of medieval coins using computer vision)

    Get PDF
    Master's dissertation in Informatics Engineering. The use of computer vision for the identification and recognition of coins is well studied and of renowned interest. However, the focus of research has consistently been on modern coins, and the algorithms used present quite disappointing results when applied to ancient coins. This discrepancy is explained by the nature of ancient coins, which are manually minted and exhibit many variations, failures, ripples, and centuries of degradation that further deform the characteristic patterns, making their identification a hard task even for humans. Another noteworthy factor in almost all similar studies is the controlled environment and uniform illumination of all images in the datasets. Although it makes sense to focus on the more problematic variables, this is a premise impossible to find outside the researchers’ laboratory, and therefore a problem that must be addressed. This dissertation focuses on medieval and ancient coin recognition in uncontrolled “real world” images, thus trying to pave the way for the use of the vast repositories of coin images all over the internet, which could be used to make our algorithms more robust. The first part of the dissertation proposes a fast and automatic method to segment ancient coins over complex backgrounds using a Histogram Backprojection approach combined with edge detection methods. Results are compared against an automation of the GrabCut algorithm. The proposed method achieves a Good or Acceptable rate on 76% of the images, taking an average of 0.29 s per image, against 49% in 19.58 s for GrabCut. Although this work is oriented to ancient coin segmentation, the method can also be used in other contexts presenting thin objects with uniform colors. In the second part, several state-of-the-art machine learning algorithms are compared in the search for the most promising approach to classify these challenging coins. The best results are achieved using dense SIFT descriptors organized into Bags of Visual Words, and using Support Vector Machine or Naïve Bayes as machine learning strategies.
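
    As a rough illustration of the segmentation stage, the following sketch backprojects a hue-saturation histogram of a background sample and inverts the result to obtain a coin mask. The histogram bin counts, the threshold, the morphology step, and the choice of modelling the background rather than the coin are assumptions for illustration, not the parameters used in the dissertation.

    import cv2

    def coin_mask(image_bgr, background_sample_bgr):
        # Build a hue-saturation histogram of a background sample and backproject it
        # onto the full image; inverting the backprojection leaves the foreground coin.
        hsv_img = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
        hsv_bg = cv2.cvtColor(background_sample_bgr, cv2.COLOR_BGR2HSV)
        bg_hist = cv2.calcHist([hsv_bg], [0, 1], None, [30, 32], [0, 180, 0, 256])
        cv2.normalize(bg_hist, bg_hist, 0, 255, cv2.NORM_MINMAX)
        backproj = cv2.calcBackProject([hsv_img], [0, 1], bg_hist, [0, 180, 0, 256], 1)
        _, bg_mask = cv2.threshold(backproj, 50, 255, cv2.THRESH_BINARY)
        fg_mask = cv2.bitwise_not(bg_mask)
        # Clean the mask with morphology; edge information could be combined at this point.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
        return cv2.morphologyEx(fg_mask, cv2.MORPH_OPEN, kernel)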

    Connected Attribute Filtering Based on Contour Smoothness

    Get PDF

    Kernel and Classifier Level Fusion for Image Classification.

    Get PDF
    Automatic understanding of visual information is one of the main requirements for a complete artificial intelligence system and an essential component of autonomous robots. State-of-the-art image recognition approaches are based on different local descriptors, each capturing some properties of the image such as intensity, color and texture. Each set of local descriptors is represented by a codebook and gives rise to a separate feature channel. For classification the feature channels are combined by using multiple kernel learning (MKL), early fusion or classifier level fusion approaches. Due to the importance of complementary information in fusion techniques, there is an increasing demand for diverse feature channels. The first part of the thesis focuses on the ways to encode information from images that is complementary to the state-of-the-art local features. To address this issue we present a novel image representation which can encode the structure of an object and propose three descriptors based on this representation. In the state-of-the-art recognition system the kernels are often computed independently of each other and thus may be highly informative yet redundant. Proper selection and fusion of the kernels is, therefore, crucial to maximize the performance and to address the efficiency issues in visual recognition applications. We address this issue in second part of the thesis where, we propose novel techniques to fuse feature channels for object and pattern recognition. We present an extensive evaluation of the fusion methods on four object recognition datasets and achieve state-of-the-art results on all of them. We also present results on four bioinformatics datasets to demonstrate that the proposed fusion methods work for a variety of pattern recognition problems, provided that we have multiple feature channels
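
    As a point of reference for kernel-level fusion, the sketch below averages per-channel chi-squared kernels computed from bag-of-words histograms and trains an SVM on the combined Gram matrix. This is a generic, unweighted baseline rather than the fusion techniques proposed in the thesis, and the channel names in the usage comment are hypothetical.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.metrics.pairwise import chi2_kernel

    def average_kernel(channels):
        # channels: list of (n_samples, n_features) non-negative BoW histograms,
        # one array per feature channel. Returns the unweighted mean of the
        # per-channel chi-squared Gram matrices.
        return np.mean([chi2_kernel(X) for X in channels], axis=0)

    # Hypothetical usage (train/test handling omitted for brevity):
    # K_train = average_kernel([bow_sift, bow_color, bow_texture])
    # clf = SVC(kernel="precomputed").fit(K_train, labels)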

    3D Object Registration and Recognition using Range Images

    Get PDF

    Detection and localization of specular surfaces using image motion cues

    Get PDF
    Successful identification of specularities in an image can be crucial for an artificial vision system when extracting the semantic content of an image or while interacting with the environment. We developed an algorithm that relies on scale and rotation invariant feature extraction techniques and uses motion cues to detect and localize specular surfaces. Appearance change in feature vectors is used to quantify the appearance distortion on specular surfaces, which has previously been shown to be a powerful indicator for specularity (Doerschner et al. in Curr Biol, 2011). The algorithm combines epipolar deviations (Swaminathan et al. in Lect Notes Comput Sci 2350:508-523, 2002) and appearance distortion, and succeeds in localizing specular objects in computer-rendered and real scenes, across a wide range of camera motions and speeds, object sizes and shapes, and performs well under image noise and blur conditions. © 2014 Springer-Verlag Berlin Heidelberg
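
    A crude sketch of the appearance-distortion cue: SIFT features are extracted in two frames taken under camera motion, each frame-1 feature is matched to its nearest descriptor in frame 2, and the matching distance is taken as a per-feature distortion score (features on specular surfaces tend to change appearance strongly as the camera moves). Using the plain nearest-neighbour distance is an assumption standing in for the paper's actual measure, and the epipolar-deviation cue is not computed here.

    import cv2
    import numpy as np

    def appearance_distortion(frame1_gray, frame2_gray):
        # Returns an array of (x, y, distortion) rows, one per frame-1 feature, where
        # distortion is the L2 distance to its nearest SIFT descriptor in frame 2.
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(frame1_gray, None)
        _, des2 = sift.detectAndCompute(frame2_gray, None)
        if des1 is None or des2 is None:
            return np.empty((0, 3))
        matches = cv2.BFMatcher(cv2.NORM_L2).match(des1, des2)
        return np.array([(*kp1[m.queryIdx].pt, m.distance) for m in matches])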