
    What else does your biometric data reveal? A survey on soft biometrics

    Get PDF
    Recent research has explored the possibility of extracting ancillary information from primary biometric traits, viz., face, fingerprints, hand geometry and iris. This ancillary information includes personal attributes such as gender, age, ethnicity, hair color, height, weight, etc. Such attributes are known as soft biometrics and have applications in surveillance and indexing biometric databases. These attributes can be used in a fusion framework to improve the matching accuracy of a primary biometric system (e.g., fusing face with gender information), or can be used to generate qualitative descriptions of an individual (e.g., "young Asian female with dark eyes and brown hair"). The latter is particularly useful in bridging the semantic gap between human and machine descriptions of biometric data. In this paper, we provide an overview of soft biometrics and discuss some of the techniques that have been proposed to extract them from image and video data. We also introduce a taxonomy for organizing and classifying soft biometric attributes, and enumerate the strengths and limitations of these attributes in the context of an operational biometric system. Finally, we discuss open research problems in this field. This survey is intended for researchers and practitioners in the field of biometrics.
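    As a concrete illustration of the fusion framework mentioned in the abstract, the sketch below combines a primary face-matcher score with a soft-biometric attribute posterior using a simple weighted sum rule. The weights, the gender attribute, and the function names are illustrative assumptions, not the survey's own formulation.

    import numpy as np

    def fuse_scores(primary_score, soft_posterior, enrolled_attribute,
                    w_primary=0.8, w_soft=0.2):
        """Weighted sum-rule fusion of a primary biometric score with a
        soft-biometric attribute posterior (illustrative weights only).

        primary_score      : face-matcher similarity in [0, 1]
        soft_posterior     : dict mapping attribute value -> probability,
                             e.g. {"female": 0.9, "male": 0.1}
        enrolled_attribute : attribute value stored with the enrolled template
        """
        # Probability that the probe's soft attribute agrees with the template's.
        soft_score = soft_posterior.get(enrolled_attribute, 0.0)
        return w_primary * primary_score + w_soft * soft_score

    # Example: a borderline face-match score is reinforced (or penalised)
    # by agreement between the predicted and enrolled gender labels.
    fused = fuse_scores(0.55, {"female": 0.92, "male": 0.08}, "female")
    print(f"fused score: {fused:.3f}")   # 0.8*0.55 + 0.2*0.92 = 0.624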

    Computational Imaging for Shape Understanding

    Get PDF
    Geometry is an essential property of real-world scenes. Understanding the shape of objects is critical to many computer vision applications. In this dissertation, we explore the use of computational imaging approaches to recover the geometry of real-world scenes. Computational imaging is an emerging technique that uses co-designed imaging hardware and computational software to expand the capabilities of traditional cameras. To tackle face recognition in uncontrolled environments, we study 2D color images and 3D shape to deal with body movement and self-occlusion. Specifically, we use multiple RGB-D cameras to fuse varying poses and register the frontal face in a unified coordinate system. Deep color features and geodesic distance features are then used for face recognition. For underwater imaging applications, we study the angular-spatial encoding and polarization-state encoding of light rays using computational imaging devices. Specifically, we use a light field camera to tackle the challenging problem of underwater 3D reconstruction. We leverage the angular sampling of the light field for robust depth estimation, and we develop a fast ray marching algorithm to improve efficiency. To deal with arbitrary reflectance, we investigate polarimetric imaging and develop polarimetric Helmholtz stereopsis, which uses reciprocal polarimetric image pairs for high-fidelity 3D surface reconstruction. We formulate new reciprocity and diffuse/specular polarimetric constraints to recover surface depths and normals within an optimization framework. To recover 3D shape under unknown and uncontrolled natural illumination, we use two circularly polarized spotlights to boost the polarization cues corrupted by environment lighting, as well as to provide photometric cues. To mitigate the effect of uncontrolled environment light in the photometric constraints, we estimate a lighting proxy map and iteratively refine the normal and lighting estimates. Through extensive experiments on simulated and real images, we demonstrate that our proposed computational imaging methods outperform traditional imaging approaches.
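    To make the light-field depth estimation idea concrete, here is a minimal sketch of photo-consistency depth from angular sampling: each candidate disparity shifts the sub-aperture views toward the centre view, and the disparity with the lowest variance across views is selected per pixel. This is a generic baseline under assumed inputs (view stack, angular offsets, disparity candidates), not the dissertation's actual algorithm, which also includes the fast ray marching scheme and polarimetric constraints.

    import numpy as np

    def lightfield_depth(views, offsets, disparities):
        """Photo-consistency depth sketch for a light field.

        views       : (N, H, W) grayscale sub-aperture images
        offsets     : (N, 2) angular positions (du, dv) of each view
                      relative to the centre view
        disparities : 1-D array of candidate disparities to test
        Returns the per-pixel disparity minimising variance across views.
        """
        n, h, w = views.shape
        ys, xs = np.mgrid[0:h, 0:w]
        cost = np.zeros((len(disparities), h, w))

        for di, d in enumerate(disparities):
            warped = np.zeros_like(views)
            for i, (du, dv) in enumerate(offsets):
                # Shift each view toward the centre view by disparity * offset
                # (nearest-neighbour sampling keeps the sketch short).
                sy = np.clip((ys + d * dv).round().astype(int), 0, h - 1)
                sx = np.clip((xs + d * du).round().astype(int), 0, w - 1)
                warped[i] = views[i][sy, sx]
            # Low variance across angular samples => photo-consistent depth.
            cost[di] = warped.var(axis=0)

        return disparities[cost.argmin(axis=0)]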

    Graph-based Facial Affect Analysis: A Review of Methods, Applications and Challenges

    Full text link
    Facial affect analysis (FAA) using visual signals is important in human-computer interaction. Early methods focus on extracting appearance and geometry features associated with human affects, while ignoring the latent semantic information among individual facial changes, leading to limited performance and generalization. Recent work attempts to establish a graph-based representation to model these semantic relationships and develop frameworks to leverage them for various FAA tasks. In this paper, we provide a comprehensive review of graph-based FAA, including the evolution of algorithms and their applications. First, the FAA background knowledge is introduced, with particular attention to the role of the graph. We then discuss the approaches widely used for graph-based affective representation in the literature and highlight the trend toward graph construction. For relational reasoning in graph-based FAA, existing studies are categorized according to their use of traditional methods or deep models, with a special emphasis on the latest graph neural networks. Performance comparisons of state-of-the-art graph-based FAA methods are also summarized. Finally, we discuss the challenges and potential directions. To the best of our knowledge, this is the first survey of graph-based FAA methods. Our findings can serve as a reference for future research in this field. Comment: 20 pages, 12 figures, 5 tables
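    A common relational-reasoning building block in graph-based FAA is the graph convolution over a facial landmark graph. The sketch below implements the standard normalized propagation rule H' = ReLU(D^-1/2 (A+I) D^-1/2 H W) on assumed landmark features; it is a generic example, not a particular method from the review.

    import numpy as np

    def gcn_layer(adj, feats, weight):
        """One graph-convolution step: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W).

        adj    : (N, N) landmark adjacency matrix (e.g. facial connectivity)
        feats  : (N, F) per-landmark features (coordinates, local appearance)
        weight : (F, F_out) learnable projection
        """
        a_hat = adj + np.eye(adj.shape[0])          # add self-loops
        d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
        norm_adj = d_inv_sqrt @ a_hat @ d_inv_sqrt  # symmetric normalisation
        return np.maximum(norm_adj @ feats @ weight, 0.0)

    # Toy example: 4 landmarks in a chain graph, 2-D coordinates as features.
    adj = np.array([[0, 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=float)
    feats = np.random.rand(4, 2)
    w = np.random.rand(2, 8)
    print(gcn_layer(adj, feats, w).shape)  # (4, 8)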

    Scale And Pose Invariant Real-time Face Detection And Tracking

    Get PDF
    Thesis (M.Sc.) -- İstanbul Technical University, Institute of Science and Technology, 2008. In this study, one of the most popular recent appearance-based face detection methods, which combines the AdaBoost algorithm, the Integral Image technique, and cascaded classifiers, was used to detect and track human faces. Faces were trained for five different poses (left, left+45°, frontal, right+45°, and right). The CAMSHIFT algorithm was used for face tracking because of its speed and ease of implementation for real-time applications. To avoid the impact of image-analysis computations on real-time performance, parallel processing was applied: two threads (main and child) were created. The child thread periodically detects faces in the captured frames, while the main thread processes all incoming frames using the child thread's results and displays them on the user screen. In conclusion, the face detection and tracking system was implemented successfully and achieved high detection and tracking rates in tests on three image databases and one video database.
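    The detection-plus-tracking pipeline described above can be approximated with OpenCV's stock components: a Haar cascade (Viola-Jones, i.e. AdaBoost over integral-image features) for detection and CamShift on a hue back-projection for tracking. The sketch below runs single-threaded for brevity, uses OpenCV's bundled frontal-face cascade rather than the thesis's multi-pose models, and omits the main/child thread split.

    import cv2
    import numpy as np

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)
    term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    track_window, roi_hist = None, None

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

        if track_window is None:
            # Detection phase: AdaBoost cascade over integral-image features.
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            if len(faces):
                x, y, w, h = faces[0]
                track_window = (x, y, w, h)
                roi = hsv[y:y + h, x:x + w]
                roi_hist = cv2.calcHist([roi], [0], None, [180], [0, 180])
                cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
        else:
            # Tracking phase: CamShift on the hue back-projection of the face ROI.
            back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
            rot_rect, track_window = cv2.CamShift(back_proj, track_window, term_crit)
            pts = cv2.boxPoints(rot_rect).astype(np.int32)
            cv2.polylines(frame, [pts], True, (0, 255, 0), 2)

        cv2.imshow("face", frame)
        if cv2.waitKey(1) & 0xFF == 27:   # Esc to quit
            break

    cap.release()
    cv2.destroyAllWindows()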

    A Survey on Emotion Recognition for Human Robot Interaction

    Get PDF
    With recent developments in technology and advances in artificial intelligence and machine learning techniques, it has become possible for robots to recognize and express emotions as part of Human-Robot Interaction (HRI). An emotional robot can recognize the emotional states of humans, allowing it to interact more naturally with its human counterpart in different environments. In this article, a survey on emotion recognition for HRI systems is presented. The survey has two objectives. First, it discusses the main challenges that researchers face when building emotional HRI systems. Second, it identifies the sensing channels that can be used to detect emotions and reviews recent research published for each channel, along with the methodologies used and the results achieved. Finally, some of the existing emotion recognition issues and recommendations for future work are outlined.
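    Sensing channels such as those surveyed in the article are often combined by late fusion of per-channel emotion probabilities. The sketch below shows a minimal weighted-average fusion; the channel names, weights, and emotion labels are illustrative assumptions rather than values taken from the article.

    import numpy as np

    LABELS = ["happy", "sad", "angry", "neutral"]  # illustrative label set

    def late_fusion(channel_probs, channel_weights):
        """Weighted late fusion of per-channel emotion probabilities.

        channel_probs   : dict channel -> probability vector over LABELS
        channel_weights : dict channel -> scalar reliability weight
        """
        fused = np.zeros(len(LABELS))
        total = 0.0
        for ch, probs in channel_probs.items():
            w = channel_weights.get(ch, 1.0)
            fused += w * np.asarray(probs)
            total += w
        fused /= total
        return LABELS[int(fused.argmax())], fused

    # Example: the facial channel is weighted higher than speech.
    label, probs = late_fusion(
        {"face":   [0.70, 0.10, 0.05, 0.15],
         "speech": [0.40, 0.30, 0.10, 0.20]},
        {"face": 0.7, "speech": 0.3})
    print(label, probs.round(3))   # 'happy' dominates after fusion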