
    3D Reconstruction using Active Illumination

    In this thesis we present a pipeline for 3D model acquisition. Generating 3D models of real-world objects is an important task in computer vision with many applications, such as 3D design, archaeology, entertainment, and virtual or augmented reality. The contribution of this thesis is threefold: we propose a calibration procedure for the cameras, we describe an approach for capturing and processing photometric normals using gradient illuminations in the hardware set-up, and finally we present a multi-view photometric stereo 3D reconstruction method. In order to obtain accurate results from multi-view and photometric stereo reconstruction, the cameras are calibrated both geometrically and photometrically. Data is acquired using a light stage, a hardware set-up that allows the illumination to be controlled during acquisition. We describe the procedure used to generate appropriate illuminations and to process the acquired data into accurate photometric normals. The core of the pipeline is a multi-view photometric stereo reconstruction method. In this method, we first generate a sparse reconstruction from the acquired images and computed normals. In the second step, the information from the normal maps is used to obtain a dense reconstruction of the object's surface. Finally, the reconstructed surface is filtered to remove artifacts introduced by the dense reconstruction step.
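    The normal-computation step can be illustrated with the classic Lambertian photometric stereo formulation. The sketch below is a minimal, generic version of that idea, not the thesis's actual pipeline; the function name and the synthetic data are our own.

```python
import numpy as np

def photometric_normal(light_dirs, intensities):
    """Solve I = rho * (L @ n) for the unit normal n and albedo rho.

    light_dirs  : (m, 3) array of unit lighting directions.
    intensities : (m,) observed intensities of one pixel.
    """
    # Least-squares estimate of the albedo-scaled normal g = rho * n.
    g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)
    rho = np.linalg.norm(g)          # albedo = length of the scaled normal
    return g / rho, rho              # unit normal, albedo

# Synthetic check: a known normal lit from three directions.
n_true = np.array([0.0, 0.0, 1.0])
L = np.array([[0.0, 0.0, 1.0],
              [0.6, 0.0, 0.8],
              [0.0, 0.6, 0.8]])
I = 0.5 * (L @ n_true)               # Lambertian shading with albedo 0.5
n_est, rho = photometric_normal(L, I)
```

    With three or more non-coplanar light directions the per-pixel system is overdetermined, which is why least squares is the natural estimator here.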

    Using the 3D shape of the nose for biometric authentication


    Reconnaissance Biométrique par Fusion Multimodale de Visages

    Biometric systems are considered to be one of the most effective methods of protecting and securing private or public life against all types of theft. Facial recognition is one of the most widely used methods, not because it is the most efficient and reliable, but because it is natural, non-intrusive, and relatively well accepted compared to other biometrics such as fingerprint and iris. Developing biometric applications such as facial recognition has recently become important in smart cities. Over the past decades, many techniques have been proposed to recognize a face in a 2D or 3D image, with applications including videoconferencing systems, facial reconstruction, and security. Generally, changes in lighting and variations in pose and facial expression make 2D facial recognition less than reliable. 3D models may be able to overcome these constraints, except that most 3D facial recognition methods still treat the human face as a rigid object, which means they cannot handle facial expressions. In this thesis, we propose a new approach for automatic face verification that encodes the local information of 2D and 3D facial images as a high-order tensor. First, the histograms of two local multiscale descriptors (LPQ and BSIF) are used to characterize both 2D and 3D facial images. Next, a tensor-based facial representation is designed to combine all the features extracted from 2D and 3D faces. Moreover, to improve the discrimination of the proposed tensor face representation, we use two multilinear subspace methods (MWPCA, and MDA combined with WCCN). The WCCN technique is applied to the face tensors to reduce the effect of intra-class directions through a normalization transform and to improve the discriminating power of MDA. Our experiments were carried out on three large databases: FRGC v2.0, Bosphorus and CASIA 3D, under different facial expressions, variations in pose, and occlusions. The experimental results show the superiority of the proposed approach in terms of verification rate compared to recent state-of-the-art methods.
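    Of the components above, the WCCN normalization step can be sketched concretely. The following is a minimal illustration of the standard within-class covariance normalization definition; the function name, the ridge term, and the toy data are our own, not the thesis's.

```python
import numpy as np

def wccn_projection(X, labels, reg=1e-6):
    """Within-Class Covariance Normalization (WCCN).

    X      : (n, d) feature vectors.
    labels : (n,) class label for each vector.
    reg    : small ridge term for numerical stability (our assumption).
    Returns a (d, d) matrix B such that the projected features X @ B
    have identity within-class covariance.
    """
    d = X.shape[1]
    W = np.zeros((d, d))
    classes = np.unique(labels)
    for c in classes:
        Xc = X[labels == c] - X[labels == c].mean(axis=0)  # centre class c
        W += Xc.T @ Xc / len(Xc)
    W = W / len(classes) + reg * np.eye(d)
    # The Cholesky factor of W^{-1} whitens the within-class scatter.
    return np.linalg.cholesky(np.linalg.inv(W))

# Toy demo: two classes sharing the same within-class scatter.
X = np.array([[0., 0.], [2., 0.], [0., 1.],
              [10., 0.], [12., 0.], [10., 1.]])
labels = np.array([0, 0, 0, 1, 1, 1])
B = wccn_projection(X, labels, reg=0.0)
Y = X @ B                                   # WCCN-normalized features

# Within-class covariance of the projected features (should be identity).
Wp = np.zeros((2, 2))
for c in (0, 1):
    Yc = Y[labels == c] - Y[labels == c].mean(axis=0)
    Wp += Yc.T @ Yc / len(Yc)
Wp /= 2
```

    Whitening the within-class scatter this way suppresses intra-class variation (expression, pose) relative to the between-class directions that a discriminant method such as MDA then exploits.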

    Object Recognition

    Vision-based object recognition tasks are very familiar in our everyday activities, such as driving our car in the correct lane, and we perform them effortlessly in real time. In recent decades, with the advancement of computer technology, researchers and application developers have been trying to mimic the human capability of visual recognition. Such a capability will allow machines to free humans from tedious or dangerous jobs.

    Gaze-Based Human-Robot Interaction by the Brunswick Model

    We present a new paradigm for human-robot interaction based on social signal processing, and in particular on the Brunswick model. Originally, the Brunswick model addresses face-to-face dyadic interaction, assuming that the interactants communicate through a continuous exchange of non-verbal social signals in addition to the spoken messages. Social signals have to be interpreted through a proper recognition phase that considers visual and audio information. The Brunswick model makes it possible to quantitatively evaluate the quality of the interaction using statistical tools that measure how effective the recognition phase is. In this paper we cast this theory in the setting where one of the interactants is a robot; in this case, the recognition phases performed by the robot and by the human have to be revised with respect to the original model. The model is applied to Berrick, a recent open-source, low-cost robotic head platform, where gaze is the social signal considered.

    Compression, pose tracking, and halftoning

    In this thesis, we discuss image compression, pose tracking, and halftoning. Although these areas seem unrelated at first glance, they can be connected through video coding as an application scenario. Our first contribution is an image compression algorithm based on a rectangular subdivision scheme which stores only a small subset of the image points. From these points, the remainder of the image is reconstructed using partial differential equations. Afterwards, we present a pose tracking algorithm that is able to follow the 3-D position and orientation of multiple objects simultaneously. The algorithm can deal with noisy sequences and naturally handles both occlusions between different objects and occlusions occurring within kinematic chains. Our third contribution is a halftoning algorithm based on electrostatic principles, which can easily be adjusted to different settings through a number of extensions; examples include modifications to handle varying dot sizes or hatching. In the final part of the thesis, we show how to combine our image compression, pose tracking, and halftoning algorithms into novel video compression codecs. In each of these four topics, our algorithms yield excellent results that outperform those of other state-of-the-art algorithms.
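    The PDE-based reconstruction step can be illustrated with its simplest instance, homogeneous diffusion (Laplace) inpainting: unknown pixels are filled in by repeatedly averaging their neighbours while the stored pixels stay fixed. This is a generic sketch, not the thesis's actual scheme, and the function name is our own.

```python
import numpy as np

def diffusion_inpaint(img, known, iters=2000):
    """Reconstruct an image from a sparse set of stored points by
    homogeneous diffusion (Laplace) inpainting.

    img   : (h, w) array; values are only meaningful where known is True.
    known : (h, w) boolean mask of stored pixels.
    """
    u = np.where(known, img, img[known].mean())   # initial guess
    for _ in range(iters):
        # Jacobi step: each unknown pixel becomes the mean of its
        # 4-neighbours (borders handled by edge replication).
        p = np.pad(u, 1, mode='edge')
        avg = 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1]
                      + p[1:-1, :-2] + p[1:-1, 2:])
        u = np.where(known, img, avg)             # stored pixels stay fixed
    return u

# Demo: a linear ramp stored only at its two outer columns is
# recovered exactly, since the harmonic interpolant is linear.
img = np.tile(np.arange(5.0), (5, 1))             # ground truth ramp
known = np.zeros((5, 5), dtype=bool)
known[:, 0] = known[:, -1] = True                 # store two columns only
u = diffusion_inpaint(np.where(known, img, 0.0), known)
```

    In practice, compression quality depends heavily on which points are stored; the rectangular subdivision scheme described above is one strategy for choosing them.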

    Using contour information and segmentation for object registration, modeling and retrieval

    This thesis considers different aspects of the utilization of contour information and of syntactic and semantic image segmentation for object registration, modeling, and retrieval, in the context of content-based indexing and retrieval in large collections of images. Target applications include retrieval in collections of closed silhouettes, holistic word recognition in handwritten historical manuscripts, and shape registration. The thesis also explores the feasibility of contour-based syntactic features for improving the correspondence between the output of bottom-up segmentation and the semantic objects present in the scene, and discusses the feasibility of different strategies for image analysis utilizing contour information, e.g. segmentation driven by visual features versus segmentation driven by shape models, or semi-automatic segmentation, in selected application scenarios. There are three contributions in this thesis. The first contribution considers structure analysis based on the shape and spatial configuration of image regions (so-called syntactic visual features) and their utilization for automatic image segmentation. The second contribution is the study of novel shape features, matching algorithms, and similarity measures. Various applications of the proposed solutions are presented throughout the thesis, providing the basis for the third contribution, which is a discussion of the feasibility of different recognition strategies utilizing contour information. In each case, the performance and generality of the proposed approach has been analyzed through extensive, rigorous experimentation on the largest available test collections.
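    Contour-based shape retrieval of the kind described above can be illustrated with a standard Fourier descriptor of a closed silhouette; magnitudes of the low-frequency coefficients give a signature invariant to translation, scale, rotation, and starting point. This is a textbook sketch, not the thesis's own feature, and the function name and toy contours are ours.

```python
import numpy as np

def fourier_descriptor(contour, k=8):
    """Invariant signature of a closed contour.

    contour : (n, 2) ordered boundary points of a closed shape.
    Returns 2k coefficient magnitudes, normalized for translation,
    scale, rotation, and starting point.
    """
    z = contour[:, 0] + 1j * contour[:, 1]        # complex boundary
    Z = np.fft.fft(z - z.mean())                  # drop DC: translation-inv.
    # Magnitudes of the first k positive and negative frequencies are
    # invariant to rotation and to the choice of starting point.
    mags = np.concatenate([np.abs(Z[1:k + 1]), np.abs(Z[-k:])])
    return mags / mags[0]                         # scale invariance

# Demo: a circle vs. a rotated, scaled copy and vs. an ellipse.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.c_[np.cos(t), np.sin(t)]
rotated = 3.0 * np.c_[np.cos(t + 0.7), np.sin(t + 0.7)]  # rotated + scaled
ellipse = np.c_[2 * np.cos(t), np.sin(t)]
d_same = np.linalg.norm(fourier_descriptor(circle) - fourier_descriptor(rotated))
d_diff = np.linalg.norm(fourier_descriptor(circle) - fourier_descriptor(ellipse))
```

    In a retrieval setting, silhouettes are ranked by the Euclidean distance between their descriptors, so transformed copies of a shape score as near-duplicates while genuinely different shapes score farther apart.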