6 research outputs found

    Face Recognition in compressed domain based on wavelet transform and kd-tree matching

    This paper presents a novel approach to face recognition in the compressed domain. A major advantage of the proposed approach is that the recognition system can work directly with JPEG and JPEG2000 compressed images, i.e. it takes the entropy points provided by the compression standards directly as input, without any need to fully decompress the image before recognition. A k-d tree is used for matching the images, which reduces the computational time of the overall approach. The proposed method significantly improves recognition rates while greatly reducing computational time and storage requirements.
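    The k-d tree matching step described above can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the gallery size, feature dimensionality, and the use of SciPy's `cKDTree` are assumptions made for the example.

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical gallery of compressed-domain feature vectors, one per
# enrolled face (random data stands in for real features here).
rng = np.random.default_rng(0)
gallery = rng.standard_normal((100, 32))

# Build the k-d tree once; each nearest-neighbour query then costs
# O(log n) on average instead of a linear scan over the whole gallery.
tree = cKDTree(gallery)

# A probe feature vector close to gallery entry 7 (small noise added).
probe = gallery[7] + 0.01 * rng.standard_normal(32)

dist, idx = tree.query(probe, k=1)
print(idx)  # → 7, the index of the best-matching gallery face
```

    The speed-up over brute-force matching is what the abstract credits for the reduced computational time of the overall approach.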

    Multi-View Face Recognition From Single RGBD Models of the Faces

    This work takes important steps towards solving the following problem of current interest: assuming that each individual in a population can be modeled by a single frontal RGBD face image, is it possible to carry out face recognition for such a population using multiple 2D images captured from arbitrary viewpoints? Although the general problem as stated above is extremely challenging, it encompasses subproblems that can be addressed today. The subproblems addressed in this work relate to: (1) generating a large set of viewpoint-dependent face images from a single RGBD frontal image for each individual; (2) using hierarchical approaches based on view-partitioned subspaces to represent the training data; and (3) based on these hierarchical approaches, using a weighted voting algorithm to integrate the evidence collected from multiple images of the same face as recorded from different viewpoints. We evaluate our methods on three datasets: a dataset of 10 people that we created and two publicly available datasets which include a total of 48 people. In addition to providing important insights into the nature of this problem, our results show that we are able to successfully recognize faces with accuracies of 95% or higher, outperforming existing state-of-the-art face recognition approaches based on deep convolutional neural networks.
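    The weighted voting idea in step (3) can be sketched as follows. The identities, weights, and aggregation rule below are illustrative assumptions; the paper's actual weighting scheme is not specified in this abstract.

```python
from collections import defaultdict

# Hypothetical per-view recognition results: (predicted identity, weight).
# Each weight reflects confidence in that viewpoint's prediction.
votes = [("alice", 0.9), ("alice", 0.6), ("bob", 0.5)]

# Accumulate weighted votes per identity across all viewpoints.
scores = defaultdict(float)
for identity, weight in votes:
    scores[identity] += weight

# The identity with the highest total weighted vote wins.
winner = max(scores, key=scores.get)
print(winner)  # → "alice" (0.9 + 0.6 = 1.5 > 0.5)
```

    Integrating evidence this way lets a few confident viewpoints outvote a single misleading one.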

    Face recognition in a distributed vision system (Riconoscimento di facce in un sistema di visione distribuito)

    This thesis was carried out within a project at IAS-Lab (Intelligent Autonomous Systems Lab), University of Padua, which aims to create an autonomous system for distributed video surveillance. The main objective of the thesis was the creation of a face detection and recognition system in a distributed sensor network. In particular, we wanted a system capable of processing video streams from sensors installed at different locations within a local network, extracting from them images of the faces of the people in a room, and then sending the selected images to a central server tasked with carrying out the recognition. The main purpose of this system is to recognize people in the room even when they are not fully visible from some sensors, thus exploiting the distribution of the cameras in the room. The thesis rests on two main aspects: the creation of a reference model of the room shared by all sensors, and the use of a middleware, called NMM, to create a network of nodes that distributes people detection, face detection, and face recognition across a Local Area Network. We use the Viola-Jones algorithm to detect people and faces and the Eigenfaces algorithm to recognize the latter; we also analyze the performance of the recognition algorithm and propose some improvements aimed at increasing the overall performance of the system.
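    The Eigenfaces recognition stage used in this thesis can be sketched as follows. The synthetic data, image size, and number of components are assumptions for illustration; the thesis works on real camera frames.

```python
import numpy as np

# Hypothetical training set: n flattened face images as rows
# (random data stands in for real 8x8 face crops here).
rng = np.random.default_rng(1)
faces = rng.standard_normal((20, 64))

# Eigenfaces = PCA on mean-centered faces: the top right singular
# vectors of the centered data matrix are the eigenfaces.
mean_face = faces.mean(axis=0)
centered = faces - mean_face
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:10]  # keep the top-10 components

# Project gallery and probe into the eigenface subspace and match
# by nearest neighbour in that low-dimensional space.
gallery_coords = centered @ eigenfaces.T
probe = faces[3] + 0.01 * rng.standard_normal(64)
probe_coords = (probe - mean_face) @ eigenfaces.T

match = int(np.argmin(np.linalg.norm(gallery_coords - probe_coords, axis=1)))
print(match)  # → 3
```

    In the thesis's pipeline, the probe crops would come from Viola-Jones face detections streamed over the NMM node network rather than from synthetic vectors.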

    Fusing face recognition from multiple cameras

    Face recognition from video has recently received much interest. However, such a system faces several challenges, including resolution, occlusion (from objects or self-occlusion), motion blur, and illumination. The aim of this paper is to overcome the problem of self-occlusion by observing a person from multiple cameras with distinctly different views of the person's face and fusing the recognition results in a meaningful way. Each camera may capture only part of the face, such as the right or left half. We propose a methodology that uses cylinder head models (CHMs) to track the face of a subject across multiple cameras. The problem of face recognition from video is then transformed into a still face recognition problem, which has been well studied. The recognition results are fused based on the extracted pose of the face: for instance, the recognition result from a frontal face should be weighted more highly than the result from a face with a yaw of 30°. Eigenfaces is used for still face recognition, along with the average-half-face to reduce the effect of transformation errors. Tracking results are further aggregated to produce 100% accuracy on video taken from two cameras in our lab.
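    The pose-weighted fusion described above can be sketched as follows. The Gaussian weighting function, its width, and the per-camera scores are illustrative assumptions; the paper's actual weighting scheme is not given in this abstract.

```python
import math

# Assumed weighting: a frontal view (yaw 0°) gets full weight and the
# weight decays smoothly as the head turns away from the camera.
def pose_weight(yaw_deg, sigma=30.0):
    return math.exp(-(yaw_deg / sigma) ** 2)

# Hypothetical per-camera results: (identity scores, estimated yaw in degrees).
cameras = [
    ({"alice": 0.7, "bob": 0.3}, 0.0),   # frontal view, full weight
    ({"alice": 0.4, "bob": 0.6}, 30.0),  # side view, down-weighted
]

# Fuse: each camera's scores contribute in proportion to its pose weight.
fused = {}
for scores, yaw in cameras:
    w = pose_weight(yaw)
    for identity, s in scores.items():
        fused[identity] = fused.get(identity, 0.0) + w * s

winner = max(fused, key=fused.get)
print(winner)  # → "alice": the frontal camera dominates the fusion
```

    Down-weighting oblique views in this way keeps a partially occluded or turned face from overriding a clean frontal observation.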