2,716 research outputs found

    SenseCam image localisation using hierarchical SURF trees

    The SenseCam is a wearable camera that automatically takes photos of the wearer's activities, generating thousands of images per day. Automatically organising these images for efficient search and retrieval is a challenging task, but it can be simplified by attaching semantic information to each photo, such as the wearer's location at capture time. We propose a method for automatically determining the wearer's location using an annotated image database described by SURF interest point descriptors. We show that SURF outperforms SIFT in matching SenseCam images and that matching can be performed efficiently using hierarchical trees of SURF descriptors. Additionally, re-ranking the top images using bi-directional SURF matches improves location matching performance further.
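    The bi-directional re-ranking step can be sketched as a mutual nearest-neighbour check: a query descriptor and a database descriptor are accepted as a match only if each is the other's nearest neighbour. The following is a minimal illustration with toy 2-D vectors (real SURF descriptors are 64-dimensional, and the abstract's hierarchical tree index is omitted here); all function names are hypothetical, not from the paper.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest(desc, candidates):
    """Index of the candidate descriptor closest to `desc` (brute force)."""
    return min(range(len(candidates)), key=lambda i: euclidean(desc, candidates[i]))

def bidirectional_matches(query_descs, db_descs):
    """Keep only mutual nearest-neighbour pairs: query i matches db j
    only if j is i's nearest neighbour AND i is j's nearest neighbour."""
    forward = {i: nearest(d, db_descs) for i, d in enumerate(query_descs)}
    backward = {j: nearest(d, query_descs) for j, d in enumerate(db_descs)}
    return [(i, j) for i, j in forward.items() if backward[j] == i]

# Toy 2-D "descriptors": query 0 and db 0/2 are close, query 1 and db 1 are close.
query = [[0.0, 0.0], [5.0, 5.0]]
db = [[0.1, 0.1], [5.1, 4.9], [0.2, 0.0]]
print(bidirectional_matches(query, db))  # db 2 is dropped: not mutual
```

In a full system this check would re-rank only the top candidate images returned by the tree, since brute-force matching over the whole database is exactly what the hierarchical index avoids.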

    Characterization of the non-classical nature of conditionally prepared single photons

    A reliable single photon source is a prerequisite for linear optical quantum computation and for secure quantum key distribution. A criterion yielding a conclusive test of the single photon character of a given source, attainable with realistic detectors, is therefore highly desirable. In the context of heralded single photon sources, such a criterion should be sensitive to the effects of higher photon number contributions and of vacuum introduced through optical losses, both of which tend to degrade source performance. In this paper we present, theoretically and experimentally, a criterion meeting the above requirements.

    Head-Pose Invariant Facial Expression Recognition using Convolutional Neural Networks

    Automatic face analysis has to cope with pose and lighting variations. Pose variations in particular are difficult to tackle, and many face analysis methods require sophisticated normalization and initialization procedures. We propose a data-driven face analysis approach that is not only capable of extracting features relevant to a given face analysis task, but is also more robust to face location changes and scale variations than classical methods such as MLPs. Our approach is based on convolutional neural networks with multi-scale feature extractors, which improve facial expression recognition results for faces subject to in-plane pose variations.

    Robust Face Analysis using Convolutional Neural Networks

    Automatic face analysis has to cope with pose and lighting variations. Pose variations in particular are difficult to tackle, and many face analysis methods require sophisticated normalization procedures. We propose a data-driven face analysis approach that is not only capable of extracting features relevant to a given face analysis task, but is also robust to face location changes and scale variations. This is achieved by deploying convolutional neural networks, which are trained for either facial expression recognition or face identity recognition. Combining the outputs of these networks allows us to obtain a subject-dependent, or personalized, recognition of facial expressions.
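    The abstract does not specify how the two networks' outputs are combined; one plausible scheme, shown here purely as a hypothetical sketch, is to treat the identity network's output as a posterior over subjects and use it to weight identity-specific expression distributions. Names, identities and probabilities below are invented for illustration.

```python
def personalized_expression(identity_probs, expression_probs_per_identity):
    """Mix identity-specific expression distributions, weighted by the
    identity network's posterior (hypothetical combination scheme)."""
    n_expr = len(next(iter(expression_probs_per_identity.values())))
    combined = [0.0] * n_expr
    for identity, p_id in identity_probs.items():
        for k, p_expr in enumerate(expression_probs_per_identity[identity]):
            combined[k] += p_id * p_expr
    return combined

# Invented outputs: identity net is 90% sure the face is "alice".
identity_probs = {"alice": 0.9, "bob": 0.1}
expr_per_id = {"alice": [0.7, 0.2, 0.1],   # e.g. [happy, neutral, sad]
               "bob":   [0.1, 0.8, 0.1]}
print(personalized_expression(identity_probs, expr_per_id))
```

Because both inputs are probability distributions, the mixture is again a distribution, and the dominant identity's expression model dominates the personalized prediction.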

    Multiscale Facial Expression Recognition using Convolutional Neural Networks

    Automatic face analysis has to cope with pose and lighting variations. Pose variations in particular are difficult to tackle, and many face analysis methods require sophisticated normalization procedures. We propose a data-driven face analysis approach that is not only capable of extracting features relevant to a given face analysis task, but is also robust to face location changes and scale variations. This is achieved by deploying convolutional neural networks. We show that multi-scale feature extractors and whole-field feature map summing neurons improve facial expression recognition results, especially on test sets that feature scale and translation changes.
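    The translation tolerance of a whole-field summing neuron can be seen in a toy 1-D example: summing an entire convolutional feature map discards the position of the response, so a pattern produces the same output wherever it appears (as long as it stays inside the valid region). This is a minimal sketch of the idea, not the paper's architecture.

```python
def conv1d_valid(signal, kernel):
    """1-D 'valid' cross-correlation: the feature map a conv layer computes."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def whole_field_sum(feature_map):
    """Whole-field summing neuron: a single output that sums the entire
    feature map, discarding position information."""
    return sum(feature_map)

kernel = [1, 2, 1]
a = [0] * 10; a[3] = 1.0   # pattern (a spike) at position 3
b = [0] * 10; b[6] = 1.0   # same pattern translated to position 6

print(whole_field_sum(conv1d_valid(a, kernel)))
print(whole_field_sum(conv1d_valid(b, kernel)))  # identical output
```

Both inputs yield the same summed response (4.0), which is why such neurons help on test sets with translation changes.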

    Facial Expression Analysis using Shape and Motion Information Extracted by Convolutional Neural Networks

    In this paper we discuss a neural network-based face analysis approach that is able to cope with faces subject to pose and lighting variations. Head pose variations in particular are difficult to tackle, and many face analysis methods require sophisticated normalization procedures. We introduce data-driven shape- and motion-based face analysis approaches that are not only capable of extracting features relevant to the face analysis task at hand, but are also robust to translation and scale variations. This is achieved by deploying convolutional and time-delay neural networks, which are trained for either face shape deformation or facial motion analysis.

    Fast Multi-Scale Face Detection

    Computerized human face processing (detection, recognition, synthesis) has seen intense research activity during the last few years. Applications involving human face recognition are very broad and have important commercial impact. Human face processing is a difficult and challenging task: the space of different facial patterns is huge. The variability of human faces, as well as their similarity, and the influence of features such as beards, glasses, hair, illumination and background make face recognition and face detection difficult to tackle. The main task during the internship was to study and implement a neural network-based face detection algorithm for general scenes, previously developed within the IDIAP Computer Vision group. It also included the study and design of a multi-scale face detection method. A face database and a camera were available for testing and benchmarking. The main constraint was a real-time, or near real-time, face detection system; this has been achieved. The face detection capability of the employed neural networks was demonstrated on a variety of still images. In addition, we introduced an efficient preprocessing stage and a new post-processing strategy that significantly reduces false detections. This allowed us to deploy a single neural network for face detection, running sequentially on a standard workstation.
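    Multi-scale detection with a single fixed-size classifier is typically organised as a sliding window over an image pyramid: the image is repeatedly downscaled, and the classifier scans every window at every level, so larger faces are caught at coarser levels. The sketch below illustrates that pipeline with a trivial stand-in classifier on a tiny grayscale image; window size, pyramid construction and the `classify` stub are all assumptions, not the IDIAP system.

```python
def downsample(image):
    """Halve resolution by 2x2 average pooling (assumes even dimensions)."""
    h, w = len(image), len(image[0])
    return [[(image[2*r][2*c] + image[2*r][2*c+1] +
              image[2*r+1][2*c] + image[2*r+1][2*c+1]) / 4.0
             for c in range(w // 2)] for r in range(h // 2)]

def detect_multiscale(image, classify, win=2, min_size=2):
    """Run `classify` (a stand-in for the neural network) on every
    win x win window at every pyramid level; report (level, row, col) hits."""
    hits, level = [], 0
    while len(image) >= min_size and len(image[0]) >= min_size:
        for r in range(len(image) - win + 1):
            for c in range(len(image[0]) - win + 1):
                patch = [row[c:c + win] for row in image[r:r + win]]
                if classify(patch):
                    hits.append((level, r, c))
        image = downsample(image)
        level += 1
    return hits

# Toy 4x4 image with a bright 2x2 block; "face" = window mean above 0.5.
img = [[1.0, 1.0, 0.0, 0.0],
       [1.0, 1.0, 0.0, 0.0],
       [0.0, 0.0, 0.0, 0.0],
       [0.0, 0.0, 0.0, 0.0]]
bright = lambda p: sum(sum(row) for row in p) / 4.0 > 0.5
print(detect_multiscale(img, bright))
```

The post-processing stage mentioned in the abstract would then merge or reject overlapping hits across positions and levels; that step is omitted here.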

    High performance guided-wave asynchronous heralded single photon source

    We report on a guided-wave heralded photon source based on the creation of non-degenerate photon pairs by spontaneous parametric down-conversion in a periodically poled lithium niobate waveguide. Using the signal photon at 1310 nm as a trigger, a gated detection process permits announcing the arrival of single photons at 1550 nm at the output of a single-mode optical fiber with a high probability of 0.38. At the same time, the multi-photon emission probability is reduced by a factor of 10 compared to Poissonian light sources. Relying on guided-wave technologies such as integrated optics and fiber-optic components, our source offers stability, compactness and efficiency, and can serve as a paradigm for guided-wave devices applied to quantum communication and computation over existing telecom networks.
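    A back-of-envelope comparison makes the factor-of-10 claim concrete. Assuming (purely for illustration; the paper's actual figure of merit may be defined differently) a Poissonian source with mean photon number equal to the 0.38 heralding probability, its multi-photon probability P(n >= 2) follows from the Poisson distribution, and the heralded source's would be roughly ten times smaller.

```python
import math

def poisson_pn(mu, n):
    """Poisson photon-number distribution P(n) for mean photon number mu."""
    return math.exp(-mu) * mu ** n / math.factorial(n)

mu = 0.38  # assumed mean photon number, matched to the 0.38 heralding probability
p_multi_poisson = 1.0 - poisson_pn(mu, 0) - poisson_pn(mu, 1)  # P(n >= 2)
p_multi_heralded = p_multi_poisson / 10.0  # reported 10x suppression

print(f"Poissonian P(n>=2): {p_multi_poisson:.4f}")
print(f"Heralded   P(n>=2): {p_multi_heralded:.4f}")
```

Under these assumptions the Poissonian multi-photon probability is about 5.6%, so the heralded source's would sit near 0.56%, which is what makes such sources attractive for quantum key distribution.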