144 research outputs found

    Robust Facial Features Tracking Using Geometric Constraints and Relaxation

    This work presents a robust technique for tracking a set of detected points on a human face. Facial features can be selected manually or detected automatically. We present a simple and efficient method for detecting facial features such as the eyes and nose in a color face image. We then introduce a tracking method which, by employing geometric constraints based on knowledge about the configuration of facial features, avoids the loss of points caused by error accumulation and tracking drift. Experiments on different sequences and comparisons with other tracking algorithms show that the proposed method gives better results with comparable processing time.
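
    As a rough illustration of the idea (not the authors' algorithm), the sketch below tracks a set of facial points with pyramidal Lucas-Kanade optical flow and rejects points whose pairwise distances drift too far from a reference configuration measured on the first frame; the function name, the tolerance parameter, and the consistency test are illustrative assumptions, and the paper's relaxation scheme is not reproduced here.

```python
import cv2
import numpy as np

def track_with_constraints(prev_gray, curr_gray, points, ref_dists, tol=0.2):
    """Track facial feature points with pyramidal Lucas-Kanade optical flow,
    then flag points whose pairwise distances drift too far from a reference
    configuration (a crude stand-in for geometric constraints on the face)."""
    pts = points.reshape(-1, 1, 2).astype(np.float32)
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    new_pts = new_pts.reshape(-1, 2)
    # Pairwise distances between the tracked points in the current frame.
    dists = np.linalg.norm(new_pts[:, None, :] - new_pts[None, :, :], axis=-1)
    # Keep a point if its distances to the others stay within tol of the
    # reference configuration and the optical flow reported success.
    rel_err = np.abs(dists - ref_dists) / (ref_dists + 1e-8)
    consistent = (rel_err.mean(axis=1) < tol) & (status.ravel() == 1)
    return new_pts, consistent
```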

    Mixed Pooling Neural Networks for Color Constancy

    Color constancy is the ability of the human visual system to perceive constant colors for a surface despite changes in the spectrum of the illumination. In computer vision, the main approach consists in estimating the illuminant color and then removing its impact on the colors of the objects. Many image processing algorithms have been proposed to tackle this problem automatically. However, most of these approaches are handcrafted and rely on strong empirical assumptions, e.g., that the average reflectance in a scene is gray. State-of-the-art approaches can perform very well on some datasets but adapt poorly to others. In this paper, we investigate how neural-network-based approaches can be used to deal with the color constancy problem. We propose a new network architecture based on existing successful hand-crafted approaches, together with a large number of improvements, to tackle this problem by learning a suitable deep model. We show our results on most of the standard benchmarks used in the color constancy domain.
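
    The gray-world assumption mentioned above can be made concrete with a short sketch: estimate the illuminant as the mean image color and rescale each channel so that this mean becomes achromatic. This is a classical hand-crafted baseline, not the learned model proposed in the paper; the function name and the clipping to [0, 1] are illustrative choices.

```python
import numpy as np

def gray_world_correction(image: np.ndarray) -> np.ndarray:
    """Estimate the illuminant under the gray-world assumption (the average
    reflectance in a scene is achromatic) and remove its impact by
    per-channel scaling. `image` is a float array in [0, 1], shape (H, W, 3)."""
    illuminant = image.reshape(-1, 3).mean(axis=0)     # estimated illuminant color
    gray = illuminant.mean()                           # target achromatic level
    corrected = image * (gray / (illuminant + 1e-8))   # von Kries-style scaling
    return np.clip(corrected, 0.0, 1.0)
```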

    Semantic Segmentation via Multi-task, Multi-domain Learning

    We present an approach that leverages multiple datasets, possibly annotated with different classes, to improve semantic segmentation accuracy on each individual dataset. We propose a new selective loss function that can be integrated into deep networks to exploit training data coming from multiple datasets with possibly different tasks (e.g., different label sets). We show how the gradient-reversal approach for domain adaptation can be used in this setup. Thorough experiments on semantic segmentation applications show the relevance of our approach.
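
    A minimal sketch of the two ingredients named above, assuming a PyTorch setup: a gradient-reversal function (identity in the forward pass, sign-flipped gradient in the backward pass) and a selective cross-entropy that masks the logits of classes absent from a given dataset's label set. The paper's exact selective loss may differ; the masking scheme and names here are illustrative.

```python
import torch
import torch.nn.functional as F

class GradientReversal(torch.autograd.Function):
    """Identity forward, gradient multiplied by -lambd backward,
    as in domain-adversarial training."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


def selective_ce_loss(logits, target, valid_classes):
    """Cross-entropy restricted to the classes annotated in the target's
    dataset: logits of absent classes are pushed to a large negative value
    so the network is never penalized for predicting labels this dataset
    simply does not use. logits: (B, C, H, W); target: (B, H, W)."""
    mask = torch.full((logits.shape[1],), -1e9, device=logits.device)
    mask[valid_classes] = 0.0
    masked_logits = logits + mask.view(1, -1, 1, 1)
    return F.cross_entropy(masked_logits, target, ignore_index=255)

# Example: reverse gradients on shared features before a domain classifier.
# domain_logits = domain_head(GradientReversal.apply(features, 1.0))
```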

    Evaluation of color contrast in complex images


    Perception of color and image quality of films projected in cinema or displayed on TV screen
