132 research outputs found

    Alerting the drivers about road signs with poor visual saliency

    This paper proposes an improvement to Advanced Driver Assistance Systems (ADAS) based on saliency estimation of road signs. After a road-sign detection stage, the sign's saliency is estimated using an SVM. A model of visual saliency linking an object's size to a size-independent saliency is proposed. An eye-tracking experiment in a context close to driving shows that this computational evaluation of saliency fits well with human perception, and demonstrates the applicability of the proposed estimator for improved ADAS.
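    The pipeline described above (detect a sign, then regress its saliency from image features with an SVM) could be sketched as follows. This is an illustrative assumption, not the paper's actual feature set or code: the feature names (`area_px`, `contrast`, `color_distinctness`) and the synthetic training data are hypothetical.

    ```python
    # Hypothetical sketch of SVM-based saliency estimation for detected road signs.
    # Features and data are invented for illustration; the paper's own features
    # and training set are not specified here.
    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    # Toy features per detected sign: [area_px (normalized), contrast, color_distinctness]
    X = rng.uniform(0.0, 1.0, size=(200, 3))
    # Toy target: saliency driven mostly by contrast and color, weakly by size.
    y = 0.2 * X[:, 0] + 0.5 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0.0, 0.05, 200)

    # Fit an RBF-kernel support vector regressor on the toy data.
    model = SVR(kernel="rbf", C=1.0).fit(X, y)

    # Predict a saliency score for each newly detected sign.
    saliency_scores = model.predict(X[:5])
    ```

    A size-independent score, as proposed in the paper, could then be obtained by factoring the size feature out of the learned estimate.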

    Uncertainty, generalization, and neural representation of relevant variables for decision making

    Dissertation presented to obtain the Ph.D. degree in Biology, Computational Biology. Understanding decision making in various contexts is fundamental to understanding human behavior. This thesis presents several studies that examine decision making from many different points of view, using a variety of research tools. (...)

    Perceptual modelling for 2D and 3D

    Deliverable D1.1 of the ANR PERSEE project. This report was produced within the ANR PERSEE project (no. ANR-09-BLAN-0170); specifically, it corresponds to deliverable D1.1 of the project.

    An integrated model of visual attention using shape-based features

    Apart from helping shed light on human perceptual mechanisms, modeling visual attention has important applications in computer vision. It has been shown to be useful in priming object detection, pruning interest points, quantifying visual clutter, and predicting human eye movements. Prior work has relied either on purely bottom-up approaches or on top-down schemes using simple low-level features. In this paper, we outline a top-down visual attention model based on shape-based features. The same shape-based representation is used to represent both the objects and the scenes that contain them. The spatial priors imposed by the scene and the feature priors imposed by the target object are combined in a Bayesian framework to generate a task-dependent saliency map. We show that our approach can predict the location of objects as well as match eye movements (92% overlap with human observers). We also show that the proposed approach performs better than existing bottom-up and top-down computational models.
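    The Bayesian combination described in this abstract, in which a scene-imposed spatial prior and an object-imposed feature prior multiply into a task-dependent saliency map, could be sketched as below. The function name and the toy maps are assumptions for illustration; the paper's actual shape-based features are not reproduced here.

    ```python
    # Sketch (assumption, not the paper's code): combine a spatial prior over
    # locations, P(location | scene), with a feature likelihood at each location,
    # P(features | object, location), by pointwise multiplication, then renormalize.
    import numpy as np

    def task_saliency(spatial_prior, feature_likelihood):
        """Pointwise product of prior and likelihood, renormalized to sum to 1,
        giving a posterior-like task-dependent saliency map."""
        posterior = spatial_prior * feature_likelihood
        return posterior / posterior.sum()

    h, w = 4, 6
    spatial_prior = np.full((h, w), 1.0 / (h * w))   # uniform scene prior (toy)
    feature_likelihood = np.ones((h, w))
    feature_likelihood[1, 2] = 5.0                   # strong shape-feature response (toy)

    smap = task_saliency(spatial_prior, feature_likelihood)
    peak = np.unravel_index(np.argmax(smap), smap.shape)
    ```

    The peak of the resulting map marks the location where both the scene context and the target's features agree, which is the intuition behind using it to predict object locations and fixations.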