5 research outputs found

    Real-time human detection in urban scenes: Local descriptors and classifiers selection with AdaBoost-like algorithms

    This paper studies several implementations of the AdaBoost algorithm to address real-time pedestrian detection in images. We use gradient-based local descriptors and combine them into strong classifiers organized in a cascaded detector. We compare the original AdaBoost algorithm with two boosting algorithms we developed. The first (method 1) optimizes the use of each selected descriptor to minimize the operations performed on the image, accelerating detection without any loss in detection performance. The second (method 2) improves descriptor selection by associating with each descriptor a more powerful weak learner – a decision tree built from the components of the whole descriptor – and by evaluating the learners locally. We compare the three learning algorithms on a reference database of color images and then present preliminary results on adapting the detector to infrared imagery. Our methods give better detection rates and faster processing than the original boosting algorithm and provide a promising basis for further studies.
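    The boosting scheme summarized above can be illustrated with a minimal, self-contained sketch: shallow decision trees over descriptor components (standing in for the paper's per-descriptor weak learners) are selected round by round and weighted by their training error, as in discrete AdaBoost. The data, parameters and function names below are illustrative assumptions, not the authors' implementation.

```python
# Minimal AdaBoost sketch with shallow decision-tree weak learners,
# assuming descriptors are flattened into feature vectors X with labels
# y in {-1, +1}. Purely illustrative; not the paper's code.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_train(X, y, n_rounds=10, tree_depth=2):
    """Train a boosted ensemble of small trees (discrete AdaBoost)."""
    n = len(y)
    w = np.full(n, 1.0 / n)                     # example weights
    learners, alphas = [], []
    for _ in range(n_rounds):
        tree = DecisionTreeClassifier(max_depth=tree_depth)
        tree.fit(X, y, sample_weight=w)
        pred = tree.predict(X)
        err = np.clip(np.sum(w * (pred != y)), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # weak-learner weight
        w *= np.exp(-alpha * y * pred)          # re-weight the examples
        w /= w.sum()
        learners.append(tree)
        alphas.append(alpha)
    return learners, alphas

def adaboost_predict(X, learners, alphas):
    """Sign of the weighted vote of all weak learners."""
    score = sum(a * h.predict(X) for h, a in zip(learners, alphas))
    return np.sign(score)

# Toy usage with random "descriptor" vectors.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 36))                  # e.g. 36-component descriptors
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
learners, alphas = adaboost_train(X, y)
print((adaboost_predict(X, learners, alphas) == y).mean())
```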

    Automatic Process to Build a Contextualized Detector

    No abstract available.

    Large-scale, drift-free SLAM using highly robustified building model constraints

    Conference: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017), 24–28 September 2017.
    Constrained key-frame-based local bundle adjustment is at the core of many recent systems that address large-scale, georeferenced SLAM based on a monocular camera and on data from inexpensive sensors and/or databases. The majority of these methods, however, impose constraints that result from proprioceptive sensors (e.g. IMUs, GPS, odometry) while ignoring the possibility of explicitly constraining the structure (e.g. the point cloud) resulting from the reconstruction process. Moreover, research on on-line interactions between SLAM and deep learning methods remains scarce, and as a result, few SLAM systems take advantage of deep architectures. We explore both of these areas in this work: we use a fast deep neural network to infer semantic and structural information about the environment and, using a Bayesian framework, inject the results into a bundle adjustment process that constrains the 3D point cloud to texture-less 3D building models.
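    As a rough illustration of injecting structural constraints into the reconstruction, the sketch below augments a toy least-squares refinement with point-to-plane residuals that pull reconstructed 3D points toward a known building-facade plane, alongside ordinary reprojection residuals. The plane, camera model, weights and data are invented for the example and do not reproduce the paper's Bayesian formulation.

```python
# Toy least-squares refinement: reprojection residuals plus point-to-plane
# terms that constrain points to a known facade plane. All values are
# illustrative assumptions, not the paper's actual system.
import numpy as np
from scipy.optimize import least_squares

# Known facade plane n.x + d = 0 (unit normal), assumed given by a 3D model.
plane_n = np.array([1.0, 0.0, 0.0])
plane_d = -5.0

# Simple pinhole camera at the origin looking down +Z (focal length f).
f = 500.0
def project(points):
    return f * points[:, :2] / points[:, 2:3]

# Simulated points on the facade, noisy observations, noisy initial guess.
rng = np.random.default_rng(1)
true_pts = np.column_stack([np.full(20, 5.0),
                            rng.uniform(-1, 1, 20),
                            rng.uniform(8, 12, 20)])
obs = project(true_pts) + rng.normal(scale=1.0, size=(20, 2))
init_pts = true_pts + rng.normal(scale=0.3, size=true_pts.shape)

def residuals(x, w_plane=10.0):
    pts = x.reshape(-1, 3)
    reproj = (project(pts) - obs).ravel()          # image-space error
    plane = w_plane * (pts @ plane_n + plane_d)    # distance to facade plane
    return np.concatenate([reproj, plane])

sol = least_squares(residuals, init_pts.ravel())
refined = sol.x.reshape(-1, 3)
print("mean facade distance:", np.abs(refined @ plane_n + plane_d).mean())
```

    The relative weight on the plane term (here a fixed scalar) plays the role of the confidence one would otherwise derive from a probabilistic model of how well a point belongs to the facade.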

    A region driven and contextualized pedestrian detector

    Conference: 8th International Conference on Computer Vision Theory and Applications (VISAPP 2013), 21–24 February 2013.
    This paper tackles the real-time pedestrian detection problem using a stationary calibrated camera. Two problems are frequently encountered: a generic classifier cannot be adjusted to each situation, and the perspective deformations of the camera can profoundly change the appearance of a person. To avoid these drawbacks, we contextualize a detector with information coming directly from the scene. Our method comprises three distinct parts. First, an oracle gathers examples from the scene. Then, the scene is split into different regions and one classifier is trained for each. Finally, each detector is automatically tuned to achieve the best performance. Designed to make camera-network installation easier, our method is completely automatic and does not require any prior knowledge about the scene.
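    A toy version of the region-driven idea is sketched below: the image plane is split into horizontal bands (a crude stand-in for scene regions under perspective), one classifier is trained per band, and each band's decision threshold is tuned separately. The features, region split, classifier and tuning rule are assumptions made for illustration only.

```python
# Per-region detector sketch: one classifier per image band, with a
# separately tuned operating threshold per band. Illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_bands, image_height = 3, 480

def band_of(y_pixel):
    """Map a vertical pixel position to a region index."""
    return min(int(y_pixel / (image_height / n_bands)), n_bands - 1)

# Fake training windows: (feature vector, vertical position, label).
feats = rng.normal(size=(600, 16))
ys = rng.uniform(0, image_height, 600)
labels = (feats[:, 0] + 0.1 * ys / image_height > 0).astype(int)

# One classifier per region, trained only on that region's examples.
models, thresholds = {}, {}
for b in range(n_bands):
    idx = np.array([band_of(y) == b for y in ys])
    clf = LogisticRegression().fit(feats[idx], labels[idx])
    # Tune the operating point per region, e.g. for a fixed recall target.
    scores = clf.predict_proba(feats[idx])[:, 1]
    pos_scores = np.sort(scores[labels[idx] == 1])
    thresholds[b] = pos_scores[int(0.1 * len(pos_scores))]  # ~90% recall
    models[b] = clf

def detect(feature, y_pixel):
    """Score a candidate window with the classifier of its region."""
    b = band_of(y_pixel)
    score = models[b].predict_proba(feature.reshape(1, -1))[0, 1]
    return score >= thresholds[b]

print(detect(feats[0], ys[0]))
```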