9 research outputs found

    Spatiotemporal Stacked Sequential Learning for Pedestrian Detection

    Pedestrian classifiers decide which image windows contain a pedestrian. In practice, such classifiers provide a relatively high response at neighboring windows overlapping a pedestrian, while the responses around potential false positives are expected to be lower. Analogous reasoning applies to image sequences: if there is a pedestrian located within a frame, the same pedestrian is expected to appear close to the same location in neighboring frames. Therefore, such a location is likely to receive high classification scores over several frames, while false positives are expected to be more spurious. In this paper we propose to exploit such correlations to improve the accuracy of base pedestrian classifiers. In particular, we propose two-stage classifiers which rely not only on the image descriptors required by the base classifiers but also on the responses of those base classifiers in a given spatiotemporal neighborhood. More specifically, we train pedestrian classifiers using a stacked sequential learning (SSL) paradigm. We use a new pedestrian dataset acquired from a car to evaluate our proposal at different frame rates. We also test on a well-known dataset: Caltech. The obtained results show that our SSL proposal boosts detection accuracy significantly with a minimal impact on the computational cost. Interestingly, SSL improves accuracy most in the most dangerous situations, i.e., when a pedestrian is close to the camera. Comment: 8 pages, 5 figures, 1 table
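
    The two-stage idea above lends itself to a small sketch: a second-stage classifier trained on the base descriptor augmented with base-classifier scores from a spatiotemporal neighborhood. The score-volume layout, neighborhood offsets, descriptor size and LinearSVC second stage below are illustrative assumptions, not the paper's exact configuration.

import numpy as np
from sklearn.svm import LinearSVC

def ssl_feature(descriptor, scores, t, r, c, offsets):
    """Augment one window's descriptor with base-classifier scores sampled
    at (frame, row, col) offsets around that window in the score volume.
    Assumed layout: scores[t, r, c] is the base classifier's response there."""
    T, R, C = scores.shape
    neigh = [scores[np.clip(t + dt, 0, T - 1),
                    np.clip(r + dr, 0, R - 1),
                    np.clip(c + dc, 0, C - 1)]
             for dt, dr, dc in offsets]
    return np.concatenate([descriptor, np.asarray(neigh)])

# Toy usage: random descriptors and a random score volume stand in for
# HOG-like features and the base classifier's responses.
rng = np.random.default_rng(0)
offsets = [(dt, dr, dc) for dt in (-1, 0, 1) for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
scores = rng.normal(size=(10, 20, 20))                 # [frames, rows, cols]
X = np.stack([ssl_feature(rng.normal(size=36), scores, 5, r, c, offsets)
              for r in range(20) for c in range(20)])
y = rng.integers(0, 2, size=len(X))                    # placeholder labels
second_stage = LinearSVC().fit(X, y)                   # the stacked (SSL) classifier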

    What Makes a Place? Building Bespoke Place Dependent Object Detectors for Robotics

    This paper is about enabling robots to improve their perceptual performance through repeated use in their operating environment, creating local expert detectors fitted to the places through which a robot moves. We leverage the concept of 'experiences' in visual perception for robotics, accounting for bias in the data a robot sees by fitting object detector models to a particular place. The key question we seek to answer in this paper is simply: how do we define a place? We build bespoke pedestrian detector models for autonomous driving, highlighting the necessary trade-off between generalisation and model capacity as we vary the extent of the place we fit to. We demonstrate a sizeable performance gain over a current state-of-the-art detector when using computationally lightweight bespoke place-fitted detector models. Comment: IROS 201
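
    A hedged sketch of the place-dependent idea: quantise the vehicle's position into a coarse grid cell standing in for a 'place' and dispatch to a detector fitted to that cell, falling back to a generic detector elsewhere. The grid-cell definition of a place, the cell size and the detector registry are assumptions for illustration, not the paper's method.

from typing import Callable, Dict, List, Tuple

Detector = Callable[[object], List[str]]   # placeholder detector type for this sketch

def place_key(easting_m: float, northing_m: float, cell_m: float = 250.0) -> Tuple[int, int]:
    """Map a metric position to a coarse grid cell identifying a 'place' (assumed definition)."""
    return (int(easting_m // cell_m), int(northing_m // cell_m))

def detect(image, easting_m, northing_m,
           bespoke: Dict[Tuple[int, int], Detector],
           generic: Detector) -> List[str]:
    """Run the bespoke detector for the current place if one was fitted,
    otherwise fall back to the generic, place-agnostic detector."""
    return bespoke.get(place_key(easting_m, northing_m), generic)(image)

# Toy usage with stub detectors.
generic = lambda img: ["generic-detection"]
bespoke = {place_key(1200.0, 480.0): (lambda img: ["place-fitted-detection"])}
print(detect(None, 1210.0, 470.0, bespoke, generic))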

    Driver–pedestrian interaction under different road environments

    The objective of the present study was to analyze drivers' behavior while approaching pedestrian crossings under different driver–pedestrian interaction conditions and to assess the effectiveness of Advanced Driving Assistance Systems (ADASs) for pedestrian detection across several road environments. Three road environments were implemented in a fixed-base driving simulator: urban road, sub-urban road and rural road. Several driver–pedestrian interactions were implemented, in addition to a pedestrian-absence condition. The simulated ADAS provided a visual–auditory message. Forty-five participants drove the three road-environment scenarios, in each of which three pedestrian crossings were implemented (pedestrian absence, pedestrian presence with ADAS, and pedestrian presence without ADAS). Overall, 369 driver speed profiles were plotted from 150 m before each pedestrian crossing. The ADAS affected driver behavior in the interaction conditions with a Time-To-Zebra (TTZ_arrive) of 6 s. The effect of the ADAS across the road environments was similar for the urban and sub-urban roads, resulting in a less abrupt braking maneuver that began earlier than in the ADAS-absence condition. For the rural road, the main effects were a lower minimum speed near the pedestrian crossing and an earlier end of the braking maneuver, highlighting the driver's ability to complete a safer and more effective yielding maneuver.
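
    A small worked sketch of the kind of indicator used in such simulator studies, assuming the conventional definition of Time-To-Zebra as the remaining distance to the crossing divided by the instantaneous speed; the decelerating speed profile below is synthetic, not the study's data.

import numpy as np

# Synthetic decelerating approach over the final 150 m (assumption, not study data).
distance_to_zebra = np.linspace(150.0, 0.0, 151)       # metres before the crossing
speed_kmh = np.linspace(50.0, 10.0, 151)               # synthetic speed profile
speed_ms = speed_kmh / 3.6

# Assumed definition: TTZ = remaining distance / instantaneous speed.
ttz = distance_to_zebra / speed_ms

idx_100 = int(np.argmin(np.abs(distance_to_zebra - 100.0)))
print(f"TTZ at 100 m from the crossing: {ttz[idx_100]:.1f} s")
print(f"Minimum speed near the crossing: {speed_kmh.min():.1f} km/h")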

    Histogram of Oriented Gradients with Multiple Orientations (HOG-MO) for People Detection

    In the field of computer vision, the problem of classifying people remains an open research challenge. This work introduces a new feature-extraction method based on the HOG (Histogram of Oriented Gradients) descriptor with multiple gradient orientations computed over parts of the human body, called HOG-MO. A classifier is then built using HOG-MO and an SVM; its performance is compared with state-of-the-art algorithms by means of ROC curves, achieving a suitable balance between computation time and classification rate. Finally, a single-camera, multi-resolution system for detecting people in urban environments is presented, operating in the visible spectrum under varying illumination and scale conditions. This detector has been tested on a database of people in urban environments in the visible spectrum (BD-AU), created for the development of pedestrian-detection applications in Intelligent Transport Systems (ITS).
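
    The pipeline described above is, at its core, HOG features fed to an SVM. The sketch below illustrates only that general pipeline, approximating the multi-orientation, body-part idea by concatenating HOG descriptors computed with two orientation counts over the upper and lower halves of a 128x64 window; it is not the authors' HOG-MO implementation.

import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_mo_like(window):
    """window: 128x64 grayscale array. Concatenate HOG over two body-part crops
    and two orientation resolutions (an assumed stand-in for HOG-MO)."""
    parts = [window[:64, :], window[64:, :]]            # crude upper/lower body split
    feats = [hog(part, orientations=n_orient, pixels_per_cell=(8, 8),
                 cells_per_block=(2, 2), block_norm="L2-Hys")
             for part in parts for n_orient in (9, 18)]
    return np.concatenate(feats)

# Toy usage: random crops stand in for pedestrian / background windows.
rng = np.random.default_rng(0)
X = np.stack([hog_mo_like(rng.random((128, 64))) for _ in range(40)])
y = np.array([1] * 20 + [0] * 20)
clf = LinearSVC().fit(X, y)
print("Training accuracy:", clf.score(X, y))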

    Distributed pedestrian detection alerts based on data fusion with accurate localization

    Among Advanced Driver Assistance Systems (ADAS), pedestrian detection is a common concern due to the vulnerability of pedestrians in the event of an accident. In the present work, a novel approach to pedestrian detection based on data fusion is presented. Data fusion helps to overcome the limitations inherent to each detection system (computer vision and laser scanner) and provides accurate and trustworthy tracking of any pedestrian movement. The application is complemented by an efficient communication protocol able to alert vehicles in the surroundings through fast and reliable communication. The combination of powerful localization, based on GPS with inertial measurement, and accurate obstacle localization based on data fusion makes it possible to locate the detected pedestrians with high accuracy. Tests proved the viability of the detection system and the efficiency of the communication, even at long distances. Through the alert communication, dangerous situations such as occlusions or misdetections can be avoided. This work was supported by the Spanish Government through the CICYT projects (grants TRA2010-20225-C03-01, TRA2010-20225-C03-03, TRA2011-29454-C03-02 and iVANET TRA2010-15645) and by CAM through SEGVAUTO-II (S2009/DPI-1509).
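
    A hedged sketch of the kind of fusion and localization step the abstract describes: gate laser-scanner obstacles to the nearest camera detection in the vehicle frame, then transform the fused points into world coordinates using the GPS/INS ego pose. The gating threshold, coordinate frames and data layout are assumptions for illustration only.

import math

def fuse(laser_obstacles, camera_detections, gate_m=1.0):
    """Pair each laser obstacle (x, y in the vehicle frame, metres) with the
    nearest camera detection within the gate; unmatched obstacles are dropped.
    The 1 m gate is an assumed threshold for this sketch."""
    fused = []
    for lx, ly in laser_obstacles:
        best = min(camera_detections,
                   key=lambda c: math.hypot(c[0] - lx, c[1] - ly),
                   default=None)
        if best and math.hypot(best[0] - lx, best[1] - ly) <= gate_m:
            fused.append(((lx + best[0]) / 2.0, (ly + best[1]) / 2.0))
    return fused

def to_world(point_vehicle, ego_x, ego_y, ego_yaw):
    """Transform a fused point from the vehicle frame to world coordinates
    using the ego pose (e.g. from GPS with inertial measurement)."""
    px, py = point_vehicle
    c, s = math.cos(ego_yaw), math.sin(ego_yaw)
    return (ego_x + c * px - s * py, ego_y + s * px + c * py)

# Toy usage: one pedestrian seen by both sensors, one vision-only detection.
fused = fuse([(10.2, 1.1)], [(10.0, 1.0), (25.0, -3.0)])
print([to_world(p, ego_x=500.0, ego_y=200.0, ego_yaw=0.1) for p in fused])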

    Vehicle–pedestrian interactions into and outside of crosswalks: effects of driver assistance systems

    This study aimed to analyse the driver's behaviour during the interaction with a pedestrian crossing inside and outside the zebra crossing, and to evaluate the effectiveness of two kinds of Advanced Driver Assistance System (ADAS) that provided the driver with an auditory alert or a visual alert upon pedestrian detection. Forty-two participants joined the experiment, conducted using the fixed-base driving simulator of the Department of Engineering (Roma Tre University). They experienced different crossing conditions (legal and illegal) and ADAS conditions (no ADAS, visual warning and auditory warning) in an urban scenario. The parameters Time-To-Arrive (TTA) and Speed Reduction Time (SRT) were obtained from the drivers' speed profiles over the last 150 m before the conflict point with the pedestrian. Results clearly showed the criticality of illegal crossings. When the pedestrian crossed outside of the crosswalk, the highest number of collisions occurred, and the ANalysis Of VAriance (ANOVA) returned significant effects on both dependent variables, TTA and SRT, highlighting the higher criticality of the vehicle–pedestrian interaction and the more abrupt yielding manoeuvre. Positive effects (the vehicle–pedestrian interaction was less critical and the yielding manoeuvre was smoother) emerged for both driver assistance systems, although they were not statistically significant. Besides, both driver assistance systems positively affected the behaviour of averagely cautious drivers. No significant effects of the warning systems were recorded for aggressive drivers, who, because of their behavioural characteristics, ignored the warning alarm. In addition, no significant effects of the warning systems were recorded for very cautious drivers, who adjusted their behaviour even before the alarm was triggered. Finally, the outcomes of the questionnaire submitted to the participants highlighted a clear preference for the auditory warning, probably because of the different physical stimuli elicited by the warning signal. The results confirm that adequate pedestrian paths should be planned to avoid jaywalking conditions, which induce critical driving behaviour, and they provide useful findings on the effectiveness of driver assistance systems for pedestrian detection.
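
    A hedged sketch of the two speed-profile indicators named above, under assumed definitions: TTA as the remaining distance to the conflict point divided by the instantaneous speed, and SRT as the time from the onset of deceleration to the minimum speed. Both definitions and the synthetic profile are assumptions, not the study's exact computation.

import numpy as np

# Synthetic profile: constant 13.9 m/s (50 km/h), then braking at 1.5 m/s^2
# down to 4.0 m/s. All values here are assumptions for illustration.
t = np.linspace(0.0, 12.0, 121)                       # seconds, 0.1 s samples
speed = np.where(t < 4.0, 13.9, np.maximum(13.9 - 1.5 * (t - 4.0), 4.0))
distance_travelled = np.concatenate([[0.0], np.cumsum(np.diff(t) * speed[:-1])])
distance_to_conflict = 150.0 - distance_travelled

tta = distance_to_conflict / speed                    # assumed TTA definition

decel_onset = int(np.argmax(np.diff(speed) < 0))      # first sample where speed drops
min_speed_idx = int(np.argmin(speed))
srt = t[min_speed_idx] - t[decel_onset]               # assumed SRT definition

print(f"TTA when braking starts: {tta[decel_onset]:.1f} s")
print(f"SRT: {srt:.1f} s")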

    Deep and Transfer Learning Approaches for Pedestrian Identification and Classification in Autonomous Vehicles

    Pedestrian detection is at the core of autonomous road vehicle navigation systems, as it allows a vehicle to understand where potential hazards lie in the surrounding area and enables it to act in a way that avoids traffic accidents that may result in individuals being harmed. In this work, a review of convolutional neural network (CNN) approaches to pedestrian detection is presented. We further present models based on CNNs and transfer learning; the CNN model with the VGG-16 architecture is optimised using the transfer-learning approach. This paper demonstrates that the use of image augmentation on training data can yield varying results. In addition, a pre-processing system that can be used to prepare 3D spatial data obtained via LiDAR sensors is proposed. This pre-processing system is able to identify candidate regions that can be put forward for classification, whether that be 3D classification or a combination of 2D and 3D classifications via sensor fusion. We propose a number of models based on transfer learning and convolutional neural networks and achieve over 98% accuracy with the adaptive transfer-learning model.
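
    A minimal sketch of VGG-16 transfer learning with image augmentation, in the spirit of the abstract: freeze the ImageNet-pretrained convolutional base and train a small binary head. The input size, augmentation choices, head layers and hyperparameters are assumptions, not the authors' model.

import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# Frozen ImageNet-pretrained convolutional base (transfer learning).
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

augment = tf.keras.Sequential([                         # simple image augmentation
    layers.RandomFlip("horizontal"),
    layers.RandomTranslation(0.1, 0.1),
])

model = models.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),
    augment,
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),               # assumed head size
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),              # pedestrian vs. background
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # train_ds / val_ds are assumed datasets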

    Developing a person guidance module for hospital robots

    This dissertation describes the design and implementation of the Person Guidance Module (PGM) that enables the IWARD (Intelligent Robot Swarm for Attendance, Recognition, Cleaning and Delivery) base robot to offer a route-guidance service to patients and visitors inside the hospital arena. One of the common problems encountered in large hospital buildings today is that people unfamiliar with the building cannot find their way around. Although a variety of guide robots currently exist on the market, offering a wide range of guidance and related activities, they do not fit the modular concept of the IWARD project. The PGM features a robust, non-hierarchical sensor-fusion approach combining an active RFID system, stereo vision and Cricket mote sensors for guiding a patient to the X-ray room, or a visitor to a patient's ward, in every possible scenario in a complex, dynamic and crowded hospital environment. Moreover, the system automatically adjusts the robot's speed to the pace of the follower for physical comfort. Furthermore, the module performs these tasks in any unstructured environment solely from the robot's onboard perceptual resources, in order to limit hardware installation costs and the need for indoor infrastructure support. A similarly comprehensive solution on a single platform has remained elusive in the existing literature. The finished module can be connected to any IWARD base robot using quick-change mechanical connections and standard electrical connections. The PGM module box is equipped with a Gumstix embedded computer that handles all module computing and is powered up automatically once the module box is inserted into the robot. In line with the general software architecture of the IWARD project, all software modules are developed as Orca2 components and cross-compiled for the Gumstix's XScale processor. To support standardized communication between different software components, the Internet Communications Engine (Ice) has been used as middleware. Additionally, plug-and-play capabilities have been developed and incorporated so that the swarm system is aware at all times of which robot is equipped with a PGM. Finally, in several field trials in hospital environments, the person guidance module has shown its suitability for a challenging real-world application as well as the necessary user acceptance.
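
    The "adjust the robot's speed to the follower's pace" behaviour mentioned above can be sketched as a clamped proportional rule on the measured robot-to-follower distance. The target distance, gain and speed limits are assumptions for illustration, not the IWARD module's actual controller.

def guidance_speed(follower_distance_m, current_speed_ms,
                   target_m=1.2, gain=0.8, v_min=0.0, v_max=1.0):
    """Slow down when the follower lags behind (distance grows), speed up when
    the follower closes in, and stay within the robot's speed limits.
    Target distance, gain and limits are assumed values for this sketch."""
    error = target_m - follower_distance_m          # positive when the follower is close
    command = current_speed_ms + gain * error
    return max(v_min, min(v_max, command))

# Toy usage: the follower has fallen 2.0 m behind, so the robot slows down.
print(guidance_speed(follower_distance_m=2.0, current_speed_ms=0.8))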