Calibration-free Pedestrian Partial Pose Estimation Using a High-mounted Kinect
The application of human behavior analysis has undergone rapid development during the last decades, from entertainment systems to professional ones such as Human Robot Interaction (HRI), Advanced Driver Assistance Systems (ADAS), and Pedestrian Protection Systems (PPS). This thesis addresses the problem of recognizing pedestrians and estimating their body orientation in 3D, based on the premise that knowing a person's orientation is beneficial for both analyzing and predicting their behavior. A new method is proposed for detecting pedestrians and estimating their orientation, in which a pedestrian detection module and an orientation estimation module are integrated sequentially. For pedestrian detection, a cascade classifier is designed to draw a bounding box around each detected pedestrian. Following this, regions extracted from a 3D point cloud are given to a discrete orientation classifier that estimates the orientation of the pedestrian's torso. This classification is based on a coarse, rasterized depth image simulating a virtual camera placed directly above the detected pedestrian, and uses a support vector machine trained to distinguish 10 discrete orientations (30-degree increments). To evaluate the orientation estimation approach, a new benchmark database containing 764 point clouds was captured: a Microsoft Kinect recorded 30 participants while a marker-based motion capture system (Vicon) provided the ground truth on their orientation.
Finally, the improvements brought by the system are demonstrated: it detects pedestrians with an accuracy of 95.29% and estimates body orientation (within a 30-degree interval) with an accuracy of 88.88%. We hope these results can serve as a foundation for future research.
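The top-view rasterization and SVM classification step might be sketched roughly as follows. This is a minimal illustration only: the 16×16 grid, the RBF kernel, and the random placeholder training data are assumptions for the sketch, not the thesis's actual parameters.

```python
import numpy as np
from sklearn.svm import SVC

def rasterize_top_view(points, grid=(16, 16), extent=1.0):
    """Project a pedestrian-centred 3-D point cloud (N x 3, metres) onto a
    coarse top-view grid; each cell keeps the maximum height seen in it."""
    img = np.zeros(grid)
    # Map x/y coordinates in [-extent, extent] to grid indices.
    ix = np.clip(((points[:, 0] + extent) / (2 * extent) * grid[0]).astype(int), 0, grid[0] - 1)
    iy = np.clip(((points[:, 1] + extent) / (2 * extent) * grid[1]).astype(int), 0, grid[1] - 1)
    for x, y, z in zip(ix, iy, points[:, 2]):
        img[x, y] = max(img[x, y], z)
    return img.ravel()

# Train an SVM on rasterized clouds labelled with one of 10 discrete
# orientation classes; the features here are random placeholders, where
# real ones would come from rasterize_top_view.
rng = np.random.default_rng(0)
X = rng.random((50, 256))
y = rng.integers(0, 10, 50)
clf = SVC(kernel="rbf").fit(X, y)
pred = clf.predict(X[:1])
```

In use, each detected pedestrian's cloud would be centred, rasterized, and fed to `clf.predict` to obtain one of the discrete orientation bins.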
Sensor fusion in driving assistance systems
Mención Internacional en el título de doctor
Life in developed and developing countries is highly dependent on road and
urban motor transport. This activity involves a high cost for its active and passive
users in terms of pollution and accidents, which are largely attributable to
the human factor. New developments in safety and driving assistance, called
Advanced Driving Assistance Systems (ADAS), are intended to improve
security in transportation, and, in the mid-term, lead to autonomous driving.
ADAS, like human driving, are based on sensors that provide information
about the environment, and sensor reliability is as crucial for ADAS
applications as the human senses are for human driving.
One of the ways to improve sensor reliability is Sensor Fusion:
developing novel strategies for modeling the driving environment with the
help of several sensors, and obtaining enhanced information from the
combination of the available data.
The present thesis offers a novel solution for obstacle detection and
classification in automotive applications, using sensor fusion with two
widely available sensors: a visible-spectrum camera and a laser scanner.
Cameras and lasers are commonly used in the scientific literature,
increasingly affordable, and ready to be deployed in real-world
applications. The proposed solution detects and classifies some of the
obstacles commonly present on the road, such as pedestrians and cyclists.
Novel approaches for detection and classification have been explored in this
thesis, from the classification of point cloud clusters obtained from the laser
scanner, to domain adaptation techniques for creating synthetic image datasets,
including intelligent cluster extraction and ground detection and removal in
point clouds.
Programa Oficial de Doctorado en Ingeniería Eléctrica, Electrónica y Automática
Presidente: Cristina Olaverri Monreal.- Secretario: Arturo de la Escalera Hueso.- Vocal: José Eugenio Naranjo Hernánde
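The ground detection/removal and point cloud clustering steps mentioned above can be sketched as follows. This is a minimal illustration under stated assumptions: it seeds a least-squares plane fit from the lowest points and clusters with DBSCAN, which stands in for, and need not match, the thesis's actual algorithms; all thresholds are arbitrary.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def remove_ground(points, dist_thresh=0.15, seed_frac=0.3):
    """Estimate a ground plane z = ax + by + c from the lowest points
    (by height), then drop every point close to that plane."""
    low = points[np.argsort(points[:, 2])[: int(len(points) * seed_frac)]]
    A = np.c_[low[:, 0], low[:, 1], np.ones(len(low))]
    coeffs, *_ = np.linalg.lstsq(A, low[:, 2], rcond=None)
    all_A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    residuals = np.abs(all_A @ coeffs - points[:, 2])
    return points[residuals > dist_thresh]

def cluster_obstacles(points, eps=0.5, min_samples=5):
    """Group the remaining points into obstacle candidates with DBSCAN
    (label -1 marks noise and is discarded)."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    return [points[labels == k] for k in sorted(set(labels)) if k != -1]
```

Each returned cluster would then be passed on to a classifier (camera- or laser-based) to decide whether it is a pedestrian, a cyclist, or another obstacle.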
Pedestrian Behavior Study to Advance Pedestrian Safety in Smart Transportation Systems Using Innovative LiDAR Sensors
Pedestrian safety is critical to improving walkability in cities. Although walking trips have increased in the last decade, pedestrian safety remains a top concern. In 2020, 6,516 pedestrians were killed in traffic crashes, the most deaths since 1990 (NHTSA, 2020). Approximately 15% of these occurred at signalized intersections, where a variety of modes converge, increasing the propensity for conflicts. Current signal timing and detection technologies are heavily biased towards vehicular traffic, often leading to higher delays and insufficient walk times for pedestrians, which can result in risky behaviors such as noncompliance. Current detection systems for pedestrians at signalized intersections consist primarily of push buttons. Their limitations include the inability to give pedestrians feedback that they have been detected, especially with older devices, and the inability to dynamically extend walk times if pedestrians fail to clear the crosswalk. Smart transportation systems play a vital role in enhancing mobility and safety and provide innovative techniques to connect pedestrians, vehicles, and infrastructure. Most research on smart and connected technologies focuses on vehicles; however, there is a critical need to harness these technologies to study pedestrian behavior, as pedestrians are the most vulnerable users of the transportation system. While a few studies have used location technologies to detect pedestrians, their coverage is usually small and favors people with smartphones. The transportation system, however, must consider the full spectrum of pedestrians and accommodate everyone.
In this research, the investigators first review previous studies on pedestrian behavior data and sensing technologies. The research team then developed a pedestrian behavioral data collection system based on emerging LiDAR sensors and deployed it at two signalized intersections. Two studies were conducted: (a) a pedestrian behavior study at signalized intersections, analyzing pedestrian waiting time before crossing, generalized perception-reaction time to the WALK sign, and crossing speed; and (b) a novel dynamic flashing yellow arrow (D-FYA) solution to separate permissive left-turn vehicles from concurrently crossing pedestrians. The results reveal that pedestrian behaviors may have evolved relative to the recommended behaviors in pedestrian facility design guidelines (e.g., AASHTO's "Green Book"). The D-FYA solution was also evaluated on a cabinet-in-the-loop simulation platform, and the improvements were promising. The findings of this study will advance the body of knowledge on equitable traffic safety, especially pedestrian safety.
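The behavioral measures named above, crossing speed and perception-reaction time to the WALK sign, could be computed from LiDAR pedestrian trajectories roughly as follows. This is a hypothetical sketch, not the study's actual processing pipeline; the 0.3 m/s movement threshold is an assumed value.

```python
import numpy as np

def crossing_speed(track, t):
    """Average crossing speed (m/s) from timestamped 2-D positions
    (track: N x 2 in metres, t: N timestamps in seconds)."""
    path = np.linalg.norm(np.diff(track, axis=0), axis=1).sum()
    return path / (t[-1] - t[0])

def reaction_time(t, track, walk_onset, speed_thresh=0.3):
    """Generalized perception-reaction time: delay between the WALK
    indication (walk_onset, seconds) and the first frame whose
    instantaneous speed exceeds speed_thresh m/s."""
    v = np.linalg.norm(np.diff(track, axis=0), axis=1) / np.diff(t)
    moving = np.where((t[1:] >= walk_onset) & (v > speed_thresh))[0]
    return t[1:][moving[0]] - walk_onset if len(moving) else None
```

A pedestrian who stands still at the curb and starts walking two seconds after the WALK indication would yield a reaction time of 2.0 s under this definition.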
Carried baggage detection and recognition in video surveillance with foreground segmentation
Security cameras installed in public spaces or in private organizations continuously
record video data with the aim of detecting and preventing crime. For that reason,
video content analysis applications, whether for real-time (i.e., analytic) or
post-event (i.e., forensic) analysis, have gained high interest in recent years.
In this thesis, the primary focus is on two key aspects of video analysis:
reliable moving object segmentation, and carried object detection and identification.
A novel moving object segmentation scheme based on background subtraction is
presented in this thesis. The scheme relies on a background model built from
multi-directional gradient and phase congruency. As a post-processing step,
the detected foreground contours are refined by classifying edge segments as
belonging to either the foreground or the background. A contour completion
technique based on anisotropic diffusion is introduced for the first time in
this area. The proposed method targets cast shadow removal, invariance to
gradual illumination change, and closed contour extraction.
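For orientation, the background-subtraction idea can be sketched with a much simpler model than the gradient/phase-congruency one used in the thesis: a running-average background with a fixed threshold. This is a minimal stand-in sketch, not the proposed method; `alpha` and `thresh` are assumed values.

```python
import numpy as np

class RunningBackground:
    """Running-average background model with a thresholded foreground
    mask; a simplified stand-in for the thesis's background model."""
    def __init__(self, first_frame, alpha=0.05, thresh=25):
        self.bg = first_frame.astype(float)
        self.alpha = alpha
        self.thresh = thresh

    def apply(self, frame):
        """Return a boolean foreground mask and update the model."""
        mask = np.abs(frame.astype(float) - self.bg) > self.thresh
        # Update only where the scene is background, so gradual
        # illumination change is absorbed without eating the foreground.
        self.bg[~mask] += self.alpha * (frame[~mask] - self.bg[~mask])
        return mask
```

The refinement steps described above (edge-segment classification, anisotropic-diffusion contour completion) would then operate on the contours of such a mask.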
A state-of-the-art carried object detection method is employed as a benchmark
algorithm. This method analyzes silhouettes by comparing human temporal
templates with unencumbered human models. The implementation is improved by
automatically estimating the pedestrian's viewing direction and is extended
with a carried luggage identification module. Because the temporal template
is a frequency template and the information it provides is not sufficient,
a colour temporal template is introduced. The standard steps of the
state-of-the-art algorithm are then approached from this extended
(colour-augmented) perspective, resulting in more accurate carried
object segmentation.
The experiments conducted in this research show that the proposed closed
foreground segmentation technique attains all the aforementioned goals. The
incremental improvements applied to the state-of-the-art carried object
detection algorithm reveal the full potential of the scheme, and the
experiments demonstrate that the proposed carried object detection
algorithm surpasses the state-of-the-art method.
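The frequency-temporal-template idea behind the benchmark method can be sketched as follows: accumulate per-pixel foreground frequency over an aligned mask sequence, then flag pixels that are frequently foreground but absent from the unencumbered body model as carried-object candidates. This is a hedged illustration of the general technique only; the 0.5 frequency threshold and the alignment of masks and model are assumptions.

```python
import numpy as np

def temporal_template(masks):
    """Frequency temporal template: per-pixel fraction of frames in
    which the pixel was foreground, over aligned binary masks."""
    return np.mean(np.stack(masks), axis=0)

def protrusions(template, model, freq_thresh=0.5):
    """Pixels frequently foreground in the observed template but not in
    the aligned unencumbered body model: carried-object candidates."""
    return (template > freq_thresh) & ~model
```

A colour temporal template, as introduced in the thesis, would additionally carry per-pixel colour statistics instead of a bare frequency, giving the later identification module more to work with.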