
    Fusion Based Safety Application for Pedestrian Detection with Danger Estimation

    Proceedings of: 14th International Conference on Information Fusion (FUSION 2011), Chicago, Illinois, USA, 5-8 July 2011. Road safety applications require the most reliable data. In recent years, data fusion has become one of the main technologies in Advanced Driver Assistance Systems (ADAS), overcoming the limitations of using the available sensors in isolation and fulfilling demanding safety requirements. This paper presents a real data fusion application for road safety: pedestrian detection. Two vehicle-mounted sensor systems, a laser scanner and a stereo-vision system, are used to detect pedestrians in urban environments. Both systems are mounted on the automobile research platform IVVI 2.0 to test the algorithms in real situations. The safety issues involved in developing this fusion application are described. Context information such as velocity and GPS data is also used to provide a danger estimation for the detected pedestrians. This work was supported by the Spanish Government through the Cicyt projects FEDORA (grant TRA2010-20225-C03-01) and VIDAS-Driver (grant TRA2010-21371-C03-02).
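The core fusion step the abstract describes, combining a laser-scanner and a stereo-vision estimate of the same pedestrian, can be sketched as an inverse-variance weighted average. This is a minimal illustration under an independent-Gaussian-noise assumption, not the paper's actual algorithm; the sensor names and variances are made up.

```python
# Minimal sketch: fuse two independent scalar position estimates of one
# pedestrian (e.g. range from a laser scanner and from stereo vision).

def fuse(z1, var1, z2, var2):
    """Maximum-likelihood fusion of two measurements with known
    variances under independent Gaussian noise: an inverse-variance
    weighted average, with a reduced fused variance."""
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var

# Laser says 10.2 m ahead (var 0.04 m^2), vision says 10.6 m
# (var 0.25 m^2); the fused estimate leans toward the more
# accurate laser and is more certain than either sensor alone.
x, v = fuse(10.2, 0.04, 10.6, 0.25)
```

The same weighting generalizes to full covariance matrices, which is the usual form in vehicle perception stacks.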

    Identifying and Tracking Pedestrians Based on Sensor Fusion and Motion Stability Predictions

    The lack of trustworthy sensors makes the development of Advanced Driver Assistance System (ADAS) applications a tough task. It is necessary to develop intelligent systems by combining reliable sensors and real-time algorithms to send proper, accurate messages to drivers. In this article, an application to detect and predict the movement of pedestrians in order to prevent an imminent collision has been developed and tested under real conditions. The proposed application, first, accurately measures the position of obstacles using a two-sensor hybrid fusion approach: a stereo camera vision system and a laser scanner. Second, it correctly identifies pedestrians using intelligent algorithms based on polylines and pattern recognition related to leg positions (laser subsystem) and on dense disparity maps and u-v disparity (vision subsystem). Third, it uses statistical validation gates and confidence regions to track pedestrians within the detection zones of the sensors and predict their positions in the upcoming frames. The intelligent sensor application has been experimentally tested with success while tracking pedestrians who cross and move in a zigzag fashion in front of a vehicle.
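The statistical validation gate mentioned in the abstract can be sketched as a chi-square test on the Mahalanobis distance between a track's predicted position and a new detection. The 99% two-degree-of-freedom gate value (9.21) and the diagonal covariance are illustrative assumptions, not the paper's exact parameters.

```python
# Hedged sketch of a validation gate: accept a detection for a track
# only if its squared Mahalanobis distance to the prediction is below
# a chi-square threshold.

def mahalanobis_sq(pred, meas, var):
    """Squared Mahalanobis distance for a diagonal covariance."""
    return sum((p - m) ** 2 / s for p, m, s in zip(pred, meas, var))

def in_gate(pred, meas, var, gate=9.21):
    """True if the measurement falls inside the validation region
    (9.21 is the 99% chi-square quantile for 2 degrees of freedom)."""
    return mahalanobis_sq(pred, meas, var) <= gate

# A detection 0.3 m off in x with 0.1 m^2 variance passes the gate...
print(in_gate((5.0, 2.0), (5.3, 2.0), (0.1, 0.1)))  # True
# ...while one 2 m off is rejected and would not update the track.
print(in_gate((5.0, 2.0), (7.0, 2.0), (0.1, 0.1)))  # False
```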

    Novel pattern recognition methods for classification and detection in remote sensing and power generation applications


    Robust Multi-sensor Data Fusion for Practical Unmanned Surface Vehicles (USVs) Navigation

    The development of practical Unmanned Surface Vehicles (USVs) is attracting increasing attention, driven by their assorted military and commercial application potential. However, addressing the uncertainties present in practical navigational sensor measurements of a USV in the maritime environment remains the main challenge of the development. This research aims to develop a multi-sensor data fusion system that autonomously provides a USV with reliable navigational information on its own position and heading, and that detects dynamic target ships in the surrounding environment, in a holistic fashion. A multi-sensor data fusion algorithm based on the Unscented Kalman Filter (UKF) has been developed to generate more accurate estimates of the USV's navigational data under practical environmental disturbances. A novel covariance matching adaptive estimation algorithm has been proposed to deal with the unknown and varying sensor noise encountered in practice and so improve system robustness. Measures have been designed to determine system reliability numerically, to recover the USV trajectory during short-term sensor signal loss, and to autonomously detect and discard permanently malfunctioning sensors, thereby enabling a degree of sensor fault tolerance. The performance of the algorithms has been assessed in theoretical simulations as well as on experimental data collected from a real-world USV project conducted in collaboration with Plymouth University. To increase the degree of autonomy of USVs in perceiving their surroundings, target detection and prediction algorithms using an Automatic Identification System (AIS) in conjunction with a marine radar have been proposed to provide full detection of multiple dynamic targets over a wider coverage range, remedying the narrow detection range and sensor uncertainties of the AIS. The detection algorithms have been validated in simulations using practical environments with water current effects. The performance of the developed multi-sensor data fusion system in providing reliable navigational data and perceiving the surrounding environment for USV navigation has been comprehensively demonstrated.
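The covariance-matching idea behind the adaptive estimation can be sketched in scalar form: the measurement-noise variance R is re-estimated from a sliding window of filter innovations, so the filter can follow unknown or drifting sensor noise. The scalar state, window size, and class interface below are illustrative assumptions; the thesis applies this inside a full UKF.

```python
# Sketch of innovation-based covariance matching: since the expected
# squared innovation is (predicted measurement variance + R), a window
# average of nu^2 minus the predicted variance estimates R.

from collections import deque

class AdaptiveR:
    def __init__(self, window=10, r0=1.0):
        self.innovations = deque(maxlen=window)  # sliding window of nu
        self.r = r0

    def update(self, innovation, predicted_var):
        """Re-estimate R from the innovation window:
        R ~= mean(nu^2) - predicted measurement variance."""
        self.innovations.append(innovation)
        nu2 = sum(n * n for n in self.innovations) / len(self.innovations)
        self.r = max(nu2 - predicted_var, 1e-6)  # keep R positive
        return self.r
```

A filter using this estimator inflates R when innovations grow (e.g. a degrading GPS fix) and shrinks it again when the sensor recovers.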

    A new solution to map dynamic indoor environments

    Authors: G. Q. Huang, Y. K. Wong. 2006-2007 academic research: publication in a refereed journal. Version of Record.

    Fusion of Data from Heterogeneous Sensors with Distributed Fields of View and Situation Evaluation for Advanced Driver Assistance Systems

    In order to develop a driver assistance system for pedestrian protection, pedestrians in the environment of a truck are detected by radars and a camera and are tracked across distributed fields of view using a Joint Integrated Probabilistic Data Association (JIPDA) filter. A robust approach for predicting the system vehicle's trajectory is presented. It serves the computation of a probabilistic collision risk based on reachable sets, where different sources of uncertainty are taken into account.
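One way to make the "probabilistic collision risk" concrete is a toy Monte Carlo version: sample the uncertain future positions of vehicle and pedestrian and count how often they come within a safety radius. The Gaussian position model, radius, and sample count are assumptions for illustration; the paper derives the risk from reachable sets rather than sampling.

```python
# Toy sketch: collision probability as the fraction of sampled
# (vehicle, pedestrian) position pairs closer than a safety radius.

import random

def collision_risk(veh_mean, ped_mean, sigma, radius=1.0, n=20000, seed=0):
    """Monte Carlo estimate of P(distance <= radius) at one future
    time, with isotropic Gaussian uncertainty on both positions."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        vx = rng.gauss(veh_mean[0], sigma)
        vy = rng.gauss(veh_mean[1], sigma)
        px = rng.gauss(ped_mean[0], sigma)
        py = rng.gauss(ped_mean[1], sigma)
        if (vx - px) ** 2 + (vy - py) ** 2 <= radius ** 2:
            hits += 1
    return hits / n
```

Evaluating this over a horizon of predicted time steps yields a risk profile the assistance system could threshold for warnings.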

    A Review of the Bayesian Occupancy Filter

    Autonomous vehicle systems are currently the object of intense research within scientific and industrial communities; however, many problems remain to be solved. One of the most critical aspects addressed in both autonomous driving and robotics is environment perception, since it consists of the ability to understand the surroundings of the vehicle in order to estimate risks and make decisions on future movements. In recent years, the Bayesian Occupancy Filter (BOF) method has been developed to evaluate occupancy by tessellation of the environment. A review of the BOF and its variants is presented in this paper. Moreover, we propose a detailed taxonomy in which the BOF is decomposed into five progressive layers, from the level closest to the sensor to the highest abstract level of risk assessment. In addition, we present a study of implemented use cases to provide a practical understanding of the main uses of the BOF and its taxonomy. This work has been funded by the Spanish Ministry of Economy and Competitiveness along with the European Structural and Investment Funds in the National Project TCAP-AUTO (RTC-2015-3942-4) in the program of "Retos Colaboración 2014".
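The Bayesian update at the heart of occupancy filtering can be sketched for a single grid cell in log-odds form. This is the static occupancy-grid special case, not the full BOF with per-cell velocity distributions; the 0.7 sensor confidence below is an illustrative assumption.

```python
# Minimal sketch: a grid cell keeps a log-odds occupancy value that is
# incremented by the log-odds of each observation's inverse sensor
# model, implementing the recursive Bayesian update.

import math

def logodds(p):
    return math.log(p / (1.0 - p))

def prob(l):
    return 1.0 - 1.0 / (1.0 + math.exp(l))

def update_cell(l, p_meas):
    """Fuse one observation with P(occupied | z) = p_meas into a cell."""
    return l + logodds(p_meas)

# A cell starting at the 0.5 prior, hit three times by a 0.7-confidence
# "occupied" reading, becomes confidently occupied.
l = logodds(0.5)
for _ in range(3):
    l = update_cell(l, 0.7)
```

The full BOF extends this per-cell scalar to a joint distribution over occupancy and velocity, which is what the taxonomy's lower layers organize.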

    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Simultaneous Localization and Mapping (SLAM) consists of the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions at robotics conferences: do robots need SLAM, and is SLAM solved?
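The "de-facto standard formulation" the survey presents is maximum-a-posteriori estimation over a factor graph, usually solved as nonlinear least squares. A toy 1D pose graph makes the idea concrete: one prior, two odometry factors, and a deliberately inconsistent loop closure. Plain gradient descent stands in for the Gauss-Newton solvers real back-ends use, and all factor values here are made up for illustration.

```python
# Sketch of factor-graph SLAM: minimize the sum of squared factor
# residuals over the poses x0, x1, x2.

def residuals(x):
    return [
        x[0] - 0.0,          # prior: x0 anchored at the origin
        x[1] - x[0] - 1.0,   # odometry: x1 - x0 = 1
        x[2] - x[1] - 1.0,   # odometry: x2 - x1 = 1
        x[2] - x[0] - 2.1,   # loop closure: x2 - x0 = 2.1 (inconsistent)
    ]

def solve(iters=2000, step=0.05):
    x = [0.0, 0.0, 0.0]
    for _ in range(iters):
        r = residuals(x)
        # gradient of sum(r_i^2) with respect to each pose
        g = [
            2 * (r[0] - r[1] - r[3]),
            2 * (r[1] - r[2]),
            2 * (r[2] + r[3]),
        ]
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

# The 0.1 m loop-closure inconsistency is spread over the trajectory
# rather than being absorbed by a single pose.
x = solve()
```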

    Proceedings of the 2009 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory

    The joint workshop of the Fraunhofer Institute of Optronics, System Technologies and Image Exploitation (IOSB), Karlsruhe, and the Vision and Fusion Laboratory (Institute for Anthropomatics, Karlsruhe Institute of Technology (KIT)) has been organized annually since 2005 with the aim of reporting the latest research and development findings of the doctoral students of both institutions. This book provides a collection of 16 technical reports on the research results presented at the 2009 workshop.

    Handling Missing Data For Sleep Monitoring Systems

    Sensor-based sleep monitoring systems can be used to track sleep behavior on a daily basis and provide feedback to their users to promote health and well-being. Such systems can provide data visualizations to enable self-reflection on sleep habits or a sleep coaching service to improve sleep quality. To provide useful feedback, sleep monitoring systems must be able to recognize whether an individual is sleeping or awake. Existing approaches to infer sleep-wake phases, however, typically assume continuous streams of data to be available at inference time. In real-world settings, though, data streams or data samples may be missing, causing severe performance degradation of models trained on complete data streams. In this paper, we investigate the impact of missing data on recognizing sleep and wake, and use regression- and interpolation-based imputation strategies to mitigate the errors that might be caused by incomplete data. To evaluate our approach, we use a data set that includes physiological traces (collected using wristbands), behavioral data (gathered using smartphones), and self-reports from 16 participants over 30 days. Our results show that the presence of missing sensor data degrades the balanced accuracy of the classifier on average by 10-35 percentage points for detecting sleep and wake, depending on the missing data rate. The imputation strategies explored in this work increase the performance of the classifier by 4-30 percentage points. These results open up new opportunities to improve the robustness of sleep monitoring systems against missing data.
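The interpolation-based imputation strategy can be sketched in a few lines: gaps in a sensor stream are filled by linear interpolation between the nearest observed neighbours, with edge gaps falling back to the nearest value. The fallback rule and the None-based gap encoding are assumptions for this illustration, not necessarily the paper's exact pipeline.

```python
# Minimal sketch of interpolation-based imputation for a 1D sensor
# stream with missing samples encoded as None.

def interpolate_gaps(stream):
    values = list(stream)
    known = [i for i, v in enumerate(values) if v is not None]
    if not known:
        return values  # nothing observed, nothing to impute
    for i, v in enumerate(values):
        if v is not None:
            continue
        left = max((k for k in known if k < i), default=None)
        right = min((k for k in known if k > i), default=None)
        if left is None:
            values[i] = values[right]   # leading gap: nearest value
        elif right is None:
            values[i] = values[left]    # trailing gap: nearest value
        else:
            t = (i - left) / (right - left)
            values[i] = values[left] + t * (values[right] - values[left])
    return values

# A heart-rate-like trace with a two-sample dropout:
print(interpolate_gaps([60.0, None, None, 66.0]))
# [60.0, 62.0, 64.0, 66.0]
```

Regression-based imputation replaces the linear ramp with a model fitted on the other sensor channels, which is what lets it recover longer or non-linear gaps.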