
    High-resolution optical and SAR image fusion for building database updating

    Get PDF
    This paper addresses the issue of cartographic database (DB) creation or updating using high-resolution synthetic aperture radar and optical images. In cartographic applications, the objects of interest are mainly buildings and roads. This paper proposes a processing chain to create or update building DBs. The approach is composed of two steps. First, if a DB is available, the presence of each DB object is checked in the images. Then, we verify whether objects coming from an image segmentation should be included in the DB. For both steps, relevant features are extracted from the images in the neighborhood of the considered object. The removal of an object from, or its inclusion in, the DB is based on a score obtained by fusing the features in the framework of Dempster–Shafer evidence theory.
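    A minimal sketch of the fusion step described above: Dempster's rule of combination applied to two mass functions over the frame {building, no building}. The feature-derived masses below are made-up illustrations, not values from the paper, and the final thresholding of the fused score is only hinted at.

        from itertools import product

        def dempster_combine(m1, m2):
            # Combine two mass functions (dicts mapping frozenset -> mass)
            # with Dempster's rule of combination.
            combined, conflict = {}, 0.0
            for (a, ma), (b, mb) in product(m1.items(), m2.items()):
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + ma * mb
                else:
                    conflict += ma * mb  # mass falling on the empty set
            # Normalize by the non-conflicting mass.
            return {s: m / (1.0 - conflict) for s, m in combined.items()}

        B = frozenset({"building"})
        N = frozenset({"no_building"})
        U = B | N  # mass on the whole frame expresses ignorance

        # Hypothetical masses derived from an optical and a SAR feature.
        m_optical = {B: 0.6, N: 0.1, U: 0.3}
        m_sar     = {B: 0.5, N: 0.2, U: 0.3}
        fused = dempster_combine(m_optical, m_sar)
        score = fused.get(B, 0.0)  # belief committed exactly to "building"
        # An accept/reject decision could then threshold this score;
        # the paper's exact decision rule may differ.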

    Real-time sidewalk slope calculation through integration of GPS trajectory and image data to assist people with disabilities in navigation

    Get PDF
    People with disabilities face many obstacles in everyday outdoor travel. One of the most notable obstacles is a steep slope on a sidewalk segment. Current navigation systems and services do not generally support map databases with slope attributes and cannot calculate sidewalk slope in real time. In this paper, we present a technique for calculating the slopes of sidewalk segments from image data and for predicting the most suitable route for each individual user through integration with GPS trajectories. Our technique uses GPS trajectory data to identify the sidewalk segment on which the traveler will most probably pass, together with images of the identified segment. Using edge detection, we detect the edges of background objects such as buildings, billboards, and walls. The slope of the segment is then calculated by comparing its line representation in the map with the detected edges. Our experimental results indicate that sidewalk slopes are calculated effectively.
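    A hedged sketch of the image side of this technique, using OpenCV's Canny edge detector and probabilistic Hough transform to find the dominant straight edge in a sidewalk image and read off its apparent angle. The thresholds and the edge-selection heuristic are placeholder assumptions; the paper's comparison against the map's line representation is summarized only in the final comment.

        import cv2
        import numpy as np

        def dominant_edge_angle(image_path):
            # Estimate the apparent angle (degrees) of the dominant
            # straight edge in a sidewalk image.
            img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
            edges = cv2.Canny(img, 50, 150)  # placeholder thresholds
            lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                                    minLineLength=100, maxLineGap=10)
            if lines is None:
                return None
            # Heuristic: take the longest detected line (e.g. a building
            # base or wall edge in the background) as the reference edge.
            x1, y1, x2, y2 = max(lines[:, 0],
                                 key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
            # Image y grows downward, hence the sign flip.
            return float(np.degrees(np.arctan2(y1 - y2, x2 - x1)))

        # The segment slope would then follow from comparing this angle
        # with the angle of the segment's line representation in the map.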

    Multisensor Data Fusion Strategies for Advanced Driver Assistance Systems

    Get PDF
    Multisensor data fusion and integration is a rapidly evolving research area that requires interdisciplinary knowledge in control theory, signal processing, artificial intelligence, probability and statistics, and related fields. Multisensor data fusion refers to the synergistic combination of sensory data from multiple sensors and related information to provide more reliable and accurate information than could be achieved using a single, independent sensor (Luo et al., 2007). It is a multilevel, multifaceted process dealing with the automatic detection, association, correlation, estimation, and combination of data from single and multiple information sources, and its results help users make decisions in complicated scenarios. Integration of multiple sensor data was originally needed for military applications in ocean surveillance, air-to-air and surface-to-air defence, and battlefield intelligence. More recently, multisensor data fusion has also spread to nonmilitary fields such as remote environmental sensing, medical diagnosis, automated monitoring of equipment, robotics, and automotive systems (Macci et al., 2008). The potential advantages of multisensor fusion and integration concern the redundancy, complementarity, timeliness, and cost of the information. Fusing redundant information can reduce overall uncertainty and thus increase the accuracy with which features are perceived by the system, and multiple sensors providing redundant information also increase reliability in the case of sensor error or failure. Complementary information from multiple sensors allows features of the environment to be perceived that could not be perceived using the information from each individual sensor operating separately (Luo et al., 2007). Driving, as one of our daily activities, is a complex task involving a great amount of interaction between driver and vehicle. Drivers regularly divide their attention among operating the vehicle, monitoring traffic and nearby obstacles, and performing secondary tasks such as conversing and adjusting comfort settings (e.g., temperature or the radio). The complexity of the task and the uncertainty of the driving environment make driving very dangerous: according to a study in the European member states, there are more than 1,200,000 traffic accidents a year, with over 40,000 fatalities. This points to a growing demand for automotive safety systems that can make a significant contribution to overall road safety (Tatschke et al., 2006). Consequently, an increasing number of research activities have recently focused on Driver Assistance System (DAS) development.
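    The uncertainty-reduction claim above has a standard textbook form: fusing two redundant, independent measurements with inverse-variance weights yields an estimate whose variance is lower than either input's. A minimal illustration with made-up radar and camera range readings:

        def fuse(z1, var1, z2, var2):
            # Inverse-variance (maximum-likelihood) fusion of two
            # independent measurements of the same quantity.
            w1, w2 = 1.0 / var1, 1.0 / var2
            z = (w1 * z1 + w2 * z2) / (w1 + w2)
            var = 1.0 / (w1 + w2)  # always <= min(var1, var2)
            return z, var

        # Hypothetical radar and camera ranges (m) to the same obstacle.
        z, var = fuse(25.3, 0.04, 24.9, 0.09)  # fused variance ~ 0.028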

    Multisource Data Integration in Remote Sensing

    Get PDF
    Papers presented at the workshop on Multisource Data Integration in Remote Sensing are compiled, with the full text of each paper included. New instruments and new sensors are discussed that can provide a large variety of new views of the real world. This huge amount of data has to be combined and integrated in a (computer) model of the world. Multiple sources may give complementary views of the world: consistent observations from different (and independent) data sources support each other and increase their credibility, while contradictions may be caused by noise, errors during processing, or misinterpretations, and can be identified as such. As a consequence, integration results are highly reliable and represent a valid source of information for any geographical information system.

    Advances in vision-based lane detection: algorithms, integration, assessment, and perspectives on ACP-based parallel vision

    Get PDF
    Lane detection is a fundamental component of most current advanced driver assistance systems (ADASs). A large number of existing results focus on vision-based lane detection methods, owing to the extensive knowledge background and the low cost of camera devices. In this paper, previous vision-based lane detection studies are reviewed in terms of three aspects: lane detection algorithms, integration, and evaluation methods. Considering the inherent limitations of camera-based lane detection, system integration methodologies for constructing more robust detection systems are then reviewed and analyzed. The integration methods are divided into three levels: algorithm, system, and sensor. The algorithm level combines different lane detection algorithms, the system level integrates other object detection systems to detect lane positions comprehensively, and the sensor level uses multi-modal sensors to build a robust lane recognition system. In view of the complexity of evaluating detection systems and the lack of a common evaluation procedure and uniform metrics in past studies, the existing evaluation methods and metrics are analyzed and classified toward a better evaluation of lane detection systems. A comparison of representative studies is then performed. Finally, the limitations of current lane detection systems are discussed, and a parallel lane detection framework based on the ACP approach (Artificial societies, Computational experiments, and Parallel execution) is proposed as a future development trend.
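    As a concrete example of the algorithm level discussed above, here is a minimal classic vision pipeline (grayscale, Canny edges, region-of-interest mask, probabilistic Hough transform) in OpenCV; all parameters are illustrative and would need tuning per camera setup.

        import cv2
        import numpy as np

        def detect_lane_lines(frame):
            # frame: BGR road image; returns candidate line segments.
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            edges = cv2.Canny(gray, 50, 150)
            h, w = edges.shape
            # Keep only the lower trapezoid where lane markings appear.
            mask = np.zeros_like(edges)
            roi = np.array([[(0, h), (w // 2, h // 2), (w, h)]], dtype=np.int32)
            cv2.fillPoly(mask, roi, 255)
            masked = cv2.bitwise_and(edges, mask)
            lines = cv2.HoughLinesP(masked, 1, np.pi / 180, threshold=50,
                                    minLineLength=40, maxLineGap=20)
            return [] if lines is None else [tuple(l) for l in lines[:, 0]]

    Algorithm-level integration in the sense of the survey would combine the output of several such detectors (e.g. Hough-based and model-fitting-based) before forming a final lane estimate.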

    Fusion of Video and Multi-Waveform FMCW Radar for Traffic Surveillance

    Get PDF
    Modern frequency modulated continuous wave (FMCW) radar technology provides the ability to modify the system transmission frequency as a function of time, which in turn makes it possible to generate multiple output waveforms from a single radar unit. Current low-power multi-waveform FMCW radar techniques lack the ability to reliably associate measurements from the various waveform sections in the presence of multiple targets and multiple false detections within the field of view. Two approaches are developed here to address this problem. The first approach takes advantage of the relationships between the waveform segments to generate a weighting function for candidate combinations of measurements from the waveform sections. This weighting function is then used to choose the best candidate combinations to form polar-coordinate measurements. Simulations show that this approach provides a ten to twenty percent increase in the probability of correct association over the current approach while reducing the number of false alarms generated in the process, but it still fails to form a measurement if a detection from a waveform section is missing. The second approach models the multi-waveform FMCW radar as a set of independent sensors and uses distributed data fusion to fuse estimates from those individual sensors within a tracking structure. Tracking in this approach is performed directly on the raw frequency and angle measurements from the waveform segments, which removes the need for data association between the measurements from the individual waveform segments. The distributed data fusion model is used again to extend the radar tracking system with a video sensor that contributes additional angular and identification information. The combination of the radar and vision sensors ultimately provides an enhanced roadside tracking system.
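    For background on why association across waveform sections is needed at all: in the standard triangular-FMCW model, one up-chirp and one down-chirp beat frequency pair into a single range/velocity candidate, so with multiple targets every pairing must be scored. A sketch with assumed radar parameters and a deliberately crude plausibility weight; the thesis's actual weighting function is more elaborate.

        import numpy as np

        C = 3e8      # speed of light (m/s)
        B = 150e6    # sweep bandwidth (Hz), assumed
        T = 1e-3     # sweep duration per segment (s), assumed
        FC = 77e9    # carrier frequency (Hz), assumed

        def range_velocity(f_up, f_down):
            # Triangular-FMCW inversion: f_up = f_r - f_d, f_down = f_r + f_d.
            f_r = (f_up + f_down) / 2.0   # range (beat) component
            f_d = (f_down - f_up) / 2.0   # Doppler component
            rng = C * f_r * T / (2.0 * B)
            vel = C * f_d / (2.0 * FC)
            return rng, vel

        def score_pairings(f_ups, f_downs, r_max=200.0, v_max=60.0):
            # Weight every up/down pairing; physically implausible
            # combinations are gated out (assumed gate limits). The
            # best-scoring pairings would form polar measurements.
            scores = {}
            for i, fu in enumerate(f_ups):
                for j, fd in enumerate(f_downs):
                    r, v = range_velocity(fu, fd)
                    if 0.0 <= r <= r_max and abs(v) <= v_max:
                        scores[(i, j)] = 1.0 / (1.0 + abs(v))  # crude weight
            return scores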

    DROW: Real-Time Deep Learning based Wheelchair Detection in 2D Range Data

    Full text link
    We introduce the DROW detector, a deep learning based detector for 2D range data. Laser scanners are lighting invariant, provide accurate range data, and typically cover a large field of view, making them interesting sensors for robotics applications. So far, research on detection in laser range data has been dominated by hand-crafted features and boosted classifiers, potentially losing performance due to suboptimal design choices. We propose a Convolutional Neural Network (CNN) based detector for this task. We show how to effectively apply CNNs for detection in 2D range data, and propose a depth preprocessing step and a voting scheme that significantly improve CNN performance. We demonstrate our approach on wheelchairs and walkers, obtaining state-of-the-art detection results. Apart from the training data, none of our design choices limits the detector to these two classes. We provide a ROS node for our detector and release our dataset containing 464k laser scans, of which 24k are annotated. (Comment: Lucas Beyer and Alexander Hermans contributed equally.)
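    A hedged PyTorch sketch in the spirit of a CNN detector on 2D range data: a small 1D CNN classifies a fixed-length range cutout centred on each scan point. Layer sizes, cutout length, and class set are illustrative rather than DROW's exact design, and the depth preprocessing and voting steps are omitted.

        import torch
        import torch.nn as nn

        class ScanCutoutCNN(nn.Module):
            # Classifies per-point range cutouts from a 2D laser scan.
            def __init__(self, cutout_len=48, n_classes=3):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv1d(1, 32, kernel_size=5, padding=2), nn.ReLU(),
                    nn.MaxPool1d(2),
                    nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
                    nn.MaxPool1d(2),
                )
                self.classifier = nn.Linear(64 * (cutout_len // 4), n_classes)

            def forward(self, x):  # x: (batch, 1, cutout_len) of ranges
                return self.classifier(self.features(x).flatten(1))

        model = ScanCutoutCNN()
        logits = model(torch.randn(8, 1, 48))  # 8 cutouts -> 8 class scores
        # Per-point class scores would then be accumulated spatially by a
        # voting scheme to produce the final detections.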