
    Integration of LiDAR and stereoscopic imagery for route corridor surveying

    Transportation networks are typically among the most economically valuable resources of any nation, requiring a large percentage of GDP to build and maintain. These route corridors attract their own unique set of spatial information requirements in terms of overall management, including planning, engineering and operation. Various disciplines within a road management agency require high-quality spatial data on objects and features occurring along these networks, from road infrastructure and sub-surface pavement condition through to noise modelling. This paper examines the integration of relatively novel sensor data against some pressing spatial information requirements for a small European road management agency. LiDAR systems are widely available and now used to record data from both aerial and terrestrial survey platforms. One of the chief LiDAR outputs is a set of X,Y,Z points enabling a reliable 2.5-D geometric surface to be produced. Stereoscopic imagery is also collected from similar airborne and terrestrial mobile platforms. The two provide different datasets in terms of their respective optical and geometric properties. For example, stereoscopic cameras mounted on a survey vehicle record different data compared to LiDAR mounted near-vertically on an airborne platform. Airborne LiDAR provides a more comprehensive geometric record, whereas stereoscopic imagery provides a more comprehensive visual descriptor of the immediate route corridor. Acquisition systems for both sensors are relatively well understood and developed. Both systems collect large volumes of data that require a significant amount of processing to produce useful information. A more efficient result can be achieved by integrating the two datasets within a GIS. The preliminary results of integrating airborne LiDAR with ground-based stereo imaging systems are presented. How well this integration satisfies the growing spatial information requirements of the road agency is also examined.

    Air pollution and fog detection through vehicular sensors

    We describe a method for the automatic recognition of air pollution and fog from a vehicle. Our system consists of sensors to acquire data from cameras as well as from Light Detection and Ranging (LiDAR) instruments. We discuss how these data can be collected, analyzed and merged to determine the degree of air pollution or fog. Such data are essential for control systems of moving vehicles in making autonomous avoidance decisions, and backend systems need them for forecasting and strategic traffic planning and control. Laboratory-based experimental results are presented for weather conditions such as air pollution and fog, showing that the recognition scenario works with better-than-adequate results. This paper demonstrates that LiDAR technology, already onboard for the purpose of autonomous driving, can be used to improve weather condition recognition compared with a camera-only system. We conclude that the combination of a front camera and a LiDAR laser scanner is well suited as a sensor instrument set for air pollution and fog recognition, and can contribute accurate data to driving-assistance and weather-alerting systems.

    Localization in Unstructured Environments: Towards Autonomous Robots in Forests with Delaunay Triangulation

    Autonomous harvesting and transportation is a long-term goal of the forest industry. One of the main challenges is the accurate localization of both vehicles and trees in a forest. Forests are unstructured environments where it is difficult to find a group of significant landmarks for current fast feature-based place recognition algorithms. This paper proposes a novel approach where local observations are matched to a general tree map using the Delaunay triangulation as the representation format. Instead of point-cloud-based matching methods, we utilize a topology-based method. First, tree trunk positions are registered in a prior run done by a forest harvester. Second, the resulting map is Delaunay-triangulated. Third, a local submap of the autonomous robot is registered, triangulated and matched using triangular similarity maximization to estimate the position of the robot. We test our method on a dataset accumulated from a forestry site at Lieksa, Finland. A total length of 2100 m of harvester path was recorded by an industrial harvester with a 3D laser scanner and a geolocation unit fixed to the frame. Our experiments show a 12 cm standard deviation in location accuracy, with real-time data processing for speeds not exceeding 0.5 m/s. This accuracy and speed limit are realistic during forest operations.
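    The triangulate-and-match idea above can be sketched briefly. This is an illustrative toy, not the paper's implementation: trunk positions are Delaunay-triangulated and each triangle is described by its sorted side lengths, which a local submap can be compared against (the descriptor, tolerance, and scoring are assumptions for the sketch).

    ```python
    import numpy as np
    from scipy.spatial import Delaunay

    def triangle_descriptors(points):
        """Delaunay-triangulate 2D trunk positions; describe each
        triangle by its sorted side lengths (rotation-invariant)."""
        tri = Delaunay(points)
        desc = []
        for simplex in tri.simplices:
            p = points[simplex]
            sides = sorted([np.linalg.norm(p[0] - p[1]),
                            np.linalg.norm(p[1] - p[2]),
                            np.linalg.norm(p[2] - p[0])])
            desc.append(sides)
        return np.array(desc)

    def match_score(map_desc, sub_desc, tol=0.1):
        """Count submap triangles whose side lengths all lie within
        `tol` metres of some map triangle -- a crude similarity score."""
        hits = 0
        for d in sub_desc:
            if np.any(np.all(np.abs(map_desc - d) < tol, axis=1)):
                hits += 1
        return hits

    rng = np.random.default_rng(0)
    trunks = rng.uniform(0, 50, size=(40, 2))            # global trunk map
    submap = trunks[:12] + rng.normal(0, 0.02, (12, 2))  # noisy local view
    score = match_score(triangle_descriptors(trunks),
                        triangle_descriptors(submap))
    ```

    A real system would additionally recover the robot pose from the matched triangle correspondences; the score here only illustrates the topology-based comparison.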

    Estimating meteorological visibility range under foggy weather conditions: A deep learning approach

    © 2018 The Authors. Published by Elsevier Ltd. Systems capable of estimating visibility distances under foggy weather conditions are extremely useful for next-generation cooperative situational awareness and collision avoidance systems. In this paper, we present a brief review of notable approaches for determining visibility distance under foggy weather conditions. We then propose a novel approach based on the combination of a deep learning method for feature extraction and an SVM classifier. We present a quantitative evaluation of the proposed solution and show that our approach outperforms an earlier approach based on the combination of an ANN model and a set of global feature descriptors. Our experimental results show that the proposed solution is very promising in support of next-generation situational awareness and cooperative collision avoidance systems. Hence it can potentially contribute towards safer driving conditions in the presence of fog.
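    The feature-extraction-plus-SVM pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' code: the "deep features" are simulated as random vectors (a real system would take them from a pretrained CNN), and the three visibility classes, feature dimension, and class separation are assumptions.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)
    n, dim = 300, 128                   # images x deep-feature dimension (assumed)
    X = rng.normal(size=(n, dim))       # stand-in for CNN feature vectors
    y = rng.integers(0, 3, size=n)      # e.g. visibility <50 m, 50-200 m, >200 m
    X += y[:, None] * 1.5               # make classes separable for the sketch

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
    acc = clf.score(X_te, y_te)
    ```

    Swapping the simulated features for real CNN embeddings leaves the classifier side of the pipeline unchanged, which is the practical appeal of this two-stage design.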

    Uses and Challenges of Collecting LiDAR Data from a Growing Autonomous Vehicle Fleet: Implications for Infrastructure Planning and Inspection Practices

    Autonomous vehicles (AVs) that utilize LiDAR (Light Detection and Ranging) and other sensing technologies are becoming an inevitable part of the transportation industry. Concurrently, transportation agencies are increasingly challenged with the management and tracking of a large-scale highway asset inventory. LiDAR has become popular among transportation agencies for highway asset management given its advantages over traditional surveying methods, and the technology is becoming more affordable every day. Given this, there will be substantial challenges and opportunities in utilizing the big data resulting from the growth of AVs equipped with LiDAR. A proper understanding of the data size generated by this technology will help agencies make decisions regarding storage, management, and transmission of the data. The original raw data generated by the sensor shrink considerably after filtering and processing following the Cache County Road Manual and storing in the ASPRS-recommended (.las) file format. This pilot study finds that, when the road centerline is taken as the vehicle trajectory, a larger portion of the data falls into the right-of-way section compared to the actual vehicle trajectory in Cache County, UT. There is also a positive relation between data size and vehicle speed for the travel-lanes section, given the nature of the selected highway environment.
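    The storage-sizing question raised above lends itself to a back-of-envelope estimate. The numbers below are illustrative assumptions, not figures from the study; they simply show how raw fleet LiDAR volume scales with point rate, record size, and drive time.

    ```python
    def daily_lidar_volume_gb(points_per_sec, bytes_per_point,
                              hours_per_day, n_vehicles):
        """Raw (unfiltered) LiDAR volume per day for a fleet, in GB."""
        seconds = hours_per_day * 3600
        total_bytes = points_per_sec * bytes_per_point * seconds * n_vehicles
        return total_bytes / 1e9

    # e.g. one vehicle with a 600k pts/s sensor, ~28 bytes per LAS
    # point record, driving 8 h/day (all values assumed):
    vol = daily_lidar_volume_gb(600_000, 28, 8, 1)   # ~484 GB/day
    ```

    Even before multiplying by fleet size, estimates like this make clear why the filtering and (.las) compression steps described in the study matter for storage and transmission planning.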

    Road Surface Feature Extraction and Reconstruction of Laser Point Clouds for Urban Environment

    Automakers are developing end-to-end three-dimensional (3D) mapping systems for Advanced Driver Assistance Systems (ADAS) and autonomous vehicles (AVs), using geomatics, artificial intelligence, and SLAM (Simultaneous Localization and Mapping) systems to handle all stages of map creation, sensor calibration and alignment. It is crucial that such a system be highly accurate and efficient, as it is an essential part of vehicle control. Such mapping requires significant resources to acquire geographic information (GIS and GPS), optical laser and radar spectroscopy, LiDAR, and 3D modeling applications in order to extract roadway features (e.g., lane markings, traffic signs, road edges) detailed enough to construct a “base map”. To keep this map current, it is necessary to incorporate changes due to events such as construction, altered traffic patterns, or growth of vegetation. Road information plays a very important role in traffic safety and is essential for guiding autonomous vehicles (AVs) and predicting upcoming road situations within AVs. The data size of the map is extensive due to the level of information provided by the different sensor modalities; for that reason, a method for data optimization and extraction from three-dimensional (3D) mobile laser scanning (MLS) point clouds is presented in this thesis. The research shows that the proposed hybrid filter configuration, together with the dynamic mechanism developed, provides a significant reduction of the point cloud data with reduced computational and size demands. The results obtained in this work are validated on a real-world system.
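    A common first step in MLS point-cloud reduction of the kind described above is voxel-grid downsampling. The sketch below is a minimal generic illustration, not the thesis's hybrid filter: each occupied voxel is replaced by the centroid of its points, and the voxel size and synthetic cloud are assumptions.

    ```python
    import numpy as np

    def voxel_downsample(points, voxel_size):
        """Keep one representative point (the centroid) per occupied voxel."""
        keys = np.floor(points / voxel_size).astype(np.int64)
        _, inverse = np.unique(keys, axis=0, return_inverse=True)
        counts = np.bincount(inverse)
        out = np.zeros((counts.size, 3))
        for dim in range(3):
            out[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
        return out

    rng = np.random.default_rng(1)
    cloud = rng.uniform(0, 10, size=(100_000, 3))    # synthetic 10 m cube
    reduced = voxel_downsample(cloud, voxel_size=0.5)
    ```

    The voxel size directly trades map detail against storage and compute, which is the same tension the thesis's more elaborate hybrid filter is designed to manage.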

    A Study on Recent Developments and Issues with Obstacle Detection Systems for Automated Vehicles

    This paper reviews current developments and discusses some critical issues with obstacle detection systems for automated vehicles. The concept of autonomous driving is the driving force towards future mobility. Obstacle detection systems play a crucial role in implementing and deploying autonomous driving on our roads and city streets. The current review looks at technology and existing systems for obstacle detection. Specifically, we look at the performance of LIDAR, RADAR, vision cameras, ultrasonic sensors, and IR, and review their capabilities and behaviour in a number of different situations: during daytime, at night, in extreme weather conditions, in urban areas, in the presence of smooth surfaces, in situations where emergency service vehicles need to be detected and recognised, and in situations where potholes need to be observed and measured. It is suggested that combining different technologies for obstacle detection gives a more accurate representation of the driving environment. In particular, for obstacle detection in extreme weather conditions (rain, snow, fog) and in some specific urban situations (shadows, reflections, potholes, insufficient illumination), the current developments, although already quite advanced, appear not sophisticated enough to guarantee 100% precision and accuracy; hence further valiant effort is needed.

    DATA PROCESSING AND RECORDING USING A VERSATILE MULTI-SENSOR VEHICLE

    In this paper we present a versatile multi-sensor vehicle which is used in several research projects. The vehicle is equipped with various sensors in order to cover the needs of different research projects in the areas of object detection and tracking, mobile mapping and change detection. We show an example of the capabilities of this vehicle by presenting camera- and LiDAR-based pedestrian detection methods. Besides this specific use case, we provide a more general in-depth description of the vehicle’s hardware and software design and its data-processing capabilities. The vehicle can be used as a sensor carrier for mobile mapping, but it also offers hardware and software components to allow for adaptable onboard processing. This enables the development and testing of methods related to real-time applications or high-level driver assistance functions. The vehicle’s hardware and software layout result from several years of experience, and our lessons learned can help other researchers set up their own experimental platforms.

    COBE's search for structure in the Big Bang

    The launch of the Cosmic Background Explorer (COBE) and the definition of the Earth Observing System (EOS) are two of the major events at NASA-Goddard. The three experiments contained in COBE (the Differential Microwave Radiometer (DMR), the Far Infrared Absolute Spectrophotometer (FIRAS), and the Diffuse Infrared Background Experiment (DIRBE)) are very important in measuring the big bang. DMR measures the isotropy of the cosmic background (the direction of the radiation). FIRAS looks at the spectrum over the whole sky, searching for deviations, and DIRBE operates in the infrared part of the spectrum, gathering evidence of the earliest galaxy formation. By special techniques, the radiation coming from the solar system will be distinguished from that of extragalactic origin. Unique graphics will be used to represent the temperature of the emitting material. A cosmic event will be modeled of such importance that it will affect cosmological theory for generations to come. EOS will monitor changes in the Earth's geophysics during a whole solar cycle.

    Earth Resources: A continuing bibliography with indexes, issue 33

    This bibliography lists 436 reports, articles, and other documents introduced into the NASA Scientific and Technical Information System. Emphasis is placed on the use of remote sensing and geophysical instrumentation in spacecraft and aircraft to survey and inventory natural resources and urban areas. Subject matter is grouped according to agriculture and forestry, environmental changes and cultural resources, geodesy and cartography, geology and mineral resources, hydrology and water management, data processing and distribution systems, instrumentation and sensors, and economic analysis.