150 research outputs found

    DETERMINING WHERE INDIVIDUAL VEHICLES SHOULD NOT DRIVE IN SEMIARID TERRAIN IN VIRGINIA CITY, NV

    This thesis explored the elements involved in determining and mapping where a vehicle should not drive off-road in semiarid areas. Obstacles are anything that slows or obstructs progress (Meyer et al., 1977) or limits the space available for maneuvering (Spenko et al., 2006). This study identified the major factors relevant in determining which terrain features should be considered obstacles, and thus avoided, when driving off-road: elements relating to the vehicle itself and how it is driven, as well as the terrain factors of slope, vegetation, water, and soil. These were identified in the terrain using inferential methods of Terrain Pattern Recognition (TPR), analysis of remotely sensed data, and Digital Elevation Model (DEM) data analysis, and the analysis was further refined using other reference information about the area. Other factors such as weather, driving angle, and environmental impact are discussed. This information was applied to a section of Virginia City, Nevada as a case study. Analysis and mapping were deliberately done without prior fieldwork, to determine what could be assessed by remote means alone. Not all findings from the literature review could be implemented in this trafficability study; some methods and trafficability knowledge were omitted because the necessary data were unavailable, unacquirable, or too coarsely mapped to be useful. Examples are lidar mapping of the area, soil profiling of the terrain, and assessment of the plant species present for driven-over traction and tire punctures. The Virginia City section was analyzed and mapped using hyperspectral remotely sensed image data and remote-sensor-derived DEM data in a Geographic Information System (GIS). Stereo-paired air photos of the study site were used in TPR, and other information on flora, historical weather, and a previous soil survey map was incorporated in the GIS.
Field validation was used to check the findings. The case study's trafficability assessment demonstrated terrain analysis methodologies that successfully classified many of the materials present and identified major areas where a vehicle should not drive. The methods used were: manual TPR of the stereo-paired air photos with a stereo photo viewer to conduct drainage tracing, and slope analysis of the DEM using automated methods in ArcMap. The SpecTIR hyperspectral data was analyzed using the manual Environment for Visualizing Images (ENVI) hourglass procedure. Visual analysis of the hyperspectral data and air photos, along with known soil and vegetation characteristics, was used to refine the analyses. Processed data was georectified using SpecTIR Geographic Lookup Table (GLT) input geometry, then exported to ArcMap and analyzed with the other data listed above. Features were identified by their spectral attributes, spatial properties, and visual analysis. Inaccuracies in mapping were largely attributable to the spatial resolution of the DEMs, which averaged out some non-drivable obstacles and parts of a drivable road; to subjective human and computer decisions during data processing; and to the grouping of spectral end-members during hyperspectral data analysis.

Mapping and field validation found that several manmade and natural obstacles visible from the ground were too fine, thin, or small to be identified from the remote sensing data. Examples are fences and some natural terrain surface roughness, where the terrain deviated from a smooth surface and exhibited micro-variations in elevation and/or texture. Slope analysis using the 10-meter and 30-meter resolution DEMs did not accurately depict some manmade features (e.g., some buildings, portions of roads, and fences); this was evident in a well-trafficked paved road appearing in the DEM analysis as having too steep a slope (beyond 15°) to be drivable. Some features had been grouped together during analysis due to similar spectral properties. Spectral grouping is a process in which each spectral class's pixel areas are reviewed and classes with too few occurrences are averaged into similar classes or dropped entirely; this reduces the number of spectrally unique material classes to those most relevant to the terrain mapped. These decisions are subjective, and in one case two similar spectral material classes were combined that, on later evaluation, should have remained separate. During field sample collection, some of the determined features (free-standing water and liquid tanks) were inaccessible because they were on private land and/or secured by fences; these had to be verified visually, and photos were taken. Further refinements to the mapping could have been made if fieldwork had been done during the mapping process.

Determining and mapping where a vehicle should not drive in semiarid areas is a complex task involving many variables and reference data types. Processing, analyzing, and fusing these different references entails subjective manual and automated decisions that are subject to errors and/or inaccuracies at multiple levels, which can individually or collectively skew results and cause terrain trafficability to be depicted incorrectly. That said, a usable reference map can be created to assist decision makers in determining their route(s).
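The slope threshold lends itself to a compact illustration. The sketch below computes per-cell slope from a small elevation grid using central differences and flags cells steeper than 15° as non-drivable. This is a minimal stand-in for the automated ArcMap workflow described above, and the grid values, cell size, and function names are all hypothetical.

```python
import math

def slope_degrees(dem, cell_size):
    """Per-cell slope (degrees) from a DEM grid via central differences."""
    rows, cols = len(dem), len(dem[0])
    slopes = [[0.0] * cols for _ in range(rows)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            dz_dx = (dem[r][c + 1] - dem[r][c - 1]) / (2 * cell_size)
            dz_dy = (dem[r + 1][c] - dem[r - 1][c]) / (2 * cell_size)
            slopes[r][c] = math.degrees(math.atan(math.hypot(dz_dx, dz_dy)))
    return slopes

def non_drivable_mask(slopes, max_slope_deg=15.0):
    """Flag cells steeper than the drivability threshold."""
    return [[s > max_slope_deg for s in row] for row in slopes]

# A toy 10 m DEM tile: flat ground next to a steep rise (elevations in meters).
dem = [
    [100, 100, 100, 100],
    [100, 100, 105, 110],
    [100, 100, 110, 120],
    [100, 100, 115, 130],
]
slopes = slope_degrees(dem, cell_size=10.0)
mask = non_drivable_mask(slopes)  # True where slope exceeds 15 degrees
```

Note how the coarse cell size already hints at the averaging problem described above: a narrow drivable road crossing one of these steep cells would inherit the cell's aggregate slope and be mapped as non-drivable.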
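The spectral grouping step can be sketched as a simple merge rule: classes with too few pixels are either folded into a nominated similar class or dropped. The class names, pixel counts, threshold, and `merge_map` below are invented for illustration and do not come from the thesis.

```python
def group_spectral_classes(class_pixel_counts, min_pixels, merge_map):
    """Merge spectral classes with too few pixels into a designated
    similar class, or drop them when no similar class is given."""
    grouped = {}
    for cls, count in class_pixel_counts.items():
        if count >= min_pixels:
            grouped[cls] = grouped.get(cls, 0) + count
        elif cls in merge_map:
            # too few pixels: fold into the nominated similar class
            target = merge_map[cls]
            grouped[target] = grouped.get(target, 0) + count
        # else: dropped entirely
    return grouped

# Hypothetical end-member classes and pixel counts.
counts = {"caliche": 5200, "tailings": 4100, "tailings_wet": 60, "glint": 12}
merged = group_spectral_classes(counts, min_pixels=100,
                                merge_map={"tailings_wet": "tailings"})
```

The subjectivity noted above lives in `merge_map`: deciding that `tailings_wet` is "similar enough" to `tailings` is exactly the kind of judgment call that, in one case in this study, merged two classes that should have stayed separate.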

    Multispectral Deep Neural Network Fusion Method for Low-Light Object Detection

    Despite significant strides in achieving vehicle autonomy, robust perception under low-light conditions remains a persistent challenge. In this study, we investigate the potential of multispectral imaging, leveraging deep learning models to enhance object detection performance in the context of nighttime driving. Features encoded from the red, green, and blue (RGB) visual spectrum and from thermal infrared images are combined to implement a multispectral object detection model. This has proven more effective than using visual channels alone, as thermal images provide complementary information when discriminating objects in low-illumination conditions. However, there is a lack of studies on effectively fusing these two modalities for optimal object detection performance. In this work, we present a framework based on the Faster R-CNN architecture with a feature pyramid network. We design various fusion approaches using concatenation and addition operators at varying stages of the network to analyze their impact on object detection performance. Our experimental results on the KAIST and FLIR datasets show that our framework outperforms the unimodal baselines and existing multispectral object detectors.
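The two fusion operators can be illustrated on toy feature vectors: concatenation widens the channel dimension, while addition requires matching widths and keeps the dimension unchanged. This is a minimal sketch of the operators only, assuming plain 1-D feature vectors; it is not the paper's Faster R-CNN/FPN implementation.

```python
def fuse_concat(rgb_feat, thermal_feat):
    """Channel-wise concatenation: output width is the sum of the inputs."""
    return rgb_feat + thermal_feat  # list concatenation

def fuse_add(rgb_feat, thermal_feat):
    """Element-wise addition: modalities must share the same width."""
    assert len(rgb_feat) == len(thermal_feat)
    return [a + b for a, b in zip(rgb_feat, thermal_feat)]

rgb = [1.0, 2.0, 3.0]      # toy RGB-branch feature vector
thermal = [4.0, 5.0, 6.0]  # toy thermal-branch feature vector

concat_out = fuse_concat(rgb, thermal)  # width 6: downstream layers must widen
add_out = fuse_add(rgb, thermal)        # width 3: layer shapes stay unchanged
```

The trade-off the sketch exposes is the one the paper's design space explores: concatenation preserves both modalities' features at the cost of wider downstream layers, while addition is parameter-free but forces the branches into a shared representation.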

    Thermal Imaging on Smart Vehicles for Person and Road Detection: Can a Lazy Approach Work?


    Object Detection Using LiDAR and Camera Fusion in Off-road Conditions

    Since the boom in the autonomous vehicle industry, the need for precise environment perception and robust object detection methods has grown. While the state of the art in 2D object detection is making progress with approaches such as convolutional neural networks, the challenge remains in efficiently achieving the same level of performance in 3D. The reasons for this include the limitations of fusing multi-modal data and the cost of labelling different modalities for training such networks.
Whether we use a stereo camera to perceive the scene's ranging information or time-of-flight ranging sensors such as LiDAR, the existing pipelines for object detection in point clouds have bottlenecks and latency issues that tend to affect detection accuracy at real-time speeds. Moreover, these existing methods are primarily implemented and tested on urban cityscapes. This thesis presents a fusion-based approach for detecting objects in 3D by projecting proposed 2D regions of interest (object bounding boxes) or masks (semantically segmented images) onto point clouds, then applying outlier filtering techniques to isolate target object points within the projected regions of interest. Additionally, we compare it with human detection using thermal image thresholding and filtering. Lastly, we performed rigorous benchmarks in off-road environments to identify potential bottlenecks and to find a combination of pipeline parameters that maximizes the accuracy and performance of real-time object detection in 3D point clouds.
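The project-then-filter idea can be sketched in a few lines: project each camera-frame 3D point through a pinhole model, keep the points that land inside the detector's 2D box, then reject depth outliers. The intrinsics, box coordinates, and the median-depth filter below are illustrative stand-ins for the thesis's actual calibration and clustering steps.

```python
def project_point(pt, fx, fy, cx, cy):
    """Pinhole projection of a camera-frame 3D point (x right, y down, z forward)."""
    x, y, z = pt
    if z <= 0:
        return None  # behind the camera
    return (fx * x / z + cx, fy * y / z + cy)

def points_in_box(points, box, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Keep 3D points whose projection falls inside a 2D box (u1, v1, u2, v2)."""
    u1, v1, u2, v2 = box
    hits = []
    for pt in points:
        uv = project_point(pt, fx, fy, cx, cy)
        if uv and u1 <= uv[0] <= u2 and v1 <= uv[1] <= v2:
            hits.append(pt)
    return hits

def filter_depth_outliers(points, tol=1.0):
    """Drop points whose depth is far from the median depth in the box --
    a crude stand-in for the clustering-based filtering step."""
    depths = sorted(p[2] for p in points)
    median = depths[len(depths) // 2]
    return [p for p in points if abs(p[2] - median) <= tol]

cloud = [(0.0, 0.0, 5.0), (0.1, 0.0, 5.2), (0.0, 0.1, 5.1),
         (0.0, 0.0, 20.0),   # background point seen through the same box
         (5.0, 0.0, 2.0)]    # point projecting outside the box
box = (300, 220, 340, 260)   # hypothetical detector output (u1, v1, u2, v2)
roi = points_in_box(cloud, box)
obj = filter_depth_outliers(roi)
```

The background point illustrates why the filtering stage is needed at all: a 2D box is a frustum in 3D, so distant points behind the object fall inside it and must be separated by range.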

    The GOOSE Dataset for Perception in Unstructured Environments

    The potential for deploying autonomous systems can be significantly increased by improving the perception and interpretation of the environment. However, the development of deep learning-based techniques for autonomous systems in unstructured outdoor environments poses challenges due to the limited data available for training and testing. To address this gap, we present the German Outdoor and Offroad Dataset (GOOSE), a comprehensive dataset specifically designed for unstructured outdoor environments. The GOOSE dataset incorporates 10,000 labeled pairs of images and point clouds, which are used to train a range of state-of-the-art segmentation models on both image and point cloud data. We open-source the dataset, along with an ontology for unstructured terrain, as well as dataset standards and guidelines. This initiative aims to establish a common framework, enabling the seamless inclusion of existing datasets and a fast way to enhance the perception capabilities of various robots operating in unstructured environments. The dataset, pre-trained models for offroad perception, and additional documentation can be found at https://goose-dataset.de/.
    Comment: Preprint; submitted to IEEE for review