
    Planetary Rover Simulation for Lunar Exploration Missions

    When planning planetary rover missions, it is useful to develop intuition and skills for driving in, quite literally, alien environments before incurring the cost of reaching said locales. Simulators make it possible to operate in environments that have the physical characteristics of target locations without the expense and overhead of extensive physical tests. To that end, NASA Ames and Open Robotics collaborated on a Lunar rover driving simulator based on the open source Gazebo simulation platform, leveraging ROS (Robot Operating System) components. The simulator was integrated with research and mission software for rover driving, system monitoring, and science instrument simulation to constitute an end-to-end Lunar mission simulation capability. Although we expect our simulator to be applicable to arbitrary Lunar regions, we designed it around a reference mission of prospecting in polar regions. The harsh lighting and low illumination angles at the Lunar poles combine with the unique reflectance properties of Lunar regolith to present a challenging visual environment for both human and computer perception. Our simulator places an emphasis on high-fidelity visual simulation in order to produce synthetic imagery suitable for evaluating human rover drivers on navigation tasks, as well as providing test data for computer vision software development. In this paper, we describe the software used to construct the simulated Lunar environment and the components of the driving simulation. Our synthetic terrain generation software artificially increases the resolution of Lunar digital elevation maps by fractal synthesis and inserts craters and rocks based on Lunar size-frequency distribution models. We describe the necessary enhancements to import large-scale, high-resolution terrains into Gazebo, as well as our approach to modeling the visual environment of the Lunar surface.
An overview of the mission software system is provided, along with how ROS was used to emulate flight software components that had not yet been developed. Finally, we discuss the effect of using the high-fidelity synthetic Lunar images on visual odometry. We also characterize the wheel slip model and find some inconsistencies in the produced wheel slip behaviour.
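The terrain pipeline summarized above (fractal upsampling of digital elevation maps, plus crater insertion driven by size-frequency distribution models) can be illustrated with a minimal sketch. The paper does not publish its code; the midpoint-displacement scheme, the `roughness` amplitude, and the power-law exponent `b` below are generic stand-ins, not the authors' actual parameters.

```python
import numpy as np

def fractal_upsample(dem, roughness=0.5, rng=None):
    """Double the resolution of a DEM by midpoint displacement.

    New interior points are averages of their neighbours plus Gaussian
    noise, mimicking the fractal statistics of natural terrain.
    Original samples are preserved exactly.
    """
    rng = np.random.default_rng(rng)
    h, w = dem.shape
    up = np.zeros((2 * h - 1, 2 * w - 1))
    up[::2, ::2] = dem                                    # original samples
    up[1::2, ::2] = 0.5 * (dem[:-1, :] + dem[1:, :])      # vertical midpoints
    up[::2, 1::2] = 0.5 * (dem[:, :-1] + dem[:, 1:])      # horizontal midpoints
    up[1::2, 1::2] = 0.25 * (dem[:-1, :-1] + dem[1:, :-1]
                             + dem[:-1, 1:] + dem[1:, 1:])  # cell centres
    noise = rng.normal(0.0, roughness, up.shape)
    mask = np.ones(up.shape, bool)
    mask[::2, ::2] = False                                # keep originals exact
    up[mask] += noise[mask]
    return up

def sample_crater_diameters(n, d_min=1.0, b=2.0, rng=None):
    """Draw crater diameters from a power-law size-frequency
    distribution N(>D) ~ D**-b via inverse-CDF sampling."""
    rng = np.random.default_rng(rng)
    return d_min * rng.random(n) ** (-1.0 / b)
```

Repeated calls to `fractal_upsample` would add one octave of synthetic detail per level; sampled craters would then be stamped into the upsampled grid.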

    Traversability analysis in unstructured forested terrains for off-road autonomy using LIDAR data

    Scene perception and traversability analysis are real challenges for autonomous driving systems. In the context of off-road autonomy, there are additional challenges due to the unstructured environments and the existence of various vegetation types. It is necessary for Autonomous Ground Vehicles (AGVs) to be able to identify obstacles and load-bearing surfaces in the terrain to ensure safe navigation (McDaniel et al. 2012). The presence of vegetation in off-road autonomy applications presents unique challenges for scene understanding: 1) understory vegetation makes it difficult to detect obstacles or to identify load-bearing surfaces; and 2) trees are usually regarded as obstacles even though only the trunks of trees pose a collision risk in navigation. The overarching goal of this dissertation was to study traversability analysis in unstructured forested terrains for off-road autonomy using LIDAR data. More specifically, to address the aforementioned challenges, this dissertation studied the impact of understory vegetation density on the solid obstacle detection performance of off-road autonomous systems. By leveraging a physics-based autonomous driving simulator, a classification-based machine learning framework was proposed for obstacle detection based on point cloud data captured by LIDAR. Features were extracted using a cumulative approach, meaning that information related to each feature was updated at each timeframe when new data was collected by LIDAR. It was concluded that an increase in the density of understory vegetation adversely affected the classification performance in correctly detecting solid obstacles. Additionally, a regression-based framework was proposed for estimating understory vegetation density for safe path planning purposes, according to which the traversability risk level was regarded as a function of the estimated density.
Thus, the higher the predicted density of an area, the higher the risk of collision if the AGV traversed through that area. Finally, for the trees in the terrain, the dissertation investigated statistical features that can be used in machine learning algorithms to differentiate trees from solid obstacles in the context of forested off-road scenes. Using the proposed extracted features, the classification algorithm was able to generate high-precision results for differentiating trees from solid obstacles. Such differentiation can result in more optimized path planning in off-road applications.
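The cumulative feature extraction described above (statistics updated as each new LIDAR frame arrives, rather than recomputed from scratch) can be sketched roughly as follows. The grid layout, cell size, and feature choice (point count, mean height, and height variance via Welford's online update) are illustrative assumptions, not the dissertation's actual feature set.

```python
import numpy as np

class CumulativeCellFeatures:
    """Running per-grid-cell statistics over successive LIDAR frames.

    Each update folds a new frame into the per-cell count, mean height,
    and height variance (Welford's online algorithm), so features
    accumulate evidence across timeframes.
    """

    def __init__(self, grid_shape, cell_size=0.5):
        self.cell = cell_size
        self.n = np.zeros(grid_shape)     # points seen per cell
        self.mean = np.zeros(grid_shape)  # running mean height
        self.m2 = np.zeros(grid_shape)    # sum of squared deviations

    def update(self, points):
        """points: (N, 3) array of x, y, z in a non-negative grid frame."""
        ix = (points[:, 0] / self.cell).astype(int)
        iy = (points[:, 1] / self.cell).astype(int)
        for x, y, z in zip(ix, iy, points[:, 2]):
            self.n[x, y] += 1
            d = z - self.mean[x, y]
            self.mean[x, y] += d / self.n[x, y]
            self.m2[x, y] += d * (z - self.mean[x, y])

    def features(self):
        """Per-cell feature vector: (count, mean height, height variance)."""
        var = np.divide(self.m2, np.maximum(self.n - 1, 1))
        return np.stack([self.n, self.mean, var], axis=-1)
```

The resulting per-cell feature vectors would then be fed to a classifier (the dissertation's classifier choice is not specified in this abstract) to label cells as solid obstacle, vegetation, or ground.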

    Toward Human-Like Automated Driving: Learning Spacing Profiles From Human Driving Data

    For automated driving vehicles to be accepted by their users and to safely integrate with traffic involving human drivers, they need to act and behave like human drivers. This not only involves understanding how the human driver or occupant of the automated vehicle expects their vehicle to operate, but also how other road users perceive the automated vehicle’s intentions. This research aimed at learning how drivers space themselves while driving around other vehicles. It is shown that an optimized lane change maneuver produces a solution that is much different from what a human would do. There is therefore a need to learn complex driving preferences from studying human drivers. This research fills the gap in learning human driving styles by providing an example of learned behavior (vehicle spacing) and the framework needed for encapsulating the learned data. A complete framework, from problem formulation to data gathering and learning from human driving data, was formulated as part of this research. On-road vehicle data were gathered while a human driver drove a vehicle. The driver was asked to make lane changes for stationary vehicles in his path under various road curvature conditions and speeds. The gathered data, as well as Learning from Demonstration techniques, were used in formulating the spacing profile as a lane change maneuver. A concise feature set from the captured data was identified to strongly represent a driver’s spacing profile, and a model was developed. The learned model represented the driver’s spacing profile from stationary vehicles within acceptable statistical tolerance. This work provides a methodology through which human-like driving style and related parameters can be learned for many other scenarios and applied to automated vehicles.
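As a toy illustration of learning a spacing profile from demonstrations, one could pool recorded (longitudinal distance, lateral offset) pairs from several passes and fit a low-order polynomial. The abstract's actual feature set and model are richer than this; `learn_spacing_profile` and its polynomial degree are hypothetical stand-ins.

```python
import numpy as np

def learn_spacing_profile(demos, degree=3):
    """Fit a polynomial lateral-offset profile y(s) to demonstrations.

    demos: list of (s, y) array pairs, where s is longitudinal distance
    to the stationary vehicle and y the driver's lateral offset. Pooled
    least-squares fitting is one simple stand-in for a Learning-from-
    Demonstration model of the spacing profile.
    """
    s = np.concatenate([d[0] for d in demos])
    y = np.concatenate([d[1] for d in demos])
    return np.polynomial.Polynomial.fit(s, y, degree)
```

The fitted polynomial can then be evaluated at any longitudinal distance to reproduce a human-like lateral offset during the lane change.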

    Technological measures of forefront road identification for vehicle comfort and safety improvement

    This paper presents the technological measures currently being developed at institutes and vehicle research centres dealing with forefront road identification. In this case, road identification corresponds to surface irregularities and road surface type, which are evaluated by laser scanning and image analysis. Real-time adaptation, adaptation in advance, and informing of external systems are described as sequential generations of vehicle suspension and active braking systems in which road identification is significantly important. Active and semi-active suspensions, with their adaptation technologies for comfort and road holding characteristics, are analysed. Active braking systems such as the Anti-lock Braking System (ABS) and Autonomous Emergency Braking (AEB) are also considered, as they are very sensitive to the road friction state. Artificial intelligence methods based on deep learning are presented as a promising image analysis approach for classifying 12 different road surface types. In conclusion, the achieved benefit of road identification for traffic safety improvement is presented with reference to the analysed research reports and assumptions made after the initial evaluation.

    System Design, Motion Modelling and Planning for a Reconfigurable Wheeled Mobile Robot

    Over the past five decades the use of mobile robotic rovers to perform in-situ scientific investigations on the surfaces of the Moon and Mars has been tremendously influential in shaping our understanding of these extraterrestrial environments. As robotic missions have evolved there has been a greater desire to explore more unstructured terrain. This has exposed mobility limitations with conventional rover designs such as getting stuck in soft soil or simply not being able to access rugged terrain. Increased mobility and terrain traversability are key requirements when considering designs for next generation planetary rovers. Coupled with these requirements is the need to autonomously navigate unstructured terrain by taking full advantage of increased mobility. To address these issues, a high degree-of-freedom reconfigurable platform that is capable of energy intensive legged locomotion in obstacle-rich terrain as well as wheeled locomotion in benign terrain is proposed. The complexities of the planning task that considers the high degree-of-freedom state space of this platform are considerable. A variant of asymptotically optimal sampling-based planners that exploits the presence of dominant sub-spaces within a reconfigurable mobile robot's kinematic structure is proposed to increase path quality and ensure platform safety. The contributions of this thesis include: the design and implementation of a highly mobile planetary analogue rover; motion modelling of the platform to enable novel locomotion modes, along with experimental validation of each of these capabilities; the sampling-based HBFMT* planner that hierarchically considers sub-spaces to better guide search of the complete state space; and experimental validation of the planner with the physical platform that demonstrates how the planner exploits the robot's capabilities to fluidly transition between various physical geometric configurations and wheeled/legged locomotion modes.

    Machine Learning and Neutron Sensing in Mobile Nuclear Threat Detection

    A proof of concept (PoC) neutron/gamma-ray mobile threat detection system was constructed at Oak Ridge National Laboratory. This device, the Dual Detection Localization and Identification (DDLI) system, was designed to detect threat sources at standoff distance using neutron and gamma ray coded aperture imaging. A major research goal of the project was to understand the benefit of neutron sensing in the mobile threat search scenario. To this end, a series of mobile measurements were conducted with the completed DDLI PoC. These measurements indicated that high detection rates would be possible using neutron counting alone in a fully instrumented system. For a 280,000 neutrons per second Cf-252 source placed 15.9 meters away, a 4σ detection rate of 99.3% was expected at 5 m/s. These results support the conclusion that neutron sensing enhances the detection capabilities of systems like the DDLI when compared to gamma-only platforms. Advanced algorithms were also investigated to fuse neutron and gamma coded aperture images and suppress background. In a simulated 1-D coded aperture imaging study, machine learning algorithms using both neutron and gamma ray data outperformed gamma-only threshold methods for alarming on weapons-grade plutonium. In a separate study, a Random Forest classifier was trained on a source injection dataset from the Large Area Imager, a mobile gamma ray coded aperture system. Geant4 simulations of weapons-grade plutonium (WGPu) were combined with background data measured by the Large Area Imager to create nearly 4000 coded aperture images. At 30 meter standoff and 10 m/s, the Random Forest classifier was able to detect WGPu with error rates as low as 0.65% without spectroscopic information. A background subtracting filter further reduced this error rate to 0.2%. Finally, a background subtraction method based on principal component analysis was shown to improve detection by over 150% in figure of merit.
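The quoted 4σ detection rate can be understood with simple counting statistics: a dwell alarms when its counts exceed the background mean by k standard deviations. The sketch below uses a Gaussian approximation to Poisson counts; the rates and dwell time in the example are placeholders, not the DDLI's measured efficiencies or geometry.

```python
from math import erf, sqrt

def detection_probability(src_rate, bkg_rate, dwell, n_sigma=4.0):
    """Probability that counts in one dwell exceed a k-sigma threshold
    above background, using a Gaussian approximation to Poisson
    counting statistics.

    src_rate, bkg_rate: detected counts per second; dwell: seconds.
    """
    b = bkg_rate * dwell               # expected background counts
    s = src_rate * dwell               # expected source counts
    threshold = b + n_sigma * sqrt(b)  # alarm threshold
    # Total counts N ~ Normal(b + s, b + s); compute P(N > threshold).
    z = (threshold - (b + s)) / sqrt(b + s)
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))
```

With no source present this reduces to the false-alarm probability of the k-sigma test; as the source term grows relative to sqrt(background), the detection probability approaches one, which is the qualitative behaviour the abstract reports.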

    NASA Tech Briefs, September 2012

    Topics covered include: Beat-to-Beat Blood Pressure Monitor; Measurement Techniques for Clock Jitter; Lightweight, Miniature Inertial Measurement System; Optical Density Analysis of X-Rays Utilizing Calibration Tooling to Estimate Thickness of Parts; Fuel Cell/Electrochemical Cell Voltage Monitor; Anomaly Detection Techniques with Real Test Data from a Spinning Turbine Engine-Like Rotor; Measuring Air Leaks into the Vacuum Space of Large Liquid Hydrogen Tanks; Antenna Calibration and Measurement Equipment; Glass Solder Approach for Robust, Low-Loss, Fiber-to-Waveguide Coupling; Lightweight Metal Matrix Composite Segmented for Manufacturing High-Precision Mirrors; Plasma Treatment to Remove Carbon from Indium UV Filters; Telerobotics Workstation (TRWS) for Deep Space Habitats; Single-Pole Double-Throw MMIC Switches for a Microwave Radiometer; On Shaft Data Acquisition System (OSDAS); ASIC Readout Circuit Architecture for Large Geiger Photodiode Arrays; Flexible Architecture for FPGAs in Embedded Systems; Polyurea-Based Aerogel Monoliths and Composites; Resin-Impregnated Carbon Ablator: A New Ablative Material for Hyperbolic Entry Speeds; Self-Cleaning Particulate Prefilter Media; Modular, Rapid Propellant Loading System/Cryogenic Testbed; Compact, Low-Force, Low-Noise Linear Actuator; Loop Heat Pipe with Thermal Control Valve as a Variable Thermal Link; Process for Measuring Over-Center Distances; Hands-Free Transcranial Color Doppler Probe; Improving Balance Function Using Low Levels of Electrical Stimulation of the Balance Organs; Developing Physiologic Models for Emergency Medical Procedures Under Microgravity; PMA-Linked Fluorescence for Rapid Detection of Viable Bacterial Endospores; Portable Intravenous Fluid Production Device for Ground Use; Adaptation of a Filter Assembly to Assess Microbial Bioburden of Pressurant Within a Propulsion System; Multiplexed Force and Deflection Sensing Shell Membranes for Robotic Manipulators; Whispering Gallery Mode 
Optomechanical Resonator; Vision-Aided Autonomous Landing and Ingress of Micro Aerial Vehicles; Self-Sealing Wet Chemistry Cell for Field Analysis; General MACOS Interface for Modeling and Analysis for Controlled Optical Systems; Mars Technology Rover with Arm-Mounted Percussive Coring Tool, Microimager, and Sample-Handling Encapsulation Containerization Subsystem; Fault-Tolerant, Real-Time, Multi-Core Computer System; Water Detection Based on Object Reflections; SATPLOT for Analysis of SECCHI Heliospheric Imager Data; Plug-in Plan Tool v3.0.3.1; Frequency Correction for MIRO Chirp Transformation Spectroscopy Spectrum; Nonlinear Estimation Approach to Real-Time Georegistration from Aerial Images; Optimal Force Control of Vibro-Impact Systems for Autonomous Drilling Applications; Low-Cost Telemetry System for Small/Micro Satellites; Operator Interface and Control Software for the Reconfigurable Surface System Tri-ATHLETE; and Algorithms for Determining Physical Responses of Structures Under Load.

    A comprehensive survey of unmanned ground vehicle terrain traversability for unstructured environments and sensor technology insights

    This article provides a detailed analysis of the assessment of unmanned ground vehicle terrain traversability. The analysis is categorized into terrain classification, terrain mapping, and cost-based traversability, with subcategories of appearance-based, geometry-based, and mixed-based methods. The article also explores the use of machine learning (ML), deep learning (DL), reinforcement learning (RL), and other end-to-end methods as crucial components of advanced terrain traversability analysis. The investigation indicates that a mixed approach, incorporating both exteroceptive and proprioceptive sensors, is more effective, optimized, and reliable for traversability analysis. Additionally, the article discusses the vehicle platforms and sensor technologies used in traversability analysis, making it a valuable resource for researchers in the field. Overall, this paper contributes significantly to the current understanding of traversability analysis in unstructured environments and provides insights for future sensor-based research on advanced traversability analysis.

    Multimodal machine learning for intelligent mobility

    Scientific problems are solved by finding the optimal solution for a specific task. Some problems can be solved analytically, while other problems are solved using data-driven methods. The use of digital technologies to improve the transportation of people and goods, referred to as intelligent mobility, is one of the principal beneficiaries of data-driven solutions. Autonomous vehicles are at the heart of the developments that propel intelligent mobility. Due to the high dimensionality and complexity of real-world environments, it needs to become commonplace for intelligent mobility to use data-driven solutions, as it is near impossible to manually program decision-making logic for every eventuality. While recent developments in data-driven solutions such as deep learning enable machines to learn effectively from large datasets, the application of such techniques within safety-critical systems such as driverless cars remains scarce. Autonomous vehicles need to be able to make context-driven decisions autonomously in the different environments in which they operate. The recent literature on driverless vehicle research is heavily focused on road or highway environments and has discounted pedestrianized areas and indoor environments. These unstructured environments tend to have more clutter and change rapidly over time. Therefore, for intelligent mobility to make a significant impact on human life, it is vital to extend its application beyond structured environments. To further advance intelligent mobility, researchers need to take cues from multiple sensor streams and multiple machine learning algorithms so that decisions can be robust and reliable. Only then will machines indeed be able to operate safely in unstructured and dynamic environments. Towards addressing these limitations, this thesis investigates data-driven solutions for crucial building blocks of intelligent mobility.
Specifically, the thesis investigates multimodal sensor data fusion, machine learning, and multimodal deep representation learning, and their application to intelligent mobility. This work demonstrates that mobile robots can use multimodal machine learning to derive a driving policy and therefore make autonomous decisions. To facilitate the autonomous decisions necessary to derive safe driving algorithms, we present algorithms for free space detection and human activity recognition. Driving these decision-making algorithms are specific datasets collected throughout this study: the Loughborough London Autonomous Vehicle dataset and the Loughborough London Human Activity Recognition dataset. The datasets were collected using an autonomous platform designed and developed in-house as part of this research activity. The proposed framework for free space detection is based on an active learning paradigm that leverages the relative uncertainty of multimodal sensor data streams (ultrasound and camera). It utilizes an online learning methodology to continuously update the learnt model whenever the vehicle experiences new environments. The proposed free space detection algorithm enables an autonomous vehicle to self-learn, evolve, and adapt to environments never encountered before. The results illustrate that the online learning mechanism is superior to one-off training of deep neural networks, which requires large datasets to generalize to unfamiliar surroundings. The thesis takes the view that humans should be at the centre of any technological development related to artificial intelligence. This is imperative within the spectrum of intelligent mobility, where an autonomous vehicle should be aware of what humans are doing in its vicinity. Towards improving the robustness of human activity recognition, this thesis proposes a novel algorithm that classifies point cloud data originating from Light Detection and Ranging (LIDAR) sensors.
The proposed algorithm leverages multimodality by using camera data to identify humans and segment the region of interest in the point cloud data. The corresponding 3-dimensional data were converted to a Fisher Vector representation before being classified by a deep Convolutional Neural Network. The proposed algorithm classifies the indoor activities performed by a human subject with an average precision of 90.3%. When compared to an alternative point cloud classifier, PointNet[1], [2], the proposed framework outperformed it on all classes. The developed autonomous testbed for data collection and algorithm validation, as well as the multimodal data-driven solutions for driverless cars, are the major contributions of this thesis. It is anticipated that these results and the testbed will have significant implications for the future of intelligent mobility by amplifying the development of intelligent driverless vehicles.
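The camera-guided point cloud segmentation described above can be sketched as projecting LIDAR points through the camera intrinsics and keeping those that fall inside a human-detection box. The intrinsic matrix `K` and the box are assumed inputs from hypothetical upstream calibration and detection stages; the thesis's full pipeline (Fisher Vector encoding, CNN classification) is not reproduced here.

```python
import numpy as np

def points_in_box(points, K, box):
    """Select LIDAR points whose image projection lies in a 2-D box.

    points: (N, 3) x, y, z in the camera frame (z forward).
    K: 3x3 camera intrinsic matrix (assumed known from calibration).
    box: (u0, v0, u1, v1) pixel bounds of a human-detection box.
    Returns the 3-D points projecting inside the box.
    """
    pts = points[points[:, 2] > 0]            # keep points in front of camera
    uvw = pts @ K.T                           # homogeneous pixel coordinates
    u = uvw[:, 0] / uvw[:, 2]
    v = uvw[:, 1] / uvw[:, 2]
    u0, v0, u1, v1 = box
    keep = (u >= u0) & (u <= u1) & (v >= v0) & (v <= v1)
    return pts[keep]
```

The returned region of interest would then be encoded (e.g. as a Fisher Vector, per the abstract) and passed to the activity classifier.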