
    REMOTE SENSING APPROACH FOR TERRAMECHANICS APPLICATIONS UTILIZING MACHINE AND DEEP LEARNING

    Terrain traversability is critical for developing Go/No Go maps and significantly impacts a mission's success. To predict the mobility of a vehicle over a terrain, one must understand the soil characteristics. The current method of collecting this information is in situ measurements performed by soldiers in the field, which is time-consuming, yields only point measurements, and can put soldiers in harm's way. This study therefore investigates remote sensing as an alternative approach to characterizing terrain properties, exploring the relationships between electromagnetic radiation and soil types with varying properties. Optical, thermal, and hyperspectral sensors are used to collect remote data, which is compared against ground truth measurements for validation. Machine learning (linear, ridge, lasso, partial least squares, support vector machine, and k-nearest neighbors) and deep learning (multi-layer perceptron and convolutional neural network) algorithms are used to build prediction models. Results show that soil properties such as soil gradation, moisture content, and soil strength measured by a geogauge and averaged cone penetrometer over 0-6 in and 0-12 in depths (CP06 and CP12) can be estimated remotely. Deep learning provides better models for estimating terrain characteristics than machine learning. The method produces much finer spatial resolution coverage than traditional geospatial point-based interpolation approaches and yields higher prediction accuracy. The prediction maps can be used to generate threshold-based Go/No Go maps using a vehicle cone index or as cost maps for vehicle performance. A Polaris MRZR vehicle was used to test the application of these prediction maps for mobility purposes, and correlations were observed between CP06 and rear wheel slip and between CP12 and vehicle speed. This study demonstrates the potential of remote sensing data, combined with machine and deep learning algorithms, for more rapid and finer spatial resolution predictions of terrain properties with higher accuracy than traditional in situ mapping methods. The remote sensing approach allows the generation of Go/No Go and vehicle cost maps and, most importantly, provides a safe alternative that keeps soldiers out of harm's way.
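
    The workflow implied by this abstract (per-pixel spectral features regressed against ground-truth soil measurements) can be sketched with off-the-shelf tools. The sketch below fits several of the machine learning regressors named in the abstract to synthetic stand-in data; the feature layout, hyperparameters, and data are illustrative assumptions, not the study's actual pipeline.

```python
# Minimal sketch (not the study's code): fit the regression models named in the
# abstract to per-pixel spectral features to predict a soil-strength target such
# as CP06. The synthetic data and feature layout are assumptions.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
X = rng.random((500, 8))                                   # stand-in for optical/thermal/hyperspectral band values
y = X @ rng.random(8) + 0.1 * rng.standard_normal(500)     # stand-in for ground-truth CP06 readings

models = {
    "linear": LinearRegression(),
    "ridge": Ridge(alpha=1.0),
    "lasso": Lasso(alpha=0.01),
    "pls": PLSRegression(n_components=4),
    "svm": SVR(kernel="rbf", C=10.0),
    "knn": KNeighborsRegressor(n_neighbors=5),
}

for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), model)          # scale bands, then regress
    r2 = cross_val_score(pipe, X, y, cv=5, scoring="r2").mean()
    print(f"{name:6s}  mean CV R^2 = {r2:.3f}")
```

    In practice the fitted model would be applied to every pixel of the co-registered imagery, and the resulting prediction map thresholded against a vehicle cone index to produce the Go/No Go map described above.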

    Learning Ground Traversability from Simulations

    Mobile ground robots operating on unstructured terrain must predict which areas of the environment they are able to pass in order to plan feasible paths. We address traversability estimation as a heightmap classification problem: we build a convolutional neural network that, given an image representing the heightmap of a terrain patch, predicts whether the robot will be able to traverse that patch from left to right. The classifier is trained for a specific robot model (wheeled, tracked, legged, snake-like) using simulation data on procedurally generated training terrains; the trained classifier can be applied to unseen large heightmaps to yield oriented traversability maps, which are then used to plan traversable paths. We extensively evaluate the approach in simulation on six real-world elevation datasets, and run a real-robot validation in one indoor and one outdoor environment.
    Webpage: http://romarcg.xyz/traversability_estimation
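
    A minimal PyTorch sketch of the idea, for orientation only: a small CNN that takes a fixed-size heightmap patch and outputs a traversable/not-traversable logit. The architecture, patch size, and training details are assumptions, not the authors' network.

```python
# Hedged sketch: heightmap patch in, left-to-right traversability logit out.
import torch
import torch.nn as nn

class TraversabilityCNN(nn.Module):
    def __init__(self, patch_size: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (patch_size // 4) ** 2, 64), nn.ReLU(),
            nn.Linear(64, 1),                     # logit: traversable vs. not
        )

    def forward(self, heightmap_patch):           # (B, 1, H, W) heightmap patches
        return self.classifier(self.features(heightmap_patch))

model = TraversabilityCNN()
patches = torch.randn(8, 1, 64, 64)               # stand-in for simulated terrain patches
labels = torch.randint(0, 2, (8, 1)).float()      # 1 = robot crossed the patch in simulation
loss = nn.BCEWithLogitsLoss()(model(patches), labels)
loss.backward()                                   # one supervised step (optimizer omitted)
print(loss.item())
```

    Oriented traversability maps can then be obtained by evaluating such a classifier on patches extracted at multiple rotations of the full heightmap.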

    Featureless visual processing for SLAM in changing outdoor environments

    Vision-based SLAM is mostly a solved problem provided clear, sharp images can be obtained. However, in outdoor environments a number of factors such as rough terrain, high speeds, and hardware limitations can result in these conditions not being met. High-speed transit over rough terrain can lead to image blur and under/over exposure, problems that cannot easily be dealt with using low-cost hardware. Furthermore, there has recently been growing interest in lifelong autonomy for robots, which in outdoor environments brings the challenge of dealing with a moving sun and the lack of constant artificial lighting. In this paper, we present a lightweight approach to visual localization and visual odometry that addresses the challenges posed by perceptual change and low-cost cameras. The approach combines low-resolution imagery with the SLAM algorithm RatSLAM. We test the system using a cheap consumer camera mounted on a small vehicle in a mixed urban and vegetated environment, at times ranging from dawn to dusk and in conditions ranging from sunny weather to rain. We first show that the system is able to provide reliable mapping and recall over the course of the day and to incrementally incorporate new visual scenes from different times into an existing map. We then restrict the system to learning visual scenes at only one time of day, and show that it is still able to localize and map at other times of day. The results demonstrate the viability of the approach in situations where image quality is poor and environmental or hardware factors preclude the use of visual features.
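
    The featureless front end described here relies on comparing heavily downsampled, normalised images as whole templates instead of extracting visual features. The following sketch illustrates that general idea; the resolution, shift range, and scoring are illustrative assumptions and not the exact RatSLAM configuration used in the paper.

```python
# Hedged sketch: low-resolution whole-image template matching for place recognition.
import numpy as np

def to_template(image: np.ndarray, size=(16, 32)) -> np.ndarray:
    """Downsample a grayscale image to a tiny, normalised template."""
    h, w = image.shape
    ys = np.linspace(0, h - 1, size[0]).astype(int)
    xs = np.linspace(0, w - 1, size[1]).astype(int)
    small = image[np.ix_(ys, xs)].astype(float)
    return (small - small.mean()) / (small.std() + 1e-6)   # tolerate exposure changes

def template_difference(a: np.ndarray, b: np.ndarray, max_shift: int = 4) -> float:
    """Smallest mean absolute difference over small horizontal shifts."""
    best = np.inf
    for s in range(-max_shift, max_shift + 1):
        ac = a[:, max(0, s):a.shape[1] + min(0, s)]
        bc = b[:, max(0, -s):b.shape[1] + min(0, -s)]
        best = min(best, np.mean(np.abs(ac - bc)))
    return best

# Usage: a new camera frame is matched against stored templates; a low difference
# suggests a previously visited place (a loop-closure candidate for the SLAM back end).
frame_a = np.random.rand(120, 160)
frame_b = frame_a + 0.05 * np.random.rand(120, 160)         # same place, slight change
print(template_difference(to_template(frame_a), to_template(frame_b)))
```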

    A Neural Network Method for Efficient Vegetation Mapping

    This paper describes the application of a neural network method designed to improve the efficiency of map production from remote sensing data. Specifically, the ARTMAP neural network produces vegetation maps of the Sierra National Forest, in Northern California, using Landsat Thematic Mapper (TM) data. In addition to spectral values, the data set includes terrain and location information for each pixel. The maps produced by ARTMAP are of comparable accuracy to maps produced by a currently used method, which requires expert knowledge of the area as well as extensive manual editing. In fact, once field observations of vegetation classes had been collected for selected sites, ARTMAP took only a few hours to accomplish a mapping task that had previously taken many months. The ARTMAP network features fast on-line learning, so the system can be updated incrementally when new field observations arrive, without the need for retraining on the entire data set. In addition to maps that identify lifeform and Calveg species, ARTMAP produces confidence maps, which indicate where errors are most likely to occur and which can therefore be used to guide map editing.
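
    The workflow here (per-pixel classification from spectral plus terrain features, incremental updates as field observations arrive, and a confidence map to guide editing) can be illustrated with a generic incremental classifier. The sketch below is a stand-in for that workflow, not an ARTMAP implementation; the feature layout and class labels are assumptions.

```python
# Illustrative stand-in (not ARTMAP): per-pixel classification with incremental
# updates and a per-pixel confidence map.
import numpy as np
from sklearn.naive_bayes import GaussianNB

classes = np.array([0, 1, 2])                 # hypothetical vegetation classes
clf = GaussianNB()

rng = np.random.default_rng(1)
X_field = rng.random((300, 9))                # assumed: 6 TM bands + elevation, slope, aspect
y_field = rng.integers(0, 3, 300)             # labels from field observations
clf.partial_fit(X_field, y_field, classes=classes)        # initial training

# New field observations can be folded in without retraining on the entire data set.
clf.partial_fit(rng.random((20, 9)), rng.integers(0, 3, 20))

# Per-pixel prediction over a whole scene, plus a confidence map to guide editing.
scene = rng.random((100, 100, 9))             # stand-in for a co-registered raster stack
proba = clf.predict_proba(scene.reshape(-1, 9))
vegetation_map = proba.argmax(axis=1).reshape(100, 100)
confidence_map = proba.max(axis=1).reshape(100, 100)      # low values flag likely errors
print(vegetation_map.shape, confidence_map.min(), confidence_map.max())
```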

    Viability of commercial depth sensors for the REX medical exoskeleton : a thesis presented in partial fulfilment of the requirements for the degree of Master of Engineering in Mechatronics at Massey University, Albany, New Zealand

    Closing the feedback loop of machine control is a known method for gaining stability, and medical exoskeletons are no exception. It is proposed that, through machine vision, their stability control can be enhanced in a commercially viable manner. Using machines to enhance human capabilities has been pursued since the 19th century, with a range of successful demonstrations since then such as the REX platform. Machine vision has progressed in parallel, and while related applications have been researched, using computer vision for traversability analysis in medical exoskeletons still leaves many questions unanswered. This work attempts to better understand this field, in particular the commercial viability of machine vision systems for enhancing medical exoskeletons, with implementation as the key method for determining this. A system is designed that considers the constraints of working with a commercial product, demonstrating integration into an existing system without significant alterations. A stereo vision system is used to gather depth information from the surroundings and amalgamate it over time. The amalgamation process relies on tracking movement to provide accurate transforms between time-frames in the three-dimensional world. Visual odometry and ground plane detection are employed to achieve this, enabling the creation of digital elevation maps that efficiently capture and present information about the surroundings. Further simplification of this information is accomplished by creating traversability maps that directly relate the terrain to whether the REX device can safely navigate that location. Ultimately a link is formed between the REX device and these maps, enabling user movement commands to be intercepted. Once intercepted, a binary decision is computed as to whether that movement will traverse safe terrain. If the command is deemed unsafe (for example, stepping backwards off a ledge), it is not permitted, increasing patient safety. Results suggest that this end-to-end demonstration is capable of improving patient safety; however, considerable future work and considerations are discussed. The underlying data quality provided by the stereo sensor is questioned, and the limitations of macro versus micro applicability to the REX are identified. That is, the work presented is capable of operating at a macro level, but in its current state lacks the finer detail needed to considerably improve patient safety when operating a REX medical exoskeleton.
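
    Two pieces of this pipeline, turning a digital elevation map into a binary traversability map and gating an intercepted movement command against it, can be sketched briefly. The thresholds, cell size, and command interface below are illustrative assumptions, not the REX integration itself.

```python
# Hedged sketch: DEM -> binary traversability map -> gate for intercepted commands.
import numpy as np

MAX_STEP_M = 0.10           # assumed maximum height change the exoskeleton can handle

def traversability_map(dem: np.ndarray) -> np.ndarray:
    """True where the local height change between neighbouring cells is acceptable."""
    dzdx = np.abs(np.diff(dem, axis=1, prepend=dem[:, :1]))
    dzdy = np.abs(np.diff(dem, axis=0, prepend=dem[:1, :]))
    return np.maximum(dzdx, dzdy) <= MAX_STEP_M

def gate_command(trav: np.ndarray, position: tuple, step: tuple) -> bool:
    """Binary decision: allow an intercepted movement command only onto safe cells."""
    r, c = position[0] + step[0], position[1] + step[1]
    inside = 0 <= r < trav.shape[0] and 0 <= c < trav.shape[1]
    return bool(inside and trav[r, c])

dem = np.zeros((40, 40))
dem[:, 25:] = -0.5                              # a ledge: stepping onto it should be blocked
trav = traversability_map(dem)
print(gate_command(trav, (20, 20), (0, 1)))     # safe step on flat ground -> True
print(gate_command(trav, (20, 24), (0, 1)))     # step off the ledge -> False
```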