
    POLAR3D: Augmenting NASA's POLAR Dataset for Data-Driven Lunar Perception and Rover Simulation

    We report on an effort that led to POLAR3D, a set of digital assets that enhance the POLAR dataset of stereo images generated by NASA to mimic lunar lighting conditions. Our contributions are twofold. First, we have annotated each photo in the POLAR dataset, providing approximately 23,000 labels for rocks and their shadows. Second, we digitized several lunar terrain scenarios available in the POLAR dataset. Specifically, by utilizing both the lunar photos and the POLAR LiDAR point clouds, we constructed detailed obj files for all identifiable assets. POLAR3D is the set of digital assets comprising the rock/shadow labels and the obj files associated with the digital twins of lunar terrain scenarios. This new dataset can be used for training perception algorithms for lunar exploration and for synthesizing photorealistic images beyond the original POLAR collection. Likewise, the obj assets can be integrated into simulation environments to facilitate realistic rover operations in a digital twin of a POLAR scenario. POLAR3D is publicly available at https://github.com/uwsbel/POLAR-digital to aid perception algorithm development, camera simulation efforts, and lunar simulation exercises. Comment: 7 pages, 4 figures; this work has been submitted to the 2024 IEEE Conference on Robotics and Automation (ICRA) and is under review
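A minimal sketch of how such assets might be consumed downstream; the file paths and the JSON label schema below are assumptions made for illustration, not the repository's documented layout (see https://github.com/uwsbel/POLAR-digital for that):

```python
# Hypothetical POLAR3D asset loading; paths and label schema are assumptions.
import json
import trimesh  # pip install trimesh

# Load a digital-twin mesh of one terrain asset (hypothetical path).
rock_mesh = trimesh.load("polar3d/scenario_01/rock_003.obj")
print(f"vertices: {len(rock_mesh.vertices)}, faces: {len(rock_mesh.faces)}")

# Load rock/shadow annotations for one stereo photo. Assumed schema:
# a list of {"class": "rock" | "shadow", "bbox": [x, y, w, h]} records.
with open("polar3d/scenario_01/left_000_labels.json") as f:
    labels = json.load(f)
rocks = [l for l in labels if l["class"] == "rock"]
print(f"{len(rocks)} rock labels in this frame")
```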

    Icy Moon Surface Simulation and Stereo Depth Estimation for Sampling Autonomy

    Sampling autonomy for icy moon lander missions requires an understanding of the topographic and photometric properties of the sampling terrain. The unavailability of high-resolution visual datasets (either bird's-eye view or point-of-view from a lander) is an obstacle to the selection, verification, or development of perception systems. We attempt to alleviate this problem by: 1) proposing the Graphical Utility for Icy moon Surface Simulations (GUISS) framework for versatile stereo dataset generation that spans the spectrum of bulk photometric properties, and 2) focusing on a stereo-based visual perception system and evaluating both traditional and deep learning-based algorithms for depth estimation from stereo matching. The surface reflectance properties of icy moon terrains (Enceladus and Europa) are inferred from multispectral datasets of previous missions. With procedural terrain generation and physically valid illumination sources, our framework can fit a wide range of hypotheses with respect to visual representations of icy moon terrains. This is followed by a study of the performance of stereo matching algorithms under different visual hypotheses. Finally, we emphasize the outstanding challenges to be addressed in simulating perception data assets for icy moons such as Enceladus and Europa. Our code can be found here: https://github.com/nasa-jpl/guiss. Comment: Software: https://github.com/nasa-jpl/guiss. IEEE Aerospace Conference 2024
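For context, a minimal sketch of the kind of traditional stereo pipeline such a study evaluates: semi-global block matching (OpenCV's StereoSGBM) on a rectified pair, followed by the standard disparity-to-depth conversion. The image paths and camera parameters are placeholders, not GUISS outputs:

```python
# Classic stereo matching sketch; inputs and calibration are placeholders.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,   # must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,         # smoothness penalty for small disparity changes
    P2=32 * 5 * 5,        # smoothness penalty for large disparity changes
    uniquenessRatio=10,
)
# compute() returns fixed-point disparities scaled by 16.
disparity = sgbm.compute(left, right).astype(np.float32) / 16.0

# Depth from disparity for a rectified pair: Z = f * B / d, with focal
# length f (pixels) and baseline B (metres) as placeholder values.
f_px, baseline_m = 700.0, 0.12
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = f_px * baseline_m / disparity[valid]
```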

    Hyper-Spectral Image Analysis with Partially-Latent Regression and Spatial Markov Dependencies

    Hyper-spectral data can be analyzed to recover physical properties at large planetary scales. This involves resolving inverse problems, which can be addressed within machine learning, with the advantage that, once a relationship between physical parameters and spectra has been established in a data-driven fashion, the learned relationship can be used to estimate physical parameters for new hyper-spectral observations. Within this framework, we propose a spatially-constrained and partially-latent regression method which maps high-dimensional inputs (hyper-spectral images) onto low-dimensional responses (physical parameters such as the local chemical composition of the soil). The proposed regression model comprises two key features. Firstly, it combines a Gaussian mixture of locally-linear mappings (GLLiM) with a partially-latent response model. While the former makes high-dimensional regression tractable, the latter makes it possible to deal with physical parameters that cannot be observed or, more generally, with data contaminated by experimental artifacts that cannot be explained with noise models. Secondly, spatial constraints are introduced in the model through a Markov random field (MRF) prior which provides a spatial structure to the Gaussian-mixture hidden variables. Experiments conducted on a database composed of remotely sensed observations collected from planet Mars by the Mars Express orbiter demonstrate the effectiveness of the proposed model. Comment: 12 pages, 4 figures, 3 tables
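For readers unfamiliar with GLLiM, the following equations sketch the generative model as it is usually written in the GLLiM literature (notation ours, not copied from the paper): a hidden label Z = k selects a locally affine mapping from the low-dimensional variable x to the high-dimensional observation y, and the partially-latent variant splits x into an observed part t and a latent part w.

```latex
% GLLiM generative model (sketch; notation ours). K mixture components,
% low-dimensional x, high-dimensional y, hidden assignment Z:
\[
  p(\mathbf{y} \mid \mathbf{x}, Z = k) = \mathcal{N}\!\big(\mathbf{y};\, \mathbf{A}_k \mathbf{x} + \mathbf{b}_k,\, \boldsymbol{\Sigma}_k\big), \quad
  p(\mathbf{x} \mid Z = k) = \mathcal{N}\!\big(\mathbf{x};\, \mathbf{c}_k,\, \boldsymbol{\Gamma}_k\big), \quad
  p(Z = k) = \pi_k .
\]
% Partially-latent response: x = [t; w] with t observed and w latent, so
% A_k x = A_k^t t + A_k^w w, and w absorbs unobserved physical factors.
% Prediction uses the inverse conditional, itself a Gaussian mixture:
\[
  \mathbb{E}[\mathbf{x} \mid \mathbf{y}] = \sum_{k=1}^{K} \nu_k(\mathbf{y}) \big( \mathbf{A}^{*}_k \mathbf{y} + \mathbf{b}^{*}_k \big),
\]
% with posterior weights \nu_k(y) and (A*_k, b*_k) obtained from the forward
% parameters by Gaussian conditioning. The paper's MRF prior couples the
% labels Z of neighbouring pixels instead of treating them independently.
```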

    DribbleBot: Dynamic Legged Manipulation in the Wild

    DribbleBot (Dexterous Ball Manipulation with a Legged Robot) is a legged robotic system that can dribble a soccer ball under the same real-world conditions as humans (i.e., in-the-wild). We adopt the paradigm of training policies in simulation using reinforcement learning and transferring them into the real world. We overcome critical challenges of accounting for variable ball motion dynamics on different terrains and perceiving the ball using body-mounted cameras under the constraints of onboard computing. Our results provide evidence that current quadruped platforms are well-suited for studying dynamic whole-body control problems involving simultaneous locomotion and manipulation directly from sensory observations. Comment: To appear at the IEEE Conference on Robotics and Automation (ICRA), 2023. Video is available at https://gmargo11.github.io/dribblebot
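In the sim-to-real paradigm the abstract describes, variability in ball and terrain dynamics is typically handled by resampling dynamics parameters each episode so the learned policy must adapt. The sketch below is purely illustrative; the `env.set_param` interface and the parameter ranges are invented, not DribbleBot's actual configuration:

```python
# Illustrative per-episode domain randomization of ball/terrain dynamics.
# The environment interface and ranges are assumptions for illustration.
import random

def randomize_episode(env) -> None:
    """Resample dynamics parameters so the policy learns to adapt."""
    env.set_param("ball_mass", random.uniform(0.3, 0.6))            # kg
    env.set_param("ball_rolling_friction", random.uniform(0.01, 0.3))
    env.set_param("ball_drag_coeff", random.uniform(0.0, 0.6))      # terrain-dependent
    env.set_param("ground_friction", random.uniform(0.4, 1.2))
```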

    Autonomous navigation of a wheeled mobile robot in farm settings

    This research is mainly about the autonomous navigation of an agricultural wheeled mobile robot in an unstructured outdoor setting. The project has four distinct phases: (i) navigation and control of a wheeled mobile robot for point-to-point motion; (ii) navigation and control of a wheeled mobile robot in following a given path (the path-following problem); (iii) navigation and control of a mobile robot while keeping a constant proximity distance to given paths or plant rows (proximity-following); and (iv) navigation of the mobile robot in rut following in farm fields. A rut is a long, deep track formed by the repeated passage of wheeled vehicles over soft terrain such as mud, sand, and snow. To develop reliable navigation approaches for each part of this project, three main steps are accomplished: a literature review, modeling and computer simulation of wheeled mobile robots, and actual experimental tests in outdoor settings. First, point-to-point motion planning of a mobile robot is studied; a fuzzy-logic-based (FLB) approach is proposed for real-time autonomous path planning of the robot in an unstructured environment. Simulation and experimental evaluations show that the FLB approach is able to cope with different dynamic and unforeseen situations by tuning a safety margin. Comparison of the FLB results with the vector field histogram (VFH) and preference-based fuzzy (PBF) approaches reveals that the FLB approach produces shorter and smoother paths toward the goal in almost all of the test cases examined. Then, a novel human-inspired method (HIM) is introduced. HIM is inspired by human behavior in navigating from one point to a specified goal point: it gives the robot a human-like ability to reason about situations in order to reach a predefined goal point while avoiding static, moving, and unforeseen obstacles. Comparison of the HIM results with FLB suggests that HIM is more efficient and effective than FLB. Afterward, navigation strategies are built up for path-following, rut-following, and proximity-following control of a wheeled mobile robot in outdoor (farm) settings and off-road terrains. The proposed system is composed of several modules: sensor data analysis, obstacle detection, obstacle avoidance, goal seeking, and path tracking. The capabilities of the proposed navigation strategies are evaluated in a variety of field experiments; the results show that the proposed approach is able to detect and follow rows of bushes robustly. This capability is used for spraying plant rows in farm fields. Finally, obstacle detection and obstacle avoidance modules are developed for the navigation system. These modules enable the robot to detect, in real time and at a safe distance, both holes or ground depressions (negative obstacles), which are inherent parts of farm settings, and above-ground obstacles (positive obstacles). Experimental tests are carried out on two mobile robots (PowerBot and Grizzly) in outdoor and real farm fields. Grizzly utilizes a 3D laser range-finder to detect objects and perceive the environment, and an RTK-DGPS unit for localization. PowerBot uses sonar sensors and a laser range-finder for obstacle detection. The experiments demonstrate the capability of the proposed technique in successfully detecting and avoiding different types of obstacles, both positive and negative, in a variety of scenarios.
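A minimal sketch of a fuzzy-logic-based steering controller in the spirit of the FLB approach: two fuzzified inputs (goal bearing, nearest-obstacle distance) and a defuzzified steering output. The membership functions, rules, and damping constant are invented for illustration; the thesis's actual rule base is not reproduced here:

```python
# Toy FLB steering controller; memberships and rules are assumptions.
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def flb_steering(goal_bearing_deg: float, obstacle_dist_m: float) -> float:
    # Fuzzify inputs.
    goal_left = tri(goal_bearing_deg, 0, 45, 90)
    goal_ahead = tri(goal_bearing_deg, -30, 0, 30)
    goal_right = tri(goal_bearing_deg, -90, -45, 0)
    obstacle_near = tri(obstacle_dist_m, 0.0, 0.5, 2.0)

    # Rule intent: steer toward the goal, but damp the command near
    # obstacles (a crude stand-in for the thesis's tunable safety margin).
    damping = 1.0 - 0.8 * obstacle_near
    # Weighted-average (centroid-like) defuzzification over singleton outputs.
    num = damping * (goal_left * 30.0 + goal_ahead * 0.0 + goal_right * -30.0)
    den = goal_left + goal_ahead + goal_right or 1.0
    return num / den  # steering rate, deg/s

print(flb_steering(goal_bearing_deg=40.0, obstacle_dist_m=0.8))
```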

    Automatic expert system based on images for accuracy crop row detection in maize fields

    This paper proposes an automatic expert system for accurate crop row detection in maize fields based on images acquired from a vision system. Different applications in maize, particularly those based on site-specific treatments, require the identification of the crop rows. The vision system is designed with a defined geometry and installed onboard a mobile agricultural vehicle, i.e., one subject to vibrations, gyrations, and uncontrolled movements. Crop rows can be estimated by applying geometrical parameters under image perspective projection. Because of the undesired effects above, the estimate most often turns out to be inaccurate compared to the real crop rows. The proposed expert system exploits human knowledge, which is mapped into two modules based on image processing techniques. The first is intended for separating green plants (crops and weeds) from the rest (soil, stones and others). The second is based on the system geometry: the expected crop lines are mapped onto the image and then a correction is applied through the well-tested and robust Theil–Sen estimator in order to adjust them to the real ones. Its performance compares favorably against the classical Pearson product–moment correlation coefficient.
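The Theil–Sen estimator at the heart of the correction step is simple enough to sketch: the slope is the median of all pairwise slopes, which makes the line fit robust to mis-segmented pixels (weeds, gaps). A minimal numpy version, with invented example data:

```python
# Robust Theil-Sen line fit; the example pixel data is invented.
import numpy as np

def theil_sen(x: np.ndarray, y: np.ndarray) -> tuple[float, float]:
    """Return (slope, intercept) of a robust line through (x, y) points."""
    i, j = np.triu_indices(len(x), k=1)   # all point pairs
    dx = x[j] - x[i]
    keep = dx != 0                        # skip vertical pairs
    slope = float(np.median((y[j] - y[i])[keep] / dx[keep]))
    intercept = float(np.median(y - slope * x))
    return slope, intercept

# Pixel coordinates of segmented "green" points along one crop row,
# with one outlier; the median-based fit shrugs the outlier off.
x = np.array([10, 20, 30, 40, 50, 60], dtype=float)
y = np.array([12, 22, 33, 41, 90, 61], dtype=float)  # 90 is an outlier
print(theil_sen(x, y))
```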

    O2SAT: Object-Oriented-Segmentation-Guided Spatial-Attention Network for 3D Object Detection in Autonomous Vehicles

    Autonomous vehicles (AVs) strive to adapt to the specific characteristics of sustainable urban environments. Accurate 3D object detection with LiDAR is paramount for autonomous driving. However, existing research predominantly relies on the 3D object-based assumption, which overlooks the complexity of real-world road environments. Consequently, current methods suffer performance degradation when they target only local features and overlook the intersection of objects and road features, especially in uneven road conditions. This study proposes a 3D Object-Oriented-Segmentation Spatial-Attention (O2SAT) approach to distinguish object points from road points and to enhance keypoint feature learning through a channel-wise spatial attention mechanism. O2SAT consists of three modules: Object-Oriented Segmentation (OOS), Spatial-Attention Feature Reweighting (SFR), and a Road-Aware 3D Detection Head (R3D). OOS distinguishes object and road points and performs object-aware downsampling to augment the data by learning to identify the hidden connection between landscape and object; SFR performs weight augmentation to learn crucial neighboring relationships and dynamically adjusts feature weights through spatial attention mechanisms, which enhances long-range interactions and contextual feature discrimination for noise suppression, improving overall detection performance; and R3D utilizes the refined object segmentation and optimized feature representations. Our system forecasts prediction confidence into existing point backbones. Our method's effectiveness and robustness have been demonstrated through extensive experiments on diverse datasets (KITTI). The proposed modules integrate seamlessly into existing point-based frameworks, following a plug-and-play approach.
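A minimal PyTorch sketch of a channel-wise spatial attention block over point features, in the spirit of the SFR module described above. This is an illustrative stand-in, not the authors' published implementation; the module name, reduction ratio, and tensor layout are assumptions:

```python
# Illustrative channel-wise + per-point attention over point features.
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Channel gate: squeeze over points, excite per channel.
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )
        # Spatial gate: one weight per point from its feature vector.
        self.point_score = nn.Sequential(nn.Linear(channels, 1), nn.Sigmoid())

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, num_points, channels)
        channel_w = self.channel_mlp(feats.mean(dim=1))   # (B, C)
        feats = feats * channel_w.unsqueeze(1)            # reweight channels
        point_w = self.point_score(feats)                 # (B, N, 1)
        return feats * point_w                            # reweight points

x = torch.randn(2, 1024, 64)                 # e.g. 1024 points, 64-dim features
print(ChannelSpatialAttention(64)(x).shape)  # torch.Size([2, 1024, 64])
```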

    Comparison of 3D scan matching techniques for autonomous robot navigation in urban and agricultural environments

    Global navigation satellite systems (GNSS) are the standard solution to the localization problem in outdoor environments, but the signal may be lost when driving in dense urban areas or in the presence of heavy vegetation or overhanging canopies. Hence, there is a need for alternative or complementary localization methods for autonomous driving. In recent years, exteroceptive sensors have gained much attention due to significant improvements in accuracy and cost-effectiveness, especially for 3D range sensors. By registering two successive 3D scans, a process known as scan matching, it is possible to estimate the pose of a vehicle. This work provides an in-depth analysis and comparison of state-of-the-art 3D scan matching approaches as a solution to the localization problem of autonomous vehicles. Eight techniques (deterministic and probabilistic) are investigated: the iterative closest point (in three different embodiments), the normal distribution transform, coherent point drift, the Gaussian mixture model, the support-vector-parametrized Gaussian mixture, and a particle filter implementation. They are demonstrated in long path trials in both urban and agricultural environments and compared in terms of accuracy and consistency. On the one hand, most of the techniques can be successfully used in urban scenarios, with the probabilistic approaches showing the best accuracy. On the other hand, agricultural settings have proved to be more challenging, with significant errors even in short-distance trials due to the presence of featureless natural objects. The results and discussion of this work provide a guide for selecting the most suitable method and should encourage improvements that address the identified limitations. This research has been partially funded by the National Agency of Research and Development (ANID, ex-Conicyt) under Fondecyt grant 1201319 and Basal grant FB0008, DGIIP-UTFSM Chile, ANID/PCHA/Doctorado Nacional/2020-21200700, the Secretaria d’Universitats i Recerca del Departament d’Empresa i Coneixement de la Generalitat de Catalunya (grant 2017 SGR 646), and the Spanish Ministry of Science, Innovation and Universities (project RTI2018-094222-B-I00). The Spanish Ministry of Education is thanked for Mr. J. Gene's pre-doctoral fellowship (FPU15/03355). We would also like to thank Nufri (especially Santiago Salamero and Oriol Morreres) for their support during data acquisition.
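To make the simplest of the eight techniques concrete, here is a minimal point-to-point ICP sketch: alternate nearest-neighbour association with a closed-form SVD (Kabsch) alignment. It is simplified for illustration; production variants add outlier rejection, convergence checks, and point-to-plane metrics:

```python
# Minimal point-to-point ICP; simplified, no outlier rejection.
import numpy as np
from scipy.spatial import cKDTree

def icp(source: np.ndarray, target: np.ndarray, iters: int = 30):
    """Align source (N,3) to target (M,3); return accumulated (R, t)."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(iters):
        _, idx = tree.query(src)                  # nearest neighbours
        matched = target[idx]
        mu_s, mu_m = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_m)     # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T                   # proper rotation (det = +1)
        t_step = mu_m - R_step @ mu_s
        src = src @ R_step.T + t_step             # apply incremental pose
        R, t = R_step @ R, R_step @ t + t_step    # accumulate pose
    return R, t
```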

    3D Vision-based Perception and Modelling techniques for Intelligent Ground Vehicles

    In this work the candidate proposes an innovative real-time stereo vision system for intelligent/autonomous ground vehicles, able to provide a full and reliable 3D reconstruction of the terrain and the obstacles. The terrain is computed using rational B-spline surfaces obtained by iteratively re-weighted least-squares fitting and equalization. The cloud of 3D points generated by processing the Disparity Space Image (DSI) is sampled into a 2.5D grid map; grid points are then iteratively fitted with rational B-spline surfaces with different patterns of control points and degrees, depending on traversability considerations. The obtained surface also represents a segmentation of the initial 3D points into terrain inliers and outliers. As a final contribution, a new obstacle detection approach is presented, combined with the terrain estimation system, in order to model stationary and moving objects in the most challenging scenarios.
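A minimal sketch of the iteratively re-weighted least-squares idea behind such terrain fitting: fit a smoothing surface to the 2.5D points, down-weight large residuals (likely obstacles), and refit. scipy's smoothing bivariate spline stands in here for the thesis's rational B-spline surfaces, and the Tukey cutoff is an invented parameter:

```python
# IRLS terrain surface fit with a Tukey biweight; a stand-in sketch,
# not the thesis's rational B-spline implementation.
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

def fit_terrain(x, y, z, iters: int = 5, c: float = 0.3):
    """Return (spline, inlier_mask) from scattered 2.5D points."""
    w = np.ones_like(z)
    for _ in range(iters):
        spline = SmoothBivariateSpline(x, y, z, w=w, kx=3, ky=3)
        r = z - spline.ev(x, y)               # residuals to current surface
        u = np.clip(np.abs(r) / c, 0.0, 1.0)
        w = (1.0 - u**2) ** 2                 # Tukey biweight: outliers -> 0
        w = np.maximum(w, 1e-6)               # keep weights strictly positive
    inliers = np.abs(z - spline.ev(x, y)) < c # terrain vs obstacle points
    return spline, inliers
```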