134 research outputs found

    Hyper-Drive: Visible-Short Wave Infrared Hyperspectral Imaging Datasets for Robots in Unstructured Environments

    Full text link
    Hyperspectral sensors have enjoyed widespread use in remote sensing; however, they must be adapted to a format in which they can be operated onboard mobile robots. In this work, we introduce a first-of-its-kind system architecture with snapshot hyperspectral cameras and point spectrometers to efficiently generate composite datacubes from a robotic base. Our system collects and registers datacubes spanning the visible to shortwave infrared (660-1700 nm) spectrum while simultaneously capturing the ambient solar spectrum reflected off a white reference tile. We collect and disseminate a large dataset of more than 500 labeled datacubes from on-road and off-road terrain, compliant with the ATLAS ontology, to further the integration of hyperspectral imaging (HSI) and demonstrate its benefit for terrain class separability. Our analysis of this data demonstrates that HSI offers a significant opportunity to improve understanding of scene composition from a robot-centric perspective. All code and data are open source online: https://river-lab.github.io/hyper_drive_dat
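
    As an illustration of the kind of processing such a dataset supports, the sketch below converts a raw datacube to approximate reflectance using the simultaneously recorded white-reference spectrum. This is a minimal sketch under assumed array shapes and names, not the dataset's actual API.

```python
import numpy as np

def radiance_to_reflectance(datacube, white_ref, dark=None, eps=1e-8):
    """Convert a raw hyperspectral datacube to approximate reflectance.

    datacube  : (H, W, B) raw sensor values across the VNIR-SWIR bands
    white_ref : (B,) ambient solar spectrum measured off a white reference tile
    dark      : optional (B,) dark-current measurement
    """
    cube = datacube.astype(np.float64)
    ref = white_ref.astype(np.float64)
    if dark is not None:
        cube = cube - dark
        ref = ref - dark
    # Per-band normalisation by the reference spectrum approximates reflectance.
    return np.clip(cube / (ref + eps), 0.0, 1.5)

# Synthetic example (the real datacubes span 660-1700 nm):
cube = np.random.rand(128, 128, 300) * 4000
white = np.random.rand(300) * 4000 + 1000
reflectance = radiance_to_reflectance(cube, white)
```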

    Sampling-Based Exploration Strategies for Mobile Robot Autonomy

    Get PDF
    A novel, sampling-based exploration strategy is introduced for Unmanned Ground Vehicles (UGVs) to efficiently map large GPS-deprived underground environments. It is compared to state-of-the-art approaches and performs at a similar level, while not being designed for a specific robot or sensor configuration like the other approaches. The introduced exploration strategy, called Random-Sampling-Based Next-Best View Exploration (RNE), uses a Rapidly-exploring Random Graph (RRG) to find possible viewpoints in an area around the robot. They are compared using computation-efficient Sparse Ray Polling (SRP) in a voxel grid to find the next-best view for exploration. Each node in the exploration graph built with RRG is evaluated regarding the ability of the UGV to traverse it, which is derived from an occupancy grid map. The occupancy grid map is also used to create a topology-based graph where nodes are placed centrally to reduce the risk of collisions and increase the amount of observable space. Nodes that fall outside the local exploration area are stored in a global graph and are connected with a Traveling Salesman Problem solver to explore them later.
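
    A minimal sketch of the next-best-view idea described above: candidate viewpoints are scored by polling sparse rays through an occupancy/voxel grid and counting the unknown space they would reveal. The grid encoding and gain function here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

UNKNOWN, FREE, OCCUPIED = -1, 0, 1

def view_gain(grid, origin, n_rays=32, max_range=20.0, step=0.5):
    """Estimate the information gain of a candidate viewpoint by casting
    sparse horizontal rays through a 2D grid and counting the unknown
    cells they reach before hitting an obstacle."""
    gain = 0
    for angle in np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False):
        direction = np.array([np.cos(angle), np.sin(angle)])
        for r in np.arange(step, max_range, step):
            cell = tuple(np.round(origin + r * direction).astype(int))
            if not (0 <= cell[0] < grid.shape[0] and 0 <= cell[1] < grid.shape[1]):
                break
            if grid[cell] == OCCUPIED:
                break                    # ray blocked by an obstacle
            if grid[cell] == UNKNOWN:
                gain += 1                # unexplored space visible from here
    return gain

def next_best_view(grid, candidates):
    """Pick the sampled viewpoint with the highest estimated gain."""
    return max(candidates, key=lambda c: view_gain(grid, np.asarray(c, dtype=float)))

# Example: a small grid that is mostly unknown except near the robot.
grid = np.full((40, 40), UNKNOWN)
grid[15:25, 15:25] = FREE
print(next_best_view(grid, candidates=[(18, 18), (23, 23)]))
```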

    Perception Systems for Autonomous Forest Machines (Autonomisten metsäkoneiden koneaistijärjestelmät)

    Get PDF
    A prerequisite for increasing the autonomy of forest machinery is to provide robots with digital situational awareness, including a representation of the surrounding environment and the robot's own state in it. Therefore, this article-based dissertation proposes perception systems for autonomous or semi-autonomous forest machinery as a summary of seven publications. The work consists of several perception methods using machine vision, lidar, inertial sensors, and positioning sensors. The sensors are used together by means of probabilistic sensor fusion. Semi-autonomy is interpreted as a useful intermediary step, situated between current mechanized solutions and full autonomy, to assist the operator. In this work, the perception of the robot's self is achieved through estimation of its orientation and position in the world, the posture of its crane, and the pose of the attached tool. The view around the forest machine is produced with a rotating lidar, which provides approximately equal-density 3D measurements in all directions. Furthermore, a machine vision camera is used for detecting young trees among other vegetation, and sensor fusion of an actuated lidar and machine vision camera is utilized for detection and classification of tree species. In addition, in an operator-controlled semi-autonomous system, the operator requires a functional view of the data around the robot. To achieve this, the thesis proposes the use of an augmented reality interface, which requires measuring the pose of the operator's head-mounted display in the forest machine cabin. Here, this work adopts a sensor fusion solution for a head-mounted camera and inertial sensors. In order to increase the level of automation and productivity of forest machines, the work focuses on scientifically novel solutions that are also adaptable for industrial use in forest machinery. Therefore, all the proposed perception methods seek to address a real, existing problem within current forest machinery. All the proposed solutions are implemented in a prototype forest machine and field tested in a forest. The proposed methods include posture measurement of a forestry crane, positioning of a freely hanging forestry crane attachment, attitude estimation of an all-terrain vehicle, positioning of a head-mounted camera in a forest machine cabin, detection of young trees for point cleaning, classification of tree species, and measurement of surrounding tree stems and the ground surface underneath.
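
    To make the attitude-estimation part concrete, here is a simplified stand-in: a complementary filter fusing gyroscope and accelerometer data into roll and pitch. The dissertation itself uses probabilistic sensor fusion; this sketch and its parameter values are illustrative assumptions only.

```python
import numpy as np

def complementary_roll_pitch(gyro, accel, dt, alpha=0.98):
    """Fuse gyroscope rates and the accelerometer's gravity direction into
    roll/pitch estimates (a simplified stand-in for probabilistic fusion).

    gyro  : (N, 3) angular rates [rad/s]
    accel : (N, 3) specific force [m/s^2]
    dt    : sample period [s]
    """
    roll, pitch = 0.0, 0.0
    estimates = []
    for w, a in zip(gyro, accel):
        # Integrating gyro rates is smooth but drifts over time.
        roll_g = roll + w[0] * dt
        pitch_g = pitch + w[1] * dt
        # The accelerometer gives an absolute, but noisy, gravity reference.
        roll_a = np.arctan2(a[1], a[2])
        pitch_a = np.arctan2(-a[0], np.hypot(a[1], a[2]))
        # Blend the two sources, trusting the gyro on short time scales.
        roll = alpha * roll_g + (1.0 - alpha) * roll_a
        pitch = alpha * pitch_g + (1.0 - alpha) * pitch_a
        estimates.append((roll, pitch))
    return np.array(estimates)

# Example with synthetic, stationary IMU data (gravity along the sensor z-axis).
gyro = np.zeros((100, 3))
accel = np.tile([0.0, 0.0, 9.81], (100, 1))
print(complementary_roll_pitch(gyro, accel, dt=0.01)[-1])   # ~ (0.0, 0.0)
```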

    Design and Evaluation of Motion Planners for Quadrotors

    Full text link
    The field of quadrotor motion planning has experienced significant advancements over the last decade. Most successful approaches rely on two stages: a front-end that determines the best path by incorporating geometric (and in some cases kinematic or input) constraints, effectively specifying the homotopy class of the trajectory; and a back-end that optimizes the path with a suitable objective function, constrained by the robot's dynamics as well as state/input constraints. However, there is no systematic approach or set of design guidelines for designing both the front and the back ends for a wide range of environments, and no literature evaluates the performance of trajectory planning algorithms with varying degrees of environment complexity. In this paper, we propose a modular approach to designing the software planning stack and offer a parameterized set of environments to systematically evaluate the performance of two-stage planners. Our parameterized environments enable us to assess different front- and back-end planners as a function of environmental clutter and complexity. We use simulation and experimental results to demonstrate the performance of selected planning algorithms across a range of environments. Finally, we open source the planning/evaluation stack and parameterized environments to facilitate more in-depth studies of quadrotor motion planning, available at https://github.com/KumarRobotics/kr_mp_desig
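
    The two-stage structure described above lends itself to a small modular interface in which the front-end and back-end are swappable components. The class and function names below are illustrative assumptions, not the open-sourced stack's actual API.

```python
from dataclasses import dataclass
from typing import Callable, List, Sequence

Waypoint = Sequence[float]        # e.g. (x, y, z)
Trajectory = List[Waypoint]       # time-parameterised samples

@dataclass
class TwoStagePlanner:
    """Compose a geometric front-end with a dynamics-aware back-end."""
    front_end: Callable[[Waypoint, Waypoint], List[Waypoint]]   # path search
    back_end: Callable[[List[Waypoint]], Trajectory]            # trajectory optimisation

    def plan(self, start: Waypoint, goal: Waypoint) -> Trajectory:
        path = self.front_end(start, goal)   # fixes the homotopy class of the route
        return self.back_end(path)           # refines it subject to dynamics and limits

# Usage with trivial stand-ins for the two stages:
planner = TwoStagePlanner(
    front_end=lambda s, g: [s, g],           # straight-line "search"
    back_end=lambda path: list(path),        # identity "optimiser"
)
trajectory = planner.plan((0.0, 0.0, 1.0), (5.0, 0.0, 1.0))
```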

    Field Testing of a Stochastic Planner for ASV Navigation Using Satellite Images

    Full text link
    We introduce a multi-sensor navigation system for autonomous surface vessels (ASVs) intended for water-quality monitoring in freshwater lakes. Our mission planner uses satellite imagery as a prior map, formulating offline a mission-level policy for global navigation of the ASV and enabling autonomous online execution via local perception and local planning modules. A significant challenge is posed by the inconsistencies in traversability estimation between satellite images and real lakes, due to environmental effects such as wind, aquatic vegetation, shallow waters, and fluctuating water levels. Hence, we specifically modelled these traversability uncertainties as stochastic edges in a graph and optimized for a mission-level policy that minimizes the expected total travel distance. To execute the policy, we propose a modern local planner architecture that processes sensor inputs and plans paths to execute the high-level policy under uncertain traversability conditions. Our system was tested on three km-scale missions on a Northern Ontario lake, demonstrating that our GPS-, vision-, and sonar-enabled ASV system can effectively execute the mission-level policy and disambiguate the traversability of stochastic edges. Finally, we provide insights gained from practical field experience and offer several future directions to enhance the overall reliability of ASV navigation systems. Project website: https://pcctp.github.io
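
    A toy version of the expected-cost reasoning behind the stochastic edges: the policy weighs sailing to an uncertain edge and disambiguating it against taking a fully known route. The costs and probability below are made-up numbers for illustration, not values from the paper.

```python
def expected_attempt_cost(cost_to_edge, cost_beyond_edge, cost_detour_from_edge, p_traversable):
    """Expected travel distance for the policy 'sail to the uncertain edge and
    disambiguate it; continue if it is traversable, otherwise take the detour'."""
    return cost_to_edge + (p_traversable * cost_beyond_edge
                           + (1.0 - p_traversable) * cost_detour_from_edge)

def best_policy(cost_to_edge, cost_beyond_edge, cost_detour_from_edge,
                cost_safe_route, p_traversable):
    """Compare attempting the stochastic edge against a fully known safe route."""
    attempt = expected_attempt_cost(cost_to_edge, cost_beyond_edge,
                                    cost_detour_from_edge, p_traversable)
    return ("attempt", attempt) if attempt < cost_safe_route else ("safe", cost_safe_route)

# Example: a shortcut across possibly weed-choked shallows vs. a longer certain route.
print(best_policy(cost_to_edge=300, cost_beyond_edge=200, cost_detour_from_edge=900,
                  cost_safe_route=1200, p_traversable=0.7))   # -> ('attempt', ~710)
```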

    Recovery Policies for Safe Exploration of Lunar Permanently Shadowed Regions by a Solar-Powered Rover

    Full text link
    The success of a multi-kilometre drive by a solar-powered rover at the lunar south pole depends upon careful planning in space and time due to highly dynamic solar illumination conditions. An additional challenge is that the rover may be subject to random faults that can temporarily delay long-range traverses. The majority of existing global spatiotemporal planners assume a deterministic rover-environment model and do not account for random faults. In this paper, we consider a random fault profile with a known, average spatial fault rate. We introduce a methodology to compute recovery policies that maximize the probability of survival of a solar-powered rover from different start states. A recovery policy defines a set of recourse actions to reach a safe location with sufficient battery energy remaining, given the local solar illumination conditions. We solve a stochastic reach-avoid problem using dynamic programming to find an optimal recovery policy. Our focus, in part, is on the implications of state space discretization, which is required in practical implementations. We propose a modified dynamic programming algorithm that conservatively accounts for approximation errors. To demonstrate the benefits of our approach, we compare against existing methods in scenarios where a solar-powered rover seeks to safely exit from permanently shadowed regions in the Cabeus area at the lunar south pole. We also highlight the relevance of our methodology for mission formulation and trade safety analysis by comparing different rover mobility models in simulated recovery drives from the LCROSS impact region. In Acta Astronautica, vol. 213, pp. 708-724, Dec. 202
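
    The reach-avoid computation can be sketched as value iteration over survival probabilities, with safe states absorbing at 1 and failure states at 0. The transition matrices below are generic placeholders rather than the rover-environment model from the paper.

```python
import numpy as np

def reach_avoid_value(n_states, safe, failed, transitions, n_iters=100):
    """Value iteration for the maximum probability of reaching a safe state
    while avoiding failure states.

    transitions[a] : (n_states, n_states) row-stochastic matrix for action a
    """
    v = np.zeros(n_states)
    v[list(safe)] = 1.0
    for _ in range(n_iters):
        q = np.stack([t @ v for t in transitions])   # expected value per action
        v_new = q.max(axis=0)
        v_new[list(safe)] = 1.0                       # absorbing success states
        v_new[list(failed)] = 0.0                     # absorbing failure states
        if np.allclose(v_new, v):
            break
        v = v_new
    return v   # v[s] = max probability of surviving from state s

# Two actions on a 4-state chain: state 3 is safe, state 0 is a failure state.
T_stay = np.eye(4)
T_move = np.array([[1.0, 0.0, 0.0, 0.0],
                   [0.2, 0.0, 0.8, 0.0],
                   [0.0, 0.1, 0.0, 0.9],
                   [0.0, 0.0, 0.0, 1.0]])
print(reach_avoid_value(4, safe={3}, failed={0}, transitions=[T_stay, T_move]))
```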

    Adaptive Technique for Contrast Enhancement of Leading Vehicle Tracks

    Get PDF
    When driving over various unpaved terrain, the track impressions left by leading vehicles mark guiding, proven routes through the area. Delineating these tracks in captured imagery can therefore provide substantial real-time guidance. Tracks that appear as edges in coarse-resolution images become elongated regions in fine-resolution images. In such cases, high-pass and edge-detection filters provide limited information for delineating tracks that pass through varied surroundings. However, the distinct texture of the tracks helps separate them from their surroundings. The gray-level co-occurrence matrix (GLCM), which captures the spatial relations of pixels, is employed here to characterize texture. The authors investigated the influence of different resolutions on the distinguishability of these tracks; the study revealed that texture plays an increasing role in distinguishing objects as image resolution improves. Extending the texture analysis to the track impressions left by the leading vehicle shows ample scope for delineating them, and the texture measures improve track contrast beyond what conventional techniques achieve. To select the best contrast-enhancement measure for a given scenario, the authors propose a quantified track index. The difference-based track index (TI) represents the mean contrast of on-track versus off-track areas. The results show an increase in the quantified contrast from 7.83 per cent to 29.06 per cent. The proposed technique identifies the image with the highest track contrast in a given scenario and can support onboard decision-making for rut-following vehicles moving in low-contrast terrain.
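
    A sketch of how a GLCM contrast measure and a difference-based track index could be computed with scikit-image (graycomatrix/graycoprops in recent versions). The quantisation, patch handling, and exact index definition are assumptions based on the abstract, not the authors' implementation.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_contrast(patch, levels=32):
    """Mean GLCM contrast of an 8-bit image patch, quantised to `levels` grey levels."""
    q = (np.clip(patch, 0, 255) / 255.0 * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return graycoprops(glcm, "contrast").mean()

def track_index(track_patch, off_track_patch):
    """Difference-based track index: mean GLCM contrast on-track minus off-track."""
    return glcm_contrast(track_patch) - glcm_contrast(off_track_patch)

# Synthetic example: a striped 'rut' patch versus a smoother background patch.
rng = np.random.default_rng(0)
track = (rng.normal(128, 40, (64, 64)) + 60 * np.sin(np.arange(64) / 3)).clip(0, 255)
background = rng.normal(128, 10, (64, 64)).clip(0, 255)
print(track_index(track, background))
```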

    VAPOR: Legged Robot Navigation in Outdoor Vegetation Using Offline Reinforcement Learning

    Full text link
    We present VAPOR, a novel method for autonomous legged robot navigation in unstructured, densely vegetated outdoor environments using offline Reinforcement Learning (RL). Our method trains a novel RL policy using an actor-critic network and arbitrary data collected in real outdoor vegetation. Our policy uses height- and intensity-based cost maps derived from 3D LiDAR point clouds, a goal cost map, and processed proprioception data as state inputs, and learns the physical and geometric properties of the surrounding obstacles such as height, density, and solidity/stiffness. The fully trained policy's critic network is then used to evaluate the quality of dynamically feasible velocities generated by a novel context-aware planner. Our planner adapts the robot's velocity space based on the presence of entrapment-inducing vegetation and narrow passages in dense environments. We demonstrate our method's capabilities on a Spot robot in complex real-world outdoor scenes, including dense vegetation. We observe that VAPOR's actions improve success rates by up to 40%, decrease the average current consumption by up to 2.9%, and decrease the normalized trajectory length by up to 11.2% compared to existing end-to-end offline RL and other outdoor navigation methods.
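
    The coupling between the context-aware planner and the offline-trained critic can be illustrated as scoring a batch of dynamically feasible candidate velocities with the critic and executing the best one. The network shapes and sampling scheme below are assumptions for illustration, not VAPOR's actual architecture.

```python
import torch

def select_velocity(critic, state, candidate_velocities):
    """Score (state, action) pairs with a trained critic and pick the best action.

    critic               : torch.nn.Module mapping (state, action) -> Q-value
    state                : (state_dim,) tensor of cost-map/proprioception features
    candidate_velocities : (N, 2) tensor of feasible (linear, angular) velocities
    """
    with torch.no_grad():
        states = state.unsqueeze(0).expand(candidate_velocities.shape[0], -1)
        q_values = critic(states, candidate_velocities).squeeze(-1)
    return candidate_velocities[q_values.argmax()]

# Toy critic and candidates, just to show the call pattern.
class ToyCritic(torch.nn.Module):
    def __init__(self, state_dim=16, action_dim=2):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(state_dim + action_dim, 64), torch.nn.ReLU(),
            torch.nn.Linear(64, 1))

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

critic = ToyCritic()
best = select_velocity(critic, torch.zeros(16), torch.rand(32, 2))
```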

    ConvBKI: Real-Time Probabilistic Semantic Mapping Network with Quantifiable Uncertainty

    Full text link
    In this paper, we develop a modular neural network for real-time semantic mapping in uncertain environments, which explicitly updates per-voxel probabilistic distributions within a neural network layer. Our approach combines the reliability of classical probabilistic algorithms with the performance and efficiency of modern neural networks. Although robotic perception is often divided between modern differentiable methods and classical explicit methods, a union of both is necessary for real-time and trustworthy performance. We introduce a novel Convolutional Bayesian Kernel Inference (ConvBKI) layer which incorporates semantic segmentation predictions online into a 3D map through a depthwise convolution layer by leveraging conjugate priors. We compare ConvBKI against state-of-the-art deep learning approaches and probabilistic algorithms for mapping to evaluate reliability and performance. We also create a Robot Operating System (ROS) package of ConvBKI and test it on real-world, perceptually challenging off-road driving data.
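
    The conjugate-prior update can be sketched as a depthwise 3D convolution over per-class channels: each voxel's Dirichlet concentration parameters are incremented by kernel-weighted semantic counts from its neighbourhood. The kernel, tensor shapes, and sizes below are illustrative assumptions, not ConvBKI's actual layer.

```python
import torch
import torch.nn.functional as F

def bki_update(alpha, semantic_counts, kernel):
    """Dirichlet-categorical conjugate update via a depthwise 3D convolution.

    alpha           : (1, C, D, H, W) per-voxel Dirichlet concentration parameters
    semantic_counts : (1, C, D, H, W) per-voxel semantic prediction counts
    kernel          : (k, k, k) non-negative spatial kernel
    """
    c = alpha.shape[1]
    weight = kernel.expand(c, 1, -1, -1, -1).contiguous()   # one kernel per class channel
    spread = F.conv3d(semantic_counts, weight,
                      padding=kernel.shape[-1] // 2, groups=c)
    return alpha + spread                                   # posterior concentration

# Toy usage: a single observation of class 2 updates its local neighbourhood.
alpha = torch.ones(1, 5, 16, 16, 16)            # uniform Dirichlet prior over 5 classes
counts = torch.zeros_like(alpha)
counts[0, 2, 8, 8, 8] = 1.0
kernel = torch.ones(3, 3, 3) / 27.0             # simple local smoothing kernel
posterior = bki_update(alpha, counts, kernel)
class_probs = posterior / posterior.sum(dim=1, keepdim=True)
```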

    Snake and Snake Robot Locomotion in Complex, 3-D Terrain

    Get PDF
    Snakes are able to traverse almost all types of environments by bending their elongate bodies in three dimensions to interact with the terrain. Similarly, a snake robot is a promising platform to perform critical tasks in various environments. Understanding how 3-D body bending effectively interacts with the terrain for propulsion and stability can not only inform how snakes move through natural environments, but also inspire snake robots to achieve similar performance in assisting humans. How snakes and snake robots move on flat surfaces has been understood relatively well in previous studies. However, such ideal terrain is rare in natural environments, and little was understood about how to generate propulsion and maintain stability when large height variations occur, apart from some qualitative descriptions of arboreal snake locomotion and a few robots using geometric planning. To bridge this knowledge gap, in this dissertation research we integrated animal experiments and robotic studies in three representative environments: a large smooth step, an uneven arena of blocks with large height variation, and large bumps. We discovered that vertical body bending induces stability challenges but can generate large propulsion. When traversing a large smooth step, a snake robot is challenged by roll instability that increases with larger vertical body bending because of a higher center of mass. The instability can be reduced by body compliance, which statistically increases surface contact. Despite the stability challenge, vertical body bending can potentially allow snakes to push against terrain for propulsion similar to lateral body bending, as demonstrated by corn snakes traversing an uneven arena. This ability to generate large propulsion was confirmed on a robot when body-terrain contact was well maintained. Contact feedback control can help the strategy accommodate perturbations such as novel terrain geometry or excessive external forces by helping the body regain lost contact. Our findings provide insights into how snakes and snake robots can use vertical body bending for efficient and versatile traversal of the three-dimensional world while maintaining stability.