5,484 research outputs found

    Adaptive and intelligent navigation of autonomous planetary rovers - A survey

    The application of robotics and autonomous systems in space has increased dramatically. The ongoing Mars rover mission involving the Curiosity rover, along with the success of its predecessors, is a key milestone that showcases the existing capabilities of robotic technology. Nevertheless, these systems still rely heavily on human tele-operators to drive them. Reducing the reliance on human experts for navigational tasks on Mars remains a major challenge due to the harsh and complex nature of Martian terrain. Developing a truly autonomous rover capable of navigating effectively in such environments requires intelligent and adaptive methods suited to a system with limited resources. This paper surveys a representative selection of work applicable to autonomous planetary rover navigation, discussing ongoing challenges and promising future research directions from the perspective of the authors.

    Watch Your Step! Terrain Traversability for Robot Control

    Watch your step! Or perhaps, watch your wheels. Whatever the robot is, if it puts its feet, tracks, or wheels in the wrong place, it might get hurt; and as robots move quickly from structured, completely known environments towards uncertain and unknown terrain, surface assessment becomes an essential requirement. Future mobile robots therefore cannot neglect evaluating the terrain's structure with respect to their driving capabilities. To fill this gap, this study focuses on terrain analysis methods that can be used for robot control, with particular reference to autonomous vehicles and mobile robots. Alongside an overview of the relevant theory, the investigation covers not only hardware, such as visual sensors and laser scanners, but also space descriptions, such as digital elevation models and point descriptors, introducing new aspects and characterizations of terrain assessment. A wide range of examples and methodologies is presented for different tools and sensors, including a recent method of terrain assessment based on normal-vector analysis. Indeed, normal vectors have demonstrated great potential for assessing terrain irregularity in both on-road and off-road environments.
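
    The normal-vector analysis mentioned in this abstract can be illustrated with a short sketch: estimate a local surface normal for every cell of a digital elevation model and flag cells whose normal tilts too far from vertical as non-traversable. The function names, the gradient-based normal estimate, and the 30° slope threshold below are illustrative assumptions, not details taken from the paper.

    ```python
    import numpy as np

    def cell_normals(dem, cell_size=0.1):
        """Estimate per-cell surface normals of a digital elevation model (DEM).

        dem       : 2-D array of terrain heights [m]
        cell_size : horizontal grid spacing [m]
        Returns an (H, W, 3) array of unit normals.
        """
        # Height gradients along y (rows) and x (columns) give the local slope.
        dz_dy, dz_dx = np.gradient(dem, cell_size)
        # A surface z = f(x, y) has (un-normalised) normal (-dz/dx, -dz/dy, 1).
        normals = np.dstack((-dz_dx, -dz_dy, np.ones_like(dem)))
        return normals / np.linalg.norm(normals, axis=2, keepdims=True)

    def traversable(dem, cell_size=0.1, max_slope_deg=30.0):
        """Mark cells whose surface normal stays within max_slope_deg of vertical."""
        normals = cell_normals(dem, cell_size)
        # Angle between the normal and the vertical axis (0 rad = flat ground).
        slope = np.arccos(np.clip(normals[..., 2], -1.0, 1.0))
        return slope <= np.deg2rad(max_slope_deg)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Synthetic rolling terrain stands in for a real DEM.
        dem = np.cumsum(rng.normal(scale=0.02, size=(60, 60)), axis=0)
        mask = traversable(dem)
        print(f"traversable cells: {mask.mean():.1%}")
    ```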

    Comparative Study of Different Methods in Vibration-Based Terrain Classification for Wheeled Robots with Shock Absorbers

    Autonomous robots operating in the field can enhance their safety and efficiency through accurate terrain classification, which can be realized using the vibration signals generated by robot-terrain interaction. In this paper, we explore vibration-based terrain classification (VTC), in particular for a wheeled robot with shock absorbers. Because the vibration sensors are usually mounted on the main body of the robot, the shock absorbers dampen the signals significantly, making the vibration signals collected on different terrains harder to discriminate; existing VTC methods may therefore degrade when applied to such a robot. The contributions are two-fold: (1) several experiments are conducted to evaluate the performance of existing feature-engineering and feature-learning classification methods; and (2) building on the long short-term memory (LSTM) network, we propose a one-dimensional convolutional LSTM (1DCL)-based VTC method to learn both spatial and temporal characteristics of the dampened vibration signals. The experimental results demonstrate that: (1) the feature-engineering methods, which are efficient for VTC on a robot without shock absorbers, are not as accurate in our setting, whereas the feature-learning methods are better choices; and (2) the 1DCL-based VTC method outperforms the conventional methods with an accuracy of 80.18%, exceeding the second-best method (LSTM) by 8.23%.
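
    As a rough illustration of the kind of architecture the abstract describes (1-D convolutions for spatial features followed by an LSTM for temporal dependencies), the sketch below builds a toy classifier in PyTorch. Layer sizes, window length, and the number of terrain classes are assumptions for illustration and do not reproduce the authors' 1DCL network.

    ```python
    import torch
    import torch.nn as nn

    class ConvLSTMVTC(nn.Module):
        """Toy 1-D conv + LSTM classifier for vibration-based terrain classification.

        Input : (batch, 1, T) windows of a single-axis vibration signal.
        Output: (batch, n_classes) class logits.
        """
        def __init__(self, n_classes=5):
            super().__init__()
            # Convolutions extract local (spatial) patterns from the dampened signal.
            self.conv = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
                nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
            )
            # The LSTM models temporal dependencies across the pooled feature sequence.
            self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
            self.head = nn.Linear(64, n_classes)

        def forward(self, x):
            f = self.conv(x)            # (batch, 32, T/16)
            f = f.transpose(1, 2)       # (batch, T/16, 32) for the LSTM
            _, (h, _) = self.lstm(f)    # last hidden state: (1, batch, 64)
            return self.head(h[-1])

    if __name__ == "__main__":
        model = ConvLSTMVTC()
        window = torch.randn(8, 1, 1024)   # 8 windows of simulated accelerometer data
        print(model(window).shape)         # torch.Size([8, 5])
    ```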

    Challenges and solutions for autonomous ground robot scene understanding and navigation in unstructured outdoor environments: A review

    The capabilities of autonomous mobile robotic systems have been steadily improving due to recent advancements in computer science, engineering, and related disciplines such as cognitive science. In controlled environments, robots have achieved relatively high levels of autonomy. In more unstructured environments, however, the development of fully autonomous mobile robots remains challenging due to the complexity of understanding these environments. Many autonomous mobile robots use classical, learning-based, or hybrid approaches for navigation. More recent learning-based methods may replace the complete navigation pipeline or selected stages of the classical approach. For effective deployment, autonomous robots must understand their external environments at a sophisticated level according to their intended applications. Therefore, in addition to robot perception, scene analysis and higher-level scene understanding (e.g., traversable/non-traversable, rough or smooth terrain, etc.) are required for autonomous robot navigation in unstructured outdoor environments. This paper provides a comprehensive review and critical analysis of these methods in the context of their applications to the problems of robot perception and scene understanding in unstructured environments and the related problems of localisation, environment mapping and path planning. State-of-the-art sensor fusion methods and multimodal scene understanding approaches are also discussed and evaluated within this context. The paper concludes with an in-depth discussion of the current state of the autonomous ground robot navigation challenge in unstructured outdoor environments and the most promising future research directions to overcome these challenges.

    Robot swarming applications

    This paper discusses the different modes of operation of a swarm of robots: (i) non-communicative swarming, (ii) communicative swarming, (iii) networking, (iv) olfactory-based navigation and (v) assistive swarming. I briefly present the state of the art in swarming, outline the major techniques applied in each mode of operation, and discuss the related problems and expected results.

    GANav: Group-wise Attention Network for Classifying Navigable Regions in Unstructured Outdoor Environments

    We present a new learning-based method for identifying safe and navigable regions in off-road terrains and unstructured environments from RGB images. Our approach classifies groups of terrain classes based on their navigability levels using coarse-grained semantic segmentation. We propose a bottleneck transformer-based deep neural network architecture that uses a novel group-wise attention mechanism to distinguish between the navigability levels of different terrains. Our group-wise attention heads enable the network to focus explicitly on the different groups and improve accuracy. In addition, we propose a dynamic weighted cross-entropy loss function to handle the long-tailed nature of the dataset. Through extensive evaluations on the RUGD and RELLIS-3D datasets, we show that our learning algorithm improves the accuracy of visual perception in off-road terrains for navigation. We compare our approach with prior work on these datasets and improve over the state-of-the-art mIoU by 6.74-39.1% on RUGD and 3.82-10.64% on RELLIS-3D.
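
    The dynamic weighted cross-entropy loss mentioned above targets the long-tailed class distribution by giving rare terrain classes larger weights. The sketch below assumes a simple inverse-frequency weighting recomputed per batch; the exact weighting scheme used in GANav is not reproduced here.

    ```python
    import torch
    import torch.nn.functional as F

    def weighted_ce_loss(logits, target, n_classes, ignore_index=255, eps=1.0):
        """Cross-entropy with per-batch inverse-frequency class weights.

        logits : (B, C, H, W) segmentation scores
        target : (B, H, W) integer class labels
        """
        valid = target[target != ignore_index]
        # Count pixels of each class in this batch and weight rare classes more.
        counts = torch.bincount(valid, minlength=n_classes).float()
        weights = (counts.sum() + eps) / (counts + eps)
        weights = weights / weights.sum() * n_classes   # normalise around 1
        return F.cross_entropy(logits, target, weight=weights,
                               ignore_index=ignore_index)

    if __name__ == "__main__":
        logits = torch.randn(2, 6, 64, 64, requires_grad=True)  # 6 navigability groups
        labels = torch.randint(0, 6, (2, 64, 64))
        loss = weighted_ce_loss(logits, labels, n_classes=6)
        loss.backward()
        print(loss.item())
    ```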

    Multi-sensor fusion method for crop-row following and traversability operations

    AXEMA-EURAGENG Conference 2017, Paris, France, 25/02/2017. Precision agriculture vehicles need autonomous navigation to carry out tasks such as planting, maintenance, and harvesting in crops such as vegetables, vineyards, or horticulture. Detecting natural objects such as trunks, grass, leaves, or obstacles in the crop row ahead of the vehicle is crucial for safe navigation. Sensors such as LiDAR devices or Time-of-Flight (TOF) cameras provide geometric data on the natural environment, using information from an Inertial Measurement Unit (IMU) to improve measurement accuracy. Fusing this geometric information with color camera data improves natural-object identification, using a color classification technique such as a Support Vector Machine (SVM) with two object classes: solid objects such as crops or tree branches, and other elements such as grass, leaves, and soil. Agricultural vehicles can use these geometric and colorimetric data in real time to follow crop rows and detect obstacles while executing various precision agriculture operations. In this application, perception sensors embedded on a light mobile robot were used to detect and identify natural objects in agricultural crops, operating in various fields, with or without soil perturbation, at different speeds and several vegetation levels, to achieve crop-row tracking tasks from a desired lateral deviation between the robot and the crop line, as well as traversability operations, which consist of making a navigation decision according to the size and nature of the objects detected in front of the vehicle: the vehicle may cross or avoid the object, or it must stop for large solid obstacles.
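
    The SVM color classification step described in this abstract can be sketched as a per-pixel two-class problem: "solid object" pixels (crop stems, branches) versus "soft" pixels (grass, leaves, soil). The RGB features, the synthetic training data, and the scikit-learn pipeline below are illustrative assumptions rather than the authors' setup.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Two classes as in the abstract: 0 = solid object (crop, branch), 1 = grass/leaf/soil.
    # Synthetic RGB training pixels stand in for labelled camera data.
    rng = np.random.default_rng(1)
    solid = rng.normal(loc=[110, 80, 60], scale=15, size=(300, 3))   # brownish stems/branches
    soft = rng.normal(loc=[70, 140, 60], scale=15, size=(300, 3))    # greenish grass/leaves
    X = np.vstack([solid, soft])
    y = np.r_[np.zeros(len(solid)), np.ones(len(soft))]

    # RBF-kernel SVM on standardised color features.
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    clf.fit(X, y)

    # Classify every pixel of a (H, W, 3) camera frame.
    frame = rng.integers(0, 256, size=(120, 160, 3)).astype(float)
    labels = clf.predict(frame.reshape(-1, 3)).reshape(frame.shape[:2])
    print("solid-object pixel fraction:", (labels == 0).mean())
    ```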