296 research outputs found

    Aspects of the Rover Problem

    The basic task of a rover is to move about autonomously in an unknown environment. A working rover must have the following three subsystems, which interact in various ways: 1) locomotion, the ability to move; 2) perception, the ability to determine the three-dimensional structure of the environment; and 3) navigation, the ability to negotiate the environment. This paper elucidates the nature of the problem in these areas and surveys approaches to solving them while paying attention to real-world issues. (MIT Artificial Intelligence Laboratory)

    Performance Investigations of an Improved Backstepping Operational-Space Position Tracking Control of a Mobile Manipulator

    This article presents an improved backstepping control technique for the operational-space position tracking of a kinematically redundant mobile manipulator. The mobile manipulator considered for the analysis has a vehicle base with four mecanum wheels and a serial manipulator arm with three rotary actuated joints. The proposed motion controller provides a safeguard against system dynamic variations owing to parameter uncertainties, unmodelled system dynamics, and unknown external disturbances. Lyapunov's direct method assists in designing and verifying the closed-loop stability and tracking ability of the suggested control strategy. The feasibility, effectiveness, and robustness of the proposed controller are demonstrated and investigated numerically with the help of computer-based simulations. The mathematical model used for the simulations is derived from a real-time mobile manipulator, and the derived model is further verified against a Gazebo model in a Robot Operating System (ROS) environment. In addition, the proposed scheme is verified on an in-house fabricated mobile manipulator system. Further, the proposed controller's performance is compared with a conventional backstepping control design in both computer-based simulations and real-time experiments.
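    To make the backstepping idea in this abstract concrete, here is a minimal sketch of a backstepping tracking controller for a double-integrator, a stand-in for one operational-space axis of the manipulator. The gains, reference trajectory, and initial state are illustrative assumptions, not taken from the paper; the paper's controller additionally handles the full coupled dynamics and disturbances.

    ```python
    import numpy as np

    # Backstepping tracking for x1' = x2, x2' = u (one axis, hypothetical).
    # Gains k1, k2 and the sinusoidal reference are illustrative choices.
    k1, k2 = 4.0, 4.0
    dt, T = 0.001, 5.0

    def ref(t):
        # desired position and its first two derivatives
        return np.sin(t), np.cos(t), -np.sin(t)

    x1, x2 = 0.5, 0.0                      # start offset from the reference
    for t in np.arange(0.0, T, dt):
        x1d, x1d_dot, x1d_ddot = ref(t)
        e1 = x1 - x1d                      # position tracking error
        alpha = x1d_dot - k1 * e1          # virtual control for the velocity
        e2 = x2 - alpha                    # error of x2 w.r.t. virtual control
        alpha_dot = x1d_ddot - k1 * (x2 - x1d_dot)
        u = alpha_dot - e1 - k2 * e2       # backstepping control law
        x1 += dt * x2                      # forward-Euler integration
        x2 += dt * u
    ```

    With V = (e1^2 + e2^2)/2, this law gives V' = -k1*e1^2 - k2*e2^2, so both errors decay exponentially, which is the Lyapunov-based design and verification step the abstract refers to.
    
    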

    Autonomous navigation in unstructured environments using an arm-mounted camera for target localization

    Autonomous mobile robots have become a popular research topic in recent years, mainly for their capacity to perform tasks in complete autonomy without the constant intervention of a human operator. In this context, autonomous navigation represents one of the main branches of study in autonomous robotics. Autonomous navigation in both structured and unstructured environments has been widely researched over the years, with the development of several techniques that try to solve this problem. Several components are required for a proper solution to the navigation problem, one of which is knowledge of the final position that the autonomous robot has to reach inside an environment. The goal of this thesis is to enhance the autonomous capabilities of a robot by making it able to detect and continuously follow a target placed inside an unstructured environment. This result is obtained using a camera installed as the end-effector of a robotic arm, which in turn is mounted on top of a mobile robot. All the methodologies and tools used in the development of this project are presented in this thesis. The performance of the algorithm is evaluated both in a static context, where the robot is fixed and the target is free to move, and in a dynamic context, where the robot moves and the target is fixed. The motion of the robot is obtained using NAPVIG, an innovative algorithm for navigation in unstructured environments. The proposed approach has been implemented using ROS and tested both in a simulated environment using Gazebo and in a real-world scenario. The results obtained from both types of experiments are presented and discussed.
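    The detect-and-follow behaviour described here can be sketched as a simple proportional visual-servoing law: keep the detected target centred in the camera image and at a desired apparent size. The image dimensions, gains, and the mapping to base commands below are all hypothetical illustrations, not the thesis's actual NAPVIG pipeline.

    ```python
    # Hedged sketch: map a target detection in the arm camera's image
    # (pixel centre cx, cy and the fraction of the image the target covers)
    # to forward-velocity and yaw-rate commands for the mobile base.
    # All constants are illustrative assumptions.
    IMG_W, IMG_H = 640, 480
    K_YAW, K_FWD = 0.002, 0.5
    TARGET_AREA = 0.05   # desired fraction of the image occupied by the target

    def control_from_detection(cx, cy, area_frac):
        """Proportional visual servoing: steer the target to the image
        centre and drive until it appears at the desired size."""
        yaw_rate = -K_YAW * (cx - IMG_W / 2)          # centre the target
        forward = K_FWD * (TARGET_AREA - area_frac)   # approach or back off
        return forward, yaw_rate

    # Target slightly right of centre and far away: drive forward, turn right.
    fwd, yaw = control_from_detection(cx=400, cy=240, area_frac=0.01)
    ```

    In a real system these commands would be published as ROS velocity messages, with the arm keeping the camera trained on the target independently of the base motion.
    
    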

    Extreme Parkour with Legged Robots

    Full text link
    Humans can perform parkour by traversing obstacles in a highly dynamic fashion requiring precise eye-muscle coordination and movement. Getting robots to do the same task requires overcoming similar challenges. Classically, this is done by independently engineering perception, actuation, and control systems to very low tolerances. This restricts them to tightly controlled settings such as a predetermined obstacle course in labs. In contrast, humans are able to learn parkour through practice without significantly changing their underlying biology. In this paper, we take a similar approach to developing robot parkour on a small low-cost robot with imprecise actuation and a single front-facing depth camera for perception which is low-frequency, jittery, and prone to artifacts. We show how a single neural net policy operating directly from a camera image, trained in simulation with large-scale RL, can overcome imprecise sensing and actuation to output highly precise control behavior end-to-end. We show our robot can perform a high jump on obstacles 2x its height, long jump across gaps 2x its length, do a handstand and run across tilted ramps, and generalize to novel obstacle courses with different physical properties. Website and videos at https://extreme-parkour.github.io
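    The "single neural net policy operating directly from a camera image" can be sketched in miniature: a small network maps a downsampled depth frame plus proprioception to joint targets. The shapes, layer sizes, and random weights below are illustrative only; the paper's policy architecture and its large-scale RL training in simulation are not reproduced here.

    ```python
    import numpy as np

    # Toy forward pass of a depth-image-to-joint-targets policy.
    # All dimensions are hypothetical stand-ins for a small quadruped.
    rng = np.random.default_rng(0)
    DEPTH_SHAPE = (24, 32)   # downsampled depth camera frame
    N_PROPRIO = 12           # e.g. current joint angles
    N_JOINTS = 12            # action: target joint positions

    n_in = DEPTH_SHAPE[0] * DEPTH_SHAPE[1] + N_PROPRIO
    W1 = rng.normal(0.0, 0.1, (n_in, 64))     # untrained weights (sketch)
    W2 = rng.normal(0.0, 0.1, (64, N_JOINTS))

    def policy(depth, proprio):
        """Map a depth frame and proprioceptive state to bounded joint
        targets, end-to-end, with a two-layer tanh MLP."""
        x = np.concatenate([depth.ravel(), proprio])
        h = np.tanh(x @ W1)
        return np.tanh(h @ W2)   # tanh keeps actions in [-1, 1]

    action = policy(rng.random(DEPTH_SHAPE), rng.random(N_PROPRIO))
    ```

    Training such a policy with RL in simulation, plus techniques to bridge the noisy real camera, is where the paper's contribution lies; this fragment only shows the input/output contract of an end-to-end policy.
    
    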