
    GPU Based Path Integral Control with Learned Dynamics

    We present an algorithm that combines recent advances in model-based path integral control with machine learning approaches to learning forward dynamics models. We exploit the parallel computing power of a GPU to quickly draw a massive number of samples from a learned probabilistic dynamics model, which we use to approximate the path integral form of the optimal control. The resulting algorithm runs in a receding-horizon fashion in real time and is subject to no restrictive assumptions about costs, constraints, or dynamics. A simple change to the path integral control formulation allows the algorithm to take model uncertainty into account during planning, and we demonstrate its performance on a quadrotor navigation task. In addition to this novel adaptation of path integral control, this is the first time that a receding-horizon implementation of iterative path integral control has been run on a real system. Comment: 6 pages, NIPS 2014 - Autonomously Learning Robots Workshop
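
    The sampling-based approximation the abstract describes can be sketched as follows. This is a minimal path-integral (MPPI-style) control update in plain NumPy, run on the CPU rather than a GPU; the function name and the rollout/cost interface are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def path_integral_update(nominal_u, rollout_fn, cost_fn, x0,
                         num_samples=1000, noise_std=0.5, lam=1.0):
    """One receding-horizon path integral update: sample perturbed control
    sequences, roll them out through the (learned) dynamics model, and
    average the perturbations weighted by the exponentiated negative cost."""
    T, m = nominal_u.shape
    noise = np.random.randn(num_samples, T, m) * noise_std
    costs = np.array([cost_fn(rollout_fn(x0, nominal_u + noise[k]))
                      for k in range(num_samples)])
    costs -= costs.min()              # shift costs for numerical stability
    w = np.exp(-costs / lam)
    w /= w.sum()                      # softmax weights over trajectories
    # Cost-weighted average of the sampled perturbations
    return nominal_u + np.einsum('k,ktm->tm', w, noise)
```

    Each rollout is independent of the others, which is why the scheme maps so naturally onto a GPU; sampling from a probabilistic (rather than deterministic) learned model is one way the planner can account for model uncertainty.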

    Mixed marker-based/marker-less visual odometry system for mobile robots

    When moving in generic indoor environments, robotic platforms generally rely solely on information provided by onboard sensors to determine their position and orientation. However, the lack of absolute references often introduces severe drift into the computed estimates, making autonomous operation very hard to accomplish. This paper proposes a solution that alleviates the impact of these issues by combining two vision-based pose estimation techniques working in relative and absolute coordinate systems, respectively. In particular, the unknown ground features in the images captured by the vertical camera of a mobile platform are processed by a vision-based odometry algorithm, which estimates the relative frame-to-frame movements. Errors accumulated in this step are then corrected using artificial markers placed at known positions in the environment. The markers are framed from time to time, which keeps the drift bounded while additionally providing the robot with the navigation commands needed for autonomous flight. The accuracy and robustness of the designed technique are demonstrated using an off-the-shelf quadrotor in extensive experimental tests
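
    The relative/absolute combination can be sketched in 2D with poses (x, y, theta); the class name and the simple overwrite-on-marker correction below are illustrative assumptions (the actual system works with full camera poses and would fuse, not replace, the estimates):

```python
import numpy as np

def compose(pose, delta):
    """Compose a 2D pose (x, y, theta) with a body-frame relative motion."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * np.cos(th) - dy * np.sin(th),
            y + dx * np.sin(th) + dy * np.cos(th),
            th + dth)

class MarkerCorrectedOdometry:
    def __init__(self, initial_pose=(0.0, 0.0, 0.0)):
        self.pose = initial_pose

    def update_relative(self, delta):
        # Dead-reckoning step from frame-to-frame visual odometry
        # (error accumulates, so the estimate drifts over time).
        self.pose = compose(self.pose, delta)

    def update_marker(self, marker_world, marker_in_robot):
        # Absolute fix: a marker at a known world pose bounds the drift.
        # Robot pose = marker world pose composed with the inverse of the
        # marker's observed pose in the robot frame.
        mx, my, mth = marker_world
        ox, oy, oth = marker_in_robot
        th = mth - oth
        self.pose = (mx - ox * np.cos(th) + oy * np.sin(th),
                     my - ox * np.sin(th) - oy * np.cos(th),
                     th)
```

    For example, if odometry has drifted to an estimate of 1.3 m while the robot is truly 1 m from the origin, observing a marker known to sit at 2 m, seen 1 m ahead, snaps the estimate back to 1 m.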

    Aerial Tele-Manipulation with Passive Tool via Parallel Position/Force Control

    This paper addresses the problem of unilateral contact interaction by an under-actuated quadrotor UAV equipped with a passive tool in a bilateral teleoperation scheme. To solve the challenging control problem of regulating force during contact interaction while maintaining flight stability and keeping the contact, we use a parallel position/force control method tailored to the system dynamics and constraints; by exploiting the compliant structure of the end-effector, the rotational degrees of freedom are also used to attain a broader range of feasible forces. In a bilateral teleoperation framework, the proposed control method regulates the aerial manipulator's position in free flight and the applied force during contact interaction. On the master side, the human operator is provided with haptic force feedback to enhance his or her situational awareness. The validity of the theory and the efficacy of the solution are shown by experimental results. This control architecture, integrated with a suitable perception/localization pipeline, could be used to perform outdoor aerial teleoperation tasks in hazardous and/or remote sites of interest
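
    The defining idea of parallel position/force control is that both loops act simultaneously, with integral action in the force loop so that it dominates the position loop along the contact normal. A minimal planar sketch, with hypothetical gains and interface (not the paper's controller, which also handles the UAV's under-actuated attitude dynamics):

```python
import numpy as np

def parallel_pos_force_step(x, f, x_d, f_d, f_integral, dt,
                            kp=4.0, kf=0.8, ki=2.0,
                            n=np.array([1.0, 0.0])):
    """One step of a parallel position/force law in the plane.

    The position loop acts in the full space; the force loop acts only
    along the contact normal n, and its integral term ensures it
    eventually overrides the position loop in that direction."""
    e_f = f_d - f
    f_integral = f_integral + e_f * dt          # integral of the force error
    u_position = kp * (np.asarray(x_d) - np.asarray(x))
    u_force = (kf * e_f + ki * f_integral) * n  # force action along n only
    return u_position + u_force, f_integral
```

    Perpendicular to the contact normal the command reduces to pure position control, which is what lets the controller track free-flight references and regulate contact force with one fixed structure.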

    A practical multirobot localization system

    We present a fast and precise vision-based software intended for multiple robot localization. The core component of the software is a novel and efficient algorithm for black and white pattern detection. The method is robust to variable lighting conditions, achieves sub-pixel precision, and its computational complexity is independent of the processed image size. With off-the-shelf computational equipment and low-cost cameras, the core algorithm is able to process hundreds of images per second while tracking hundreds of objects with millimeter precision. In addition, we present the method's mathematical model, which allows one to estimate the expected localization precision, area of coverage, and processing speed from the camera's intrinsic parameters and the hardware's processing capacity. The correctness of the presented model and the performance of the algorithm in real-world conditions are verified in several experiments. Apart from the method description, we also make its source code public at http://purl.org/robotics/whycon, so it can be used as an enabling technology for various mobile robotics problems
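
    The flavor of such a model can be sketched from the pinhole camera equations: one pixel of displacement on the sensor corresponds to height/focal-length meters on the observed plane. The function names and numbers below are illustrative assumptions, not taken from the paper:

```python
def expected_precision_m(sigma_px, cam_height_m, focal_px):
    """Metric localization precision implied by sub-pixel detection
    precision, via the pinhole model: one pixel spans height/focal
    meters on the ground plane."""
    return sigma_px * cam_height_m / focal_px

def coverage_m(width_px, height_px, cam_height_m, focal_px):
    """Ground-plane footprint (width, height) of the image in meters."""
    return (width_px * cam_height_m / focal_px,
            height_px * cam_height_m / focal_px)
```

    For instance, with an assumed focal length of 750 px, a camera 1.5 m above the plane, and 0.1 px detection precision, the expected metric precision is 0.1 * 1.5 / 750 = 0.2 mm, consistent in order of magnitude with the millimeter-level tracking claimed above.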