
    Moving object detection for interception by a humanoid robot

    Interception of a moving object with an autonomous robot is an important problem in robotics. It has various application areas, such as industrial settings where products on a conveyor are picked up by a robotic arm, military applications where intruders must be halted, robotic soccer (where robots try to reach the moving ball and block an opponent's attempt to pass it), and other challenging situations. Interception is, in and of itself, a complex task that demands target recognition capability together with proper navigation and actuation toward the moving target. There are numerous techniques for intercepting stationary targets and targets that move along a known trajectory (linear, circular, or parabolic). However, much less research has been done on objects that move along an unknown and unpredictable trajectory, change scale, and are seen from varying viewpoints, while the reference frame of the robot's vision system is itself dynamic. This study aims to find vision-based methods for object detection and tracking applicable to the autonomous interception of a moving humanoid robot target by another humanoid robot. Using the implemented vision system, a robot is able to detect, track and intercept the moving target in a dynamic environment, taking into account the unique characteristics of a humanoid robot, such as the kinematics of walking. The vision system combines object detection based on Haar/LBP feature classifiers trained as boosted cascades with target contour tracking using optical flow techniques. Constant updates during navigation help to intercept an object moving along an unpredicted trajectory.
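    A minimal sketch of the detect-then-track pattern the abstract describes, combining a boosted-cascade detector with pyramidal Lucas-Kanade optical flow in OpenCV. The cascade file name, camera index, feature counts and re-detection threshold are illustrative assumptions, not details taken from the study.

```python
import cv2
import numpy as np

cascade = cv2.CascadeClassifier("target_cascade.xml")  # hypothetical trained Haar/LBP cascade
cap = cv2.VideoCapture(0)                               # assumed camera index
prev_gray, points = None, None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    if points is None or len(points) < 10:
        # (Re)detect the target with the boosted cascade and seed track points inside its box.
        boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(boxes):
            x, y, w, h = boxes[0]
            mask = np.zeros_like(gray)
            mask[y:y + h, x:x + w] = 255
            points = cv2.goodFeaturesToTrack(gray, maxCorners=50, qualityLevel=0.01,
                                             minDistance=5, mask=mask)
    elif prev_gray is not None:
        # Track the existing points between frames with pyramidal Lucas-Kanade optical flow.
        new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
        points = new_pts[status.flatten() == 1].reshape(-1, 1, 2)
        if len(points):
            cx, cy = points.reshape(-1, 2).mean(axis=0)  # target centroid fed to navigation
    prev_gray = gray
```

    Re-detecting with the cascade whenever too few points survive the flow step is what keeps the tracker recovering from occlusions and scale changes while the robot walks.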

    Autonomous clothes manipulation using a hierarchical vision architecture

    This paper presents a novel robot vision architecture for perceiving generic 3-D clothes configurations. The architecture is hierarchically structured, starting from low-level curvature features, moving to mid-level geometric shapes and topology descriptions, and finally reaching high-level semantic surface descriptions. We demonstrate the architecture on a customized dual-arm industrial robot with an in-house developed stereo vision system, carrying out autonomous grasping and dual-arm flattening. The experimental results show the effectiveness of the proposed dual-arm flattening using the stereo vision system compared with single-arm flattening using a widely cited Kinect-like sensor as the baseline. In addition, the proposed grasping approach achieves satisfactory performance when grasping various kinds of garments, verifying the capability of the proposed visual perception architecture to adapt to more than one clothing manipulation task.
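    A minimal sketch of how such a hierarchical perception pipeline could be organised, assuming three explicit stages that mirror the low-level / mid-level / high-level split described above. The stage names, data types and placeholder computations are illustrative assumptions, not the paper's actual interfaces.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class CurvatureFeatures:            # low level: per-pixel surface curvature from stereo depth
    shape_index: np.ndarray

@dataclass
class GeometricDescription:         # mid level: geometric shapes and topology (e.g. wrinkles)
    wrinkle_ridges: list = field(default_factory=list)

@dataclass
class SemanticSurface:              # high level: semantic surface description for manipulation
    grasp_candidates: list = field(default_factory=list)

def low_level(depth_map: np.ndarray) -> CurvatureFeatures:
    # Placeholder: curvature would be estimated from the stereo depth map here.
    return CurvatureFeatures(shape_index=np.zeros_like(depth_map))

def mid_level(feats: CurvatureFeatures) -> GeometricDescription:
    # Placeholder: group high-curvature regions into wrinkle ridges and topology.
    return GeometricDescription(wrinkle_ridges=[])

def high_level(geom: GeometricDescription) -> SemanticSurface:
    # Placeholder: rank ridges as grasping / flattening targets for the dual arm.
    return SemanticSurface(grasp_candidates=list(geom.wrinkle_ridges))

def perceive(depth_map: np.ndarray) -> SemanticSurface:
    return high_level(mid_level(low_level(depth_map)))
```

    Keeping each level behind its own small interface is what would let the same low- and mid-level features serve several manipulation tasks, such as both grasping and flattening.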

    Intelligent Vision-based Autonomous Ship Landing of VTOL UAVs

    The paper discusses an intelligent vision-based control solution for autonomous tracking and landing of Vertical Take-Off and Landing (VTOL) capable Unmanned Aerial Vehicles (UAVs) on ships without utilizing a GPS signal. The central idea is to automate the Navy helicopter ship-landing procedure, in which the pilot uses the ship as the visual reference for long-range tracking but refers to a standardized visual cue installed on most Navy ships, called the "horizon bar", for the final approach and landing phases. This idea is implemented using a uniquely designed nonlinear controller integrated with machine vision. The vision system uses machine-learning-based object detection for long-range ship tracking and classical computer vision to estimate the aircraft's relative position and orientation from the horizon bar during the final approach and landing phases. The nonlinear controller operates on the information estimated by the vision system and has demonstrated robust tracking performance even in the presence of uncertainties. The developed autonomous ship-landing system was implemented on a quad-rotor UAV equipped with an onboard camera, and approach and landing were successfully demonstrated on a moving deck that imitates realistic ship deck motions. Extensive simulations and flight tests were conducted to demonstrate vertical landing safety, tracking capability, and landing accuracy.
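    A minimal sketch of the classical computer-vision step for the final approach: recovering range, lateral offset and relative tilt from the horizon bar seen in the onboard camera image. The colour threshold, bar length and focal length are illustrative assumptions; the paper's actual estimator is more elaborate.

```python
import cv2
import numpy as np

FX = 600.0           # assumed focal length in pixels
BAR_LENGTH_M = 1.0   # assumed physical length of the horizon bar

def estimate_relative_pose(frame_bgr: np.ndarray):
    """Return (range, lateral, vertical, relative tilt) from the horizon bar, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (20, 80, 80), (35, 255, 255))   # assumed bar colour band
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    bar = max(contours, key=cv2.contourArea)                 # largest blob taken as the bar
    (cx, cy), (w, h), angle = cv2.minAreaRect(bar)
    bar_px = max(w, h)
    depth = FX * BAR_LENGTH_M / bar_px                       # range from apparent bar length
    u0, v0 = frame_bgr.shape[1] / 2, frame_bgr.shape[0] / 2
    lateral = (cx - u0) * depth / FX                         # lateral offset in metres
    vertical = (cy - v0) * depth / FX                        # vertical offset in metres
    tilt = np.deg2rad(angle if w >= h else angle - 90)       # rough bar tilt vs. image axis
    return depth, lateral, vertical, tilt
```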

    Visual grasp point localization, classification and state recognition in robotic manipulation of cloth: an overview

    Cloth manipulation by robots is gaining popularity among researchers because of its relevance, mainly (but not only) in domestic and assistive robotics. The required science and technologies are beginning to be ripe for the challenges posed by the manipulation of soft materials, and many contributions have appeared in recent years. This survey provides a systematic review of existing techniques for the basic perceptual tasks of grasp point localization, state estimation and classification of cloth items, from the perspective of their manipulation by robots. This choice is grounded in the fact that any manipulative action requires instructing the robot where to grasp, and most garment-handling activities depend on the correct recognition of the type to which the particular cloth item belongs and of its state. The high inter- and intra-class variability of garments, the continuous nature of the possible deformations of cloth and the evident difficulty of predicting their localization and extent on the garment piece are challenges that have encouraged researchers to propose a plethora of methods to confront such problems, with some promising results. The present review constitutes a first effort to furnish a structured framework of these works, with the aim of helping future contributors to gain both insight and perspective on the subject.

    Robust Reinforcement Learning Algorithm for Vision-based Ship Landing of UAVs

    This paper addresses the problem of developing an algorithm for autonomous ship landing of vertical take-off and landing (VTOL) capable unmanned aerial vehicles (UAVs), using only a monocular camera on the UAV for tracking and localization. Ship landing is a challenging task due to the small landing space, the six-degree-of-freedom ship deck motion, the limited visual references for localization, and adversarial environmental conditions such as wind gusts. We first develop a computer vision algorithm which estimates the relative position of the UAV with respect to a horizon reference bar on the landing platform using the image stream from a monocular camera on the UAV. Our approach is motivated by the actual ship-landing procedure followed by Navy helicopter pilots, who track the horizon reference bar as a visual cue. We then develop a robust reinforcement learning (RL) algorithm for controlling the UAV towards the landing platform even in the presence of adversarial environmental conditions such as wind gusts. We demonstrate the superior performance of our algorithm compared to a benchmark nonlinear PID control approach, both in simulation experiments using the Gazebo environment and in a real-world setting using a Parrot ANAFI quad-rotor and a sub-scale ship platform undergoing six-degree-of-freedom deck motion.
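    A minimal sketch of the kind of PID baseline the RL policy is compared against, driven by the vision module's estimate of the UAV position relative to the landing platform. The gains, control rate and command interface are illustrative assumptions, not values from the paper.

```python
import numpy as np

class AxisPID:
    """One PID loop per translational axis, producing a velocity command."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

dt = 0.05                                               # assumed 20 Hz control rate
pids = [AxisPID(0.8, 0.05, 0.3, dt) for _ in range(3)]  # x, y, z axes, assumed gains

def control_step(rel_pos_setpoint, rel_pos_measured):
    # rel_pos_* are 3-vectors of the UAV position relative to the landing pad,
    # as estimated by the monocular vision pipeline described above.
    err = np.asarray(rel_pos_setpoint) - np.asarray(rel_pos_measured)
    return np.array([pid.step(e) for pid, e in zip(pids, err)])  # velocity command
```

    The appeal of the RL controller in the paper is precisely that fixed gains of this sort struggle under gusts and large deck motion, which the learned policy is trained to tolerate.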

    Computational intelligence approaches to robotics, automation, and control [Volume guest editors]

    No abstract available

    Adaptive Sampling For Efficient Online Modelling

    This thesis examines methods enabling autonomous systems to make active sampling and planning decisions in real time. Gaussian Process (GP) regression is chosen as the modelling framework because its non-parametric nature allows flexibility in unknown environments. The first part of the thesis focuses on depth-constrained full-coverage bathymetric surveys in unknown environments. Algorithms are developed to find and follow a depth contour, modelled with a GP, and produce a depth-constrained boundary. An extension to the Boustrophedon Cellular Decomposition, Discrete Monotone Polygonal Partitioning, is developed, allowing efficient planning for coverage within this boundary. Efficient computational methods, such as incremental Cholesky updates, are implemented to allow online hyperparameter optimisation and fitting of the GPs. This is demonstrated in simulation and in the field on a platform built for the purpose. The second part of the thesis focuses on modelling the surface salinity profiles of estuarine tidal fronts. The standard GP model assumes evenly distributed noise, which does not always hold; this can be handled with heteroscedastic noise models. An efficient new method, Parametric Heteroscedastic Gaussian Process regression, is proposed. It is applied to active sample selection on stationary fronts and to adaptive planning on moving fronts, where a number of information-theoretic methods are compared. The use of a mean function is shown to increase the accuracy of predictions whilst reducing optimisation time. These algorithms are validated in simulation. Algorithmic development focuses on efficient methods allowing deployment on platforms with constrained computational resources. Whilst the application of this thesis is autonomous surface vessels, it is hoped that the issues discussed and solutions provided are relevant to other applications in robotics and to wider fields such as spatial statistics and machine learning in general.
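    A minimal sketch of the incremental Cholesky update mentioned above, which grows the factorisation of the GP kernel matrix by one observation in O(n^2) rather than refactorising from scratch. The squared-exponential kernel, noise level and helper names are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np
from scipy.linalg import solve_triangular

def sq_exp(a, b, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance between row-vector point sets a and b."""
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def chol_append(L, X, x_new, noise=1e-2):
    """Extend lower-triangular L (with L @ L.T = K(X, X) + noise * I) by one point."""
    k = sq_exp(X, x_new[None, :]).ravel()                 # cross-covariances to the new point
    k_ss = sq_exp(x_new[None, :], x_new[None, :])[0, 0] + noise
    c = solve_triangular(L, k, lower=True)                # one triangular solve: O(n^2)
    d = np.sqrt(max(k_ss - c @ c, 1e-12))                 # guard against round-off
    n = L.shape[0]
    L_new = np.zeros((n + 1, n + 1))
    L_new[:n, :n] = L
    L_new[n, :n] = c
    L_new[n, n] = d
    return L_new

# Usage: start from a small factorisation, then grow it online as samples arrive.
X = np.random.rand(5, 2)
L = np.linalg.cholesky(sq_exp(X, X) + 1e-2 * np.eye(5))
x_star = np.random.rand(2)
L = chol_append(L, X, x_star)
X = np.vstack([X, x_star])
```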