
    Robotic Ball Catching with an Eye-in-Hand Single-Camera System

    In this paper, a unified control framework is proposed to realize a robotic ball-catching task with only a moving single-camera (eye-in-hand) system, able to catch flying, rolling, and bouncing balls within the same formalism. The thrown ball is visually tracked through a circle-detection algorithm. Once the ball is recognized, the camera is forced to follow a baseline in space so as to acquire an initial dataset of visual measurements. A first estimate of the catching point is provided by a linear algorithm. Additional visual measurements are then acquired to constantly refine the current estimate, exploiting a nonlinear optimization algorithm and a more accurate ballistic model. A classic partitioned visual servoing approach is employed to control the translational and rotational components of the camera differently. Experimental results on an industrial robotic system prove the effectiveness of the presented solution. A motion-capture system is employed to validate the proposed estimation process against ground truth.
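
    The estimation pipeline described above can be illustrated with a toy sketch (not the paper's implementation): under a drag-free ballistic model the position is linear in the unknown initial state, so a first estimate follows from linear least squares, and the catching point is then predicted by intersecting the fitted trajectory with a catch plane. The function names and the horizontal catch-plane assumption are illustrative.

```python
import numpy as np

g = 9.81  # gravitational acceleration [m/s^2]

def fit_ballistic(t, pos):
    """Least-squares fit of p(t) = p0 + v0*t - 0.5*g*t^2*ez to noisy
    position samples; linear in the unknowns p0 and v0."""
    A = np.column_stack([np.ones_like(t), t])     # design matrix [1, t]
    b = pos.copy()
    b[:, 2] += 0.5 * g * t**2                     # move the known gravity term across
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)  # rows: p0, v0
    return coef[0], coef[1]

def catch_point(p0, v0, z_catch=0.0):
    """Time and position where the fitted trajectory reaches height z_catch."""
    # solve -0.5*g*t^2 + v0z*t + (p0z - z_catch) = 0 for the later root
    a, b, c = -0.5 * g, v0[2], p0[2] - z_catch
    t_hit = (-b - np.sqrt(b**2 - 4 * a * c)) / (2 * a)
    p = p0 + v0 * t_hit
    p[2] -= 0.5 * g * t_hit**2
    return t_hit, p
```

    In the paper's pipeline this linear estimate is only the starting point; it is subsequently refined with a nonlinear optimizer and a richer ballistic model as new measurements arrive.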

    Visual Servoing

    The goal of this book is to introduce current applications of machine vision by excellent researchers around the world, and to offer knowledge that can be applied widely to other fields. The book collects the main current studies on machine vision worldwide and demonstrates convincingly the applications in which machine vision is employed. The contents show how machine-vision theory is realized in different fields. For beginners, it makes the developments in visual servoing easy to understand; engineers, professors, and researchers can study the chapters and then apply the methods to further applications.

    Carnegie Mellon Team Tartan: Mission-level Robustness with Rapidly Deployed Autonomous Aerial Vehicles in the MBZIRC 2020

    For robotics systems to be used in high-risk, real-world situations, they have to be quickly deployable and robust to environmental changes, under-performing hardware, and mission subtask failures. Robots are often designed to consider a single sequence of mission events, with complex algorithms lowering individual subtask failure rates under some critical constraints. Our approach is to leverage common techniques in vision and control and to encode robustness into the mission structure through outcome monitoring and recovery strategies, aided by a system infrastructure that allows for quick mission deployments under tight time constraints and without central communication. We also detail lessons in rapid field-robotics development and testing. Systems were developed and evaluated through real-robot experiments at an outdoor test site in Pittsburgh, Pennsylvania, USA, as well as in the 2020 Mohamed Bin Zayed International Robotics Challenge. All competition trials were completed in fully autonomous mode without RTK-GPS. Our system led to 4th place in Challenge 2 and 7th place in the Grand Challenge, with achievements such as popping five balloons (Challenge 1), successfully picking and placing a block (Challenge 2), and autonomously dispensing the most water of all teams with a UAV onto a real outdoor fire (Challenge 3). Comment: 28 pages, 26 figures. To appear in Field Robotics, Special Issue on MBZIRC 2020.
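
    The outcome-monitoring idea can be sketched minimally (a hypothetical illustration, not the team's code): each mission step reports success or failure, and a failure triggers a bounded recovery action and retry instead of aborting the whole mission.

```python
def run_with_recovery(step, recover, max_retries=2):
    """Run `step` (a callable returning True on success); on failure,
    run `recover` and retry, up to max_retries times.
    Returns True if the subtask eventually succeeded."""
    for attempt in range(max_retries + 1):
        if step():
            return True
        if attempt < max_retries:
            recover()  # e.g. back off, re-localize, re-detect the target
    return False       # caller falls through to the next mission branch
```

    Chaining such guarded steps yields a mission that degrades gracefully: a failed subtask costs bounded time and the mission structure, rather than any single algorithm, carries the robustness.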

    Design and modeling of a stair climber smart mobile robot (MSRox)


    PAMPC: Perception-Aware Model Predictive Control for Quadrotors

    We present the first perception-aware model predictive control framework for quadrotors that unifies control and planning with respect to action and perception objectives. Our framework leverages numerical optimization to compute trajectories that satisfy the system dynamics and require control inputs within the limits of the platform. Simultaneously, it optimizes perception objectives for robust and reliable sensing by maximizing the visibility of a point of interest and minimizing its velocity in the image plane. Considering both perception and action objectives for motion planning and control is challenging due to the possible conflicts arising from their respective requirements. For example, for a quadrotor to track a reference trajectory, it needs to rotate to align its thrust with the direction of the desired acceleration. However, the perception objective might require minimizing such rotation to maximize the visibility of a point of interest. A model-based optimization framework, able to consider both perception and action objectives and couple them through the system dynamics, is therefore necessary. Our perception-aware model predictive control framework works in a receding-horizon fashion by iteratively solving a non-linear optimization problem. It is capable of running in real time, fully onboard our lightweight, small-scale quadrotor using a low-power ARM computer, together with a visual-inertial odometry pipeline. We validate our approach in experiments demonstrating (I) the contradiction between perception and action objectives, and (II) improved behavior in extremely challenging lighting conditions.
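
    A toy version of the perception objective (illustrative only, not the paper's cost function): penalize the angle between the camera's optical axis and the bearing to the point of interest (visibility), plus the point's apparent image-plane motion. The weights and the small-angle motion term are assumptions.

```python
import numpy as np

def perception_cost(cam_pos, cam_dir, point, point_vel, w_vis=1.0, w_vel=0.1):
    """Toy perception objective: small when the point of interest is
    centered in the view and nearly motionless in the image."""
    bearing = point - cam_pos
    d = np.linalg.norm(bearing)
    bearing = bearing / d
    cam_dir = cam_dir / np.linalg.norm(cam_dir)
    vis = 1.0 - bearing @ cam_dir            # 0 when the point is on the optical axis
    # velocity component perpendicular to the bearing, scaled by inverse
    # distance: a small-angle proxy for image-plane motion
    v_perp = point_vel - (point_vel @ bearing) * bearing
    motion = np.linalg.norm(v_perp) / d
    return w_vis * vis + w_vel * motion
```

    In an MPC setting a term like this would be summed along the prediction horizon and traded off against tracking and actuation costs, which is exactly where the perception/action conflict described above shows up.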

    Study of Control Strategies for Robot Ball Catching

    This thesis studies a possible scenario for catching a ball with a robotic arm using available technologies, considering two main problems: studying different control strategies for the robotic arm to catch the ball (predictive and prospective control), and implementing a ROS simulator of the real robot, including a vision system that recognizes and tracks the ball using the Microsoft Kinect sensor, with several simulations.
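
    The two control strategies can be caricatured in a few lines (a hypothetical one-dimensional sketch, not the thesis code): a predictive controller servos the hand toward a catch point computed once in advance from a ball model, whereas a prospective controller continuously cancels the currently observed hand-ball offset using only the latest visual measurement. The gain and the 1-D setting are illustrative.

```python
def predictive_step(hand_x, catch_x, gain=0.5):
    """Move toward an interception point estimated in advance;
    robust to occlusion, sensitive to model error."""
    return hand_x + gain * (catch_x - hand_x)

def prospective_step(hand_x, ball_x, gain=0.5):
    """Move to cancel the currently observed offset to the ball;
    model-free, but needs continuous visual feedback."""
    return hand_x + gain * (ball_x - hand_x)
```

    The update laws are deliberately identical in form; the strategies differ in where the target comes from, a pre-computed prediction versus the latest measurement.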

    Visual perception for basketball shooting

    Vision is one of the six sensory systems that we use to know and interact with our environment but has been singled out as the most important form of exteroception for motor control. The reason for this implicit upgrade is probably that many human actions are directed at objects or targets beyond our immediate physical contact. The only link between these objects and us is the pattern of light reflected from their surfaces, and yet we identify and act upon them with great ease. No doubt humans make significant strides in establishing appropriate relations between perceptions and actions at early stages of their development. When my nephew Rodrigo was three months old it took him considerable perseverance and a lot of jerky movements to finally grasp the toy my mother was patiently holding and rambling. But once the relations between perceptions and actions are better established, humans can be incredibly skilful at interacting with distant objects even when the constraints imposed on the interaction are severe and a high degree of precision is required. Like many other sportive tasks, basketball shooting is characterised by tight temporal constraints, limited spatial variation, and high accuracy demands. How basketball players manage to consistently throw a ball through the basket, even if severely challenged by their opponents, is a remarkable feat that has occupied scientists for years, and the present work is but another step in understanding the intricate relations between visual perception and action in such a context where few errors are allowed and few are made. The research reported in the present thesis was conducted to uncover the visual basis of basketball shooting. Basketball shooting consists of throwing a ball on a parabolic flight that passes through a metal rim twice the size of the ball at three metres height. Common shooting types are the free throw and the jump shot. 
Free throws are taken in less than 10 s from the 4.6 m line without opposition. Jump shots can be taken from anywhere in the field, usually in the presence of opponents, and imply that the ball is released while the player is airborne. Conventional knowledge stipulates that players must see the basket before they shoot. Straightforward as this statement may seem, it can be incorrect in two ways. First, it is not a given that vision is required before the shot, as opposed to during the shot. While vision gathered before the movement may be useful, it may also be insufficient or unnecessary for accurate shooting. This temporal aspect is relevant because it gives insight into the timely interaction between visual perception and action. Second, it is not certain that the player must actually see the basket, as opposed to merely looking at it. The location of the target may be perceived through various information sources, not necessarily retinal ones. This spatial aspect is relevant because it gives insight into the optical basis of goal-directed movement. In what follows we describe in more detail what these temporal and spatial aspects of visual perception and action consist of, backed up with relevant literature. Next, we briefly review the available literature on the visual perception of basketball shooting and introduce six experiments in which the temporal and spatial aspects of basketball shooting are investigated.
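
    The free-throw geometry quoted above (4.6 m line, rim at roughly 3 m) fixes the required release speed for a given launch angle under drag-free projectile motion; the assumption of a release point about 1 m below the rim is an illustrative example value, not a figure from the thesis.

```python
import math

g = 9.81  # gravitational acceleration [m/s^2]

def release_speed(d, dh, theta_deg):
    """Launch speed for a drag-free ball released at angle theta_deg that
    travels horizontal distance d while rising dh (metres), from
    dh = d*tan(theta) - g*d^2 / (2 * v^2 * cos(theta)^2)."""
    th = math.radians(theta_deg)
    denom = 2.0 * math.cos(th) ** 2 * (d * math.tan(th) - dh)
    if denom <= 0:
        raise ValueError("launch angle too flat to reach the rim")
    return math.sqrt(g * d ** 2 / denom)

# e.g. a free throw released about 1 m below the rim at a 52-degree launch angle
v = release_speed(4.6, 1.0, 52.0)  # roughly 7.5 m/s
```

    The steep sensitivity of this formula to angle and speed is one way to appreciate the precision demands the thesis describes: small release errors translate into misses.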

    Importance of embodiment towards co-operation in multi robot systems

    The work presented in this thesis relates to one of the major ongoing problems in robotics: developing control architectures for cooperation in Multi Robot Systems (MRS). It has been widely accepted that embodiment is a prime requirement for robotics. However, in the case of MRS research, two major shortfalls were identified. First, no effort had been made into research platforms for embodied MRS. Second, it was observed that, generally, the more units in an MRS, the lower their capabilities and, as a result, the poorer their degree of embodiment. These two issues were addressed separately. Firstly, a novel concept for an MRS development platform named 'Re-embodiment' is presented. Re-embodiment aims to facilitate research on control systems for MRS by minimising the effort required to ensure that the robots remain embodied and situated. Using Re-embodiment, researchers can implement and test largely different control algorithms at virtually the same time on large fleets of robots. Secondly, an innovative mono-vision distance-measurement algorithm is presented. The intention is to provide a cheap, yet information-rich, sensory input that can be realistically implemented on large fleets of robots. After a 'one-off' calibration of the image sensor, distances from the robot to objects in its environment can be estimated from single frames.
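
    The 'one-off' calibration and single-frame range estimate are consistent with a simple pinhole-camera model (a sketch under that assumption; the thesis algorithm itself is not reproduced here): one observation of an object of known size at a known distance yields the focal length in pixels, after which apparent size gives distance.

```python
def calibrate_focal(known_dist_m, real_height_m, h_pixels):
    """'One-off' calibration: recover the focal length in pixels from a
    single observation of an object of known height at a known distance."""
    return known_dist_m * h_pixels / real_height_m

def estimate_distance(h_pixels, real_height_m, focal_px):
    """Pinhole-model range: an object of known real height spanning
    h_pixels in the image lies at d = focal_px * real_height_m / h_pixels."""
    return focal_px * real_height_m / h_pixels
```

    The appeal for large fleets is clear: a single cheap camera and one calibration observation replace a dedicated range sensor on every robot, at the cost of requiring objects of known size.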