3 research outputs found

    Robotic Ball Catching with an Eye-in-Hand Single-Camera System

    In this paper, a unified control framework is proposed to realize a robotic ball catching task with only a single moving camera (eye-in-hand system), able to catch flying, rolling, and bouncing balls within the same formalism. The thrown ball is visually tracked through a circle detection algorithm. Once the ball is recognized, the camera is commanded to follow a baseline in space so as to acquire an initial dataset of visual measurements. A first estimate of the catching point is provided by a linear algorithm. Then, additional visual measurements are acquired to constantly refine the current estimate by exploiting a nonlinear optimization algorithm and a more accurate ballistic model. A classic partitioned visual servoing approach is employed to control the translational and rotational components of the camera differently. Experimental results obtained on an industrial robotic system prove the effectiveness of the presented solution. A motion-capture system is employed to validate the proposed estimation process against ground truth.
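    The two-stage estimation can be illustrated with a minimal Python sketch, assuming the 3D ball positions have already been triangulated from the camera measurements. The linear-drag ballistic model, the coefficient k, and all function names are illustrative assumptions, not the paper's implementation.

    import numpy as np
    from scipy.optimize import least_squares

    G = np.array([0.0, 0.0, -9.81])  # gravity in the world frame (m/s^2)

    def linear_fit(t, p):
        """Initial estimate: fit p0, v0 of the drag-free model
        p(t) = p0 + v0*t + 0.5*G*t^2 to 3D positions p[i] taken at times t[i]."""
        y = p - 0.5 * np.outer(t**2, G)              # remove the known gravity term
        A = np.column_stack([np.ones_like(t), t])    # [1, t] basis
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        return coef[0], coef[1]                      # p0, v0

    def residuals(x, t, p, k):
        """Residuals of a more accurate model with linear air drag:
        dv/dt = G - k*v, which integrates to the closed form below."""
        p0, v0 = x[:3], x[3:]
        e = np.exp(-k * t)[:, None]
        pred = p0 + (v0 - G / k) * (1 - e) / k + (G / k) * t[:, None]
        return (pred - p).ravel()

    def estimate_trajectory(t, p, k=0.05):           # k is an illustrative drag value
        p0, v0 = linear_fit(t, p)                    # linear initial guess
        sol = least_squares(residuals, np.hstack([p0, v0]), args=(t, p, k))
        return sol.x[:3], sol.x[3:]                  # refined p0, v0

    The refined initial position and velocity can then be propagated forward to find where the trajectory crosses the catching surface.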

    Reactive Motions In A Fully Autonomous CRS Catalyst 5 Robotic Arm Based On RGBD Data

    This study proposes a method to estimate velocity from motion blur in a single image frame, along the x and y axes of the camera coordinate system, and to intercept a moving object with a robotic arm. It is shown that velocity estimation from a single image frame improves the system's performance: the majority of previous studies in this area require at least two image frames to measure the target's velocity, and they mostly employ specialized equipment able to generate high torques and accelerations.

    The setup consists of a 5-degree-of-freedom robotic arm and a Kinect camera. The RGBD (red, green, blue, and depth) camera provides the RGB and depth information used to detect the position of the target. As the object is moving within a single image frame, the image contains motion blur. To recognize and differentiate the object from the blurred area, the image intensity profiles are studied; the method determines the blur parameters, namely the length of the object and the length of the partial blur, from the changes in the intensity profile. Based on the motion blur, the velocities along the x and y camera coordinate axes are estimated. However, as the depth frame cannot record motion blur, the velocity along the z axis in the camera coordinate frame is initially unknown. The position and velocity vectors are transformed into the world coordinate frame, and the prospective position of the object after a predefined time interval is predicted. To intercept the object, the end-effector of the robotic arm must reach this predicted position within the same interval; the required joint angles and accelerations are determined through inverse kinematics, and the robotic arm then starts its motion. Once the second depth frame is obtained, the object's velocity along the z axis can be calculated as well; the predicted position of the object is then recalculated and the motion of the manipulator is modified.

    The proposed method is compared with existing methods that need at least two image frames to estimate the velocity of the target. It is shown that, under identical kinematic conditions, the functionality of the system is improved by times for our setup. In addition, the experiment is repeated for times and the velocity data is recorded. According to the experimental results, there are two major limitations in our system and setup. The system cannot determine the velocity along z in the camera coordinate system from the initial image frame; consequently, if the object travels faster along this axis, the task becomes more susceptible to failure. In addition, our manipulator is unspecialized equipment that is not designed to produce high torques and accelerations, which makes the task more challenging. The main cause of error in the experiments was the operator's throw: the object must pass through the working volume of the robot and must still be inside it after the predefined time interval, but the operator may throw the object through the designated working volume in such a way that it leaves the volume earlier than the specified time interval.
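    As a rough illustration of the single-frame estimate, the Python sketch below splits a one-dimensional intensity profile into the sharp object core and the partial-blur ramp, converts the blur length into an in-plane speed, and predicts the interception point. The thresholds, exposure time, focal length, and function names are hypothetical parameters, not values from the study.

    import numpy as np

    def blur_lengths(profile, lo=0.2, hi=0.8):
        """Split a normalized (0..1) intensity profile taken along the motion
        direction into the sharp object core (>= hi) and the partial-blur
        ramp (between lo and hi); lo and hi are assumed thresholds."""
        core = np.count_nonzero(profile >= hi)                    # object length (px)
        ramp = np.count_nonzero((profile > lo) & (profile < hi))  # blur length (px)
        return core, ramp

    def velocity_from_blur(ramp_px, depth_m, exposure_s, focal_px, direction_xy):
        """The blur spans the distance travelled during one exposure, so the
        pixel speed is ramp/exposure; depth and focal length scale it to m/s.
        The z component stays 0 until a second depth frame is available."""
        speed = (ramp_px / exposure_s) * depth_m / focal_px
        dx, dy = direction_xy                       # unit vector in the image plane
        return np.array([speed * dx, speed * dy, 0.0])

    def predict_interception(pos_m, vel_m_s, dt):
        """Constant-velocity prediction of where the object will be after dt;
        inverse kinematics must bring the end-effector there within dt."""
        return np.asarray(pos_m) + vel_m_s * dt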

    3D monocular robotic ball catching with an iterative trajectory estimation refinement

    In this paper, a 3D robotic ball catching algorithm that employs only an eye-in-hand monocular vision system is presented. A partitioned visual servoing control is used to generate the robot motion, always keeping the ball in the field of view of the camera. When the ball is detected, the camera mounted on the robot end-effector is commanded to follow a suitable baseline in order to acquire measurements and provide a first possible interception point through a linear estimation process. Thereafter, further visual measurements are acquired to continuously refine the previous prediction through a nonlinear estimation process. Experimental results show the effectiveness of the proposed solution.
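    A partitioned law of the kind described in these papers can be sketched as follows: translation is commanded along the predefined baseline while rotation is servoed on the image error so the ball stays in view. The gains and the small-angle approximation of the point-feature interaction matrix are illustrative assumptions, not the authors' controller.

    import numpy as np

    def partitioned_control(ball_xy, target_xy, baseline_dir, v_mag=0.5, k_rot=1.5):
        """Return a camera-frame twist (v, omega). Translation follows the
        baseline; rotation uses a proportional law from the rotational part of
        the point-feature interaction matrix near the image center, where
        x_dot ~ -omega_y and y_dot ~ omega_x for normalized coordinates."""
        e = np.asarray(ball_xy) - np.asarray(target_xy)   # image error (normalized)
        v = v_mag * np.asarray(baseline_dir)              # translational velocity
        omega = np.array([-k_rot * e[1], k_rot * e[0], 0.0])
        return v, omega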