    An Active Visual Estimator for Dexterous Manipulation

    We present a working implementation of a dynamics-based architecture for visual sensing. This architecture provides field-rate estimates of the positions and velocities of two independently falling balls in the face of repeated visual occlusions and departures from the field of view. The practical success of this system can be attributed to the interconnection of two strongly nonlinear dynamical systems: a novel triangulating state estimator and an image-plane window controller. We detail the architecture of this active sensor, provide data documenting its performance, and offer an analysis of its soundness in the form of a convergence proof for the estimator and a boundedness proof for the window controller.
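
    The interplay between estimator and window controller can be made concrete with a toy example. The sketch below is illustrative only, not the paper's triangulating estimator: the gains, rates, and measurement sequence are invented. It runs a linear observer for a single ball falling under gravity, predicting through occlusions and re-centring an image window on the prediction.

```python
import numpy as np

# Toy sketch (not the paper's estimator; gains and rates are invented):
# a linear observer for one ball falling under gravity, which predicts
# through occlusions and corrects when a measurement is available, plus
# a window "manager" that re-centres the acquisition window on the
# predicted image position.

DT, G = 1.0 / 60.0, 9.81                 # field rate, gravity
A = np.array([[1.0, DT], [0.0, 1.0]])    # dynamics of [y, y_dot]
B = np.array([0.5 * DT**2, DT])          # gravity enters as an input
L = np.array([0.4, 6.0])                 # hand-tuned observer gain

def estimator_step(x_hat, y_meas):
    """One field-rate update; y_meas is None while the ball is occluded."""
    x_pred = A @ x_hat + B * (-G)        # predict: free fall
    if y_meas is not None:               # correct only when visible
        x_pred = x_pred + L * (y_meas - x_pred[0])
    return x_pred

def window_center(x_hat, lookahead=2):
    """Window controller: place the window where the ball will be next."""
    x = x_hat.copy()
    for _ in range(lookahead):
        x = A @ x + B * (-G)
    return x[0]

x_hat = np.array([0.0, 0.0])
for k, z in enumerate([0.0, -0.003, None, None, -0.022]):  # None = occluded
    x_hat = estimator_step(x_hat, z)
    print(f"t={k * DT:.3f}s  y_hat={x_hat[0]:+.4f}  window at {window_center(x_hat):+.4f}")
```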

    Dynamic Bat-Control of a Redundant Ball Playing Robot

    This thesis presents a control algorithm for a ball-batting task performed by an entertainment robot. The robot, named Doggy after its dog-like costume, has three joints with a redundant degree of freedom; its design, mechanics, and electronics were developed by us. DC motors drive the joints through toothed belts, resulting in elasticities between motor and link. Both the redundancy and the elasticity must be taken into account by our controller, which makes this a demanding control task. In this thesis we show the structure of the ball-playing robot and how this structure can be described as a model. We distinguish two models: one that includes a flexible bearing and one that does not. Both models are calibrated using the Sparse Least Squares on Manifolds (SLOM) toolkit, i.e., the model parameters are determined, and both calibrated models are compared against measurements of the real system. The model with the flexible bearing is used to implement a state estimator, based on a Kalman filter, on a microcontroller, ensuring real-time estimation of the robot states. The estimated states are compared with the measurements and represent them well. At the core of this work we develop a Task Level Optimal Controller (TLOC), a model-predictive optimal controller based on the principles of a Linear Quadratic Regulator (LQR). The aim is to play a ball back to an opponent precisely. We show how the task of playing a ball at a desired time with a desired velocity at a desired position can be embedded into the LQR framework, using cost functions for the task description. In simulations we demonstrate the functionality of the control concept, which consists of a linear part running on a microcontroller and a nonlinear part running as PC software; the linear part applies feedback gains calculated by the nonlinear part. The ball-batting controller with precalculated feedback gains is evaluated on the robot and yields successful batting motions. The entertainment aspect was tested at the Open Campus Day of the University of Bremen and is summarized here briefly, as is a jointly developed audience interaction based on the recognition of distinctive sounds. In this thesis we answer the question of whether a rebound task for our robot can be defined within a controller, and we show the steps necessary to do so.
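
    To make the LQR embedding of the batting task concrete, here is a hedged sketch. It is not the actual TLOC or the Doggy model: the dynamics, weights, and desired impact state are invented, and the affine drift from the moving target is neglected, so tracking is only approximate. A finite-horizon LQR for a double-integrator joint uses a heavy terminal cost to encode "reach a desired angle and velocity at the impact time", yielding precalculated time-varying feedback gains as described above.

```python
import numpy as np

# Hedged sketch: finite-horizon LQR for a double-integrator joint with a
# large terminal cost pulling the state to a desired impact state.
# Model, weights, and targets are illustrative, not Doggy/TLOC values.

dt, N = 0.01, 100                      # 1 s horizon to impact
A = np.array([[1, dt], [0, 1]])        # [angle, velocity] dynamics
B = np.array([[0.5 * dt**2], [dt]])
Q = np.diag([0.0, 0.0])                # no running state cost
R = np.array([[1e-4]])                 # cheap control
Qf = np.diag([1e5, 1e5])               # heavy terminal cost at impact
x_hit = np.array([0.8, 2.5])           # desired angle and velocity at T

# Backward Riccati recursion -> precalculated time-varying gains K[k]
P, K = Qf, []
for _ in range(N):
    Kk = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ Kk)
    K.append(Kk)
K = K[::-1]

# Forward simulation; the affine offset from the moving target is
# neglected in this sketch, so the impact state is reached approximately.
x = np.zeros(2)
for k in range(N):
    u = -K[k] @ (x - x_hit)
    x = A @ x + (B @ u).ravel()
print("state at impact time:", x.round(3), "desired:", x_hit)
```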

    Robotic Ball Catching with an Eye-in-Hand Single-Camera System

    In this paper, a unified control framework is proposed to realize a robotic ball-catching task with only a moving single-camera (eye-in-hand) system, able to catch flying, rolling, and bouncing balls within the same formalism. The thrown ball is visually tracked through a circle-detection algorithm. Once the ball is recognized, the camera is commanded to follow a baseline in space so as to acquire an initial dataset of visual measurements. A first estimate of the catching point is provided through a linear algorithm; additional visual measurements are then acquired to constantly refine the current estimate, exploiting a nonlinear optimization algorithm and a more accurate ballistic model. A classic partitioned visual servoing approach is employed to control the translational and rotational components of the camera differently. Experimental results performed on an industrial robotic system prove the effectiveness of the presented solution, and a motion-capture system is employed to validate the proposed estimation process against ground truth.
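
    The first, linear estimation stage lends itself to a short sketch. The following is illustrative only: it uses a drag-free ballistic model and synthetic measurements, and omits the camera geometry and the nonlinear refinement stage. It fits p(t) = p0 + v0*t + 0.5*g*t^2 to early position measurements by linear least squares, then predicts where the trajectory crosses a catch plane.

```python
import numpy as np

# Illustrative linear stage: least-squares fit of a drag-free ballistic
# model to early ball positions, then prediction of the catch point.
# All values are synthetic; the real system works on camera measurements.

g = np.array([0.0, 0.0, -9.81])

def fit_ballistic(ts, ps):
    """Least-squares estimate of (p0, v0) from 3-D ball positions."""
    # p(t) - 0.5*g*t^2 = p0 + v0*t  ->  linear in the unknowns
    y = ps - 0.5 * np.outer(ts**2, g)
    Als = np.column_stack([np.ones_like(ts), ts])
    coef, *_ = np.linalg.lstsq(Als, y, rcond=None)
    return coef[0], coef[1]            # p0, v0

def catch_point(p0, v0, z_catch=0.8):
    """Time and position where the predicted trajectory hits z = z_catch."""
    a, b, c = 0.5 * g[2], v0[2], p0[2] - z_catch
    t = (-b - np.sqrt(b**2 - 4*a*c)) / (2*a)   # later (descending) root
    return t, p0 + v0 * t + 0.5 * g * t**2

# Synthetic throw observed for the first 0.2 s at 50 Hz
ts = np.arange(0, 0.2, 0.02)
true_p0, true_v0 = np.array([0, 3.0, 1.0]), np.array([0.1, -4.0, 3.5])
ps = true_p0 + np.outer(ts, true_v0) + 0.5 * np.outer(ts**2, g)
ps += np.random.default_rng(0).normal(0, 0.005, ps.shape)  # measurement noise

p0, v0 = fit_ballistic(ts, ps)
t_c, p_c = catch_point(p0, v0)
print(f"predicted catch at t={t_c:.2f}s, p={p_c.round(3)}")
```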

    Visual Human-Computer Interaction

    Direct Visual Servoing for Grasping Using Depth Maps

    Visual servoing is valuable for many applications, such as object tracking, end-effector positioning, and grasping, and it has proven useful in industrial settings, academic projects, and research. It remains a challenging task in robotics, and considerable research has been devoted to improving servoing methods, for the grasping application in particular. Our goal is to use visual servoing to control the end-effector of a robotic arm, bringing it to a grasping position for the object of interest. Obtaining depth information has always been a major challenge for visual servoing, yet it is necessary: depth was either assumed to be available from a 3-D model or was estimated using stereo vision or other methods, a process that is computationally expensive and whose results can be inaccurate due to sensitivity to environmental conditions. Depth maps have recently become more widely used by researchers, as they offer an easy, fast, and cheap way to capture depth information. This solves the problem of estimating the required 3-D information, but the algorithms developed so far succeed only when starting from small initial errors; an effective position controller capable of reaching the target location from large initial errors is needed. The thesis presented here uses Kinect depth maps to directly control a robotic arm so that it reaches a grasping location specified by a target image. The algorithm consists of a two-phase controller: the first phase is a feature-based approach that provides a coarse alignment with the target image, leaving relatively small errors; the second phase is a control scheme that minimizes the difference between the depth maps of the current and target images. This controller allows the system to achieve minimal steady-state errors in translation and rotation when starting from a relatively small initial error. To test the system's effectiveness, several experiments were conducted. The experimental setup consists of a Barrett WAM robotic arm with a Microsoft Kinect camera mounted on it in an eye-in-hand configuration. A defined goal scene, taken from the grasping position, is provided to the system, whose controller drives it to the target position from any initial condition. Our system outperforms previous work on this subject and functions successfully even with large initial errors, which is achieved by preceding the main control algorithm with the coarse, feature-based image alignment. Automating the system further, by automatically detecting the best grasping position and making that location the robot's target, would be a logical extension to improve and complete this work.
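
    The second-phase idea, servoing on the depth-map difference, can be sketched in a few lines. The toy below is not the thesis' controller: the "scene" is an analytically rendered plane, the pose has only three degrees of freedom, and the cost gradient is taken by finite differences. It only shows the principle of driving the depth-map error to zero.

```python
import numpy as np

# Toy depth-map servoing: minimise the squared difference between the
# current and goal depth maps over a 3-DOF camera pose. The scene (a
# tilted plane) and all parameters are illustrative; the real system
# works on Kinect depth maps with a proper interaction model.

H, W = 48, 64
u, v = np.meshgrid(np.linspace(-1, 1, W), np.linspace(-1, 1, H))

def render_depth(pose):
    """Depth of a tilted plane seen from pose = (z, tilt_x, tilt_y)."""
    z, tx, ty = pose
    return 2.0 - z + tx * v + ty * u

def cost(pose, D_goal):
    return 0.5 * np.mean((render_depth(pose) - D_goal) ** 2)

def grad(pose, D_goal, eps=1e-4):
    """Finite-difference gradient of the depth-map error."""
    g = np.zeros(3)
    for i in range(3):
        dp = np.zeros(3)
        dp[i] = eps
        g[i] = (cost(pose + dp, D_goal) - cost(pose - dp, D_goal)) / (2 * eps)
    return g

goal_pose = np.array([0.3, 0.05, -0.02])
D_goal = render_depth(goal_pose)          # "target image" depth map

pose = np.zeros(3)                        # start after coarse alignment
for it in range(200):
    pose -= 1.0 * grad(pose, D_goal)      # gradient-descent "servo" step
print("converged pose:", pose.round(4), "goal:", goal_pose)
```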

    Visual Servoing For Robotic Positioning And Tracking Systems

    Visual servoing is a robot control method in which camera sensors are used inside the control loop, introducing visual feedback that enhances robot control performance in unstructured environments. In general, visual servoing can be categorized into image-based visual servoing (IBVS), position-based visual servoing (PBVS), and hybrid approaches. To improve the performance and robustness of visual servoing systems, research on IBVS for robotic positioning and tracking mainly focuses on camera configuration, image features, pose estimation, and depth determination. In the first part of this research, two novel multiple-camera configurations of visual servoing systems are proposed for robotic manufacturing systems that position large-scale workpieces. Their main advantage is that the depths of target objects or target features are constant or can be determined precisely by computer vision; hence the accuracy of the interaction matrix is guaranteed, and the positioning performance of the visual servoing system can be improved remarkably. Simulation results show that the proposed multiple-camera configurations for large-scale manufacturing systems can satisfy the demands of high-precision positioning and assembly in the aerospace industry. In the second part of this research, two improved image features for planar, centrally symmetric objects are proposed based on image moment invariants, which represent the pose of target objects with respect to the camera frame. A visual servoing controller based on the proposed image moment features is designed, improving the control performance of the robotic tracking system compared with methods based on the commonly used image moment features. Experimental results on a 6-DOF robot visual servoing system demonstrate the efficiency of the proposed method. Lastly, to address the challenge of choosing image features for planar objects that yield a maximally decoupled interaction matrix, a neural network (NN) is applied as an estimator of the target object pose with respect to the camera frame, based on the image moment invariants. Compared with previous methods, this scheme avoids interaction matrix singularities and image local minima in IBVS. Furthermore, an analytical form of the depth computation is given using classical geometric primitives and image moment invariants. A visual servoing controller is designed, enhancing the tracking performance of robotic tracking systems. Experimental results on a 6-DOF robot system illustrate the effectiveness of the proposed scheme.
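
    The emphasis above on depth determination and interaction-matrix accuracy can be illustrated with the textbook IBVS law v = -lambda * pinv(L) * e for point features, whose interaction matrix rows depend on the feature depth Z. The sketch below uses the standard formulation with made-up point data, not the thesis' multi-camera or moment-based variants; it shows how a wrong depth estimate distorts the commanded camera twist.

```python
import numpy as np

# Textbook IBVS for normalised image points: each 2x6 block of the
# interaction matrix L depends on the feature depth Z, so errors in Z
# directly distort the computed camera velocity. Point data are made up.

def interaction_matrix(x, y, Z):
    """2x6 interaction matrix of a normalised image point (x, y) at depth Z."""
    return np.array([
        [-1/Z,    0, x/Z, x*y,     -(1 + x*x),  y],
        [   0, -1/Z, y/Z, 1 + y*y, -x*y,       -x],
    ])

def ibvs_velocity(pts, pts_goal, depths, lam=0.5):
    """Camera twist (vx, vy, vz, wx, wy, wz) from current/goal features."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(pts, depths)])
    e = (np.asarray(pts) - np.asarray(pts_goal)).ravel()
    return -lam * np.linalg.pinv(L) @ e

pts      = [(0.10, 0.05), (-0.12, 0.07), (0.08, -0.11), (-0.09, -0.06)]
pts_goal = [(0.05, 0.05), (-0.05, 0.05), (0.05, -0.05), (-0.05, -0.05)]
v_good = ibvs_velocity(pts, pts_goal, depths=[1.0] * 4)
v_bad  = ibvs_velocity(pts, pts_goal, depths=[2.0] * 4)  # wrong depth
print("twist with Z=1:", v_good.round(3))
print("twist with Z=2:", v_bad.round(3))   # distorted translational part
```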

    New editing techniques for video post-processing

    This thesis contributes techniques for capturing 3-D cloth shape, editing cloth texture, and altering object shape and motion in multi-camera and monocular video recordings. We propose a technique to capture cloth shape from 3-D scene flow by determining optical flow in several camera views; together with a silhouette-matching constraint, this lets us track and reconstruct cloth surfaces in long video sequences. In the area of garment motion capture, we present a system that reconstructs time-coherent triangle meshes from multi-view video recordings; texture mapping of the acquired meshes is used to replace the recorded texture with new cloth patterns. We extend this work to the more challenging single-camera case, where extracting texture deformation and shading effects simultaneously enables texture-replacement effects for garments in monocular video recordings. Finally, we propose a system for keyframe editing of video objects: a color-based segmentation algorithm, together with automatic video inpainting for filling in missing background texture, allows us to edit the shape and motion of 2-D video objects. We present examples of altering object trajectories, applying non-rigid deformations, and simulating camera motion.
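
    The cloth-capture pipeline builds 3-D scene flow from dense optical flow computed in several views. As a building block, the sketch below computes dense 2-D optical flow between two synthetic frames with OpenCV's Farneback method; the multi-view triangulation into scene flow and the silhouette constraint are beyond this snippet, and the synthetic frames merely stand in for real footage.

```python
import cv2
import numpy as np

# Dense 2-D optical flow between two frames, as one building block of a
# multi-view scene-flow pipeline. A blurred random texture shifted by a
# few pixels stands in for two video frames of moving cloth.
rng = np.random.default_rng(0)
prev = (rng.random((240, 320)) * 255).astype(np.uint8)
prev = cv2.GaussianBlur(prev, (7, 7), 0)          # give the texture structure
curr = np.roll(prev, shift=(2, 3), axis=(0, 1))   # move 3 px right, 2 px down

# flow[v, u] = (du, dv): where pixel (u, v) of `prev` moved to in `curr`
flow = cv2.calcOpticalFlowFarneback(
    prev, curr, None,
    pyr_scale=0.5, levels=3, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

print("median flow (du, dv):", np.median(flow.reshape(-1, 2), axis=0))
```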

    The mechanics of continuum robots: model-based sensing and control

    Tangible auditory interfaces : combining auditory displays and tangible interfaces

    Bovermann T. Tangible auditory interfaces: combining auditory displays and tangible interfaces. Bielefeld (Germany): Bielefeld University; 2009.
    Tangible Auditory Interfaces (TAIs) investigate the capabilities of interconnecting Tangible User Interfaces and Auditory Displays. TAIs utilise artificial physical objects as well as soundscapes to represent digital information. The interconnection of the two fields establishes a tight coupling between information and operation that builds on the user's familiarity with the incorporated interrelations. This work gives a formal introduction to TAIs and demonstrates their key features by means of seven proof-of-concept applications.