
    Vision-Based Control of the Robotenis System

    In this paper, a visual servoing architecture based on a parallel robot for tracking fast-moving objects with unknown trajectories is proposed. The control strategy is based on prediction of the future position and velocity of the moving object, and the predictive control law is synthesised to compensate for the delay introduced by the vision system. Experiments demonstrate that the high-speed parallel robot system performs well when implementing visual control strategies with demanding timing requirements.
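
    The delay-compensation idea can be illustrated with a minimal sketch (not the authors' implementation): predict the target one vision-latency ahead with a constant-velocity model, so the controller acts on where the object will be rather than where it was last observed. The 40 ms latency and the numbers below are hypothetical.

```python
import numpy as np

def compensate_vision_delay(position, velocity, latency):
    """Predict where the target will be after the vision-system latency,
    assuming (for illustration) constant velocity over that interval."""
    return position + velocity * latency

# Hypothetical example: object at 0.5 m with 2 m/s velocity, 40 ms image latency
p = np.array([0.50, 0.10, 0.30])   # last measured position [m]
v = np.array([2.00, 0.00, -0.50])  # estimated velocity [m/s]
print(compensate_vision_delay(p, v, latency=0.040))
```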

    Aspects of an open architecture robot controller and its integration with a stereo vision sensor.

    The work presented in this thesis attempts to improve the performance of industrial robot systems in a flexible manufacturing environment by addressing a number of issues related to external sensory feedback and sensor integration, robot kinematic positioning accuracy, and robot dynamic control performance. To provide a powerful control algorithm environment and support for external sensor integration, a transputer-based open architecture robot controller is developed. It features high computational power, user accessibility at various robot control levels, and external sensor integration capability. Additionally, an on-line trajectory adaptation scheme is devised and implemented in the open architecture robot controller, enabling real-time alteration of robot motion in response to external sensory feedback. An in-depth discussion is presented on integrating a stereo vision sensor with the robot controller to perform sensor-guided robot operations. Key issues for such a vision-based robot system are precise synchronisation between the vision system and the robot controller, and correct target position prediction to counteract the inherent time delay of image processing. These were successfully addressed in a demonstrator system based on a Puma robot. Efforts have also been made to improve the Puma robot's kinematic and dynamic performance. A simple, effective, on-line algorithm is developed for solving the inverse kinematics problem of a calibrated industrial robot to improve positioning accuracy. On the dynamic control side, a robust adaptive tracking control algorithm is derived that outperforms a conventional PID controller while exhibiting relatively modest computational complexity. Experiments have been carried out to validate the open architecture robot controller and demonstrate the performance of the inverse kinematics algorithm, the adaptive servo control algorithm, and the on-line trajectory generation. By integrating the open architecture robot controller with a stereo vision sensor system, robot visual guidance has been achieved, with experimental results showing that the integrated system is capable of detecting, tracking and intercepting random objects moving along 3D trajectories at velocities of up to 40 mm/s.
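
    The on-line trajectory adaptation scheme described above can be sketched in a few lines (illustrative only, not the thesis implementation): each control cycle, the nominal set-point is nudged toward the offset reported by the stereo vision sensor, with the per-cycle correction bounded to keep the motion smooth. The gain and step limit below are assumed values.

```python
import numpy as np

def adapt_trajectory(nominal_point, sensed_offset, gain=0.2, max_step=0.005):
    """One control-cycle update of an on-line trajectory adaptation scheme:
    shift the nominal set-point toward the sensed target offset, limiting the
    per-cycle correction (in metres) so the altered motion stays smooth."""
    step = gain * sensed_offset
    norm = np.linalg.norm(step)
    if norm > max_step:
        step *= max_step / norm
    return nominal_point + step

# Hypothetical example: 1 mm lateral offset reported by the stereo vision sensor
print(adapt_trajectory(np.array([0.4, 0.0, 0.2]), np.array([0.001, 0.0, 0.0])))
```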

    Hidden Markov Model for Visual Guidance of Robot Motion in Dynamic Environment

    Models and control strategies for dynamic obstacle avoidance in visual guidance of a mobile robot are presented. Characteristics that distinguish the visual computation and motion-control requirements in dynamic environments from those in static environments are discussed. The objectives of vision and motion planning are formulated as: 1) finding a collision-free trajectory that takes account of any possible motions of obstacles in the local environment; 2) ensuring that such a trajectory is consistent with a global goal or plan of the motion; and 3) moving the robot at as high a speed as possible, subject to its kinematic constraints. A stochastic motion-control algorithm based on a hidden Markov model (HMM) is developed. Obstacle motion prediction applies a probabilistic evaluation scheme, and motion planning of the robot implements a trajectory-guided parallel-search strategy in accordance with the obstacle motion prediction models. The approach simplifies the control process of robot motion.
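
    A toy version of the probabilistic obstacle-motion evaluation (not the paper's HMM, whose states and transition probabilities are not given here) propagates a belief over discrete motion states through an assumed transition matrix; the resulting distribution is what a trajectory-guided search could use to rank candidate paths.

```python
import numpy as np

# Assumed discrete obstacle-motion states and Markov transition matrix
# (rows = current state, columns = next state); values are illustrative.
STATES = ["stopped", "moving_left", "moving_right", "moving_toward_robot"]
T = np.array([[0.70, 0.10, 0.10, 0.10],
              [0.10, 0.70, 0.05, 0.15],
              [0.10, 0.05, 0.70, 0.15],
              [0.20, 0.10, 0.10, 0.60]])

def predict_motion(belief, steps=1):
    """Propagate the belief over obstacle-motion states `steps` time steps
    ahead; this distribution is the probabilistic evaluation of future motion."""
    for _ in range(steps):
        belief = belief @ T
    return belief

belief = np.array([0.0, 0.0, 0.0, 1.0])  # obstacle currently moving toward robot
print(dict(zip(STATES, predict_motion(belief, steps=3).round(3))))
```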

    KALMAN FILTER AND NARX NEURAL NETWORK FOR ROBOT VISION BASED HUMAN TRACKING

    Tracking humans is an important and challenging problem in video-based intelligent robot systems. In this paper, a vision-based human tracking system is intended to provide sensor input for vision-based control of a mobile robot that works in a team helping a human co-worker. A comparison between a NARX neural network and a Kalman filter in solving the prediction problem of human tracking in robot vision is presented. After collecting video data from a robot, simulation results obtained from the Kalman filter model are compared with simulation results obtained from the NARX neural network. Key words: robot vision, Kalman filter, neural networks, human tracking.
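
    As a rough sketch of the Kalman-filter side of the comparison (assuming a constant-velocity model and made-up noise parameters, not the paper's exact setup), the filter alternates predict and update steps on the measured human position and then extrapolates one step ahead:

```python
import numpy as np

# Minimal constant-velocity Kalman filter for a 1-D tracked coordinate.
dt = 0.1
F = np.array([[1, dt], [0, 1]])        # state transition (position, velocity)
H = np.array([[1, 0]])                 # only position is measured
Q = np.eye(2) * 1e-3                   # assumed process noise
R = np.array([[0.5]])                  # assumed measurement noise

x = np.array([[0.0], [0.0]])           # initial state estimate
P = np.eye(2)                          # initial covariance

def kalman_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the new measurement z
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

for z in [1.0, 2.1, 2.9, 4.2]:         # hypothetical noisy positions over time
    x, P = kalman_step(x, P, np.array([[z]]))
print("predicted next position:", float((F @ x)[0, 0]))
```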

    An NMPC-ECBF Framework for Dynamic Motion Planning and Execution in vision-based Human-Robot Collaboration

    To enable safe and effective human-robot collaboration (HRC) in smart manufacturing, seamless integration of sensing, cognition, and prediction into the robot controller is critical for real-time awareness, response, and communication inside a heterogeneous environment (robots, humans, and equipment). The proposed approach takes advantage of the prediction capabilities of nonlinear model predictive control (NMPC) to execute safe path planning based on feedback from a vision system. In order to satisfy the requirement of real-time path planning, an embedded solver based on a penalty method is applied. However, due to tight sampling times, NMPC solutions are approximate, and hence the safety of the system cannot be guaranteed. To address this, we formulate a novel safety-critical paradigm with an exponential control barrier function (ECBF) used as a safety filter. We also design a simple human-robot collaboration scenario in V-REP to evaluate the performance of the proposed controller and investigate whether integrating human pose prediction can help with safe and efficient collaboration. The robot uses OptiTrack cameras for perception and dynamically generates collision-free trajectories to the predicted target interaction position. Results for a number of different configurations confirm the efficiency of the proposed motion planning and execution framework, yielding a 19.8% reduction in execution time for the HRC task considered.
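
    The safety-filter idea can be illustrated with a toy one-dimensional control barrier function (the paper uses an exponential CBF on the full robot dynamics; this first-order analogue with hypothetical distances and gains only shows how a nominal NMPC command gets clipped when the barrier tightens):

```python
# Toy 1-D CBF safety filter: the barrier h stays positive while the robot keeps
# at least d_min clearance from the human; the CBF condition dh/dt >= -alpha*h
# caps the forward velocity command produced by the (hypothetical) NMPC.
def safety_filter(u_nominal, robot_x, human_x, d_min=0.3, alpha=2.0):
    h = (human_x - robot_x) - d_min        # barrier value: positive while safe
    u_max = alpha * h                      # largest forward velocity still safe
    return min(u_nominal, u_max)           # clip the nominal command if needed

print(safety_filter(u_nominal=1.0, robot_x=0.0, human_x=0.5))   # clipped
print(safety_filter(u_nominal=0.1, robot_x=0.0, human_x=2.0))   # unchanged
```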

    Model-Based Visual Servoing Grasping of Objects Moving by Newtonian Dynamics

    Robot control systems are traditionally closed systems. With the aid of vision, visual feedback is used to guide the robot manipulator to the target in a manner similar to the way humans do. This hand-to-target task is fairly easy if the target is static in Cartesian space. However, if the target is moving, a model of this dynamical behaviour is required in order for the robot to predict or track the target trajectory and intercept the target successfully. Once the necessary modeling is done, the framework becomes one of automatic control.

    In this master thesis, we present a model-based visual servoing of a six degree-of-freedom (DOF) industrial robot in computer simulation. The objective of this thesis is to manoeuvre the robot to grasp a ball moving by Newtonian dynamics in an unattended and less structured three-dimensional environment.

    Two digital cameras are used cooperatively to capture images of the ball, from which the computer vision system generates the visual information. The accuracy of this visual information is essential to the robotic servoing control. The computer vision system detects the ball in image space, segments it from the background and computes its image-space position as visual information, which is then used for 3D reconstruction of the ball in Cartesian space. The trajectory of the thrown ball is modeled and predicted, and several candidate grasp positions in Cartesian space are predicted as the thrown ball travels towards the robot. At the same time, the inverse kinematics of the robot is computed to steer the robot to track the predicted grasp positions and grasp the ball when the error is small. In addition, the performance and robustness of this model-based prediction of the ball trajectory is verified with graphical analysis.
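
    The ball-trajectory prediction step lends itself to a short sketch (drag ignored, with hypothetical release conditions and a made-up spherical reach around the robot base): integrate the Newtonian free-flight model forward and keep the predicted points the robot could actually reach as candidate grasp positions.

```python
import numpy as np

G = np.array([0.0, 0.0, -9.81])  # gravitational acceleration [m/s^2]

def predict_ball(p0, v0, t):
    """Position of a ball in free flight (Newtonian dynamics, drag ignored)."""
    return p0 + v0 * t + 0.5 * G * t**2

def grasp_candidates(p0, v0, horizon=1.0, dt=0.02, reach=0.8):
    """Sample the predicted trajectory and keep the points that fall inside a
    hypothetical spherical workspace of radius `reach` around the robot base."""
    times = np.arange(0.0, horizon, dt)
    points = np.array([predict_ball(p0, v0, t) for t in times])
    inside = np.linalg.norm(points, axis=1) <= reach
    return points[inside]

p0 = np.array([2.0, 0.0, 1.2])    # ball released 2 m away, 1.2 m high (assumed)
v0 = np.array([-3.0, 0.0, 1.0])   # thrown toward the robot (assumed)
print(grasp_candidates(p0, v0)[:3])
```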

    Action-conditioned Deep Visual Prediction with RoAM, a new Indoor Human Motion Dataset for Autonomous Robots

    With the increasing adoption of robots across industries, it is crucial to focus on developing advanced algorithms that enable robots to anticipate, comprehend, and plan their actions effectively in collaboration with humans. We introduce the Robot Autonomous Motion (RoAM) video dataset, which is collected with a custom-made TurtleBot3 Burger robot in a variety of indoor environments, recording various human motions from the robot's ego-vision. The dataset also includes synchronized records of the LiDAR scan and all control actions taken by the robot as it navigates around static and moving human agents. This unique dataset provides an opportunity to develop and benchmark new visual prediction frameworks that can predict future image frames based on the actions taken by the recording agent, in partially observable scenarios or cases where the imaging sensor is mounted on a moving platform. We have benchmarked the dataset on our novel deep visual prediction framework, ACPNet, where the approximated future image frames are also conditioned on the actions taken by the robot, and demonstrated its potential for incorporating robot dynamics into the video prediction paradigm for mobile robotics and autonomous navigation research.
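
    A minimal action-conditioned predictor, written here as a toy PyTorch model rather than ACPNet itself, shows the conditioning idea: encode the current ego-vision frame, inject the robot's control action into the latent representation, and decode a predicted next frame. The layer sizes and the two-dimensional action (e.g. linear and angular velocity) are assumptions.

```python
import torch
import torch.nn as nn

class ActionConditionedPredictor(nn.Module):
    """Toy action-conditioned frame predictor: the predicted next frame depends
    on both the current image and the control action taken by the robot."""
    def __init__(self, action_dim=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU())
        self.action_fc = nn.Linear(action_dim, 32)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, frame, action):
        z = self.encoder(frame)                        # B x 32 x H/4 x W/4
        a = self.action_fc(action)[:, :, None, None]   # broadcast action code
        return self.decoder(z + a)                     # predicted next frame

frame = torch.rand(1, 3, 64, 64)        # ego-vision image (hypothetical size)
action = torch.tensor([[0.2, -0.1]])    # e.g. linear and angular velocity
print(ActionConditionedPredictor()(frame, action).shape)  # (1, 3, 64, 64)
```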

    Learning Robot Activities from First-Person Human Videos Using Convolutional Future Regression

    We design a new approach that allows robot learning of new activities from unlabeled human example videos. Given videos of humans executing the same activity from a human's viewpoint (i.e., first-person videos), our objective is to make the robot learn the temporal structure of the activity as its future regression network, and learn to transfer such a model for its own motor execution. We present a new deep learning model: we extend the state-of-the-art convolutional object detection network for the representation/estimation of human hands in training videos, and newly introduce the concept of using a fully convolutional network to regress (i.e., predict) the intermediate scene representation corresponding to a future frame (e.g., 1-2 seconds later). Combining these allows direct prediction of future locations of human hands and objects, which enables the robot to infer the motor control plan using our manipulation network. We experimentally confirm that our approach makes learning of robot activities from unlabeled human interaction videos possible, and demonstrate that our robot is able to execute the learned collaborative activities in real time directly based on its camera input.
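
    The future-regression component can be sketched as a small fully convolutional head (illustrative only; the paper's network, feature dimensions, and prediction horizon are not reproduced here) that maps the current intermediate scene representation to the representation expected one to two seconds later:

```python
import torch
import torch.nn as nn

# Illustrative fully convolutional "future regression" head: it regresses the
# scene representation of a future frame from the representation of the current
# frame, from which future hand/object locations could then be decoded.
future_regressor = nn.Sequential(
    nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(),
    nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(),
    nn.Conv2d(256, 256, 1))

current_repr = torch.rand(1, 256, 14, 14)   # hypothetical detector feature map
future_repr = future_regressor(current_repr)
print(future_repr.shape)                    # torch.Size([1, 256, 14, 14])
```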