68,839 research outputs found

    Hidden Markov Model for Visual Guidance of Robot Motion in Dynamic Environment

    Models and control strategies for dynamic obstacle avoidance in the visual guidance of a mobile robot are presented. Characteristics that distinguish the visual computation and motion-control requirements in dynamic environments from those in static environments are discussed. The objectives of vision and motion planning are formulated as: 1) finding a collision-free trajectory that accounts for any possible motions of obstacles in the local environment; 2) keeping that trajectory consistent with a global goal or plan of the motion; and 3) moving the robot at as high a speed as possible, subject to its kinematic constraints. A stochastic motion-control algorithm based on a hidden Markov model (HMM) is developed. Obstacle motion prediction applies a probabilistic evaluation scheme, and motion planning of the robot implements a trajectory-guided parallel-search strategy in accordance with the obstacle motion prediction models. The approach simplifies the control process of robot motion.
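    The core HMM machinery the abstract describes can be illustrated with a forward-algorithm update over discrete obstacle motion modes. This is a minimal sketch, not the paper's model: the three states, the transition matrix, and the observation model below are all illustrative assumptions.

    ```python
    # Sketch of HMM-based obstacle motion prediction (illustrative numbers).
    STATES = ["left", "straight", "right"]

    # Transition model P(next motion mode | current motion mode) -- assumed.
    TRANS = {
        "left":     {"left": 0.6, "straight": 0.3, "right": 0.1},
        "straight": {"left": 0.2, "straight": 0.6, "right": 0.2},
        "right":    {"left": 0.1, "straight": 0.3, "right": 0.6},
    }

    # Observation model P(observed heading change | hidden motion mode) -- assumed.
    EMIT = {
        "left":     {"neg": 0.7, "zero": 0.2,  "pos": 0.1},
        "straight": {"neg": 0.15, "zero": 0.7, "pos": 0.15},
        "right":    {"neg": 0.1, "zero": 0.2,  "pos": 0.7},
    }

    def forward_step(belief, obs):
        """One HMM forward update: predict with TRANS, correct with EMIT."""
        predicted = {s: sum(belief[p] * TRANS[p][s] for p in STATES) for s in STATES}
        unnorm = {s: predicted[s] * EMIT[s][obs] for s in STATES}
        z = sum(unnorm.values())
        return {s: unnorm[s] / z for s in STATES}

    belief = {s: 1 / 3 for s in STATES}    # uniform prior over motion modes
    for obs in ["zero", "pos", "pos"]:     # observed heading-change sequence
        belief = forward_step(belief, obs)

    likely = max(belief, key=belief.get)   # most probable obstacle motion mode
    ```

    A planner could then search candidate robot trajectories against the predicted state distribution, penalising those that intersect the obstacle's likely path.
    
    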

    Implementation of Vision Based Robot Navigation System in Dynamic Environment

    This paper proposes an implementation of robot navigation in a dynamic environment using a vision-based approach. Vision-based robot navigation has been a fundamental goal in both robotics and computer vision research. In a visual-guideline-based navigation system, the motion instructions required to control the robot can be inferred directly from the acquired images. In this work, the algorithm is designed for an intelligent robot placed in an unknown environment. The robot detects signs in captured images using feature-based extraction and moves according to those signs. It is also able to avoid obstacles encountered along its way. The robot successfully detects different signs, such as right, left and stop, from an image. DOI: 10.17762/ijritcc2321-8169.15065
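    The sign-classification step can be caricatured with a toy rule on an already-segmented binary mask. The paper's actual feature-based extraction is not described in the abstract; the heuristic below (comparing pixel mass in the left and right halves of the mask) is purely an assumed stand-in.

    ```python
    # Toy direction-sign classifier on a small binary mask (1 = sign pixel).
    # Assumption: the sign has already been segmented from the camera image.

    def classify_sign(mask):
        """Guess 'left' / 'right' / 'stop' from the pixel-mass distribution."""
        w = len(mask[0])
        left = sum(v for row in mask for v in row[: w // 2])
        right = sum(v for row in mask for v in row[w - w // 2:])
        total = left + right
        if total == 0:
            return None                       # no sign pixels found
        if abs(left - right) / total < 0.2:   # roughly symmetric -> stop sign
            return "stop"
        return "left" if left > right else "right"
    ```

    A real implementation would of course use proper feature extraction (e.g. shape descriptors or template matching) rather than this mass heuristic.
    
    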

    Visual servoing with nonlinear observer

    A visual servo system is a robot control system that incorporates a vision sensor in the feedback loop. Since the robot controller is also in the visual servo loop, compensation of the robot dynamics is important for high-speed tasks. Moreover, estimation of the object motion is necessary for real-time tracking, because the visual information includes considerable delay. This paper proposes a nonlinear model-based controller and a nonlinear observer for visual servoing. The observer estimates the object motion, and the nonlinear controller makes the closed-loop system asymptotically stable based on the estimated object motion. The effectiveness of the observer-based controller is verified by simulations and experiments on a two-link planar direct-drive robot.
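    The role of the observer can be illustrated with a much simpler linear stand-in: a discrete-time constant-velocity observer that recovers a target's position and velocity from position measurements alone. The gains and time step below are assumptions for the sketch, not values from the paper, and the paper's observer is nonlinear rather than this Luenberger-style form.

    ```python
    # Discrete constant-velocity observer: a linear stand-in for the paper's
    # nonlinear object-motion observer. Gains and time step are illustrative.
    DT = 0.01        # sample period (s), assumed
    L1, L2 = 0.5, 5.0  # observer gains, assumed (chosen for stable error dynamics)

    def observer_step(pos_hat, vel_hat, y):
        """One observer update from a position measurement y."""
        innov = y - pos_hat                       # innovation (measurement residual)
        pos_new = pos_hat + DT * vel_hat + L1 * innov
        vel_new = vel_hat + L2 * innov
        return pos_new, vel_new

    # Track a target moving at 1.0 unit/s, starting from a wrong estimate.
    pos_hat, vel_hat = 0.0, 0.0
    for k in range(2000):
        y = 1.0 * k * DT                          # true target position (noise-free)
        pos_hat, vel_hat = observer_step(pos_hat, vel_hat, y)
    ```

    The estimated velocity converges to the true 1.0 unit/s; a controller can then servo on the estimate instead of the delayed raw measurement.
    
    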

    Use of 3D vision for fine robot motion

    Integrating 3-D vision systems with robot manipulators will allow robots to operate in a poorly structured environment by visually locating targets and obstacles. However, using computer vision for object acquisition makes the problem of overall system calibration even more difficult. Indeed, in CAD-based manipulation a control architecture has to find an accurate mapping between the 3-D Euclidean workspace and the robot configuration space (joint angles). If stereo vision is involved, then one needs to map a pair of 2-D video images directly into the robot configuration space. Neural-network approaches aside, a common solution to this problem is to calibrate vision and manipulator independently, and then tie them together via a common mapping into the task space. In other words, both vision and robot refer to some common absolute Euclidean coordinate frame via their individual mappings. This approach has two major difficulties. First, the vision system has to be calibrated over the total workspace. Second, the absolute frame, which is usually quite arbitrary, has to be the same, with a high degree of precision, for both the robot and vision subsystem calibrations. The use of computer vision to allow robust fine-motion manipulation in a poorly structured world, which is currently in progress, is described along with preliminary results and the problems encountered.
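    The image-to-Euclidean mapping whose calibration the abstract discusses reduces, in the ideal rectified-stereo case, to textbook triangulation. This sketch assumes an idealised pinhole pair with known focal length and baseline; the constants are made up for illustration.

    ```python
    # Ideal rectified-stereo triangulation: maps a matched pixel pair from the
    # left/right images to a 3-D point in the left-camera frame.
    FOCAL = 500.0    # focal length in pixels (assumed)
    BASELINE = 0.1   # camera separation in metres (assumed)

    def triangulate(xl, xr, y):
        """(xl, y) and (xr, y) are the matched pixel coordinates."""
        disparity = xl - xr
        if disparity <= 0:
            raise ValueError("point at infinity or bad correspondence")
        z = FOCAL * BASELINE / disparity   # depth along the optical axis
        x = xl * z / FOCAL                 # lateral offset
        y3 = y * z / FOCAL                 # vertical offset
        return (x, y3, z)
    ```

    In practice the abstract's point is precisely that FOCAL, BASELINE and the camera-to-robot transform must all be calibrated consistently, which is where the two difficulties it lists arise.
    
    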

    Comparative Study of Computer Vision Based Line Followers Using Raspberry Pi and Jetson Nano

    The line follower robot is a mobile robot that navigates from one place to another by following a trajectory, generally in the form of black or white lines. Such a robot can also assist humans in transportation and industrial automation tasks. However, it faces several challenges regarding calibration, incompatibility with wavy surfaces, and light-sensor placement due to line-width variation. Robot vision uses image processing and computer vision technology to recognize objects and control the robot's motion. This study discusses the implementation of a vision-based line follower robot using a camera as the only sensor for capturing objects. A comparison of robot performance with two different CPU controllers, the Raspberry Pi and the Jetson Nano, is made. The image processing uses an edge-detection method that detects the border between two image areas and marks the different parts. This method enables the robot to control its motion based on the objects captured by the webcam. The results show that the accuracies of the robot employing the Raspberry Pi and Jetson Nano are 96% and 98%, respectively.
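    The control signal for such a robot can be sketched on a single image scanline: locate the dark line, then steer proportionally to its offset from the image centre. This is a simplified threshold-based stand-in for the study's edge-detection method, with an assumed 8-bit grayscale input.

    ```python
    # One-scanline line-follower sketch (threshold stand-in for edge detection).
    # Assumes an 8-bit grayscale row where the line is darker than the floor.

    def line_center(scanline, threshold=128):
        """Return the centre column of the dark line in one image row."""
        dark = [c for c, v in enumerate(scanline) if v < threshold]
        if not dark:
            return None                       # line lost
        return (dark[0] + dark[-1]) / 2       # midpoint of the two borders

    def steering_error(scanline):
        """Signed offset of the line from image centre (+ means line is right)."""
        centre = line_center(scanline)
        if centre is None:
            return None
        return centre - (len(scanline) - 1) / 2
    ```

    Feeding `steering_error` into a proportional controller on the wheel speeds is the usual closing of the loop; the relative throughput of the Raspberry Pi and Jetson Nano then determines how fast this loop can run.
    
    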

    Motion Planning from Demonstrations and Polynomial Optimization for Visual Servoing Applications

    Vision feedback control techniques are desirable for a wide range of robotics applications due to their robustness to image noise and modeling errors. However, in the case of a robot-mounted camera, they encounter difficulties when the camera traverses large displacements. This scenario necessitates continuous visual target feedback during the robot motion, while simultaneously considering the robot's self- and external constraints. Herein, we propose to combine workspace (Cartesian-space) path planning with robot teach-by-demonstration to address the visibility constraint, joint limits and “whole arm” collision avoidance for vision-based control of a robot manipulator. User demonstration data generates safe regions for robot motion with respect to joint limits and potential “whole arm” collisions. Our algorithm uses these safe regions to generate new feasible trajectories under a visibility constraint that achieve the desired view of the target (e.g., a pre-grasping location) in new, undemonstrated locations. Experiments with a 7-DOF articulated arm validate the proposed method.
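    A minimal building block of polynomial trajectory generation is a cubic Hermite segment that interpolates boundary positions and velocities; optimisers of the kind the title mentions typically stitch and tune such segments. The segment below is a generic sketch, not the paper's formulation, and uses a normalised time interval [0, 1].

    ```python
    # Cubic Hermite segment: p(t) = a + b*t + c*t^2 + d*t^3 on t in [0, 1],
    # matching given endpoint positions and velocities. Generic sketch only.

    def hermite(p0, p1, v0, v1):
        """Coefficients of the cubic through (p0, v0) at t=0 and (p1, v1) at t=1."""
        a, b = p0, v0
        c = 3 * (p1 - p0) - 2 * v0 - v1
        d = -2 * (p1 - p0) + v0 + v1
        return a, b, c, d

    def evaluate(coeffs, t):
        a, b, c, d = coeffs
        return a + b * t + c * t * t + d * t * t * t
    ```

    In a planner like the one described, the free parameters of such polynomials would be optimised so the whole trajectory stays inside the demonstrated safe regions while keeping the target in view.
    
    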

    Navigation system for a mobile robot incorporating trinocular vision for range imaging

    This research focuses on the development of software for the navigation of a mobile robot. The software developed to control the robot uses sensory data obtained from ultrasound, infrared and tactile sensors, along with depth maps from trinocular vision. Robot navigation programs were written and tested in a simulated environment as well as in the real world. Data from the various sensors were read and successfully utilized in controlling the robot's motion. Software was developed to obtain the range and bearing of the closest obstacle in sight using the trinocular vision system. An operator-supervised navigation system was also developed that enabled navigation of the robot based on inference from the camera images.
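    The range-and-bearing extraction step can be sketched against a single row of a depth map, assuming a known horizontal field of view. The field of view and depth conventions below are assumptions; the thesis's trinocular pipeline is not described in this abstract.

    ```python
    # Nearest-obstacle range and bearing from one depth-map row.
    # Assumes depths in metres (0 = invalid) and a known horizontal FOV.

    FOV_DEG = 60.0   # horizontal field of view in degrees (assumed)

    def nearest_obstacle(depth_row):
        """Return (range, bearing_deg) of the closest valid depth sample."""
        valid = [(d, c) for c, d in enumerate(depth_row) if d > 0]
        if not valid:
            return None                   # nothing in view
        rng, col = min(valid)             # closest sample and its column
        w = len(depth_row)
        # Map the column to an angle: centre column -> 0 deg, edges -> +/- FOV/2.
        bearing = (col - (w - 1) / 2) / (w - 1) * FOV_DEG
        return rng, bearing
    ```

    A supervising operator (or an avoidance routine) can then act on the returned range and bearing directly.
    
    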

    Aspects of an open architecture robot controller and its integration with a stereo vision sensor.

    The work presented in this thesis attempts to improve the performance of industrial robot systems in a flexible manufacturing environment by addressing a number of issues related to external sensory feedback and sensor integration, robot kinematic positioning accuracy, and robot dynamic control performance. To provide a powerful control-algorithm environment and support for external sensor integration, a transputer-based open-architecture robot controller is developed. It features high computational power, user accessibility at various robot control levels, and external sensor integration capability. Additionally, an on-line trajectory adaptation scheme is devised and implemented in the open-architecture robot controller, enabling real-time alteration of the robot motion trajectory in response to external sensory feedback. An in-depth discussion is presented on integrating a stereo vision sensor with the robot controller to perform sensor-guided robot operations. Key issues for such a vision-based robot system are precise synchronisation between the vision system and the robot controller, and correct target position prediction to counteract the inherent time delay in image processing. These were successfully addressed in a demonstrator system based on a Puma robot. Efforts have also been made to improve the Puma robot's kinematic and dynamic performance. A simple, effective, on-line algorithm is developed for solving the inverse kinematics problem of a calibrated industrial robot to improve robot positioning accuracy. On the dynamic control aspect, a robust adaptive robot tracking control algorithm is derived that has improved performance compared to a conventional PID controller while exhibiting relatively modest computational complexity.
Experiments have been carried out to validate the open-architecture robot controller and to demonstrate the performance of the inverse kinematics algorithm, the adaptive servo control algorithm, and the on-line trajectory generation. By integrating the open-architecture robot controller with a stereo vision sensor system, robot visual guidance has been achieved, with experimental results showing that the integrated system is capable of detecting, tracking and intercepting random objects moving along a 3D trajectory at velocities up to 40 mm/s.
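    The flavour of an on-line inverse kinematics routine can be shown for the simplest non-trivial case, a two-link planar arm with a closed-form solution. This is a generic textbook sketch, not the thesis's algorithm for a calibrated industrial robot; link lengths are assumed.

    ```python
    # Closed-form inverse kinematics for a two-link planar arm (elbow-down),
    # with forward kinematics for verification. Generic textbook sketch.
    import math

    L1, L2 = 1.0, 1.0   # link lengths in metres (assumed)

    def ik_2link(x, y):
        """Return joint angles (theta1, theta2) reaching (x, y), elbow-down."""
        r2 = x * x + y * y
        c2 = (r2 - L1 * L1 - L2 * L2) / (2 * L1 * L2)   # law of cosines
        if not -1.0 <= c2 <= 1.0:
            raise ValueError("target out of reach")
        theta2 = math.acos(c2)
        theta1 = math.atan2(y, x) - math.atan2(L2 * math.sin(theta2),
                                               L1 + L2 * math.cos(theta2))
        return theta1, theta2

    def fk_2link(t1, t2):
        """Forward kinematics: end-effector position for joint angles."""
        return (L1 * math.cos(t1) + L2 * math.cos(t1 + t2),
                L1 * math.sin(t1) + L2 * math.sin(t1 + t2))
    ```

    Running the forward kinematics on the solved angles recovers the commanded target, which is the basic consistency check any on-line IK solver must pass.
    
    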