
    Multi-Focal Visual Servoing Strategies

    Multi-focal vision combines two or more vision devices with different fields of view and measurement accuracies. A main advantage of this concept is the flexible allocation of these sensor resources according to the current situational and task-performance requirements. In particular, vision devices with large fields of view and low accuracies can be used…

    Topics in Machining with Industrial Robot Manipulators and Optimal Motion Control

    Two main topics are considered in this thesis: machining with industrial robot manipulators, and optimal motion control of robots and vehicles. The motivation for research on the first topic is the need for flexible and accurate production processes that employ industrial robots as their main component. The challenge is to achieve high-accuracy machining despite the strong process forces required for the task. Because of these forces, the nonlinear dynamics of the manipulator, such as joint compliance and backlash, may significantly degrade the machining accuracy of the manufactured part. In this thesis, a macro/micro-manipulator configuration is considered for the purpose of increasing the milling accuracy. In particular, a model-based control architecture is developed for control of the macro/micro-manipulator setup. The approach is validated by experimental results from extensive milling experiments in aluminium and steel. Related to the problem of high-accuracy milling is the topic of robot modeling. Two different approaches are considered: modeling of the quasi-static joint dynamics, and dynamic compliance modeling. The first problem is approached with an identification method for determining the joint stiffness and backlash; the second with gray-box identification based on subspace-identification methods. Both identification algorithms are evaluated experimentally. Finally, online state estimation is considered as a means to determine the workspace position and orientation of the robot tool. Kalman filters and Rao-Blackwellized particle filters are employed for sensor fusion of internal robot measurements and measurements from an inertial measurement unit to estimate the desired states. The approaches are fully implemented and evaluated on experimental data.

    The second part of the thesis discusses optimal motion control applied to robot manipulators and road vehicles. A control architecture for online control of a robot manipulator in high-performance path tracking is developed and evaluated in extensive simulations. The main characteristic of the control strategy is that it combines coordinated feedback control along both the tangential and transversal directions of the path; this separation is achieved in the framework of natural coordinates. One motivation for research on optimal control of road vehicles in time-critical maneuvers is the desire to develop improved vehicle-safety systems. A method for solving optimal maneuvering problems using nonlinear optimization is discussed. More specifically, the vehicle and tire modeling and the optimization formulations required to obtain useful solutions to these problems are investigated. The method is evaluated on different combinations of chassis and tire models, in maneuvers under different road conditions, and for investigation of optimal maneuvers in systems for electronic stability control. The optimization results obtained in simulations are evaluated and compared.
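
    The thesis's use of Kalman filters to fuse internal robot measurements with an inertial measurement unit suggests the general shape of such an estimator. The sketch below is not the thesis's implementation: it is a minimal one-degree-of-freedom Kalman filter, with assumed sample time and noise levels, in which an IMU acceleration drives the prediction and an encoder-derived position provides the correction.

```python
# Hypothetical 1-DOF sketch (not the thesis's estimator): fuse an IMU
# acceleration (process input) with an encoder-derived position measurement
# to estimate tool position and velocity. Rates and noise levels are assumed.
import numpy as np

dt = 0.004                                  # assumed 250 Hz sample time
F = np.array([[1.0, dt], [0.0, 1.0]])       # state transition for [pos, vel]
B = np.array([[0.5 * dt**2], [dt]])         # maps acceleration input to state
H = np.array([[1.0, 0.0]])                  # encoders observe position only
Q = 1e-6 * np.eye(2)                        # assumed process noise
R = np.array([[1e-4]])                      # assumed encoder noise variance

x = np.zeros((2, 1))                        # initial state estimate
P = np.eye(2)                               # initial covariance

def kf_step(x, P, accel_imu, pos_encoder):
    """One predict/update cycle: IMU drives the prediction, encoder corrects it."""
    # Predict with the IMU acceleration as a known input.
    x = F @ x + B * accel_imu
    P = F @ P @ F.T + Q
    # Update with the encoder-derived position measurement.
    y = np.array([[pos_encoder]]) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Example: constant 0.1 m/s^2 acceleration, noisy encoder readings.
rng = np.random.default_rng(0)
for k in range(100):
    true_pos = 0.5 * 0.1 * (k * dt) ** 2
    x, P = kf_step(x, P, 0.1, true_pos + rng.normal(0, 1e-2))
print("estimated position/velocity:", x.ravel())
```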

    Agent and object aware tracking and mapping methods for mobile manipulators

    The age of the intelligent machine is upon us. These machines exist in our factories, our warehouses, our military, our hospitals, on our roads, and on the moon. Most of them we call robots. When placed in a controlled or known environment such as an automotive factory or a distribution warehouse, they perform their given roles with exceptional efficiency, achieving far more than is within reach of a humble human being. Despite the remarkable success of intelligent machines in such domains, they have yet to make a wholehearted deployment into our homes. The missing link between the robots we have now and the robots that will soon come to our houses is perception. Perception, as we mean it here, refers to a level of understanding beyond the collection and aggregation of sensory data. Much of the available sensory information is noisy and unreliable: our homes contain many reflective surfaces, repeating textures on large flat surfaces, and many disruptive moving elements, including humans. These environments change over time, with objects frequently moving within and between rooms. This idea of change in an environment is fundamental to robotic applications, as in most cases we expect robots to be effectors of such change. We can identify two particular challenges that must be solved for robots to make the jump to less structured environments: how to manage noise and disruptive elements in observational data, and how to understand the world as a set of changeable elements (objects) that move over time within a wider environment. In this thesis we look at one possible approach to solving each of these problems. For the first challenge, we use proprioception aboard a robot with an articulated arm to handle difficult and unreliable visual data caused both by the robot and by the environment. We use sensor data aboard the robot to improve the pose tracking of a visual system when the robot moves rapidly, with high jerk, or when observing a scene with little visual variation. For the second challenge, we build a model of the world at the level of rigid objects, and relocalise them both as they change location between different sequences and as they move. We use semantics, image keypoints, and 3D geometry to register and align objects between sequences, showing how their position has moved between disparate observations.
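
    As a rough illustration of registering a rigid object between two sequences from matched 3D keypoints (this is not the method developed in the thesis), the sketch below applies the standard Kabsch/SVD rigid alignment; the keypoints, correspondences, and object motion are synthetic assumptions.

```python
# Illustrative sketch only: recover the rigid transform that relocalises an
# object between two sequences, given matched 3D keypoints (Kabsch/SVD).
import numpy as np

def rigid_align(src, dst):
    """Return R (3x3) and t (3,) such that R @ src[i] + t ~= dst[i]."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Example: an object moved by a known rotation and translation between sequences.
rng = np.random.default_rng(1)
pts_seq1 = rng.uniform(-0.2, 0.2, size=(30, 3))          # keypoints, sequence 1
angle = np.deg2rad(25.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -0.1, 0.02])
pts_seq2 = pts_seq1 @ R_true.T + t_true                   # same keypoints, moved
R_est, t_est = rigid_align(pts_seq1, pts_seq2)
print("translation error:", np.linalg.norm(t_est - t_true))
```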

    Ambiguity-Aware Multi-Object Pose Optimization for Visually-Assisted Robot Manipulation

    6D object pose estimation aims to infer the relative pose between the object and the camera using a single image or multiple images. Most works have focused on predicting the object pose without associated uncertainty under occlusion and structural ambiguity (symmetry). However, these works demand prior information about shape attributes, and this condition is hardly satisfied in reality; even asymmetric objects may appear symmetric under a viewpoint change. In addition, acquiring and fusing diverse sensor data is challenging when extending these methods to robotics applications. Tackling these limitations, we present an ambiguity-aware 6D object pose estimation network, PrimA6D++, as a generic uncertainty prediction method. The major challenges in pose estimation, such as occlusion and symmetry, can be handled in a generic manner based on the measured ambiguity of the prediction. Specifically, we devise a network that reconstructs the three rotation-axis primitive images of a target object and predicts the underlying uncertainty along each primitive axis. Leveraging the estimated uncertainty, we then optimize multi-object poses using visual measurements and camera poses by treating the problem as an object SLAM problem. The proposed method shows a significant performance improvement on the T-LESS and YCB-Video datasets. We further demonstrate real-time scene recognition capability for visually-assisted robot manipulation. Our code and supplementary materials are available at https://github.com/rpmsnu/PrimA6D. (Comment: IEEE Robotics and Automation Letters)
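
    One way to picture how predicted uncertainty can enter multi-object pose optimization, without reproducing the PrimA6D++ pipeline, is inverse-variance weighting of per-frame object estimates; the observation values, variances, and the reduction to position-only fusion below are illustrative assumptions.

```python
# Hypothetical illustration of uncertainty-weighted fusion (not PrimA6D++):
# several frames each give a noisy estimate of an object's position plus a
# predicted variance; the fused estimate down-weights ambiguous observations.
import numpy as np

def fuse_observations(obs, variances):
    """Inverse-variance weighted mean of per-frame object position estimates."""
    w = 1.0 / np.asarray(variances)            # weight = 1 / predicted variance
    w = w[:, None]
    return (w * obs).sum(axis=0) / w.sum()

rng = np.random.default_rng(2)
true_pos = np.array([0.4, -0.1, 0.9])          # object position in world frame
variances = np.array([1e-4, 1e-4, 2.5e-2])     # third view is ambiguous
obs = np.stack([true_pos + rng.normal(0, np.sqrt(v), 3) for v in variances])

naive = obs.mean(axis=0)
weighted = fuse_observations(obs, variances)
print("naive error:   ", np.linalg.norm(naive - true_pos))
print("weighted error:", np.linalg.norm(weighted - true_pos))
```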

    Modeling and Control of Flexible Link Manipulators

    Autonomous maritime navigation and offshore operations have gained wide attention with the aim of reducing operational costs and increasing reliability and safety. Offshore operations, such as wind farm inspection, sea farm cleaning, and ship mooring, could be carried out autonomously or semi-autonomously by mounting one or more long-reach robots on the ship/vessel. In addition to offshore applications, long-reach manipulators can be used in many other engineering applications such as construction automation, the aerospace industry, and space research. Some applications require the design of long and slender mechanical structures, which possess some degree of flexibility and deflection because of the material used and the length of the links. The link elasticity causes deflection, leading to problems in precise position control of the end-effector. It is therefore necessary to compensate for the deflection of the long-reach arm to fully utilize long-reach lightweight flexible manipulators. This thesis aims at presenting a unified understanding of the modeling, control, and application of long-reach flexible manipulators. State-of-the-art dynamic modeling techniques and control schemes for flexible link manipulators (FLMs) are discussed along with their merits, limitations, and challenges. The kinematics and dynamics of a planar multi-link flexible manipulator are presented. The effects of robot configuration and payload on the mode shapes and eigenfrequencies of the flexible links are discussed. A method to estimate and compensate for the static deflection of multi-link flexible manipulators under gravity is proposed and experimentally validated. The redundant degree of freedom of the planar multi-link flexible manipulator is exploited to minimize vibrations. The application of a long-reach arm in an autonomous mooring operation based on sensor fusion of camera and light detection and ranging (LiDAR) data is proposed.
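
    To make the idea of static gravity-deflection compensation concrete (this is not the thesis's multi-link method), the toy sketch below models a single flexible link as a uniform Euler-Bernoulli cantilever, predicts its sag under self-weight and a tip payload, and adds that sag to the commanded tip height as a feedforward correction; all beam parameters are assumed values.

```python
# Toy single-link sketch of gravity-deflection compensation using standard
# Euler-Bernoulli cantilever formulas. Parameters are illustrative assumptions.
import numpy as np

E = 70e9            # Young's modulus of aluminium [Pa] (assumed)
L = 2.0             # link length [m]
b, h = 0.05, 0.01   # rectangular cross-section width and thickness [m]
I = b * h**3 / 12   # second moment of area
rho = 2700.0        # density [kg/m^3]
g = 9.81

w = rho * b * h * g                 # self-weight per unit length [N/m]
payload = 1.5                       # tip payload mass [kg] (assumed)

# Tip deflection: distributed self-weight plus a point load at the tip.
delta_self = w * L**4 / (8 * E * I)
delta_load = payload * g * L**3 / (3 * E * I)
sag = delta_self + delta_load

target_tip_height = 1.20                       # desired tip height [m]
commanded_height = target_tip_height + sag     # feedforward compensation
print(f"predicted sag: {sag*1000:.1f} mm, commanded height: {commanded_height:.4f} m")
```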

    RUR53: an Unmanned Ground Vehicle for Navigation, Recognition and Manipulation

    This paper proposes RUR53: an Unmanned Ground Vehicle able to autonomously navigate through, identify, and reach areas of interest, and there recognize, localize, and manipulate work tools to perform complex manipulation tasks. The proposed contribution includes a modular software architecture in which each module solves specific sub-tasks and which can be easily enlarged to satisfy new requirements. Included indoor and outdoor tests demonstrate the capability of the proposed system to autonomously detect a target object (a panel) and precisely dock in front of it while avoiding obstacles. They show that it can autonomously recognize and manipulate target work tools (i.e., wrenches and valve stems) to accomplish complex tasks (i.e., use a wrench to rotate a valve stem). A specific case study is described in which the proposed modular architecture allows an easy switch to a semi-teleoperated mode. The paper exhaustively describes both the hardware and software setup of RUR53, its performance when tested at the 2017 Mohamed Bin Zayed International Robotics Challenge, and the lessons we learned when participating in this competition, where we ranked third in the Grand Challenge in collaboration with the Czech Technical University in Prague, the University of Pennsylvania, and the University of Lincoln (UK). (Comment: This article has been accepted for publication in Advanced Robotics, published by Taylor & Francis.)
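
    The modular architecture described above, where modules can be swapped or added (for example to switch to a semi-teleoperated mode), might be organized around a shared module interface. The sketch below is purely hypothetical and not the RUR53 codebase; all class and method names are invented for illustration.

```python
# Hypothetical sketch: a shared module interface lets the mission pipeline swap
# an autonomous manipulation module for a semi-teleoperated one without changes
# elsewhere. Not the RUR53 code; names are invented.
from abc import ABC, abstractmethod

class ManipulationModule(ABC):
    """Common interface every manipulation module implements."""
    @abstractmethod
    def execute(self, task: str) -> bool:
        ...

class AutonomousManipulation(ManipulationModule):
    def execute(self, task: str) -> bool:
        print(f"[auto] planning and executing: {task}")
        return True

class TeleoperatedManipulation(ManipulationModule):
    def execute(self, task: str) -> bool:
        print(f"[teleop] forwarding '{task}' to the human operator")
        return True

def run_mission(manipulator: ManipulationModule) -> None:
    # The rest of the pipeline is unchanged regardless of the module chosen.
    manipulator.execute("use wrench to rotate valve stem")

run_mission(AutonomousManipulation())
run_mission(TeleoperatedManipulation())   # switch modes by swapping the module
```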

    Aspects of an open architecture robot controller and its integration with a stereo vision sensor.

    The work presented in this thesis attempts to improve the performance of industrial robot systems in a flexible manufacturing environment by addressing a number of issues related to external sensory feedback and sensor integration, robot kinematic positioning accuracy, and robot dynamic control performance. To provide a powerful control-algorithm environment and support for external sensor integration, a transputer-based open architecture robot controller is developed. It features high computational power, user accessibility at various robot control levels, and external sensor integration capability. Additionally, an on-line trajectory adaptation scheme is devised and implemented in the open architecture robot controller, enabling real-time alteration of the robot motion trajectory in response to external sensory feedback. An in-depth discussion is presented on integrating a stereo vision sensor with the robot controller to perform external sensor guided robot operations. Key issues for such a vision-based robot system are precise synchronisation between the vision system and the robot controller, and correct target position prediction to counteract the inherent time delay in image processing. These were successfully addressed in a demonstrator system based on a Puma robot. Efforts have also been made to improve the Puma robot's kinematic and dynamic performance. A simple, effective, on-line algorithm is developed for solving the inverse kinematics problem of a calibrated industrial robot to improve robot positioning accuracy. On the dynamic control aspect, a robust adaptive robot tracking control algorithm is derived that has improved performance compared to a conventional PID controller while exhibiting relatively modest computational complexity. Experiments have been carried out to validate the open architecture robot controller and to demonstrate the performance of the inverse kinematics algorithm, the adaptive servo control algorithm, and the on-line trajectory generation. By integrating the open architecture robot controller with a stereo vision sensor system, robot visual guidance has been achieved, with experimental results showing that the integrated system is capable of detecting, tracking, and intercepting random objects moving along 3D trajectories at velocities up to 40 mm/s.
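
    The target-position prediction used to counteract image-processing delay can be illustrated with a simple constant-velocity extrapolation (a generic sketch, not the thesis's actual predictor); the frame period, latency, and target motion below are assumed values chosen to mirror the reported 40 mm/s scenario.

```python
# Generic sketch of vision-delay compensation: the latest stereo measurement is
# already one processing delay old, so a constant-velocity extrapolation
# predicts where the target will be when the robot acts on it. Values assumed.
import numpy as np

def predict_target(p_prev, p_curr, dt_meas, delay):
    """Extrapolate the target position forward by the vision processing delay."""
    velocity = (p_curr - p_prev) / dt_meas      # finite-difference velocity
    return p_curr + velocity * delay

dt_meas = 0.10                                  # vision frame period [s] (assumed)
delay = 0.15                                    # image processing latency [s]
v_true = np.array([0.04, 0.0, 0.0])             # target moving at 40 mm/s

p_prev = np.array([0.30, 0.10, 0.05])
p_curr = p_prev + v_true * dt_meas              # latest (already stale) measurement
p_cmd = predict_target(p_prev, p_curr, dt_meas, delay)

p_actual = p_curr + v_true * delay              # where the target really is
print("prediction error [mm]:", 1000 * np.linalg.norm(p_cmd - p_actual))
```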