
    Visual Servoing of Humanoid Dual-Arm Robot with Neural Learning Enhanced Skill Transferring Control

    This paper presents a novel combination of visual servoing (VS) control and neural network (NN) learning on a humanoid dual-arm robot. A VS control system is built using stereo vision to obtain the 3D point cloud of a target object. A least-squares-based method is proposed to reduce the stochastic error in workspace calibration. An NN controller is designed to compensate for the effect of uncertainties in payload and other parameters (both internal and external) during tracking control. In contrast to conventional NN controllers, a deterministic learning technique is utilised in this work, so that the learned neural knowledge can be reused as long as the current dynamics remain unchanged. A skill-transfer mechanism is also developed to apply the neurally learned knowledge from one arm to the other, increasing the neural learning efficiency. The tracked trajectory of the object provides the target position for the coordinated dual arms of a Baxter robot in the experimental study. The robotic implementation has demonstrated the efficiency of the developed VS control system and verified the effectiveness of the proposed NN controller with its knowledge-reuse and skill-transfer features.
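The least-squares calibration step can be illustrated with a minimal sketch (pure Python; a per-axis linear fit on invented camera/robot coordinate pairs, not the paper's exact formulation — averaging over many pairs is what suppresses the stochastic calibration error):

```python
def fit_axis(cam, rob):
    """Closed-form least squares for rob ≈ a * cam + b along one axis.

    Averaging over many point pairs suppresses the stochastic error of
    individual stereo measurements.
    """
    n = len(cam)
    mean_c = sum(cam) / n
    mean_r = sum(rob) / n
    sxx = sum((c - mean_c) ** 2 for c in cam)
    sxy = sum((c - mean_c) * (r - mean_r) for c, r in zip(cam, rob))
    a = sxy / sxx                 # scale
    b = mean_r - a * mean_c       # offset
    return a, b

# Hypothetical calibration pairs: camera-frame vs robot-frame x-coordinates.
cam_x = [0.0, 0.1, 0.2, 0.3, 0.4]
rob_x = [0.50, 0.70, 0.90, 1.10, 1.30]   # here exactly rob = 2.0*cam + 0.5
a, b = fit_axis(cam_x, rob_x)
```

In practice one such fit per axis (or a full rigid-transform estimate) maps stereo-camera coordinates into the robot workspace.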

    Neural-Learning-Based Telerobot Control with Guaranteed Performance

    © 2013 IEEE. In this paper, a neural network (NN) enhanced telerobot control system is designed and tested on a Baxter robot. Guaranteed performance of the telerobot control system is achieved at both the kinematic and dynamic levels. At the kinematic level, automatic collision avoidance is achieved by a control design that exploits joint-space redundancy, so the human operator can concentrate on the motion of the robot's end-effector without concern about possible collisions. A posture-restoration scheme, based on a simulated parallel system, is also integrated to enable the manipulator to restore its natural posture in the absence of obstacles. At the dynamic level, adaptive control using radial basis function NNs is developed to compensate for internal and external uncertainties, e.g., an unknown payload. Both the steady-state and the transient performance are guaranteed to satisfy a prescribed performance requirement. Comparative experiments have been performed to test the effectiveness and to demonstrate the guaranteed performance of the proposed methods.
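The RBF-NN compensation idea can be sketched on a toy one-dimensional regulation problem (the plant, gains and RBF centres below are invented for illustration, not the paper's setup):

```python
import math

# Gaussian RBF features over a fixed grid of centres.
CENTRES = [-2.0, -1.0, 0.0, 1.0, 2.0]

def phi(x, width=1.0):
    return [math.exp(-((x - c) / width) ** 2) for c in CENTRES]

def simulate(steps=5000, dt=0.001, k=5.0, gamma=5.0):
    """Regulate x -> 0 despite an unknown disturbance sin(x).

    Control: u = -k*e - W^T phi(x); adaptation: dW = gamma * phi(x) * e * dt.
    The disturbance stands in for an unknown payload.
    """
    x = 1.5                              # initial state (error e = x, target 0)
    w = [0.0] * len(CENTRES)
    for _ in range(steps):
        e = x
        f = phi(x)
        u = -k * e - sum(wi * fi for wi, fi in zip(w, f))
        x += (u + math.sin(x)) * dt      # plant with unmodelled term
        w = [wi + gamma * fi * e * dt for wi, fi in zip(w, f)]
    return x

final_error = abs(simulate())
```

The RBF weights absorb the unmodelled term online, so the tracking error shrinks without any explicit payload model.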

    Multi-Objective Convolutional Neural Networks for Robot Localisation and 3D Position Estimation in 2D Camera Images

    The field of collaborative robotics and human-robot interaction often focuses on predicting human behaviour while assuming that the robot setup and configuration are known. This is often the case with fixed setups, in which all sensors are fixed and calibrated in relation to the rest of the system. However, it becomes a limiting factor when the system needs to be reconfigured or moved. We present a deep learning approach that aims to solve this issue. Our method learns to identify and precisely localise the robot in 2D camera images, so a fixed setup is no longer a requirement and the camera can be moved. In addition, our approach identifies the robot type and estimates the 3D position of the robot base in the camera image, as well as the 3D positions of each of the robot joints. Learning is done with a multi-objective convolutional neural network that optimises the four aforementioned objectives simultaneously through a combined loss function. The multi-objective approach makes the system more flexible and efficient by reusing some of the same features while diversifying for each objective in the lower layers. A fully trained system shows promising results in providing an accurate mask of where the robot is located and an estimate of its base and joint positions in 3D. We compare the results to our previous approach using cascaded convolutional neural networks. Comment: Ubiquitous Robots 2018 regular paper submission
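The combined loss can be sketched as a weighted sum over the four objectives (the weights and the individual loss values below are placeholders, not the paper's trained configuration):

```python
def combined_loss(mask_loss, type_loss, base_loss, joint_loss,
                  weights=(1.0, 0.5, 1.0, 1.0)):
    """Weighted sum used to train all four objectives jointly.

    A single scalar loss lets one backward pass update the shared
    lower-layer features for every objective at once. The weights
    here are illustrative.
    """
    terms = (mask_loss, type_loss, base_loss, joint_loss)
    return sum(w * t for w, t in zip(weights, terms))

# Hypothetical per-objective losses: segmentation mask, robot type,
# 3D base position, 3D joint positions.
total = combined_loss(0.2, 0.4, 0.1, 0.3)
```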

    Development of Sensory-Motor Fusion-Based Manipulation and Grasping Control for a Robotic Hand-Eye System


    User Experience Enhanced Interface and Controller Design for Human-Robot Interaction

    Robotic technologies have developed rapidly in recent years in various fields, such as medical services, industrial manufacturing and aerospace. Despite this rapid development, how to deal effectively with uncertain environments during human-robot interaction remains unresolved. Current artificial intelligence (AI) technology does not enable robots to fulfil complex tasks without human guidance. Thus teleoperation, i.e. remote control of a robot by a human operator, is indispensable in many scenarios and is an important and useful research tool. This thesis focuses on designing a user experience (UX) enhanced robot controller and human-robot interaction interfaces that aim to provide human operators with an immersive teleoperation experience. Several works have been carried out to achieve this goal. First, to control a telerobot smoothly, a customised variable-gain control method is proposed in which the stiffness of the telerobot varies with the muscle-activation level extracted from signals collected by surface electromyography (sEMG) devices. Second, two main works are conducted to improve the user-friendliness of the interaction interfaces. One incorporates force feedback into the framework, providing operators with haptic feedback for remotely manipulating target objects; given the high cost of force sensors, a haptic force-estimation algorithm is proposed so that a force sensor is no longer needed. The other develops a visual servo control system in which a stereo camera mounted on the head of a dual-arm robot offers operators a real-time view of the working situation.
To compensate for internal and external uncertainties and accurately track the stereo camera's view angles along planned trajectories, a deterministic learning technique is utilised, which enables the learnt knowledge to be reused before the current dynamics change and thus increases the learning efficiency. Third, instead of sending commands to the telerobots via joysticks, keyboards or demonstrations, the telerobots are controlled directly by the upper-limb motion of the human operator. An algorithm is designed that uses motion signals from an inertial measurement unit (IMU) sensor to capture the human's upper-limb motion. The skeleton of the operator is detected by a Kinect V2 and then transformed and mapped into the joint positions of the controlled robot arm. In this way, the operator's upper-limb motion signals serve as reference trajectories for the telerobots. A superior neural network (NN) based trajectory controller is also designed to track the generated reference trajectory. Fourth, to further enhance the human's sense of immersion in teleoperation, virtual reality (VR) techniques are incorporated so that the operator can interact with and adjust the robots more easily and more accurately from the robot's perspective. Comparative experiments have been performed to demonstrate the effectiveness of the proposed design scheme, and tests with human subjects were carried out to evaluate the interface design.
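The variable-gain idea above can be sketched as a simple mapping from a normalised muscle-activation level to a stiffness gain (the linear form and gain range are illustrative assumptions, not the thesis's calibrated values):

```python
def stiffness_from_activation(activation, k_min=50.0, k_max=400.0):
    """Map a normalised sEMG muscle-activation level in [0, 1] to a
    controller stiffness gain (illustrative values, nominally N/m).

    A relaxed operator yields a compliant telerobot; a tense operator
    stiffens it for precise manipulation.
    """
    a = min(max(activation, 0.0), 1.0)   # clamp to the valid range
    return k_min + (k_max - k_min) * a

relaxed = stiffness_from_activation(0.0)
tense = stiffness_from_activation(1.0)
```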

    A Hierarchical Architecture for Flexible Human-Robot Collaboration

    This thesis is devoted to designing a software architecture for Human-Robot Collaboration (HRC) that enhances robots' abilities to work alongside humans. We propose FlexHRC, a hierarchical and flexible human-robot cooperation architecture specifically designed to provide collaborative robots with an extended degree of autonomy when supporting human operators in high-variability tasks. Along with FlexHRC, we introduce novel techniques for three interleaved levels, namely perception, representation and action, each aimed at addressing specific traits of human-robot cooperation tasks. The Industry 4.0 paradigm emphasizes the crucial benefits that collaborative robots could bring to the whole production process. In this context, a yet unreached enabling technology is the design of robots able to deal at all levels with humans' intrinsic variability, which is not only necessary for a comfortable working experience but also a precious capability for efficiently dealing with unexpected events. Moreover, flexible assembly of semi-finished products is one of the expected features of next-generation shop-floor lines. Currently, such flexibility rests on the shoulders of human operators, who are responsible for product variability and are therefore subject to potentially high stress levels and cognitive load when dealing with complex operations; at the same time, shop-floor operations remain very structured and well defined. Collaborative robots have been designed to transfer this burden from human operators to robots that are flexible enough to support them in high-variability tasks while those tasks unfold. As mentioned before, the FlexHRC architecture encompasses the perception, representation and action levels. The perception level relies on wearable sensors for human action recognition and on point-cloud data for perceiving objects in the scene.
The action level embraces four components: a robot execution manager that decouples action planning from robot motion planning and maps symbolic actions to the robot controller's command interface; a task-priority framework to control the robot; a differential-equation solver to simulate and evaluate the robot's behaviour on the fly; and a randomised method for robot path planning. The representation level depends on AND/OR graphs for representing and reasoning upon human-robot cooperation models online, a task manager to plan, adapt and make decisions about the robot's behaviour, and a knowledge base that stores cooperation and workspace information. We evaluated the FlexHRC functionalities against the desired application objectives in several experiments, namely a collaborative screwing task, coordinated transportation of objects in a cluttered environment, a collaborative table-assembly task, and object-positioning tasks. The main contributions of this work are: (i) the design and implementation of FlexHRC, which provides the functional requirements necessary for shop-floor assembly applications, such as task- and team-level flexibility, scalability, adaptability and safety, to name just a few; (ii) a task representation that integrates a hierarchical AND/OR graph whose online behaviour is formally specified using first-order logic; (iii) an in-the-loop, simulation-based decision-making process for collaborative robots coping with the variability of human operator actions; (iv) robot adaptation to the human's on-the-fly decisions and actions via human action recognition; and (v) robot behaviour that is predictable to the human user, thanks to the task-priority-based control framework, the introduced path planner, and the natural and intuitive communication of the robot with the human.
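The AND/OR-graph representation can be sketched with a minimal recursive evaluator (the table-assembly model and action names below are hypothetical, loosely inspired by the experiments listed above):

```python
# Each node is ("AND", children) or ("OR", children); leaves are action names.
def achieved(node, done):
    """Check whether a (simplified) AND/OR cooperation-graph node is
    achieved, given the set of completed leaf actions."""
    if isinstance(node, str):
        return node in done
    op, children = node
    results = [achieved(c, done) for c in children]
    return all(results) if op == "AND" else any(results)

# Hypothetical model: the table is assembled when all four legs are mounted,
# and each leg can be mounted by either the human or the robot.
def leg(i):
    return ("OR", [f"human_mounts_leg{i}", f"robot_mounts_leg{i}"])

table = ("AND", [leg(i) for i in range(4)])

done = {"human_mounts_leg0", "robot_mounts_leg1",
        "robot_mounts_leg2", "robot_mounts_leg3"}
finished = achieved(table, done)
```

The OR branches are what give the human freedom of choice: the robot can reason online about which remaining branch to pursue.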

    Biomimetic Manipulator Control Design for Bimanual Tasks in the Natural Environment

    As robots become more prevalent in the human environment, it is important that safe operational procedures are introduced at the same time; typical robot control methods are often very stiff, to maintain good positional tracking, but this makes contact (purposeful or accidental) with the robot dangerous. In addition, if robots are to work cooperatively with humans, natural interaction between agents will make tasks easier to perform, with less effort and learning time. Stability of the robot is particularly important in this situation, especially as outside forces are likely to affect the manipulator in a close working environment; for example, a user leaning on the arm, or a task-related disturbance at the end-effector. Recent research has discovered the mechanisms by which humans adapt their applied force and impedance during tasks. Studies have applied this adaptation to robots, with promising results showing an improvement in tracking and a reduction in effort over other adaptive methods. The basic algorithm is straightforward to implement, and allows the robot to be compliant most of the time and stiff only when required by the task. This allows the robot to work in an environment close to humans, and also suggests that it could create a natural working interaction with a human. In addition, no force sensor is needed, which means the algorithm can be implemented on almost any robot. This work develops a stable control method for bimanual robot tasks, which could also be applied to robot-human interactive tasks. A dynamic model of the Baxter robot is created and verified, and then used for controller simulations. The biomimetic control algorithm forms the basis of the controller, which is developed into a hybrid control system to improve both task-space and joint-space control when the manipulator is disturbed in the natural environment.
Fuzzy systems are implemented to remove the need for repetitive and time-consuming parameter tuning, and also allow the controller to actively improve performance during the task. Experimental simulations are performed and demonstrate that the hybrid task/joint-space controller performs better than either of its component parts under the same conditions. The fuzzy tuning method is then applied to the hybrid controller, which is shown to slightly improve performance as well as automating the gain-tuning process. In summary, a novel biomimetic hybrid controller is presented, with a fuzzy mechanism to avoid the gain-tuning process, finalised with a demonstration of task suitability in a bimanual-type situation. EPSRC
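The compliant-by-default behaviour can be sketched with a simple error-driven gain-adaptation step (the growth and forgetting constants are illustrative, not the biomimetic law's tuned parameters):

```python
def adapt_gain(k, error, alpha=100.0, gamma=0.5, dt=0.01):
    """One step of a biomimetic stiffness-adaptation rule: the gain grows
    with the squared tracking error and slowly relaxes via a forgetting
    term, so the robot is stiff only when the task demands it.
    """
    return max(0.0, k + (alpha * error ** 2 - gamma * k) * dt)

k = 10.0
k_disturbed = adapt_gain(k, error=0.5)   # large error -> gain rises
k_settled = adapt_gain(k, error=0.0)     # no error -> gain decays
```

Because stiffness decays whenever tracking is good, accidental contact meets a compliant arm; no force sensor is required, since only the tracking error drives the adaptation.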

    Path Following for Robot Manipulators Using Gyroscopic Forces

    This thesis deals with the path-following problem, the objective of which is to make the end-effector of a robot manipulator trace a desired path while maintaining a desired orientation. The fact that the pose of the end-effector is described in the task space while the control inputs are in the joint space presents difficulties for movement coordination. Typically, one needs to perform inverse kinematics in path planning and inverse dynamics in movement execution. However, the former can be ill-posed in the presence of redundancy and singularities, and the latter relies on accurate models of the manipulator system, which are often difficult to obtain. This thesis presents an alternative control scheme that is formulated directly in the task space and is free of inverse transformations. As a result, it is especially suitable for operations in a dynamic environment that may require online adjustment of the task objective. The proposed strategy uses transpose Jacobian control (or potential-energy shaping) as the base controller to ensure convergence of the end-effector pose, and adds a gyroscopic force to steer the motion. Gyroscopic forces are a special type of force that does not change the mechanical energy of the system, so their addition to the base controller does not affect the stability of the controlled mechanical system. In this thesis, we emphasize that the gyroscopic force can be used effectively to control the pose of the end-effector during motion. We start with the case where only the position of the end-effector is of interest, and extend the technique to control over both position and orientation. Simulation and experimental results using planar manipulators as well as anthropomorphic arms are presented to verify the effectiveness of the proposed controller.
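The energy-neutrality of a gyroscopic force can be checked in a few lines (a 2-D example with a constant skew-symmetric matrix; the thesis's steering term is state-dependent, so this shows only the core idea):

```python
# A gyroscopic force is F = S v with S skew-symmetric, so its power v . F
# is zero: it redirects motion without adding or removing mechanical
# energy, which is why it cannot destabilise the base controller.
def gyroscopic_force(v, s=1.0):
    """2-D example: F = S v with S = [[0, s], [-s, 0]]."""
    vx, vy = v
    return (s * vy, -s * vx)

v = (0.3, -1.2)
fx, fy = gyroscopic_force(v)
power = v[0] * fx + v[1] * fy    # v . F, zero for any skew-symmetric S
```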

    Robotic manipulators for single access surgery

    This thesis explores the development of cooperative robotic manipulators for enhancing surgical precision and patient outcomes in single-access surgery and, specifically, Transanal Endoscopic Microsurgery (TEM). During these procedures, surgeons manipulate a heavy set of instruments via a mechanical clamp inserted into the patient's body through a surgical port, resulting in imprecise movements, increased patient risk, and increased operating time. Therefore, an articulated robotic manipulator with passive joints is initially introduced, featuring built-in position and force sensors in each joint and electronic joint brakes for instant lock/release capability. The articulated-manipulator concept is further improved with motorised joints, evolving into an active tool holder. The joints allow the incorporation of advanced robotic capabilities such as ultra-lightweight gravity compensation and hands-on kinematic reconfiguration, which can optimise the placement of the tool holder in the operating theatre. Owing to the enhanced sensing capabilities, the application of the active robotic manipulator was further explored in conjunction with advanced image-guidance approaches such as endomicroscopy. Recent advances in probe-based optical imaging, such as confocal endomicroscopy, are making inroads into clinical use. However, the challenging manipulation of imaging probes hinders their practical adoption. Therefore, a combination of the fully cooperative robotic manipulator with a high-speed scanning endomicroscopy instrument is presented, simplifying the incorporation of optical-biopsy techniques into routine surgical workflows. Finally, another embodiment of a cooperative robotic manipulator is presented as an input interface to control a highly articulated robotic instrument for TEM.
This master-slave interface alleviates the drawbacks of traditional master-slave devices, e.g., the use of clutching mechanisms to compensate for the mismatch between slave and master workspaces, and the lack of intuitive manipulation feedback (e.g., joint limits) to the user. To address these drawbacks, a joint-space robotic manipulator is proposed that emulates the kinematic structure of the flexible robotic instrument under control. Open Access
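The clutching drawback mentioned above can be illustrated with a minimal 1-D master-slave indexing sketch (class and variable names are hypothetical):

```python
class ClutchedMapping:
    """Workspace indexing with a clutch: while disengaged, master motion
    is ignored; on re-engagement the offset is re-anchored so the slave
    does not jump. A simplified 1-D illustration of the mechanism whose
    need the proposed joint-space interface removes."""

    def __init__(self, master, slave):
        self.offset = master - slave
        self.engaged = True
        self.slave = slave

    def clutch(self, engaged, master):
        if engaged and not self.engaged:
            self.offset = master - self.slave   # re-anchor on engagement
        self.engaged = engaged

    def update(self, master):
        if self.engaged:
            self.slave = master - self.offset
        return self.slave

m = ClutchedMapping(master=0.0, slave=0.0)
m.update(0.4)                  # engaged: slave follows master to 0.4
m.clutch(False, master=0.4)
held = m.update(0.9)           # clutched out: slave holds at 0.4
m.clutch(True, master=0.9)
pos = m.update(1.0)            # re-engaged: master moves 0.1, slave to 0.5
```

Repeated clutch-and-reposition cycles like this are exactly the workflow interruption that a kinematically matched master avoids.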