
    Haptic Device Design and Teleoperation Control Algorithms for Mobile Manipulators

    The increasing need for teleoperated robotic systems implies, more and more often, the use as slave devices of mobile platforms (terrestrial, aerial or underwater) with integrated manipulation capabilities, provided e.g. by robotic arms with proper grasping/manipulation tools. Despite this, research on the teleoperation of robotic systems has mainly focused on the control of either fixed-base manipulators or mobile robots, not considering the integration of these two types of systems in a single device. Such combined robotic devices are usually referred to as mobile manipulators: systems composed of both a robotic manipulator and a mobile platform (on which the arm is mounted) whose purpose is to enlarge the manipulator's workspace. The combination of a mobile platform and a serial manipulator creates redundancy: a particular point in space can be reached by moving the manipulator, by moving the mobile platform, or by a combined motion of both. A synchronized motion of both devices must then be addressed. Although specific haptic devices explicitly oriented to the control of mobile manipulators need to be designed, no commercial solutions exist yet. For this reason it is often necessary to control such combined systems with traditional haptic devices not specifically oriented to the control of mobile manipulators. The research activity presented in this Ph.D. thesis focuses in the first place on the design of a teleoperation control scheme that allows the simultaneous control of both the manipulator and the mobile platform by means of a single haptic device characterized by a fixed base and an open kinematic chain. Secondly, the design of a novel cable-driven haptic device has been addressed. Investigating the use of twisted string actuation in force rendering is the most interesting challenge of the latter activity.
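The redundancy described above (the same point reachable by arm motion, base motion, or both) is commonly resolved with a weighted pseudoinverse of a stacked Jacobian; the sketch below illustrates that general idea and is not taken from the thesis — the function name, weights, and use of NumPy are assumptions:

```python
import numpy as np

def resolve_redundancy(v_ee, J_arm, J_base, w_arm=1.0, w_base=5.0):
    """Split a desired end-effector velocity between arm and base joints.

    Solves min dq^T W dq subject to [J_arm | J_base] dq = v_ee, so a
    larger weight on the base makes the arm perform most of the motion
    (illustrative values, not the thesis's actual controller).
    """
    J = np.hstack([J_arm, J_base])  # stacked Jacobian of the combined system
    W = np.diag([w_arm] * J_arm.shape[1] + [w_base] * J_base.shape[1])
    Winv = np.linalg.inv(W)
    # Weighted pseudoinverse solution of the underdetermined system
    dq = Winv @ J.T @ np.linalg.solve(J @ Winv @ J.T, v_ee)
    n_arm = J_arm.shape[1]
    return dq[:n_arm], dq[n_arm:]
```

With equal weights and identical one-DOF Jacobians, the commanded velocity is split evenly between the two devices, which shows how the weights arbitrate the redundancy.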

    Unlimited-workspace teleoperation

    Thesis (Master)--Izmir Institute of Technology, Mechanical Engineering, Izmir, 2012. Includes bibliographical references (leaves: 100-105). Text in English; abstract in Turkish and English. xiv, 109 leaves.
    Teleoperation is, in brief, operating a vehicle or a manipulator from a distance. It is used to reduce mission cost, protect humans from accidents that can occur during a mission, and perform complex missions in areas that are difficult to reach or dangerous for humans. Teleoperation is divided into two main categories, unilateral and bilateral, according to the information flow, which can be configured in either one direction (only from master to slave) or two directions (from master to slave and from slave to master). In unlimited-workspace teleoperation, one of the types of bilateral teleoperation, mobile robots are controlled by the operator and environmental information is transferred from the mobile robot to the operator. Teleoperated vehicles can be used in a variety of missions in air, on the ground and in water; therefore, different constructional types of robots can be designed for the different types of missions. This thesis aims to design and develop an unlimited-workspace teleoperation system that includes an omnidirectional mobile robot as the slave system, to be used in further research. Initially, an omnidirectional mobile robot was manufactured, and robot-operator interaction and efficient data transfer were provided through the established communication line. Wheel velocities were measured in real time by Hall-effect sensors mounted on the robot chassis and integrated into the controllers. A dynamic obstacle detection system suitable for omnidirectional mobility was developed, and two obstacle avoidance algorithms (semi-autonomous and force-reflecting) were created and tested.
    Distance information between the robot and the obstacles was collected by an array of sensors mounted on the robot. In the semi-autonomous teleoperation scenario, distance information is used to avoid obstacles autonomously; in the force-reflecting teleoperation scenario, obstacles are communicated to the user by sending back artificially created forces acting on the slave robot. The test results indicate that the obstacle avoidance performance of the developed vehicle with the two algorithms is acceptable in all test scenarios. In addition, two control models (kinematic and dynamic) were developed for the local controller of the slave robot, and the kinematic controller was supported by a gyroscope.
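The force-reflecting scheme described above, in which distances from a ring of range sensors become artificial forces sent back to the operator, can be sketched as follows; the 1/d force profile, gain, and function name are illustrative assumptions, not the thesis's actual controller:

```python
import math

def repulsive_force(sensor_readings, d_influence=1.0, gain=2.0):
    """Sum artificial repulsive forces from a ring of range sensors.

    sensor_readings: list of (distance_m, bearing_rad) pairs, one per sensor.
    Obstacles closer than d_influence push the robot (and the operator's
    haptic handle) away, with magnitude growing as the obstacle nears.
    All parameter values here are assumed for illustration.
    """
    fx = fy = 0.0
    for d, theta in sensor_readings:
        if 0.0 < d < d_influence:
            mag = gain * (1.0 / d - 1.0 / d_influence)
            fx -= mag * math.cos(theta)  # force points away from the obstacle
            fy -= mag * math.sin(theta)
    return fx, fy
```

An obstacle straight ahead produces a purely backward force, which is what the operator would feel through the haptic device in the force-reflecting scenario.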

    Migration from Teleoperation to Autonomy via Modular Sensor and Mobility Bricks

    In this thesis, the teleoperated communications of a Remotec ANDROS robot have been reverse engineered. This research has used the information acquired through the reverse engineering process to enhance the teleoperation and add intelligence to the initially teleoperated robot. The main contribution of this thesis is the implementation of the mobility brick paradigm, which enables autonomous operations using the commercial teleoperated ANDROS platform. The brick paradigm is a generalized architecture for a modular approach to robotics. This architecture and the contribution of this thesis are a paradigm shift from the proprietary commercial models that exist today. The modular system of sensor bricks integrates the transformed mobility platform and defines it as a mobility brick. In the wall-following application implemented in this work, the mobile robotic system acquires intelligence using the range sensor brick. This application illustrates a way to alleviate the burden on the human operator and delegate certain tasks to the robot. Wall following is one among several examples of giving a degree of autonomy to an essentially teleoperated robot through the Sensor Brick System. Indeed, once the proprietary robot has been altered into a mobility brick, the possibilities for autonomy are numerous and vary with different sensor bricks. The autonomous system implemented is not a fixed-application robot but rather a non-specific autonomy-capable platform. Meanwhile, the native controller and the computer-interfaced teleoperation are still available when necessary. Rather than trading off by switching from teleoperation to autonomy, this system provides the flexibility to switch between the two at the operator's command. The contributions of this thesis reside in the reverse engineering of the original robot, its upgrade to a computer-interfaced teleoperated system, the mobility brick paradigm, and the addition of autonomy capabilities.
    The application of a robot autonomously following a wall is subsequently implemented, tested and analyzed in this work. The analysis provides the programmer with information on controlling the robot and launching the autonomous function. The results are conclusive and open up the possibilities for a variety of autonomous applications for mobility platforms using modular sensor bricks.
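A minimal range-sensor wall follower in the spirit of the application above might look like the sketch below; the proportional law, setpoints, and sign convention (wall assumed on the robot's left) are assumptions for illustration, not the ANDROS implementation:

```python
def wall_follow_cmd(d_side, d_front, d_ref=0.5, v_nom=0.3, kp=1.5, d_stop=0.6):
    """Return (forward_velocity, turn_rate) to track a wall on the left.

    d_side: lateral distance to the wall; d_front: distance straight ahead.
    A simple P-controller steers to hold d_ref; if a wall looms ahead,
    the robot stops and turns away. All gains here are assumed values.
    """
    if d_front < d_stop:
        return 0.0, -1.0               # stop and rotate away from the frontal wall
    omega = kp * (d_side - d_ref)      # too far from the wall -> steer toward it
    return v_nom, omega
```

The point of the sketch is how little logic is needed once a range sensor brick supplies `d_side` and `d_front`, which is the kind of delegation the thesis describes.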

    GRASP News Volume 9, Number 1

    A report of the General Robotics and Active Sensory Perception (GRASP) Laboratory

    GRASP News: Volume 9, Number 1

    The past year at the GRASP Lab has been an exciting and productive period. As always, innovation and technical advancement arising from past research has led to unexpected questions and fertile areas for new research. New robots, new mobile platforms, new sensors and cameras, and new personnel have all contributed to the breathtaking pace of change. Perhaps the most significant change is the trend towards multi-disciplinary projects, most notably the multi-agent project (see inside for details on this, and all the other new and on-going projects). This issue of GRASP News covers the developments for the year 1992 and the first quarter of 1993.

    Haptic teleoperation of mobile manipulator systems using virtual fixtures.

    In order to make the task of controlling Mobile-Manipulator Systems (MMS) simpler, a novel command strategy that uses a single joystick is presented to replace the existing paradigm of using multiple joysticks. To improve efficiency and accuracy, virtual fixtures were implemented with the use of a haptic joystick. Instead of modeling the MMS as a single unit with three redundant degrees-of-freedom (DOF), the operator controls either the manipulator or the mobile base, with the command strategy choosing which one to move. The novel command strategy uses three modes of operation to automatically switch control between the manipulator and base. The three modes of operation are called near-target manipulation mode, off-target manipulation mode, and transportation mode. The system enters near-target manipulation mode only when close to a target of interest, and allows the operator to control the manipulator using velocity control. When the operator attempts to move the manipulator out of its workspace limits, the system temporarily enters transportation mode. When the operator moves the manipulator in a direction towards the manipulator's workspace, the system returns to near-target manipulation mode. In off-target manipulation mode, when the operator moves the manipulator to its workspace limits, the system retracts the arm near to the centre of its workspace to enter and remain in transportation mode. While in transportation mode the operator controls the base using velocity control. Two types of virtual fixtures are used: repulsive virtual fixtures and forbidden-region virtual fixtures. Repulsive virtual fixtures are present in the form of six virtual walls forming a cube at the manipulator's workspace limits. When the operator approaches a virtual wall, a repulsive force is felt pushing the operator's hand away from the workspace limits.
    The forbidden-region virtual fixtures prevent the operator from driving into obstacles by disregarding motion commands that would result in a collision. The command strategy was implemented on the Omnibot MMS, and test results show that it was successful in improving simplicity, accuracy, and efficiency when teleoperating an MMS.
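The repulsive virtual fixtures described above, six walls forming a cube at the workspace limits, can be sketched as a per-axis spring force; the stiffness, margin, and function name are illustrative assumptions rather than the Omnibot's actual parameters:

```python
def virtual_wall_force(pos, lo, hi, k=200.0, margin=0.05):
    """Spring-like repulsive force near each face of a cubic workspace.

    pos, lo, hi: 3-element sequences giving the end-effector position and
    the lower/upper workspace corners. Inside the margin of any face, a
    force proportional to the penetration pushes the operator's hand back
    toward the workspace interior (assumed stiffness and margin values).
    """
    f = [0.0, 0.0, 0.0]
    for i in range(3):
        if pos[i] < lo[i] + margin:
            f[i] = k * (lo[i] + margin - pos[i])       # push up/inward
        elif pos[i] > hi[i] - margin:
            f[i] = -k * (pos[i] - (hi[i] - margin))    # push down/inward
    return f
```

In the centre of the workspace the force is zero, so the fixture is felt only when the operator approaches a wall, matching the behaviour the abstract describes.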

    Development and evaluation of mixed reality-enhanced robotic systems for intuitive tele-manipulation and telemanufacturing tasks in hazardous conditions

    In recent years, with the rapid development of space exploration, deep-sea discovery, nuclear rehabilitation and management, and robotic-assisted medical devices, there is an urgent need for humans to interactively control robotic systems to perform increasingly precise remote operations. The value of medical telerobotic applications during the recent coronavirus pandemic has also been demonstrated and will grow in the future. This thesis investigates novel approaches to the development and evaluation of a mixed reality-enhanced telerobotic platform for intuitive remote teleoperation applications in dangerous and difficult working conditions, such as contaminated sites and undersea or extreme welding scenarios. This research aims to remove human workers from harmful working environments by equipping complex robotic systems with human intelligence and command/control via intuitive and natural human-robot interaction, including the implementation of MR techniques to improve the user's situational awareness, depth perception, and spatial cognition, which are fundamental to effective and efficient teleoperation. The proposed robotic mobile manipulation platform consists of a UR5 industrial manipulator, a 3D-printed parallel gripper, and a customized mobile base, which is envisaged to be controlled by non-skilled operators who are physically separated from the robot working space through an MR-based vision/motion mapping approach. The platform development process involved CAD/CAE/CAM and rapid prototyping techniques, such as 3D printing and laser cutting. Robot Operating System (ROS) and Unity 3D are employed in the development process to enable the embedded system to intuitively control the robotic system and to ensure immersive and natural human-robot interactive teleoperation. This research presents an integrated motion/vision retargeting scheme based on a mixed reality subspace approach for intuitive and immersive telemanipulation.
    An imitation-based, velocity-centric motion mapping is implemented via the MR subspace to accurately track operator hand movements for robot motion control, and enables spatial velocity-based control of the robot tool center point (TCP). The proposed system allows precise manipulation of end-effector position and orientation and readily adjusts the corresponding velocity of maneuvering. A mixed reality-based multi-view merging framework for immersive and intuitive telemanipulation of a complex mobile manipulator with integrated 3D/2D vision is presented. The proposed 3D immersive telerobotic schemes provide the users with depth perception through the merging of multiple 3D/2D views of the remote environment via the MR subspace. The mobile manipulator platform can be effectively controlled by non-skilled operators who are physically separated from the robot working space through a velocity-based imitative motion mapping approach. Finally, this thesis presents an integrated mixed reality and haptic feedback scheme for intuitive and immersive teleoperation of robotic welding systems. By incorporating MR technology, the user is fully immersed in a virtual operating space augmented by real-time visual feedback from the robot working space. The proposed mixed reality virtual fixture integration approach implements hybrid haptic constraints to guide the operator's hand movements along conical guidance to effectively align the welding torch for welding and to constrain the welding operation within a collision-free area. Overall, this thesis presents a complete telerobotic application that uses mixed reality and immersive elements to effectively translate the operator into the robot's space in an intuitive and natural manner. The results are thus a step forward in cost-effective and computationally efficient human-robot interaction research and technologies.
    The system presented is readily extensible to a range of potential applications beyond the robotic tele-welding and tele-manipulation tasks used to demonstrate, optimise, and prove the concepts.
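The velocity-centric motion mapping described above, where the operator's hand offset in the MR subspace commands a TCP velocity, might be sketched as below; the deadzone, gain, and clamp values are illustrative assumptions, not the thesis's tuned parameters:

```python
def hand_to_tcp_velocity(hand_pos, anchor, gain=0.8, deadzone=0.02, v_max=0.25):
    """Map a hand offset from an anchor pose to a clamped TCP velocity.

    hand_pos, anchor: 3-element positions (m). Offsets inside the deadzone
    are ignored to reject hand tremor; beyond it, velocity grows linearly
    with the offset and saturates at v_max (all values assumed here).
    """
    v = []
    for h, a in zip(hand_pos, anchor):
        off = h - a
        if abs(off) < deadzone:
            v.append(0.0)  # small motions inside the deadzone command nothing
        else:
            vi = gain * (off - deadzone * (1 if off > 0 else -1))
            v.append(max(-v_max, min(v_max, vi)))  # clamp to the speed limit
    return v
```

Because the mapping commands velocity rather than position, holding the hand at a fixed offset produces a steady TCP motion, which is the "velocity-centric" behaviour the abstract names.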

    Vision-based Global Path Planning and Trajectory Generation for Robotic Applications in Hazardous Environments

    The aim of this study is to find an efficient global path planning algorithm and trajectory generation method using a vision system. Path planning is part of the more generic navigation function of mobile robots, which consists of establishing an obstacle-free path from the initial pose to the target pose in the robot workspace. In this thesis, special emphasis is placed on robotic applications in industrial and scientific infrastructure environments that are hazardous and inaccessible to humans, such as nuclear power plants, ITER and the CERN LHC tunnel. Nuclear radiation can cause deadly damage to the human body, yet we depend on nuclear energy to meet our great demand for energy resources. Therefore, the research and development of automatic transfer robots and manipulators for nuclear environments are regarded as a key technology by many countries in the world. Robotic applications in radiation environments minimize the danger of radiation exposure to humans; however, the robots themselves are also vulnerable to radiation. Mobility and maneuverability in such environments are essential to task success, so an efficient obstacle-free path and trajectory generation method is necessary for finding a safe path with maximum bounded velocities in radiation environments. High-degree-of-freedom manipulators and maneuverable mobile robots with steerable wheels, such as non-holonomic omni-directional mobile robots, are suitable for inspection and maintenance tasks where the camera is the only source of visual feedback. In this thesis, a novel vision-based path planning method is presented that utilizes the artificial potential field, visual servoing concepts and a CAD-based recognition method to deal with the problem of path and trajectory planning.
    Unlike the majority of conventional trajectory planning methods, which treat a robot as a single point, the entire shape of the mobile robot is considered by taking into account all of the robot's desired points to avoid obstacles. The vision-based algorithm generates synchronized trajectories for all of the wheels of an omni-directional mobile robot. It uses the robot's kinematic variables to plan maximum allowable velocities so that at least one of the actuators is always working at maximum velocity. The advantage of the generated synchronized trajectories is the avoidance of slippage and misalignment in translational and rotational movement. The proposed method is further developed into a new vision-based path coordination method for multiple mobile robots with independently steerable wheels, avoiding mutual collisions as well as stationary obstacles. The results of this research have been published, proposing a new solution for path and trajectory generation in hazardous environments that are inaccessible to humans and where one camera is the only source of visual feedback.
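The property that "at least one of the actuators is always working at maximum velocity" amounts to scaling a synchronized wheel-velocity profile by its peak while preserving the velocity ratios (and hence the path shape). A minimal sketch, with the function name and limit value assumed:

```python
def scale_to_max(wheel_vels, v_max=1.0):
    """Scale synchronized wheel velocities so the fastest wheel runs at v_max.

    Keeping all ratios intact preserves the planned trajectory (avoiding
    slippage and misalignment), while guaranteeing one actuator is at its
    limit. v_max and the function name are assumptions for illustration.
    """
    peak = max(abs(v) for v in wheel_vels)
    if peak == 0.0:
        return list(wheel_vels)   # robot at rest: nothing to scale
    s = v_max / peak
    return [s * v for v in wheel_vels]
```

Applying the same scale factor to every wheel is what makes the trajectories "synchronized": the motion direction is unchanged, only its speed is maximized.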