
    Aerial-Ground collaborative sensing: Third-Person view for teleoperation

    Rapid deployment and operation are key requirements in time-critical applications such as Search and Rescue (SaR). Efficiently teleoperated ground robots can support first responders in such situations. However, first-person-view teleoperation is sub-optimal in difficult terrain, while a third-person perspective can drastically increase teleoperation performance. Here, we propose a Micro Aerial Vehicle (MAV)-based system that can autonomously provide a third-person perspective to ground robots. While our approach is based on local visual servoing, it further leverages the global localization of several ground robots to seamlessly transfer between them in GPS-denied environments. One MAV can thereby support multiple ground robots on demand. Furthermore, our system enables different visual detection regimes, enhanced operability, and return-home functionality. We evaluate our system in real-world SaR scenarios. Comment: Accepted for publication in the 2018 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR).
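
    The abstract does not give the control law, but the idea of holding a third-person viewpoint at an offset behind whichever ground robot currently needs support can be sketched as a simple proportional servo. The offset, gain, and function names below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

# Hypothetical sketch: the MAV keeps a third-person viewpoint at a fixed
# offset behind the ground robot it currently supports. Robot poses come
# from the ground robots' global localization (no GPS needed on the MAV).

VIEW_OFFSET = np.array([-1.5, 0.0, 1.2])  # metres behind and above the UGV (assumed)
GAIN = 0.8                                # proportional servo gain (assumed)


def viewpoint_for(ugv_position, ugv_yaw):
    """Desired MAV position: VIEW_OFFSET rotated into the UGV's heading frame."""
    c, s = np.cos(ugv_yaw), np.sin(ugv_yaw)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return ugv_position + rot @ VIEW_OFFSET


def mav_velocity(mav_position, ugv_position, ugv_yaw):
    """Proportional velocity command steering the MAV to the viewpoint."""
    return GAIN * (viewpoint_for(ugv_position, ugv_yaw) - mav_position)


def select_target(requests):
    """On-demand switching: requests is a list of (ugv_id, pose) tuples;
    serve the first pending one, mirroring the paper's demand-based support."""
    return requests[0] if requests else None
```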

    Survey of Visual and Force/Tactile Control of Robots for Physical Interaction in Spain

    Sensors provide robotic systems with the information required to perceive the changes that happen in unstructured environments and to modify their actions accordingly. The robotic controllers which process and analyze this sensory information are usually based on three types of sensors (visual, force/torque and tactile), which identify the most widespread robotic control strategies: visual servoing control, force control and tactile control. This paper presents a detailed review of the sensor architectures, algorithmic techniques and applications developed by Spanish researchers to implement these mono-sensor and multi-sensor controllers which combine several sensors.

    Efficient and secure real-time mobile robots cooperation using visual servoing

    This paper deals with the challenging problem of navigating a fleet of mobile robots in formation. For that purpose, a secure approach based on visual servoing is used to control the linear and angular velocities of the multiple robots. To construct our system, we develop the interaction matrix which relates the image moments to the robot velocities, and we estimate the depth between each robot and the targeted object. This is done without any communication between the robots, which eliminates the problem of each robot's errors influencing the whole fleet. For a successful visual servoing, we propose a powerful mechanism to execute the robots' navigation safely, exploiting a robot accident reporting system based on a Raspberry Pi 3. In addition, in case of a problem, a robot accident detection and reporting testbed is used to send an accident notification in the form of a specific message. Experimental results are presented using nonholonomic mobile robots with on-board real-time cameras to show the effectiveness of the proposed method.
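
    The servo law behind an interaction-matrix approach like this is the classic one: stack the image-moment errors and map them through the pseudoinverse of the interaction matrix. A minimal sketch follows; the feature values, matrix entries, and depth estimate are made-up placeholders, not the paper's data:

```python
import numpy as np

# Minimal IBVS sketch under assumed conventions: s holds image moments
# measured on the tracked object, s_star their desired values, and L the
# interaction matrix relating moment rates to the robot's (v, omega).

LAMBDA = 0.5  # servo gain (assumed)


def unicycle_velocities(s, s_star, L):
    """Classic visual-servoing law: [v, omega] = -lambda * pinv(L) @ (s - s*)."""
    error = s - s_star
    return -LAMBDA * np.linalg.pinv(L) @ error


# Example with two moment features and a 2x2 interaction matrix whose
# entries would normally be computed from the moments and the estimated depth.
s = np.array([0.42, 0.10])
s_star = np.array([0.50, 0.00])
L = np.array([[-1.0 / 1.8, 0.3],
              [0.0, -1.2]])          # 1.8 m is an assumed depth estimate

v, omega = unicycle_velocities(s, s_star, L)
print(f"linear v = {v:.3f} m/s, angular omega = {omega:.3f} rad/s")
```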

    Real-Time Visual Servo Control of Two-Link and Three DOF Robot Manipulator

    This project presents experimental results of a position-based visual servoing control process for a 3R robot using two fixed cameras. Visual servoing draws on several fields of research, including vision systems, robotics and automatic control. The method deals with real-time changes in the relative position of the target object with respect to the robot. Good accuracy and the independence of the manipulator servo control structure from the target pose coordinates are additional advantages of this method. The applications of visually guided systems are many, from intelligent homes to the automotive industry. Visual servoing is useful for a wide range of applications, and it can be used to control many different systems (manipulator arms, mobile robots, aircraft, etc.). Visual servoing systems are generally classified depending on the number of cameras, the position of the cameras with respect to the robot, and the design of the error function. This project presents an approach for visual robot control in which existing approaches are extended so that the depth and position of the block or object are estimated during the motion of the robot; this is done by visually tracking the object throughout the trajectory. Vision-based robotics has been a major research area for some time. However, one of the open and common problems in the area is the need for an exchange of experiences and ideas. We also include a number of real-time examples from our own research. Forward and inverse kinematics of the 3-DOF robot are derived; then experiments on image processing, object shape recognition and pose estimation, as well as locating the target block or object in the Cartesian system and visual control of the robot manipulator, are described. Experimental results were obtained from a real-time implementation of visual servo control and tests of the 3-DOF robot in the lab.
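
    For a planar 3R arm of the kind described, the forward kinematics that precede the servo loop are short enough to show in full. Link lengths and gains below are illustrative assumptions, not values from the project:

```python
import numpy as np

# Forward kinematics of a planar 3R arm, the kind of computation such a
# project performs before closing the visual loop.

L1, L2, L3 = 0.30, 0.25, 0.15  # link lengths in metres (assumed)


def forward_kinematics(q1, q2, q3):
    """End-effector (x, y, phi) for joint angles q1..q3 (radians)."""
    x = L1 * np.cos(q1) + L2 * np.cos(q1 + q2) + L3 * np.cos(q1 + q2 + q3)
    y = L1 * np.sin(q1) + L2 * np.sin(q1 + q2) + L3 * np.sin(q1 + q2 + q3)
    phi = q1 + q2 + q3
    return x, y, phi


def pbvs_step(pose, target_pose, gain=1.0):
    """Position-based servoing reduces to driving the pose error to zero;
    target_pose comes from the two-camera pose estimation."""
    return gain * (np.asarray(target_pose) - np.asarray(pose))


x, y, phi = forward_kinematics(0.3, -0.5, 0.2)
print(f"end-effector at ({x:.3f}, {y:.3f}), orientation {phi:.3f} rad")
```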

    Visual Servoing in Robotics

    Visual servoing is a well-known approach to guiding robots using visual information. Image processing, robotics, and control theory are combined in order to control the motion of a robot depending on the visual information extracted from the images captured by one or several cameras. On the vision side, a number of issues are currently being addressed by ongoing research, such as the use of different types of image features (or different types of cameras, such as RGBD cameras), image processing at high velocity, and convergence properties. As shown in this book, the use of new control schemes allows the system to behave more robustly, efficiently, or compliantly, with fewer delays. Related issues such as optimal and robust approaches, direct control, path tracking, and sensor fusion are also addressed. Additionally, visual servoing systems are currently being applied in a number of different domains. This book considers various aspects of visual servoing systems, such as the design of new strategies for their application to parallel robots, mobile manipulators, and teleoperation, and the application of this type of control system in new areas.

    Electric Vehicle Battery Disassembly Using Interfacing Toolbox for Robotic Arms

    This paper showcases the integration of the Interfacing Toolbox for Robotic Arms (ITRA) with our newly developed hybrid Visual Servoing (VS) methods to automate the disassembly of electric vehicle batteries, thereby advancing sustainability and fostering a circular economy. ITRA enhances collaboration between industrial robotic arms, server computers, sensors, and actuators, meeting the intricate demands of robotic disassembly, including the essential real-time tracking of components and robotic arms. We demonstrate the effectiveness of our hybrid VS approach, combined with ITRA, in the context of Electric Vehicle (EV) battery disassembly across two robotic testbeds. The first employs a KUKA KR10 robot for precision tasks, while the second utilizes a KUKA KR500 for operations needing higher payload capacity. Conducted in T1 (Manual Reduced Velocity) mode, our experiments underscore a swift communication protocol that links low-level and high-level control systems, thus enabling rapid object detection and tracking. This allows for the efficient completion of disassembly tasks, such as removing the EV battery’s top case in 27 s and disassembling a stack of modules in 32 s. The demonstrated success of our framework highlights its extensive applicability in robotic manufacturing sectors that demand precision and adaptability, including medical robotics, extreme environments, aerospace, and construction.
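
    A common way to structure a hybrid VS controller of the kind named here is to switch from a coarse position-based law to an image-based law for the final approach. The sketch below illustrates only that generic idea; the threshold, gains, and function names are assumptions, and it does not reproduce ITRA's actual interface, which the abstract does not detail:

```python
import numpy as np

# Generic hybrid visual-servoing switch: position-based servoing (PBVS)
# when the tracked component is far, image-based servoing (IBVS) close in.

SWITCH_DISTANCE = 0.10   # metres; assumed hand-over threshold
K_PBVS, K_IBVS = 0.6, 0.4  # assumed gains


def hybrid_command(pose_error, feature_error, L_pinv):
    """Pick the servo law from the remaining Cartesian distance.

    pose_error:    6-vector (translation + rotation) from pose tracking
    feature_error: image-feature error vector
    L_pinv:        pseudoinverse of the interaction matrix (from the tracker)
    """
    distance = np.linalg.norm(pose_error[:3])
    if distance > SWITCH_DISTANCE:
        # Far away: drive the 6-DOF pose error to zero (PBVS).
        return K_PBVS * pose_error
    # Close in: map the image-feature error through the interaction
    # matrix pseudoinverse (IBVS), which is robust to calibration error.
    return -K_IBVS * L_pinv @ feature_error
```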

    Robotic execution for everyday tasks by means of external vision/force control

    In this article, we present an integrated manipulation framework for a service robot that allows it to interact with articulated objects in home environments through the coupling of vision and force modalities. We consider a robot which simultaneously observes its hand and the object to be manipulated using an external camera (i.e. the robot head). Task-oriented grasping algorithms [1] are used to plan a suitable grasp on the object according to the task to perform. A new vision/force coupling approach [2], based on external control, is used, first, to guide the robot hand towards the grasp position and, second, to perform the task while taking external forces into account. The coupling between these two complementary sensor modalities provides the robot with robustness against uncertainties in models and positioning. A position-based visual servoing control law has been designed to continuously align the robot hand with respect to the object being manipulated, independently of the camera position. This allows the camera to be moved freely while the task is being executed and makes this approach amenable to integration in current humanoid robots without the need for hand-eye calibration. Experimental results on a real robot interacting with different kinds of doors are presented.
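
    External vision/force coupling of the kind described typically adds a force correction on top of the visually commanded motion, so that excess contact force perturbs the servo command along compliant axes. This is a minimal sketch of that general scheme; the gains and the choice of compliant axis are assumptions, not the article's parameters:

```python
import numpy as np

# Sketch of external vision/force coupling: the visual loop commands a
# motion toward the grasp, and an outer force loop perturbs that command
# so contact forces stay near their reference (admittance-style).

K_V = 0.8                                 # visual servo gain (assumed)
K_F = 0.002                               # admittance gain, m/s per N (assumed)
COMPLIANT = np.diag([0, 0, 1, 0, 0, 0])   # comply along z only (assumed)


def external_control(pose_error, wrench, wrench_ref):
    """Velocity command = visual term + force correction on compliant axes.

    pose_error: 6-vector from position-based visual servoing
    wrench:     measured force/torque 6-vector at the wrist
    wrench_ref: desired contact wrench for the task (e.g. door opening)
    """
    v_vision = K_V * pose_error
    # Sign convention assumed: excess force backs the hand off along z.
    v_force = K_F * COMPLIANT @ (wrench - wrench_ref)
    return v_vision + v_force
```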

    Enhanced Image-Based Visual Servoing Dealing with Uncertainties

    Nowadays, the applications of robots in industrial automation have increased considerably. There is a growing demand for dexterous and intelligent robots that can work in unstructured environments. Visual servoing has been developed to meet this need by integrating vision sensors into robotic systems. Although visual servoing has developed significantly, some challenges remain before it is fully functional in industrial environments. The nonlinear nature of visual servoing and system uncertainties are among the problems affecting its control performance. The projection of the 3D scene onto the 2D image in the camera creates one source of uncertainty in the system; another lies in the parameters of the camera and the robot manipulator. Moreover, the limited field of view (FOV) of the camera is another issue influencing control performance. There are two main types of visual servoing: position-based and image-based. This project aims to develop a series of new image-based visual servoing (IBVS) methods which address the nonlinearity and uncertainty issues and improve the visual servoing performance of industrial robots. The first method is an adaptive switch IBVS controller for industrial robots, in which the adaptive law deals with the uncertainties of the monocular camera in an eye-in-hand configuration. The proposed switch control algorithm decouples the rotational and translational camera motions and decomposes the IBVS control into three separate stages with different gains. This method increases the system response speed and improves the tracking performance of IBVS while dealing with camera uncertainties. The second method is an image feature reconstruction algorithm based on the Kalman filter, proposed to handle situations where the image features leave the camera's FOV. The combination of the switch controller and the feature reconstruction algorithm not only improves the system response speed and tracking performance of IBVS, but also ensures that servoing succeeds in the case of feature loss. Next, in order to deal with external disturbances and uncertainties in the depth of the features, a third new control method is designed which combines proportional derivative (PD) control with sliding mode control (SMC) on a 6-DOF manipulator. A properly tuned PD controller ensures fast tracking performance, and the SMC deals with the external disturbances and depth uncertainties. In the last stage of the thesis, a fourth new semi-off-line trajectory planning method is developed to perform IBVS tasks for a 6-DOF robotic manipulator system. In this method, the camera's velocity screw is parametrized using time-based profiles. The parameters of the velocity profile are then determined such that the velocity profile takes the robot to its desired position; this is done by minimizing the error between the initial and desired features. The algorithm for planning the orientation of the robot is decoupled from the position planning, which yields a convex optimization problem and leads to a faster and more efficient algorithm. The merit of the proposed method is that it respects all of the system constraints, including the limitations caused by the camera's FOV. All the algorithms developed in the thesis are validated via tests on a 6-DOF Denso robot in an eye-in-hand configuration.
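
    The feature-reconstruction idea in the second method can be illustrated with a constant-velocity Kalman filter: while a feature is visible its measurement corrects the estimate, and once it leaves the FOV the filter keeps predicting so the IBVS loop can continue. The state model, noise levels, and sample time below are illustrative assumptions, not the thesis values:

```python
import numpy as np

# Constant-velocity Kalman sketch of image-feature reconstruction.
# State x = [u, v, du, dv]: pixel position and pixel velocity.

DT = 0.02                                       # control period (assumed)
F = np.block([[np.eye(2), DT * np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])   # constant-velocity dynamics
H = np.hstack([np.eye(2), np.zeros((2, 2))])    # we only measure (u, v)
Q = 1e-3 * np.eye(4)                            # process noise (assumed)
R = 1e-2 * np.eye(2)                            # measurement noise (assumed)


def kalman_step(x, P, z=None):
    """One predict(+update) step; pass z=None when the feature left the FOV."""
    x, P = F @ x, F @ P @ F.T + Q               # predict
    if z is not None:                           # update only when visible
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
    return x, P
```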