
    Safe cooperation between human operators and visually controlled industrial manipulators

    Industrial tasks can be improved substantially by making humans and robots collaborate in the same workspace. The main goal of this chapter is the development of a human-robot interaction system which enables this collaboration and guarantees the safety of the human operator. The system comprises two subsystems: a human tracking system and a robot control system. The human tracking system handles precise real-time localization of the human operator in the industrial environment and combines an inertial motion capture system with an Ultra-WideBand localization system. The robot control system is based on visual servoing. A safety behaviour which stops the normal path tracking of the robot is triggered when the robot and the human are too close. This safety behaviour has been implemented through a multi-threaded software architecture so that both subsystems can share information: the localization measurements obtained by the human tracking system are processed by the robot control system to compute the minimum human-robot distance and determine whether the safety behaviour must be activated. This work was supported by the Spanish Ministry of Science and Innovation and the Spanish Ministry of Education through projects DPI2005-06222 and DPI2008-02647 and grant AP2005-1458.
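The distance check described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the point sets, the threshold value, and the function names are all assumed for the example.

```python
import numpy as np

# Illustrative threshold, not taken from the paper.
SAFETY_DISTANCE = 0.5  # metres

def min_human_robot_distance(human_points, robot_points):
    """Minimum pairwise Euclidean distance between two sets of 3D points
    (e.g., tracked human joints vs. sampled robot link positions)."""
    h = np.asarray(human_points, dtype=float)[:, None, :]  # (H, 1, 3)
    r = np.asarray(robot_points, dtype=float)[None, :, :]  # (1, R, 3)
    return float(np.sqrt(((h - r) ** 2).sum(axis=2)).min())

def safety_behaviour_active(human_points, robot_points):
    """True when the robot must abandon normal path tracking and stop."""
    return min_human_robot_distance(human_points, robot_points) < SAFETY_DISTANCE
```

In a multi-threaded architecture like the one described, the tracking thread would update `human_points` while the control thread polls `safety_behaviour_active` each cycle.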

    Safety-Aware Human-Robot Collaborative Transportation and Manipulation with Multiple MAVs

    Human-robot interaction will play an essential role in various industries and daily tasks, enabling robots to effectively collaborate with humans and reduce their physical workload. Most existing approaches for physical human-robot interaction focus on collaboration between a human and a single ground robot. Very little progress has been made in this research area when considering aerial robots, which offer increased versatility and mobility compared to their grounded counterparts. This paper proposes a novel approach for safe human-robot collaborative transportation and manipulation of a cable-suspended payload with multiple aerial robots. We leverage the proposed method to enable smooth and intuitive interaction between the transported objects and a human worker while respecting safety constraints during operations by exploiting the redundancy of the internal transportation system. The key elements of our system are (a) a distributed payload external wrench estimator that does not rely on any force sensor; (b) a 6D admittance controller for human-aerial-robot collaborative transportation and manipulation; (c) a safety-aware controller that exploits the internal system redundancy to guarantee the execution of additional tasks devoted to preserving human or robot safety without affecting the payload trajectory tracking or quality of interaction. We validate the approach through extensive simulation and real-world experiments, including the robot team assisting the human in transporting and manipulating a load, as well as the human helping the robot team navigate the environment. To the best of our knowledge, this work is the first to create an interactive and safety-aware approach for quadrotor teams that physically collaborate with a human operator during transportation and manipulation tasks. Comment: Guanrui Li and Xinyang Liu contributed equally to this paper
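The admittance controller mentioned in element (b) renders the payload as a virtual mass-damper that yields to the human's applied wrench. A minimal one-step sketch of such a law, assuming a diagonal virtual mass and damping (the paper's full 6D controller and wrench estimator are considerably richer):

```python
import numpy as np

def admittance_step(v, wrench, mass, damping, dt):
    """One Euler step of M * dv/dt + D * v = wrench.
    v and wrench are 6D (linear + angular); mass and damping are scalars
    here for simplicity, standing in for diagonal 6x6 matrices."""
    acc = (np.asarray(wrench, dtype=float) - damping * v) / mass
    return v + acc * dt

# A human pushes the payload along x with 1 N; the virtual dynamics
# converge to a steady velocity of wrench / damping = 0.25 m/s.
v = np.zeros(6)
wrench = np.array([1.0, 0, 0, 0, 0, 0])
for _ in range(1000):
    v = admittance_step(v, wrench, mass=2.0, damping=4.0, dt=0.01)
```

The resulting reference twist would then be tracked by the transportation system, with the redundancy-based safety tasks layered on top.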

    Walk-through programming for industrial applications

    Collaboration between humans and robots is increasingly desired in several application domains, including manufacturing. The paper describes a software control architecture for industrial robotic applications allowing human-robot cooperation during the programming phase of a robotic task. The control architecture is based on admittance control and tool dynamics compensation for implementing walk-through programming and manual guidance. Further steps to integrate this system on a real set-up include the robot kinematics and a socket communication that sends a binary file to the robot.
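For walk-through programming, the wrench measured at the wrist includes the tool's own weight, which must be removed before the remainder is interpreted as the operator's guidance force. A hypothetical gravity-compensation sketch (the tool mass, frame conventions, and function name are assumptions for illustration):

```python
import numpy as np

TOOL_MASS = 1.2                      # kg, illustrative value
G_WORLD = np.array([0.0, 0.0, -9.81])  # gravity in the world frame

def operator_force(measured_force, R_world_sensor):
    """Subtract the tool weight, rotated into the sensor frame, from the
    force/torque sensor reading; what remains is the human's force."""
    tool_weight_sensor = R_world_sensor.T @ (TOOL_MASS * G_WORLD)
    return np.asarray(measured_force, dtype=float) - tool_weight_sensor
```

The full tool dynamics compensation in the paper would also account for inertial and Coriolis terms during motion; this sketch covers only the static gravity term.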

    Autonomous Navigation in Complex Indoor and Outdoor Environments with Micro Aerial Vehicles

    Micro aerial vehicles (MAVs) are ideal platforms for surveillance and search and rescue in confined indoor and outdoor environments due to their small size, superior mobility, and hover capability. In such missions, it is essential that the MAV is capable of autonomous flight to minimize operator workload. Despite recent successes in the commercialization of GPS-based autonomous MAVs, autonomous navigation in complex and possibly GPS-denied environments gives rise to challenging engineering problems that require an integrated approach to perception, estimation, planning, control, and high-level situational awareness. Among these, state estimation is the first and most critical component for autonomous flight, especially because of the inherently fast dynamics of MAVs and the possibly unknown environmental conditions. In this thesis, we present methodologies and system designs, with a focus on state estimation, that enable a lightweight off-the-shelf quadrotor MAV to autonomously navigate complex unknown indoor and outdoor environments using only onboard sensing and computation. We start by developing laser- and vision-based state estimation methodologies for indoor autonomous flight. We then investigate fusion from heterogeneous sensors to improve robustness and enable operations in complex indoor and outdoor environments. We further propose estimation algorithms for on-the-fly initialization and online failure recovery. Finally, we present planning, control, and environment coverage strategies for integrated high-level autonomy behaviors. Extensive online experimental results are presented throughout the thesis. We conclude by proposing future research opportunities.
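The heterogeneous-sensor fusion described above follows the classic predict/update cycle: fast IMU propagation corrected by slower exteroceptive fixes. A toy one-dimensional Kalman filter makes the pattern concrete (the thesis systems are multi-sensor and far richer; all values here are illustrative):

```python
def kf_predict(x, P, u, q):
    """Propagate the state with odometry/IMU increment u; process noise
    variance q grows the uncertainty."""
    return x + u, P + q

def kf_update(x, P, z, r):
    """Correct with an exteroceptive measurement z of variance r."""
    k = P / (P + r)                  # Kalman gain
    return x + k * (z - x), (1 - k) * P

x, P = 0.0, 1.0
x, P = kf_predict(x, P, u=1.0, q=0.1)  # IMU integration says we moved 1 m
x, P = kf_update(x, P, z=0.9, r=0.2)   # laser/vision fix places us at 0.9 m
```

The update pulls the estimate toward the measurement in proportion to their relative uncertainties and shrinks the covariance, which is exactly why fusing even an occasional exteroceptive fix bounds the IMU drift.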

    Planetary Rover Inertial Navigation Applications: Pseudo Measurements and Wheel Terrain Interactions

    Accurate localization is a critical component of any robotic system. During planetary missions, these systems are often limited by energy sources and slow spacecraft computers. Using proprioceptive localization (e.g., an inertial measurement unit and wheel encoders) without external aiding is insufficient for accurate localization. This is mainly due to the integrated and unbounded errors of the inertial navigation solutions and the drifting position information from wheel encoders caused by wheel slippage. For this reason, planetary rovers often utilize exteroceptive (e.g., vision-based) sensors. On the one hand, localization with proprioceptive sensors is straightforward, computationally efficient, and continuous. On the other hand, using exteroceptive sensors for localization slows rover driving speed, reduces the rover traversal rate, and these sensors are sensitive to terrain features. Given the advantages and disadvantages of both methods, this thesis focuses on two objectives: first, improving proprioceptive localization performance without significant changes to rover operations; second, enabling an adaptive traversal rate based on wheel-terrain interactions while keeping the localization reliable. To achieve the first objective, we utilized zero-velocity updates, zero-angular-rate updates, and the non-holonomicity of the rover to improve localization performance in a computationally efficient way, even with limited sensor usage. Pseudo-measurements generated from proprioceptive sensors while the rover is stationary, together with non-holonomic constraints while traversing, can improve localization performance without any significant changes to rover operations. Through this work, a substantial improvement in localization performance is observed without the aid of additional exteroceptive sensor information.
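A zero-velocity pseudo-measurement works by feeding the filter a fictitious velocity observation of exactly zero whenever the rover is known to be stationary, which bounds the integrated IMU drift. A minimal sketch of the idea, with assumed detection thresholds and noise values (the thesis implementation is a full navigation filter):

```python
def is_stationary(wheel_speeds, tol=1e-3):
    """Crude stationarity detector: all wheel encoders report ~zero speed."""
    return all(abs(w) < tol for w in wheel_speeds)

def zupt_update(v_est, P, r=1e-4):
    """Kalman-style correction of a scalar velocity estimate toward the
    zero-velocity pseudo-measurement; r is the pseudo-measurement variance."""
    k = P / (P + r)
    return v_est + k * (0.0 - v_est), (1 - k) * P
```

Because the pseudo-measurement noise `r` is chosen very small, each stop snaps the velocity estimate (and hence the position drift rate) back toward zero at essentially no computational cost.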
To achieve the second objective, the relationship between the estimated localization uncertainty and wheel-terrain interactions, expressed through the slip ratio, was investigated. This relationship was modeled with a time-series Gaussian process using slippage estimates gathered while the rover is moving. The predicted slippage is then mapped to a predicted localization uncertainty to decide when the rover should switch from moving to stationary conditions. Instead of a periodic stopping framework, the method introduced in this work is a slip-aware localization method that makes the rover stop more frequently on high-slip terrains and less frequently on low-slip terrains, while keeping the proprioceptive localization reliable.
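The slip ratio and the slip-aware stopping decision can be sketched as below. Note the thesis predicts slippage with a Gaussian process; the simple moving-average threshold here merely stands in for that predictor, and all values are illustrative.

```python
def slip_ratio(commanded_speed, actual_speed):
    """s = (v_cmd - v_actual) / v_cmd, clamped to [0, 1] for forward driving.
    s = 0 means no slip; s = 1 means the wheels spin in place."""
    if commanded_speed <= 0.0:
        return 0.0
    s = (commanded_speed - actual_speed) / commanded_speed
    return min(max(s, 0.0), 1.0)

def should_stop_for_zupt(recent_slip, threshold=0.3):
    """Stop (to apply a zero-velocity update) when average recent slip is
    high, i.e., more frequent stops on high-slip terrain."""
    return sum(recent_slip) / len(recent_slip) > threshold
```

Replacing the average with a Gaussian-process prediction would let the rover stop pre-emptively, before the uncertainty actually grows, which is the core of the slip-aware scheme described above.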

    Wearable sensors for human–robot walking together

    Thanks to recent technological improvements that enable novel applications beyond the industrial context, there is growing interest in the use of robots in everyday life situations. To improve the acceptability of personal service robots, they should seamlessly interact with the users, understand their social signals and cues, and respond appropriately. In this context, a few proposals were presented to make robots and humans navigate together naturally without explicit user control, but no final solution has been achieved yet. To make an advance toward this end, this paper proposes the use of wearable Inertial Measurement Units to improve the interaction between human and robot while walking together, without physical links and with no restriction on the relative position between the human and the robot. We built a prototype system that provides real-time evaluation of gait parameters for a mobile robot moving together with a human, tested it with 19 human participants in two different tasks, and studied its feasibility and the usability perceived by the participants. The results show the feasibility of the system, which obtained positive feedback from the users, giving valuable information for the development of a natural interaction system where the robot perceives human movements by means of wearable sensors.
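One gait parameter a wearable IMU can supply in real time is cadence, recoverable by counting peaks in the accelerometer magnitude. A toy step detector illustrating the idea (the threshold and function names are assumptions, not from the paper, whose gait analysis is more sophisticated):

```python
import numpy as np

def count_steps(acc_magnitude, threshold=11.0):
    """Count upward threshold crossings of |a| (m/s^2) as steps; at rest
    the magnitude sits near gravity (~9.81), and each heel strike spikes it."""
    acc = np.asarray(acc_magnitude, dtype=float)
    above = acc > threshold
    return int(np.count_nonzero(above[1:] & ~above[:-1]))

def cadence(acc_magnitude, duration_s, threshold=11.0):
    """Estimated steps per minute over the window."""
    return 60.0 * count_steps(acc_magnitude, threshold) / duration_s
```

Streaming such parameters to the robot lets it adapt its speed to the human's gait without any physical link, which is the interaction mode the paper studies.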