Aerial-Ground collaborative sensing: Third-Person view for teleoperation
Rapid deployment and operation are key requirements in time-critical
applications such as Search and Rescue (SaR). Efficiently teleoperated ground
robots can support first responders in such situations. However, first-person-view
teleoperation is sub-optimal on difficult terrain, while a third-person
perspective can drastically increase teleoperation performance. Here, we
propose a Micro Aerial Vehicle (MAV)-based system that can autonomously provide
a third-person perspective to ground robots. While our approach is based on
local visual servoing, it further leverages the global localization of several
ground robots to seamlessly transfer between them in GPS-denied environments.
One MAV can thereby support multiple ground robots on demand. Furthermore, our
system enables different visual detection regimes, enhanced operability, and
return-home functionality. We evaluate our system in real-world SaR scenarios.
Comment: Accepted for publication in the 2018 IEEE International Symposium on
Safety, Security and Rescue Robotics (SSRR)
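The local visual servoing the abstract mentions can be illustrated with a minimal image-based sketch: the MAV tracks a ground robot as a point feature in its camera image and converts the pixel error into a translational velocity command. The interaction matrix, gains, and function names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ibvs_velocity(feature_px, desired_px, depth, focal_px, gain=0.5):
    """Image-based visual servoing sketch: map the pixel error of a
    tracked ground-robot feature to a camera velocity command (x, y, z),
    using a simplified point-feature interaction matrix restricted to
    translational camera motion."""
    # Normalized image coordinates of the error
    x = (feature_px[0] - desired_px[0]) / focal_px
    y = (feature_px[1] - desired_px[1]) / focal_px
    # Interaction (image Jacobian) for translation only, at estimated depth
    L = np.array([[-1.0 / depth, 0.0, x / depth],
                  [0.0, -1.0 / depth, y / depth]])
    error = np.array([x, y])
    # Least-squares velocity that drives the feature toward the setpoint
    return -gain * np.linalg.pinv(L) @ error
```

When the ground robot is already at the desired image location the commanded velocity is zero, so the MAV holds its vantage point.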
A Multi-Robot Cooperation Framework for Sewing Personalized Stent Grafts
This paper presents a multi-robot system for manufacturing personalized
medical stent grafts. The proposed system adopts a modular design, which
includes: a (personalized) mandrel module, a bimanual sewing module, and a
vision module. The mandrel module incorporates the personalized geometry of
patients, while the bimanual sewing module adopts a learning-by-demonstration
approach to transfer human hand-sewing skills to the robots. The human
demonstrations were first observed by the vision module and then encoded
using a statistical model to generate the reference motion trajectories.
During autonomous robot sewing, the vision module coordinates the multi-robot
collaboration. Experimental results show that the robots can adapt to
generalized stent designs. The proposed system can also be used for other
manipulation tasks, especially for flexible production of customized products
and where bimanual or multi-robot cooperation is required.
Comment: 10 pages, 12 figures, accepted by IEEE Transactions on Industrial
Informatics. Keywords: modularity, medical device customization, multi-robot
system, robot learning, visual servoing, robot sewing
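The learning-by-demonstration step, in which observed hand-sewing motions are encoded statistically into reference trajectories, can be sketched as follows. The abstract does not specify the statistical model; per-time-step mean and covariance over aligned demonstrations is one common, simple choice, and the function name and data layout here are assumptions.

```python
import numpy as np

def encode_demonstrations(demos):
    """Encode several time-aligned demonstrations (each a (T, D) array of
    end-effector positions over T time steps) into a statistical
    reference: a per-step mean trajectory for the robot to reproduce,
    plus a per-step covariance capturing demonstration variability."""
    stack = np.stack(demos)              # (N demos, T steps, D dims)
    mean_traj = stack.mean(axis=0)       # reference motion trajectory
    cov_traj = np.array([np.cov(stack[:, t, :].T)
                         for t in range(stack.shape[1])])
    return mean_traj, cov_traj
```

The covariance indicates where demonstrations agree tightly (steps the robot should track closely) versus where more deviation is tolerable.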
Manipulating Highly Deformable Materials Using a Visual Feedback Dictionary
The complex physical properties of highly deformable materials such as
clothes pose significant challenges for autonomous robotic manipulation
systems. We present a novel visual feedback dictionary-based method for
manipulating deformable objects towards a desired configuration. Our approach
is based on visual servoing, and we use an efficient technique to extract key
features from the RGB sensor stream in the form of a histogram of deformable
model features. These histogram features serve as high-level representations
of the state of the deformable material. Next, we collect manipulation data
and use a visual feedback dictionary that maps the velocity in the
high-dimensional feature space to the velocity of the robotic end-effectors
for manipulation. We have evaluated our approach on a set of complex
manipulation tasks and human-robot manipulation tasks on different cloth
pieces with varying material characteristics.
Comment: The video is available at goo.gl/mDSC4
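The core idea of a visual feedback dictionary, mapping a desired velocity in the histogram-feature space to an end-effector velocity, can be sketched as a nearest-neighbour lookup over the collected manipulation data. The class name and the single-neighbour query are illustrative assumptions; the paper's actual dictionary construction may differ.

```python
import numpy as np

class VisualFeedbackDictionary:
    """Maps an observed/desired velocity in the high-dimensional
    histogram-feature space to a robot end-effector velocity, by
    nearest-neighbour lookup over collected (feature-velocity,
    effector-velocity) pairs from manipulation data."""

    def __init__(self, feature_vels, effector_vels):
        self.f = np.asarray(feature_vels)   # (N, F) feature-space velocities
        self.u = np.asarray(effector_vels)  # (N, U) matching effector velocities

    def query(self, desired_feature_vel):
        # Return the effector velocity whose recorded feature-space
        # effect is closest to the desired change
        d = np.linalg.norm(self.f - np.asarray(desired_feature_vel), axis=1)
        return self.u[np.argmin(d)]
```

At runtime, the controller computes the feature-space velocity that would move the cloth toward the goal configuration and queries the dictionary for the corresponding end-effector command.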
Design of an Anthropomorphic, Compliant, and Lightweight Dual Arm for Aerial Manipulation
This paper presents an anthropomorphic, compliant and lightweight dual-arm manipulator designed and developed for aerial manipulation applications with multi-rotor platforms. Each arm provides four degrees of freedom in a human-like kinematic configuration for end-effector positioning: shoulder pitch, roll and yaw, and elbow pitch. The dual arm, weighing 1.3 kg in total, employs smart servo actuators and a customized, carefully designed aluminum frame structure manufactured by laser cutting. The proposed
design reduces the manufacturing cost, as no computer numerical control machined part is used. Mechanical compliance is provided in all the joints by introducing a compact spring-lever transmission mechanism between the servo shaft and the links, integrating a potentiometer for measuring the deflection of the joints.
The servo actuators are partially or fully isolated against impacts and overloads thanks to the flange bearings attached to the frame structure, which support the rotation of the links and the deflection of the joints. This simple mechanism increases the robustness of the arms and the safety of physical interactions between the aerial
robot and the environment. The developed manipulator has been validated through different experiments on a fixed-base test bench and in outdoor flight tests.
Funding: Unión Europea H2020-ICT-2014-644271; Ministerio de Economía y Competitividad DPI2015-71524-R; Ministerio de Economía y Competitividad DPI2017-89790-
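The spring-lever transmission with a deflection potentiometer also makes the joint a simple torque sensor: if the spring stiffness is known, the angle difference between servo shaft and link gives an estimate of the external joint torque. The function below is a hypothetical illustration of that relation, not part of the paper.

```python
def joint_torque_from_deflection(servo_angle_rad, link_angle_rad,
                                 spring_k_nm_per_rad):
    """Estimate the external torque on a compliant joint from the
    spring-lever deflection measured by the potentiometer between the
    servo shaft and the link (linear-spring assumption)."""
    deflection = servo_angle_rad - link_angle_rad
    return spring_k_nm_per_rad * deflection
```

This same measurement lets the controller detect contact during physical interaction between the aerial robot and the environment.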
Hybrid visual servoing with hierarchical task composition for aerial manipulation
In this paper, a hybrid visual servoing scheme with a hierarchical task-composition control framework is described for aerial manipulation, i.e. for the control of an aerial vehicle endowed with a robot arm. The proposed approach combines, in a unique hybrid control framework, the main benefits of both image-based and position-based control schemes. Moreover, the underactuation of the aerial vehicle is explicitly taken into account in a general formulation, together with a dynamic smooth activation mechanism. Both simulation case studies and experiments are presented to demonstrate the performance of the proposed technique.
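Hierarchical task composition is commonly realized with null-space projection: a secondary task is executed only in the directions that do not disturb the primary task. The sketch below shows that standard two-level scheme; the specific Jacobians, gains, and the smooth activation mechanism of the paper are not reproduced here.

```python
import numpy as np

def hierarchical_velocity(J1, e1, J2, e2, gain=1.0):
    """Two-level hierarchical task composition: the secondary task error
    e2 is projected into the null space of the primary task Jacobian J1,
    so executing it cannot disturb the primary (e.g. image-based) task."""
    J1p = np.linalg.pinv(J1)
    # Null-space projector of the primary task
    N1 = np.eye(J1.shape[1]) - J1p @ J1
    v_primary = gain * J1p @ e1
    v_secondary = gain * N1 @ np.linalg.pinv(J2) @ e2
    return v_primary + v_secondary
```

By construction, J1 @ v_secondary = 0, which is what makes the task ordering a strict hierarchy.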
Robotic execution for everyday tasks by means of external vision/force control
In this article, we present an integrated manipulation framework for a
service robot that allows interaction with articulated objects in home
environments through the coupling of vision and force modalities. We
consider a robot that simultaneously observes its hand and the object to
manipulate, using an external camera (i.e. the robot head). Task-oriented
grasping algorithms [1] are used to plan a suitable grasp on the object
according to the task to perform. A new vision/force coupling approach [2],
based on external control, is used to, first, guide the robot hand towards
the grasp position and, second, perform the task while taking external
forces into account. The coupling between these two complementary sensor
modalities provides the robot with robustness against uncertainties in
models and positioning. A position-based visual servoing control law has
been designed to continuously align the robot hand with the object being
manipulated, independently of the camera position. This allows the camera
to move freely while the task is being executed and makes this approach
amenable to integration in current humanoid robots without the need for
hand-eye calibration. Experimental results on a real robot interacting
with different kinds of doors are presented.
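The external vision/force coupling described above can be sketched as a visual-servoing velocity corrected by a force term, so that the hand stays aligned with the object while contact forces converge to a desired value. The function name and gain are assumptions for illustration; the papers' controllers [1], [2] are more elaborate.

```python
import numpy as np

def coupled_command(v_vision, f_measured, f_desired, force_gain=0.002):
    """External vision/force coupling sketch: the position-based
    visual-servoing velocity v_vision is corrected by a proportional
    force term, regulating contact forces toward f_desired while the
    visual loop keeps the hand aligned with the object."""
    f_error = np.asarray(f_desired) - np.asarray(f_measured)
    return np.asarray(v_vision) + force_gain * f_error
```

When the measured contact force matches the desired one, the force correction vanishes and the vision loop alone drives the hand, e.g. while approaching a door handle before contact.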