
    Image-Based Visual-Impedance Control of a Dual-Arm Aerial Manipulator

    Three new image-based visual-impedance control laws are proposed in this paper, allowing physical interaction of a dual-arm unmanned aerial manipulator equipped with a camera and a force/torque sensor. Namely, two first-order impedance behaviours are designed based on the transpose and the inverse of the system Jacobian matrix, respectively, while a second-order impedance behaviour is developed as well. Visual information is employed both to coordinate the camera motion, in an eye-in-hand configuration, with the task executed by the other robot arm, and to define the elastic wrench component of the proposed hybrid impedance equations directly in the image plane.
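    A minimal numerical sketch of a generic first-order impedance law of the kind described above, in both Jacobian-transpose and Jacobian-inverse variants. The function name, the task-space error `e`, the measured wrench `f_ext`, and the gain matrices `K` and `D` are illustrative assumptions, not the paper's notation:

```python
import numpy as np

def first_order_impedance_step(J, e, f_ext, K, D, variant="transpose"):
    """One step of a first-order impedance law:
        D x_dot + K e = f_ext   =>   x_dot = D^{-1} (f_ext - K e).
    Joint velocities are then obtained either through the Jacobian
    transpose or through its (pseudo-)inverse.
    """
    # Task-space velocity that realizes the desired compliant behaviour
    x_dot = np.linalg.solve(D, f_ext - K @ e)
    if variant == "transpose":
        return J.T @ x_dot          # transpose-based mapping
    return np.linalg.pinv(J) @ x_dot  # inverse-based mapping
```

    With zero error and a unit external force along the first axis (and identity gains and Jacobian), both variants simply pass the force through as a joint velocity.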

    Aerial Robotics – Unmanned Aerial Vehicles in Interaction with the Environment

    Defined as technology that provides services and facilitates the execution of tasks (such as observation, inspection, mapping, search and rescue, and maintenance) using unmanned aerial vehicles equipped with various sensors and actuators, aerial robotics is one of the fastest-growing fields in both research and industry. While some of the services provided by aerial robots have already been put into practice (for example, aerial inspection and aerial mapping), others (such as aerial manipulation) remain at the level of laboratory experimentation on account of their complexity. The ability of an aerial robotic system to interact physically with objects in its surroundings completely transforms the way we view applications of unmanned aerial systems in near-Earth environments. This paradigm shift, conveying new functionalities such as aerial tactile inspection; aerial repair, construction, and assembly; aerial agricultural care; and aerial urban sanitation, requires an extension of current modeling and control techniques as well as the development of novel concepts. In this article we give a brief introduction to the field of aerial robotics.

    Virtual Reality via Object Pose Estimation and Active Learning: Realizing Telepresence Robots with Aerial Manipulation Capabilities

    This paper presents a novel telepresence system for advancing aerial manipulation in dynamic and unstructured environments. The proposed system features not only a haptic device but also a virtual reality (VR) interface that provides real-time 3D displays of the robot’s workspace as well as haptic guidance to its remotely located operator. To realize this, multiple sensors, namely a LiDAR, cameras, and IMUs, are utilized. For processing the acquired sensory data, pose estimation pipelines are devised for industrial objects of both known and unknown geometries. We further propose an active learning pipeline to increase the sample efficiency of a pipeline component that relies on a Deep Neural Network (DNN) based object detector. These algorithms jointly address various challenges encountered during the execution of perception tasks in industrial scenarios. In the experiments, exhaustive ablation studies are provided to validate the proposed pipelines. Methodologically, these results suggest how an awareness of the algorithms’ own failures and uncertainty (“introspection”) can be used to tackle the encountered problems. Moreover, outdoor experiments are conducted to evaluate the effectiveness of the overall system in enhancing aerial manipulation capabilities. In particular, with flight campaigns over days and nights, from spring to winter, and with different users and locations, we demonstrate over 70 robust executions of pick-and-place, force-application, and peg-in-hole tasks with the DLR cable-Suspended Aerial Manipulator (SAM). As a result, we show the viability of the proposed system for future industrial applications.
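    The active-learning idea behind increasing a detector's sample efficiency can be illustrated with a simple uncertainty-sampling heuristic: send the images on which the detector is least confident to a human annotator. The function name and data layout below are illustrative assumptions, not the paper's API:

```python
def select_for_labeling(detections, budget):
    """Rank images by the confidence of their most confident detection
    (low confidence = high uncertainty = most informative) and return
    `budget` image ids to send for annotation.

    detections: dict mapping image_id -> list of detection confidences
                (an empty list means the detector found nothing).
    """
    def top_conf(scores):
        # Images with no detections at all are treated as maximally
        # uncertain (confidence 0.0).
        return max(scores) if scores else 0.0

    ranked = sorted(detections, key=lambda img: top_conf(detections[img]))
    return ranked[:budget]
```

    In practice the ranking criterion could equally be entropy or disagreement between models; the half-space of "ask a human when unsure" is the introspection idea the abstract refers to.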

    RAMP: a benchmark for evaluating robotic assembly manipulation and planning

    We introduce RAMP, an open-source robotics benchmark inspired by real-world industrial assembly tasks. RAMP consists of beams that a robot must assemble into specified goal configurations using pegs as fasteners. As such, it assesses planning and execution capabilities and poses challenges in perception, reasoning, manipulation, diagnostics, fault recovery, and goal parsing. RAMP has been designed to be accessible and extensible: parts are either 3D printed or otherwise constructed from readily obtainable materials, and the part designs and detailed instructions are publicly available. To broaden community engagement, RAMP incorporates fixtures such as AprilTags, which enable researchers to focus on individual sub-tasks of the assembly challenge if desired. We provide a full digital twin as well as rudimentary baselines to enable rapid progress. Our vision is for RAMP to form the substrate for a community-driven endeavour that evolves as capability matures.

    A Human-Embodied Drone for Dexterous Aerial Manipulation

    Current drones perform a wide variety of tasks in surveillance, photography, agriculture, package delivery, and beyond. However, these tasks are performed passively, without human interaction. Aerial manipulation shifts this paradigm by equipping drones with robotic arms that allow interaction with the environment rather than simply sensing it. For example, in construction, aerial manipulation in conjunction with human interaction could allow operators to perform tasks such as hosing decks, drilling into surfaces, and sealing cracks via a drone. This integration with drones will henceforth be known as dexterous aerial manipulation. Our recent work integrated the worker’s experience into aerial manipulation using haptic technology; the net effect was a system that enabled the worker to leverage drones and complete tasks remotely while utilizing haptics at the task site. However, the tasks were completed within the operator’s line of sight. Until now, immersive AR/VR frameworks have rarely been integrated into aerial manipulation. Yet such a framework allows the drone to embody and transport the operator’s senses, actions, and presence to a remote location in real time. As a result, the operator can both physically interact with the environment and socially interact with actual workers on the worksite. This dissertation presents a human-embodied drone interface for dexterous aerial manipulation. Using VR/AR technology, the interface allows the operator to leverage their intelligence to collaboratively perform desired tasks anytime, anywhere, with a drone that possesses great dexterity.

    Aerial Manipulator Force Control Using Control Barrier Functions

    This article studies the problem of applying normal forces on a surface using an underactuated aerial vehicle equipped with a dexterous robotic arm. A high-level force-motion controller is designed based on a Lyapunov function encompassing alignment and exerted-force errors. This controller is coupled with a Control Barrier Function constraint under an optimization scheme using Quadratic Programming, which enforces a prescribed relationship between the end-effector’s approaching motion and its alignment with the surface, thus ensuring safe operation. An adaptive low-level controller is devised for the aerial vehicle, capable of tracking the velocity commands generated by the high-level controller. Simulations demonstrate the force-exertion stability and safety of the controller under large disturbances.
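    For a single affine safety constraint, a Control Barrier Function QP of the kind described above reduces to a closed-form projection of the nominal command onto the safe half-space. The sketch below is a generic minimum-intervention safety filter under that assumption, not the article's exact formulation; `a` and `b` stand in for the Lie-derivative terms of a barrier function h:

```python
import numpy as np

def cbf_qp_filter(u_des, a, b):
    """Minimum-intervention safety filter:
        min ||u - u_des||^2   s.t.   a @ u >= b,
    where, for a barrier h(x) and gain alpha, one would take
    a = L_g h(x) and b = -alpha * h(x) - L_f h(x).
    A single affine constraint admits the closed-form projection below,
    so no QP solver is needed in this special case.
    """
    slack = a @ u_des - b
    if slack >= 0.0:
        return u_des                    # nominal command already safe
    # Project u_des onto the boundary of the half-space a @ u = b.
    return u_des + a * (-slack) / (a @ a)
```

    When the nominal command already satisfies the constraint it passes through unchanged; otherwise only its component normal to the constraint boundary is corrected, which is the "minimal intervention" property that makes CBF filters attractive for safe physical interaction.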

    Collaborative and Cooperative Robotics Applications using Visual Perception

    The objective of this thesis is to develop novel integrated strategies for collaborative and cooperative robotic applications. Commonly, industrial robots operate in structured environments and in work cells separated from human operators. Nowadays, collaborative robots can share the workspace and collaborate with humans or other robots to perform complex tasks. These robots often operate in unstructured environments, so they need sensors and algorithms to perceive environmental changes. Advanced vision and control techniques have been analyzed to evaluate their performance and their applicability to industrial tasks, and selected techniques have then been applied for the first time in an industrial context. A peg-in-hole task was chosen as the first case study: although extensively studied, it remains challenging, requiring accuracy both in determining the hole poses and in positioning the robot. Two solutions have been developed and tested, and experimental results are discussed to highlight the advantages and disadvantages of each technique. Grasping partially known objects in unstructured environments is one of the most challenging issues in robotics; accomplishing it requires addressing multiple subproblems, including object localization and grasp pose detection. For this class of problems, several vision techniques have also been analyzed, and one of them has been adapted for use in industrial scenarios. Moreover, as a second case study, a robot-to-robot object handover task in a partially structured environment, without explicit communication between the robots, has been developed and validated. Finally, the two case studies have been integrated into two real industrial setups to demonstrate the applicability of the strategies to solving industrial problems.