
    Open World Assistive Grasping Using Laser Selection

    Many people with motor disabilities are unable to complete activities of daily living (ADLs) without assistance. This paper describes a complete robotic system developed to provide mobile grasping assistance for ADLs. The system comprises a robot arm from a Rethink Robotics Baxter robot mounted to an assistive mobility device, a control system for that arm, and a user interface with a variety of access methods for selecting desired objects. The system uses grasp detection to pick up previously unseen objects, and its grasp detection algorithms also allow objects to be grasped in cluttered environments. We evaluate the system in a number of experiments on a large variety of objects. Overall, we achieve an object selection success rate of 88% and a grasp detection success rate of 90% in a non-mobile scenario, and success rates of 89% and 72% respectively in a mobile scenario.
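    To make the selection-then-grasp flow concrete, here is a minimal sketch of how a user-selected 3D point might be combined with ranked grasp candidates. The data structures and the 5 cm radius are illustrative assumptions, not the paper's actual implementation.

        from dataclasses import dataclass
        import numpy as np

        @dataclass
        class GraspCandidate:
            position: np.ndarray   # grasp centre in the camera frame (metres)
            score: float           # detector confidence in [0, 1]

        def select_grasp(candidates, target, radius=0.05):
            """Keep candidates near the user-selected point, then take the
            highest-scoring one (hypothetical selection rule)."""
            near = [c for c in candidates
                    if np.linalg.norm(c.position - target) < radius]
            if not near:
                raise RuntimeError("no grasp detected near the selected object")
            return max(near, key=lambda c: c.score)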

    Intervention AUVs: The Next Challenge

    While commercially available AUVs are routinely used in survey missions, a new set of applications exists which clearly demands intervention capabilities. The maintenance of permanent underwater observatories, submerged oil wells, cabled sensor networks, and pipes, as well as the deployment and recovery of benthic stations, are a few of them. These tasks are addressed nowadays using manned submersibles or work-class ROVs equipped with teleoperated arms under human supervision. Although researchers have recently opened the door to future I-AUVs, a long path is still necessary to achieve autonomous underwater interventions. This paper reviews the evolution timeline of autonomous underwater intervention systems. Milestone projects in the state of the art are reviewed, highlighting their principal contributions to the field. To the best of the authors' knowledge, only three vehicles have demonstrated some autonomous intervention capabilities so far: ALIVE, SAUVIM and GIRONA 500, the last being the lightest. In this paper, the GIRONA 500 I-AUV is presented and its software architecture discussed. Recent results in different scenarios are reported: 1) valve turning and connector plugging/unplugging while docked to a subsea panel, 2) free-floating valve turning using learning by demonstration, and 3) multipurpose free-floating object recovery. The paper ends by discussing the lessons learned so far.
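    As an illustration of the learning-by-demonstration ingredient in scenario 2, the sketch below encodes one demonstrated valve-angle trajectory as a one-dimensional dynamic movement primitive (DMP), a common realization of skill transfer from demonstration. The formulation and all gains are assumptions, not the GIRONA 500's actual implementation.

        import numpy as np

        ALPHA, BETA, ALPHA_X = 25.0, 6.25, 4.0   # illustrative gains

        def learn_dmp(y_demo, dt, n_basis=20):
            """Fit forcing-term weights of a 1-D DMP to one demonstration."""
            tau = len(y_demo) * dt
            yd = np.gradient(y_demo, dt)
            ydd = np.gradient(yd, dt)
            y0, g = y_demo[0], y_demo[-1]
            t = np.arange(len(y_demo)) * dt
            x = np.exp(-ALPHA_X * t / tau)                     # canonical phase
            f_target = tau**2 * ydd - ALPHA * (BETA * (g - y_demo) - tau * yd)
            c = np.exp(-ALPHA_X * np.linspace(0, 1, n_basis))  # basis centres
            h = n_basis / c                                    # basis widths
            psi = np.exp(-h * (x[:, None] - c) ** 2)           # RBF activations
            xi = x * (g - y0)                                  # phase-scaled amplitude
            w = psi.T @ (f_target * xi) / (psi.T @ xi**2 + 1e-10)
            return w, c, h, y0, g, tau

        def run_dmp(w, c, h, y0, g, tau, dt=0.01):
            """Reproduce the skill, possibly toward a new goal angle g."""
            y, yd, x, out = y0, 0.0, 1.0, []
            while x > 1e-3:
                psi = np.exp(-h * (x - c) ** 2)
                f = (psi @ w) / (psi.sum() + 1e-10) * x * (g - y0)
                ydd = (ALPHA * (BETA * (g - y) - tau * yd) + f) / tau**2
                yd += ydd * dt
                y += yd * dt
                x += -ALPHA_X * x / tau * dt
                out.append(y)
            return np.array(out)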

    Exploring Natural User Abstractions For Shared Perceptual Manipulator Task Modeling & Recovery

    State-of-the-art domestic robot assistants are essentially autonomous mobile manipulators capable of exerting human-scale precision grasps. To maximize utility and economy, non-technical end-users would need to be nearly as efficient as trained roboticists in controlling and collaborating on manipulation task behaviors. This remains a significant challenge, given that many WIMP-style tools require proficiency in robotics, 3D graphics, and computer science for rapid task modeling and recovery. Research on robot-centric collaboration has gained momentum in recent years: robots now plan in partially observable environments that maintain geometries and semantic maps, presenting opportunities for non-experts to cooperatively control task behavior with autonomous-planning agents that exploit this knowledge. However, as autonomous systems are not immune to errors under perceptual difficulty, a human-in-the-loop is needed to bias autonomous planning towards recovery conditions that resume the task and avoid similar errors. In this work, we explore interactive techniques allowing non-technical users to model task behaviors and perceive cooperatively with a service robot under robot-centric collaboration. We evaluate stylus and touch modalities through which users can intuitively and effectively convey natural abstractions of high-level tasks, semantic revisions, and geometries about the world. Experiments are conducted with 'pick-and-place' tasks in an ideal 'Blocks World' environment using a Kinova JACO six degree-of-freedom manipulator. Possibilities for the architecture and interface are demonstrated with the following features: (1) semantic 'Object' and 'Location' grounding that describes function and ambiguous geometries, (2) task specification with an unordered list of goal predicates, and (3) guiding task recovery with implied scene geometries and trajectories via symmetry cues and configuration-space abstraction. Empirical results from four user studies show our interface was strongly preferred over the control condition, demonstrating high learnability and ease of use that enabled our non-technical participants to model complex tasks, provide effective recovery assistance, and exercise teleoperative control.
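    Feature (2), an unordered list of goal predicates, can be illustrated with a small Blocks World sketch. The predicate names and the world representation are hypothetical, chosen only to show why goal order does not matter.

        from typing import NamedTuple, Set

        class On(NamedTuple):   # On(obj, loc): obj should end up resting at loc
            obj: str
            loc: str

        def unmet(goals: Set[On], world: dict) -> Set[On]:
            """Return goal predicates not yet satisfied; the planner may
            pursue any element next, since the goal list is unordered."""
            return {g for g in goals if world.get(g.obj) != g.loc}

        goals = {On("red_block", "tray"), On("blue_block", "shelf")}
        world = {"red_block": "table", "blue_block": "shelf"}
        print(unmet(goals, world))   # -> {On(obj='red_block', loc='tray')}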

    Design and Control of Smart Robotic Systems for Life-Science Applications: Biological Sampling and Strawberry Harvesting

    This thesis aims to contribute knowledge to support full automation in life-science applications, covering the design, development, control, and integration of robotic systems for sample preparation and strawberry harvesting. It is divided into two parts.
    Part I presents the development of robotic systems for the preparation of fungal samples for Fourier transform infrared (FTIR) spectroscopy. The first step developed a fully automated robot for homogenization of fungal samples using ultrasonication. The platform was constructed from a modified inexpensive 3D printer, equipped with a camera to distinguish sample wells from blank wells. Machine vision was also used to quantify the homogenization process through model fitting, showing that homogeneity level as a function of ultrasonication time is well fitted by exponential decay equations. Moreover, a feedback control strategy was proposed that uses the standard deviation of local homogeneity values to determine when to terminate ultrasonication. The second step extended the first to a fully automated robot covering the whole preparation process of fungal samples for FTIR spectroscopy, adding a newly designed centrifuge and a liquid-handling module for sample washing, concentration, and spotting. The new system uses machine vision with deep learning to identify the labware configuration, freeing users from entering labware information manually.
    Part II deals with robotic strawberry harvesting and proceeds in three stages. i) The first stage designed a novel cable-driven gripper with sensing capabilities, which has a high tolerance to positional errors and can reduce picking time thanks to an integrated storage container. The gripper's fingers form a closed space that opens to capture a fruit and closes to push the stem into the cutting area. Equipped with internal sensors, the gripper can steer the robotic arm to correct positional errors introduced by the vision system, improving robustness. The gripper and a detection method based on color thresholding were integrated into a complete strawberry-harvesting system. ii) The second stage improved and updated the first, addressing the challenges of unstructured environments by introducing a light-adaptive color-thresholding method for vision and a novel obstacle-separation algorithm for manipulation. At this stage, the new fully integrated dual-manipulator harvesting system was capable of picking strawberries continuously in polytunnels. The main scientific contribution of this stage is the novel obstacle-separation path-planning algorithm, which is fundamentally different from traditional path planning, where obstacles are typically avoided: the algorithm uses the gripper to push aside surrounding obstacles from an entrance, clearing the way for it to swallow the target strawberry. Improvements were also made to the gripper, the arm, and the control. iii) The third stage improved the obstacle-separation method by introducing a zig-zag push for both horizontal and upward directions and a novel dragging operation to separate upper obstacles from the target. The zig-zag push helps the gripper capture a target because the generated shaking motion breaks the static contact force between the target and the obstacles. The dragging operation addresses mis-capturing of obstacles located above the target: the gripper drags the target to a place with fewer obstacles and then pushes back to move the obstacles aside for further detachment. The separation paths are determined by the number and distribution of obstacles, based on a downsampled point cloud in the region of interest.
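    The Part I stopping rule lends itself to a short sketch: the homogeneity level decays roughly as h(t) = a·e^(-t/τ) + c, and sonication terminates once local homogeneity values agree. The tile-variance homogeneity measure and the threshold below are assumptions for illustration; the thesis's exact measure may differ.

        import numpy as np

        def local_homogeneity(gray, k=32):
            """Per-tile intensity variance over a well image; lower variance
            means a more homogeneous (better sonicated) sample."""
            h, w = gray.shape
            tiles = [gray[i:i + k, j:j + k]
                     for i in range(0, h - k + 1, k)
                     for j in range(0, w - k + 1, k)]
            return np.array([t.var() for t in tiles])

        def should_stop(gray, std_threshold=5.0):
            """Feedback termination: stop ultrasonication once the standard
            deviation of local homogeneity values falls below a threshold."""
            return local_homogeneity(gray).std() < std_threshold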

    Goal-Directed Reasoning and Cooperation in Robots in Shared Workspaces: an Internal Simulation Based Neural Framework

    From social dining in households to product assembly on manufacturing lines, goal-directed reasoning and cooperation with other agents in shared workspaces is a ubiquitous aspect of our day-to-day activities. Critical for such behaviours is the ability to spontaneously anticipate what is doable by oneself as well as by the interacting partner, based on the evolving environmental context, and thereby exploit this information to engage in goal-oriented action sequences. In the setting of an industrial task where two robots jointly assemble objects in a shared workspace, we describe a bio-inspired neural architecture for goal-directed action planning based on coupled interactions between multiple internal models, primarily of each robot's body and its peripersonal space. The internal models are learnt jointly through a process of sensorimotor exploration and then employed in a range of anticipations related to the feasibility and consequences of potential actions of the two industrial robots in the context of a joint goal. The ensuing behaviours are demonstrated in a real-world industrial scenario where two robots assemble industrial fuse-boxes from multiple constituent objects (fuses, fuse-stands) scattered randomly in their workspace. In a spatially unstructured and temporally evolving assembly scenario, the robots employ reward-based dynamics to plan and anticipate which objects to act on at which time instants so as to complete as many assemblies as possible. The shared spatial setting fundamentally necessitates planning collision-free trajectories and avoiding potential collisions between the robots. Furthermore, an interesting scenario where the assembly goal is not realizable by either robot individually, but only if they meaningfully cooperate, is used to demonstrate the interplay between perception, simulation of multiple internal models, and the resulting complementary goal-directed actions of both robots. Finally, the proposed neural framework is benchmarked against a typically engineered solution to evaluate its performance on the assembly task. The framework provides a computational outlook on emerging results from the neurosciences related to the learning and use of the body schema and peripersonal space for embodied simulation of action and prediction. While the experiments reported here engage the architecture in a complex planning task specifically, the internal-model-based framework is domain-agnostic, facilitating portability to several other tasks and platforms.
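    The plan-by-internal-simulation loop described above can be caricatured in a few lines: each robot forward-simulates candidate actions with its learnt internal models and commits to the feasible, highest-reward one. Everything here is a schematic placeholder for the learnt models, not the paper's actual architecture.

        from dataclasses import dataclass

        @dataclass
        class Outcome:
            feasible: bool     # reachable and collision-free in simulation
            progress: float    # predicted contribution to the joint assembly

        def plan_step(objects, simulate):
            """Pick the object whose simulated action is feasible and most
            rewarding; returning None defers to the partner robot."""
            best, best_r = None, float("-inf")
            for obj in objects:
                out = simulate(obj)             # internal forward simulation
                if out.feasible and out.progress > best_r:
                    best, best_r = obj, out.progress
            return best

        # Toy usage: the far fuse lies outside this robot's peripersonal space.
        sim = lambda o: Outcome(o != "fuse_far", {"fuse_near": 1.0}.get(o, 0.5))
        print(plan_step(["fuse_near", "fuse_far", "fuse_stand"], sim))  # fuse_near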

    Visual Perception System for Aerial Manipulation: Methods and Implementations

    Technology is advancing fast, and autonomous systems are becoming a reality. Companies are increasingly demanding robotized solutions to improve the efficiency of their operations. This is also the case for aerial robots: their unique capability of moving freely through the air makes them well suited for many tasks that are tedious or even dangerous for human operators. Nowadays, the vast range of available sensors and commercial drones makes them highly appealing solutions. However, substantial manual effort is still required to customize them to each task, given the number of possible environments, robot designs, and missions; researchers design different vision algorithms, hardware devices, and sensor setups to tackle each one. Currently, the field of aerial manipulation is emerging with the aim of extending the range of applications these robots can perform, such as inspection, maintenance, or even operating valves and other machines. This thesis presents an aerial manipulation system and a set of perception algorithms for automating aerial manipulation tasks. The complete design of the system is presented, along with a series of modular frameworks that facilitate the development of this kind of operation. First, research on object analysis for manipulation and grasp planning considering different object models is presented; depending on the object model, current grasp-analysis methods are reviewed and planning algorithms for both single and dual manipulators are shown. Secondly, the development of perception algorithms for object detection and pose estimation is presented. These allow the system to identify objects of many kinds in any scene and locate them for manipulation, producing the information required by the manipulation analyses described above. Thirdly, vision algorithms are presented that localize the robot in the environment while building a local map, which is beneficial for the manipulation tasks; these maps are enriched with semantic information from the detection algorithms. Finally, the hardware development of the aerial platform is presented, including lightweight manipulators and the invention of a tool for contact tasks on rigid surfaces that also serves as an estimator of the robot's position. All the techniques presented in this thesis have been validated through extensive experimentation on real aerial robotic platforms.
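    As one concrete example of the pose-estimation step, a detector that returns 2D keypoints of a known object can be turned into a 6-DoF pose with a standard PnP solve. The object model, detected pixels, and camera intrinsics below are placeholder values, and the thesis does not commit to this exact method.

        import numpy as np
        import cv2

        # Four known corners of a planar object, in the object frame (metres).
        model_pts = np.array([[0, 0, 0], [0.1, 0, 0],
                              [0.1, 0.1, 0], [0, 0.1, 0]], dtype=np.float32)
        # The same corners as detected in the image (pixels).
        image_pts = np.array([[320, 240], [400, 238],
                              [402, 318], [322, 320]], dtype=np.float32)
        # Pinhole intrinsics: focal length 600 px, principal point (320, 240).
        K = np.array([[600, 0, 320], [0, 600, 240], [0, 0, 1]], dtype=np.float32)

        ok, rvec, tvec = cv2.solvePnP(model_pts, image_pts, K, None)
        if ok:
            print("rotation (Rodrigues):", rvec.ravel())
            print("translation (m):", tvec.ravel())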