
    Real-time computation of distance to dynamic obstacles with multiple depth sensors

    We present an efficient method to evaluate distances between dynamic obstacles and a number of points of interest (e.g., placed on the links of a robot) when using multiple depth cameras. A depth-space oriented discretization of the Cartesian space is introduced that best represents the workspace monitored by a depth camera, including occluded points. A depth grid map can be initialized offline from the arrangement of the multiple depth cameras, and its peculiar search characteristics allow the information given by the multiple sensors to be fused online in a very simple and fast way. The real-time performance of the proposed approach is shown by means of collision avoidance experiments in which two Kinect sensors monitor a human-robot coexistence task.
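    A minimal sketch of the kind of grid-based distance query the abstract describes, assuming a Cartesian voxel grid fused from several depth cameras; the resolution, the fusion rule, and all function names below are illustrative assumptions rather than the authors' depth-space discretization or API.

```python
import numpy as np

# Assumed grid parameters (not taken from the paper).
CELL_SIZE = 0.05           # voxel edge length in metres
GRID_ORIGIN = np.zeros(3)  # workspace origin in the world frame

def fuse_occupancy(per_camera_grids):
    """Fuse boolean occupancy grids from multiple depth cameras.

    A cell is treated as an obstacle if any camera marks it as occupied
    (or cannot rule it out because it is occluded for that camera)."""
    fused = per_camera_grids[0].copy()
    for grid in per_camera_grids[1:]:
        fused |= grid
    return fused

def min_distance_to_obstacles(fused_grid, points_of_interest):
    """Return, for each point of interest (e.g. a robot link frame),
    the distance to the closest occupied voxel centre (brute force)."""
    occupied_idx = np.argwhere(fused_grid)                    # (K, 3) voxel indices
    centres = GRID_ORIGIN + (occupied_idx + 0.5) * CELL_SIZE  # voxel centres in metres
    distances = []
    for p in np.atleast_2d(points_of_interest):
        if len(centres) == 0:
            distances.append(np.inf)
        else:
            distances.append(np.min(np.linalg.norm(centres - p, axis=1)))
    return np.array(distances)
```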

    Human-Robot Collaboration: Safety by Design

    High-payload industrial robots, unlike collaborative robots, are not designed to work together with humans. Collaboration can only happen in situations where the human and the robot are separated by a distance that allows safety sensors to stop the robot system at any point if the human comes too close to the robot. Safety sensors cannot reason about risks, consequences, or countermeasures to prevent undesired outcomes (e.g., a collision between human and robot); they only react to proximity and can only send a severity signal to the robotic system (e.g., no human, slow speed, full stop). This paper presents a new way to use safety sensors: voxel-based, dynamic, collision state-space monitoring for human-robot collaboration with high-payload robots. The general architecture and some initial tests are presented, along with an introduction to the problem statement.
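    As a rough illustration of voxel-based proximity monitoring and the three-level severity signal mentioned above, the sketch below compares human-occupied and robot-occupied voxels and picks a command; the voxel size, the distance thresholds, and the signal names are assumptions made for the example, not the paper's state-space model.

```python
import numpy as np

# Assumed parameters (illustrative only).
VOXEL_SIZE = 0.1   # metres per voxel edge
SLOW_DIST = 1.5    # below this separation: slow speed
STOP_DIST = 0.5    # below this separation: full stop

def severity_signal(human_voxels, robot_voxels):
    """Derive a severity command from the minimum separation between
    human-occupied and robot-occupied voxel centres."""
    if len(human_voxels) == 0:
        return "no_human"
    human = np.asarray(human_voxels, dtype=float) * VOXEL_SIZE
    robot = np.asarray(robot_voxels, dtype=float) * VOXEL_SIZE
    # Pairwise distances between the two voxel sets (brute force).
    diffs = human[:, None, :] - robot[None, :, :]
    min_dist = np.min(np.linalg.norm(diffs, axis=-1))
    if min_dist < STOP_DIST:
        return "full_stop"
    if min_dist < SLOW_DIST:
        return "slow_speed"
    return "full_speed"   # human present but outside the monitored zone
```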

    Suitable task allocation in intelligent systems for assistive environments

    The growing need for technological assistance to support people with special needs demands increasingly efficient systems with better performance. With this aim, this work advances a multi-robot platform that allows the coordinated control of different agents and other elements in the environment to achieve autonomous behavior based on the user's needs or will. The environment is therefore structured according to the capabilities of each agent and element of the environment and of the dynamic context, in order to generate adequate actuation plans and coordinate their execution.
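    The abstract does not detail the allocation mechanism itself; purely as an illustration of capability-based task allocation in such an environment, one could score each agent's suitability for a task from its declared capabilities and greedily assign the best-suited free agent. All class fields, the scoring rule, and the example scenario below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    required: set        # capabilities the task needs, e.g. {"grasp"}

@dataclass
class Agent:
    name: str
    capabilities: set
    busy: bool = False

def suitability(agent, task):
    """Fraction of the required capabilities this agent provides."""
    if not task.required:
        return 1.0
    return len(agent.capabilities & task.required) / len(task.required)

def allocate(tasks, agents):
    """Greedily assign each task to the most suitable free agent."""
    plan = {}
    for task in tasks:
        candidates = [a for a in agents if not a.busy]
        if not candidates:
            break
        best = max(candidates, key=lambda a: suitability(a, task))
        if suitability(best, task) > 0:
            plan[task.name] = best.name
            best.busy = True
    return plan

# Hypothetical assistive scenario with two heterogeneous robots.
agents = [Agent("mobile_base", {"navigate", "carry"}),
          Agent("arm", {"grasp", "handover"})]
tasks = [Task("fetch_cup", {"navigate", "carry"}),
         Task("hand_cup_to_user", {"grasp", "handover"})]
print(allocate(tasks, agents))  # {'fetch_cup': 'mobile_base', 'hand_cup_to_user': 'arm'}
```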

    Flexible Supervised Autonomy for Exploration in Subterranean Environments

    While the capabilities of autonomous systems have been steadily improving in recent years, these systems still struggle to rapidly explore previously unknown environments without the aid of GPS-assisted navigation. The DARPA Subterranean (SubT) Challenge aimed to fast-track the development of autonomous exploration systems by evaluating their performance in real-world underground search-and-rescue scenarios. Subterranean environments present a plethora of challenges for robotic systems, such as limited communications, complex topology, visually degraded sensing, and harsh terrain. The presented solution enables long-term autonomy with minimal human supervision by combining a powerful and independent single-agent autonomy stack with higher-level mission management operating over a flexible mesh network. The autonomy suite deployed on quadruped and wheeled robots was fully independent, freeing the human supervisor to loosely oversee the mission and make high-impact strategic decisions. We also discuss lessons learned from fielding our system at the SubT Final Event, relating to vehicle versatility, system adaptability, and re-configurable communications.

    Dynamic mosaic planning for a robotic bin-packing system based on picked part and target box monitoring

    This paper describes the dynamic mosaic planning method developed in the context of the PICKPLACE European project. The dynamic planner has allowed the development of a robotic system capable of packing a wide variety of objects without having to be adjusted to each reference. The mosaic planning system consists of three modules. First, the picked-item monitoring module inspects the grasped item to find out how the robot has picked it. At the same time, the destination container is monitored online to obtain the actual status of the packing. To this end, we present a novel heuristic algorithm that, based on the point cloud of the scene, estimates the empty volume inside the container as empty maximal spaces (EMS). Finally, we present the dynamic IK-PAL mosaic planner, which dynamically estimates the optimal packing pose considering both the status of the picked part and the estimated EMSs. The developed method has been successfully integrated into a real robotic picking and packing system and validated with seven tests of increasing complexity. In these tests, we demonstrate the flexibility of the presented system in handling a wide range of objects in a real dynamic packaging environment. To our knowledge, this is the first time that a complete online picking and packing system has been deployed in a real robotic scenario, allowing mosaics to be created with arbitrary objects while considering the dynamics of a real robotic packing system. This article has been funded by the European Union's Horizon 2020 Research and Innovation Programme under grant agreement No. 780488, and by the project "5R-Red Cervera de Tecnologias roboticas en fabricacion inteligente", contract number CER-20211007, under the "Centros Tecnologicos de Excelencia Cervera" programme funded by the Centre for the Development of Industrial Technology (CDTI).
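    A much-simplified sketch of estimating empty space inside the target box from a point cloud of its contents, using a per-cell heightmap: the paper's actual EMS heuristic is more elaborate (it merges cells into maximal boxes), and the grid resolution, container frame, and function names here are assumptions.

```python
import numpy as np

def heightmap(points, box_size, cell=0.02):
    """Rasterise the content point cloud (container frame, metres) into a
    2D map of the highest occupied point per floor cell."""
    nx = int(np.ceil(box_size[0] / cell))
    ny = int(np.ceil(box_size[1] / cell))
    hmap = np.zeros((nx, ny))
    for x, y, z in points:
        i = min(int(x / cell), nx - 1)
        j = min(int(y / cell), ny - 1)
        hmap[i, j] = max(hmap[i, j], z)
    return hmap

def empty_spaces(points, box_size, cell=0.02):
    """Return one axis-aligned empty box per floor cell (position, size)
    plus the total empty volume; merging adjacent cells into maximal
    spaces, as an EMS method would, is deliberately omitted."""
    hmap = heightmap(points, box_size, cell)
    spaces = []
    for (i, j), h in np.ndenumerate(hmap):
        free_h = box_size[2] - h
        if free_h > 0:
            spaces.append(((i * cell, j * cell, h), (cell, cell, free_h)))
    total_volume = sum(sx * sy * sz for _, (sx, sy, sz) in spaces)
    return spaces, total_volume
```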

    Camera arrangement optimization for workspace monitoring in human-robot collaboration

    Human-robot interaction is becoming an integral part of practice, and there is a greater emphasis on safety in workplaces where a robot may bump into a worker. In practice, there are solutions that control the robot based on the potential energy of a collision or that re-plan a straight-line trajectory. However, a sensor system must be designed to detect obstacles across the whole human-robot shared workspace. So far, there is no procedure that engineers can follow in practice to deploy sensors ideally. We propose classifying the space with an importance index, which determines which parts of the workspace the sensors should cover to ensure ideal obstacle sensing. The ideal camera positions can then be found automatically according to this classified map. In our experiment, the coverage of the important volume achieved by the calculated camera position was on average 37% greater than that of a camera placed intuitively by test subjects. With two cameras at the workplace, the calculated positions were 27% more effective than the subjects' camera positions; with three cameras, the calculated positions were 13% better, with total coverage of more than 99% of the classified map.
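    A sketch of how importance-weighted camera placement could be approached with a greedy coverage heuristic over the classified map; the importance map, candidate poses, and visibility test are assumed inputs, and this greedy scheme is a common approximation for coverage problems rather than the paper's actual optimizer.

```python
import numpy as np

def coverage_score(covered_mask, importance):
    """Importance-weighted fraction of the workspace that is covered."""
    return float(np.sum(importance[covered_mask]) / np.sum(importance))

def place_cameras(candidate_poses, visibility, importance, n_cameras):
    """Greedily pick n_cameras poses that maximise weighted coverage.

    visibility(pose) must return a boolean array over the classified map
    marking which voxels that camera would observe (occlusions included)."""
    chosen = []
    covered = np.zeros_like(importance, dtype=bool)
    for _ in range(n_cameras):
        best_pose, best_gain = None, 0.0
        for pose in candidate_poses:
            if pose in chosen:
                continue
            # Importance newly gained by adding this camera.
            gain = np.sum(importance[visibility(pose) & ~covered])
            if gain > best_gain:
                best_pose, best_gain = pose, gain
        if best_pose is None:
            break
        chosen.append(best_pose)
        covered |= visibility(best_pose)
    return chosen, coverage_score(covered, importance)
```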

    Cognitive Robotics in Industrial Environments
