2,506 research outputs found

    Occupational health and safety issues in human-robot collaboration: State of the art and open challenges

    Human-Robot Collaboration (HRC) refers to the interaction of workers and robots in a shared workspace. By combining the strengths of industrial automation with the unique cognitive capabilities of humans, HRC is key to moving towards advanced and sustainable production systems. Although the overall safety of collaborative robotics has improved over time, further research is needed to allow humans to operate alongside robots with awareness and trust. Numerous safety concerns remain open, and new or enhanced technical, procedural, and organizational measures must be investigated to design and implement inherently safe and ergonomic automation solutions that align system performance with human safety. Therefore, a bibliometric analysis and a literature review are carried out in the present paper to provide a comprehensive overview of Occupational Health and Safety (OHS) issues in HRC. As a result, the most researched topics and application areas, as well as possible future lines of research, are identified. Reviewed articles stress the central role played by humans during collaboration, underlining the need to integrate the human factor into hazard analysis and risk assessment. Human-centered design and cognitive engineering principles also require further investigation to increase worker acceptance and trust during collaboration. Deeper studies are needed in the healthcare sector to investigate the social and ethical implications of HRC. Whatever the application context, the implementation of increasingly advanced technologies is fundamental to overcoming current HRC safety concerns and designing low-risk HRC systems while ensuring system productivity.

    Teleoperation Methods for High-Risk, High-Latency Environments

    In-Space Servicing, Assembly, and Manufacturing (ISAM) can enable larger-scale and longer-lived infrastructure projects in space, with interest ranging from commercial entities to the US government. Servicing, in particular, has the potential to vastly increase the usable lifetimes of satellites. However, the vast majority of spacecraft in low Earth orbit today were not designed to be serviced on-orbit. As such, several of the manipulations during servicing cannot easily be automated and instead require ground-based teleoperation. Ground-based teleoperation of on-orbit robots brings its own challenges of high-latency communications, with telemetry delays of several seconds, and difficulties in visualizing the remote environment due to limited camera views. We explore teleoperation methods to alleviate these difficulties, increase task success, and reduce operator load. First, we investigate a model-based teleoperation interface intended to provide the benefits of direct teleoperation even in the presence of time delay. We evaluate the model-based teleoperation method using professional robot operators, then use feedback from that study to inform the design of a visual planning tool for this task, Interactive Planning and Supervised Execution (IPSE). We describe and evaluate the IPSE system and two interfaces, one 2D using a traditional mouse and keyboard and one 3D using an Intuitive Surgical da Vinci master console. We then describe and evaluate an alternative 3D interface using a Meta Quest head-mounted display. Finally, we describe an extension of IPSE to allow human-in-the-loop planning for a redundant robot. Overall, we find that IPSE improves task success rate and decreases operator workload compared to a conventional teleoperation interface.
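
    The abstract does not give implementation details of the model-based interface. As a rough illustration of the general idea (operator commands drive a local model immediately for a predictive display, while telemetry that arrives seconds late is used to correct that model), a minimal sketch might look like the following; the class, the simple integrator model, and all names are assumptions, not the thesis's actual design.

```python
import collections
import numpy as np

class PredictiveTeleop:
    """Toy model-based teleoperation loop: operator commands update a local
    kinematic model immediately (for a predictive display), while telemetry
    arriving several seconds late is used to correct that model."""

    def __init__(self, initial_pose):
        self.model_pose = np.asarray(initial_pose, dtype=float)  # predicted pose shown to the operator
        self.pending = collections.deque()  # command deltas sent but not yet confirmed by telemetry

    def send_command(self, velocity, dt):
        """Apply the command to the local model right away and remember it."""
        delta = np.asarray(velocity, dtype=float) * dt
        self.model_pose = self.model_pose + delta
        self.pending.append(delta)

    def on_telemetry(self, delayed_pose):
        """Telemetry reflects the state after only the oldest pending command;
        re-apply the remaining pending deltas on top of it."""
        if self.pending:
            self.pending.popleft()
        self.model_pose = np.asarray(delayed_pose, dtype=float) + sum(self.pending)

    def displayed_pose(self):
        return self.model_pose
```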

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    DoReMi: Grounding Language Model by Detecting and Recovering from Plan-Execution Misalignment

    Large language models encode a vast amount of semantic knowledge and possess remarkable understanding and reasoning capabilities. Previous research has explored how to ground language models in robotic tasks to ensure that the sequences generated by the language model are both logically correct and practically executable. However, low-level execution may deviate from the high-level plan due to environmental perturbations or imperfect controller design. In this paper, we propose DoReMi, a novel language model grounding framework that enables immediate Detection and Recovery from Misalignments between plan and execution. Specifically, LLMs are leveraged both for planning and for generating constraints for the planned steps. These constraints can indicate plan-execution misalignments, and we use a vision question answering (VQA) model to check the constraints during low-level skill execution. If a misalignment occurs, our method calls the language model to re-plan in order to recover. Experiments on various complex tasks, including robot arms and humanoid robots, demonstrate that our method leads to higher task success rates and shorter task completion times. Videos of DoReMi are available at https://sites.google.com/view/doremi-paper. (21 pages, 13 figures)
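
    As a rough sketch of the detect-and-recover loop summarized above (the paper's actual prompts, models, and interfaces are not reproduced; every callable below is a hypothetical placeholder supplied by the caller):

```python
def run_task(goal, llm_plan, llm_constraints, execute_step, vqa_check, describe_scene,
             max_replans=3):
    """Hypothetical DoReMi-style loop. llm_plan/llm_constraints query the language
    model, execute_step runs one low-level skill while periodically calling the
    supplied checker, vqa_check asks a VQA model whether a constraint still holds,
    and describe_scene summarizes the current observation for re-planning."""
    plan = llm_plan(goal)                        # list of low-level skill descriptions
    constraints = llm_constraints(goal, plan)    # one textual constraint per step
    replans, step = 0, 0
    while step < len(plan):
        constraint = constraints[step]
        ok = execute_step(plan[step], checker=lambda image: vqa_check(image, constraint))
        if ok:
            step += 1
            continue
        # Constraint violated mid-execution: ask the language model to recover.
        if replans >= max_replans:
            return False
        replans += 1
        plan = llm_plan(goal, context=describe_scene())
        constraints = llm_constraints(goal, plan)
        step = 0
    return True
```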

    Extension of the Control Concept for a Mobile Overhead Manipulator to Whole-Body Impedance Control

    At present, robots constitute a central component of contemporary factories. The application of traditional ground-based systems, however, may lead to congested floors with minimal space left for new robots or human workers. Overhead manipulators, on the other hand, aim to occupy the unutilized ceiling space in order to manipulate the workspace located below them. The SwarmRail system is an example of such an overhead manipulator. This concept deploys mobile units driving across a passive rail structure above the ground. Additionally, equipping the mobile units with robotic arms on their bottom side enables this design to provide continuous overhead manipulation while in motion. Although a first demonstrator confirmed the functional capability of the system, the current hardware suffers from complications while traversing rail crossings. Because consecutive rails are not perfectly level, these crossing points cause the robot's wheels to collide with the rail segment it is driving towards, and the robot experiences an undesired sudden altitude change. In this thesis, we aim to implement a hierarchical whole-body impedance tracking controller for the robots employed within the SwarmRail system. Our controller combines a kinematically controlled mobile unit with the impedance-based control of a robotic arm through an admittance interface. The focus of this thesis is the controller's robustness against the previously mentioned external disturbances. The performance of the controller is validated in a simulation that incorporates the aforementioned complications. Our findings suggest that the control strategy presented in this thesis provides a foundation for the development of a controller applicable to the physical demonstrator.
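
    The thesis's actual control law is not given in the abstract. As an illustrative sketch of an admittance interface of the kind described (mapping the interaction wrench from the impedance-controlled arm into motion commands for the kinematically controlled mobile unit), a virtual mass-damper integration is a common minimal form; the gains, dimensions, and names below are assumptions.

```python
import numpy as np

class AdmittanceInterface:
    """Toy planar admittance: integrate M * a + D * v = f_ext so that the
    kinematically controlled mobile unit yields to interaction wrenches
    transmitted by the impedance-controlled arm."""

    def __init__(self, virtual_mass=5.0, virtual_damping=20.0):
        self.M = virtual_mass
        self.D = virtual_damping
        self.v = np.zeros(3)   # commanded base twist (x, y, yaw), assumed layout

    def step(self, f_ext, dt):
        # Virtual dynamics: acceleration produced by the measured external wrench.
        a = (np.asarray(f_ext, dtype=float) - self.D * self.v) / self.M
        self.v = self.v + a * dt
        return self.v          # fed to the base's kinematic velocity controller
```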

    Using an HSV-based approach for detecting and grasping an object by the industrial manipulator system

    In the context of the industrialization era, robots are gradually replacing workers in some production stages. There is an irreversible trend toward incorporating image processing techniques into robot control. In recent years, vision-based techniques have achieved significant milestones. However, most of these techniques require complex setups, specialized cameras, and skilled operators, and they carry a heavy computational burden. This paper presents an efficient vision-based solution for object detection and grasping in indoor environments. The framework of the system, encompassing geometrical constraints, robot control theories, and the hardware platform, is described. The proposed method, covering calibration through visual estimation, is detailed for handling the detection and grasping task. The efficiency, feasibility, and applicability of our approach are evident from the results of both theoretical simulations and experiments.
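
    The paper's exact colour ranges and pipeline are not reproduced here; a minimal HSV-based detection step of the kind described typically looks like the following sketch, where the threshold values and morphology kernel are placeholders to be tuned for the target object.

```python
import cv2
import numpy as np

def detect_object(frame_bgr, lower_hsv=(0, 120, 70), upper_hsv=(10, 255, 255)):
    """Threshold the frame in HSV space and return the pixel centroid (u, v)
    of the largest matching blob, or None if nothing is found."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    # Remove small speckles before looking for the object contour.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
```

    The centroid would then be mapped to a grasp pose through the calibration and geometrical constraints the paper describes.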

    Safety-Aware Human-Robot Collaborative Transportation and Manipulation with Multiple MAVs

    Human-robot interaction will play an essential role in various industries and daily tasks, enabling robots to collaborate effectively with humans and reduce their physical workload. Most existing approaches for physical human-robot interaction focus on collaboration between a human and a single ground robot. Very little progress has been made in this research area for aerial robots, which offer increased versatility and mobility compared to their grounded counterparts. This paper proposes a novel approach for safe human-robot collaborative transportation and manipulation of a cable-suspended payload with multiple aerial robots. We leverage the proposed method to enable smooth and intuitive interaction between the transported objects and a human worker, while considering safety constraints during operation by exploiting the redundancy of the internal transportation system. The key elements of our system are (a) a distributed payload external wrench estimator that does not rely on any force sensor; (b) a 6D admittance controller for human-aerial-robot collaborative transportation and manipulation; and (c) a safety-aware controller that exploits the internal system redundancy to guarantee the execution of additional tasks devoted to preserving human or robot safety without affecting payload trajectory tracking or the quality of interaction. We validate the approach through extensive simulation and real-world experiments, including the robot team assisting the human in transporting and manipulating a load, as well as the human helping the robot team navigate the environment. To the best of our knowledge, this work is the first to create an interactive and safety-aware approach for quadrotor teams that physically collaborate with a human operator during transportation and manipulation tasks. (Guanrui Li and Xinyang Liu contributed equally to this paper)
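
    The authors' controllers are only summarized above. A generic illustration of how kinematic redundancy lets a secondary safety task run without disturbing the primary payload-tracking task is the classic null-space projection; the function below is a textbook sketch, not the paper's formulation, and the Jacobian and task vectors are assumptions.

```python
import numpy as np

def redundant_velocity_command(J, xdot_payload, qdot_safety):
    """Primary task: track the payload velocity xdot_payload through the task
    Jacobian J. Secondary task: a safety motion qdot_safety (e.g., keeping
    distance from the human), projected into the null space of J so that it
    cannot affect the payload trajectory."""
    J_pinv = np.linalg.pinv(J)
    null_proj = np.eye(J.shape[1]) - J_pinv @ J
    return J_pinv @ xdot_payload + null_proj @ qdot_safety
```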

    Beam scanning by liquid-crystal biasing in a modified SIW structure

    A fixed-frequency beam-scanning 1D antenna based on Liquid Crystals (LCs) is designed for application in 2D scanning with lateral alignment. The 2D array environment imposes full decoupling of adjacent 1D antennas, which often conflicts with the LC requirement of DC biasing: the proposed design accommodates both. The LC medium is placed inside a Substrate Integrated Waveguide (SIW), modified to work as a Groove Gap Waveguide with radiating slots etched on the upper broad wall, that radiates as a Leaky-Wave Antenna (LWA). This allows effective application of the DC bias voltage needed for tuning the LCs. At the same time, the RF field remains laterally confined, making it possible to lay several antennas in parallel and achieve 2D beam scanning. The design is validated by simulations employing the actual properties of a commercial LC medium.
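
    The abstract does not state the dispersion relation used in the design. As a first-order reminder of why tuning the LC permittivity scans the beam at a fixed frequency, the textbook leaky-wave relations below suffice (an approximation for a dielectric-filled guide, not the paper's design equations):

```latex
% DC bias changes the LC permittivity, which changes the guided phase constant,
% which steers the main beam at a fixed operating frequency.
\beta(\varepsilon_{r,\mathrm{LC}}) \;=\; \sqrt{\varepsilon_{r,\mathrm{LC}}\,k_0^{2} - k_c^{2}},
\qquad
\theta_m \;\approx\; \arcsin\!\left(\frac{\beta(\varepsilon_{r,\mathrm{LC}})}{k_0}\right)
```

    Here k_0 is the free-space wavenumber and k_c the cutoff wavenumber of the guiding structure, so increasing the biased LC permittivity increases the phase constant and tilts the main beam further from broadside.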

    ABC: Adaptive, Biomimetic, Configurable Robots for Smart Farms - From Cereal Phenotyping to Soft Fruit Harvesting

    Currently, numerous factors, such as demographics, migration patterns, and economics, are leading to a critical labour shortage in low-skilled and physically demanding parts of agriculture. Robotics can therefore be developed for the agricultural sector to address these shortages. This study aims to develop an adaptive, biomimetic, and configurable modular robotics architecture that can be applied to multiple tasks (e.g., phenotyping, cutting, and picking), various crop varieties (e.g., wheat, strawberry, and tomato), and different growing conditions. These robotic solutions cover the entire perception–action–decision-making loop, targeting the phenotyping of cereals and the harvesting of fruit in natural environments. The primary contributions of this thesis are as follows. a) A high-throughput method for imaging field-grown wheat in three dimensions, along with an accompanying unsupervised measuring method for obtaining individual wheat spike data, is presented. The unsupervised method analyses the 3D point cloud of each trial plot, containing hundreds of wheat spikes, and calculates the average spike size and the total spike volume per plot. Experimental results reveal that the proposed algorithm can effectively identify individual spikes within wheat crops. b) Unlike cereal, soft fruit is typically harvested by manual selection and picking. To enable robotic harvesting, the initial perception system uses conditional generative adversarial networks to identify ripe fruits using synthetic data. To determine whether a strawberry is surrounded by obstacles, a cluster-complexity-based perception system is further developed to classify the harvesting complexity of ripe strawberries. c) Once the harvest-ready fruit is localised using point cloud data generated by a stereo camera, the platform's action system coordinates the arm to reach and cut the stem using the passive motion paradigm framework, inspired by studies on the neural control of movement in the brain. Results from field trials are presented for strawberry detection, reaching/cutting the stem of the fruit with a mean error of less than 3 mm, and extensions to analysing complex canopy structures and bimanual coordination (searching/picking). Although this thesis focuses on strawberry harvesting, ongoing research is heading toward adapting the architecture to other crops. The agricultural food industry remains a labour-intensive sector with low margins and a cost- and time-efficiency-driven business model. The concepts presented herein can serve as a reference for future agricultural robots that are adaptive, biomimetic, and configurable.
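
    The thesis's unsupervised spike-measurement algorithm is not reproduced here; a rough sketch of the general idea (cluster the plot's 3D point cloud into individual spikes, then estimate per-spike and total volume) could look like the following, with sklearn/scipy and the tuning values standing in as assumptions for whatever the author actually used.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from scipy.spatial import ConvexHull, QhullError

def spike_statistics(points, eps=0.02, min_samples=30):
    """points: (N, 3) array for one trial plot. Returns (mean spike volume,
    total spike volume) from the convex-hull volume of each cluster.
    eps/min_samples are placeholder tuning values."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    volumes = []
    for label in set(labels):
        if label == -1:           # DBSCAN noise points
            continue
        cluster = points[labels == label]
        if len(cluster) < 4:      # a 3D hull needs at least 4 points
            continue
        try:
            volumes.append(ConvexHull(cluster).volume)
        except QhullError:        # degenerate (e.g., coplanar) cluster
            continue
    if not volumes:
        return 0.0, 0.0
    return float(np.mean(volumes)), float(np.sum(volumes))
```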

    Design of autonomous robotic system for removal of porcupine crab spines

    Among various types of crabs, the porcupine crab is recognized as a highly promising crab meat resource in the offshore northwest Atlantic Ocean. However, their long, sharp spines make them difficult to handle manually. Although automation technology is widely employed in the commercial seafood processing industry, manual processing methods still dominate today's crab processing, which causes low production rates and high manufacturing costs. This thesis proposes a novel robot-based porcupine crab spine removal method. Based on the 2D image and 3D point cloud data captured by a Microsoft Azure Kinect RGB-D camera, the crab's 3D point cloud model can be reconstructed using the proposed point cloud processing method. After that, a novel point cloud slicing method and a combined 2D image and 3D point cloud method are proposed to generate the robot spine removal trajectory. The 3D model of the crab with its actual dimensions, the robot working cell, and the end-effector are established in Solidworks [1] and imported into the Robot Operating System (ROS) [2] simulation environment for methodology validation and design optimization. The simulation results show that both the point cloud slicing method and the combined 2D and 3D method can generate a smooth and feasible trajectory. Moreover, compared with the point cloud slicing method, the combined 2D and 3D method is more precise and efficient, which has been validated in a real experimental environment. An automated experiment platform, featuring a 3D-printed end-effector and crab model, has been successfully set up. Results from the experiments indicate that the crab model can be accurately reconstructed, and the centre-line equations of each spine were calculated to generate a spine removal trajectory. When executed with a real robot arm, all spines were removed successfully. This thesis demonstrates the proposed method's capability to achieve the expected results and its potential for application in other manufacturing processes such as painting, polishing, and deburring of parts with different shapes and materials.
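
    The thesis's slicing method is only named above. The general idea of slicing a point cloud to obtain a trajectory can be sketched as below; the slicing axis, slab thickness, and the use of slice centroids as waypoints are assumptions for illustration, not the thesis's actual procedure.

```python
import numpy as np

def slice_trajectory(points, axis=2, slice_thickness=0.005):
    """Cut an (N, 3) point cloud into slabs along one axis and return the
    centroid of each non-empty slab as an ordered list of waypoints."""
    coords = points[:, axis]
    edges = np.arange(coords.min(), coords.max() + slice_thickness, slice_thickness)
    waypoints = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        slab = points[(coords >= lo) & (coords < hi)]
        if len(slab) > 0:
            waypoints.append(slab.mean(axis=0))
    return np.array(waypoints)
```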