
    Human-robot coexistence and interaction in open industrial cells

    Recent research results on human-robot interaction and collaborative robotics are leaving behind the traditional paradigm of robots confined to a separate space inside safety cages, allowing humans and robots to work together to complete an increasing number of complex industrial tasks. In this context, the safety of the human operator is a primary concern. In this paper, we present a framework for ensuring human safety in a robotic cell that allows human-robot coexistence and dependable interaction. The framework is based on a layered control architecture that exploits an effective algorithm for online monitoring of the relative human-robot distance using depth sensors. This method makes it possible to modify the robot behavior in real time depending on the user's position, without restricting the robot's operative workspace in an overly conservative way. In order to guarantee redundancy and diversity at the safety level, additional certified laser scanners monitor human-robot proximity in the cell, and safe communication protocols and logical units are used for smooth integration with industrial software for safe low-level robot control. The implemented concept includes a smart human-machine interface to support in-process collaborative activities and contactless interaction through gesture recognition of operator commands. Coexistence and interaction are illustrated and tested in an industrial cell, in which a robot moves a tool that measures the quality of a polished metallic part while the operator performs a close evaluation of the same workpiece.
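
    The real-time behavior modification described above amounts to speed-and-separation monitoring: scale the commanded velocity by the measured human-robot distance. The Python sketch below illustrates the idea only; the thresholds and the brute-force distance computation are assumptions, not the paper's algorithm.

        import numpy as np

        # Illustrative thresholds (m); real values come from a risk assessment.
        STOP_DIST = 0.5    # below this separation the robot halts
        SLOW_DIST = 1.5    # below this separation the speed scales down

        def min_human_robot_distance(human_pts, robot_pts):
            """Smallest pairwise distance between the human point cloud
            (from the depth sensors) and sample points on the robot."""
            diffs = human_pts[:, None, :] - robot_pts[None, :, :]
            return float(np.linalg.norm(diffs, axis=2).min())

        def speed_override(distance):
            """Map the current separation to a velocity override in [0, 1]."""
            if distance <= STOP_DIST:
                return 0.0                                   # protective stop
            if distance >= SLOW_DIST:
                return 1.0                                   # full speed
            return (distance - STOP_DIST) / (SLOW_DIST - STOP_DIST)

        # One control cycle with synthetic point clouds:
        human = np.random.rand(200, 3) * 2.0
        robot = np.random.rand(50, 3) * 2.0
        print(f"override = {speed_override(min_human_robot_distance(human, robot)):.2f}")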

    A Rapidly Reconfigurable Robotics Workcell and Its Applications for Tissue Engineering

    This article describes the development of a component-based robot system that can be rapidly configured to perform a specific manufacturing task. The system is conceived around standard, interoperable components, including actuator modules, rigid link connectors, and tools, that can be assembled into robots with arbitrary geometry and degrees of freedom. Reconfigurable "plug-and-play" robot kinematic and dynamic modeling algorithms are developed; these algorithms are the basis for the control and simulation of reconfigurable robots. The concept of robot configuration optimization is introduced for the effective use of rapidly reconfigurable robots. Control and communications among the workcell components are facilitated by a workcell-wide TCP/IP network and device-level CAN-bus networks. Object-oriented simulation and visualization software for the reconfigurable robot is developed on Windows NT. Prototypes of robot systems configured to perform a 3D contour-following task and a positioning task are constructed and demonstrated. Applications of such systems for biomedical tissue scaffold fabrication are considered. Singapore-MIT Alliance (SMA)
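
    A thin command relay over the workcell-wide TCP/IP network might look like the sketch below. The JSON-per-line wire format, the port, and the gateway are illustrative assumptions, not the article's actual protocol, and the device-level CAN traffic is stubbed out.

        import json
        import socket

        # Hypothetical wire format: one JSON command per line over TCP.
        # A gateway node would translate such workcell-level commands into
        # frames on a device-level CAN bus (CAN access not shown here).

        def send_module_command(host, port, module_id, joint_angle_rad):
            """Send a position setpoint to one actuator module over the workcell LAN."""
            cmd = {"module": module_id, "cmd": "move_joint", "angle": joint_angle_rad}
            with socket.create_connection((host, port), timeout=2.0) as sock:
                sock.sendall((json.dumps(cmd) + "\n").encode())
                reply = sock.makefile().readline()
            return json.loads(reply)   # e.g. {"module": 3, "status": "ok"}

        # Usage (assumes a gateway listening on the workcell network):
        # ack = send_module_command("192.168.0.10", 5000, module_id=3, joint_angle_rad=0.25)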

    Automatic Modeling for Modular Reconfigurable Robotic Systems: Theory and Practice

    A modular reconfigurable robot consists of a collection of individual link and joint components that can be assembled into a number of different robot geometries. Compared to a conventional industrial robot with fixed geometry, such a system provides the user with the flexibility to cope with a wide spectrum of tasks.
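
    Automatic model generation for such a system can be illustrated by composing per-module homogeneous transforms into a forward-kinematics map for whatever geometry has been assembled. This is a generic sketch of the idea, not the paper's algorithm; the module description format is invented for illustration.

        import numpy as np

        def rot_z(theta):
            """Homogeneous rotation about the joint axis (taken here as local z)."""
            c, s = np.cos(theta), np.sin(theta)
            return np.array([[c, -s, 0, 0], [s, c, 0, 0],
                             [0, 0, 1, 0], [0, 0, 0, 1]])

        def link_offset(dx, dy, dz):
            """Fixed homogeneous transform contributed by a rigid link connector."""
            T = np.eye(4)
            T[:3, 3] = (dx, dy, dz)
            return T

        def forward_kinematics(link_transforms, joint_angles):
            """Chain the transforms of an arbitrary module assembly:
            one fixed link transform and one joint rotation per module."""
            T = np.eye(4)
            for T_link, q in zip(link_transforms, joint_angles):
                T = T @ T_link @ rot_z(q)
            return T

        # Example: a 3-DOF assembly built from three identical modules.
        links = [link_offset(0.0, 0.0, 0.3) for _ in range(3)]
        print(forward_kinematics(links, [0.1, -0.4, 0.7])[:3, 3])  # end-effector position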

    Worker-robot cooperation and integration into the manufacturing workcell via the holonic control architecture

    Cooperative manufacturing is a new field of research that addresses challenges beyond the physical safety of the worker. These new challenges arise from the need to connect the worker and the cobot, from the informatics point of view, within one cooperative workcell. This requires developing an appropriate manufacturing control system that fits the nature of both the worker and the cobot. Furthermore, the manufacturing control system must be able to understand production variations, in order to guide the cooperation between the worker and the cobot and to adapt to those variations.
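
    In a holonic architecture, worker and cobot can each be wrapped in a software holon that negotiates task allocation, so the cell adapts as production varies. The sketch below is only a schematic reading of that idea, with invented class and method names; it is not the thesis's control system.

        from dataclasses import dataclass

        @dataclass
        class Task:
            name: str
            dexterity: float     # 0..1, how much human-like dexterity it needs
            payload_kg: float

        class Holon:
            """Autonomous, cooperative unit wrapping one resource."""
            def __init__(self, name):
                self.name = name
            def bid(self, task):
                raise NotImplementedError

        class WorkerHolon(Holon):
            def bid(self, task):
                return task.dexterity           # workers excel at dexterous steps

        class CobotHolon(Holon):
            def __init__(self, name, max_payload_kg):
                super().__init__(name)
                self.max_payload_kg = max_payload_kg
            def bid(self, task):
                if task.payload_kg > self.max_payload_kg:
                    return 0.0                  # cannot perform the task at all
                return 1.0 - task.dexterity     # cobots excel at repetitive steps

        def allocate(task, holons):
            """Order holon: assign the task to the highest bidder (per variant)."""
            return max(holons, key=lambda h: h.bid(task))

        team = [WorkerHolon("worker-1"), CobotHolon("cobot-1", max_payload_kg=5.0)]
        print(allocate(Task("insert clip", dexterity=0.8, payload_kg=0.1), team).name)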

    Robotics Technology Crosscutting Program. Technology summary


    Vision-enhanced Peg-in-Hole for automotive body parts using semantic image segmentation and object detection

    Artificial Intelligence (AI) is an enabling technology in the context of Industry 4.0. The automotive sector, in particular, is among those that can benefit most from the use of AI in conjunction with advanced vision techniques. The scope of this work is to integrate deep learning algorithms into an industrial scenario involving a robotic Peg-in-Hole task. More specifically, we focus on a scenario where a human operator manually positions a carbon fiber automotive part in the workspace of a 7-Degrees-of-Freedom (DOF) manipulator. To cope with the uncertainty in the relative position between the robot and the workpiece, we adopt a three-stage strategy. The first stage concerns the three-dimensional (3D) reconstruction of the workpiece using a registration algorithm based on the Iterative Closest Point (ICP) paradigm. This procedure is integrated with a semantic image segmentation neural network, which is in charge of removing the background of the scene to improve the registration; adopting this network reduces the registration time by about 28.8%. In the second stage, the reconstructed surface is compared with a Computer-Aided Design (CAD) model of the workpiece to locate the holes and their axes. In this stage, the adoption of a Convolutional Neural Network (CNN) improves the holes' position estimation by about 57.3%. The third stage concerns the insertion of the peg, implementing a search phase to handle the remaining estimation errors. Here too, the CNN reduces the search phase duration by about 71.3%. Quantitative experiments, including a comparison with a previous approach that uses neither the segmentation network nor the CNN, have been conducted in a realistic scenario. The results show the effectiveness of the proposed approach and how the integration of AI techniques improves the success rate from 84.5% to 99.0%.
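
    The first stage (segmentation-assisted ICP registration) can be pictured as masking the scene cloud before fitting, then running standard ICP. Below is a minimal sketch using the Open3D library; segment_foreground is a stand-in for the paper's segmentation network, and the parameter values are illustrative assumptions.

        import numpy as np
        import open3d as o3d

        def segment_foreground(points):
            """Stand-in for the semantic segmentation network: keep points
            near the part. A real system would run a trained CNN on the image."""
            return points[:, 2] < 1.0   # crude depth cut-off as a placeholder mask

        def register_workpiece(scene_points, cad_points, voxel=0.005):
            """Mask the scene, then ICP-align the CAD model to the masked cloud."""
            masked = scene_points[segment_foreground(scene_points)]

            scene = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(masked))
            model = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(cad_points))
            scene = scene.voxel_down_sample(voxel)
            model = model.voxel_down_sample(voxel)

            result = o3d.pipelines.registration.registration_icp(
                model, scene,
                max_correspondence_distance=0.01,
                init=np.eye(4),
                estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
            )
            return result.transformation   # 4x4 pose of the CAD model in the scene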

    Human Management of the Hierarchical System for the Control of Multiple Mobile Robots

    In order to take advantage of autonomous robotic systems, and yet ensure successful completion of all feasible tasks, we propose a mediation hierarchy in which an operator can interact at all system levels. Robotic systems are not robust in handling unmodeled events. Reactive behaviors may be able to guide the robot back into a modeled state so that it can continue, but reasoning systems may simply fail, and once a system has failed it is difficult to restart the task from the failed state. Rather, the rule base must be revised, programs altered, and the task retried from the beginning.
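
    The mediation idea, an operator who can intervene at any level of the control hierarchy, can be sketched as a chain of levels, each deferring to an operator override before running its own policy. The level names and override mechanism below are illustrative assumptions, not the paper's architecture.

        # Illustrative hierarchy, top (deliberative) to bottom (servo). The
        # operator may inject a decision at any level; otherwise each level's
        # autonomous policy runs.

        LEVELS = ["mission planner", "task sequencer", "behavior selector", "servo control"]

        def autonomous_policy(level, state):
            """Placeholder for each level's own decision logic."""
            return f"{level}: default action for state {state!r}"

        def step(state, operator_overrides):
            """Run one pass down the hierarchy, letting the operator mediate."""
            decisions = []
            for level in LEVELS:
                if level in operator_overrides:          # human intervention
                    decisions.append(f"{level}: OPERATOR -> {operator_overrides[level]}")
                else:                                    # normal autonomy
                    decisions.append(autonomous_policy(level, state))
            return decisions

        # Example: an unmodeled event at the task level; the operator
        # resequences that level while the rest stays autonomous.
        for line in step("peg jammed", {"task sequencer": "retry with wiggle"}):
            print(line)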

    An advanced telerobotic system for shuttle payload changeout room processing applications

    To potentially alleviate the inherent difficulties in the ground processing of the Space Shuttle and its associated payloads, a teleoperated, semi-autonomous robotic processing system for the Payload Changeout Room (PCR) is now in the conceptual stage. The complete PCR robotic system as currently conceived is described, and critical design issues and the required technologies are discussed.

    The use of interactive computer vision and robot hand controllers for enhancing manufacturing safety

    Currently available robotic systems provide limited support for CAD-based, model-driven visualization, sensing algorithm development and integration, and automated graphical planning systems. This paper describes ongoing work that provides the functionality necessary to apply advanced robotics to automated manufacturing and assembly operations. An interface has been built which incorporates 6-DOF tactile manipulation, displays for three-dimensional graphical models, and automated tracking functions that depend on machine vision. A set of tools for single and multiple focal-plane sensor image processing and understanding has been demonstrated which utilizes object recognition models. The resulting tool will enable sensing and planning from computationally simple graphical objects. A synergistic interplay between human and machine vision is created from programmable feedback received from the controller. This approach can be used as the basis for implementing enhanced safety in automated robotic manufacturing, assembly, repair, and inspection tasks in both ground and space applications. Thus, an interactive capability has been developed to match the modeled environment to the real task environment for safe and predictable task execution.
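
    The interplay the abstract describes, a graphical model kept in registration with live vision and fed back to the hand controller, can be sketched as a tracking loop. The helper functions below are placeholders for the vision and controller components, not the paper's software.

        import numpy as np

        def detect_object_pose(frame, cad_model):
            """Placeholder machine-vision step: estimate the object's pose
            in camera coordinates by matching the CAD model to the image."""
            return np.eye(4)   # 4x4 pose; a real system runs recognition here

        def render_overlay(cad_model, pose):
            """Placeholder: redraw the 3D graphical model at the tracked pose."""
            pass

        def controller_feedback(target_pose, hand_pose, gain=0.5):
            """Force cue nudging the 6-DOF hand controller toward the target."""
            error = target_pose[:3, 3] - hand_pose[:3, 3]
            return gain * error   # rendered as a gentle force on the controller

        def tracking_cycle(frame, cad_model, hand_pose):
            pose = detect_object_pose(frame, cad_model)  # keep model registered
            render_overlay(cad_model, pose)              # operator sees the overlay
            return controller_feedback(pose, hand_pose)  # operator feels deviation

        print(tracking_cycle(frame=None, cad_model=None, hand_pose=np.eye(4)))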

    Towards the development of safe, collaborative robotic freehand ultrasound

    The use of robotics in medicine is of growing importance for modern health services, as robotic systems have the capacity to improve upon human performance of tasks, thereby enhancing the treatment ability of a healthcare provider. In the medical sector, ultrasound imaging is an inexpensive modality without the high cost or radiation exposure associated with MRI and CT imaging, respectively. Over the past two decades, considerable effort has been invested in freehand ultrasound robotics research and development. However, this research has focused on the feasibility of the application rather than on the robotic fundamentals, such as motion control, calibration, and contextual awareness. Instead, much of the work concentrates on custom-designed robots, ultrasound image generation and visual servoing, or teleoperation. Research on these topics often suffers from important limitations that impede its use in an adaptable, scalable, real-world manner. In particular, while custom robots may be designed for a specific application, commercial collaborative robots are a more robust and economical solution. Various robotic ultrasound studies have shown the feasibility of basic force control but rarely explore controller tuning in the context of patient safety and deformable skin in an unstructured environment. Moreover, many studies evaluate novel visual servoing approaches but do not consider the practicality of relying on external measurement devices for motion control. These studies neglect the importance of robot accuracy and calibration, which allow a system to safely navigate its environment while reducing the imaging errors associated with positioning. Hence, while the feasibility of robotic ultrasound has been the focal point of previous studies, little attention has been paid to what occurs between system design and image output. This thesis addresses these limitations through three distinct contributions.

    Given the force-controlled nature of an ultrasound robot, the first contribution presents a closed-loop calibration approach using impedance control and low-cost equipment. Accuracy is a fundamental requirement for high-quality ultrasound image generation and targeting, especially when following a specified path along a patient or synthesizing 2D slices into a 3D ultrasound image. However, even though most industrial robots are inherently precise, they are not necessarily accurate. While robot calibration itself has been extensively studied, many approaches rely on expensive and highly delicate equipment. As demonstrated through an experimental study and validated with a laser tracker, the proposed method is comparable in quality to traditional laser tracker calibration: the absolute accuracy of a collaborative robot was improved to a maximum error of 0.990 mm, a 58.4% improvement over the nominal model.

    The second contribution explores collisions and contact events, which are a natural by-product of applications involving physical human-robot interaction (pHRI) in unstructured environments. Robot-assisted medical ultrasound is an example of a task where simply stopping the robot upon contact detection may not be an appropriate reaction strategy; the robot should instead be aware of where on the body contact occurred, so that it can properly plan force-controlled trajectories along the human body with the imaging probe. This is especially true for remote ultrasound systems, where safety and manipulability are important considerations when operating a remote medical system through a communication network. A framework is proposed for robot contact classification using the built-in sensor data of a collaborative robot. Unlike previous studies, this classification does not discern between intended and unintended contact scenarios but rather classifies what was involved in the contact event. The classifier can discern different ISO/TS 15066:2016-specific body areas along a human-model leg with 89.37% accuracy. Altogether, this contact distinction framework allows for more complex reaction strategies and tailored robot behaviour during pHRI.

    Lastly, given that the success of an ultrasound task depends on the capability of the robot system to handle pHRI, pure motion control is insufficient. Force control techniques are necessary to achieve effective and adaptable behaviour in the unstructured ultrasound environment while also ensuring safe pHRI. Although force control does not require explicit knowledge of the environment, the control parameters must be tuned to achieve acceptable dynamic behaviour. The third contribution therefore proposes a simple and effective online tuning framework for force-based robotic freehand ultrasound motion control. Within the context of medical ultrasound, different locations on the human body have different stiffnesses and require unique tunings. In real-world experiments with a collaborative robot, the framework tuned the motion control for optimal and safe trajectories along a human leg phantom; the optimization process reduced the mean absolute error (MAE) of the motion contact force to 0.537 N through the evolution of eight motion control parameters. Furthermore, contextual awareness through motion classification can offer a framework for pHRI optimization and safety through predictive motion behaviour, with a future goal of autonomous pHRI. A classification pipeline, trained on the tuning-process motion data, was able to reliably classify the future force-tracking quality of a motion session with 91.82% accuracy.
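
    The third contribution's tuning loop can be pictured as black-box optimization of force-controller gains against the measured force-tracking error. The sketch below uses a toy one-dimensional spring-contact model and a simple (1+1) evolution strategy; the plant model, gains, and target force are all invented for illustration and stand in for the thesis's eight-parameter tuning.

        import numpy as np

        rng = np.random.default_rng(0)
        TARGET_FORCE = 5.0   # desired probe contact force (N), illustrative

        def contact_mae(gains, stiffness=800.0, steps=400, dt=0.005):
            """Simulate a 1-D force-controlled probe on a springy surface and
            return the mean absolute force-tracking error for these PI gains."""
            kp, ki = gains
            depth, integral, errors = 0.0, 0.0, []
            for _ in range(steps):
                force = stiffness * max(depth, 0.0)       # toy skin model
                err = TARGET_FORCE - force
                integral += err * dt
                depth += (kp * err + ki * integral) * dt  # controller pushes the probe
                errors.append(abs(err))
            return float(np.mean(errors))

        def tune(generations=100):
            """(1+1) evolution strategy: mutate gains, keep them if MAE improves."""
            best = np.array([1e-4, 1e-3])
            best_mae = contact_mae(best)
            for _ in range(generations):
                trial = np.abs(best * (1.0 + 0.3 * rng.standard_normal(2)))
                mae = contact_mae(trial)
                if mae < best_mae:
                    best, best_mae = trial, mae
            return best, best_mae

        gains, mae = tune()
        print(f"tuned gains kp={gains[0]:.2e}, ki={gains[1]:.2e}, MAE={mae:.3f} N")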