33 research outputs found

    Implementation and Evaluation of Networked Model Predictive Control System on Universal Robot

    Networked control systems are closed-loop feedback control systems whose components may be geographically distributed and interconnected via a communication network such as the Internet. The quality of the network communication is a crucial factor that significantly affects remote-control performance, because network uncertainties arise in the transmission of packets over the forward and backward channels of the system. The two most significant of these uncertainties are network time delay and packet loss. To overcome these challenges, networked predictive control systems have been proposed, providing improved performance and robustness through predictive controllers and compensation strategies. In particular, model predictive control is well suited as an advanced approach compared to conventional methods. In this paper, a networked model predictive control system, consisting of a model predictive controller and compensation strategies, is implemented to control and stabilize a robot arm as a physical system. Specifically, this work analyzes the performance of the system under the influence of network time delay and packet loss. Using appropriate performance and robustness metrics, an in-depth investigation of the impacts of these network uncertainties is performed, and the forward and backward channels of the network are examined in detail.
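
    As an illustration of the delay-compensation idea sketched in this abstract, the controller can transmit its whole predicted control sequence and let the plant side apply the entry that matches the measured delay. Below is a minimal, hypothetical sketch in Python; the double-integrator plant, horizon length, and random delay model are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Plant: discrete double integrator, state x = [position, velocity]
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
N = 10  # prediction horizon (illustrative)

def mpc_sequence(x0, x_ref):
    """Unconstrained finite-horizon MPC: stack the prediction model so that
    X = F x0 + G U, then solve least squares for the whole input sequence."""
    F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    G = np.zeros((2 * N, N))
    for r in range(N):
        for c in range(r + 1):
            G[2 * r:2 * r + 2, c:c + 1] = np.linalg.matrix_power(A, r - c) @ B
    target = np.tile(x_ref, N) - F @ x0
    U, *_ = np.linalg.lstsq(G, target, rcond=None)
    return U  # length-N control sequence

# Compensation: send the whole sequence over the network; the plant side
# applies the element matching the measured delay (in control steps).
x = np.array([0.0, 0.0])
x_ref = np.array([1.0, 0.0])
for t in range(50):
    U = mpc_sequence(x, x_ref)       # computed at the controller side
    delay = np.random.randint(0, 3)  # measured network delay (steps)
    u = U[delay]                     # delay-compensating selection
    x = A @ x + (B * u).ravel()
print("final state:", x)
```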

    Kinematics and Robot Design II (KaRD2019) and III (KaRD2020)

    This volume collects papers published in two Special Issues, “Kinematics and Robot Design II, KaRD2019” (https://www.mdpi.com/journal/robotics/special_issues/KRD2019) and “Kinematics and Robot Design III, KaRD2020” (https://www.mdpi.com/journal/robotics/special_issues/KaRD2020), the second and third issues of the KaRD Special Issue series hosted by the open-access journal Robotics. The KaRD series is an open environment where researchers present their work and discuss all topics centered on the many aspects of kinematics in the design of robotic/automatic systems. It aims to become an established reference for researchers in the field, as other serial international conferences/publications are. Even though the KaRD series publishes one Special Issue per year, all received papers are peer-reviewed as soon as they are submitted and, if accepted, are immediately published in MDPI Robotics. Kinematics is so intimately related to the design of robotic/automatic systems that the admitted topics of the KaRD series cover practically all the subjects normally present in well-established international conferences on “mechanisms and robotics”. KaRD2019 and KaRD2020 together received 22 papers and, after the peer-review process, accepted 17. The accepted papers cover problems related to theoretical/computational kinematics, to biomedical engineering, and to other design/applicative aspects.

    Development and evaluation of mixed reality-enhanced robotic systems for intuitive tele-manipulation and telemanufacturing tasks in hazardous conditions

    In recent years, with the rapid development of space exploration, deep-sea discovery, nuclear rehabilitation and management, and robotic-assisted medical devices, there is an urgent need for humans to interactively control robotic systems to perform increasingly precise remote operations. The value of medical telerobotic applications during the recent coronavirus pandemic has also been demonstrated and will grow in the future. This thesis investigates novel approaches to the development and evaluation of a mixed reality-enhanced telerobotic platform for intuitive remote teleoperation applications in dangerous and difficult working conditions, such as contaminated sites and undersea or extreme welding scenarios. This research aims to remove human workers from harmful working environments by equipping complex robotic systems with human intelligence and command/control via intuitive and natural human-robot interaction, including the implementation of MR techniques to improve the user's situational awareness, depth perception, and spatial cognition, which are fundamental to effective and efficient teleoperation. The proposed robotic mobile manipulation platform consists of a UR5 industrial manipulator, a 3D-printed parallel gripper, and a customized mobile base, and is envisaged to be controlled by non-skilled operators who are physically separated from the robot working space through an MR-based vision/motion mapping approach. The platform development process involved CAD/CAE/CAM and rapid prototyping techniques, such as 3D printing and laser cutting. Robot Operating System (ROS) and Unity 3D are employed in the development process to enable intuitive control of the robotic system and to ensure immersive and natural human-robot interactive teleoperation. This research presents an integrated motion/vision retargeting scheme based on a mixed reality subspace approach for intuitive and immersive telemanipulation. An imitation-based, velocity-centric motion mapping is implemented via the MR subspace to accurately track operator hand movements for robot motion control and to enable spatial velocity-based control of the robot tool center point (TCP). The proposed system allows precise manipulation of end-effector position and orientation while readily adjusting the corresponding velocity of maneuvering. A mixed reality-based multi-view merging framework for immersive and intuitive telemanipulation of a complex mobile manipulator with integrated 3D/2D vision is presented. The proposed 3D immersive telerobotic schemes provide the users with depth perception through the merging of multiple 3D/2D views of the remote environment via the MR subspace. The mobile manipulator platform can be effectively controlled by non-skilled operators who are physically separated from the robot working space through a velocity-based imitative motion mapping approach. Finally, this thesis presents an integrated mixed reality and haptic feedback scheme for intuitive and immersive teleoperation of robotic welding systems. By incorporating MR technology, the user is fully immersed in a virtual operating space augmented by real-time visual feedback from the robot working space. The proposed mixed reality virtual fixture integration approach implements hybrid haptic constraints to guide the operator’s hand movements along conical guidance, effectively aligning the welding torch for welding and constraining the welding operation within a collision-free area.
Overall, this thesis presents a complete telerobotic application that uses mixed reality and immersive elements to effectively translate the operator into the robot's space in an intuitive and natural manner. The results are thus a step forward in cost-effective and computationally efficient human-robot interaction research and technologies. The system presented is readily extensible to a range of potential applications beyond the robotic tele-welding and tele-manipulation tasks used to demonstrate, optimise, and prove the concepts.
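
    As a rough illustration of the velocity-centric motion mapping described in this abstract, operator hand positions can be differenced, smoothed, and scaled into a TCP velocity command. The sketch below is hypothetical; the gain, filter constant, and 50 Hz rate are assumptions rather than the thesis's parameters.

```python
import numpy as np

DT = 0.02     # 50 Hz hand-tracking rate (assumed)
GAIN = 0.8    # hand-to-TCP velocity scaling (assumed)
ALPHA = 0.3   # low-pass factor to suppress hand-tracking jitter (assumed)

class VelocityMapper:
    """Map tracked hand positions to a smoothed TCP velocity command."""
    def __init__(self):
        self.prev_hand = None
        self.cmd = np.zeros(3)

    def update(self, hand_pos):
        if self.prev_hand is None:
            self.prev_hand = hand_pos
            return self.cmd
        raw_vel = (hand_pos - self.prev_hand) / DT  # finite-difference velocity
        self.prev_hand = hand_pos
        # Exponential smoothing keeps the commanded robot motion jitter-free.
        self.cmd = (1 - ALPHA) * self.cmd + ALPHA * GAIN * raw_vel
        return self.cmd

mapper = VelocityMapper()
for t in np.arange(0, 1, DT):
    hand = np.array([0.2 * t, 0.0, 0.05 * np.sin(2 * np.pi * t)])
    tcp_vel = mapper.update(hand)
print("last TCP velocity command:", tcp_vel)
```

    In a real deployment the resulting vector would be streamed to the manipulator as a Cartesian velocity command (e.g., a URScript speedl call on a UR5); here it is simply printed.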

    Programming Robots by Demonstration using Augmented Reality

    The world is living through the fourth industrial revolution, Industry 4.0, marked by the increasing intelligence and automation of manufacturing systems. Nevertheless, some tasks are too complex or too expensive to be fully automated; it would be more efficient if the machine could work with the human, not only sharing the same workspace but also acting as a useful collaborator. A possible solution to this problem lies in human-robot interaction systems, understanding the applications where they can be helpful and the challenges they face. In this context, better interaction between machines and operators can lead to multiple benefits, such as less, better, and easier training, a safer environment for the operator, and the capacity to solve problems more quickly. The focus of this dissertation is relevant insofar as it is necessary to learn and implement the technologies that most contribute to finding solutions for simpler and more efficient work in industry. This dissertation proposes the development of an industrial prototype of a human-machine interaction system through Extended Reality (XR), with the objective of enabling an industrial operator without any programming experience to program a collaborative robot using the Microsoft HoloLens 2.
The system itself is divided into two parts: the tracking system, which records the operator's hand movements, and the programming-by-demonstration translator, which builds the program sent to the robot to execute the task. The monitoring and supervision system runs on the Microsoft HoloLens 2, programmed using the Unity platform and Visual Studio. The core of the programming-by-demonstration system was developed in Robot Operating System (ROS). The robots included in this interface are the Universal Robots UR5 (collaborative robot) and the ABB IRB 2600 (industrial robot). Moreover, the interface was built to easily add other robots.
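
    As a sketch of the translation step described above, a recorded hand path can be decimated into waypoints and emitted as a robot program. This is a hypothetical illustration: the distance threshold, the fixed tool orientation, and the movel-based URScript output are assumptions, not the dissertation's actual ROS pipeline.

```python
import numpy as np

def extract_waypoints(path, min_dist=0.05):
    """Keep a recorded pose only when the hand has moved more than
    min_dist metres since the last kept waypoint (simple decimation)."""
    waypoints = [path[0]]
    for p in path[1:]:
        if np.linalg.norm(p - waypoints[-1]) > min_dist:
            waypoints.append(p)
    return waypoints

def to_urscript(waypoints):
    """Emit a movel-based URScript program from Cartesian waypoints
    (a fixed tool orientation is assumed for brevity)."""
    lines = ["def demo_program():"]
    for x, y, z in waypoints:
        lines.append(
            f"  movel(p[{x:.3f}, {y:.3f}, {z:.3f}, 0, 3.14, 0], a=0.5, v=0.25)")
    lines.append("end")
    return "\n".join(lines)

# A noisy straight-line demonstration recorded from the hand tracker.
t = np.linspace(0, 1, 200)[:, None]
path = t * np.array([0.4, 0.2, 0.1]) + np.random.normal(0, 0.002, (200, 3))
print(to_urscript(extract_waypoints(path)))
```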

    Towards the development of safe, collaborative robotic freehand ultrasound

    The use of robotics in medicine is of growing importance for modern health services, as robotic systems have the capacity to improve upon human tasks, thereby enhancing the treatment ability of a healthcare provider. In the medical sector, ultrasound imaging is an inexpensive modality without the high cost and radiation emissions associated with MRI and CT imaging, respectively. Over the past two decades, considerable effort has been invested in freehand ultrasound robotics research and development. However, this research has focused on the feasibility of the application, not on robotic fundamentals such as motion control, calibration, and contextual awareness. Instead, much of the work concentrates on custom-designed robots, ultrasound image generation and visual servoing, or teleoperation. Research on these topics often suffers from important limitations that impede its use in an adaptable, scalable, real-world manner. In particular, while custom robots may be designed for a specific application, commercial collaborative robots are a more robust and economical solution. Various robotic ultrasound studies have shown the feasibility of using basic force control but rarely explore controller tuning in the context of patient safety and deformable skin in an unstructured environment. Moreover, many studies evaluate novel visual servoing approaches but do not consider the practicality of relying on external measurement devices for motion control. These studies neglect the importance of robot accuracy and calibration, which allow a system to safely navigate its environment while reducing the imaging errors associated with positioning. Hence, while the feasibility of robotic ultrasound has been the focal point of previous studies, what occurs between system design and image output has received little attention. This thesis addresses these limitations through three distinct contributions. Given the force-controlled nature of an ultrasound robot, the first contribution presents a closed-loop calibration approach using impedance control and low-cost equipment. Accuracy is a fundamental requirement for high-quality ultrasound image generation and targeting, especially when following a specified path along a patient or synthesizing 2D slices into a 3D ultrasound image. However, even though most industrial robots are inherently precise, they are not necessarily accurate. While robot calibration itself has been extensively studied, many approaches rely on expensive and highly delicate equipment. As demonstrated through an experimental study and validated with a laser tracker, the proposed method is comparable in quality to traditional laser-tracker calibration: the absolute accuracy of a collaborative robot was improved to a maximum error of 0.990 mm, a 58.4% improvement over the nominal model. The second contribution explores collisions and contact events, which are a natural by-product of applications involving physical human-robot interaction (pHRI) in unstructured environments. Robot-assisted medical ultrasound is an example of a task where simply stopping the robot upon contact detection may not be an appropriate reaction strategy. Thus, the robot should be aware of the body contact location to properly plan force-controlled trajectories along the human body with the imaging probe.
This is especially true for remote ultrasound systems, where safety and manipulability are important considerations when operating a remote medical system through a communication network. A framework is proposed for robot contact classification using the built-in sensor data of a collaborative robot. Unlike previous studies, this classification does not discern between intended and unintended contact scenarios, but rather classifies what was involved in the contact event. The classifier can discern different ISO/TS 15066:2016-specific body areas along a human-model leg with 89.37% accuracy. Altogether, this contact-distinction framework allows for more complex reaction strategies and tailored robot behaviour during pHRI. Lastly, given that the success of an ultrasound task depends on the capability of the robot system to handle pHRI, pure motion control is insufficient. Force control techniques are necessary to achieve effective and adaptable behaviour of a robotic system in the unstructured ultrasound environment while also ensuring safe pHRI. While force control does not require explicit knowledge of the environment, the control parameters must be tuned to achieve an acceptable dynamic behaviour. The third contribution therefore proposes a simple and effective online tuning framework for force-based robotic freehand ultrasound motion control. Within the context of medical ultrasound, different human body locations have different stiffnesses and require unique tunings. Through real-world experiments with a collaborative robot, the framework tuned the motion control for optimal and safe trajectories along a human leg phantom. The optimization process successfully reduced the mean absolute error (MAE) of the motion contact force to 0.537 N through the evolution of eight motion control parameters. Furthermore, contextual awareness through motion classification can offer a framework for pHRI optimization and safety through predictive motion behaviour, with a future goal of autonomous pHRI. As such, a classification pipeline, trained using the motion data from the tuning process, was able to reliably classify the future force-tracking quality of a motion session with an accuracy of 91.82%.
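
    The force-control behaviour at the heart of these contributions can be pictured with a one-dimensional admittance loop pressing a probe against a spring-like skin model, with controller parameters chosen to minimize the force MAE. The sketch below is purely illustrative; the stiffness, gains, grid search, and loop rate are assumptions, not the thesis's tuning framework or values.

```python
import numpy as np

DT = 0.008       # 125 Hz control loop (assumed)
K_SKIN = 2000.0  # phantom stiffness in N/m (assumed; varies by body site)
F_DES = 5.0      # desired probe contact force in newtons

def run(m=2.0, d=80.0, k=0.0, steps=2000):
    """1-D admittance control: the force error drives a virtual
    mass-damper-spring whose position moves the probe along the normal."""
    x = v = 0.0      # virtual admittance state
    contact_x = 0.01 # skin surface 10 mm below the start pose
    errors = []
    for _ in range(steps):
        f_meas = K_SKIN * max(0.0, x - contact_x)  # spring-like skin
        f_err = F_DES - f_meas
        a = (f_err - d * v - k * x) / m            # admittance dynamics
        v += a * DT
        x += v * DT
        errors.append(abs(f_err))
    return np.mean(errors)

# Coarse grid search standing in for the thesis's online tuning process.
best = min(((run(m, d, k), (m, d, k))
            for m in (1.0, 2.0) for d in (40.0, 80.0) for k in (0.0, 100.0)))
print("best MAE %.3f N with (m, d, k) = %s" % best)
```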

    Contact force and torque estimation for collaborative manipulators based on an adaptive Kalman filter with variable time period.

    Contact force and torque sensing approaches enable manipulators to cooperate with humans and to react appropriately to unexpected collisions. In this thesis, various moving averages are investigated, and Weighted Moving Averages and the Hull Moving Average are employed to generate a mode-switching moving average that supports force sensing. The proposed moving averages with variable time period reduce the effects of measured motor-current noise and thus provide improved confidence in joint output torque estimation. The time period of the filter adapts continuously to achieve an optimal trade-off between response time and estimation precision in real time. An adaptive Kalman filter that combines the proposed moving averages with the conventional Kalman filter is proposed. Calibration routines for the adaptive Kalman filter translate the measured motor-current noise and the errors in the speed data from the individual joints into the filter parameters. The combination of the proposed adaptive Kalman filter with variable time period and its calibration method facilitates force and torque estimation without direct measurement via force/torque sensors. Contact force/torque sensing and response-time assessments of the proposed approach are performed on both a single Universal Robot 5 (UR5) manipulator and a collaborative dual-arm UR5 arrangement with differing unexpected end-effector loads. The combined force and torque sensing method reduces estimation errors and response time in comparison with the pioneering method (by 55.2% and 20.8%, respectively), and the performance advantage of the proposed approach grows as the payload rises. The proposed method can potentially be applied to any robotic manipulator as long as the motor information (current, joint position, and joint velocity) is available. Consequently, the cost of implementation will be significantly lower than that of methods requiring load cells.
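
    To make the filtering concretely visible, the sketch below implements a weighted moving average, the Hull Moving Average built from it, and a scalar Kalman update over a noisy motor-current signal. Window sizes and noise covariances are illustrative assumptions; the thesis's mode switching and variable time period are not reproduced here.

```python
import numpy as np

def wma(x, n):
    """Weighted moving average over the last n samples (linear weights,
    heaviest weight on the most recent sample)."""
    w = np.arange(1, n + 1)
    return np.convolve(x, w[::-1] / w.sum(), mode="valid")

def hma(x, n):
    """Hull Moving Average: WMA(2*WMA(n/2) - WMA(n), sqrt(n)); responsive
    yet smooth, which motivates its use in the mode-switching filter."""
    half, root = n // 2, int(np.sqrt(n))
    a, b = wma(x, half), wma(x, n)
    diff = 2 * a[-len(b):] - b   # align the two series on their tails
    return wma(diff, root)

def kalman_smooth(z, q=1e-4, r=1e-2):
    """Scalar constant-level Kalman filter over a measurement sequence."""
    xhat, p, out = z[0], 1.0, []
    for meas in z:
        p += q                      # predict step inflates uncertainty
        k = p / (p + r)             # Kalman gain
        xhat += k * (meas - xhat)   # measurement update
        p *= (1 - k)
        out.append(xhat)
    return np.array(out)

# Noisy motor-current signal standing in for a joint-torque measurement.
t = np.linspace(0, 4, 500)
current = np.sin(t) + np.random.normal(0, 0.1, t.size)
print("HMA tail:", hma(current, 16)[-3:])
print("KF  tail:", kalman_smooth(current)[-3:])
```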

    Soft Biomimetic Finger with Tactile Sensing and Sensory Feedback Capabilities

    The compliant nature of soft fingers allows humans to safely and dexterously manipulate objects in unstructured environments. A soft prosthetic finger with tactile sensing capabilities for texture discrimination and subsequent sensory stimulation has the potential to create a more natural experience for an amputee. In this work, a pneumatically actuated soft biomimetic finger is integrated with a textile neuromorphic tactile sensor array for a texture discrimination task. The tactile sensor outputs were converted into neuromorphic spike trains, which emulate the firing pattern of biological mechanoreceptors. Spike-based features from each taxel compressed the information and were then used as inputs to a support vector machine (SVM) classifier to differentiate the textures. Our soft biomimetic finger with neuromorphic encoding achieved an average overall classification accuracy of 99.57% across sixteen independent parameter combinations (four flexion angles of the soft finger combined with four palpation speeds) when tested on thirteen standardized textured surfaces. To aid in the perception and manipulation of more natural objects, subjects were provided with transcutaneous electrical nerve stimulation (TENS) to convey a subset of four textures with varied textural information. Three able-bodied subjects successfully distinguished two or three textures with the applied stimuli. This work paves the way for a more human-like prosthesis through a soft biomimetic finger with texture discrimination capabilities using neuromorphic techniques that provide sensory feedback; texture feedback, in turn, has the potential to enhance the user experience when interacting with the surroundings. Additionally, this work showed that an inexpensive soft biomimetic finger combined with a flexible tactile sensor array can potentially help users perceive their environment better.
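
    A minimal sketch of such a spike-based pipeline is shown below: send-on-delta encoding of taxel signals into spike trains, simple rate and inter-spike-interval features, and an SVM classifier. The encoding threshold and the synthetic "textures" are illustrative assumptions, not the paper's sensor data.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def delta_encode(signal, threshold=0.05):
    """Emit +1/-1 spikes whenever the signal moves more than `threshold`
    since the last spike (a simple send-on-delta mechanoreceptor model)."""
    spikes, ref = [], signal[0]
    for s in signal[1:]:
        if abs(s - ref) > threshold:
            spikes.append(np.sign(s - ref))
            ref = s
        else:
            spikes.append(0.0)
    return np.array(spikes)

def spike_features(spikes):
    """Compress the train into ON rate, OFF rate, mean inter-spike gap."""
    on, off = np.sum(spikes > 0), np.sum(spikes < 0)
    idx = np.flatnonzero(spikes)
    isi = np.mean(np.diff(idx)) if idx.size > 1 else float(len(spikes))
    return [on / len(spikes), off / len(spikes), isi]

# Synthetic palpation sweeps: each texture is a different spatial frequency.
rng = np.random.default_rng(0)
X, y = [], []
for label, freq in enumerate([3, 7, 13]):
    for _ in range(40):
        t = np.linspace(0, 1, 400)
        sig = np.sin(2 * np.pi * freq * t) + rng.normal(0, 0.05, t.size)
        X.append(spike_features(delta_encode(sig)))
        y.append(label)
clf = make_pipeline(StandardScaler(), SVC())
print("CV accuracy:", cross_val_score(clf, np.array(X), y, cv=5).mean())
```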

    An Application of Modified T2FHC Algorithm in Two-Link Robot Controller

    Parallel robotic systems have shown advantages over traditional serial robots, such as high payload capacity, high speed, and high precision, and their applications range from transportation to manufacturing. Most recent studies of parallel robots therefore focus on finding the best method to improve system accuracy. Enhancing this metric, however, remains the biggest challenge in controlling a parallel robot owing to the complex mathematical model of the system. In this paper, we present a novel solution to this problem with a Type 2 Fuzzy Coherent Controller Network (T2FHC), which is composed of a Type 2 Cerebellar Model Articulation Controller (CMAC) with fast convergence ability and a Brain Emotional Learning Controller (BELC) using a Lyapunov-based weight-updating rule. In addition, the T2FHC is combined with a surface generator to increase system flexibility. To evaluate its applicability in real life, the proposed controller was tested on a Quanser 2-DOF robot system in three case studies: no load, a 180 g load, and a 360 g load. The results showed that the proposed structure achieved superior performance compared to available algorithms such as CMAC and the Novel Self-Organizing Fuzzy CMAC (NSOF CMAC). The Root Mean Square Error (RMSE) of the system, 2.20E-06 for angle A and 2.26E-06 for angle B, and the tracking errors, -6.42E-04 for angle A and 2.27E-04 for angle B, demonstrate the good stability and high accuracy of the proposed T2FHC. These results suggest that the proposed method is promising for applications involving nonlinear systems.
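
    For readers unfamiliar with the CMAC component, the sketch below shows a plain (type-1, non-fuzzy) CMAC approximating a scalar nonlinearity with overlapping tilings and a gradient weight update. The layer count, cell count, and learning rate are illustrative assumptions; the Type-2 fuzzy membership functions and the BELC part of the T2FHC are omitted entirely.

```python
import numpy as np

class CMAC:
    """Minimal 1-D CMAC: `layers` offset quantizations of the input each
    activate one cell; the output is the sum of the active weights."""
    def __init__(self, layers=8, cells=64, x_min=-np.pi, x_max=np.pi):
        self.layers, self.cells = layers, cells
        self.x_min, self.span = x_min, x_max - x_min
        self.w = np.zeros((layers, cells))

    def _active(self, x):
        # Each layer shifts its tiling by a fraction of one cell.
        frac = (x - self.x_min) / self.span
        return [int(frac * self.cells + l / self.layers) % self.cells
                for l in range(self.layers)]

    def predict(self, x):
        return sum(self.w[l, c] for l, c in enumerate(self._active(x)))

    def train(self, x, target, lr=0.2):
        err = target - self.predict(x)
        for l, c in enumerate(self._active(x)):
            self.w[l, c] += lr * err / self.layers  # spread the correction

net = CMAC()
for _ in range(2000):  # learn a joint-angle nonlinearity online
    x = np.random.uniform(-np.pi, np.pi)
    net.train(x, np.sin(x))
print("sin(1.0) approx:", round(net.predict(1.0), 3))
```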

    Behavior-specific proprioception models for robotic force estimation: a machine learning approach

    Robots that support humans in physically demanding tasks require accurate force-sensing capabilities. A common way to achieve this is to monitor the interaction with the environment directly with dedicated force sensors. Major drawbacks of such special-purpose sensors are the increased costs and the reduced payload of the robot platform. Instead, this thesis investigates how the functionality of such sensors can be approximated using force estimation approaches. Most of today’s robots are equipped with rich proprioceptive sensing capabilities, where even a robotic arm, e.g., the UR5, provides access to more than a hundred sensor readings. Following this trend, it is becoming feasible to utilize a wide variety of sensors for force estimation purposes. Human proprioception allows estimating forces, such as the weight of an object, from prior experience of sensory-motor patterns. Applying a similar approach to robots enables them to learn from previous demonstrations without the need for dedicated force sensors. This thesis introduces Behavior-Specific Proprioception Models (BSPMs), a novel concept for enhancing robotic behavior with estimates of the expected proprioceptive feedback. A main methodological contribution is the operationalization of the BSPM approach using data-driven machine learning techniques. During a training phase, the behavior is continuously executed while proprioceptive sensor readings are recorded. The training data acquired from these demonstrations represents ground truth about behavior-specific sensory-motor experiences, i.e., the influence of performed actions and environmental conditions on the proprioceptive feedback. This data acquisition procedure does not require expert knowledge about the particular robot platform, e.g., kinematic chains or mass distribution, which is a major advantage over analytical approaches. The training data is then used to learn BSPMs, e.g., using lazy learning techniques or artificial neural networks. At runtime, the BSPMs provide estimates of the proprioceptive feedback that can be compared to actual sensations. The BSPM approach thus extends classical programming-by-demonstration methods, where only movement data is learned, and enables robots to accurately estimate forces during behavior execution.
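
    A minimal sketch of the BSPM idea follows: learn a model of the expected proprioceptive reading (here, motor current as a function of trajectory phase) from repeated demonstrations, then treat the runtime residual as a proxy for external force. The k-nearest-neighbours regressor stands in for the lazy learning techniques mentioned above; the signals and the load profile are synthetic assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor  # a lazy learner

# Training phase: execute the behavior repeatedly and record the mapping
# from trajectory phase to motor current (the proprioceptive reading).
rng = np.random.default_rng(1)
phase = np.tile(np.linspace(0, 1, 200), 10)
current = 2.0 * np.sin(2 * np.pi * phase) + rng.normal(0, 0.05, phase.size)
model = KNeighborsRegressor(n_neighbors=15).fit(phase[:, None], current)

# Runtime: the same behavior, but an external load adds current mid-motion.
test_phase = np.linspace(0, 1, 200)
load = np.where((test_phase > 0.4) & (test_phase < 0.6), 0.8, 0.0)
measured = 2.0 * np.sin(2 * np.pi * test_phase) + load
residual = measured - model.predict(test_phase[:, None])
print("peak residual (proxy for external force):", residual.max().round(2))
```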