
    Resource Requirements of an Edge-based Digital Twin Service: An Experimental Study 

    Digital Twin (DT) is a pivotal application of the industrial digital transformation envisaged by the fourth industrial revolution (Industry 4.0). A DT provides an intelligent, real-time, faithful reflection of a physical entity such as an industrial robot, thus allowing its remote control. Relying on the latest advances in Information and Communication Technologies (ICT), namely Network Function Virtualization (NFV) and Edge computing, a DT can be deployed as an on-demand service in the factory's close proximity and offered over radio access technologies. However, to achieve the scalability, flexibility, availability, and performance guarantees foreseen by the latest ICT, DT as a Service (DTaaS) solutions must be experimentally profiled and assessed. Moreover, the dependencies between the resources claimed by the service and the corresponding demand and workloads need to be investigated. In this work, an Edge-based Digital Twin solution for remote control of robotic arms is deployed in an experimental testbed where, in compliance with the NFV paradigm, the service is segmented into virtual network functions (VNFs). Our primary objective is to evaluate the relationship between overall service performance and the resource requirements of the VNFs as the number of robots consuming the service varies. Experimental profiles show the most critical DT features to be the inverse kinematics and trajectory computations. The same analysis is carried out as a function of the industrial processes, namely the commands imposed on the robots and, in particular, their abstraction level, revealing a novel trade-off between computing and time resource requirements and trajectory guarantees. The derived results provide crucial insights for the design of network service scaling and resource orchestration frameworks dealing with DTaaS applications. Finally, we empirically show that LTE falls short of the minimum DT latency requirements.
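    As a rough illustration of why inverse kinematics and trajectory computations dominate the profile, the sketch below solves closed-form inverse kinematics for a hypothetical two-link planar arm at each waypoint of a sampled trajectory; the link lengths, target path, and implementation are illustrative assumptions, not the paper's testbed.

        import numpy as np

        def ik_two_link(x, y, l1=0.4, l2=0.3):
            # Closed-form elbow-down inverse kinematics for a
            # two-link planar arm (lengths l1, l2 are assumed).
            c2 = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)
            if abs(c2) > 1.0:
                raise ValueError("target out of reach")
            q2 = np.arccos(c2)
            q1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(q2),
                                               l1 + l2 * np.cos(q2))
            return q1, q2

        # Solving IK at every waypoint of a sampled straight-line path
        # is the per-command workload that scales with robot count.
        waypoints = np.linspace([0.5, 0.1], [0.3, 0.4], 50)
        joint_path = [ik_two_link(px, py) for px, py in waypoints]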

    Modeling and Control of Flexible Link Manipulators

    Autonomous maritime navigation and offshore operations have gained wide attention with the aim of reducing operational costs and increasing reliability and safety. Offshore operations, such as wind farm inspection, sea farm cleaning, and ship mooring, could be carried out autonomously or semi-autonomously by mounting one or more long-reach robots on the ship or vessel. In addition to offshore applications, long-reach manipulators can be used in many other engineering applications such as construction automation, the aerospace industry, and space research. Some applications require long and slender mechanical structures, which exhibit a degree of flexibility and deflection because of the material used and the length of the links. The link elasticity causes deflection, leading to problems in precise position control of the end-effector. It is therefore necessary to compensate for the deflection of the long-reach arm to fully exploit long-reach lightweight flexible manipulators. This thesis presents a unified treatment of the modeling, control, and application of long-reach flexible manipulators. State-of-the-art dynamic modeling techniques and control schemes for flexible link manipulators (FLMs) are discussed along with their merits, limitations, and challenges. The kinematics and dynamics of a planar multi-link flexible manipulator are presented. The effects of robot configuration and payload on the mode shapes and eigenfrequencies of the flexible links are discussed. A method to estimate and compensate for the static deflection of multi-link flexible manipulators under gravity is proposed and experimentally validated. The redundant degree of freedom of the planar multi-link flexible manipulator is exploited to minimize vibrations. The application of a long-reach arm in autonomous mooring operations based on sensor fusion of camera and light detection and ranging (LiDAR) data is proposed.
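    As a minimal sketch of gravity-deflection compensation for a flexible link, the snippet below applies the standard Euler-Bernoulli tip-deflection formula for a cantilever with a tip load, delta = F*L^3/(3*E*I), and offsets the commanded tip height by the predicted sag; the material and geometry values are illustrative assumptions, not the thesis's estimation method.

        def tip_deflection(payload_kg, length_m, E=70e9, I=2.0e-8, g=9.81):
            # Euler-Bernoulli static tip deflection of a cantilever
            # under a tip point load: delta = F * L^3 / (3 * E * I).
            F = payload_kg * g
            return F * length_m**3 / (3 * E * I)

        def compensated_tip_target(z_desired, payload_kg, length_m):
            # Command the tip above the desired height by the predicted
            # sag so the link settles at z_desired under gravity.
            return z_desired + tip_deflection(payload_kg, length_m)

        # A 2 kg payload on a 1.5 m slender aluminium link sags ~16 mm.
        print(compensated_tip_target(0.50, payload_kg=2.0, length_m=1.5))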

    Towards the development of safe, collaborative robotic freehand ultrasound

    The use of robotics in medicine is of growing importance for modern health services, as robotic systems have the capacity to improve upon human tasks, thereby enhancing the treatment ability of a healthcare provider. In the medical sector, ultrasound imaging is inexpensive and free of high radiation emissions, advantages over MRI and CT imaging respectively. Over the past two decades, considerable effort has been invested into freehand ultrasound robotics research and development. However, this research has focused on the feasibility of the application, not on robotic fundamentals such as motion control, calibration, and contextual awareness. Instead, much of the work concentrates on custom-designed robots, ultrasound image generation and visual servoing, or teleoperation. Research on these topics often suffers from important limitations that impede use in an adaptable, scalable, real-world manner. In particular, while custom robots may be designed for a specific application, commercial collaborative robots are a more robust and economical solution. Various robotic ultrasound studies have shown the feasibility of basic force control but rarely explore controller tuning in the context of patient safety and deformable skin in an unstructured environment. Moreover, many studies evaluate novel visual servoing approaches but do not consider the practicality of relying on external measurement devices for motion control. These studies neglect the importance of robot accuracy and calibration, which allow a system to safely navigate its environment while reducing the imaging errors associated with positioning. Hence, while the feasibility of robotic ultrasound has been the focal point of previous studies, there is a lack of attention to what occurs between system design and image output. This thesis addresses limitations of the current literature through three distinct contributions. Given the force-controlled nature of an ultrasound robot, the first contribution presents a closed-loop calibration approach using impedance control and low-cost equipment. Accuracy is a fundamental requirement for high-quality ultrasound image generation and targeting, especially when following a specified path along a patient or synthesizing 2D slices into a 3D ultrasound image. However, even though most industrial robots are inherently precise, they are not necessarily accurate. While robot calibration has been extensively studied, many approaches rely on expensive and highly delicate equipment. Experimental testing, validated with a laser tracker, showed the proposed method to be comparable in quality to traditional laser-tracker calibration: the absolute accuracy of a collaborative robot was improved to a maximum error of 0.990 mm, a 58.4% improvement over the nominal model. The second contribution explores collisions and contact events, which are a natural by-product of applications involving physical human-robot interaction (pHRI) in unstructured environments. Robot-assisted medical ultrasound is an example of a task where simply stopping the robot upon contact detection may not be an appropriate reaction strategy. Thus, the robot should have an awareness of body contact location to properly plan force-controlled trajectories along the human body using the imaging probe. This is especially true for remote ultrasound systems, where safety and manipulability are important considerations when operating a remote medical system through a communication network. A framework is proposed for robot contact classification using the built-in sensor data of a collaborative robot. Unlike previous studies, this classification does not discern between intended and unintended contact scenarios, but rather classifies what was involved in the contact event. The classifier can discern different ISO/TS 15066:2016-specific body areas along a human-model leg with 89.37% accuracy. Altogether, this contact distinction framework allows for more complex reaction strategies and tailored robot behaviour during pHRI. Lastly, given that the success of an ultrasound task depends on the capability of the robot system to handle pHRI, pure motion control is insufficient: force control techniques are necessary to achieve effective and adaptable behaviour in the unstructured ultrasound environment while also ensuring safe pHRI. While force control does not require explicit knowledge of the environment, the control parameters must be tuned to achieve acceptable dynamic behaviour. The third contribution proposes a simple and effective online tuning framework for force-based robotic freehand ultrasound motion control. Within the context of medical ultrasound, different human body locations have different stiffnesses and require unique tunings. Through real-world experiments with a collaborative robot, the framework tuned motion control for optimal and safe trajectories along a human leg phantom. The optimization process successfully reduced the mean absolute error (MAE) of the motion contact force to 0.537 N through the evolution of eight motion control parameters. Furthermore, contextual awareness through motion classification can offer a framework for pHRI optimization and safety through predictive motion behaviour, with a future goal of autonomous pHRI. As such, a classification pipeline, trained using the tuning-process motion data, was able to reliably classify the future force-tracking quality of a motion session with an accuracy of 91.82%.
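    As a minimal sketch of the kind of force-based motion control being tuned, the loop below implements an admittance-style contact-force regulator for the probe axis against a linear "skin" spring; the gain, loop rate, target force, and tissue stiffness are illustrative assumptions, not the thesis's tuned parameters.

        kf, dt = 0.002, 0.002      # admittance gain (m/s per N), 500 Hz loop
        k_skin = 2000.0            # N/m, assumed tissue stiffness
        f_ref, z = 5.0, 0.0        # target contact force (N), probe depth (m)

        for step in range(1500):
            f_meas = max(0.0, k_skin * z)   # contact force from penetration
            v = kf * (f_ref - f_meas)       # force error -> velocity command
            z += v * dt                     # integrate commanded velocity

        print(f"contact force after 3 s: {k_skin * z:.2f} N")

    A tuning framework like the one described would search over gains such as kf to trade convergence speed against overshoot on tissue of unknown stiffness.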

    Computational neurorehabilitation: modeling plasticity and learning to predict recovery

    Despite progress in using computational approaches to inform medicine and neuroscience over the last 30 years, there have been few attempts to model the mechanisms underlying sensorimotor rehabilitation. We argue that a fundamental understanding of neurologic recovery, and as a result accurate predictions at the individual level, will be facilitated by developing computational models of the salient neural processes, including the plasticity and learning systems of the brain, and integrating them into a context specific to rehabilitation. Here, we therefore discuss Computational Neurorehabilitation, a newly emerging field aimed at modeling plasticity and motor learning to understand and improve movement recovery of individuals with neurologic impairment. We first explain how the emergence of robotics and wearable sensors for rehabilitation is providing data that make the development and testing of such models increasingly feasible. We then review key aspects of plasticity and motor learning that such models will incorporate. We proceed by discussing how computational neurorehabilitation models relate to the current benchmark in rehabilitation modeling: regression-based prognostic modeling. We then critically discuss the first computational neurorehabilitation models, which have primarily focused on modeling rehabilitation of the upper extremity after stroke, and show how even simple models have produced novel ideas for future investigation. Finally, we conclude with key directions for future research, anticipating that we will soon see the emergence of mechanistic models of motor recovery that are informed by clinical imaging results and driven by the actual movement content of rehabilitation therapy as well as wearable sensor-based records of daily activity.
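    To make the notion of a simple mechanistic model concrete, the sketch below implements a standard single-state, trial-by-trial motor learning rule, x[n+1] = A*x[n] + B*e[n], with retention factor A and learning rate B; the parameter values are illustrative assumptions, not estimates from the studies reviewed.

        A, B = 0.99, 0.10        # assumed retention and learning rates
        target, x = 1.0, 0.0     # rehabilitation goal and initial ability

        for trial in range(200):
            e = target - x       # movement error experienced this trial
            x = A * x + B * e    # error-driven plasticity update

        # The plateau at B/(1 - A + B) below the target illustrates why
        # such models predict incomplete recovery from practice alone.
        print(f"ability after 200 trials: {x:.3f}")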

    Trust-Based Control of Robotic Manipulators in Collaborative Assembly in Manufacturing

    Human-robot interaction (HRI) is vastly addressed in the field of automation and manufacturing. Most of the HRI literature in manufacturing has explored physical human-robot interaction (pHRI) and invested in finding means of ensuring safety and optimized effort sharing within a team of humans and robots. The recent emergence of safe, lightweight, and human-friendly robots has opened a new realm for human-robot collaboration (HRC) in collaborative manufacturing. For such robots with new HRI functionalities to interact closely and effectively with a human coworker, new human-centered controllers that integrate both physical and social interaction are demanded. Social human-robot interaction (sHRI) has been demonstrated in robots with affective abilities in education, social services, health care, and entertainment. Nonetheless, sHRI should not be limited to those areas. In particular, we focus on human trust in the robot as a basis of social interaction. Human trust in the robot and the robot's anthropomorphic features have a high impact on sHRI. Trust is one of the key factors in sHRI and a prerequisite for effective HRC. Trust characterizes a human's reliance on, and tendency to use, robots. Factors within the robotic system (e.g. performance, reliability, or attributes), the task, and the surrounding environment can all impact trust dynamically. Over-reliance or under-reliance may occur due to improperly calibrated trust, resulting in poor team collaboration and hence a higher task load and lower overall task performance. The goal of this dissertation is to develop intelligent control algorithms for manipulator robots that integrate both physical and social HRI factors in collaborative manufacturing. First, a model of the evolution of human trust in a collaborative robot is identified and verified through a series of human-in-the-loop experiments. This model serves as a computational trust model estimating an objective criterion for the evolution of human trust in the robot rather than estimating an individual's actual level of trust. Second, an HRI-based framework is developed for controlling the speed of a robot performing pick-and-place tasks. The impact of considering different levels of interaction in the robot controller on overall efficiency and on HRI criteria such as human perceived workload, trust, and robot usability is studied using a series of human-in-the-loop experiments. Third, an HRI-based framework is developed for planning and controlling the robot motion in performing hand-over tasks to the human. Again, a series of human-in-the-loop experimental studies is conducted to evaluate the impact of the frameworks on overall efficiency and HRI criteria such as human workload, trust, and robot usability. Finally, another framework is proposed for the cooperative manipulation of a common object by a human-robot team. This framework proposes a trust-based role allocation strategy for adjusting the proactive behavior of the robot performing a cooperative manipulation task in HRC scenarios. Across these frameworks, the experimental results show that integrating HRI in the robot controller leads to a lower human workload while maintaining a threshold level of human trust in the robot without degrading robot usability or efficiency.
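    As a minimal sketch of a computational trust model of this kind, the update below is a generic linear trust-dynamics rule from the HRI literature, in which trust decays toward recent robot performance and drops sharply on faults; the coefficients and the clamping to [0, 1] are illustrative assumptions, not the model identified in the dissertation.

        a, b_perf, b_fault = 0.90, 0.12, 0.40   # assumed dynamics coefficients

        def trust_update(T, performance, fault):
            # T and performance in [0, 1]; fault is 0 or 1 per interaction.
            T = a * T + b_perf * performance - b_fault * fault
            return min(1.0, max(0.0, T))

        T = 0.5
        for perf, fault in [(0.9, 0), (0.8, 0), (0.2, 1), (0.9, 0)]:
            T = trust_update(T, perf, fault)
            print(f"trust = {T:.2f}")

    A trust-based role allocation strategy could then, for example, scale the robot's proactive contribution in cooperative manipulation with the current value of T.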

    Learning and Reacting with Inaccurate Prediction: Applications to Autonomous Excavation

    Motivated by autonomous excavation, this work investigates solutions to a class of problems where disturbance prediction is critical to overcoming the poor performance of a feedback controller, but where the disturbance prediction is intrinsically inaccurate. Poor feedback controller performance is related to a fundamental control problem: there is only a limited amount of disturbance rejection that feedback compensation can provide. It is known, however, that predictive action can improve the disturbance rejection of a control system beyond the limitations of feedback. While prediction is desirable, the problem in excavation is that disturbance predictions are prone to error due to the variability and complexity of soil-tool interaction forces. This work proposes the use of iterative learning control to map the repetitive components of excavation forces into feedforward commands. Although feedforward action proves useful for improving excavation performance, the non-repetitive nature of soil-tool interaction forces is a source of inaccurate predictions. To explicitly address the use of imperfect predictive compensation, a disturbance observer is used to estimate the prediction error. To quantify inaccuracy in prediction, a feedforward model of excavation disturbances is interpreted as a communication channel that transmits corrupted disturbance previews, for which metrics based on the sensitivity function exist. During field trials the proposed method demonstrated the ability to iteratively achieve a desired dig geometry, independent of the initial feasibility of the excavation passes in relation to actuator saturation. Predictive commands adapted to different soil conditions, and passes were repeated autonomously until a pre-specified finish quality of the trench was achieved. Evidence of improvement in disturbance rejection is presented as a comparison of the sensitivity functions of systems with and without predictive disturbance compensation.
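    As a minimal sketch of the core mechanism, the loop below applies a P-type iterative learning control update, u_{k+1} = u_k + L*e_k, so each digging pass reuses the previous pass's tracking error as a feedforward correction; the toy single-gain "plant", reference profile, and learning gain are illustrative assumptions, not the excavator dynamics.

        import numpy as np

        N, passes = 100, 8
        L_gain, plant_gain = 0.5, 1.0
        ref = np.sin(np.linspace(0.0, np.pi, N))   # desired dig profile
        u = np.zeros(N)                            # feedforward command

        for k in range(passes):
            y = plant_gain * u          # simplified pass dynamics
            e = ref - y                 # tracking error on pass k
            u = u + L_gain * e          # ILC update: u_{k+1} = u_k + L*e_k
            print(f"pass {k}: max |error| = {np.abs(e).max():.3f}")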

    Improving human robot collaboration through Force/Torque based learning for object manipulation

    Human-Robot Collaboration (HRC) is a term used to describe tasks in which robots and humans work together to achieve a goal. Unlike traditional industrial robots, collaborative robots need to be adaptive, able to alter their approach to better suit the situation and the needs of the human partner. As traditional programming techniques can struggle with the complexity required, an emerging approach is to learn a skill by observing human demonstration and imitating the motions, commonly known as Learning from Demonstration (LfD). In this work, we present an LfD methodology that combines an ensemble machine learning algorithm, Random Forest (RF), with stochastic regression, using haptic information captured from human demonstration. The capabilities of the proposed method are evaluated using two collaborative tasks: co-manipulation of an object (where the human provides the guidance but the robot bears the object's weight) and collaborative assembly of simple interlocking parts. The proposed method is shown to be capable of imitation learning, interpreting human actions and producing equivalent robot motion across a diverse range of initial and final conditions. After verifying that ensemble machine learning can be utilised for real robotics problems, we propose a further extension utilising a Weighted Random Forest (WRF) that attaches a weight to each tree based on its performance. It is then shown that the WRF approach outperforms RF in HRC tasks.
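    As a minimal sketch of the learning setup, the snippet below trains a random forest regressor to map six-axis force/torque samples to a velocity command, then forms a WRF-style prediction by weighting each tree by its validation accuracy; the synthetic data, dimensions, and weighting scheme are illustrative assumptions, not the paper's dataset or exact algorithm.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 6))                 # Fx,Fy,Fz,Tx,Ty,Tz
        y = 0.05 * X[:, :3] + 0.01 * rng.normal(size=(500, 3))  # vx,vy,vz

        rf = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

        # WRF-style extension: weight each tree by inverse validation
        # error, then average tree predictions with those weights.
        X_val = rng.normal(size=(100, 6))
        y_val = 0.05 * X_val[:, :3]
        errs = np.array([np.mean((t.predict(X_val) - y_val) ** 2)
                         for t in rf.estimators_])
        w = 1.0 / (errs + 1e-9)
        w /= w.sum()
        pred = sum(wi * t.predict(X[:1]) for wi, t in zip(w, rf.estimators_))
        print("weighted velocity prediction:", pred)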