40 research outputs found

    Analysis of FPGA implementation of bilateral control algorithms for haptic teleoperation

    This paper presents an FPGA implementation of a sliding mode control algorithm for bilateral teleoperation, addressing the problem of haptic teleoperation. The presented study improves haptic fidelity by widening the control bandwidth. A wide control bandwidth requires short control periods as well as short sampling periods, which were achieved with the FPGA. The presented FPGA design methodology applies basic optimization methods in order to meet the required control period as well as the required hardware resource consumption. The circuit specification was performed in the high-level programming language LabVIEW using the fixed-point data type; hence, short design times for producing the FPGA logic circuit can be achieved. The proposed FPGA-based bilateral teleoperation was validated on a master-slave experimental device.
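    The abstract centres on a sliding mode control law computed every control period in fixed-point arithmetic. The sketch below shows, under stated assumptions, what one such update could look like: a Q16.16 fixed-point format, illustrative gains LAMBDA and K, and a boundary-layer saturation in place of a pure sign function. None of these constants or choices come from the paper, whose actual circuit is specified in LabVIEW rather than Python.

```python
# Minimal sketch (not the paper's LabVIEW design): one discrete sliding mode
# control update in Q16.16 fixed-point arithmetic, as an FPGA loop might compute it.
# Gains LAMBDA, K and the boundary-layer width PHI are illustrative assumptions.

FRAC_BITS = 16                      # Q16.16 format
ONE = 1 << FRAC_BITS

def to_fix(x: float) -> int:
    return int(round(x * ONE))

def fix_mul(a: int, b: int) -> int:
    return (a * b) >> FRAC_BITS     # fixed-point multiply with rescaling

LAMBDA = to_fix(20.0)               # sliding-surface slope (assumed)
K      = to_fix(5.0)                # switching gain (assumed)
PHI    = to_fix(0.05)               # boundary-layer width to soften chattering

def smc_step(pos_err: int, vel_err: int) -> int:
    """One control-period update: s = vel_err + LAMBDA*pos_err, u = -K*sat(s/PHI)."""
    s = vel_err + fix_mul(LAMBDA, pos_err)
    # saturated switching term instead of sign(s), as commonly used in SMC
    if s > PHI:
        sat = ONE
    elif s < -PHI:
        sat = -ONE
    else:
        sat = (s << FRAC_BITS) // PHI
    return -fix_mul(K, sat)

# Example: position error 0.01 rad, velocity error -0.2 rad/s
u = smc_step(to_fix(0.01), to_fix(-0.2))
print(u / ONE)
```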

    Fused mechanomyography and inertial measurement for human-robot interface

    Human-Machine Interfaces (HMI) are the technology through which we interact with the ever-increasing quantity of smart devices surrounding us. The fundamental goal of an HMI is to facilitate robot control through uniting a human operator as the supervisor with a machine as the task executor. Sensors, actuators, and onboard intelligence have not reached the point where robotic manipulators may function with complete autonomy and therefore some form of HMI is still necessary in unstructured environments. These may include environments where direct human action is undesirable or infeasible, and situations where a robot must assist and/or interface with people. Contemporary literature has introduced concepts such as body-worn mechanical devices, instrumented gloves, inertial or electromagnetic motion tracking sensors on the arms, head, or legs, electroencephalographic (EEG) brain activity sensors, electromyographic (EMG) muscular activity sensors and camera-based (vision) interfaces to recognize hand gestures and/or track arm motions for assessment of operator intent and generation of robotic control signals. While these developments offer a wealth of future potential, their utility has been largely restricted to laboratory demonstrations in controlled environments due to issues such as a lack of portability and robustness and an inability to extract operator intent for both arm and hand motion. Wearable physiological sensors hold particular promise for capture of human intent/command. EMG-based gesture recognition systems in particular have received significant attention in recent literature. As wearable pervasive devices, they offer benefits over camera or physical input systems in that they neither inhibit the user physically nor constrain the user to a location where the sensors are deployed. Despite these benefits, EMG alone has yet to demonstrate the capacity to recognize both gross movement (e.g. arm motion) and finer grasping (e.g. hand movement). As such, many researchers have proposed fusing muscle activity (EMG) and motion tracking (e.g. inertial measurement) to combine arm motion and grasp intent as HMI input for manipulator control. However, such work has arguably reached a plateau since EMG suffers from interference from environmental factors which cause signal degradation over time, demands an electrical connection with the skin, and has not demonstrated the capacity to function outside controlled environments for long periods of time. This thesis proposes a new form of gesture-based interface utilising a novel combination of inertial measurement units (IMUs) and mechanomyography sensors (MMGs). The modular system permits numerous configurations of IMUs to derive body kinematics in real time and uses this to convert arm movements into control signals. Additionally, bands containing six mechanomyography sensors were used to observe muscular contractions in the forearm which are generated using specific hand motions. This combination of continuous and discrete control signals allows a large variety of smart devices to be controlled. Several methods of pattern recognition were implemented to provide accurate decoding of the mechanomyographic information, including Linear Discriminant Analysis and Support Vector Machines. Based on these techniques, accuracies of 94.5% and 94.6% respectively were achieved for 12-gesture classification. In real-time tests, an accuracy of 95.6% was achieved in 5-gesture classification.
It has previously been noted that MMG sensors are susceptible to motion-induced interference. This thesis established that arm pose also changes the measured signal. It introduces a new method of fusing IMU and MMG data to provide a classification that is robust to both of these sources of interference. Additionally, an improvement in orientation estimation and a new orientation estimation algorithm are proposed. These improvements to the robustness of the system provide the first solution that is able to reliably track both motion and muscle activity for extended periods of time for HMI outside a clinical environment. Applications in robot teleoperation in both real-world and virtual environments were explored. With multiple degrees of freedom, robot teleoperation provides an ideal test platform for HMI devices, since it requires a combination of continuous and discrete control signals. The field of prosthetics also represents a unique challenge for HMI applications. In an ideal situation, the sensor suite should be capable of detecting the muscular activity in the residual limb which is naturally indicative of intent to perform a specific hand pose, and of triggering this pose in the prosthetic device. Dynamic environmental conditions within a socket, such as skin impedance, have delayed the translation of gesture control systems into prosthetic devices; however, mechanomyography sensors are unaffected by such issues. There is huge potential for a system like this to be utilised as a controller as ubiquitous computing systems become more prevalent, and as the desire for a simple, universal interface increases. Such systems have the potential to impact significantly on the quality of life of prosthetic users and others.
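    As a rough illustration of the pattern-recognition step, the sketch below trains the two classifiers named in the abstract, Linear Discriminant Analysis and a Support Vector Machine, on windowed features from six MMG channels. The window length, the RMS and waveform-length features, the synthetic data and the scikit-learn models are assumptions made for illustration, not the thesis's actual pipeline or data.

```python
# Minimal sketch (not the thesis pipeline): classifying windowed signals from six
# MMG channels with LDA and an SVM. Window length, features and data shapes are assumed.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def mmg_features(window: np.ndarray) -> np.ndarray:
    """window: (n_samples, 6) raw MMG; returns RMS and waveform length per channel."""
    rms = np.sqrt(np.mean(window ** 2, axis=0))
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)
    return np.concatenate([rms, wl])          # 12-dimensional feature vector

# Synthetic stand-in data: 600 windows of 200 samples x 6 channels, 12 gesture labels
rng = np.random.default_rng(0)
windows = rng.standard_normal((600, 200, 6))
labels = rng.integers(0, 12, size=600)
X = np.stack([mmg_features(w) for w in windows])

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
for name, clf in [("LDA", LinearDiscriminantAnalysis()), ("SVM", SVC(kernel="rbf"))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", clf.score(X_te, y_te))
```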

    Haptic communication for remote mobile and manipulator robot operations in hazardous environments

    Nuclear decommissioning involves the use of remotely deployed mobile vehicles and manipulators controlled via teleoperation systems. Manipulators are used for tooling and sorting tasks, and mobile vehicles are used to locate a manipulator near to the area that it is to be operated upon and also to carry a camera into a remote area for monitoring and assessment purposes. Teleoperations in hazardous environments are often hampered by a lack of visual information. Direct line of sight is often only available through small, thick windows, which often become discoloured and less transparent over time. Ideal camera locations are generally not possible, which can lead to areas of the cell not being visible, or at least difficult to see. Damage to the mobile, manipulator, tool or environment can be very expensive and dangerous. Despite the advances in autonomous systems in recent years, the nuclear industry generally prefers to ensure that there is a human in the loop. This is due to the safety-critical nature of the industry. Haptic interfaces provide a means of allowing an operator to control aspects of a task that would be difficult or impossible to control with impoverished visual feedback alone. Manipulator end-effector force control and mobile vehicle collision avoidance are examples of such tasks. Haptic communication has been integrated with both a Schilling Titan II manipulator teleoperation system and a Cybermotion K2A mobile vehicle teleoperation system. The manipulator research was carried out using a real manipulator whereas the mobile research was carried out in simulation. Novel haptic communication generation algorithms have been developed. Experiments have been conducted using both the mobile and the manipulator to assess the performance gains offered by haptic communication. The results of the mobile vehicle experiments show that haptic feedback offered performance improvements in systems where the operator is solely responsible for control of the vehicle. However, in systems where the operator is assisted by semi-autonomous behaviour that can perform obstacle avoidance, the advantages of haptic feedback were more subtle. The results from the manipulator experiments served to support the results from the mobile vehicle experiments since they also show that haptic feedback does not always improve operator performance. Instead, performance gains rely heavily on the nature of the task, other system feedback channels and operator assistance features. The tasks performed with the manipulator were peg insertion, grinding and drilling.
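    The thesis's haptic generation algorithms are not detailed in the abstract; as a hedged illustration of the kind of cue that mobile-vehicle collision avoidance might use, the sketch below turns range readings into a repulsive joystick force with a potential-field-style term. The threshold D_MAX, the gain K_REP and the sensor layout are assumed values, not figures from the thesis.

```python
# Minimal sketch of one way to generate a collision-avoidance haptic cue for a
# teleoperated mobile vehicle: a repulsive force on the joystick that grows as
# range readings fall below a threshold. D_MAX and K_REP are assumed values.
import math

D_MAX = 2.0   # metres: ranges beyond this produce no force (assumed)
K_REP = 5.0   # repulsion gain (assumed)

def repulsive_force(ranges, bearings):
    """ranges[i] (m) measured at bearings[i] (rad, vehicle frame) -> (fx, fy) on the stick."""
    fx = fy = 0.0
    for d, theta in zip(ranges, bearings):
        if 0.0 < d < D_MAX:
            mag = K_REP * (1.0 / d - 1.0 / D_MAX)   # classic potential-field style term
            fx -= mag * math.cos(theta)             # push the stick away from the obstacle
            fy -= mag * math.sin(theta)
    return fx, fy

# Example: obstacle 0.5 m ahead and another 1.5 m to the left
print(repulsive_force([0.5, 1.5], [0.0, math.pi / 2]))
```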

    Human Machine Interfaces for Teleoperators and Virtual Environments

    In March 1990, a meeting was held around the general theme of teleoperation research into virtual environment display technology. This is a collection of conference-related fragments that give a glimpse of the potential of the following fields and how they interplay: sensorimotor performance; human-machine interfaces; teleoperation; virtual environments; performance measurement and evaluation methods; and design principles and predictive models.

    Task-oriented joint design of communication and computing for Internet of Skills

    Nowadays, the internet is taking a revolutionary step forward, known as the Internet of Skills. The Internet of Skills is a concept that refers to a network of sensors, actuators, and machines that enables the delivery of knowledge, skills, and expertise between people and machines, regardless of their geographical locations. This concept allows immersive remote operation and access to expertise through virtual and augmented reality, haptic communications, robotics, and other cutting-edge technologies, with various applications including remote surgery and diagnosis in healthcare, remote laboratory and training in education, remote driving in transportation, and advanced manufacturing in Industry 4.0. In this thesis, we investigate three fundamental communication requirements of Internet of Skills applications, namely ultra-low latency, ultra-high reliability, and wireless resource utilization efficiency. Although 5G communications provide cutting-edge solutions for achieving ultra-low latency and ultra-high reliability with good resource utilization efficiency, meeting these requirements is difficult, particularly in long-distance communications where the distance between source and destination is more than 300 km, considering delays and reliability issues in networking components as well as the physical limit imposed by the speed of light. Furthermore, resource utilization efficiency must be improved further to accommodate the rapidly increasing number of mobile devices. Therefore, new design techniques that take into account both communication and computing systems with a task-oriented approach are urgently needed to satisfy conflicting latency and reliability requirements while improving resource utilization efficiency. First, we design and implement a 5G-based teleoperation prototype for Internet of Skills applications. We present two emerging Internet of Skills use cases in healthcare and education. We conducted extensive experiments evaluating local and long-distance communication latency and reliability to gain insights into the current capabilities and limitations. From our local experiments in a laboratory environment, where both the operator and the robot are in the same room, we observed that communication latency is around 15 ms with a 99.9% packet reception rate (communication reliability). However, communication latency increases up to 2 seconds in long-distance scenarios (between the UK and China), while it is around 50-300 ms in experiments within the UK. In addition, our observations revealed that communication reliability and overall system performance do not exhibit a direct correlation. Instead, the number of consecutive packet drops emerged as the decisive factor influencing the overall system performance and user quality of experience. In light of these findings, we proposed a two-way timeout approach in which stale packets are discarded to mitigate waiting times and, in turn, reduce the latency. Nevertheless, we observed that the proposed approach reduced latency at the expense of reliability, thus verifying the challenge of the conflicting latency and reliability requirements. Next, we propose a task-oriented prediction and communication co-design framework to meet conflicting latency and reliability requirements. The proposed framework demonstrates the task-oriented joint design of communication and computing systems, where we consider packet losses in communications and prediction errors in prediction algorithms to derive an upper bound on overall system reliability.
We reveal the tradeoff between overall system reliability and resource utilization efficiency, considering 5G NR as an example communication system. The proposed framework is evaluated with real data samples and generated synthetic data samples. From the results, the proposed framework achieves a better latency-reliability tradeoff with a 77.80% resource utilization efficiency improvement compared to a task-agnostic benchmark. In addition, we demonstrate that deploying the predictor at the receiver side achieves better overall reliability than a system with the predictor at the transmitter. Finally, we propose an intelligent mode-switching framework to address the resource utilization challenge. We jointly design the communication, user intention recognition, and mode-switching systems to reduce communication load subject to a joint task completion probability. We reveal the tradeoff between task prediction accuracy and task observation length, showing that higher prediction accuracy can be achieved when the task observation length increases. The proposed framework achieves more than 90% task prediction accuracy with 60% observation length. We train a DRL agent with real-world data from our teleoperation prototype for mode-switching between teleoperation and autonomous modes. Our results show that the proposed framework achieves up to 50% communication load reduction with similar task completion probability compared to conventional teleoperation.
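    As an illustration of the two-way timeout idea described above, the sketch below drops packets that are out of order or older than a fixed deadline instead of waiting for them, which lowers latency at the cost of reliability. The 30 ms deadline, the packet fields and the synchronised-clock assumption are illustrative choices, not values taken from the thesis.

```python
# Minimal sketch of a two-way timeout rule: on each side of the teleoperation
# link, a packet older than a deadline is discarded instead of being processed.
# The 30 ms deadline and packet format are illustrative assumptions.
import time
from dataclasses import dataclass

DEADLINE_S = 0.030          # discard packets older than 30 ms (assumed value)

@dataclass
class Packet:
    seq: int                # sequence number
    t_send: float           # sender timestamp (assumes synchronised clocks)
    payload: bytes          # e.g. an encoded pose or force sample

def accept(packet: Packet, last_seq: int, now: float | None = None) -> bool:
    """Apply the timeout on the receiving side (used symmetrically on both ends)."""
    now = time.time() if now is None else now
    if packet.seq <= last_seq:            # out-of-order or duplicate: stale by ordering
        return False
    if now - packet.t_send > DEADLINE_S:  # stale by age: skip rather than wait
        return False
    return True

# Example: a packet that arrives 45 ms after it was sent is discarded
pkt = Packet(seq=10, t_send=0.000, payload=b"")
print(accept(pkt, last_seq=9, now=0.045))   # False
```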

    Perception-motivated parallel algorithms for haptics

    In recent years haptic feedback has been used in a growing number of applications, from mobile phones to rehabilitation, from video games to robot-aided surgery. Haptic devices, the interfaces that create the stimulation and reproduce the physical interaction with virtual or remote environments, have been studied, analyzed and developed in many ways. Every innovation in the mechanics, electronics and technical design of a device is valuable; however, it is important to keep the focus of haptic interaction on the human being, who is the only user of the force feedback. In this thesis we worked on two main topics relevant to this aim: a perception-based manipulation of the force signal and the use of modern multicore architectures for the implementation of the haptic controller. With the help of a dedicated experimental setup and a 6-DOF haptic device, we designed psychophysical experiments aimed at identifying the force/torque differential thresholds of the hand-arm system. On the basis of the results obtained, we determined a set of task-dependent scaling functions of the force signal, one for each degree of freedom of the three-dimensional space, that can be used to enhance the human ability to discriminate different stimuli. The perception-based manipulation of the force feedback requires a fast, stable and configurable controller of the haptic interface. A solution is to entrust the control of the device to the new multicore architectures that are becoming widespread, but many consolidated algorithms have to be ported to these parallel systems. Focusing on a specific problem, matrix pseudoinversion, which is part of many dynamics and kinematics algorithms in robotics, we showed that it is possible to migrate code that was already implemented in hardware, in particular old algorithms that are inherently parallel and thus not competitive on sequential processors. The main open question is how much effort is required to rewrite these algorithms, usually described in VLSI or schematics, in a modern high-level programming language. We show that a careful task decomposition and design permit a direct mapping of the code onto the available computing units. In addition, the use of data parallelism on SIMD machines can give good performance when simple vector instructions such as additions and shifts are used. Since these instructions are also part of the hardware implementations, the migration can be performed easily. We tested our approach on a Sony PlayStation 3 game console equipped with an IBM Cell Broadband Engine processor.
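    As a hedged illustration of porting a pseudoinversion kernel to data-parallel hardware, the sketch below computes a Moore-Penrose pseudoinverse with a Ben-Israel/Newton-Schulz iteration that uses only matrix multiplies and additions, operations that vectorize naturally on SIMD units. This is a textbook method chosen for illustration, not the specific algorithm migrated in the thesis.

```python
# Minimal sketch of a pseudoinverse built only from matrix multiplies and adds
# (Ben-Israel/Newton-Schulz iteration), the kind of kernel that maps naturally
# onto data-parallel SIMD units. Initial scaling and iteration count are
# standard textbook choices, not values from the thesis.
import numpy as np

def pseudoinverse(A: np.ndarray, iters: int = 50) -> np.ndarray:
    """Iterate X <- X (2I - A X); converges to pinv(A) for a suitable X0."""
    m, n = A.shape
    # Safe starting guess: X0 = A^T / (||A||_1 * ||A||_inf)
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I2 = 2.0 * np.eye(m)
    for _ in range(iters):
        X = X @ (I2 - A @ X)          # only matrix products and additions
    return X

# Example: a 6x4 Jacobian-like matrix, checked against NumPy's SVD-based pinv
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))
print(np.allclose(pseudoinverse(A), np.linalg.pinv(A), atol=1e-8))
```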