
    Correlated multi-streaming in distributed interactive multimedia systems

    Distributed Interactive Multimedia Environments (DIMEs) enable geographically distributed people to interact with each other in a joint media-rich virtual environment for a wide range of activities, such as art performance, medical consultation and sport training. The real-time collaboration is made possible by exchanging a set of multi-modal sensory streams over the network in real time. The characterization and evaluation of such multi-stream interactive environments is challenging because traditional Quality of Service (QoS) metrics (e.g., delay, jitter) are limited to a per-stream basis. In this work, we present a novel "Bundle of Streams" concept to define correlated multi-streams in DIMEs and present new cyber-physical, spatio-temporal QoS metrics measured over a bundle of streams. We realize the Bundle of Streams concept through a novel paradigm of Bundle Streaming as a Service (SAS). We propose and develop the SAS Kernel, a generic, distributed, modular and highly flexible streaming kernel realizing the SAS concept. We validate the Bundle of Streams model by comparing the QoS performance of bundles of streams over different transport protocols in a 3D tele-immersive testbed. Further experiments demonstrate that the SAS Kernel incurs low overhead in delay, CPU and bandwidth demands.
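The abstract does not define the bundle-level metrics themselves; as a hedged illustration of the per-stream vs. bundle distinction, the sketch below pairs a conventional per-stream jitter estimate with a hypothetical bundle metric, the worst-case arrival skew between frames captured at the same instant across the streams of a bundle. All names and the metric definition are assumptions, not taken from the paper.

```python
from statistics import mean

def stream_jitter(arrivals):
    """Classic per-stream metric: mean absolute variation of inter-arrival times."""
    gaps = [b - a for a, b in zip(arrivals, arrivals[1:])]
    return mean(abs(b - a) for a, b in zip(gaps, gaps[1:]))

def bundle_skew(bundle):
    """Hypothetical bundle-level metric: worst-case arrival skew between
    correlated frames captured at the same instant across all streams."""
    skews = [max(frames) - min(frames) for frames in zip(*bundle.values())]
    return max(skews)

# Two correlated streams (arrival timestamps in ms) forming one bundle.
bundle = {"video": [0, 33, 66, 100], "haptic": [5, 34, 70, 101]}
print(stream_jitter(bundle["video"]))  # -> 0.5 (per-stream view)
print(bundle_skew(bundle))             # -> 5   (bundle-wide view)
```

The point of the sketch is that a bundle metric sees cross-stream misalignment that each per-stream jitter value, taken alone, cannot express.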

    Proposed Fuzzy Real-Time HaPticS Protocol Carrying Haptic Data and Multisensory Streams

    Sensory and haptic data transfers for critical real-time applications over the Internet require better-than-best-effort transport with strict, timely and reliably ordered delivery. Multi-sensory applications usually combine video and audio streams with real-time control and sensory data, which are aggregated and compressed within real-time flows. Such real-time flows are vulnerable to synchronization problems when combined with poor Internet links. Apart from the use of differentiated QoS and MPLS services, several haptic transport protocols have been proposed to confront such issues, focusing on minimizing flow-rate disruption while maintaining a steady transmission rate at the sender. Nevertheless, these protocols fail to cope with the network variations and queuing delays introduced by Internet routers. This paper proposes a new haptic protocol that tries to alleviate these inadequacies using three metrics calculated at the receiver end and propagated back to the sender: mean frame delay, jitter and frame loss. In order to adjust the flow rate dynamically in a fuzzy-controlled manner, the proposed protocol includes a fuzzy controller in its protocol structure. The proposed FRTPS protocol (Fuzzy Real-Time haPticS protocol) feeds crisp inputs into a fuzzification process followed by fuzzy control rules in order to compute a crisp output service class, denoted the Service Rate Level (SRL). Experimental results comparing FRTPS with RTP show that FRTPS outperforms RTP in terms of congestion incidents, out-of-order deliveries and goodput.
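The receiver-metrics-to-SRL mapping can be sketched roughly as follows, assuming triangular membership functions, min-rule (Mamdani-style AND) inference and a five-level SRL; the breakpoints and rule shape are invented for illustration and are not taken from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function: peaks at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def service_rate_level(delay_ms, jitter_ms, loss_pct):
    """Map the three receiver-side metrics to a crisp Service Rate Level
    in {0..4}. Breakpoints are illustrative assumptions."""
    # Degree to which each metric looks "healthy" (1 at zero, fading out).
    d_ok = tri(delay_ms, -1, 0, 300)
    j_ok = tri(jitter_ms, -1, 0, 60)
    l_ok = tri(loss_pct, -1, 0, 8)
    health = min(d_ok, j_ok, l_ok)   # min-rule AND over the fuzzy inputs
    return round(4 * health)         # defuzzify to one of 5 rate levels

print(service_rate_level(20, 5, 0.1))  # mild conditions -> high SRL (4)
print(service_rate_level(250, 50, 6))  # congestion -> low SRL (1)
```

A real controller would use a full rule base and smoother defuzzification (e.g., centroid), but the crisp-in, fuzzy-inference, crisp-out pipeline is the shape the abstract describes.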

    Perception-motivated parallel algorithms for haptics

    In recent years haptic feedback has found use in a growing range of applications, from mobile phones to rehabilitation, and from video games to robot-aided surgery. Haptic devices, the interfaces that create the stimulation and reproduce physical interaction with virtual or remote environments, have been studied, analysed and developed in many ways. Every innovation in the mechanics, electronics and technical design of a device is valuable; it is important, however, to keep haptic interaction focused on the human being, who is the only user of the force feedback.
In this thesis we worked on two main topics relevant to this aim: perception-based manipulation of the force signal, and the use of modern multicore architectures for implementing the haptic controller. With the help of a purpose-built experimental setup and a 6-DoF force-feedback joystick, we designed psychophysical experiments aimed at identifying the force/torque differential thresholds of the hand-arm system. On the basis of the results we determined a set of task-dependent scaling functions, one for each degree of freedom of three-dimensional space, that can be used to enhance the human ability to discriminate different stimuli. Perception-based manipulation of the force feedback requires a fast, stable and configurable controller for the haptic interface. One solution is to implement the controller on the multicore architectures that are now widely available, but many consolidated algorithms must first be ported to these parallel systems. Focusing on a specific problem, matrix pseudoinversion, which appears in many dynamics and kinematics algorithms, we showed that it is possible to migrate code originally implemented in hardware, in particular old algorithms that are inherently parallel and therefore not competitive on sequential processors. The main open question is how much effort is required to rewrite these algorithms, usually described in VLSI or schematics, in a modern high-level programming language. We show that careful task decomposition and design permit a direct mapping of the code onto the available cores. In addition, data parallelism on SIMD machines can deliver good performance using simple vector instructions such as adds and shifts. Since these instructions are also present in hardware implementations, the migration can be performed easily. We tested our approach on a Sony PlayStation 3 game console equipped with an IBM Cell Broadband Engine processor.
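The thesis's Cell-specific decomposition is not reproduced here; as an illustration of a multiply-and-add-only pseudoinversion algorithm of the hardware-friendly kind discussed above, the sketch below implements the Ben-Israel (Newton-Schulz) iteration in NumPy. The iteration count and example matrix are arbitrary, and this is a stand-in, not the thesis's exact scheme.

```python
import numpy as np

def pinv_newton_schulz(A, iters=50):
    """Ben-Israel / Newton-Schulz iteration for the Moore-Penrose
    pseudoinverse. It uses only matrix multiplies and adds, which is why
    this family of algorithms maps well onto SIMD units."""
    # Safe starting guess: alpha * A^T with alpha < 2 / sigma_max(A)^2,
    # bounded via the inequality sigma_max^2 <= ||A||_1 * ||A||_inf.
    alpha = 1.0 / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    X = alpha * A.T
    I = np.eye(A.shape[0])
    for _ in range(iters):
        X = X @ (2 * I - A @ X)   # quadratically convergent update
    return X

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])  # 3x2, full column rank
X = pinv_newton_schulz(A)
print(np.allclose(X, np.linalg.pinv(A)))  # True
```

Each step is a pair of dense matrix products, so the inner loop vectorizes directly onto SIMD lanes or, on the Cell processor, across SPEs.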

    User-centred evaluation of two age-appropriate assistive robots for supporting activities of daily living ("Ambient Assisted Living" robots) in older adults with functional impairments: the MOBOT rollator and the I-SUPPORT shower robot

    The aim of this work is the user-centred evaluation of two prototype assistive robots designed to support activities of daily living ("Ambient Assisted Living" [AAL] robots) in older adults with functional impairments. The prototypes are (1) a robot-assisted rollator for mobility support (MOBOT) and (2) an assistive robot for supporting showering activities (I-SUPPORT). Manuscript I documents a systematic literature review of the methodology of previous studies evaluating robot-assisted rollators from the user perspective. Most studies show substantial methodological shortcomings, such as inadequate sample sizes and sample descriptions, participants not representative of the user group of robot-assisted rollators, a lack of suitable, standardised and validated assessment methods, and/or no inferential statistics. No generic methodological approach for the evaluation of robot-assisted rollators could be identified. Manuscript I therefore concludes with recommendations for the design and conduct of future studies evaluating robot-assisted rollators and other AAL systems. Manuscript II analyses the findings of the studies identified in Manuscript I. The results regarding the added value of the innovative assistance functions of robot-assisted rollators are highly heterogeneous, although users generally perceive the functions positively. The pronounced heterogeneity and methodological shortcomings of the studies severely limit the interpretability of their findings. Overall, Manuscript II shows that the evidence for the effectiveness and positive perception of robot-assisted rollators from the user perspective is still insufficient.
Based on the findings and recommendations of the systematic literature reviews in Manuscripts I and II, the user-centred evaluation studies of the MOBOT rollator were designed and conducted (Manuscripts III-VI). Manuscript III examines the effectiveness of the navigation system integrated into the MOBOT rollator with potential users (older adults with gait disorders or who use a rollator as a walking aid in everyday life). It provides the first statistical evidence that such an assistance function is effective in improving users' navigation performance (e.g., shorter stopping time, shorter distance travelled), particularly for users with cognitive impairments, in a realistic application scenario. Manuscript IV investigates the concurrent validity of the MOBOT-integrated gait analysis system with potential users. Compared with an established reference standard (the GAITRite® system), it shows high concurrent validity for capturing temporal, but not spatial, gait parameters; the latter can also be measured with high consistency, but only with limited absolute accuracy. Manuscript V covers the user-centred evaluation of the MOBOT rollator's obstacle-avoidance assistance function and provides the first evidence of the effectiveness of such a function with potential users. Using the obstacle-avoidance approach newly developed for the MOBOT rollator, participants showed significant improvements in negotiating an obstacle course (fewer collisions and lower approach speeds towards obstacles). Manuscript VI documents the effectiveness of, and user satisfaction with, the MOBOT rollator's stand-up assistance among potential users. It shows that the stand-up assistance significantly improves the success rate of the sit-to-stand transfer in older adults with motor impairments.
The results also demonstrate high user satisfaction with this assistance function, particularly among people with a higher body mass index. Manuscript VII investigates the human-robot interaction between the I-SUPPORT shower robot and its potential users (older adults with difficulties bathing/showering) and examines their effectiveness and satisfaction with three operating modes of differing autonomy. The study results document that with increasing user control (i.e., decreasing autonomy of the shower robot) not only does the effectiveness in showering a defined body area decrease, but user satisfaction also drops. Manuscript VIII covers the evaluation of a specific user training for gesture-based human-robot interaction with the I-SUPPORT shower robot. It shows that such training significantly improves both the gesture execution of potential users and the shower robot's gesture recognition rate, indicating an overall improved human-robot interaction as a result of the training. Participants with the poorest initial gesture performance and the greatest fear of technology benefited most from the user training. Overall, the study results on the user-centred evaluation of the MOBOT rollator demonstrate the effectiveness and validity of its innovative subfunctions. They indicate a high potential of the assistance functions (navigation system, obstacle avoidance, stand-up assistance) to improve the mobility of older adults with motor impairments.
Against the background of the methodological shortcomings and the insufficient evidence base in this field, this dissertation provides the first statistical evidence of the added value of such subfunctions for potential users, and thus makes an important contribution towards closing the research gap concerning user-centred proof of the effectiveness and validity of robot-assisted rollators and their innovative subfunctions. The results of the I-SUPPORT shower robot studies yield important insights into human-robot interaction in older age. They show that effective interaction with older users requires operating modes with a high degree of robot autonomy; despite their limited control over the robot, users were in fact most satisfied with the most autonomous operating mode. Furthermore, the results on gesture-based interaction with the I-SUPPORT shower robot underline that future developments of age-appropriate assistive robots with gesture-based interaction should consider not only technical improvements but also ensuring and improving the quality of user gestures for human-robot interaction through suitable training measures. The user training presented here could serve as a model for this.

    Usability of Upper Limb Electromyogram Features as Muscle Fatigue Indicators for Better Adaptation of Human-Robot Interactions

    Human-robot interaction (HRI) is the process of humans and robots working together to accomplish a goal, with the objective of making the interaction beneficial to humans. Closed-loop control and adaptability to individuals are among the important acceptance criteria for human-robot interaction systems. When designing an HRI interaction scheme, it is important to understand the users of the system and to evaluate the capabilities of humans and robots. An acceptable HRI solution is expected to adapt by detecting and responding to changes in the environment and its users; an adaptive robotic interaction therefore requires better sensing of human performance parameters. Human performance is influenced by the state of muscular and mental fatigue during active interactions. Researchers in the field of human-robot interaction have been trying to improve the adaptability of the environment according to the physical state of the human participants, yet existing human-robot interactions and robot-assisted training schemes are designed without sufficiently considering the implications of fatigue for the users. Given this, identifying whether a better outcome can be achieved during robot-assisted training by adapting to an individual's muscular status, i.e. their level of fatigue, is a novel area of research. It has potential applications in scenarios such as rehabilitation robotics: since robots can deliver a large number of repetitions, they can be used to train stroke patients to improve their muscular disabilities through repetitive training exercises. The objective of this research is to explore a solution for a longer and less fatiguing robot-assisted interaction that can adapt to the muscular state of participants using fatigue indicators derived from electromyogram (EMG) measurements.
In the initial part of this research, fatigue indicators for the upper-limb muscles of healthy participants were identified by analysing the electromyogram signals from the muscles as well as the kinematic data collected by the robot. The tasks consisted of point-to-point upper-limb movements involving dynamic muscle contractions while interacting with the HapticMaster robot. The study revealed quantitatively which muscles were involved in the exercise and which muscles were more fatigued. The results also indicated the potential of EMG and kinematic parameters to be used as fatigue indicators, and a correlation analysis between EMG features and kinematic parameters revealed that the correlation coefficient was affected by muscle fatigue. As an extension of this study, the EMG collected at the beginning of the task was also used to predict the type of point-to-point movement using a supervised machine learning algorithm based on Support Vector Machines; the results showed that the movement intention could be detected with reasonably good accuracy within the initial milliseconds of the task. The final part of the research implemented a fatigue-adaptive algorithm based on the identified EMG features. An experiment was conducted with thirty healthy participants to test the effectiveness of this adaptive algorithm. The participants interacted with the HapticMaster robot following a progressive muscle-strength training protocol similar to a standard sports-science protocol for muscle strengthening. The robotic assistance was altered according to the muscular state of participants, thus offering varying difficulty levels based on their state of fatigue or relaxation while performing the tasks. The results showed that the fatigue-based robotic adaptation produced a prolonged training interaction involving many repetitions of the task.
This study showed that, using fatigue indicators, it is possible to alter the level of challenge and thus increase the interaction time. In summary, the research undertaken during this PhD has successfully enhanced the adaptability of human-robot interaction. Apart from its potential use for muscle-strength training in healthy individuals, the work presented in this thesis is applicable to a wide range of human-machine interaction research such as rehabilitation robotics, with a potential application in robot-assisted upper-limb rehabilitation training of stroke patients.
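As one concrete example of an EMG-derived fatigue indicator of the kind this work builds on (the abstract does not list the exact feature set), the sketch below computes the median frequency of the EMG power spectrum, whose downward drift across successive task windows is a standard muscle-fatigue marker. The synthetic signals are illustrative, not recorded data.

```python
import numpy as np

def median_frequency(emg, fs):
    """Median frequency of the EMG power spectrum: the frequency that
    splits the total spectral power in half. It drifts downwards as a
    muscle fatigues, so comparing it across task windows gives a simple
    fatigue indicator."""
    spectrum = np.abs(np.fft.rfft(emg)) ** 2
    freqs = np.fft.rfftfreq(len(emg), d=1.0 / fs)
    cum = np.cumsum(spectrum)
    return freqs[np.searchsorted(cum, cum[-1] / 2.0)]

fs = 1000  # Hz
t = np.arange(0, 1, 1 / fs)
fresh = np.sin(2 * np.pi * 120 * t)     # fresh muscle: higher-frequency content
fatigued = np.sin(2 * np.pi * 60 * t)   # fatigued muscle: spectrum shifts down
print(median_frequency(fresh, fs) > median_frequency(fatigued, fs))  # True
```

An adaptive controller like the one described above would compare this value window by window and lower the robotic assistance difficulty once it drops past a threshold.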

    Instrumentation, Control, and Intelligent Systems


    Proceedings of the 9th international conference on disability, virtual reality and associated technologies (ICDVRAT 2012)

    The proceedings of the conference.

    2D and 3D computer vision analysis of gaze, gender and age

    Human-Computer Interaction (HCI) has been an active research area for over four decades. Research studies and commercial designs in this area have been largely facilitated by the visual modality, which brings diversified functionality and improved usability to HCI interfaces by employing various computer vision techniques. This thesis explores a number of facial cues, such as gender, age and gaze, through 2D and 3D computer vision analysis. The ultimate aim is to create a natural HCI strategy that can fulfil user expectations, augment user satisfaction and enrich user experience by understanding user characteristics and behaviours. To this end, salient features have been extracted and analysed from 2D and 3D face representations; 3D reconstruction algorithms and their compatible real-world imaging systems have been investigated; and case-study HCI systems have been designed to demonstrate the reliability, robustness and applicability of the proposed methods. More specifically, an unsupervised approach has been proposed to localise eye centres in images and videos accurately and efficiently. This is achieved by utilising two types of geometric features and eye models, complemented by an iris-radius constraint and a selective oriented gradient filter specifically tailored to this modular scheme. The approach resolves challenges such as interfering facial edges, undesirable illumination conditions, head poses, and the presence of facial accessories and makeup. Tested on three publicly available databases (the BioID database, the GI4E database and the Extended Yale Face Database B) and a self-collected database, the method outperforms all compared methods and thus proves to be highly accurate and robust. Based on this approach, a gaze-gesture recognition algorithm has been designed to increase the interactivity of HCI systems by encoding eye saccades into a communication channel, similar to the role of hand gestures.
As well as analysing eye/gaze data that represent user behaviours and reveal user intentions, this thesis also investigates the automatic recognition of user demographics such as gender and age. The Fisher Vector encoding algorithm is employed to construct visual vocabularies as salient features for gender and age classification. Algorithm evaluations on three publicly available databases (the FERET database, the LFW database and the FRCVv2 database) demonstrate the superior performance of the proposed method in both laboratory and unconstrained environments. To achieve enhanced robustness, a two-source photometric stereo method has been introduced to recover surface normals, so that more invariant 3D facial features become available to further boost classification accuracy and robustness. A 2D+3D imaging system has been designed for the construction of a self-collected dataset including 2D and 3D facial data. Experiments show that utilisation of 3D facial features can increase the gender classification rate by up to 6% (on the self-collected dataset) and the age classification rate by up to 12% (on the Photoface database). Finally, two case-study HCI systems, a gaze-gesture-based map browser and a directed advertising billboard, have been designed by adopting all the proposed algorithms together with the fully compatible imaging system. The proposed algorithms naturally ensure that the case-study systems are highly robust to head-pose and illumination variation and achieve excellent real-time performance. Overall, the proposed HCI strategy, enabled by reliably recognised facial cues, can serve to spawn a wide array of innovative systems and bring HCI to a more natural and intelligent state.
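The thesis's own eye-centre localiser combines geometric features, an iris-radius constraint and an oriented gradient filter, none of which are specified in this abstract; as a hedged stand-in, the sketch below implements the closely related means-of-gradients idea, in which the eye centre is the point whose displacement vectors to edge pixels best align with the image gradients (a dark iris surrounded by radial intensity change). The synthetic test image and all names are illustrative.

```python
import numpy as np

def eye_centre(gray):
    """Gradient-based eye-centre localisation: score every candidate pixel
    by how well unit displacement vectors to gradient pixels align with
    the gradient directions there; the best-scoring pixel is the centre."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ys, xs = np.nonzero(mag > 0)          # only pixels with a real gradient
    best, centre = -1.0, (0, 0)
    for cy in range(gray.shape[0]):
        for cx in range(gray.shape[1]):
            dy, dx = ys - cy, xs - cx
            norm = np.hypot(dx, dy)
            norm[norm == 0] = 1.0          # candidate on a gradient pixel
            # Cosine alignment between displacements and gradients.
            dots = (dx * gx[ys, xs] + dy * gy[ys, xs]) / (norm * mag[ys, xs])
            score = np.mean(np.maximum(dots, 0.0) ** 2)
            if score > best:
                best, centre = score, (cy, cx)
    return centre

# Synthetic "iris": dark disc on a bright background, centred at (10, 10).
img = np.full((21, 21), 255.0)
yy, xx = np.mgrid[:21, :21]
img[(yy - 10) ** 2 + (xx - 10) ** 2 <= 25] = 0.0
print(eye_centre(img))  # should land on (10, 10)
```

The brute-force double loop keeps the idea visible; practical implementations vectorize the scoring and add the intensity and radius priors the thesis describes.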

    Entangled Matters: Analogue Futures & Political Pasts

    Theorised as an "ontology of the output", my research project conceptually repurposes media machines in order to activate new or alternate entanglements between historical media artefacts and events. Although the particular circumstances that produced these materials may have changed, the project asks why these analogue media artefacts might still be a matter of concern. What is their relevance for problematising debates within media philosophy today and, by extension, the politics that underscore the operations of the digital? Does the analogue, as I intuit, have the capacity to release history and propose alternate pathways through mediatic time? Case Studies: ARCHIVAL FUTURES considers the missing or 'silent' erasure of 18½ minutes in Watergate Tape No. 342 (1972). TELE-TRANSMISSIONS explores the 14-minute audio transmission produced by the Muirhead K220 Picture Transmitter to relay the image of napalm victim Kim Phuc from Saigon to Tokyo (June 8 1972). RADIOLOGICAL EVENTS examines thirty-three seconds of irradiated film shot at Chernobyl Reactor Unit 4 by the late Soviet filmmaker Vladimir Shevchenko (April 26 1986). This research turns upon a reconsideration of the ontological temporalities of media matter; a concern both in and with time which acknowledges that each of the now-historic machinic artefacts and related case studies has always-already been entangled with the present and coming events of the future. The thesis project as such performs itself as a kind of "tape cut-up" that reorganises, and consequently troubles, the historical record by bringing ostensibly unrelated events into creative juxtaposition with one another. Recording asserts temporality; it is the formal means by which time is engineered, how it is both retroactively repotentialised and prospectively activated.
Recording in effect produces a saturated ontology of time in which the reverberations of past, present, and future elide to become enfolded within the temporal vectors of the artefact.

    Whole-Body Teleoperation of Humanoid Robots

    This thesis investigates systems and tools for teleoperating a humanoid robot. Robot teleoperation is crucial for sending and controlling robots in environments that are dangerous or inaccessible for humans (e.g., disaster-response scenarios, contaminated environments, or extraterrestrial sites). The term teleoperation most commonly refers to direct and continuous control of a robot. In this case, the human operator guides the motion of the robot with her/his own physical motion or through some physical input device. One of the main challenges is to control the robot in a way that guarantees its dynamic balance while trying to follow the human references. In addition, the human operator needs feedback about the state of the robot and its work site through remote sensors in order to comprehend the situation, or feel physically present at the site, and so produce effective robot behaviours. Complications arise when the communication network is non-ideal: the commands from human to robot, together with the feedback from robot to human, can be delayed, and these delays can be very disturbing for the operator, who can no longer teleoperate the robot avatar effectively. Another crucial point when setting up a teleoperation system is the large number of parameters that have to be tuned to control the teleoperated robot effectively; machine learning approaches and stochastic optimisers can be used to automate the learning of some of these parameters.
In this thesis, we propose a teleoperation system that has been tested on the humanoid robot iCub. We used an inertial-technology-based motion-capture suit as the input device to control the humanoid, and a virtual-reality headset connected to the robot's cameras for visual feedback. We first translated the human movements into equivalent robot ones by developing a motion-retargeting approach that achieves human-likeness while trying to ensure the feasibility of the transferred motion. We then implemented a whole-body controller to enable the robot to track the retargeted human motion. The controller was later optimised in simulation to achieve good tracking of the whole-body reference movements, using a multi-objective stochastic optimiser that allowed us to find robust solutions working on the real robot in a few trials. To teleoperate walking motions, we implemented a higher-level teleoperation mode in which the user sends reference commands to the robot with a joystick; this mode is integrated into the teleoperation system, and the user can switch between the two modes. A major problem preventing the deployment of such systems in real applications is the presence of communication delays between the human input and the feedback from the robot: even a few hundred milliseconds of delay can irremediably disturb the operator, let alone a few seconds. To overcome these delays, we introduce a system in which the humanoid robot executes commands before it actually receives them, so that the visual feedback appears synchronised to the operator while the robot is executing commands received in the past. To do so, the robot continuously predicts future commands by querying a machine-learning model trained on past trajectories and conditioned on the last received commands.
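The delay-masking scheme described above can be sketched as follows: a robot-side model is fitted to past command trajectories and then queried for the command a few steps beyond the last one received. The thesis's actual learned model is not specified in this abstract; a plain least-squares autoregressive predictor stands in for it, and the window/horizon values and the 1-DoF sinusoidal command stream are illustrative.

```python
import numpy as np

class CommandPredictor:
    """Robot-side sketch: from the last `window` received commands, predict
    the command `horizon` steps ahead, so the robot can act on commands it
    has not yet received and mask the network delay."""
    def __init__(self, window=5, horizon=3):
        self.window, self.horizon = window, horizon
        self.coef = None

    def fit(self, trajectory):
        """Least-squares fit of AR coefficients mapping a window of past
        commands to the command `horizon` steps after the window's end."""
        X, y = [], []
        for i in range(len(trajectory) - self.window - self.horizon + 1):
            X.append(trajectory[i:i + self.window])
            y.append(trajectory[i + self.window + self.horizon - 1])
        self.coef, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)

    def predict(self, recent):
        """Predict the command `horizon` steps after the last received one."""
        return float(np.array(recent[-self.window:]) @ self.coef)

# Hypothetical 1-DoF command stream (e.g., one retargeted joint angle).
t = np.linspace(0, 10, 200)
commands = np.sin(t)
pred = CommandPredictor(window=5, horizon=3)
pred.fit(commands[:150])                       # train on past trajectory
ahead = pred.predict(list(commands[145:150]))  # query during operation
print(abs(ahead - commands[152]) < 0.05)       # True: close to the true future command
```

In the full system the prediction would feed the whole-body controller, so the visual feedback returned to the operator appears synchronised despite the round-trip delay.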