
    Acquisition and distribution of synergistic reactive control skills

    Learning from demonstration is an efficient way to attain a new skill. In the context of autonomous robots, using a demonstration to teach a robot accelerates the learning process significantly. It helps to identify feasible solutions as starting points for future exploration, or to avoid actions that lead to failure. But the acquisition of pertinent observations is predicated on first segmenting the data into meaningful sequences. These segments form the basis for learning models capable of recognising future actions and reconstructing the motion to control a robot. Furthermore, learning algorithms for generative models are generally not tuned to produce stable trajectories and suffer from parameter redundancy for high-degree-of-freedom robots. This thesis addresses these issues by first investigating algorithms, based on dynamic programming and mixture models, for segmentation sensitivity and recognition accuracy on human motion capture data sets of repetitive and categorical motion classes. A stability analysis of the non-linear dynamical systems derived from the resultant mixture model representations aims to ensure that any trajectories converge to the intended target motion as observed in the demonstrations. Finally, these concepts are extended to humanoid robots by deploying a factor analyser for each mixture model component and coordinating the structure into a low-dimensional representation of the demonstrated trajectories. This representation can be constructed once a correspondence map is learned between the demonstrator and the robot for joint space actions. Applying these algorithms for demonstrating movement skills to robots is a further step towards autonomous incremental robot learning.
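The dynamic-programming side of the segmentation step can be sketched as a minimal change-point search. This toy version (an illustration of the idea, not the thesis's algorithm) scores each candidate segment by its within-segment variance and picks the split with the lowest total cost:

```python
import numpy as np

def segment_cost(x, i, j):
    """Cost of treating x[i:j] as one segment: sum of squared
    deviations from the segment mean (lower = more homogeneous)."""
    seg = x[i:j]
    return float(np.sum((seg - seg.mean()) ** 2))

def dp_segment(x, k):
    """Split x into k contiguous segments minimising total
    within-segment variance; returns the k-1 interior boundaries."""
    n = len(x)
    # cost[m][j]: best cost of splitting x[:j] into m+1 segments
    cost = np.full((k, n + 1), np.inf)
    back = np.zeros((k, n + 1), dtype=int)
    for j in range(1, n + 1):
        cost[0][j] = segment_cost(x, 0, j)
    for m in range(1, k):
        for j in range(m + 1, n + 1):
            for i in range(m, j):
                c = cost[m - 1][i] + segment_cost(x, i, j)
                if c < cost[m][j]:
                    cost[m][j] = c
                    back[m][j] = i
    bounds, j = [], n
    for m in range(k - 1, 0, -1):
        j = int(back[m][j])
        bounds.append(j)
    return sorted(bounds)

# Two motion "regimes": a held low posture followed by a held high one.
signal = np.concatenate([np.zeros(20), np.full(20, 5.0)])
print(dp_segment(signal, 2))  # → [20]
```

Real motion capture data is high-dimensional and the number of segments is unknown in advance; a mixture model adds the probabilistic machinery for recognition on top of a segmentation like this.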

    Review of Anthropomorphic Head Stabilisation and Verticality Estimation in Robots

    In many walking, running, flying, and swimming animals, including mammals, reptiles, and birds, the vestibular system plays a central role in verticality estimation and is often associated with a head stabilisation (in rotation) behaviour. Head stabilisation, in turn, subserves gaze stabilisation, postural control, visual-vestibular information fusion and spatial awareness via the active establishment of a quasi-inertial frame of reference. Head stabilisation helps animals to cope with the computational consequences of angular movements that complicate the reliable estimation of the vertical direction. We suggest that this strategy could also benefit free-moving robotic systems, such as locomoting humanoid robots, which are typically equipped with inertial measurement units. Free-moving robotic systems could gain the full benefits of inertial measurements if the measurement units are placed on independently orientable platforms, such as human-like heads. We illustrate these benefits by analysing recent humanoid robot designs and control approaches.
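A minimal sketch of the estimation problem involved: inertial verticality estimation typically fuses a drifting gyroscope with a noisy but unbiased gravity reference, as in the 1-D complementary filter below (an illustrative fusion scheme with made-up gains, not one taken from the reviewed robots):

```python
def complementary_filter(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    """Fuse gyro rates (rad/s) and accelerometer-derived tilt (rad) into
    a pitch estimate: integrate the gyro for short-term accuracy, and
    let the gravity reference (weight 1 - alpha) cancel long-term drift."""
    angle = accel_angles[0]              # initialise from gravity
    estimates = []
    for rate, acc in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + rate * dt) + (1 - alpha) * acc
        estimates.append(angle)
    return estimates

# A stationary head with a biased gyro (0.05 rad/s of pure drift) and a
# clean gravity reading of 0 rad: raw integration would drift to 0.5 rad
# over these 1000 steps, but the filter stays within a few hundredths.
est = complementary_filter([0.05] * 1000, [0.0] * 1000)
print(round(est[-1], 3))
```

Placing such a unit on an orientable head lets the platform be actively stabilised, so the gravity reference is disturbed far less by the angular movements of locomotion.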

    Gaze control modelling and robotic implementation

    Although we have the impression that we can process the entire visual field in a single fixation, in reality we would be unable to fully process the information outside of foveal vision if we were unable to move our eyes. Because of acuity limitations in the retina, eye movements are necessary for processing the details of the array. Our ability to discriminate fine detail drops off markedly outside of the fovea in the parafovea (extending out to about 5 degrees on either side of fixation) and in the periphery (everything beyond the parafovea). While we are reading or searching a visual array for a target or simply looking at a new scene, our eyes move every 200-350 ms. These eye movements serve to move the fovea (the high resolution part of the retina encompassing 2 degrees at the centre of the visual field) to an area of interest in order to process it in greater detail. During the actual eye movement (or saccade), vision is suppressed and new information is acquired only during the fixation (the period of time when the eyes remain relatively still). While it is true that we can move our attention independently of where the eyes are fixated, it does not seem to be the case in everyday viewing. The separation between attention and fixation is often attained in very simple tasks; however, in tasks like reading, visual search, and scene perception, covert attention and overt attention (the exact eye location) are tightly linked. Because eye movements are essentially motor movements, it takes time to plan and execute a saccade. In addition, the end-point is pre-selected before the beginning of the movement. There is considerable evidence that the nature of the task influences eye movements. Depending on the task, there is considerable variability both in terms of fixation durations and saccade lengths. It is possible to outline five separate movement systems that put the fovea on a target and keep it there. 
Each of these movement systems shares the same effector pathway—the three bilateral groups of oculomotor neurons in the brain stem. These five systems include three that keep the fovea on a visual target in the environment and two that stabilize the eye during head movement. Saccadic eye movements shift the fovea rapidly to a visual target in the periphery. Smooth pursuit movements keep the image of a moving target on the fovea. Vergence movements move the eyes in opposite directions so that the image is positioned on both foveae. Vestibulo-ocular movements hold images still on the retina during brief head movements and are driven by signals from the vestibular system. Optokinetic movements hold images still on the retina during sustained head rotation and are driven by visual stimuli. All eye movements but vergence movements are conjugate: each eye moves the same amount in the same direction. Vergence movements are disconjugate: the eyes move in different directions and sometimes by different amounts. Finally, there are times that the eye must stay still in the orbit so that it can examine a stationary object. Thus, a sixth system, the fixation system, holds the eye still during intent gaze. This requires active suppression of eye movement. Vision is most accurate when the eyes are still. When we look at an object of interest a neural system of fixation actively prevents the eyes from moving. The fixation system is not as active when we are doing something that does not require vision, for example, mental arithmetic. Our eyes explore the world in a series of active fixations connected by saccades. The purpose of the saccade is to move the eyes as quickly as possible. Saccades are highly stereotyped; they have a standard waveform with a single smooth increase and decrease of eye velocity. Saccades are extremely fast, occurring within a fraction of a second, at speeds up to 900°/s. Only the distance of the target from the fovea determines the velocity of a saccadic eye movement. 
We can change the amplitude and direction of our saccades voluntarily but we cannot change their velocities. Ordinarily there is no time for visual feedback to modify the course of the saccade; corrections to the direction of movement are made in successive saccades. Only fatigue, drugs, or pathological states can slow saccades. Accurate saccades can be made not only to visual targets but also to sounds, tactile stimuli, memories of locations in space, and even verbal commands (“look left”). The smooth pursuit system keeps the image of a moving target on the fovea by calculating how fast the target is moving and moving the eyes accordingly. The system requires a moving stimulus in order to calculate the proper eye velocity. Thus, a verbal command or an imagined stimulus cannot produce smooth pursuit. Smooth pursuit movements have a maximum velocity of about 100°/s, much slower than saccades. The saccadic and smooth pursuit systems have very different central control systems. A coherent integration of these different eye movements, together with the other movements, essentially corresponds to a gating-like effect on the brain areas involved. Gaze control can thus be seen as one system that decides which action should be enabled and which should be inhibited, and another that improves the performance of an action while it is executed. It follows that the underlying guiding principle of gaze control is the kind of stimuli presented to the system, which in turn links to the task that is going to be executed. This thesis aims at validating the strong relation between actions and gaze. In the first part, a gaze controller has been studied and implemented on a robotic platform in order to understand the specific features of prediction and learning shown by the biological system. Integrating these eye movements raises the problem of selecting the best action when a new stimulus is presented. 
The action selection problem is solved by the basal ganglia, brain structures that react to the different salience values of the environment. In the second part of this work, gaze behaviour has been studied during a locomotion task. The final objective is to show how different tasks, such as locomotion, shape the salience values that drive the gaze.
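The amplitude-velocity relationship described above (saccade velocity determined by target eccentricity, saturating near 900°/s) is often modelled with a saturating-exponential "main sequence". The constants in this sketch are illustrative, not fitted values:

```python
import math

def peak_saccade_velocity(amplitude_deg, v_max=900.0, c=14.0):
    """Main-sequence model: peak velocity rises with saccade amplitude
    and saturates at v_max. v_max (deg/s) and c (deg) are illustrative
    constants here, not values fitted to data."""
    return v_max * (1.0 - math.exp(-amplitude_deg / c))

# Small saccades are slow; large ones approach the velocity ceiling.
for amp in (2.0, 20.0, 60.0):
    print(amp, round(peak_saccade_velocity(amp), 1))
```

Because the velocity profile is fully determined by the amplitude, a gaze controller only needs to choose where to look; how fast the eye gets there is stereotyped.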

    Comprehensive review on controller for leader-follower robotic system

    This paper presents a comprehensive review of the leader-follower robotics system. The aim of this paper is to identify and elaborate on the current trends in swarm robotic systems, leader-follower systems, and multi-agent systems. Another part of this review focuses on identifying the trend of controllers utilised by previous researchers in the leader-follower system; the controllers most commonly applied are adaptive and non-linear controllers. The paper also explores the subject of study or system used during the research, which normally employs multi-robot, multi-agent, space-flying, reconfigurable, multi-legged or unmanned systems. Another aspect of this paper concentrates on the topology employed by the researchers when they conducted simulation or experimental studies.
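A leader-follower controller in its simplest form is proportional: the follower steers to cancel the error between the actual and desired separation. The 1-D sketch below is a generic illustration with made-up gains, not a controller from the surveyed papers:

```python
def follow_step(leader_pos, follower_pos, gap=1.0, k_p=0.5):
    """One step of a proportional leader-follower law on a line: the
    follower moves to cancel the error between the actual separation
    and the desired gap. The gain k_p is an illustrative value."""
    error = (leader_pos - follower_pos) - gap
    return follower_pos + k_p * error

leader, follower = 0.0, -3.0
for _ in range(30):
    leader += 0.1                       # leader advances at constant speed
    follower = follow_step(leader, follower)
print(round(leader - follower, 2))      # → 1.1
```

Note the small steady-state lag (1.1 m instead of the 1.0 m gap): a pure proportional law trails a moving leader, which is one reason the surveyed works favour adaptive and non-linear controllers.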

    Physical Analysis of Handshaking Between Humans: Mutual Synchronisation and Social Context

    One very popular form of interpersonal interaction used in various situations is the handshake (HS), an act that is both physical and social. This article aims to demonstrate that the paradigm of synchrony, which refers to the psychology of individuals' temporal movement coordination, can also be considered in handshaking. For this purpose, the physical features of the human HS are investigated in two different social situations: greeting and consolation. The duration and frequency of the HS and the force of the grip have been measured and compared using a prototype of a wearable system equipped with several sensors. The results show that an HS can be decomposed into four phases and that, after a short physical contact, a synchrony emerges between the two persons shaking hands. A statistical analysis conducted on 31 persons showed that, between the two contexts, there is a significant difference in the duration of the HS, but the frequency of motion and the time needed to synchronise were not affected by the context of the interaction.
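One simple way to quantify when synchrony emerges is the lag at which the two participants' sensor signals correlate most strongly. The sketch below is an illustrative measure on synthetic grip-force pulses, not the paper's analysis pipeline:

```python
import math

def lag_of_max_correlation(a, b, max_lag):
    """Return the shift of b (in samples) that maximises the raw
    correlation with a - a simple stand-in for a synchrony measure."""
    n = len(a)
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = sum(a[i] * b[i + lag] for i in range(n) if 0 <= i + lag < n)
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# Synthetic grip-force pulses from two wearable sensors: the second
# participant squeezes 5 samples after the first.
first = [math.exp(-((i - 50) / 10.0) ** 2) for i in range(200)]
second = [math.exp(-((i - 55) / 10.0) ** 2) for i in range(200)]
print(lag_of_max_correlation(first, second, 20))  # → 5
```

Tracking this lag over the course of an HS would show it shrinking as the two hands fall into a common rhythm.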

    New approaches to the emerging social neuroscience of human-robot interaction

    Prehistoric art, like the Venus of Willendorf sculpture, shows that we have always looked for ways to distil fundamental human characteristics and capture them in physically embodied representations of the self. Recently, this undertaking has gained new momentum through the introduction of robots that resemble humans in their shape and their behaviour. These social robots are envisioned to take on important roles: alleviate loneliness, support vulnerable children and serve as helpful companions for the elderly. However, to date, few commercially available social robots are living up to these expectations. Given their importance for an ever older and more socially isolated society, rigorous research at the intersection of psychology, social neuroscience and human-robot interaction is needed to determine to what extent mechanisms active during human-human interaction can be co-opted when we encounter social robots. This thesis takes an anthropocentric approach to answering the question of how socially motivated we are to interact with humanoid robots. Across three empirical and one theoretical chapter, I use self-report, behavioural and neural measures relevant to the study of interactions with robots to address this question. With the Social Motivation Theory of Autism as a point of departure, the first empirical chapter (Chapter 3) investigates the relevance of interpersonal synchrony for human-robot interaction. This chapter reports a null effect: participants did not find a robot that synchronised its movements with them on a drawing task more likeable, nor were they motivated to ask it more questions in a semi-structured interaction scenario. As this chapter relies heavily on self-report as its main outcome measure, Chapter 4 addresses this limitation by adapting an established behavioural paradigm for the study of human-robot interaction. 
This chapter shows that a failure to conceptually extend an effect in the field of social attentional capture calls for a different approach when seeking to adapt paradigms for HRI. Chapter 5 serves as a moment of reflection on the current state-of-the-art research at the intersection of neuroscience and human-robot interaction. Here, I argue that the future of HRI research will rely on interaction studies with mobile brain imaging systems (like functional near-infrared spectroscopy) that allow data collection during embodied encounters with social robots. However, going forward, the field should slowly and carefully move outside of the lab and into real situations with robots. As the previous chapters have established, well-known effects have to be replicated before they are implemented for robots, and before they are taken out of the lab into real life. The final empirical chapter (Chapter 6) takes the first step of this proposed slow approach: in addition to establishing the detection rate of a mobile fNIRS system in comparison to fMRI, this chapter contributes a novel way of digitising optode positions by means of photogrammetry. In the final chapter of this thesis, I highlight the main lessons learned conducting studies with social robots. I propose an updated roadmap which takes into account the problems raised in this thesis and emphasise the importance of incorporating more open science practices going forward. Various tools that emerged out of the open science movement will be invaluable for researchers working on this exciting, interdisciplinary endeavour.

    Advances in Robotics, Automation and Control

    The book presents an excellent overview of recent developments in the different areas of Robotics, Automation and Control. Through its 24 chapters, this book presents topics related to control and robot design; it also introduces new mathematical tools and techniques devoted to improving system modeling and control. An important point is the use of rational agents and heuristic techniques to cope with the computational complexity required for controlling complex systems. Through this book, we also find navigation and vision algorithms, automatic handwriting comprehension and speech recognition systems that will be included in the next generation of productive systems developed by man.

    Object Handovers: a Review for Robotics

    This article surveys the literature on human-robot object handovers. A handover is a collaborative joint action in which one agent, the giver, gives an object to another agent, the receiver. The physical exchange starts when the receiver first contacts the object held by the giver and ends when the giver fully releases the object to the receiver. However, important cognitive and physical processes begin before the physical exchange, including initiating implicit agreement with respect to the location and timing of the exchange. From this perspective, we structure our review into the two main phases delimited by the aforementioned events: 1) a pre-handover phase, and 2) the physical exchange. We focus our analysis on the two actors (giver and receiver) and report the state of the art of robotic givers (robot-to-human handovers) and robotic receivers (human-to-robot handovers). We report a comprehensive list of qualitative and quantitative metrics commonly used to assess the interaction. While focusing our review on the cognitive level (e.g., prediction, perception, motion planning, learning) and the physical level (e.g., motion, grasping, grip release) of the handover, we also briefly discuss the concepts of safety, social context, and ergonomics. We compare the behaviours displayed during human-to-human handovers to the state of the art of robotic assistants, and identify the major areas of improvement for robotic assistants to reach performance comparable to human interactions. Finally, we propose a minimal set of metrics that should be used in order to enable a fair comparison among the approaches.
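The two phases and their delimiting events map naturally onto a small state machine. The sketch below is a hypothetical encoding of the review's terminology, not code from any surveyed system:

```python
from enum import Enum, auto

class Phase(Enum):
    PRE_HANDOVER = auto()       # approach, signalling, implicit agreement
    PHYSICAL_EXCHANGE = auto()  # both agents hold the object
    DONE = auto()               # giver has fully released the object

def next_phase(phase, event):
    """Advance through the two phases using the two delimiting events:
    the receiver's first contact and the giver's full release."""
    if phase is Phase.PRE_HANDOVER and event == "receiver_contact":
        return Phase.PHYSICAL_EXCHANGE
    if phase is Phase.PHYSICAL_EXCHANGE and event == "giver_release":
        return Phase.DONE
    return phase                # any other event leaves the phase unchanged

p = Phase.PRE_HANDOVER
for e in ("reach", "receiver_contact", "giver_release"):
    p = next_phase(p, e)
print(p.name)  # → DONE
```

Most of the cognitive work the review discusses (prediction, perception, motion planning) happens while the machine is still in `PRE_HANDOVER`; grip-force modulation and release timing govern the `PHYSICAL_EXCHANGE` transition.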

    Movement primitives as a robotic tool to interpret trajectories through learning-by-doing

    Articulated movements are fundamental in many human and robotic tasks. While humans can learn and generalise arbitrarily long sequences of movements, and in particular can optimise them to fit the constraints and features of their body, robots are often programmed to execute point-to-point, precise but fixed patterns. This study proposes a new approach to interpreting and reproducing articulated and complex trajectories as a set of known robot-based primitives. Instead of achieving accurate reproductions, the proposed approach aims at interpreting data in an agent-centred fashion, according to an agent's primitive movements. The method improves the accuracy of a reproduction with an incremental process that first seeks a rough approximation by capturing the most essential features of a demonstrated trajectory. Observing the discrepancy between the demonstrated and reproduced trajectories, the process then proceeds with incremental decompositions and new searches in sub-optimal parts of the trajectory. The aim is to achieve an agent-centred interpretation and progressive learning that fits the robot's capabilities in the first place, as opposed to a data-centred decomposition analysis. Tests on both geometric and human-generated trajectories reveal that the use of the agent's own primitives results in remarkable robustness and generalisation properties of the method. In particular, because trajectories are understood and abstracted by means of agent-optimised primitives, the method has two main features: 1) reproduced trajectories are general and represent an abstraction of the data; 2) the algorithm is capable of reconstructing highly noisy or corrupted data without pre-processing, thanks to an implicit and emergent noise suppression and feature detection. This study suggests a novel bio-inspired approach to interpreting, learning and reproducing articulated movements and trajectories. 
Possible applications include drawing, writing, movement generation, object manipulation, and other tasks where the performance requires human-like interpretation and generalisation capabilities.
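The incremental "approximate first, then refine where the discrepancy is largest" loop is reminiscent of the Ramer-Douglas-Peucker scheme. The sketch below uses straight-line segments as a stand-in for the robot-based primitives described above:

```python
def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    norm = (dx * dx + dy * dy) ** 0.5
    if norm == 0.0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    return abs(dy * px - dx * py + bx * ay - by * ax) / norm

def decompose(points, tol):
    """Recursively approximate a trajectory with line primitives,
    splitting at the sample that deviates most (Ramer-Douglas-Peucker)."""
    if len(points) < 3:
        return list(points)
    worst_i, worst_d = 0, 0.0
    for i in range(1, len(points) - 1):
        d = point_line_distance(points[i], points[0], points[-1])
        if d > worst_d:
            worst_i, worst_d = i, d
    if worst_d <= tol:
        return [points[0], points[-1]]    # one primitive suffices
    left = decompose(points[: worst_i + 1], tol)
    right = decompose(points[worst_i:], tol)
    return left[:-1] + right              # merge, dropping the shared joint

# An L-shaped stroke: two line primitives should be enough.
path = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (2.0, 1.0), (2.0, 2.0)]
print(decompose(path, tol=0.1))  # → [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0)]
```

The study's method differs in a key way: it decomposes with respect to the agent's own optimised primitives rather than geometric segments, which is what yields the agent-centred abstraction and the implicit noise suppression.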