
    Can the Archaeology of Manual Specialization Tell Us Anything About Language Evolution? A Survey of the State of Play

    In this review and position paper we explore the neural substrates for manual specialization and their possible connection with language and speech. We focus on two contrasting hypotheses of the origins of language and manual specialization: the language-first scenario and the tool-use-first scenario. Each one makes specific predictions about hand use in non-human primates, as well as about the necessity of an association between speech adaptations and population-level right-handedness in the archaeological and fossil records. The concept of handedness is reformulated for archaeologists in terms of manual role specialization, using Guiard's model of asymmetric bimanual coordination. This focuses our attention on skilled bimanual tasks in which both upper limbs play complementary roles. We review work eliciting non-human primate hand preferences in co-ordinated bimanual tasks, as well as relevant archaeological data for estimating the presence or absence of a population-level bias towards the right hand as the manipulator, both in extinct hominin species and in the early prehistory of our own species.

    Robot skill learning through human demonstration and interaction

    Nowadays robots are increasingly involved in more complex and less structured tasks, so it is highly desirable to develop new approaches for fast robot skill acquisition. This research aims to develop an overall framework for robot skill learning through human demonstration and interaction. Through low-level demonstration and interaction with humans, the robot can learn basic skills, which are treated as primitive actions. In high-level learning, complex skills demonstrated by the human can be automatically translated into skill scripts that are executed by the robot. This dissertation summarizes my major research activities in robot skill learning. First, a framework for Programming by Demonstration (PbD) with reinforcement learning for human-robot collaborative manipulation tasks is described. With this framework, the robot can learn low-level skills, such as collaborating with a human to lift a table, successfully and efficiently. Second, to develop a high-level skill acquisition system, we explore the use of a 3D sensor to recognize human actions. A Kinect-based action recognition system is implemented which considers both object/action dependencies and sequential constraints. Third, we extend the action recognition framework by fusing information from multimodal sensors, which allows fine assembly actions to be recognized. Fourth, a Portable Assembly Demonstration (PAD) system is built which can automatically generate skill scripts from human demonstration. Each skill script includes the object type, the tool, the action used, and the assembly state. Finally, the generated skill scripts are executed by a dual-arm robot. The proposed framework was experimentally evaluated.
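    The abstract describes the skill script only at the level of its fields (object type, tool, action, assembly state). A minimal sketch of one plausible representation, with hypothetical field and step names that are not taken from the dissertation, could look like this in Python:

    from dataclasses import dataclass

    @dataclass
    class SkillStep:
        # One primitive step of a demonstrated assembly (hypothetical schema).
        obj: str     # object type being manipulated, e.g. "bracket"
        tool: str    # tool in use, e.g. "screwdriver" (or "hand")
        action: str  # recognized action primitive, e.g. "fasten"
        state: str   # assembly state reached after the step

    # A skill script is then an ordered list of steps for the robot to replay.
    script = [
        SkillStep(obj="base_plate", tool="hand", action="place", state="plate_down"),
        SkillStep(obj="bracket", tool="hand", action="align", state="bracket_aligned"),
        SkillStep(obj="screw", tool="screwdriver", action="fasten", state="assembled"),
    ]

    for step in script:
        print(f"{step.action} {step.obj} with {step.tool} -> {step.state}")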

    A Framework of Hybrid Force/Motion Skills Learning for Robots

    Human factors and a human-centred design philosophy are highly desirable in today’s robotics applications such as human-robot interaction (HRI). Several studies have shown that endowing robots with human-like interaction skills can not only make them more likeable but also improve their performance. In particular, skill transfer by imitation learning can increase the usability and acceptability of robots for users without computer programming skills. In fact, besides positional information, the muscle stiffness of the human arm and the contact force with the environment also play important roles in understanding and generating human-like manipulation behaviours for robots, e.g., in physical HRI and tele-operation. To this end, we present a novel robot learning framework based on Dynamic Movement Primitives (DMPs) that takes into consideration both the positional and the contact-force profiles for human-robot skill transfer. In contrast to conventional methods involving only motion information, the proposed framework combines two sets of DMPs, which model the motion trajectory and the force variation of the robot manipulator, respectively. A hybrid force/motion control approach is thus taken to ensure accurate tracking and reproduction of the desired positional and force motor skills. Meanwhile, in order to simplify the control system, a momentum-based force observer is applied to estimate the contact force instead of employing force sensors. To deploy the learned motion-force manipulation skills in a broader variety of tasks, the generalization of these DMP models to new situations is also considered. Comparative experiments have been conducted using a Baxter robot to verify the effectiveness of the proposed learning framework in real-world scenarios such as cleaning a table.
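    The abstract does not spell out the DMP equations, so the following is only a generic single-degree-of-freedom discrete DMP sketch in the standard form (the gains, basis-width heuristic, and variable names are assumptions, not the paper's); the framework above would pair one such set of DMPs for position with a second set for the observer-estimated contact force:

    import numpy as np

    def learn_dmp(demo, dt, n_basis=20, alpha_z=25.0, alpha_x=1.0):
        # Fit one discrete DMP to a demonstrated 1-D trajectory.
        beta_z = alpha_z / 4.0                      # critically damped spring-damper
        tau = len(demo) * dt                        # movement duration
        y0, g = demo[0], demo[-1]
        yd = np.gradient(demo, dt)
        ydd = np.gradient(yd, dt)
        x = np.exp(-alpha_x * np.arange(len(demo)) * dt / tau)   # canonical phase
        # Forcing term that would reproduce the demonstration exactly.
        f_target = tau**2 * ydd - alpha_z * (beta_z * (g - demo) - tau * yd)
        c = np.exp(-alpha_x * np.linspace(0, 1, n_basis))        # basis centres
        h = n_basis**1.5 / c                                     # basis widths
        psi = np.exp(-h * (x[:, None] - c) ** 2)
        s = x * (g - y0)                            # forcing-term scaling
        # Locally weighted regression for each basis weight.
        w = (psi * (s * f_target)[:, None]).sum(0) / \
            ((psi * (s**2)[:, None]).sum(0) + 1e-10)
        return w, c, h, y0, g, tau, alpha_z, beta_z, alpha_x

    def rollout(params, dt, steps):
        # Integrate the DMP forward with Euler steps.
        w, c, h, y0, g, tau, alpha_z, beta_z, alpha_x = params
        y, z, x = y0, 0.0, 1.0
        traj = []
        for _ in range(steps):
            psi = np.exp(-h * (x - c) ** 2)
            f = (psi @ w) / (psi.sum() + 1e-10) * x * (g - y0)
            z += dt * (alpha_z * (beta_z * (g - y) - z) + f) / tau
            y += dt * z / tau
            x += dt * (-alpha_x * x) / tau
            traj.append(y)
        return np.array(traj)

    # Toy demonstration: a smooth reach from 0 to 1 over one second.
    dt = 0.01
    demo = (1 - np.cos(np.pi * np.linspace(0, 1, 100))) / 2
    traj = rollout(learn_dmp(demo, dt), dt, 100)
    print(f"endpoint error: {abs(traj[-1] - demo[-1]):.4f}")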

    Human-centred design methods: developing scenarios for robot assisted play informed by user panels and field trials

    This article describes the user-centred development of play scenarios for robot-assisted play, as part of the multidisciplinary IROMEC project, which develops a novel robotic toy for children with special needs. The project investigates how robotic toys can become social mediators, encouraging children with special needs to discover a range of play styles, from solitary to collaborative play (with peers, carers/teachers, parents, etc.). This article explains the developmental process of constructing relevant play scenarios for children with different special needs. Results are presented from consultation with a panel of experts (therapists, teachers, parents) who advised on the play needs of the various target user groups and who helped investigate how robotic toys could be used as a play tool to assist in the children’s development. Examples from experimental investigations are provided which have informed the development of scenarios throughout the design process. We conclude by pointing out the potential benefit of this work to a variety of research projects and applications involving human–robot interaction.

    Social attitudes modulate automatic imitation

    In naturalistic interpersonal settings, mimicry or ‘automatic imitation’ generates liking, affiliation, cooperation and other positive social attitudes. The purpose of this study was to find out whether the relationship between social attitudes and mimicry is bidirectional: do social attitudes have a direct and specific effect on mimicry? Participants were primed with pro-social, neutral or anti-social words in a scrambled sentence task. They were then tested for mimicry using a stimulus-response compatibility procedure. In this procedure, participants were required to perform a pre-specified movement (e.g. opening their hand) on presentation of a compatible (open) or incompatible (close) hand movement. Reaction time data were collected using electromyography (EMG), and the magnitude of the mimicry/automatic imitation effect was calculated by subtracting reaction times on compatible trials from those on incompatible trials. Pro-social priming produced a larger automatic imitation effect than anti-social priming, indicating that the relationship between mimicry and social attitudes is bidirectional, and that social attitudes have a direct and specific effect on the tendency to imitate behavior without intention or conscious awareness.
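    The effect computation described above is a simple difference score; a toy illustration follows (the numbers are invented, not the study's data):

    import numpy as np

    # Hypothetical EMG-derived reaction times in ms for one participant.
    rt_compatible = np.array([312.0, 298.5, 305.2, 321.8, 290.4])
    rt_incompatible = np.array([345.1, 339.7, 352.3, 330.9, 348.6])

    # Automatic imitation effect: incompatible minus compatible mean RT.
    effect_ms = rt_incompatible.mean() - rt_compatible.mean()
    print(f"automatic imitation effect: {effect_ms:.1f} ms")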

    Activation of cerebellum and basal ganglia during the observation and execution of manipulative actions

    Studies on action observation have mostly described the activation of a network of cortical areas, while less investigation has focused specifically on the activation and role of subcortical nodes. In the present fMRI study, we investigated the recruitment of the cerebellum and basal ganglia during the execution and observation of object manipulation performed with the right hand. The observation conditions consisted of: (a) observation of manipulative actions; (b) observation of sequences of random finger movements. In the execution conditions, participants had to perform the same actions or movements as in (a) and (b), respectively. The results of a conjunction analysis showed significant shared activations during both observation and execution of manipulation in several subcortical structures, including: (1) cerebellar lobules V, VI, crus I, VIIIa and VIIIb (bilaterally); (2) the globus pallidus (bilaterally) and left subthalamic nucleus; (3) the red nucleus (bilaterally) and left thalamus. These findings support the hypothesis that the action observation/execution network also involves subcortical structures, such as the cerebellum and basal ganglia, forming an integrated network. This suggests possible mechanisms, involving these subcortical structures, underlying the learning of new motor skills through action observation and imitation.
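    The conjunction analysis mentioned above looks for voxels that respond under both observation and execution; a toy minimum-statistic version on synthetic t-maps (a generic illustration, not the study's actual analysis pipeline or threshold) might look like:

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic voxelwise t-maps for the two contrasts.
    t_observe = rng.normal(loc=2.0, size=(4, 4, 4))
    t_execute = rng.normal(loc=2.0, size=(4, 4, 4))

    # Minimum-statistic conjunction: a voxel counts as shared activation
    # only if it survives the threshold in BOTH contrasts.
    t_crit = 3.1
    shared = np.minimum(t_observe, t_execute) > t_crit
    print(f"{int(shared.sum())} voxels active in both observation and execution")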

    The maker not the tool: The cognitive significance of great ape manual skills

    Tool use by chimpanzees has attracted disproportionate attention among primatologists, because of an understandable wish to understand the evolutionary origins of hominin tool use. In archaeology and paleoanthropology, a focus on made objects is inevitable: there is nothing else to study. However, it is evidently the object-directed manual skills enabling the objects to be made that are critical in understanding the evolutionary origins of stone-tool manufacture. In this chapter I review object-directed manual skills in living great apes, making comparison where possible with hominin abilities that can be inferred from the archaeological record. To this end, ‘translations’ of terminology between the research traditions are offered. Much of the evidence comes from observations of apes gathering plants that present physical problems for handling and consumption, in addition to the patchier data from tool use in captivity and the field. The living great apes, like ourselves, build up novel hierarchical structures involving regular sequences of elementary actions, showing co-ordinated manual role differentiation, in modular organizations with the option of iterating subroutines. Further, great apes appear able to use imitation of skilled practitioners as one source of information for this process, implying some ability to ‘see’ below the surface level of action and understand the motor planning of other individuals; however, that process does not necessarily involve understanding cause-and-effect or the intentions of other individuals. Finally, I consider whether a living non-human ape could effectively knap stone, and if not, what competence is lacking.

    Learning object, grasping and manipulation activities using hierarchical HMMs

    This article presents a probabilistic algorithm for representing and learning complex manipulation activities performed by humans in everyday life. The work builds on the multi-level Hierarchical Hidden Markov Model (HHMM) framework, which allows the decomposition of longer-term complex manipulation activities into layers of abstraction whose building blocks are simpler action modules called action primitives. In this way, human task knowledge can be synthesised in a compact, effective representation suitable, for instance, for subsequent transfer to a robot for imitation. The main contribution is the use of a robust framework capable of dealing with the uncertainty and incomplete data inherent to these activities, and the ability to represent behaviours at multiple levels of abstraction for enhanced task generalisation. Activity data from 3D video sequencing of human manipulation of different objects handled in everyday life is used for evaluation. A comparison with a mixed generative-discriminative hybrid model, HHMM/SVM (support vector machine), is also presented to highlight the benefit of the proposed approach against comparable state-of-the-art techniques.
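    A full HHMM is beyond a short sketch, but its building block at every layer is the standard HMM forward recursion over lower-level labels. The toy example below (all states, symbols and probabilities are invented, not the paper's learned models) recognizes an activity by comparing per-activity sequence likelihoods over action primitives:

    import numpy as np

    def forward_loglik(obs, pi, A, B):
        # Log-likelihood of a symbol sequence under one HMM
        # (forward algorithm with per-step normalization).
        alpha = pi * B[:, obs[0]]
        loglik = np.log(alpha.sum())
        alpha /= alpha.sum()
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]
            s = alpha.sum()
            loglik += np.log(s)
            alpha /= s
        return loglik

    # Action-primitive alphabet (hypothetical): reach=0, grasp=1, pour=2.
    models = {
        "pour_drink": (np.array([1.0, 0.0]),                   # initial
                       np.array([[0.6, 0.4], [0.1, 0.9]]),     # transitions
                       np.array([[0.5, 0.4, 0.1],              # emissions
                                 [0.1, 0.2, 0.7]])),
        "stack_cups": (np.array([0.5, 0.5]),
                       np.array([[0.7, 0.3], [0.3, 0.7]]),
                       np.array([[0.6, 0.3, 0.1],
                                 [0.5, 0.4, 0.1]])),
    }

    seq = [0, 1, 2, 2]  # reach, grasp, pour, pour
    best = max(models, key=lambda name: forward_loglik(seq, *models[name]))
    print("recognized activity:", best)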

    Towards gestural understanding for intelligent robots

    Fritsch JN. Towards gestural understanding for intelligent robots. Bielefeld: Universität Bielefeld; 2012.
    A strong driving force of scientific progress in the technical sciences is the quest for systems that assist humans in their daily life and make it easier and more enjoyable. Nowadays smartphones are probably the most typical instances of such systems. Another class of systems receiving increasing attention is intelligent robots. Instead of offering a smartphone touch screen for selecting actions, these systems are intended to offer a more natural human-machine interface to their users. Out of the large range of actions performed by humans, gestures performed with the hands play a very important role, especially when humans interact with their direct surroundings, e.g., pointing to an object or manipulating it. Consequently, a robot has to understand such gestures to offer an intuitive interface, and gestural understanding is therefore a key capability on the way to intelligent robots. This book deals with vision-based approaches to gestural understanding. Over the past two decades this has been an intensive field of research, resulting in a variety of algorithms for analyzing human hand motions. Following a categorization of different gesture types and a review of other sensing techniques, the design of vision systems that achieve hand gesture understanding for intelligent robots is analyzed. For each of the individual algorithmic steps – hand detection, hand tracking, and trajectory-based gesture recognition – a separate Chapter introduces common techniques and algorithms and provides example methods. The resulting recognition algorithms consider gestures in isolation and are often not sufficient for interacting with a robot, which can only understand such gestures when incorporating context, e.g., what object was pointed at or manipulated. Going beyond purely trajectory-based gesture recognition by incorporating context is an important prerequisite for gesture understanding and is addressed explicitly in a separate Chapter of this book. Two types of context, user-provided context and situational context, are distinguished, and existing approaches to incorporating context for gestural understanding are reviewed. Example approaches for both context types provide deeper algorithmic insight into this field of research. An overview of recent robots capable of gesture recognition and understanding summarizes the currently realized quality of human-robot interaction. The approaches for gesture understanding covered in this book are manually designed, whereas humans learn to recognize gestures automatically while growing up. Promising research aimed at analyzing developmental learning in children in order to mimic this capability in technical systems is highlighted in the last Chapter, completing the book, as this research direction may be highly influential for future gesture understanding systems.
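    Trajectory-based gesture recognition is covered in the book at the level of common techniques; one widely used concrete instance (not necessarily one of the book's example methods) is nearest-template classification of hand trajectories under dynamic time warping, sketched below with invented templates:

    import numpy as np

    def dtw(a, b):
        # Dynamic-time-warping distance between two 2-D hand trajectories.
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = np.linalg.norm(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    def classify(traj, templates):
        # Assign the label of the nearest template trajectory.
        return min(templates, key=lambda label: dtw(traj, templates[label]))

    # Invented templates: a rightward 'point' stroke and a circular 'wave'.
    t = np.linspace(0.0, 1.0, 20)
    templates = {
        "point": np.c_[t, np.zeros_like(t)],
        "wave": np.c_[np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)],
    }
    rng = np.random.default_rng(1)
    observed = np.c_[t, np.zeros_like(t)] + 0.05 * rng.normal(size=(20, 2))
    print(classify(observed, templates))  # expected: "point"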

    The Effects of a Movement Education Program on the Perceptual-Motor Development of Kindergarten and Grade One Students

    The purpose of this study was to investigate the effectiveness of a basic movement education program, presented by a physical education specialist, on the acquisition of perceptual-motor skills by kindergarten and grade one students.