
    The implications of embodiment for behavior and cognition: animal and robotic case studies

    In this paper, we will argue that if we want to understand the function of the brain (or the control in the case of robots), we must understand how the brain is embedded into the physical system, and how the organism interacts with the real world. While embodiment has often been used in its trivial meaning, i.e. 'intelligence requires a body', the concept has deeper and more important implications, concerned with the relation between physical and information (neural, control) processes. A number of case studies are presented to illustrate the concept. These involve animals and robots and are concentrated around locomotion, grasping, and visual perception. A theoretical scheme that can be used to embed the diverse case studies will be presented. Finally, we will establish a link between the low-level sensory-motor processes and cognition. We will present an embodied view on categorization, and propose the concepts of 'body schema' and 'forward models' as a natural extension of the embodied approach toward first representations.
    Comment: Book chapter in W. Tschacher & C. Bergomi, ed., 'The Implications of Embodiment: Cognition and Communication', Exeter: Imprint Academic, pp. 31-5
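
    As a rough, hypothetical illustration of the 'forward model' concept mentioned above (not the authors' model), the sketch below assumes a forward model is simply a learned mapping from the current sensory state and motor command to the predicted next sensory state, with the prediction error used as a learning signal.

        # Minimal forward-model sketch (illustrative assumption, not the chapter's model):
        # a linear predictor maps (sensory state, motor command) -> predicted next state,
        # and the prediction error drives a simple gradient update.
        import numpy as np

        class ForwardModel:
            def __init__(self, n_sense, n_motor, lr=0.01):
                self.W = np.zeros((n_sense, n_sense + n_motor))  # prediction weights
                self.lr = lr

            def predict(self, state, command):
                return self.W @ np.concatenate([state, command])

            def update(self, state, command, next_state):
                x = np.concatenate([state, command])
                error = next_state - self.W @ x          # prediction error
                self.W += self.lr * np.outer(error, x)   # gradient step
                return error

        # usage: predict the sensory consequence of a motor command, then learn from it
        fm = ForwardModel(n_sense=3, n_motor=2)
        err = fm.update(np.zeros(3), np.array([0.1, -0.2]), np.array([0.05, -0.1, 0.0]))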

    Learning and Acting in Peripersonal Space: Moving, Reaching, and Grasping

    The young infant explores its body, its sensorimotor system, and the immediately accessible parts of its environment, over the course of a few months creating a model of peripersonal space useful for reaching and grasping objects around it. Drawing on constraints from the empirical literature on infant behavior, we present a preliminary computational model of this learning process, implemented and evaluated on a physical robot. The learning agent explores the relationship between the configuration space of the arm, sensing joint angles through proprioception, and its visual perceptions of the hand and grippers. The resulting knowledge is represented as the peripersonal space (PPS) graph, where nodes represent states of the arm, edges represent safe movements, and paths represent safe trajectories from one pose to another. In our model, the learning process is driven by intrinsic motivation. When repeatedly performing an action, the agent learns the typical result, but also detects unusual outcomes, and is motivated to learn how to make those unusual results reliable. Arm motions typically leave the static background unchanged, but occasionally bump an object, changing its static position. The reach action is learned as a reliable way to bump and move an object in the environment. Similarly, once a reliable reach action is learned, it typically makes a quasi-static change in the environment, moving an object from one static position to another. The unusual outcome is that the object is accidentally grasped (thanks to the innate Palmar reflex), and thereafter moves dynamically with the hand. Learning to make grasps reliable is more complex than for reaches, but we demonstrate significant progress. Our current results are steps toward autonomous sensorimotor learning of motion, reaching, and grasping in peripersonal space, based on unguided exploration and intrinsic motivation.
    Comment: 35 pages, 13 figures
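
    A minimal sketch of how the PPS graph described above could be represented, assuming nodes carry the proprioceptive joint angles and the visually perceived hand position, and edges record safe movements; the structure and names are illustrative, not the authors' implementation.

        # Illustrative PPS-graph sketch: nodes are visited arm configurations,
        # edges are movements observed to be safe, paths are safe trajectories.
        import networkx as nx
        import numpy as np

        pps = nx.Graph()

        def add_configuration(node_id, joint_angles, hand_image_pos):
            # joint_angles: proprioceptive state; hand_image_pos: where the hand/gripper
            # appeared in the visual field when this configuration was visited
            pps.add_node(node_id, q=np.asarray(joint_angles), v=np.asarray(hand_image_pos))

        def add_safe_move(a, b):
            # connect two configurations after a collision-free motion between them
            pps.add_edge(a, b)

        def safe_trajectory(start, goal):
            # a path in the graph is a sequence of configurations linked by safe moves
            return nx.shortest_path(pps, start, goal)

        # usage: two configurations linked by one observed safe movement
        add_configuration(0, [0.0, 0.3, -0.1, 0.5], [120, 88])
        add_configuration(1, [0.1, 0.2, -0.1, 0.6], [131, 80])
        add_safe_move(0, 1)
        print(safe_trajectory(0, 1))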

    A neural network-based exploratory learning and motor planning system for co-robots

    Collaborative robots, or co-robots, are semi-autonomous robotic agents designed to work alongside humans in shared workspaces. To be effective, co-robots require the ability to respond and adapt to dynamic scenarios encountered in natural environments. One way to achieve this is through exploratory learning, or "learning by doing," an unsupervised method in which co-robots are able to build an internal model for motor planning and coordination based on real-time sensory inputs. In this paper, we present an adaptive neural network-based system for co-robot control that employs exploratory learning to achieve the coordinated motor planning needed to navigate toward, reach for, and grasp distant objects. To validate this system we used the 11-degrees-of-freedom RoPro Calliope mobile robot. Through motor babbling of its wheels and arm, the Calliope learned how to relate visual and proprioceptive information to achieve hand-eye-body coordination. By continually evaluating sensory inputs and externally provided goal directives, the Calliope was then able to autonomously select the appropriate wheel and joint velocities needed to perform its assigned task, such as following a moving target or retrieving an indicated object.
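
    The motor-babbling stage described above can be pictured with a toy sketch, deliberately much simpler than the paper's neural network system: random commands are issued, the sensed outcomes are recorded, and a nearest-neighbour lookup over the babbled data then acts as a crude inverse model.

        # Toy motor-babbling sketch (an assumption for illustration, not the paper's
        # architecture): sample random commands, record observed hand displacements,
        # then invert the mapping by nearest-neighbour lookup to reach a visual goal.
        import numpy as np

        rng = np.random.default_rng(0)

        def execute(command):
            # stand-in for the robot plus vision: returns the observed displacement
            A = np.array([[1.0, 0.3], [-0.2, 0.8]])   # unknown plant, here linear
            return A @ command

        commands, outcomes = [], []
        for _ in range(500):                          # babbling phase
            u = rng.uniform(-1, 1, size=2)
            commands.append(u)
            outcomes.append(execute(u))
        commands, outcomes = np.array(commands), np.array(outcomes)

        def inverse_model(goal_displacement):
            # pick the babbled command whose outcome was closest to the goal
            i = np.argmin(np.linalg.norm(outcomes - goal_displacement, axis=1))
            return commands[i]

        u = inverse_model(np.array([0.4, -0.1]))      # command to move toward a target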

    MUNDUS project: MUltimodal neuroprosthesis for daily upper limb support

    Background: MUNDUS is an assistive framework for recovering direct interaction capability of severely motor impaired people, based on arm reaching and hand functions. It aims at achieving personalization, modularity and maximization of the user's direct involvement in assistive systems. To this end, MUNDUS exploits any residual control of the end-user and can be adapted to the level of severity or to the progression of the disease, allowing the user to voluntarily interact with the environment. MUNDUS target pathologies are high-level spinal cord injury (SCI) and neurodegenerative and genetic neuromuscular diseases, such as amyotrophic lateral sclerosis, Friedreich ataxia, and multiple sclerosis (MS). The system can alternatively be driven by residual voluntary muscular activation, head/eye motion, or brain signals. MUNDUS modularly combines an antigravity, lightweight and non-cumbersome exoskeleton, closed-loop controlled Neuromuscular Electrical Stimulation for arm and hand motion, and potentially a motorized hand orthosis for grasping interactive objects. Methods: The requirements and the interaction tasks were defined through a focus group with experts and a questionnaire with 36 potential end-users. Five end-users (3 SCI and 2 MS) tested the system in the configuration suited to their specific level of impairment. They performed two exemplary tasks: reaching different points in the working volume and drinking. Three experts rated the execution of each assisted sub-action on a 3-level score (from 0, unsuccessful, to 2, completely functional). Results: The functionality of all modules was successfully demonstrated. User intention was detected with 100% success. Averaging over all subjects and tasks, the minimum evaluation score obtained was 1.13 ± 0.99, for the release of the handle during the drinking task, whilst all other sub-actions achieved mean values above 1.6. All users but one subjectively perceived the usefulness of the assistance and could easily control the system. Donning time ranged from 6 to 65 minutes, scaling with the complexity of the configuration. Conclusions: The MUNDUS platform provides functional assistance for daily life activities; the integration of modules depends on the user's needs, the functionality of the system has been demonstrated for all possible configurations, and a preliminary assessment of usability and acceptance is promising.
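
    As a purely illustrative sketch of the modularity and personalization described above (module names and selection rules are assumptions drawn from the abstract, not the MUNDUS specification), a configuration step might choose the control source and actuation modules from the user's residual capabilities:

        # Hypothetical configuration sketch: pick a control input and actuation modules
        # from a user's residual capabilities (illustrative, not the MUNDUS software).
        def configure(residual):
            modules = ["antigravity_exoskeleton"]     # passive arm-weight support
            # control source: prefer residual EMG, then eye/head motion, then brain signals
            if residual.get("voluntary_emg"):
                control = "emg"
            elif residual.get("eye_or_head_motion"):
                control = "eye_or_head_tracking"
            else:
                control = "bci"
            # actuation: add NMES where muscles still respond; otherwise a hand orthosis
            if residual.get("stimulable_arm_muscles"):
                modules.append("nmes_arm")
            if residual.get("stimulable_hand_muscles"):
                modules.append("nmes_hand")
            else:
                modules.append("motorized_hand_orthosis")
            return control, modules

        print(configure({"voluntary_emg": False, "eye_or_head_motion": True,
                         "stimulable_arm_muscles": True, "stimulable_hand_muscles": False}))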

    On Neuromechanical Approaches for the Study of Biological Grasp and Manipulation

    Biological and robotic grasp and manipulation are undeniably similar at the level of mechanical task performance. However, their underlying fundamental biological vs. engineering mechanisms are, by definition, dramatically different and can even be antithetical. Even our approach to each is diametrically opposite: inductive science for the study of biological systems vs. engineering synthesis for the design and construction of robotic systems. The past 20 years have seen several conceptual advances in both fields and the quest to unify them. Chief among them is the reluctant recognition that their underlying fundamental mechanisms may actually share limited common ground, while exhibiting many fundamental differences. This recognition is particularly liberating because it allows us to resolve and move beyond multiple paradoxes and contradictions that arose from the initial reasonable assumption of a large common ground. Here, we begin by introducing the perspective of neuromechanics, which emphasizes that real-world behavior emerges from the intimate interactions among the physical structure of the system, the mechanical requirements of a task, the feasible neural control actions to produce it, and the ability of the neuromuscular system to adapt through interactions with the environment. This allows us to articulate a succinct overview of a few salient conceptual paradoxes and contradictions regarding under-determined vs. over-determined mechanics, under- vs. over-actuated control, prescribed vs. emergent function, learning vs. implementation vs. adaptation, prescriptive vs. descriptive synergies, and optimal vs. habitual performance. We conclude by presenting open questions and suggesting directions for future research. We hope this frank assessment of the state of the art will encourage and guide these communities to continue to interact and make progress in these important areas.
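
    One of the paradoxes listed above, under-determined vs. over-determined mechanics and under- vs. over-actuated control, can be made concrete with a toy tendon-driven example (the moment arms and torques are assumed numbers, not taken from the paper): with more tendons than joints, the same joint torques can be produced by infinitely many tendon-force combinations.

        # Toy over-actuation example (assumed values, not from the paper): two joints
        # driven by four tendons, so torque tau = R @ f is under-determined in f.
        # Physiological sign constraints on tendon forces are ignored here.
        import numpy as np

        R = np.array([[ 0.02, -0.02,  0.01, -0.01],   # moment arms, joint 1 (m)
                      [ 0.00,  0.01, -0.02,  0.02]])  # moment arms, joint 2 (m)
        tau = np.array([0.5, -0.2])                   # desired joint torques (N*m)

        f_min = np.linalg.pinv(R) @ tau               # minimum-norm tendon forces
        N = np.eye(4) - np.linalg.pinv(R) @ R         # null-space projector
        f_alt = f_min + N @ np.array([10.0, 10.0, 10.0, 10.0])  # another valid solution

        print(np.allclose(R @ f_min, tau), np.allclose(R @ f_alt, tau))  # True True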

    Analysis of postures for handwriting on touch screens without using tools

    The act of handwriting has affected the evolutionary development of humans and still impacts the motor cognition of individuals. However, the ubiquitous use of digital technologies has drastically decreased the number of times we really need to pick up a pen and write on paper. Nonetheless, the positive cognitive impact of handwriting is widely recognized, and a possible way to merge the benefits of handwriting and digital writing is to use suitable tools to write on touchscreens or graphics tablets. In this manuscript, we focus on the possibility of using the hand itself as a writing tool. A novel hand posture named FingerPen is introduced; it can be seen as a grasp performed by the hand on the index finger. A comparison with the most common posture that people tend to assume (i.e. index finger-only exploitation) is carried out by means of a biomechanical model. A user study shows that the FingerPen is appreciated by users and leads to accurate writing traits.

    An electro-oculogram based vision system for grasp assistive devices - A proof of concept study

    Humans typically fixate on an object before moving their arm to grasp it. Patients with ALS can also select an object with their intact eye movement, but are unable to move their limb due to the loss of voluntary muscle control. Though several research works have already achieved success in generating the correct grasp type from brain measurements, we are still searching for fine control over an object with a grasp assistive device (orthosis/exoskeleton/robotic arm). Object orientation and object width are two important parameters for controlling the wrist angle and the grasp aperture of the assistive device to replicate a human-like stable grasp. Vision systems have already evolved to measure the geometrical attributes of an object to control grasping with a prosthetic hand. However, most existing vision systems are integrated with electromyography and require some amount of voluntary muscle movement to control the vision system. For that reason, those systems are not beneficial for users of brain-controlled assistive devices. Here, we implemented a vision system which can be controlled through the human gaze. We measured the vertical and horizontal electro-oculogram signals and controlled the pan and tilt of a cap-mounted webcam to keep the object of interest in focus and at the centre of the picture. A simple 'signature' extraction procedure was also used to reduce the algorithmic complexity and the system's storage requirements. The developed device was tested with ten healthy participants. We approximated the object orientation and size and determined an appropriate wrist orientation angle and grasp aperture size within 22 ms. The combined accuracy exceeded 75%. Integrating the proposed system with a brain-controlled grasp assistive device and increasing the number of supported grasps could offer more natural grasp manoeuvring for ALS patients.
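
    A hedged sketch of the two geometric steps described above: estimating object orientation and width from a segmented image region, and mapping them to a wrist angle and grasp aperture. The OpenCV-based oriented bounding box, the scaling factors, and the safety margin are assumptions for illustration, not the authors' 'signature' extraction pipeline.

        # Illustrative sketch: estimate object orientation/width from a binary mask
        # and derive a wrist angle and grasp aperture (not the authors' pipeline).
        import cv2
        import numpy as np

        def grasp_parameters(mask, mm_per_pixel=0.5, max_aperture_mm=90.0):
            # largest contour in the segmented object mask
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            c = max(contours, key=cv2.contourArea)
            (cx, cy), (w, h), angle = cv2.minAreaRect(c)        # oriented bounding box
            width_mm = min(w, h) * mm_per_pixel                 # graspable width estimate
            wrist_angle = angle                                 # align wrist with object axis
            aperture_mm = min(width_mm * 1.2, max_aperture_mm)  # small safety margin
            return wrist_angle, aperture_mm

        # usage with a synthetic mask containing one rotated rectangular object
        mask = np.zeros((240, 320), dtype=np.uint8)
        box = cv2.boxPoints(((160, 120), (80, 30), 25.0)).astype(np.int32)
        cv2.fillPoly(mask, [box], 255)
        print(grasp_parameters(mask))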

    A developmental approach to robotic pointing via human–robot interaction

    This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/3.0/).
    The ability to point is recognised as an essential skill for a robot in its communication and social interaction. This paper introduces a developmental learning approach to robotic pointing that exploits the interactions between a human and a robot. The approach is inspired by observing the process of human infant development. It works by first applying a reinforcement learning algorithm to guide the robot to make attempted movements towards a salient object that is out of the robot's initial reachable space. Through such movements, a human demonstrator is able to understand that the robot desires to touch the target and, consequently, to assist the robot to eventually reach the object successfully. The human-robot interaction helps establish an understanding of pointing gestures in the perception of both the human and the robot. From this, the robot can collect the successful pointing gestures in an effort to learn how to interact with humans. Developmental constraints are utilised to drive the entire learning procedure. The work is supported by experimental evaluation, demonstrating that the proposed approach can lead the robot to gradually gain the desired pointing ability. The resulting robot system also exhibits developmental progress and features similar to those of human infants.
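
    The reinforcement-learning stage described above might be sketched, under strong simplifying assumptions (a discretised one-dimensional arm pose, a reward equal to the reduction in hand-target distance, and tabular Q-learning rather than the paper's exact algorithm), as follows:

        # Toy sketch of learning attempt movements toward an out-of-reach target:
        # tabular Q-learning over discretised arm poses, rewarded for reducing the
        # hand-target distance (illustrative assumptions, not the paper's algorithm).
        import numpy as np

        rng = np.random.default_rng(1)
        n_poses, n_actions = 20, 2                    # 1-D pose index; move left or right
        target_pose = 30                              # salient object beyond reachable range
        Q = np.zeros((n_poses, n_actions))
        alpha, gamma, eps = 0.1, 0.9, 0.2

        for episode in range(200):
            s = int(rng.integers(n_poses))
            for step in range(50):
                a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
                s_next = int(np.clip(s + (1 if a == 1 else -1), 0, n_poses - 1))
                # reward: how much this movement reduced the distance to the salient target
                r = abs(target_pose - s) - abs(target_pose - s_next)
                Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
                s = s_next

        # the learned policy drives the arm toward the target side, i.e. an attempted point
        print(np.argmax(Q, axis=1))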