    In silico case studies of compliant robots: AMARSI deliverable 3.3

    In deliverable 3.2 we presented how the morphological computation approach can significantly facilitate the control strategy in several scenarios, e.g. quadruped locomotion, bipedal locomotion, and reaching. In particular, the Kitty experimental platform is an example of the use of morphological computation to enable quadruped locomotion. In this deliverable we continue with simulation studies on the application of different morphological computation strategies to control a robotic system.

    Spatial and Timing Regulation of Upper-Limb Movements in Rhythmic Tasks

    Rhythmic movement is vital to humans and a foundation of activities such as locomotion, handwriting, and repetitive tool use. The spatiotemporal regularity characterizing such movements reflects a level of automaticity and coordination that is believed to emerge from mutually inhibitory or other pattern-generating neural networks in the central nervous system. Although many studies have described this regularity and illuminated the types of sensory information that influence rhythmic behavior, an understanding of how the brain uses sensory feedback to regulate rhythmic behavior on a cycle-by-cycle basis has been elusive. This thesis uses the model task of paddle juggling, or vertical ball bouncing, to address how three types of feedback (visual, auditory, and haptic) contribute to spatial and temporal regulation of rhythmic upper-limb movements. We use a multi-level approach in accordance with the well-known dictum of Marr and Poggio. The crux of this thesis describes a method and suite of experiments to understand how the brain uses visual, auditory, and haptic feedback to regulate spatial or timing regularity, and formulates a cycle-by-cycle description of this control: to wit, the nature and algorithms of sensory-feedback-guided regulation. Part I motivates our interest in this problem by discussing the biological "hardware" that the nervous system putatively employs in these movements and reviewing insights from previous studies of paddle juggling that suggest how this "hardware" may manifest itself in these behaviors. The central experimental approach of this thesis is to train participants to perform the paddle juggling task with spatiotemporal regularity (in other words, to achieve limit-cycle behavior), and then to interrogate how the brain regulates closed-loop performance by perturbing task feedback. In Part II, we review the development of a novel hard-real-time virtual-reality juggling simulator that enables precise spatial and temporal feedback perturbations. We then outline the central experimental approach, in which we perturb spatial feedback of the ball at apex phase (vision) and timing feedback of collision-phase (auditory and haptic) and apex-phase events to understand spatial and timing regulation. Part III describes two experiments that yield the main research findings of this thesis. In Experiment 1, we use a sinusoidal-perturbation-based system identification approach to determine that spatial and timing feedback are used in two dissociable and complementary control processes: spatial error correction and temporal synchronization. In Experiment 2, a combination of sinusoidal and step perturbations is used to establish that these complementary processes obey different dynamics: spatial correction is a proportional-integral process based on a one-step memory of feedback, while temporal synchronization is a proportional process that depends only on the most recent feedback. We close in Part IV with a discussion of how the insights and approaches from this thesis can lead to improved rehabilitation approaches and a better understanding of the physiological basis of rhythmic movement regulation.
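
    Read literally, the control law identified in Experiment 2 can be written down as a cycle-by-cycle update rule. The Python sketch below is one illustrative reading of that description, not the thesis's implementation; the gain values, signal names, and the `regulate_cycle` helper are assumptions introduced for the example.

```python
# Illustrative cycle-by-cycle regulation, assuming:
#  - spatial error correction: proportional-integral on a one-step memory
#  - temporal synchronization: proportional on the most recent timing error
# Gains and signal names are hypothetical, chosen only for this sketch.
KP_SPACE, KI_SPACE = 0.4, 0.2   # spatial PI gains (assumed)
KP_TIME = 0.5                   # temporal proportional gain (assumed)

def regulate_cycle(paddle_amplitude, paddle_period,
                   apex_error, apex_error_prev, timing_error):
    """Update paddle amplitude and period for the next juggling cycle."""
    # Spatial correction: PI process using the current apex-height error
    # plus a one-step memory of the previous error.
    amplitude_next = paddle_amplitude - (KP_SPACE * apex_error
                                         + KI_SPACE * apex_error_prev)
    # Temporal synchronization: proportional process on the most recent
    # collision/apex timing error only (no memory term).
    period_next = paddle_period - KP_TIME * timing_error
    return amplitude_next, period_next

# Example: one simulated cycle with a 2 cm apex error and a 30 ms timing error.
amp, per = regulate_cycle(0.05, 0.60, apex_error=0.02,
                          apex_error_prev=0.01, timing_error=0.03)
print(amp, per)
```

    On this reading, the one-step memory term is what distinguishes the PI spatial process from the memoryless proportional timing process.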

    Building Blocks for Cognitive Robots: Embodied Simulation and Schemata in a Cognitive Architecture

    Hemion N. Building Blocks for Cognitive Robots: Embodied Simulation and Schemata in a Cognitive Architecture. Bielefeld: Bielefeld University; 2013. Building robots with the ability to perform general intelligent action is a primary goal of artificial intelligence research. The traditional approach is to study and model fragments of cognition separately, with the hope that it will somehow be possible to integrate the specialist solutions into a functioning whole. However, while individual specialist systems demonstrate proficiency in their respective niches, current integrated systems remain clumsy in their performance. Recent findings in neurobiology and psychology demonstrate that many regions of the brain are involved not in just one but in a variety of cognitive tasks, suggesting that the cognitive architecture of the brain uses generic computations in a distributed network instead of specialist computations in local modules. Designing the cognitive architecture for a robot based on these findings could lead to more capable integrated systems. In this thesis, theoretical background on the concept of embodied cognition is provided, and fundamental mechanisms of cognition that are hypothesized across theories are discussed. Based on this background, a view of how to connect elements of the different theories is proposed, providing enough detail to allow computational modeling. The view proposes a network of generic building blocks as the central component of a cognitive architecture. Each building block learns an internal model of its inputs. Given partial inputs or cues, the building blocks can collaboratively restore missing components, providing the basis for embodied simulation, which in theories of embodied cognition is hypothesized to be a central mechanism of cognition and the basis for many cognitive functions. In simulation experiments, it is demonstrated how the building blocks can be autonomously learned by a robot from its sensorimotor experience, and that the mechanism of embodied simulation allows the robot to solve multiple tasks simultaneously. In summary, this thesis investigates how to develop cognitive robots under the paradigm of embodied cognition. It provides a description of a novel cognitive architecture and thoroughly discusses its relation to a broad body of interdisciplinary literature on embodied cognition. This thesis hence promotes the view that the cognitive system houses a network of active elements which organize the agent's experiences and collaboratively carry out many cognitive functions. In the long run, it will be inevitable to study complete cognitive systems, such as the cognitive architecture described in this thesis, instead of only studying small learning systems separately, in order to answer the question of how to build truly autonomous cognitive robots.
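
    As a minimal illustration of the pattern-completion role attributed to the building blocks (each unit learns an internal model of its inputs and restores missing components from partial cues), the sketch below uses a simple linear autoassociator trained by ridge regression. This is a stand-in chosen for brevity, not the learning mechanism described in the thesis; the class name, dimensions, and toy data are assumptions.

```python
import numpy as np

class BuildingBlock:
    """Toy generic building block: learns an internal model of its input
    vectors and restores missing components from partial cues.
    (Illustrative linear autoassociator; not the architecture from the thesis.)
    """
    def __init__(self, dim, ridge=1e-3):
        self.dim = dim
        self.ridge = ridge
        self.W = np.zeros((dim, dim))

    def learn(self, X):
        # Fit X ≈ X @ W with ridge regularization (internal model of the data).
        A = X.T @ X + self.ridge * np.eye(self.dim)
        self.W = np.linalg.solve(A, X.T @ X)

    def complete(self, x_partial, known_mask, steps=20):
        # Iteratively restore missing entries while clamping the known ones.
        x = np.where(known_mask, x_partial, 0.0)
        for _ in range(steps):
            x_hat = x @ self.W
            x = np.where(known_mask, x_partial, x_hat)
        return x

# Example: "sensorimotor" vectors whose last two entries depend on the first two.
rng = np.random.default_rng(0)
motor = rng.uniform(-1, 1, size=(200, 2))
X = np.hstack([motor, motor.sum(axis=1, keepdims=True),
               motor[:, :1] - motor[:, 1:]])
block = BuildingBlock(dim=4)
block.learn(X)
cue = np.array([0.3, -0.5, 0.0, 0.0])
mask = np.array([True, True, False, False])
print(block.complete(cue, mask))   # approximately [0.3, -0.5, -0.2, 0.8]
```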

    Using learning algorithms to develop dynamic gaits for legged robots

    Thesis (S.M.), Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2006. Includes bibliographical references (p. 129-134). As more legged robots have been developed for their obvious advantages in overall maneuverability and mobility over rough terrain and difficult obstacles, their shortcomings over flat terrain have become more apparent. These robots plod along at extremely low speeds even when the ground is flat and level, because virtually all legged robots use a very stable, very slow walking gait regardless of whether the ground is flat or rough. The simplest way of solving this problem is to use the same method as legged animals: change the gait from a walk to a faster, more dynamic gait in order to increase the robot's speed. It would be extremely useful if legged robots were capable of moving across flat ground at high velocities while still retaining their ability to cross extremely rough or broken ground. Unfortunately, dynamic gaits are quite difficult to program by hand, and only minimal research has been done on them. This thesis evaluates the use of two different types of learning algorithms (a genetic algorithm and a modified gradient-climbing reinforcement learning algorithm) as applied to the problem of developing dynamic gaits for a simulation of the Sony Aibo robot. The two algorithms are tested using a random starting population and a high-fitness starting population, and the results from both tests are compared. The research focuses on three different types of dynamic gaits: the trot, the canter, and the gallop. The efficiencies of the learned gaits are compared to each other in order to determine the best type of high-speed gait for use on the Aibo robot. Problems with the design of the Aibo robot as related to performing dynamic gaits are also identified, and solutions are proposed. By Brian Schaaf. S.M.
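
    A genetic algorithm applied to gait development typically encodes a gait as a small parameter vector and evolves a population against a fitness score measured in simulation. The sketch below shows that generic loop under assumed settings; the gait parameterization, bounds, constants, and the `simulate_gait` fitness stub are placeholders, not the thesis's Aibo setup.

```python
import random

# Hypothetical gait parameterization: [step_length, frequency_hz,
# phase_offset_front, phase_offset_rear]. Bounds are assumed.
BOUNDS = [(0.01, 0.10), (0.5, 3.0), (0.0, 1.0), (0.0, 1.0)]

def simulate_gait(params):
    """Stand-in for running the gait in a physics simulator and returning
    forward speed (m/s); here a made-up smooth function for illustration."""
    step, freq, pf, pr = params
    return step * freq - 0.5 * abs(pf - pr - 0.5)

def random_genome():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def mutate(genome, rate=0.2, scale=0.1):
    # Gaussian perturbation of each gene with probability `rate`, clamped to bounds.
    return [min(hi, max(lo, g + random.gauss(0, scale * (hi - lo))))
            if random.random() < rate else g
            for g, (lo, hi) in zip(genome, BOUNDS)]

def crossover(a, b):
    # Uniform crossover: each gene inherited from either parent.
    return [ga if random.random() < 0.5 else gb for ga, gb in zip(a, b)]

def evolve(pop_size=30, generations=50, elite=5):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=simulate_gait, reverse=True)
        parents = ranked[:elite]
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - elite)]
        population = parents + children
    return max(population, key=simulate_gait)

best = evolve()
print(best, simulate_gait(best))
```

    The thesis also seeds the search with a high-fitness starting population; in this sketch that would simply mean replacing `random_genome()` with known hand-tuned gait parameters.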

    Proceedings of the Post-Graduate Conference on Robotics and Development of Cognition, 10-12 September 2012, Lausanne, Switzerland

    The aim of the Postgraduate Conference on Robotics and Development of Cognition (RobotDoC-PhD) is to bring together young scientists working on developmental cognitive robotics and its core disciplines. The conference aims to provide both feedback and greater visibility for their research, as lively and stimulating discussions can be held amongst participating PhD students and senior researchers. The conference is open to all PhD students and post-doctoral researchers in the field. The RobotDoC-PhD conference is an initiative of the Marie Curie Actions ITN RobotDoC and will be organized as a satellite event of the 22nd International Conference on Artificial Neural Networks, ICANN 2012.

    Humanoid Robots

    For many years, humans have been trying in many ways to recreate the complex mechanisms that form the human body. This task is extremely complicated, and the results are not yet fully satisfactory. However, with increasing technological advances based on theoretical and experimental research, it has become possible, to some extent, to copy or imitate some systems of the human body. This research is intended not only to create humanoid robots, a great part of which constitute autonomous systems, but also to offer deeper knowledge of the systems that form the human body, with a view toward possible applications in rehabilitation technology for human beings, gathering studies related not only to Robotics but also to Biomechanics, Biomimetics, Cybernetics, and other areas. This book presents a series of studies inspired by this ideal, carried out by various researchers worldwide, that analyze and discuss diverse subjects related to humanoid robots. The presented contributions explore aspects of robotic hands, learning, language, vision, and locomotion.

    Grounding the Meanings in Sensorimotor Behavior using Reinforcement Learning

    The recent outburst of interest in cognitive developmental robotics is fueled by the ambition to propose ecologically plausible mechanisms of how, among other things, a learning agent/robot could ground linguistic meanings in its sensorimotor behavior. Along this stream, we propose a model that allows the simulated iCub robot to learn the meanings of actions (point, touch, and push) oriented toward objects in the robot's peripersonal space. In our experiments, the iCub learns to execute motor actions and comment on them. Architecturally, the model is composed of three neural-network-based modules that are trained in different ways. The first module, a two-layer perceptron, is trained by back-propagation to attend to the target position in the visual scene, given the low-level visual information and the feature-based target information. The second module, having the form of an actor-critic architecture, is the most distinguishing part of our model and is trained by a continuous version of reinforcement learning to execute actions as sequences, based on a linguistic command. The third module, an echo-state network, is trained to provide the linguistic description of the executed actions. The trained model generalizes well to novel action-target combinations with randomized initial arm positions. It can also promptly adapt its behavior if the action or target suddenly changes during motor execution.
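
    The third module is an echo-state network: a recurrent reservoir whose input and recurrent weights stay fixed while only a linear readout is trained, typically by ridge regression. The sketch below illustrates that generic training scheme on toy data; the dimensions, spectral radius, and the toy task are assumptions, and the code is not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Echo-state network: fixed random reservoir, trained linear readout.
n_in, n_res, n_out = 3, 100, 2
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius 0.9 (assumed)

def run_reservoir(inputs):
    """Collect reservoir states for an input sequence of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: the readout must reproduce a sum of current inputs and a
# one-step-delayed input channel (placeholder targets, not linguistic output).
U = rng.uniform(-1, 1, (500, n_in))
Y = np.stack([U[:, 0] + U[:, 1], np.roll(U[:, 2], 1)], axis=1)

X = run_reservoir(U)
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)  # readout

Y_hat = X @ W_out
print("train MSE:", float(np.mean((Y_hat - Y) ** 2)))
```

    In the paper's setting, the readout would map reservoir states driven by the executed action to a linguistic description rather than to the toy targets used here.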

    Mental content : consequences of the embodied mind paradigm

    The central difference between objectivist cognitivist semantics and embodied cognition consists in the fact that the latter, in contrast to the former, binds meaning to context-sensitive mental systems. According to Lakoff and Johnson's experientialism, conceptual structures arise from preconceptual kinesthetic image-schematic and basic-level structures. Gallese and Lakoff introduced the notion of exploiting sensorimotor structures for higher-level cognition. Three different types of X-schemas realise three types of environmentally embedded simulation: areas that control movements in peri-personal space; canonical neurons of the ventral premotor cortex that fire when a graspable object is represented; and the firing of mirror neurons while perceiving certain movements of conspecifics. ...

    Augmented visual, auditory, haptic, and multimodal feedback in motor learning: A review

    It is generally accepted that augmented feedback, provided by a human expert or a technical display, effectively enhances motor learning. However, how best to provide augmented feedback has been controversial. Related studies have focused primarily on simple or artificial tasks enhanced by visual feedback. Recently, technical advances have made it possible to investigate more complex, realistic motor tasks and to implement not only visual but also auditory, haptic, or multimodal augmented feedback. The aim of this review is to address the potential of augmented unimodal and multimodal feedback in the framework of motor learning theories. The review addresses the reasons for the different impacts of feedback strategies within or between the visual, auditory, and haptic modalities, and the challenges that need to be overcome to provide appropriate feedback in these modalities, either in isolation or in combination. Accordingly, design criteria for successful visual, auditory, haptic, and multimodal feedback are elaborated.