16 research outputs found

    The ITALK project : A developmental robotics approach to the study of individual, social, and linguistic learning

    This is the peer-reviewed version of the following article: Frank Broz et al., “The ITALK Project: A Developmental Robotics Approach to the Study of Individual, Social, and Linguistic Learning”, Topics in Cognitive Science, Vol. 6(3): 534-544, June 2014, published in final form at http://dx.doi.org/10.1111/tops.12099. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for Self-Archiving. Copyright © 2014 Cognitive Science Society, Inc.

    This article presents results from a multidisciplinary research project on the integration and transfer of language knowledge into robots as an empirical paradigm for the study of language development in both humans and humanoid robots. Within the framework of human linguistic and cognitive development, we focus on how three central types of learning interact and co-develop: individual learning about one's own embodiment and the environment, social learning (learning from others), and learning of linguistic capability. Our primary concern is how these capabilities can scaffold each other's development in a continuous feedback cycle as their interactions yield increasingly sophisticated competencies in the agent's capacity to interact with others and manipulate its world. Experimental results are summarized in relation to milestones in human linguistic and cognitive development and show that the mutual scaffolding of social learning, individual learning, and linguistic capabilities creates the context, conditions, and requisites for learning in each domain. Challenges and insights identified as a result of this research program are discussed with regard to possible and actual contributions to cognitive science and language ontogeny. In conclusion, directions for future work are suggested that continue to develop this approach toward an integrated framework for understanding these mutually scaffolding processes as a basis for language development in humans and robots.

    Deep robot sketching: an application of deep Q-learning networks for human-like sketching

    © 2023 The Authors. Published by Elsevier B.V. This research has been financed by ALMA, “Human Centric Algebraic Machine Learning”, H2020 RIA under EU grant agreement 952091; ROBOASSET, “Sistemas robóticos inteligentes de diagnóstico y rehabilitación de terapias de miembro superior”, PID2020-113508RBI00, financed by AEI/10.13039/501100011033; “RoboCity2030-DIHCM, Madrid Robotics Digital Innovation Hub”, S2018/NMT-4331, financed by “Programas de Actividades I+D en la Comunidad de Madrid”; “iREHAB: AI-powered Robotic Personalized Rehabilitation”, ISCIIIAES-2022/003041, financed by ISCIII and UE; and EU structural funds.

    The recent success of Reinforcement Learning algorithms in complex environments has inspired many recent theoretical approaches to cognitive science. Artistic environments are studied within the cognitive science community as rich, natural, multi-sensory, multi-cultural environments. In this work, we propose the introduction of Reinforcement Learning for improving the control of artistic robot applications. Deep Q-learning Networks (DQN) are among the most successful algorithms for implementing Reinforcement Learning in robotics. DQN methods generate complex control policies for the execution of complex robot applications in a wide range of environments. Current art painting robot applications use simple control laws that limit the adaptability of these frameworks to a set of simple environments. In this work, the introduction of DQN within an art painting robot application is proposed. The goal is to study how the introduction of a complex control policy impacts the performance of a basic art painting robot application. The main expected contribution of this work is to serve as a first baseline for future works introducing DQN methods for complex art painting robot frameworks. Experiments consist of real-world executions of human-drawn sketches using the DQN-generated policy and TEO, the humanoid robot. Results are compared in terms of similarity and obtained reward with respect to the reference inputs.
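
    A minimal sketch of how such a DQN-driven sketching loop can be set up is shown below; the grid-canvas state, the pen-move action set, the overlap-based reward, and all dimensions are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch (assumption, not the authors' code): a DQN agent whose state is a
# binarised canvas plus pen position, whose actions move the pen on a grid, and whose
# reward is the pixel overlap gained with a reference sketch after each step.
import random
import numpy as np
import torch
import torch.nn as nn

GRID = 16                      # canvas resolution (illustrative)
ACTIONS = 4                    # pen moves: up, down, left, right
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]

q_net = nn.Sequential(         # Q(s, .): canvas + pen mask -> one value per action
    nn.Flatten(),
    nn.Linear(2 * GRID * GRID, 128), nn.ReLU(),
    nn.Linear(128, ACTIONS),
)
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def state_tensor(canvas, pen):
    """Stack the drawn canvas and a one-hot pen-position plane."""
    pen_plane = np.zeros_like(canvas)
    pen_plane[pen] = 1.0
    return torch.tensor(np.stack([canvas, pen_plane])[None], dtype=torch.float32)

def step(canvas, pen, action, reference):
    """Move the pen, ink the new cell; reward = newly matched reference pixels."""
    dr, dc = MOVES[action]
    pen = (min(max(pen[0] + dr, 0), GRID - 1), min(max(pen[1] + dc, 0), GRID - 1))
    before = np.sum(canvas * reference)
    canvas[pen] = 1.0
    reward = float(np.sum(canvas * reference) - before) - 0.01   # small step penalty
    return canvas, pen, reward

def td_update(s, a, r, s_next, gamma=0.99):
    """One temporal-difference update of the Q-network."""
    q = q_net(s)[0, a]
    with torch.no_grad():
        target = r + gamma * q_net(s_next).max()
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# one illustrative episode against a random "reference sketch"
reference = (np.random.rand(GRID, GRID) > 0.8).astype(np.float32)
canvas, pen = np.zeros((GRID, GRID), dtype=np.float32), (GRID // 2, GRID // 2)
for _ in range(200):
    s = state_tensor(canvas, pen)
    a = random.randrange(ACTIONS) if random.random() < 0.1 else int(q_net(s).argmax())
    canvas, pen, r = step(canvas, pen, a, reference)
    td_update(s, a, r, state_tensor(canvas, pen))
```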

    Using the Functional Reach Test for Probing the Static Stability of Bipedal Standing in Humanoid Robots Based on the Passive Motion Paradigm

    The goal of this paper is to analyze the static stability of a computational architecture, based on the Passive Motion Paradigm, for coordinating the redundant degrees of freedom of a humanoid robot during whole-body reaching movements in bipedal standing. The analysis is based on a simulation study that implements the Functional Reach Test, originally developed for assessing the danger of falling in elderly people. The study is carried out in the YARP environment, which allows realistic simulations with the iCub humanoid robot.
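
    The following is a minimal planar sketch of the Jacobian-transpose relaxation at the core of the Passive Motion Paradigm; the 3-DOF planar arm, link lengths, and gains are illustrative assumptions, not the iCub/YARP setup used in the study.

```python
# Illustrative sketch of Passive-Motion-Paradigm-style coordination: a virtual force
# field attracts the hand to the target and is mapped to joint velocities through
# admittance/stiffness matrices, so redundant joints are coordinated "passively".
import numpy as np

L = np.array([0.3, 0.25, 0.15])          # link lengths of a planar 3-DOF arm (redundant in 2-D)
K = np.diag([50.0, 50.0])                # virtual stiffness of the attractive field
A = np.diag([1.0, 1.0, 1.0])             # joint admittance

def forward(q):
    """Planar forward kinematics: hand position for joint angles q."""
    angles = np.cumsum(q)
    return np.array([np.sum(L * np.cos(angles)), np.sum(L * np.sin(angles))])

def jacobian(q):
    """2x3 Jacobian of the planar arm."""
    angles = np.cumsum(q)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(L[i:] * np.sin(angles[i:]))
        J[1, i] = np.sum(L[i:] * np.cos(angles[i:]))
    return J

def pmp_reach(q, target, dt=0.01, steps=2000):
    """Relax the arm toward the target under the virtual force field."""
    for _ in range(steps):
        force = K @ (target - forward(q))            # attractive force in task space
        q = q + dt * (A @ jacobian(q).T @ force)     # passive motion of the joints
    return q

q_final = pmp_reach(np.array([0.3, 0.4, 0.2]), target=np.array([0.45, 0.25]))
print("reach error:", np.linalg.norm(forward(q_final) - np.array([0.45, 0.25])))
```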

    Integration of an actor-critic model and generative adversarial networks for a Chinese calligraphy robot

    As a combination of robotic motion planning and Chinese calligraphy culture, robotic calligraphy plays a significant role in the inheritance and education of Chinese calligraphy culture. Most existing calligraphy robots focus on enabling the robots to learn writing through human participation, such as human–robot interactions and manually designed evaluation functions. However, because of the subjectivity of art aesthetics, these existing methods require a large amount of implementation work from human engineers. In addition, the written results cannot be accurately evaluated. To overcome these limitations, in this paper, we propose a robotic calligraphy model that combines a generative adversarial network (GAN) and deep reinforcement learning to enable a calligraphy robot to learn to write Chinese character strokes directly from images captured from Chinese calligraphic textbooks. In our proposed model, to automatically establish an aesthetic evaluation system for Chinese calligraphy, a GAN is first trained to understand and reconstruct stroke images. Then, the discriminator network is independently extracted from the trained GAN and embedded into a variant of the reinforcement learning method, the “actor-critic model”, as a reward function. Thus, a calligraphy robot adopts the improved actor-critic model to learn to write multiple character strokes. The experimental results demonstrate that the proposed model successfully allows a calligraphy robot to write Chinese character strokes based on input stroke images. The performance of our model, compared with the state-of-the-art deep reinforcement learning method, shows the efficacy of the combination approach. In addition, the key technology in this work shows promise as a solution for robotic autonomous assembly.
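
    The coupling described above can be sketched roughly as follows; the network sizes, the stroke-rendering placeholder, and the one-step actor-critic update are assumptions for illustration, not the paper's implementation.

```python
# Illustrative sketch (assumed interfaces, not the paper's code) of the key idea: a
# discriminator pre-trained inside a GAN on calligraphy stroke images is reused as the
# reward function of an actor-critic learner that outputs stroke-trajectory parameters.
import torch
import torch.nn as nn

IMG = 32                                       # stroke image resolution (assumed)
ACT_DIM = 8                                    # trajectory parameters per stroke (assumed)

discriminator = nn.Sequential(                 # stands in for the GAN-trained judge of "realness"
    nn.Flatten(), nn.Linear(IMG * IMG, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

actor = nn.Sequential(nn.Linear(IMG * IMG, 64), nn.ReLU(), nn.Linear(64, ACT_DIM), nn.Tanh())
critic = nn.Sequential(nn.Linear(IMG * IMG, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(list(actor.parameters()) + list(critic.parameters()), lr=1e-3)

def render(action):
    """Placeholder for the robot writing a stroke and imaging the result."""
    return torch.sigmoid(action.mean()) * torch.rand(1, 1, IMG, IMG)

def train_step(target_image):
    state = target_image.flatten(1)                 # the reference stroke image is the state
    mean = actor(state)
    dist = torch.distributions.Normal(mean, 0.1)    # stochastic stroke-parameter policy
    action = dist.sample()
    written = render(action)
    with torch.no_grad():
        reward = discriminator(written)             # GAN discriminator score as the reward
    value = critic(state)
    advantage = (reward - value).detach()
    actor_loss = -(advantage * dist.log_prob(action).sum()).mean()
    critic_loss = (reward - value).pow(2).mean()
    opt.zero_grad()
    (actor_loss + critic_loss).backward()
    opt.step()
    return float(reward)

train_step(torch.rand(1, 1, IMG, IMG))
```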

    Movement primitives as a robotic tool to interpret trajectories through learning-by-doing

    Articulated movements are fundamental in many human and robotic tasks. While humans can learn and generalise arbitrarily long sequences of movements, and in particular can optimise them to fit the constraints and features of their body, robots are often programmed to execute precise but fixed point-to-point patterns. This study proposes a new approach to interpreting and reproducing articulated and complex trajectories as a set of known robot-based primitives. Instead of achieving accurate reproductions, the proposed approach aims at interpreting data in an agent-centred fashion, according to an agent's primitive movements. The method improves the accuracy of a reproduction with an incremental process that first seeks a rough approximation by capturing the most essential features of a demonstrated trajectory. Observing the discrepancy between the demonstrated and reproduced trajectories, the process then proceeds with incremental decompositions and new searches in sub-optimal parts of the trajectory. The aim is to achieve an agent-centred interpretation and progressive learning that fits in the first place the robot's capability, as opposed to a data-centred decomposition analysis. Tests on both geometric and human-generated trajectories reveal that the use of own primitives results in remarkable robustness and generalisation properties of the method. In particular, because trajectories are understood and abstracted by means of agent-optimised primitives, the method has two main features: 1) Reproduced trajectories are general and represent an abstraction of the data. 2) The algorithm is capable of reconstructing highly noisy or corrupted data without pre-processing, thanks to an implicit and emergent noise suppression and feature detection. This study suggests a novel bio-inspired approach to interpreting, learning and reproducing articulated movements and trajectories. Possible applications include drawing, writing, movement generation, object manipulation, and other tasks where the performance requires human-like interpretation and generalisation capabilities.
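
    A toy version of the incremental decomposition idea, assuming a straight point-to-point move as the only available primitive (the paper's primitive set is richer), might look like this:

```python
# Illustrative sketch: fit a demonstrated trajectory roughly with one primitive, then
# recursively split the segment with the largest discrepancy until the reproduction
# error falls below a tolerance. The straight-line primitive is an assumption.
import numpy as np

def primitive(start, end, n):
    """The agent-centred primitive: a straight interpolation between two via-points."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return (1 - t) * start + t * end

def reproduce(demo, tol=0.05):
    """Incremental interpretation of the demonstration in terms of own primitives."""
    segments = [(0, len(demo) - 1)]                     # start with one rough primitive
    while True:
        errors = []
        for a, b in segments:
            approx = primitive(demo[a], demo[b], b - a + 1)
            errors.append(np.max(np.linalg.norm(demo[a:b + 1] - approx, axis=1)))
        worst = int(np.argmax(errors))
        a, b = segments[worst]
        if errors[worst] < tol or b - a < 2:            # accurate enough, or indivisible
            break
        mid = (a + b) // 2                              # split the worst segment at its midpoint
        segments[worst:worst + 1] = [(a, mid), (mid, b)]
    return segments

# noisy demonstrated trajectory (a quarter circle) interpreted with line primitives
theta = np.linspace(0, np.pi / 2, 200)
demo = np.stack([np.cos(theta), np.sin(theta)], axis=1) + 0.01 * np.random.randn(200, 2)
print("via-point segments used:", reproduce(demo))
```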

    GANCCRobot: Generative Adversarial Nets based Chinese Calligraphy Robot

    Robotic calligraphy, as a typical application of robot movement planning, is of great significance for the inheritance and education of calligraphy culture. Existing implementations of such robots often suffer from limited ability in font generation and evaluation, leading to poor diversity of writing styles and poor writing quality. This paper proposes a calligraphic robotic framework based on generative adversarial nets (GAN) to address this limitation. A robot implemented using this framework is able to learn to write fundamental Chinese character strokes with rich diversity and quality close to the human level, without requiring specifically designed evaluation functions, thanks to the employment of the revised GAN. In particular, the type information of the stroke is introduced as condition information, and latent codes are applied to maximize the style quality of the generated strokes. Experimental results demonstrate that the proposed model enables a calligraphic robot to successfully write fundamental Chinese strokes of a given type and style with overall good quality. Although the proposed model was evaluated in this report using calligraphy writing, the underpinning research is readily applicable to many other applications, such as robotic graffiti and character style conversion.
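
    A minimal sketch of the conditioning scheme, with assumed dimensions and an InfoGAN-style auxiliary head standing in for the revised GAN, is given below.

```python
# Illustrative sketch (dimensions and training details are assumptions): the generator
# receives noise, a one-hot stroke-type label, and a continuous latent style code; an
# auxiliary head of the discriminator tries to recover the type and code, which
# encourages the generator to actually use them.
import torch
import torch.nn as nn

IMG, Z, TYPES, CODE = 32, 16, 5, 2          # image size, noise dim, stroke types, style code dim

generator = nn.Sequential(
    nn.Linear(Z + TYPES + CODE, 128), nn.ReLU(),
    nn.Linear(128, IMG * IMG), nn.Tanh())

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(IMG * IMG, 128), nn.ReLU())
        self.real = nn.Linear(128, 1)                 # real/fake logit
        self.aux = nn.Linear(128, TYPES + CODE)       # recovered type logits + style code

    def forward(self, x):
        h = self.body(x)
        return self.real(h), self.aux(h)

disc = Discriminator()

def generate(batch):
    """Draw noise, a random stroke type and a style code, and synthesise stroke images."""
    z = torch.randn(batch, Z)
    types = torch.eye(TYPES)[torch.randint(0, TYPES, (batch,))]
    code = torch.rand(batch, CODE) * 2 - 1
    return generator(torch.cat([z, types, code], dim=1)), types, code

fake, types, code = generate(8)
score, recovered = disc(fake)
# generator objective (one gradient computation shown): fool the discriminator while
# keeping the stroke type and style code recoverable from the generated image
g_loss = nn.functional.binary_cross_entropy_with_logits(score, torch.ones_like(score)) \
       + nn.functional.cross_entropy(recovered[:, :TYPES], types.argmax(dim=1)) \
       + nn.functional.mse_loss(recovered[:, TYPES:], code)
g_loss.backward()
```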

    A Developmental Evolutionary Learning Framework for Robotic Chinese Stroke Writing

    The ability of robots to write Chinese strokes, which is recognized as a sophisticated task, involves complicated kinematic control algorithms. The conventional approaches for robotic writing of Chinese strokes often suffer from limited font generation methods, which limits the ability of robots to perform high-quality writing. This paper instead proposes a developmental evolutionary learning framework that enables a robot to learn to write fundamental Chinese strokes. The framework first considers the learning process of robotic writing as an evolutionary easy-to-difficult procedure. Then, a developmental learning mechanism called “Lift-constraint, act and saturate”, which stems from developmental robotics, is used to determine how the robot learns tasks ranging from simple to difficult by building on the learning results from the easy tasks. The developmental constraints, which include altitude adjustments, the number of mutation points, and the number of stroke trajectory points, determine the learning complexity of robot writing. The developmental algorithm divides the evolutionary procedure into three developmental learning stages. In each stage, the stroke trajectory points gradually increase, while the number of mutation points and adjustment altitudes gradually decrease, allowing the learning difficulties involved in these three stages to be categorized as easy, medium, and difficult. Our robot starts with an easy learning task and then gradually progresses to the medium and difficult tasks. Under various developmental constraint setups in each stage, the robot applies an evolutionary algorithm to handle the basic shapes of the Chinese strokes and eventually acquires the ability to write with good quality. The experimental results demonstrate that the proposed framework allows a calligraphic robot to gradually learn to write five fundamental Chinese strokes and also reveal a developmental pattern similar to that of humans. Compared to an evolutionary algorithm without the developmental mechanism, the proposed framework achieves good writing quality more rapidly.
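
    The staged schedule can be illustrated roughly as follows; the stage parameters, the (1+λ) evolution loop, and the stand-in fitness function are assumptions, not the paper's configuration.

```python
# Schematic sketch of the "lift-constraint, act and saturate" schedule: three
# developmental stages with more trajectory points but fewer mutated points and smaller
# perturbations, each stage seeded with the best solution of the previous, easier one.
import numpy as np

rng = np.random.default_rng(0)
reference = lambda n: np.linspace([0, 0], [1, 0.3], n)      # stand-in target stroke

def fitness(traj):
    return -np.mean(np.linalg.norm(traj - reference(len(traj)), axis=1))

def evolve(traj, mutation_points, amplitude, generations=200, offspring=20):
    """Simple (1+lambda) evolution mutating a limited number of trajectory points."""
    for _ in range(generations):
        children = []
        for _ in range(offspring):
            child = traj.copy()
            idx = rng.choice(len(traj), size=mutation_points, replace=False)
            child[idx] += rng.normal(0, amplitude, size=(mutation_points, 2))
            children.append(child)
        best = max(children, key=fitness)
        if fitness(best) > fitness(traj):
            traj = best
    return traj

# developmental stages: (trajectory points, mutated points, mutation amplitude)
stages = [(5, 3, 0.10),     # easy
          (9, 2, 0.05),     # medium
          (17, 1, 0.02)]    # difficult

traj = rng.normal(0, 0.1, size=(stages[0][0], 2))
for n_points, mut_points, amp in stages:
    # lift the constraint: resample the current solution to the finer resolution
    old_t, new_t = np.linspace(0, 1, len(traj)), np.linspace(0, 1, n_points)
    traj = np.stack([np.interp(new_t, old_t, traj[:, d]) for d in range(2)], axis=1)
    traj = evolve(traj, mut_points, amp)
    print(f"stage with {n_points} points: fitness {fitness(traj):.4f}")
```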

    Biologically inspired robotic perception-action for soft fruit harvesting in vertical growing environments

    Multiple interlinked factors like demographics, migration patterns, and economics are presently leading to a critical shortage of labour available for low-skilled, physically demanding tasks like soft fruit harvesting. This paper presents a biomimetic robotic solution covering the full ‘Perception-Action’ loop, targeting the harvesting of strawberries in a state-of-the-art vertical growing environment. The novelty emerges from both dealing with crop/environment variance and configuring the robot action system to deal with a range of runtime task constraints. Unlike the commonly used deep neural networks, the proposed perception system uses conditional Generative Adversarial Networks trained on synthetic data to identify the ripe fruit. The network can be trained effectively on synthetic data using the image-to-image translation concept, thereby avoiding the tedious work of collecting and labelling a real dataset. Once the harvest-ready fruit is localised using point cloud data generated by a stereo camera, our platform’s action system can coordinate the arm to reach and cut the stem using the Passive Motion Paradigm framework, inspired by studies on the neural control of movement in the brain. Results from field trials for strawberry detection, reaching/cutting the stem of the fruit, and extensions to analysing complex canopy structures and bimanual coordination (searching/picking) are presented. While this article focuses on strawberry harvesting, ongoing research towards adapting the architecture to other crops such as tomatoes and sweet peppers is briefly described.
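
    As a rough illustration of the localisation step, the sketch below back-projects a detected fruit pixel and its stereo disparity to a 3-D reach target using the pinhole model; the camera parameters are assumed values, not the platform's calibration.

```python
# Illustrative sketch: a ripe-fruit detection in the left image plus the stereo
# disparity at that pixel gives a 3-D target in the camera frame, which is then handed
# to the reaching/cutting controller. All camera parameters below are assumptions.
import numpy as np

FX = FY = 600.0          # focal lengths in pixels (assumed)
CX, CY = 320.0, 240.0    # principal point (assumed)
BASELINE = 0.06          # stereo baseline in metres (assumed)

def localise(u, v, disparity):
    """Back-project a detected fruit pixel (u, v) with its disparity to camera coordinates."""
    z = FX * BASELINE / disparity            # depth from disparity
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.array([x, y, z])

def pick(detections):
    """Rank harvest-ready detections (here simply by distance) and select one reach target."""
    targets = [localise(u, v, d) for (u, v, d) in detections]
    return min(targets, key=np.linalg.norm)

# e.g. two ripe strawberries reported by the detector as (pixel_u, pixel_v, disparity)
target = pick([(350, 260, 42.0), (300, 300, 35.0)])
print("reach target in camera frame [m]:", np.round(target, 3))
```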

    Social Cognition for Human-Robot Symbiosis—Challenges and Building Blocks

    The next generation of robot companions or robot working partners will need to satisfy social requirements somewhat similar to the famous laws of robotics envisaged by Isaac Asimov long ago (Asimov, 1942). The necessary technology has almost reached the required level, including sensors and actuators, but the cognitive organization is still in its infancy and is only partially supported by the current understanding of brain cognitive processes. The brain of symbiotic robots will certainly not be a “positronic” replica of the human brain: probably, the greatest part of it will be a set of interacting computational processes running in the cloud. In this article, we review the challenges that must be met in the design of a set of interacting computational processes as building blocks of a cognitive architecture that may give symbiotic capabilities to the collaborative robots of the next decades: (1) an animated body-schema; (2) an imitation machinery; (3) a motor intentions machinery; (4) a set of physical interaction mechanisms; and (5) a shared memory system for incremental symbiotic development. We would like to stress that our approach is totally non-hierarchical: the five building blocks of the shared cognitive architecture are fully bi-directionally connected. For example, imitation and intentional processes require the “services” of the animated body schema which, on the other hand, can run its simulations if appropriately prompted by imitation and/or intention, with or without physical interaction. Successful experiences can leave a trace in the shared memory system, and chunks of memory fragments may compete to participate in novel cooperative actions. And so on and so forth. At the heart of the system is lifelong training and learning but, unlike conventional learning paradigms in neural networks, where learning is passively imposed by an external agent, in symbiotic robots there is an element of free choice of what is worth learning, driven by the interaction between the robot and the human partner. The proposed set of building blocks is certainly a rough approximation of what is needed by symbiotic robots, but we believe it is a useful starting point for building a computational framework.
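
    A toy structural sketch of the non-hierarchical organisation (entirely illustrative; the real building blocks are far richer computational processes) could look like this:

```python
# Toy sketch of five fully bi-directionally connected building blocks, each able to
# request "services" from any other, plus a shared memory that keeps traces of
# successful cooperative episodes. Names and interfaces are assumptions.
from dataclasses import dataclass, field

@dataclass
class Block:
    name: str
    peers: dict = field(default_factory=dict)        # bi-directional links to every other block

    def connect(self, other):
        self.peers[other.name] = other
        other.peers[self.name] = self

    def request(self, peer, service, *args):
        """Ask another block for one of its 'services'."""
        return getattr(self.peers[peer], service)(*args)

class BodySchema(Block):
    def simulate_reach(self, target):
        return f"internal simulation of reaching {target}"

class SharedMemory(Block):
    def __init__(self, name):
        super().__init__(name)
        self.episodes = []                            # traces of successful cooperative actions

    def store(self, trace):
        self.episodes.append(trace)

blocks = [BodySchema("body_schema"), Block("imitation"), Block("intentions"),
          Block("interaction"), SharedMemory("memory")]
for i, a in enumerate(blocks):                        # fully bi-directional connectivity
    for b in blocks[i + 1:]:
        a.connect(b)

# e.g. the imitation block prompts the animated body schema to run a simulation,
# and the successful trace is left in the shared memory for later reuse
trace = blocks[1].request("body_schema", "simulate_reach", "the cup")
blocks[1].request("memory", "store", trace)
```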