4,018 research outputs found

    Towards a complete multiple-mechanism account of predictive language processing [Commentary on Pickering & Garrod]

    Although we agree with Pickering & Garrod (P&G) that prediction-by-simulation and prediction-by-association are important mechanisms of anticipatory language processing, this commentary suggests that they: (1) overlook other potential mechanisms that might underlie prediction in language processing, (2) overestimate the importance of prediction-by-association in early childhood, and (3) underestimate the complexity and significance of several factors that might mediate prediction during language processing.

    An integrated theory of language production and comprehension

    Currently, production and comprehension are regarded as quite distinct in accounts of language processing. In rejecting this dichotomy, we instead assert that producing and understanding are interwoven, and that this interweaving is what enables people to predict themselves and each other. We start by noting that production and comprehension are forms of action and action perception. We then consider the evidence for interweaving in action, action perception, and joint action, and explain such evidence in terms of prediction. Specifically, we assume that actors construct forward models of their actions before they execute those actions, and that perceivers of others' actions covertly imitate those actions, then construct forward models of those actions. We use these accounts of action, action perception, and joint action to develop accounts of production, comprehension, and interactive language. Importantly, they incorporate well-defined levels of linguistic representation (such as semantics, syntax, and phonology). We show (a) how speakers and comprehenders use covert imitation and forward modeling to make predictions at these levels of representation, (b) how they interweave production and comprehension processes, and (c) how they use these predictions to monitor the upcoming utterances. We show how these accounts explain a range of behavioral and neuroscientific data on language processing and discuss some of the implications of our proposal.
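    For readers who think in computational terms, the forward-model-plus-comparison idea can be caricatured in a few lines of Python; this is our generic control-style illustration, not P&G's own formalism:

        from typing import Callable

        def monitor_production(command: str,
                               forward_model: Callable[[str], str],
                               produce: Callable[[str], str]) -> bool:
            """Predict the percept a command should cause (from an efference
            copy), then compare it with the percept actually produced."""
            predicted = forward_model(command)
            actual = produce(command)
            return predicted == actual  # a mismatch would trigger monitoring/repair

        # toy usage at a single (phonological) level of representation
        agrees = monitor_production("cat",
                                    forward_model=lambda w: "k ae t",
                                    produce=lambda w: "k ae t")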

    Finding Rhythm in Speech: A Response to Cummins

    This paper attempts to address three critical questions left unanswered by Cummins' review: are rhythm and entrainment physical, perceptual, or social phenomena; what are the underlying mechanisms; and what is their role in behaviour such as speech and music? These issues are addressed from the perspective of an engineer/computer-scientist/roboticist, for whom modelling such behaviours within a computational framework not only provides an empirical methodology for validating theoretical claims, but also facilitates the construction of artificial devices that are capable of exhibiting/exploiting those behaviours in the context of human-machine interaction. The paper draws on insights from a range of different perspectives and attempts to weave them together within a coherent theoretical framework. It is concluded that (i) rhythm and entrainment are phenomena that emerge naturally from the structural coupling within and between even simple systems, (ii) living systems have evolved very effective mechanisms for managing such behaviours for intrinsic and extrinsic gains, and (iii) the fields of energetics and information theory provide the appropriate tools for analysing and characterising such behaviour within a general theoretical framework. It is hoped that these insights will inspire future cross-disciplinary research in these areas and lead to a deeper understanding of these fundamental behaviours.
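    Conclusion (i) lends itself to a concrete minimal model: two coupled phase oscillators in the standard Kuramoto form. The sketch below (our choice of model and parameters, purely illustrative) shows entrainment appearing as soon as the coupling is strong enough relative to the frequency mismatch:

        import numpy as np

        def phase_gap(w1, w2, k, steps=10_000, dt=0.001):
            """Integrate two Kuramoto oscillators; return the final phase gap."""
            th1, th2 = 0.0, 1.0
            for _ in range(steps):
                d1 = w1 + k * np.sin(th2 - th1)
                d2 = w2 + k * np.sin(th1 - th2)
                th1, th2 = th1 + dt * d1, th2 + dt * d2
            return (th2 - th1) % (2 * np.pi)

        print(phase_gap(9.9, 10.1, k=0.5))  # coupled: the gap locks to a constant
        print(phase_gap(9.9, 10.1, k=0.0))  # uncoupled: the gap drifts freely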

    Sensorimotor exploration: constraint awareness and social reinforcement in early vocal development

    This research is motivated by the benefits that knowledge regarding early development in infants may provide to different fields of science. In particular, early sensorimotor exploration behaviors are studied in the framework of developmental robotics. The main objective is to understand the role of motor constraint awareness and imitative behaviors during sensorimotor exploration. Particular emphasis is placed on prelinguistic vocal development, because during this stage infants start to master the motor systems that will later allow them to pronounce their first words. Previous works have demonstrated that goal-directed, intrinsically motivated sensorimotor exploration is an essential element of sensorimotor control learning. Moreover, evidence coming from the biological sciences strongly suggests that knowledge acquisition is shaped by the environment in which an agent is embedded and by the embodiment of the agent itself, including developmental processes that shape what can be learned and when. In this dissertation, we first provide a collection of theoretical evidence that supports the relevance of our study. Starting from concepts of the cognitive and developmental sciences, we arrive at the conclusion that spoken language, i.e., early vocal development, must be studied as an embodied and situated phenomenon. A synthetic approach allows us to use robots and realistic simulators as artifacts to study natural cognitive phenomena. In this work, we adopt a toy example to test our cognitive architectures, and a speech synthesizer that mimics the mechanisms by which humans produce speech. Next, we introduce a mechanism to endow embodied agents with motor constraint awareness. Intrinsic motivation has been studied as an important element to explain the emergence of structured developmental stages during early vocal development. However, previous studies failed to acknowledge the constraints imposed by embodiment and situatedness at the sensory, motor, cognitive, and social levels. We assume that, at the onset of sensorimotor exploratory behaviors, motor constraints are unknown to the developmental agent. Thus, the agent must discover and learn during exploration what those motor constraints are. The agent is endowed with a somesthetic system based on tactile information. This system generates a sensor signal indicating whether a motor configuration was reached or not. This information is later used to create a somesthetic model to predict constraint violations. Finally, we propose to include social reinforcement during exploration. Some works studying early vocal development have shown that environmental speech shapes the sensory space explored during babbling. More generally, imitative behaviors have been demonstrated to be crucial for early development in children, as they constrain the search space during sensorimotor exploration. Therefore, based on early interactions of infants and caregivers, we propose an imitative mechanism to reinforce intrinsically motivated sensorimotor exploration with sensory units relevant to communication. Thus, we modified the constraint-aware sensorimotor exploration architecture to include a social instructor, expert in sensor units relevant to communication, which interacts with the developmental agent. Interaction occurs when the learner's production is similar enough to one relevant to communication. In that case, the instructor perceives this similarity and reformulates with the relevant sensor unit.
When the learner perceives an utterance by the instructor, it attempts to imitate it. In general, our results suggest that somesthetic senses and social reinforcement contribute to achieving better results during intrinsically motivated exploration: exploration becomes less redundant, exploration and evaluation errors decrease, and a clearer picture of developmental transitions emerges.
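    As a rough illustration of the exploration loop just summarized, a minimal Python sketch follows; every name and update rule in it is our own expository assumption, not code from the dissertation:

        def exploration_step(agent, instructor, constraint_model):
            # intrinsically motivated, goal-directed exploration
            goal = agent.sample_intrinsic_goal()
            motor = agent.inverse_model(goal)
            # somesthetic model: skip configurations predicted to violate constraints
            if constraint_model.predicts_violation(motor):
                return
            sensory, violated = agent.execute(motor)   # tactile signal flags violations
            constraint_model.update(motor, violated)
            agent.learn(motor, sensory)
            # social reinforcement: the instructor reformulates only when the
            # production is similar enough to a communication-relevant unit
            reformulation = instructor.maybe_reformulate(sensory)
            if reformulation is not None:
                agent.imitate(reformulation)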

    Integration of Action and Language Knowledge: A Roadmap for Developmental Robotics

    This position paper proposes that the study of embodied cognitive agents, such as humanoid robots, can advance our understanding of the cognitive development of complex sensorimotor, linguistic, and social learning skills. This in turn will benefit the design of cognitive robots capable of learning to handle and manipulate objects and tools autonomously, to cooperate and communicate with other robots and humans, and to adapt their abilities to changing internal, environmental, and social conditions. Four key areas of research challenges are discussed, specifically for the issues related to the understanding of: 1) how agents learn and represent compositional actions; 2) how agents learn and represent compositional lexica; 3) the dynamics of social interaction and learning; and 4) how compositional action and language representations are integrated to bootstrap the cognitive system. The review of specific issues and progress in these areas is then translated into a practical roadmap based on a series of milestones. These milestones provide a possible set of cognitive robotics goals and test scenarios, thus acting as a research roadmap for future work on cognitive developmental robotics.

    Vocal Interactivity in-and-between Humans, Animals, and Robots

    Almost all animals exploit vocal signals for a range of ecologically motivated purposes: detecting predators/prey and marking territory, expressing emotions, establishing social relations, and sharing information. Whether it is a bird raising an alarm, a whale calling to potential partners, a dog responding to human commands, a parent reading a story with a child, or a businessperson accessing stock prices using Siri, vocalization provides a valuable communication channel through which behavior may be coordinated and controlled, and information may be distributed and acquired. Indeed, the ubiquity of vocal interaction has led to research across an extremely diverse array of fields, from assessing animal welfare, to understanding the precursors of human language, to developing voice-based human–machine interaction. Opportunities for cross-fertilization between these fields abound; for example, using artificial cognitive agents to investigate contemporary theories of language grounding, using machine learning to analyze different habitats, or adding vocal expressivity to the next generation of language-enabled autonomous social agents. However, much of the research is conducted within well-defined disciplinary boundaries, and many fundamental issues remain. This paper attempts to redress the balance by presenting a comparative review of vocal interaction within-and-between humans, animals, and artificial agents (such as robots), and it identifies a rich set of open research questions that may benefit from an interdisciplinary analysis.

    Bridging the gap between emotion and joint action

    Our daily human life is filled with a myriad of joint action moments, be it children playing, adults working together (e.g., team sports), or strangers navigating through a crowd. Joint action brings individuals (and the embodiment of their emotions) together, in space and in time. Yet little is known about how individual emotions propagate through embodied presence in a group, and how joint action changes individual emotion. In fact, the multi-agent component is largely missing from neuroscience-based approaches to emotion, and, conversely, joint action research has not yet found a way to include emotion as one of the key parameters for modeling socio-motor interaction. In this review, we first identify the gap and then stockpile evidence showing strong entanglement between emotion and acting together from various branches of the sciences. We propose an integrative approach to bridge the gap, highlight five research avenues to do so in behavioral neuroscience and the digital sciences, and address some of the key challenges in the area faced by modern societies.

    Vocal accommodation in human-computer interaction: modeling and integration into spoken dialogue systems

    With the rapidly increasing usage of voice-activated devices worldwide, verbal communication with computers is steadily becoming more common. Although speech is the principal natural manner of human communication, it is still challenging for computers, and users have grown accustomed to adjusting their speaking style accordingly. In human communication, such adjustments occur naturally, and typically unconsciously, during an exchange, to control the social distance between the interlocutors and to improve the conversation's efficiency. This phenomenon is called accommodation, and it occurs in various modalities of human communication, like hand gestures, facial expressions, eye gaze, lexical and grammatical choices, and others. Vocal accommodation deals with phonetic-level changes occurring in segmental and suprasegmental features. A decrease in the difference between the speakers' feature realizations results in convergence, while an increasing distance leads to divergence. The lack of such mutual adjustments, made naturally by humans, in computers' speech creates a gap between human-human and human-computer interactions. Moreover, voice-activated systems currently speak in exactly the same manner to all users, regardless of their speech characteristics or realizations of specific features. Detecting phonetic variations and generating adaptive speech output would enhance user personalization, offer more human-like communication, and ultimately improve the overall interaction experience. Thus, investigating these aspects of accommodation will help to understand and improve human-computer interaction. This thesis provides a comprehensive overview of the building blocks required for a roadmap toward the integration of accommodation capabilities into spoken dialogue systems. These include conducting human-human and human-computer interaction experiments to examine the differences in vocal behaviors, approaches for modeling these empirical findings, methods for introducing phonetic variations in synthesized speech, and a way to combine all these components into an accommodative system. While each component is a wide research field by itself, they depend on each other and hence should be considered jointly. The overarching goal of this thesis is therefore not only to show how each of these aspects can be further developed, but also to demonstrate and motivate the connections between them. A special emphasis is put throughout the thesis on the importance of the temporal aspect of accommodation. Humans constantly change their speech over the course of a conversation. Therefore, accommodation processes should be treated as continuous, dynamic phenomena. Measuring differences at a few discrete points, e.g., the beginning and end of an interaction, may leave many accommodation events undiscovered or overly smoothed. To justify the effort of introducing accommodation in computers, it should first be shown that humans display phonetic adjustments when talking to a computer, as they do with a human being. As there is no definitive metric for measuring accommodation and evaluating its quality, it is important to empirically study human productions to later use as references for possible behaviors. In this work, this investigation encompasses different experimental configurations to achieve a better picture of accommodation effects. First, vocal accommodation was inspected where it naturally occurs, namely in spontaneous human-human conversations.
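    In the simplest terms, convergence and divergence can be read off the distance between the speakers' realizations of a feature at two points in time; a toy calculation with invented pitch values:

        # mean pitch (Hz) of two speakers, early vs. late in an interaction
        a_early, b_early = 220.0, 180.0
        a_late,  b_late  = 210.0, 195.0
        delta = abs(a_late - b_late) - abs(a_early - b_early)
        label = "convergence" if delta < 0 else "divergence" if delta > 0 else "maintenance"
        # here |210 - 195| = 15 < |220 - 180| = 40, so the pair converged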
For this purpose, a corpus of real-world sales conversations, each with a different representative-prospect pair, was collected and analyzed. These conversations offer a glimpse into accommodation effects in authentic, unscripted interactions, with the shared goal of negotiating a deal on the one hand, and each side's individual aim of getting the best terms on the other. The conversations were analyzed using cross-correlation and time-series techniques to capture the change dynamics over time. It was found that successful conversations are distinguishable from failed ones by multiple measures. Furthermore, the sales representatives proved to be better at leading the vocal changes, i.e., at making the prospect follow their speech style rather than the other way around. They also showed a stronger tendency to take that lead at an earlier stage, all the more so in successful conversations. The fact that trained speakers accommodate more, and that this improves their performance, fits the anecdotal best practices of sales experts, which are now also supported scientifically. Following these results, the next experiment came closer to the final goal of this work and investigated vocal accommodation effects in human-computer interaction. This was done via a shadowing experiment, which offers a controlled setting for examining phonetic variations. As spoken dialogue systems with the accommodation capabilities this work aims to achieve do not exist yet, a simulated system was used to introduce these changes to the participants, who believed they were helping to test a language-learning tutoring system. After their preferences concerning three segmental phonetic features were determined, participants listened to either natural or synthesized voices of male and female speakers, which produced the participants' dispreferred variation of the aforementioned features. Accommodation occurred in all cases, but the natural voices triggered stronger effects. Nevertheless, it can be concluded that participants accommodated toward synthetic voices as well, which means that humans apply social mechanisms also when speaking with computer-based interlocutors. The shadowing paradigm was also utilized to test whether accommodation is a phenomenon associated only with speech or with other vocal productions as well. To that end, accommodation in the singing of familiar and novel music was examined. Interestingly, accommodation was found in both cases, though in different ways. While participants seemed to use the familiar piece merely as a reference for singing more accurately, the novel piece became the target of complete replication. For example, one difference was that mostly pitch corrections were introduced in the former case, while in the latter, key and rhythmic patterns were adopted as well. Some of those findings were expected, and they show that people's more salient features are also harder to modify through external auditory influence. Lastly, a multiparty experiment with spontaneous human-human-computer interactions was carried out to compare accommodation in human-directed and computer-directed speech. The participants solved tasks for which they needed to talk both with a confederate and with an agent. This allows a direct comparison of their speech based on the addressee within the same conversation, which had not been done before.
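    The kind of lagged cross-correlation used for the lead/follow analysis can be sketched as follows; this is a simplified stand-in with invented data, not the thesis code:

        import numpy as np

        def lagged_corr(a: np.ndarray, b: np.ndarray, max_lag: int) -> dict:
            """Pearson correlation between a(t + lag) and b(t) for each lag;
            a peak at lag > 0 means a echoes b with a delay (b leads)."""
            corrs = {}
            for lag in range(-max_lag, max_lag + 1):
                if lag > 0:
                    x, y = a[lag:], b[:-lag]
                elif lag < 0:
                    x, y = a[:lag], b[-lag:]
                else:
                    x, y = a, b
                if len(x) > 2:
                    corrs[lag] = float(np.corrcoef(x, y)[0, 1])
            return corrs

        # invented per-turn pitch changes: the prospect trails the representative
        rep_d      = np.diff([180., 178., 175., 174., 172., 171., 170.])
        prospect_d = np.diff([190., 189., 187., 184., 183., 181., 180.])
        corrs = lagged_corr(prospect_d, rep_d, max_lag=2)
        lead_lag = max(corrs, key=corrs.get)  # > 0 here: the representative leads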
Results show that some participants' vocal behavior changed similarly when talking to the confederate and to the agent, while others' speech varied only with the confederate. Further analysis found that the greatest factor in this difference was the order in which the participants talked with the interlocutors. Apparently, those who first talked to the agent alone saw it more as a social actor in the conversation, while those who interacted with it after talking to the confederate treated it more as a means to achieve a goal, and thus behaved differently with it. In the latter case, the variations in the human-directed speech were much more prominent. Differences were also found between the analyzed features, but the task type did not influence the degree of accommodation effects. The results of these experiments lead to the conclusion that vocal accommodation does occur in human-computer interactions, even if often to a lesser degree. Having answered the question of whether people accommodate to computer-based interlocutors, the next step is to describe accommodative behaviors in a computer-processable manner. Two approaches are proposed here: a computational one and a statistical one. The computational model aims to capture the presumed cognitive process associated with accommodation in humans. This comprises various steps, such as detecting the variable feature's sound, adding instances of it to the feature's mental memory, and determining how much the sound will change while taking into account both its current representation and the external input. Due to its sequential nature, this model was implemented as a pipeline. Each of the pipeline's five steps corresponds to a specific part of the cognitive process and can have one or more parameters to control its output (e.g., the size of the feature's memory or the accommodation pace). Using these parameters, precise accommodative behaviors can be crafted, with expert knowledge motivating the chosen parameter values. These advantages make this approach suitable for experimentation with pre-defined, deterministic behaviors where each step can be changed individually. Ultimately, this approach makes a system vocally responsive to users' speech input. The second approach allows more evolved behaviors, by defining different core behaviors and adding non-deterministic variations on top of them. This resembles human behavioral patterns, as each person has a base way of accommodating (or not accommodating), which may change based on the specific circumstances. This approach offers a data-driven, statistical way to extract accommodation behaviors from a given collection of interactions. First, the target feature's values for each speaker in an interaction are converted into continuous interpolated lines by drawing one sample from the posterior distribution of a Gaussian process conditioned on the given values. Then, the gradients of these lines, which represent rates of mutual change, are used to define discrete levels of change based on their distribution. Finally, each level is assigned a symbol, which ultimately creates a symbol-sequence representation for each interaction. The sequences are clustered so that each cluster stands for a type of behavior. The sequences of a cluster can then be used to calculate n-gram probabilities that enable the generation of new sequences of the captured behavior. The specific output value is sampled from the range corresponding to the generated symbol.
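    A condensed sketch of that statistical pipeline, with invented data and our own parameter choices (not the thesis implementation):

        import numpy as np
        from collections import Counter
        from sklearn.gaussian_process import GaussianProcessRegressor

        def to_symbol_sequence(t, values, n_levels=5):
            """Interpolate a feature track with a GP, then discretize the
            gradients of one posterior sample into change-level symbols."""
            gp = GaussianProcessRegressor().fit(t.reshape(-1, 1), values)
            dense_t = np.linspace(t.min(), t.max(), 200)
            sample = gp.sample_y(dense_t.reshape(-1, 1), random_state=0).ravel()
            grads = np.gradient(sample, dense_t)  # rates of mutual change
            # quantile bins over the gradient distribution define the levels
            edges = np.quantile(grads, np.linspace(0, 1, n_levels + 1)[1:-1])
            return "".join("abcde"[i] for i in np.digitize(grads, edges))

        seq = to_symbol_sequence(np.arange(8.0),
                                 np.array([1.0, 1.2, 1.1, 1.4, 1.3, 1.6, 1.5, 1.8]))
        # n-gram counts per behavior cluster can then generate new sequences
        bigrams = Counter(seq[i:i + 2] for i in range(len(seq) - 1))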
With this approach, accommodation behaviors are extracted directly from data, as opposed to manually crafting them. However, it is harder to describe what exactly these behaviors represent and to motivate the use of one of them over another. To bridge the gap between these two approaches, it is also discussed how they can be combined to benefit from the advantages of both. Furthermore, to generate more structured behaviors, a hierarchy of accommodation complexity levels is suggested here, from a direct adoption of users' realizations, via specified responsiveness, up to independent core behaviors with non-deterministic variational productions. Besides a way to track and represent vocal changes, an accommodative system also needs a text-to-speech component that is able to realize those changes in the system's speech output. Speech synthesis models are typically trained once on data with certain characteristics and do not change afterward. This prevents such models from introducing any variation in specific sounds and other phonetic features. Two methods for directly modifying such features are explored here. The first is based on signal modifications applied to the output signal after it was generated by the system. The processing is done between the timestamps of the target features and uses pre-defined scripts that modify the signal to achieve the desired values. This method is more suitable for continuous features like vowel quality, especially in the case of subtle changes that do not necessarily lead to a categorical sound change. The second method aims to capture phonetic variations in the training data. To that end, a training corpus with phonemic representations is used, as opposed to the regular graphemic representations. This way, the model can learn more direct relations between phonemes and sound instead of between surface forms and sound, which, depending on the language, might be more complex and depend on the surrounding letters. The target variations themselves do not need to be explicitly present in the training data, as long as the different sounds are naturally distinguishable. At generation time, the current state of the target feature determines the phoneme to use for generating the desired sound. This method is suitable for categorical changes, especially for contrasts that naturally exist in the language. While both methods have certain limitations, they provide a proof of concept for the idea that spoken dialogue systems may phonetically adapt their speech output in real time and without re-training their text-to-speech models. To combine the behavior definitions and the speech manipulations, a system is required that connects these elements to create a complete accommodation capability. The architecture suggested here extends the standard spoken dialogue system with an additional module, which receives the transcribed speech signal from the speech recognition component without influencing the input to the language understanding component. While the language understanding component uses only the textual transcription to determine the user's intention, the added component processes the raw signal along with its phonetic transcription. In this extended architecture, the accommodation model is activated in the added module and the information required for speech manipulation is sent to the text-to-speech component. However, the text-to-speech component now has two inputs, viz.
the content of the system's response coming from the language generation component and the states of the defined target features from the added component. An implementation of a web-based system with this architecture is introduced here, and its functionality is showcased by demonstrating how it can be used to conduct a shadowing experiment automatically. This has two main advantages. First, since the system recognizes the participants' phonetic variations and automatically selects the appropriate variation to use in its response, the experimenter saves time and avoids manual annotation errors. The experimenter also automatically gains additional information, like exact timestamps of utterances, real-time visualization of the interlocutors' productions, and the possibility to replay and analyze the interaction after the experiment is finished. The second advantage is scalability. Multiple instances of the system can run on a server and be accessed by multiple clients at the same time. This not only saves the time and logistics of bringing participants into a lab, but also allows running the experiment with different configurations (e.g., other parameter values or target features) in a controlled and reproducible way. This completes a full cycle, from examining human behaviors to integrating accommodation capabilities. Though each part of it can undoubtedly be further investigated, the emphasis here is on how they depend on and connect to each other. Measuring feature changes without showing how they can be modeled, or achieving flexible speech synthesis without considering the desired final output, might not lead to the final goal of introducing accommodation capabilities into computers. Treating accommodation in human-computer interaction as one large process rather than as isolated sub-problems lays the ground for more comprehensive and complete solutions in the future.
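    To recapitulate the extended architecture in schematic form, the following Python sketch uses our own stand-in names (not the system's actual API): an added accommodation module taps the recognizer's output in parallel to language understanding, and the text-to-speech component receives the two inputs named above:

        from dataclasses import dataclass, field

        @dataclass
        class AccommodationModule:
            """Added module: taps the recognizer's output in parallel to NLU."""
            feature_states: dict = field(default_factory=dict)

            def observe(self, raw_signal, phonetic_transcription):
                # update tracked target-feature states, e.g., with the pipeline
                # or the statistical model described above (stubbed here)
                self.feature_states.update(phonetic_transcription)
                return self.feature_states

        def dialogue_turn(asr, nlu, dm, nlg, tts, accommodation, audio_in):
            raw, text, phones = asr(audio_in)  # signal, transcript, phonetics
            intent = nlu(text)                 # NLU input is unchanged
            states = accommodation.observe(raw, phones)
            response = nlg(dm(intent))
            # the TTS now has two inputs: response text and feature states
            return tts(response, feature_states=states)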