
    Evolution of Grasping Behaviour in Anthropomorphic Robotic Arms with Embodied Neural Controllers

    The work reported in this thesis focuses on synthesising neural controllers for anthropomorphic robots that manipulate objects, through an automatic design process based on artificial evolution. The use of Evolutionary Robotics makes it possible to reduce the characteristics and parameters specified by the designer to a minimum, so that the robot's skills evolve as it interacts with the environment. The primary objective of these experiments is to investigate whether neural controllers that regulate the state of the motors on the basis of current and previously experienced sensor states (i.e. without relying on an inverse model) can enable the robots to solve such complex tasks. Another objective is to investigate whether the Evolutionary Robotics approach can be successfully applied to scenarios significantly more complex than those to which it is typically applied (in terms of the complexity of the robot's morphology, the size of the neural controller, and the complexity of the task). The obtained results indicate that skills such as reaching, grasping, and discriminating among objects can be accomplished without learning precise inverse internal models of the arm/hand structure. This supports the hypothesis that the human central nervous system (CNS) does not necessarily have internal models of the limbs (without excluding the possibility that it possesses such models for other purposes), but can act by shifting the equilibrium points/cycles of the underlying musculoskeletal system. Consequently, the resulting controllers for such fundamental skills are less complex, and more complex behaviours become easier to design because the underlying controller of the arm/hand structure is simpler. Moreover, the obtained results also show how evolved robots exploit sensory-motor coordination in order to accomplish their tasks.
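    The automatic design process described above follows the general evolutionary-robotics loop: randomly vary the controller's free parameters, evaluate the resulting behaviour, and retain or discard the variation according to its effect. A minimal sketch of this loop is given below as a (1+1) evolution strategy; the fitness function, parameter count, and mutation settings are illustrative placeholders, not values from the thesis (a real setup would run the robot in simulation and score reaching/grasping performance).

    ```python
    import random

    def evaluate(weights):
        # Placeholder fitness: negative squared distance of the weight vector
        # from a hypothetical target; stands in for running the robot and
        # scoring the behaviour it exhibits in the environment.
        target = [0.5] * len(weights)
        return -sum((w - t) ** 2 for w, t in zip(weights, target))

    def evolve(n_params=10, generations=200, sigma=0.1, seed=0):
        rng = random.Random(seed)
        # Initial controller: random free parameters.
        parent = [rng.uniform(-1, 1) for _ in range(n_params)]
        parent_fit = evaluate(parent)
        for _ in range(generations):
            # Random variation of the controller's free parameters...
            child = [w + rng.gauss(0, sigma) for w in parent]
            child_fit = evaluate(child)
            # ...retained or discarded on the basis of the behaviour produced.
            if child_fit >= parent_fit:
                parent, parent_fit = child, child_fit
        return parent, parent_fit
    ```

    Because the variation is only retained when it does not decrease fitness, the controller's performance is non-decreasing over generations; real evolutionary-robotics experiments typically use populations rather than a single parent, but the retain-or-discard logic is the same.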

    Children’s Reading of Sublexical Units in Years Three to Five: A Combined Analysis of Eye-Movements and Voice Recording

    Purpose: Children progress from making grapheme–phoneme connections to grapho-syllabic connections, and then to whole-word connections, during reading development (Ehri, 2005a). More is known about the development of grapheme–phoneme connections than about grapho-syllabic connections. We therefore explored the trajectory of syllable use in English developing readers during oral reading. Method: Fifty-one English-speaking children (mean age: 8.9 years; 55% female; 88% monolingual) in year groups three, four, and five read aloud sentences with an embedded target word while their eye movements and voices were recorded. The targets contained six letters and were either one or two syllables long. Results: Children in year five had shorter gaze durations, shorter articulation durations, and a larger spatial eye-voice span (EVS) than children in year four. Children in years three and four did not differ significantly on these measures. A syllable-number effect was found for gaze duration but not for articulation duration or spatial EVS. Interestingly, one-syllable words took longer to process than two-syllable words, suggesting that more syllables do not always signify greater processing difficulty. Conclusion: Overall, children are sensitive to sublexical reading units; however, owing to sample and stimuli limitations, these findings should be interpreted with caution and further research conducted.

    The Role of Prosodic Stress and Speech Perturbation on the Temporal Synchronization of Speech and Deictic Gestures

    Gestures and speech converge during spoken language production. Although the temporal relationship between gestures and speech is thought to depend upon factors such as prosodic stress and word onset, the effects of controlled alterations in the speech signal upon the degree of synchrony between manual gestures and speech are uncertain. Thus, the precise nature of the interactive mechanism of speech-gesture production, or lack thereof, is not agreed upon or even frequently postulated. In Experiment 1, syllable position and contrastive stress were manipulated during sentence production to investigate the synchronization of speech and pointing gestures. Experiment 2 additionally investigated the temporal relationship of speech and pointing gestures when speech is perturbed with delayed auditory feedback (DAF). Comparisons between the time of the gesture apex and the vowel midpoint (GA-VM) were made for each condition in both Experiment 1 and Experiment 2. Additional comparisons of the interval between gesture launch midpoint and vowel midpoint (GLM-VM), total gesture time, gesture launch time, and gesture return time were made for Experiment 2. The results of the first experiment indicated that gestures were more synchronized with first-position syllables and neutral syllables, as measured by GA-VM intervals. The first-position syllable effect was also found in the second experiment. However, the results of Experiment 2 supported an effect of contrastive pitch accent: GLM-VM was shorter for first-position targets and accented syllables. In addition, gesture launch times and total gesture times were longer for contrastively pitch-accented syllables, especially in the second position of words. Contrary to the predictions, significantly longer GA-VM and GLM-VM intervals were observed when individuals responded under DAF. Vowel and sentence durations increased both under DAF and when a contrastively accented syllable was produced. Vowels were longest for accented, second-position syllables. These findings provide evidence that the timing of gesture is adjusted based upon manipulations of the speech stream. Entrainment of the speech and gesture systems is offered as a potential mechanism explaining the observed effects.

    Simulation Tools for the Study of the Interaction between Communication and Action in Cognitive Robots

    In this thesis I report the development of FARSA (Framework for Autonomous Robotics Simulation and Analysis), a simulation tool for the study of the interaction between language and action in cognitive robots and, more generally, for experiments in embodied cognitive science. Before presenting the tool, I describe a series of experiments involving simulated humanoid robots that acquire their behavioural and language skills autonomously through a trial-and-error adaptive process, in which random variations of the free parameters of the robots' controller are retained or discarded on the basis of their effect on the overall behaviour exhibited by the robot in interaction with the environment. More specifically, the first series of experiments shows how the availability of linguistic stimuli provided by a caretaker, indicating the elementary actions that need to be carried out in order to accomplish a certain complex action, facilitates the acquisition of the required behavioural capacity. The second series of experiments shows how a robot trained to comprehend a set of command phrases by executing the corresponding behaviour can generalize its knowledge, comprehending new, never-experienced sentences and producing new appropriate actions. Together with their scientific relevance, these experiments provide a series of requirements that were taken into account during the development of FARSA. The objective of this project is to reduce the complexity barrier that currently discourages some researchers interested in the study of behaviour and cognition from initiating experimental activity in this area. FARSA is the only available tool that provides an integrated framework for carrying out experiments of this type, i.e. the only tool that provides ready-to-use integrated components for defining the characteristics of the robots and the environment, of the robots' controller, and of the adaptive process. Overall, this enables users to quickly set up experiments, including complex ones, and to start collecting results quickly.

    The Mechanics of Embodiment: A Dialogue on Embodiment and Computational Modeling

    Embodied theories are increasingly challenging traditional views of cognition by arguing that the conceptual representations constituting our knowledge are grounded in sensory and motor experiences, and processed at this sensorimotor level, rather than being represented and processed abstractly in an amodal conceptual system. Given the established empirical foundation, and the relatively underspecified theories to date, many researchers are extremely interested in embodied cognition but are clamouring for more mechanistic implementations. What is needed at this stage is a push toward explicit computational models that implement sensory-motor grounding as intrinsic to cognitive processes. In this article, six authors from varying backgrounds and approaches address issues concerning the construction of embodied computational models, and illustrate what they view as the critical current and next steps toward mechanistic theories of embodiment. The first part has the form of a dialogue between two fictional characters: Ernest, the "experimenter", and Mary, the "computational modeller". The dialogue consists of an interactive sequence of questions, requests for clarification, challenges, and (tentative) answers, and touches on the most important aspects of grounded theories that should inform computational modeling and, conversely, the impact that computational modeling could have on embodied theories. The second part of the article discusses the most important open challenges for embodied computational modelling.

    Phonological and orthographic processing in deaf readers during recognition of written and fingerspelled words in Spanish and English

    The role of phonological and orthographic access during word recognition, as well as its developmental trajectory in deaf readers, is still a matter of debate. This thesis examined how phonological and orthographic information is used during written and fingerspelled word recognition by three groups of deaf readers: 1) adult readers of English, 2) adult readers of Spanish, and 3) young readers of Spanish. I also investigated whether the size of the orthographic and phonological effects was related to reading skill and other related variables: vocabulary, phonological awareness, speechreading, and fingerspelling abilities. A sandwich masked priming paradigm was used to assess automatic phonological (pseudohomophone priming; Experiments 1–3) and orthographic (transposed-letter priming; Experiments 4–6) effects in all groups during recognition of single written words. To examine fingerspelling processing, pseudohomophone (Experiments 7–9) and transposed-letter (Experiments 10–12) effects were examined in lexical decision tasks with fingerspelled video stimuli. Phonological priming effects were found for adult deaf readers of English. Interestingly, among deaf readers of Spanish, only young readers with a small vocabulary showed phonological priming. Conversely, orthographic masked priming was found in adult deaf readers of English and Spanish, as well as in young deaf readers with a large vocabulary. Reading ability correlated only with the orthographic priming effect (in accuracy) in the adult deaf readers of English. Fingerspelled pseudohomophones took longer than control pseudowords to reject as words for the adult deaf readers of English and the young deaf readers of Spanish with a small vocabulary, suggesting sensitivity to speech phonology in these groups. The findings suggest greater reliance on phonology by less skilled deaf readers of both Spanish and English. Additionally, they suggest greater reliance on phonology, during both word and fingerspelling processing, by deaf readers of a language with a deeper orthography (English) than by readers of a language with a shallower orthography (Spanish).

    A usage-based approach to language processing and intervention in aphasia

    Non-fluent aphasia (NFA) is characterized by grammatically impoverished language output. Yet there is evidence that a restricted set of multi-word utterances (e.g., "don't know") is retained. Analyses of connected speech often dismiss these as stereotypical; however, such high-frequency phrases are an interactional resource in both neurotypical and aphasic discourse. One approach that can account for these forms is usage-based grammar, in which linguistic knowledge is conceived of as an inventory of constructions, i.e., form-meaning pairings such as familiar collocations ("wait a minute") and semi-fixed phrases ("I want X"). This approach is used in language development and second language learning research, but its application to aphasiology is currently limited. This thesis applied a usage-based perspective to language processing and intervention in aphasia. Study 1 investigated the use of word combinations in the conversations of nine participants with Broca's aphasia (PWA) and their conversation partners (CPs), combining analysis of form (a frequency-based approach) and function (an interactional linguistics approach). In Study 2, an online word monitoring task was used to examine whether individuals with aphasia and neurotypical controls showed sensitivity to collocation strength (the degree of association between the units of a word combination). Finally, the impact of a novel intervention involving loosening of the slots in semi-fixed phrases was piloted with five participants with NFA. Study 1 revealed that PWA used more strongly collocated word combinations than CPs, and that familiar collocations are a resource adapted to the constraints of aphasia. Findings from Study 2 indicated that words were recognised more rapidly when preceded by strongly collocated words in both neurotypical and aphasic listeners, although the effect was stronger for controls. Study 3 resulted in improved connected speech for some participants. Future research is needed to refine outcome measures for connected speech interventions. This thesis suggests that usage-based grammar has the potential to explain grammatical behaviour in aphasia and to inform interventions.
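    Collocation strength, as used in Study 2, quantifies the degree of association between the units of a word combination. One common corpus measure of such association is pointwise mutual information (PMI), sketched below purely as an illustration; the thesis does not specify that PMI is the measure it used, and the toy corpus is hypothetical.

    ```python
    import math
    from collections import Counter

    def pmi(bigram, tokens):
        """Pointwise mutual information of a two-word combination in a token
        list: log2(p(w1, w2) / (p(w1) * p(w2))). Higher values indicate a
        more strongly collocated pair. The bigram must occur at least once,
        otherwise the log of zero is undefined."""
        unigrams = Counter(tokens)
        bigrams = Counter(zip(tokens, tokens[1:]))
        n = len(tokens)
        w1, w2 = bigram
        p_joint = bigrams[bigram] / (n - 1)   # probability of the pair
        p1 = unigrams[w1] / n                 # marginal probabilities
        p2 = unigrams[w2] / n
        return math.log2(p_joint / (p1 * p2))
    ```

    On a toy corpus such as `"wait a minute please wait a minute".split()`, a recurring pair like `("wait", "a")` receives a positive PMI, reflecting that the two words co-occur more often than their individual frequencies would predict.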

    Learning motion primitives and annotative texts from crowd-sourcing


    Processing long-distance dependencies: an experimental investigation of grammatical illusions in English and Spanish

    A central concern in the study of sentence comprehension is defining the role that grammatical information plays during the incremental interpretation of language. In order to successfully achieve the complex task of understanding a linguistic message, the language comprehension system (the parser) must, among other things, be able to resolve the wide variety of relations that are established between the different parts of a sentence. These relations are known as linguistic dependencies. Linguistic dependencies are subject to a diverse range of grammatical constraints (e.g. syntactic, morphological, lexical), and how these constraints are implemented in real-time comprehension is one of the fundamental questions in psycholinguistic research. In this quest, the focus has often been placed on studying the sensitivity that language users exhibit to grammatical contrasts during sentence processing. The grammatical richness with which the parser seems to operate makes it all the more interesting when the results of sentence processing do not converge with the constraints of the grammar. Misalignments between grammar and parsing provide a unique window into the principles that guide language comprehension, and their study has generated a fruitful research program.

    The role of orthography in auditory word learning

    The present thesis investigates the role of orthography in auditory word learning. Across three experiments we explore whether adult speakers of Spanish (Experiments 1 and 3) and French (Experiment 2) use their knowledge of sound-to-spelling mappings to generate preliminary orthographic representations for newly acquired spoken words. Specifically, we first teach skilled readers novel spoken words with only one possible spelling (i.e., consistent words) or two possible spellings (i.e., inconsistent words). Next, we present them with unique, preferred, and unpreferred spellings of the novel words. By analysing reading times for aurally acquired words seen for the first time in writing, we demonstrate that skilled readers of both transparent (i.e., Spanish) and opaque (i.e., French) writing systems generate preliminary orthographic representations for aurally familiar words. We discuss these results in the light of existing theories of reading and word learning, and then emphasise the novel contribution of the present work. Finally, we conclude that the results of the present thesis reveal new ways in which learning to read affects spoken language processing.