1,028 research outputs found

    ExMaps: Long-Term Localization in Dynamic Scenes using Exponential Decay


    Neurocomputational Principles of Action Understanding: Perceptual Inference, Predictive Coding, and Embodied Simulation

    The social alignment of the human mind is omnipresent in our everyday life and culture. Yet, what mechanisms of the brain allow humans to be social, and how do they work and interact? Despite the apparent importance of this question, the nexus of cognitive processes underlying social intelligence is still largely unknown. A system of mirror neurons has been under deep, interdisciplinary consideration over recent years, and far-reaching contributions to social cognition have been suggested, including understanding others' actions, intentions, and emotions. Theories of embodied cognition emphasize that our minds develop by processing and inferring structures from the bodily experiences we encounter. It has been suggested that action understanding, too, is possible by simulating others' actions by means of one's own embodied representations. Nonetheless, it remains largely unknown how the brain manages to map visually perceived biological motion of others onto principally embodied states like intentions and motor representations, and which processes foster suitable simulations thereof. Given that our minds are generative and predictive in nature, and that cognition is fundamentally anticipatory, principles of predictive coding have also been suggested to be involved in action understanding. This thesis puts forward a unifying hypothesis of embodied simulation, predictive coding, and perceptual inference, and supports it with a neural network model.
The model (i) learns encodings of embodied, self-centered visual and proprioceptive, modal and submodal perceptions as well as kinematic intentions in separate modules, (ii) learns temporal, recurrent predictions within and across these modules to foster distributed and consistent simulations of unobservable embodied states, and (iii) applies top-down expectations to drive perceptual inferences and imagery processes that establish the correspondence between action observations and the unfolding, simulated self-representations. All components of the network are evaluated separately and in complete scenarios on motion-capture data of human subjects. In the results, I show that the model becomes capable of simulating and reenacting observed actions based on its embodied experience, leading to action understanding in terms of motor preparations and inference of kinematic intentions. Furthermore, I show that perceptual inferences by means of perspective-taking and feature binding can establish the correspondence between self and other, and might thus be deeply anchored in action understanding and other abilities attributed to the mirror neuron system. In conclusion, the model shows that it is indeed possible to develop embodied, neurocomputational models of the alleged principles of social cognition, providing support for the above hypotheses and opportunities for further investigation.
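A minimal sketch can make the predictive-coding idea concrete. The code below is an illustration only, not the thesis model: it assumes a fixed, hand-chosen linear generative mapping W (which the actual model learns) and a hypothetical three-dimensional "intention" code, and it recovers the hidden cause of an observation by iteratively cancelling the top-down prediction error.

```python
# Toy predictive-coding loop. W maps a hypothetical 3-dimensional latent
# "intention" code onto a 6-dimensional observation; perceptual inference
# adapts the latent code until the top-down prediction matches the input.
N_LATENT, N_OBS = 3, 6
W = [
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [0.5, 0.5, 0.0],
    [0.0, 0.5, 0.5],
    [0.5, 0.0, 0.5],
]

def predict(z):
    """Top-down prediction of the observation from the latent code."""
    return [sum(W[i][j] * z[j] for j in range(N_LATENT)) for i in range(N_OBS)]

def infer_latent(obs, steps=200, lr=0.1):
    """Adapt the latent code by gradient descent on the squared prediction error."""
    z = [0.0] * N_LATENT
    errors = []
    for _ in range(steps):
        err = [obs[i] - p for i, p in enumerate(predict(z))]
        errors.append(sum(e * e for e in err))     # track remaining error
        for j in range(N_LATENT):
            z[j] += lr * sum(W[i][j] * err[i] for i in range(N_OBS))
    return z, errors

z_true = [1.0, -0.5, 0.25]      # hidden "intention" behind the observed action
obs = predict(z_true)           # the observed motion it generates
z_hat, errors = infer_latent(obs)
print(errors[0], errors[-1])    # prediction error before and after inference
```

The inferred code `z_hat` converges to the hidden cause `z_true`, which is the sense in which top-down expectations "explain" a bottom-up observation in this framework.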

    Internet of Things for beyond-the-laboratory prosthetics research

    Research on upper-limb prostheses is typically laboratory-based. Evidence indicates that research has not yet led to prostheses that meet user needs. Inefficient communication loops between users, clinicians and manufacturers limit the amount of quantitative and qualitative data that researchers can use in refining their innovations. This paper offers a first demonstration of an alternative paradigm by which remote, beyond-the-laboratory prosthesis research according to user needs is feasible. Specifically, the proposed Internet of Things setting allows remote data collection, real-time visualization and prosthesis reprogramming through Wi-Fi and a commercial cloud portal. Via a dashboard, the user can adjust the configuration of the device and append contextual information to the prosthetic data. We evaluated this demonstrator in real-time experiments with three able-bodied participants. The results demonstrate the potential of contextual data collection and system updates over the internet, which may provide real-life data for algorithm training and reduce the complexity of send-home trials. This article is part of the theme issue ‘Advanced neurotechnologies: translating innovation for health and well-being’.
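The three remote capabilities described above (data upload with contextual tags, dashboard polling, and device reprogramming) can be sketched as follows. This is a toy stand-in, not the paper's implementation: an in-memory queue plays the role of the Wi-Fi link and commercial cloud portal, and the field names ("emg", "context", "gain") are hypothetical rather than the portal's actual schema.

```python
import json
import time
from collections import deque

cloud = deque()                 # stand-in for the cloud portal's message queue
config = {"gain": 1.0}          # prosthesis-side configuration (hypothetical)

def publish_sample(channel_values, context=None):
    """Prosthesis side: upload a sensor sample, optionally tagged with
    user-supplied contextual information."""
    msg = {"t": time.time(), "emg": channel_values, "context": context}
    cloud.append(json.dumps(msg))

def dashboard_poll():
    """Clinician side: fetch queued samples for real-time visualization."""
    return [json.loads(m) for m in cloud]

def apply_update(update_json):
    """Remote reprogramming: merge a dashboard-issued configuration change."""
    config.update(json.loads(update_json))

publish_sample([0.12, 0.40], context="pouring a kettle")
publish_sample([0.09, 0.35])
apply_update(json.dumps({"gain": 1.5}))
for sample in dashboard_poll():
    print(sample["context"], sample["emg"])
print(config)
```

Serializing every message as JSON at the boundary mirrors how such a setup would cross a real transport; swapping the queue for an MQTT or HTTPS client would not change the calling code.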

    Spatial and non-spatial deixis in Cushillococha Ticuna

    This dissertation is a study of the 6-term demonstrative system of Ticuna, a language isolate spoken by 60,000 people in Peru, Colombia, and Brazil. Much research on demonstratives has claimed that they encode only the distance of the demonstrative referent from the discourse participants. By contrast, I argue that no demonstrative of Ticuna conveys any information about distance. Instead, I show, the demonstratives of Ticuna provide listeners with two kinds of information:
    - Perceptual information: Demonstratives encode whether the speaker sees the demonstrative referent.
    - Spatial information: Demonstratives encode where the referent is located relative to the peripersonal space (reaching space) of the discourse participants. Location relative to peripersonal space is crucially different from distance.
    Within the body of the dissertation, Chapters 1 through 3 set the stage for these arguments. Chapter 1 introduces the Ticuna ethnic group, their language, and the language's demonstrative system. Chapter 2 describes the methods used in the study, which range from experimental tasks to recordings of everyday conversation. Chapter 3 lays out the conceptual framework for demonstrative meaning used in the study. This framework draws on research in psychology and anthropology as well as linguistics, recognizing the contribution of multiple disciplines to the study of deixis.
    Chapters 4, 5, and 6 are the core of the dissertation. In Chapter 4, I demonstrate, from experimental and elicitation data, that 3 of the 5 exophoric demonstratives of Ticuna encode information about the speaker's mode of perception of the referent. Their perceptual deictic content specifically concerns whether the speaker sees the demonstrative referent at the moment of speech. This meaning relates to the sense of vision -- not to more abstract categories like epistemic modality, identifiability, or general direct evidentiality (pace Levinson 2004a, 2018a).
    In Chapter 5, I examine the apparent speaker-proximal and addressee-proximal demonstratives of Ticuna. From experimental data, I argue that these demonstratives encode spatial information, but not distance. Instead, their spatial deictic content concerns the location of the demonstrative referent relative to the speaker's or addressee's peripersonal space. The peripersonal space (Kemmerer 1999) is defined as the space which a person can reach (i.e. perceive via the sense of touch) without moving relative to a ground. Since the peripersonal space is a perceptuo-spatial construct, not a sheerly spatial one, even the 'spatial' content of demonstratives is grounded in perception. Chapter 5 also engages at length with data from maximally informal conversation. In this data, I observe that the speaker- and addressee-proximal demonstratives can also convey non-spatial information about the referent: that the speaker is calling new joint attention to the referent (for the speaker-proximal), that the referent is owned by the addressee (for the addressee-proximal), or that the origo (speaker or addressee) is moving toward the referent (for both proximals). I argue that all of these non-spatial uses of proximals arise from the items' spatial deictic content, via conventional forms of deferred reference and deictic transposition.
    In Chapter 6, I analyze the language's apparent medial and distal demonstratives, again drawing on both experimental and conversational data. I show that the apparent medial demonstrative of Ticuna is actually a sociocentric proximal, with the sense of 'sociocentric' developed by Hanks (1990). It encodes that the referent is within a perimeter jointly defined by the locations of speaker and addressee. The distal demonstrative, on the other hand, is a true egocentric distal, encoding only that the referent is outside of the speaker's peripersonal space.
    Chapter 7, defending my analysis of deixis against theories that assimilate deixis to anaphora, argues that the deictic and anaphoric systems of Ticuna are minimally related. I show that the demonstrative system of Ticuna exhibits a complete lexical split between exophoric (deictic) and non-exophoric (anaphoric and recognitional) demonstratives. The two classes of demonstratives are distinct in meaning as well as form. Exophoric demonstratives have the rich spatial and perceptual deictic content described in Chapters 4 through 6; non-exophoric demonstratives, by contrast, convey nothing about the referent except its discourse or world familiarity. Chapter 8 summarizes and concludes.
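The feature decomposition argued for above can be summarized as a small lookup table. The sketch below is purely illustrative: the labels DEM1-DEM6 are hypothetical placeholders (the actual Ticuna forms are not reproduced here), and the features simply restate the dissertation's analysis, with None meaning "unspecified".

```python
# Hypothetical placeholder labels, not actual Ticuna forms.
FEATURES = {
    "DEM1": {"exophoric": True,  "speaker_sees": True,  "region": "speaker_peripersonal"},
    "DEM2": {"exophoric": True,  "speaker_sees": True,  "region": "addressee_peripersonal"},
    "DEM3": {"exophoric": True,  "speaker_sees": True,  "region": "sociocentric_perimeter"},
    "DEM4": {"exophoric": True,  "speaker_sees": False, "region": None},  # referent unseen
    "DEM5": {"exophoric": True,  "speaker_sees": None,  "region": "outside_speaker_peripersonal"},
    "DEM6": {"exophoric": False, "speaker_sees": None,  "region": None},  # anaphoric/recognitional
}

def candidates(speaker_sees, region):
    """Exophoric demonstratives compatible with a given perceptual and
    spatial context (a None feature slot matches any context)."""
    return [d for d, f in FEATURES.items()
            if f["exophoric"]
            and f["speaker_sees"] in (None, speaker_sees)
            and f["region"] in (None, region)]

print(candidates(True, "speaker_peripersonal"))
print(candidates(True, "sociocentric_perimeter"))
```

The point of the toy table is the split the dissertation argues for: the exophoric items carry perceptual and peripersonal-space features, while the non-exophoric item carries neither and never competes in an exophoric context.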

    A technique for the measurement and possible rehabilitation of Visual Neglect using the Leap Sensor

    Visual neglect is a common neuropsychological deficit associated with stroke [1]. The deficit manifests as a stroke patient's inability to notice things, usually in the left-hand side of their visual space. This has a serious impact on their daily lives, as they may fail to notice obstacles while walking or leave half of a meal uneaten because they are unaware of its existence. Currently, pen-and-paper techniques are used to assess the presence of visual neglect in patients, and a number of rehabilitative programs have been developed to try to ameliorate its symptoms, with limited success [2]. Using the Leap Sensor, this project sets out to develop a novel measurement paradigm for the detection and diagnosis of visual neglect, as well as laying the groundwork for a novel rehabilitative intervention (a means of helping stroke patients either to recover from visual neglect or to make adaptations to lessen its effect). In addition, we replace pen-and-paper tests with a web-based system which enables professionals to complete such assessments of visual neglect virtually and archive their results.
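To illustrate the measurement idea, a cancellation-style score can be computed over sensor-logged touch positions. The sketch below is hypothetical, not the project's method or data: target coordinates are invented (mm from the screen midline, x < 0 being the patient's left visual field), and a real system would read positions from the Leap Sensor rather than hard-code them.

```python
# Hypothetical cancellation-task log, as a Leap-style sensor might record it.
shown     = [-120, -80, -40, 40, 80, 120]   # target positions presented
cancelled = [40, 80, 120, -40]              # targets the patient touched

def neglect_scores(shown, cancelled):
    """Miss rate on each side of the midline; a large left-right asymmetry
    is the classic signature of left visual neglect."""
    def miss_rate(side):
        targets = [x for x in shown if side(x)]
        hits = [x for x in cancelled if side(x)]
        return 1 - len(hits) / len(targets)
    return miss_rate(lambda x: x < 0), miss_rate(lambda x: x > 0)

left_miss, right_miss = neglect_scores(shown, cancelled)
print(f"left-field miss rate:  {left_miss:.2f}")
print(f"right-field miss rate: {right_miss:.2f}")
```

In this invented example the patient misses two of three left-field targets but no right-field targets, the asymmetry such a paradigm would flag for clinical follow-up.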
