Merging multi-modal information and cross-modal learning in artificial cognitive systems
Cross-modal binding is the ability to merge two or more modal representations of the same entity into a single shared representation. This ability is one of the fundamental properties of any cognitive system operating in a complex environment. In order to adapt successfully to changes in a dynamic environment, the binding mechanism has to be supplemented with cross-modal learning. But perhaps the most difficult task is the integration of both mechanisms into a cognitive system. Their role in such a system is two-fold: to bridge the semantic gap between modalities, and to mediate between the lower-level mechanisms for processing the sensory data and the higher-level cognitive processes, such as motivation and planning.
In this master's thesis, we present an approach to probabilistic merging of multi-modal information in cognitive systems. Using this approach, we formulate a model of binding and cross-modal learning in Markov logic networks, and describe the principles of its integration into a cognitive architecture. We implement a prototype of the model and evaluate it with off-line experiments that simulate a cognitive architecture with three modalities. Based on our approach, we design, implement and integrate the belief layer -- a subsystem that bridges the semantic gap in a prototype cognitive system named George. George is an intelligent robot that is able to detect and recognise objects in its surroundings, and learn about their properties in a situated dialogue with a human tutor. Its main purpose is to validate various paradigms of interactive concept learning. To this end, we have developed and performed on-line experiments that evaluate the mechanisms of the robot's behaviour. With these experiments, we were also able to test and evaluate our approach to merging multi-modal information as part of a functional cognitive system.
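As a rough illustration of the thesis's core idea, binding can be sketched as a log-linear model in the style of a Markov logic network restricted to a single binary query atom ("these two percepts refer to the same entity"). Everything below (the attribute names, the formula weights, and the helper function) is an illustrative assumption of ours, not the thesis's implementation.

```python
import math

def binding_probability(features_a, features_b, weights):
    """P(bind | evidence) for two modal representations.

    Each satisfied formula (here: a shared attribute value) contributes
    its weight to the log-odds of binding, as in a Markov logic network
    with a single binary query atom.
    """
    score = sum(
        w for attr, w in weights.items()
        if attr in features_a and attr in features_b
        and features_a[attr] == features_b[attr]
    )
    # Log-linear model over the two states {bind, not-bind}:
    return math.exp(score) / (math.exp(score) + 1.0)

# Two modal percepts of (possibly) the same object:
vision = {"colour": "red", "shape": "ball"}
dialogue = {"colour": "red", "shape": "ball"}
weights = {"colour": 1.5, "shape": 2.0}  # illustrative formula weights

p = binding_probability(vision, dialogue, weights)
```

With no shared evidence the model falls back to even odds (0.5), and each matching attribute pushes the binding probability towards 1 in proportion to its weight.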
Modeling the Synchronization of Multimodal Perceptions as a Basis for the Emergence of Deterministic Behaviors
Living organisms have either innate or acquired mechanisms for reacting to percepts with an appropriate behavior, e.g., by escaping from the source of a perception detected as a threat, or conversely by approaching a target perceived as potential food. In the case of artifacts, such capabilities must be built in through either wired connections or software. The problem addressed here is to define a neural basis for such behaviors to be possibly learned by bio-inspired artifacts. Toward this end, a thought experiment involving an autonomous vehicle is first simulated as a random search. The stochastic decision tree that drives this behavior is then transformed into a plastic neuronal circuit. This leads the vehicle to adopt a deterministic behavior by learning and applying a causality rule, just as a conscious human driver would do. From there, a principle of using synchronized multimodal perceptions in association with the Hebb principle of wiring together neuronal cells is induced. This overall framework is implemented as a virtual machine, i.e., a concept widely used in software engineering. It is argued that such an interface, situated at a meso-scale level between abstracted micro-circuits representing synaptic plasticity, on one hand, and the emergence of behaviors, on the other, allows for a strict delineation of successive levels of complexity. More specifically, isolating levels allows for simulating yet unknown processes of cognition independently of their underlying neurological grounding.
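The thought experiment lends itself to a small sketch: a percept-to-action policy that starts out random and becomes deterministic under a Hebb-like, outcome-gated weight update. The percepts, actions, reward rule, and learning rate below are our assumptions for illustration, not the paper's virtual-machine implementation.

```python
import random

random.seed(0)

percepts = ["threat", "food"]
actions = ["escape", "approach"]
# Synaptic weights from each percept to each action, initially uniform,
# so early behaviour is a random search.
w = {p: {a: 0.5 for a in actions} for p in percepts}

def reward(percept, action):
    # The causality rule the agent must discover.
    correct = {"threat": "escape", "food": "approach"}
    return 1.0 if correct[percept] == action else -1.0

def act(percept):
    # Pick the action with the strongest synapse; ties broken randomly.
    synapses = w[percept]
    best = max(synapses.values())
    return random.choice([a for a, v in synapses.items() if v == best])

LEARNING_RATE = 0.1
for _ in range(50):
    p = random.choice(percepts)
    a = act(p)
    # Hebb-like update gated by outcome: strengthen the co-active
    # percept-action pair after a good outcome, weaken it otherwise.
    w[p][a] += LEARNING_RATE * reward(p, a)
```

After a handful of trials per percept, the wrong synapse has been depressed below the right one and the policy is effectively deterministic, mirroring the paper's transition from random search to rule-following behaviour.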
Perceptual Characterization: On Perceptual Learning and Perspectival Sedimentation
In her analysis of perspectival effects on perception, Susanna Siegel has argued that perceptual experience is directly rationally assessable and can thereby justify perceptual beliefs, except in cases of epistemic downgrade or perceptual hijacking; I contend that the recalcitrance of known illusions poses an insurmountable problem for Siegel's thesis. In its place, I argue that a model of perceptual learning informed by the dual-aspect framework of base-level cognitive architecture proposed by Elisabeth Camp successfully answers the questions motivating Siegel's project in a manner that avoids such issues.
Cognitive Set Theory
Cognitive Set Theory is a mathematical model of cognition which equates sets with concepts, and uses mereological elements. It has a holistic emphasis, as opposed to a reductionistic emphasis, and it therefore begins with a single universe (as opposed to an infinite collection of infinitesimal points).
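The mereological flavour of the theory can be illustrated with a toy "part of" relation that, unlike set membership, is reflexive and transitive. The example hierarchy and function are our own invention, not taken from the book.

```python
# Direct-part relation: each whole maps to its immediate parts.
parts = {
    "universe": {"body", "mind"},
    "body": {"hand"},
    "hand": {"finger"},
}

def part_of(x, y):
    """Reflexive, transitive closure of the direct-part relation."""
    if x == y:
        return True  # everything is a (improper) part of itself
    return any(part_of(x, z) for z in parts.get(y, ()))
```

Here `part_of("finger", "universe")` holds by transitivity through `hand` and `body`, whereas the membership relation of ordinary set theory would not chain this way.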
Mental and motor representation for music performance
This research proposes a theory of nonconscious motor representation which precedes mental representation of the outcome of motor actions in music performance. The music performer faces the problem of how to escape sedimented musical paradigms to produce novel configurations of dynamics, timing and tone colour. If the sound were mentally represented as an action goal prior to being produced, it would tend to be assimilated to a known action goal. The proposed theory is intended to account for creativity in music performance, but has implications in other areas for both creativity and motor actions.
The investigation began with an ethnographic study of two 'posthardcore' rock bands in London and Bristol. Posthardcore musicians work with minimal explicit knowledge of music theory and cognitive involvement in performance is actively eschewed. Serendipitous musical felicities in performance are valued. Such felicities depend on adjustment and fine control of dynamics, timing and tone colour within the parameters of the given.
A selective survey of music aesthetics shows that the defining qualities of music are the production of immanent rather than representational meaning; polysemy; and processuality. Taking an analytic philosophy and cognitive science approach, I argue that apprehensions of immanent meaning depend on relationships between proximal percepts within the specious present. A general argument for nonconceptual perceptual content as perception of relations between magnitudes within the specious present is extended to music and argued to account for both the polysemic richness of music and its processuality. Nonconceptual relational perception can account for novel apprehensions by music listeners, but not for the production of novel configurations by the performer. I argue that motor creativity in music performance is achieved through the nonconscious parameterization of inverse models without conscious representation of the goal of the action. Conscious representation for the performer occurs when they hear their own performance.
Linking somatic and symbolic representation in semantic memory: the dynamic multilevel reactivation framework
Biological plausibility is an essential constraint for any viable model of semantic memory. Yet, we have only the most rudimentary understanding of how the human brain conducts abstract symbolic transformations that underlie word and object meaning. Neuroscience has evolved a sophisticated arsenal of techniques for elucidating the architecture of conceptual representation. Nevertheless, theoretical convergence remains elusive. Here we describe several contrastive approaches to the organization of semantic knowledge, and in turn we offer our own perspective on two recurring questions in semantic memory research: (1) to what extent are conceptual representations mediated by sensorimotor knowledge (i.e., to what degree is semantic memory embodied)? (2) How might an embodied semantic system represent abstract concepts such as modularity, symbol, or proposition? To address these questions, we review the merits of sensorimotor (i.e., embodied) and amodal (i.e., disembodied) semantic theories and address the neurobiological constraints underlying each. We conclude that the shortcomings of both perspectives in their extreme forms necessitate a hybrid middle ground. We accordingly propose the Dynamic Multilevel Reactivation Framework—an integrative model predicated upon flexible interplay between sensorimotor and amodal symbolic representations mediated by multiple cortical hubs. We discuss applications of the Dynamic Multilevel Reactivation Framework to abstract and concrete concept representation and describe how a multidimensional conceptual topography based on emotion, sensation, and magnitude can successfully frame a semantic space containing meanings for both abstract and concrete words. The consideration of ‘abstract conceptual features’ does not diminish the role of logical and/or executive processing in activating, manipulating and using information stored in conceptual representations. Rather, it proposes that the materials upon which these processes operate necessarily combine pure sensorimotor information and higher-order cognitive dimensions involved in symbolic representation.
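One way to picture the proposed conceptual topography is as a small vector space whose axes are emotion, sensation, and magnitude, in which abstract and concrete words occupy the same space. The words, feature ratings, and similarity measure below are invented for illustration and are not the framework's actual data.

```python
import math

DIMENSIONS = ("emotion", "sensation", "magnitude")

# Illustrative feature ratings on a 0..1 scale; the values are invented.
lexicon = {
    "hammer":  {"emotion": 0.1, "sensation": 0.9, "magnitude": 0.4},
    "justice": {"emotion": 0.7, "sensation": 0.1, "magnitude": 0.6},
    "anger":   {"emotion": 0.9, "sensation": 0.3, "magnitude": 0.5},
}

def cosine(a, b):
    """Cosine similarity of two concepts in the shared feature space."""
    dot = sum(a[d] * b[d] for d in DIMENSIONS)
    na = math.sqrt(sum(a[d] ** 2 for d in DIMENSIONS))
    nb = math.sqrt(sum(b[d] ** 2 for d in DIMENSIONS))
    return dot / (na * nb)

# Emotion-laden abstract concepts end up near each other, away from tools:
sim_abstract = cosine(lexicon["justice"], lexicon["anger"])
sim_mixed = cosine(lexicon["justice"], lexicon["hammer"])
```

The point of the sketch is only that a single space over these dimensions can position both concrete and abstract words, so that similarity structure emerges without a separate representational format for abstract concepts.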