
    Merging multi-modal information and cross-modal learning in artificial cognitive systems

    Cross-modal binding is the ability to merge two or more modal representations of the same entity into a single shared representation. This ability is one of the fundamental properties of any cognitive system operating in a complex environment. In order to adapt successfully to changes in a dynamic environment, the binding mechanism has to be supplemented with cross-modal learning. Perhaps the most difficult task, however, is the integration of both mechanisms into a cognitive system. Their role in such a system is two-fold: to bridge the semantic gap between modalities, and to mediate between the lower-level mechanisms for processing sensory data and the higher-level cognitive processes, such as motivation and planning. In this master's thesis, we present an approach to probabilistic merging of multi-modal information in cognitive systems. Using this approach, we formulate a model of binding and cross-modal learning in Markov logic networks, and describe the principles of its integration into a cognitive architecture. We implement a prototype of the model and evaluate it with off-line experiments that simulate a cognitive architecture with three modalities. Based on our approach, we design, implement and integrate the belief layer -- a subsystem that bridges the semantic gap in a prototype cognitive system named George. George is an intelligent robot that is able to detect and recognise objects in its surroundings and learn about their properties in a situated dialogue with a human tutor. Its main purpose is to validate various paradigms of interactive concept learning. To this end, we have developed and performed on-line experiments that evaluate the mechanisms of the robot's behaviour. With these experiments, we were also able to test and evaluate our approach to merging multi-modal information as part of a functional cognitive system
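The binding mechanism summarised above can be illustrated with a minimal sketch. This is not the thesis's actual Markov-logic formulation; the feature names, weights, and threshold are hypothetical. The idea shown is simply that two modal representations of a candidate entity are bound into one shared representation when the features they both report agree strongly enough.

```python
# Minimal illustration of cross-modal binding: two modal representations
# (e.g. a vision percept and a dialogue percept) of a candidate entity are
# merged when their shared features agree. Feature names and weights are
# hypothetical; the thesis formulates this with Markov logic networks,
# not with this simplified agreement score.

def binding_score(modal_a, modal_b, weights):
    """Weighted agreement over the features both modalities report."""
    shared = set(modal_a) & set(modal_b)
    if not shared:
        return 0.0
    agree = sum(weights.get(f, 1.0) for f in shared if modal_a[f] == modal_b[f])
    total = sum(weights.get(f, 1.0) for f in shared)
    return agree / total

vision = {"colour": "red", "shape": "ball"}
dialogue = {"colour": "red", "shape": "ball"}
weights = {"colour": 2.0, "shape": 1.0}   # hypothetical per-feature weights

score = binding_score(vision, dialogue, weights)
bound = score >= 0.5   # illustrative threshold: merge into one representation
```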

    Visible and Invisible Bias via Media


    Modeling the Synchronization of Multimodal Perceptions as a Basis for the Emergence of Deterministic Behaviors.

    Living organisms have either innate or acquired mechanisms for reacting to percepts with an appropriate behavior, e.g., by escaping from the source of a perception detected as a threat, or conversely by approaching a target perceived as potential food. In the case of artifacts, such capabilities must be built in through either wired connections or software. The problem addressed here is to define a neural basis for such behaviors to be possibly learned by bio-inspired artifacts. Toward this end, a thought experiment involving an autonomous vehicle is first simulated as a random search. The stochastic decision tree that drives this behavior is then transformed into a plastic neuronal circuit. This leads the vehicle to adopt a deterministic behavior by learning and applying a causality rule, just as a conscious human driver would do. From there, a principle of using synchronized multimodal perceptions in association with the Hebb principle of wiring together neuronal cells is induced. This overall framework is implemented as a virtual machine, a concept widely used in software engineering. It is argued that such an interface, situated at a meso-scale level between abstracted micro-circuits representing synaptic plasticity on the one hand, and the emergence of behaviors on the other, allows for a strict delineation of successive levels of complexity. More specifically, isolating levels allows for simulating yet unknown processes of cognition independently of their underlying neurological grounding
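The Hebb principle invoked above ("cells that fire together wire together") can be sketched as follows. The learning rate and the percept/response pairing are illustrative, not taken from the paper; the sketch only shows how repeated coincident activity strengthens a connection until a behavior becomes deterministic.

```python
# Sketch of the Hebbian principle referenced above: a connection weight
# between two units grows when they are active at the same time.
# The learning rate and activity values are illustrative assumptions.

def hebb_update(w, pre, post, lr=0.1):
    """Increase weight w in proportion to coincident pre/post activity."""
    return w + lr * pre * post

# A synchronized perception (pre) and response (post) strengthen the link;
# after enough co-activations the response can be triggered reliably,
# i.e. the behavior becomes deterministic rather than random.
w = 0.0
for _ in range(20):
    w = hebb_update(w, pre=1.0, post=1.0)
# w has grown to roughly 2.0 after 20 co-activations
```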

    Perceptual Characterization: On Perceptual Learning and Perspectival Sedimentation

    In her analysis of perspectival effects on perception, Susanna Siegel has argued that perceptual experience is directly rationally assessable and can thereby justify perceptual beliefs, except in cases of epistemic downgrade or perceptual hijacking; I contend that the recalcitrance of known illusions poses an insurmountable problem for Siegel’s thesis. In its place, I argue that a model of perceptual learning informed by the dual-aspect framework of base-level cognitive architecture proposed by Elisabeth Camp successfully answers the questions motivating Siegel’s project in a manner that avoids such issues

    Cognitive Set Theory

    Cognitive Set Theory is a mathematical model of cognition which equates sets with concepts, and uses mereological elements. It has a holistic emphasis, as opposed to a reductionistic emphasis, and it therefore begins with a single universe (as opposed to an infinite collection of infinitesimal points)

    Linking somatic and symbolic representation in semantic memory: the dynamic multilevel reactivation framework

    Biological plausibility is an essential constraint for any viable model of semantic memory. Yet we have only the most rudimentary understanding of how the human brain conducts the abstract symbolic transformations that underlie word and object meaning. Neuroscience has evolved a sophisticated arsenal of techniques for elucidating the architecture of conceptual representation. Nevertheless, theoretical convergence remains elusive. Here we describe several contrastive approaches to the organization of semantic knowledge, and in turn we offer our own perspective on two recurring questions in semantic memory research: (1) To what extent are conceptual representations mediated by sensorimotor knowledge (i.e., to what degree is semantic memory embodied)? (2) How might an embodied semantic system represent abstract concepts such as modularity, symbol, or proposition? To address these questions, we review the merits of sensorimotor (i.e., embodied) and amodal (i.e., disembodied) semantic theories and address the neurobiological constraints underlying each. We conclude that the shortcomings of both perspectives in their extreme forms necessitate a hybrid middle ground. We accordingly propose the Dynamic Multilevel Reactivation Framework -- an integrative model predicated upon flexible interplay between sensorimotor and amodal symbolic representations mediated by multiple cortical hubs. We discuss applications of the Dynamic Multilevel Reactivation Framework to abstract and concrete concept representation and describe how a multidimensional conceptual topography based on emotion, sensation, and magnitude can successfully frame a semantic space containing meanings for both abstract and concrete words. The consideration of ‘abstract conceptual features’ does not diminish the role of logical and/or executive processing in activating, manipulating and using information stored in conceptual representations. Rather, it proposes that the materials upon which these processes operate necessarily combine pure sensorimotor information and higher-order cognitive dimensions involved in symbolic representation
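The multidimensional conceptual topography described in the abstract above can be pictured as a small sketch: words placed in a space whose axes are emotion, sensation, and magnitude, with proximity standing in for semantic relatedness. The coordinate values below are invented for illustration and are not taken from the framework.

```python
# Illustrative sketch only: words as points in a space whose dimensions are
# (emotion, sensation, magnitude), with cosine similarity as a stand-in for
# semantic relatedness. All coordinates are hypothetical.
import math

semantic_space = {
    "hammer":  (0.1, 0.9, 0.4),   # concrete word: strong sensory grounding
    "justice": (0.7, 0.1, 0.5),   # abstract word: carried largely by emotion
    "freedom": (0.8, 0.2, 0.6),
}

def similarity(w1, w2):
    """Cosine similarity between two words in the conceptual space."""
    a, b = semantic_space[w1], semantic_space[w2]
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# The two abstract, emotion-laden words end up closer to each other than
# either is to the sensorially grounded concrete word.
```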