    Formal Modeling of Connectionism using Concurrency Theory, an Approach Based on Automata and Model Checking

    This paper illustrates a framework for applying formal methods techniques, which are symbolic in nature, to specifying and verifying neural networks, which are sub-symbolic in nature. The paper describes a communicating automata [Bowman & Gomez, 2006] model of neural networks. We also implement the model using timed automata [Alur & Dill, 1994] and then verify these models with the model checker Uppaal [Pettersson, 2000] in order to evaluate the performance of learning algorithms. The paper also presents a discussion of a number of broad issues in cognitive neuroscience, including the debate over whether symbolic processing or connectionism is a suitable representation of cognitive systems, and the problem of integrating symbolic techniques, such as formal methods, with complex neural networks. We then argue that symbolic verification may give theoretically well-founded ways to evaluate and justify neural learning systems in both theoretical research and real-world applications.
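    The abstract does not reproduce the automata definitions, so the sketch below is only a rough, hypothetical illustration of the general idea rather than the Bowman & Gomez model: a single unit can be viewed as a small state machine that accumulates incoming messages and fires when a threshold is crossed, the kind of discrete process whose product state space a model checker such as Uppaal can explore exhaustively. All names and parameters here are assumptions made for illustration.

```python
# Hypothetical sketch only: a neuron viewed as a finite-state process.
# This is NOT the communicating-automata model of Bowman & Gomez (2006);
# it merely shows how a unit's behaviour can be written as discrete states
# and transitions of the kind a model checker could enumerate.

from enum import Enum, auto

class State(Enum):
    IDLE = auto()          # waiting for input
    ACCUMULATING = auto()  # summing incoming activation
    FIRING = auto()        # threshold crossed, output emitted

class NeuronAutomaton:
    def __init__(self, threshold=1.0, decay=0.9):   # assumed parameters
        self.state = State.IDLE
        self.activation = 0.0
        self.threshold = threshold
        self.decay = decay

    def step(self, incoming):
        """One synchronous transition, given the sum of incoming messages."""
        self.activation = self.activation * self.decay + incoming
        if self.activation >= self.threshold:
            self.state = State.FIRING
            self.activation = 0.0
            return 1.0       # message passed to downstream automata
        self.state = State.ACCUMULATING if incoming else State.IDLE
        return 0.0

# A network is then a collection of such automata exchanging messages each
# step; a property like "the output unit eventually fires" becomes a
# reachability query over the product state space.
```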

    [How] Can Pluralist Approaches to Computational Cognitive Modeling of Human Needs and Values Save our Democracies?

    In our increasingly digital societies, many companies have business models that treat users’ (or customers’) personal data as a siloed resource, owned and controlled by the data controller rather than the data subjects. Collecting and processing such massive amounts of personal data could have many negative technical, social and economic consequences, including invading people’s privacy and autonomy. As a result, regulations such as the European General Data Protection Regulation (GDPR) have taken steps towards a better implementation of the right to digital privacy. This paper proposes that such legal acts should be accompanied by complementary technical solutions, such as Cognitive Personal Assistant Systems, to help people effectively manage the processing of their personal data on the Internet. Given the importance and sensitivity of personal data processing, such assistant systems should not only consider their owner’s needs and values, but also be transparent, accountable and controllable. We argue that pluralist approaches to the computational cognitive modelling of human needs and values, approaches not bound to traditional paradigmatic borders such as cognitivism, connectionism, or enactivism, can strike a balance between practicality and usefulness, on the one hand, and transparency, accountability, and controllability, on the other, while supporting and empowering humans in the digital world. Since the threat to digital privacy is significant for contemporary democracies, the future implementation of such pluralist models could contribute to power balance, fairness and inclusion in our societies.

    Knowledge-based vision and simple visual machines

    The vast majority of work in machine vision emphasizes the representation of perceived objects and events: it is these internal representations that incorporate the 'knowledge' in knowledge-based vision or form the 'models' in model-based vision. In this paper, we discuss simple machine vision systems developed by artificial evolution rather than traditional engineering design techniques, and note that the task of identifying internal representations within such systems is made difficult by the lack of an operational definition of representation at the causal mechanistic level. Consequently, we question the nature and indeed the existence of representations posited to be used within natural vision systems (i.e. animals). We conclude that representations argued for on a priori grounds by external observers of a particular vision system may well be illusory, and are at best place-holders for yet-to-be-identified causal mechanistic interactions. That is, applying the knowledge-based vision approach in the understanding of evolved systems (machines or animals) may well lead to theories and models that are internally consistent, computationally plausible, and entirely wrong.

    ECONOMIC AGENCY THROUGH MODULARITY THEORY

    Economic agency as a matter of rational decision-making and as a problem of bounded rationality has never moved far from its original formalization in the 1950s. This is not because progress on the topic has been slow, but because the underlying problem of higher-level cognition, itself a general program of cognitive science, is not as tractable as behavioral studies. This paper shows a parallel between economic agency and the folk-psychological perspective, and in turn gives a short account of how folk psychology is inseparable from modularity theory. In short, there must be a way to cope with cognition as the black box of economics if we can identify the appropriate level of description of cognitive structure, i.e., modularity theory.
    Keywords: bounded rationality, folk psychology, modularity theory

    Evolutionary Robotics: Exploiting the Full Power of Self-organization


    Neuroethology, Computational

    Over the past decade, a number of neural network researchers have used the term computational neuroethology to describe a specific approach to neuroethology. Neuroethology is the study of the neural mechanisms underlying the generation of behavior in animals, and hence it lies at the intersection of neuroscience (the study of nervous systems) and ethology (the study of animal behavior); for an introduction to neuroethology, see Simmons and Young (1999). The definition of computational neuroethology is very similar, but is not quite so dependent on studying animals: animals just happen to be biological autonomous agents. But there are also non-biological autonomous agents, such as some types of robots and some types of simulated embodied agents operating in virtual worlds. In this context, autonomous agents are self-governing entities capable of operating (i.e., coordinating perception and action) for extended periods of time in environments that are complex, uncertain, and dynamic. Thus, computational neuroethology can be characterised as the attempt to analyze the computational principles underlying the generation of behavior in animals and in artificial autonomous agents.
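    The definition of an autonomous agent above is abstract; as a purely hypothetical minimal illustration of "coordinating perception and action for extended periods", the sketch below shows the bare sense-act loop around which such agents, biological or artificial, are analysed. The class and parameter names are assumptions for this sketch, not taken from the article.

```python
# Minimal, hypothetical sense-act loop: the agent repeatedly reads a sensor,
# updates a small amount of internal state, and issues a motor command.

import random

class Environment:
    """Toy stand-in for a complex, uncertain, dynamic world."""
    def sense(self):
        return random.uniform(-1.0, 1.0)   # noisy scalar sensor reading

    def act(self, command):
        pass                               # actuator stub; no dynamics here

class Agent:
    def __init__(self):
        self.state = 0.0                   # leaky internal memory

    def step(self, observation):
        self.state = 0.8 * self.state + 0.2 * observation
        return 1.0 if self.state > 0 else -1.0   # motor command

env, agent = Environment(), Agent()
for _ in range(100):                       # "extended period", toy-sized
    env.act(agent.step(env.sense()))
```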

    Flexible couplings: diffusing neuromodulators and adaptive robotics

    Recent years have seen the discovery of freely diffusing gaseous neurotransmitters, such as nitric oxide (NO), in biological nervous systems. A type of artificial neural network (ANN) inspired by such gaseous signaling, the GasNet, has previously been shown to be more evolvable than traditional ANNs when used as an artificial nervous system in an evolutionary robotics setting, where evolvability means consistent speed to very good solutions (here, appropriate sensorimotor behavior-generating systems). We present two new versions of the GasNet, which take further inspiration from the properties of neuronal gaseous signaling. The plexus model is inspired by the extraordinary NO-producing cortical plexus structure of neural fibers and the properties of the diffusing NO signal it generates. The receptor model is inspired by the mediating action of neurotransmitter receptors. Both models are shown to significantly further improve evolvability. We describe a series of analyses suggesting that the reasons for the increase in evolvability are related to the flexible loose coupling of distinct signaling mechanisms, one "chemical" and one "electrical".
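    The abstract does not give the GasNet equations; the sketch below is a rough, hypothetical illustration of the underlying idea only. Nodes sit at spatial positions and interact through ordinary weighted ("electrical") connections, while some nodes also emit a "gas" whose concentration falls off with distance, decays over time, and modulates the gain of nearby nodes ("chemical" coupling). Every constant and update rule here is an assumption for illustration, not the published model.

```python
# Hypothetical GasNet-style sketch: spatially embedded nodes with ordinary
# weighted connections plus a diffusing "gas" that modulates node gain.

import numpy as np

rng = np.random.default_rng(0)
N = 6
pos = rng.uniform(0, 1, size=(N, 2))       # node positions on a 2-D plane
W = rng.normal(0, 0.5, size=(N, N))        # "electrical" (synaptic) weights
emitter = rng.random(N) < 0.5              # which nodes can emit gas
act = np.zeros(N)                          # node activations
gas = np.zeros(N)                          # gas concentration at each node

dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)

for t in range(50):
    # "Chemical" coupling: active emitters raise gas concentration nearby,
    # with spatial falloff and slow temporal decay.
    spread = np.exp(-5.0 * dist) @ (emitter * np.maximum(act, 0.0))
    gas = 0.9 * gas + 0.1 * spread
    # The gas changes how a node responds (its gain) rather than being an
    # input itself; the weighted sum is the "electrical" pathway.
    gain = 1.0 + gas
    act = np.tanh(gain * (W @ act) + rng.normal(0, 0.1, N))
```

    The only point of the sketch is the loose coupling the abstract refers to: the slowly varying gas term modulates how nodes respond to their fast "electrical" input without itself being a synaptic signal.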

    Towards a Theory Grounded Theory of Language

    In this paper, we build upon the idea of theory grounding and propose one specific form of it, a theory of language. Theory grounding is the idea that we can imbue our embodied artificially intelligent systems with theories by modeling the way humans, and specifically young children, develop skills with theories. Modeling theory development promises to increase the conceptual and behavioral flexibility of these systems. An example of theory development in children is the social understanding referred to as “theory of mind.” Language is a natural task for theory grounding because it is vital to symbolic skills and apparently necessary for developing theories. Word learning, and specifically developing a concept of words, is proposed as the first step in a theory grounded theory of language.

    A modular architecture for transparent computation in recurrent neural networks

    Published in Neural Networks (Elsevier). doi: http://dx.doi.org/10.1016/j.neunet.2016.09.001. © 2016 Elsevier Ltd. All rights reserved.