
    The Resilience of Computationalism

    Computationalism—the view that cognition is computation—has always been controversial. It faces two types of objection. According to insufficiency objections, computation is insufficient for some cognitive phenomenon X. According to objections from neural realization, cognitive processes are realized by neural processes, but neural processes have feature Y, and having Y is incompatible with being (or realizing) computations. In this paper, I explain why computationalism has survived these objections. Insufficiency objections are at best partial: for all they establish, computation may be sufficient for cognitive phenomena other than X, may be part of the explanation for X, or both. Objections from neural realization are based either on a false contrast between feature Y and computation or on an account of computation that is too vague to yield the desired conclusion. I conclude that, to adjudicate the dispute between computationalism and its foes, we need a better account of computation.

    Universal neural field computation

    Turing machines and Gödel numbers are important pillars of the theory of computation. Thus, any computational architecture needs to show how it could relate to Turing machines and how stable implementations of Turing computation are possible. In this chapter, we implement universal Turing computation in a neural field environment. To this end, we employ the canonical symbologram representation of a Turing machine, obtained from a Gödel encoding of its symbolic repertoire and generalized shifts. The resulting nonlinear dynamical automaton (NDA) is a piecewise affine-linear map acting on the unit square, which is partitioned into rectangular domains. Instead of looking at point dynamics in phase space, we then consider the functional dynamics of probability distribution functions (p.d.f.s) over phase space. This is generally described by a Frobenius-Perron integral transformation, which can be regarded as a neural field equation over the unit square as the feature space of a dynamic field theory (DFT). Solving the Frobenius-Perron equation shows that uniform p.d.f.s with rectangular support are again mapped onto uniform p.d.f.s with rectangular support. We call the resulting representation a dynamic field automaton.
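
    The core construct in this abstract is a piecewise affine-linear map acting on a rectangular partition of the unit square. The following is a minimal Python sketch of that idea only; the cell boundaries and affine coefficients realize the classical baker's map as an illustrative stand-in, not the Gödel-encoded symbologram of any particular Turing machine.

        # Each cell: (x-interval, y-interval, (a_x, b_x), (a_y, b_y)); the branch of a
        # cell acts componentwise as x -> a_x*x + b_x, y -> a_y*y + b_y.
        # Illustrative coefficients (baker's map), not derived from a Goedel encoding.
        CELLS = [
            ((0.0, 0.5), (0.0, 0.5), (2.0,  0.0), (0.5, 0.0)),
            ((0.0, 0.5), (0.5, 1.0), (2.0,  0.0), (0.5, 0.0)),
            ((0.5, 1.0), (0.0, 0.5), (2.0, -1.0), (0.5, 0.5)),
            ((0.5, 1.0), (0.5, 1.0), (2.0, -1.0), (0.5, 0.5)),
        ]

        def nda_step(x, y):
            """Apply the affine branch of the cell containing the point (x, y)."""
            for (x0, x1), (y0, y1), (ax, bx), (ay, by) in CELLS:
                if x0 <= x < x1 and y0 <= y < y1:
                    return ax * x + bx, ay * y + by
            raise ValueError("point outside the unit square")

        def push_rectangle(rect):
            """Image of a uniform p.d.f. whose rectangular support lies inside one cell:
            an affine branch maps a rectangle to a rectangle, so the image density is
            again uniform on a rectangle (the Frobenius-Perron picture in the abstract)."""
            (x0, x1), (y0, y1) = rect
            eps = 1e-12
            xa, ya = nda_step(x0, y0)
            xb, yb = nda_step(x1 - eps, y1 - eps)
            return (min(xa, xb), max(xa, xb)), (min(ya, yb), max(ya, yb))

        print(nda_step(0.3, 0.7))                        # point ("microscopic") dynamics
        print(push_rectangle(((0.1, 0.4), (0.1, 0.4))))  # dynamics of rectangular supports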

    Neural computation of arithmetic functions

    A neuron is modeled as a linear threshold gate, and the network architecture considered is the layered feedforward network. It is shown how common arithmetic functions such as multiplication and sorting can be efficiently computed in a shallow neural network. Some known results are improved by showing that the product of two n-bit numbers and the sorting of n n-bit numbers can be computed by a polynomial-size neural network using only four and five unit delays, respectively. Moreover, the weights of each threshold element in the neural networks require only O(log n)-bit (instead of n-bit) accuracy. These results can be extended to more complicated functions such as multiple products, division, rational functions, and the approximation of analytic functions.
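
    As an illustration of the gate model only (not the paper's polynomial-size multiplication or sorting constructions), here is a minimal Python sketch of a linear threshold gate and a depth-2 threshold circuit computing a 1-bit half adder; the circuit and its small integer weights are illustrative choices.

        def threshold_gate(weights, theta, inputs):
            """Linear threshold gate: output 1 if sum_i w_i * x_i >= theta, else 0."""
            return int(sum(w * x for w, x in zip(weights, inputs)) >= theta)

        def half_adder(a, b):
            """Depth-2 threshold circuit: carry = AND(a, b), sum bit = XOR(a, b)."""
            carry = threshold_gate([1, 1], 2, [a, b])       # layer 1: fires iff a + b >= 2
            or_ab = threshold_gate([1, 1], 1, [a, b])       # layer 1: fires iff a + b >= 1
            s = threshold_gate([1, -1], 1, [or_ab, carry])  # layer 2: OR minus AND gives XOR
            return s, carry

        for a in (0, 1):
            for b in (0, 1):
                print(a, b, half_adder(a, b))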

    Modal Considerations on Information Processing and Computation in the Nervous System

    We can characterize computationalism very generally as a complex thesis with two main parts: the thesis that the brain (or the nervous system) is a computational system and the thesis that neural computation explains cognition. As Piccinini and Bahar (2012) point out, over the last six decades computationalism has been the mainstream theory of cognition. Nevertheless, there is still substantial debate about which type of computation explains cognition, and computationalism itself remains controversial. My aim in this paper is to make two main contributions to the debate about the first subthesis of computationalism, i.e. that the brain is a computational system. First, I want to offer an accurate elucidation of the notion relevant for understanding computationalism (the notion of computation) and to clarify the relation between computation and information, as well as the relations of both computation and information processing to the nervous system. Second, I want to argue against a peculiar form of computationalism: the thesis that neural processes are constitutively computational in some sense, i.e. that neural processes cannot be realized by a system that is not in some sense computational. I will call this thesis "modal computationalism". In particular, I want to argue that neural processing can be realized by a system that is not a sui generis computer (i.e., a computing system that is neither digital nor analog) and by a system that is not a generic computer (a computer in the most general sense: one that includes digital, analog, and any other kind of computation). Actual neural processing is presumed to be computational in these two senses (Piccinini and Bahar 2012). I will argue that, even if this is true, neural processing can be realized by a computing system that is not of the same kind as those that perform actual neural processing, and even by a system that is not computational at all.

    Integrating Evolutionary Computation with Neural Networks

    There is tremendous interest in the development of evolutionary computation techniques, as they are well suited to the optimization of functions containing a large number of variables. This paper presents a brief review of evolutionary computing techniques. It also briefly discusses the hybridization of evolutionary computation and neural networks and presents a solution of a classical problem using neural computing and evolutionary computing techniques.
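
    As a sketch of the kind of hybrid this review surveys, the following Python example evolves the weights of a tiny feedforward network with a plain mutation-and-selection loop. The XOR task, the 2-2-1 network shape, and all hyperparameters are illustrative assumptions, not details taken from the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        Y = np.array([0, 1, 1, 0], dtype=float)

        def forward(w, x):
            """2-2-1 network with tanh hidden units; w is a flat vector of 9 weights."""
            W1, b1 = w[:4].reshape(2, 2), w[4:6]
            W2, b2 = w[6:8], w[8]
            h = np.tanh(x @ W1.T + b1)
            return h @ W2 + b2

        def fitness(w):
            """Negative squared error over the four XOR patterns (higher is better)."""
            return -np.sum((forward(w, X) - Y) ** 2)

        pop = rng.normal(0, 1, size=(50, 9))            # random initial population
        for gen in range(200):
            scores = np.array([fitness(w) for w in pop])
            parents = pop[np.argsort(scores)[-10:]]     # truncation selection: keep best 10
            children = parents[rng.integers(0, 10, 40)] + rng.normal(0, 0.1, (40, 9))
            pop = np.vstack([parents, children])        # elitism plus Gaussian mutation

        best = pop[np.argmax([fitness(w) for w in pop])]
        print(np.round(forward(best, X), 2))            # should move toward [0, 1, 1, 0]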