Are Numerical Symbols Fundamental to Neural Computation?
Abstract: Neuroclassicism is the view that cognition is computation and that core mental processes, such as perception, memory, and reasoning, are products of digital computations realized in neural tissue. Cognitive psychologist C. R. Gallistel uses this classical framework to argue that all cognitive information processing is based on symbolic operations performed over quantitative values (i.e., numbers) stored in the brain, much as in a digital computer. Assuming this hypothesis, he investigates how the brain stores quantitative information (i.e., the numerical symbols involved in neural computation). He claims that it is more plausible that memories for numbers are stored within molecular mechanisms inside the neuron than within specific patterns of cell connectivity (the substrate for memory storage assumed by the traditional Hebbian plastic-synapse model). In this paper, I dissect and critique Gallistel's argument, which I find to be undermined by the findings of contemporary neuroscience.
Parameters of Combinatorial Neural Codes
Motivated by recent developments in the mathematical theory of neural codes, we study the structure of error-correcting codes for the binary asymmetric channel. These are also known as combinatorial neural codes and can be seen as the discrete version of neural receptive field codes. We introduce two notions of discrepancy between binary vectors, which are not metric functions in general but nonetheless capture the mathematics of the binary asymmetric channel. In turn, these lead to two new fundamental parameters of combinatorial neural codes, both of which measure the probability that the maximum likelihood decoder fails. We then derive various bounds for the cardinality and weight distribution of a combinatorial neural code in terms of these new parameters, giving examples of codes meeting the bounds with equality.
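
To make the channel model concrete, here is a minimal sketch (my own illustration, not code from the paper) of maximum-likelihood decoding over a binary asymmetric channel, in which a transmitted 1 flips to 0 with probability p and a transmitted 0 flips to 1 with probability q. The toy code and the values of p and q are assumptions for the example; the parameters introduced in the paper bound the probability that a decoder like ml_decode below fails.

import math
import random

# Illustrative sketch: maximum-likelihood decoding over a
# binary asymmetric channel (1 -> 0 with prob. p, 0 -> 1 with prob. q).

def transmit(codeword, p, q):
    """Pass a binary tuple through the asymmetric channel."""
    out = []
    for bit in codeword:
        if bit == 1:
            out.append(0 if random.random() < p else 1)
        else:
            out.append(1 if random.random() < q else 0)
    return tuple(out)

def log_likelihood(received, codeword, p, q):
    """log P(received | codeword) under the asymmetric channel."""
    ll = 0.0
    for r, c in zip(received, codeword):
        if c == 1:
            ll += math.log(p) if r == 0 else math.log(1 - p)
        else:
            ll += math.log(q) if r == 1 else math.log(1 - q)
    return ll

def ml_decode(received, code, p, q):
    """Return the codeword that maximizes the likelihood of `received`."""
    return max(code, key=lambda c: log_likelihood(received, c, p, q))

# A toy "combinatorial neural code": a few binary support vectors.
code = [(0, 0, 0, 0, 0), (1, 1, 0, 0, 1), (0, 1, 1, 1, 0), (1, 0, 1, 0, 1)]
p, q = 0.2, 0.05  # asymmetric: 1 -> 0 errors are more likely than 0 -> 1

sent = code[1]
received = transmit(sent, p, q)
print("sent:", sent, "received:", received, "decoded:", ml_decode(received, code, p, q))

The asymmetry is why ordinary Hamming distance does not capture this channel: with p > q, a received 0 is weaker evidence against a transmitted 1 than a received 1 is against a transmitted 0, which is consistent with the abstract's point that the relevant discrepancy notions need not be metrics.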
Information Compression, Intelligence, Computing, and Mathematics
This paper presents evidence for the idea that much of artificial intelligence, human perception and cognition, mainstream computing, and mathematics may be understood as compression of information via the matching and unification of patterns. This is the basis for the "SP theory of intelligence", outlined in the paper and fully described elsewhere. Relevant evidence may be seen: in empirical support for the SP theory; in some advantages of information compression (IC) in terms of biology and engineering; in our use of shorthands and ordinary words in language; in how we merge successive views of any one thing; in visual recognition; in binocular vision; in visual adaptation; in how we learn lexical and grammatical structures in language; and in perceptual constancies. IC via the matching and unification of patterns may be seen in both computing and mathematics: in IC via equations; in the matching and unification of names; in the reduction or removal of redundancy from unary numbers; in the workings of Post's Canonical System and the transition function in the Universal Turing Machine; in the way computers retrieve information from memory; in systems like Prolog; and in the query-by-example technique for information retrieval. The chunking-with-codes technique for IC may be seen in the use of named functions to avoid repetition of computer code. The schema-plus-correction technique may be seen in functions with parameters and in the use of classes in object-oriented programming. And the run-length coding technique may be seen in multiplication, in division, and in several other devices in mathematics and computing. The SP theory resolves the apparent paradox of "decompression by compression". And the view of computing and cognition as IC is compatible with the uses of redundancy in such things as backup copies that safeguard data and the understanding of speech in a noisy environment.
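
As a concrete illustration of the last three correspondences, here is a short Python sketch (hypothetical examples of mine, not drawn from the SP literature) showing run-length coding, chunking-with-codes via a named function, and schema-plus-correction via a parameterized function.

# Run-length coding: a repeated symbol is stored as (symbol, count),
# much as multiplication abbreviates repeated addition (3 * 4 for 3+3+3+3).
def run_length_encode(seq):
    encoded, i = [], 0
    while i < len(seq):
        j = i
        while j < len(seq) and seq[j] == seq[i]:
            j += 1
        encoded.append((seq[i], j - i))
        i = j
    return encoded

# Chunking-with-codes: a recurring pattern (the function body) is given a
# short "code" (the name), and each call site repeats only the code.
def euclidean_norm(xs):
    return sum(x * x for x in xs) ** 0.5

# Schema-plus-correction: a general schema (the function) plus a
# "correction" (the argument) that supplies what varies between uses.
def greeting(name):
    return f"Hello, {name}!"

print(run_length_encode("aaabbc"))  # [('a', 3), ('b', 2), ('c', 1)]
print(euclidean_norm([3, 4]))       # 5.0
print(greeting("SP"))               # Hello, SP!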