Big data and the SP theory of intelligence
This article is about how the "SP theory of intelligence" and its realisation
in the "SP machine" may, with advantage, be applied to the management and
analysis of big data. The SP system -- introduced in the article and fully
described elsewhere -- may help to overcome the problem of variety in big data:
it has potential as "a universal framework for the representation and
processing of diverse kinds of knowledge" (UFK), helping to reduce the
diversity of formalisms and formats for knowledge and the different ways in
which they are processed. It has strengths in the unsupervised learning or
discovery of structure in data, in pattern recognition, in the parsing and
production of natural language, in several kinds of reasoning, and more. It
lends itself to the analysis of streaming data, helping to overcome the problem
of velocity in big data. Central in the workings of the system is lossless
compression of information: making big data smaller and reducing problems of
storage and management. There is potential for substantial economies in the
transmission of data, for big cuts in the use of energy in computing, for
faster processing, and for smaller and lighter computers. The system provides a
handle on the problem of veracity in big data, with potential to assist in the
management of errors and uncertainties in data. It lends itself to the
visualisation of knowledge structures and inferential processes. A
high-parallel, open-source version of the SP machine would provide a means for
researchers everywhere to explore what can be done with the system and to
create new versions of it.
Comment: Accepted for publication in IEEE Access
Neural Mechanisms for Information Compression by Multiple Alignment, Unification and Search
This article describes how an abstract framework for perception and cognition may be realised in terms of neural mechanisms and neural processing.
This framework — called information compression by multiple alignment, unification and search (ICMAUS) — has been developed in previous research as a generalized model of any system for processing information, either natural or
artificial. It has a range of applications including the analysis and production of natural language, unsupervised inductive learning, recognition of objects and patterns, probabilistic reasoning, and others. The proposals in this article may be seen as an extension and development of
Hebb’s (1949) concept of a ‘cell assembly’.
The article describes how the concept of ‘pattern’ in the ICMAUS framework may be mapped onto a version of the cell
assembly concept and the way in which neural mechanisms may achieve the effect of ‘multiple alignment’ in the ICMAUS framework.
By contrast with the Hebbian concept of a cell assembly, it is proposed here that any one neuron can belong to one and only one assembly. A key feature of the present proposals, which is not part of the Hebbian concept, is that any cell assembly may contain ‘references’ or ‘codes’ that serve to identify one or more other cell assemblies. This mechanism allows information to be stored in a compressed form; it provides a robust mechanism by which assemblies may be connected to form hierarchies and other kinds of structure; it means that assemblies can express
abstract concepts; and it provides solutions to some of the other problems associated with cell assemblies.
Drawing on insights derived from the ICMAUS framework, the article also describes how learning may be achieved with neural mechanisms. This concept of learning is significantly different from the Hebbian concept and appears to provide a better account of what we know about human learning.
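The reference mechanism described above can be illustrated with a toy sketch (the names and data here are hypothetical, not from the article): each assembly holds either primitive symbols or '#'-prefixed codes naming other assemblies, so shared sub-assemblies are stored once and resolved on demand, giving both compression and hierarchical structure.

```python
# Assemblies store either primitive symbols or references ('#name')
# to other assemblies. Shared sub-assemblies are stored once and
# referenced, which compresses the knowledge base and naturally
# forms hierarchies of concepts.
assemblies = {
    "N": ["dog"],            # a noun assembly
    "V": ["barks"],          # a verb assembly
    "S": ["#N", "#V"],       # a sentence assembly referencing N and V
}

def expand(name, assemblies):
    """Recursively resolve '#' references into the full symbol sequence."""
    out = []
    for item in assemblies[name]:
        if item.startswith("#"):
            out.extend(expand(item[1:], assemblies))
        else:
            out.append(item)
    return out

print(expand("S", assemblies))   # → ['dog', 'barks']
```

Because "S" stores only two short codes rather than copies of its constituents, the same noun or verb assembly can be referenced from many higher-level assemblies without duplication.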
Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines
Recent studies have shown that synaptic unreliability is a robust and
sufficient mechanism for inducing the stochasticity observed in cortex. Here,
we introduce Synaptic Sampling Machines, a class of neural network models that
uses synaptic stochasticity as a means to Monte Carlo sampling and unsupervised
learning. Similar to the original formulation of Boltzmann machines, these
models can be viewed as a stochastic counterpart of Hopfield networks, but
where stochasticity is induced by a random mask over the connections. Synaptic
stochasticity plays the dual role of an efficient mechanism for sampling, and a
regularizer during learning akin to DropConnect. A local synaptic plasticity
rule implementing an event-driven form of contrastive divergence enables the
learning of generative models in an on-line fashion. Synaptic sampling machines
perform equally well using discrete-timed artificial units (as in Hopfield
networks) or continuous-timed leaky integrate & fire neurons. The learned
representations are remarkably sparse and robust to reductions in bit precision
and synapse pruning: removal of more than 75% of the weakest connections
followed by cursory re-learning causes a negligible performance loss on
benchmark classification tasks. The spiking neuron-based synaptic sampling
machines outperform existing spike-based unsupervised learners, while
potentially offering substantial advantages in terms of power and complexity,
and are thus promising models for on-line learning in brain-inspired hardware.
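The core idea of inducing stochasticity through a random mask over the connections can be sketched in a few lines of numpy (a minimal illustration of the masking mechanism, not the paper's implementation; the function name and parameters are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_synapse_pass(x, W, p_transmit=0.5):
    """One forward pass in which each synapse transmits independently
    with probability p_transmit: a Bernoulli 'blank-out' mask over the
    connections, akin to DropConnect."""
    mask = rng.random(W.shape) < p_transmit   # fresh random mask each pass
    return (W * mask) @ x                     # masked pre-activation

# Averaging many stochastic passes recovers the mean drive p_transmit * W @ x,
# while single passes fluctuate around it -- the source of the sampling noise.
x = rng.standard_normal(8)
W = rng.standard_normal((4, 8))
samples = np.stack([stochastic_synapse_pass(x, W) for _ in range(5000)])
print(samples.mean(axis=0))   # close to 0.5 * (W @ x)
```

The dual role described in the abstract follows directly: the per-pass fluctuations drive Monte Carlo sampling, while the expected blank-out acts as a regularizer during learning.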
Abnormal connectional fingerprint in schizophrenia: a novel network analysis of diffusion tensor imaging data
The graph theoretical analysis of structural magnetic resonance imaging (MRI) data has received a great deal of interest in recent years to characterize the organizational principles of brain networks and their alterations in psychiatric disorders, such as schizophrenia. However, the characterization of networks in clinical populations can be challenging, since the comparison of connectivity between groups is influenced by several factors, such as the overall number of connections and the structural abnormalities of the seed regions. To overcome these limitations, the current study employed the whole-brain analysis of connectional fingerprints in diffusion tensor imaging data obtained at 3 T from chronic schizophrenia patients (n = 16) and healthy, age-matched control participants (n = 17). Probabilistic tractography was performed to quantify the connectivity of 110 brain areas. The connectional fingerprint of a brain area represents the set of relative connection probabilities to all its target areas and is, hence, less affected by overall white and gray matter changes than absolute connectivity measures. After detecting brain regions with abnormal connectional fingerprints through similarity measures, we tested each of their relative connection probabilities between groups. We found altered connectional fingerprints in schizophrenia patients consistent with a dysconnectivity syndrome. While the medial frontal gyrus showed only reduced connectivity, the connectional fingerprints of the inferior frontal gyrus and the putamen mainly contained relatively increased connection probabilities to frontal, limbic, and subcortical areas. These findings are in line with previous studies that reported abnormalities in striatal–frontal circuits in the pathophysiology of schizophrenia, highlighting the potential utility of connectional fingerprints for the analysis of anatomical networks in the disorder.
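The key property of a connectional fingerprint, that it normalizes away overall connectivity level, is easy to see in a small sketch (illustrative code with hypothetical counts; the study's actual tractography pipeline is not reproduced here):

```python
import numpy as np

def connectional_fingerprint(streamline_counts):
    """Convert absolute tract counts from one seed region into a
    fingerprint of relative connection probabilities (sums to 1)."""
    counts = np.asarray(streamline_counts, dtype=float)
    return counts / counts.sum()

def fingerprint_similarity(f1, f2):
    """Cosine similarity between two fingerprints; 1.0 means identical
    relative connectivity profiles."""
    return float(f1 @ f2 / (np.linalg.norm(f1) * np.linalg.norm(f2)))

# A global scaling of connectivity (e.g. an overall white-matter change)
# leaves the fingerprint unchanged:
a = connectional_fingerprint([100, 50, 25, 25])
b = connectional_fingerprint([200, 100, 50, 50])   # same profile, doubled counts
print(fingerprint_similarity(a, b))                # → 1.0
```

This is why the fingerprint is less affected by overall white and gray matter changes than absolute connectivity measures: only the proportional distribution across target areas matters.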
Ultimate Intelligence Part I: Physical Completeness and Objectivity of Induction
We propose that Solomonoff induction is complete in the physical sense via
several strong physical arguments. We also argue that Solomonoff induction is
fully applicable to quantum mechanics. We show how to choose an objective
reference machine for universal induction by defining a physical message
complexity and physical message probability, and argue that this choice
dissolves some well-known objections to universal induction. We also introduce
many more variants of physical message complexity based on energy and action,
and discuss the ramifications of our proposals.
Comment: Under review at the AGI-2015 conference. An early draft was submitted to ALT-2014. This paper is now being split into two papers, one philosophical, and one more technical. We intend that all installments of the paper series will be on the arXiv.
Multifaceted Changes in Synaptic Composition and Astrocytic Involvement in a Mouse Model of Fragile X Syndrome.
Fragile X Syndrome (FXS), a common inheritable form of intellectual disability, is known to alter neocortical circuits. However, its impact on the diverse synapse types comprising these circuits, or on the involvement of astrocytes, is not well known. We used immunofluorescent array tomography to quantify different synaptic populations and their association with astrocytes in layers 1 through 4 of the adult somatosensory cortex of a FXS mouse model, the FMR1 knockout mouse. The collected multi-channel data contained approximately 1.6 million synapses, which were analyzed using a probabilistic synapse detector. Our study reveals complex, synapse-type- and layer-specific changes in the neocortical circuitry of FMR1 knockout mice. We report an increase of small glutamatergic VGluT1 synapses in layer 4 accompanied by a decrease in large VGluT1 synapses in layers 1 and 4. VGluT2 synapses show a rather consistent decrease in density in layers 1 and 2/3. In all layers, we observe the loss of large inhibitory synapses. Lastly, astrocytic association of excitatory synapses decreases. The ability to dissect the circuit deficits by synapse type and astrocytic involvement will be crucial for understanding how these changes affect circuit function, and ultimately defining targets for therapeutic intervention.
A Local Circuit Model of Learned Striatal and Dopamine Cell Responses under Probabilistic Schedules of Reward
Before choosing, it helps to know both the expected value signaled by a predictive cue and the associated uncertainty that the reward will be forthcoming. Recently, Fiorillo et al. (2003) found that the dopamine (DA) neurons of the SNc exhibit sustained responses related to the uncertainty that a cue will be followed by reward, in addition to phasic responses related to reward prediction errors (RPEs). This suggests that cue-dependent anticipations of the timing, magnitude, and uncertainty of rewards are learned and reflected in components of the DA signals broadcast by SNc neurons. What is the minimal local circuit model that can explain such multifaceted reward-related learning? A new computational model shows how learned uncertainty responses emerge robustly on single trials along with phasic RPE responses, such that both types of DA responses exhibit the empirically observed dependence on conditional probability, expected value of reward, and time since onset of the reward-predicting cue. The model includes three major pathways for computing: immediate expected values of cues, timed predictions of reward magnitudes (and RPEs), and the uncertainty associated with these predictions. The first two model pathways refine those previously modeled by Brown et al. (1999). A third, newly modeled, pathway is formed by medium spiny projection neurons (MSPNs) of the matrix compartment of the striatum, whose axons co-release GABA and a neuropeptide, substance P, both at synapses with GABAergic neurons in the SNr and with the dendrites (in SNr) of DA neurons whose somas are in ventral SNc. Co-release enables efficient computation of sustained DA uncertainty responses that are a non-monotonic function of the conditional probability that a reward will follow the cue.
The new model's incorporation of a striatal microcircuit allowed it to reveal that variability in striatal cholinergic transmission can explain observed differences, between monkeys, in the amplitude of the non-monotonic uncertainty function. Involvement of matrix MSPNs and striatal cholinergic transmission implies a relation between uncertainty in the cue-reward contingency and the action-selection functions of the basal ganglia. The model synthesizes anatomical, electrophysiological, and behavioral data regarding the midbrain DA system in a novel way, by relating the ability to compute uncertainty, in parallel with other aspects of reward contingencies, to the unique distribution of SP inputs in ventral SN.
National Science Foundation (SBE-354378); Higher Educational Council of Turkey; Canakkale Onsekiz Mart University of Turkey
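The non-monotonic dependence of uncertainty on reward probability has a simple statistical core, worth stating explicitly (a worked illustration of the Bernoulli variance, not the circuit model itself): for a cue followed by reward of magnitude m with probability p, the expected value grows linearly in p while the variance p(1 − p)m² peaks at p = 0.5 and vanishes at p = 0 and p = 1, matching the shape of the sustained DA responses reported by Fiorillo et al.

```python
def expected_value(p, magnitude=1.0):
    """Expected reward signaled by a cue with reward probability p."""
    return p * magnitude

def reward_uncertainty(p, magnitude=1.0):
    """Variance of a Bernoulli reward, p*(1-p)*m**2: a non-monotonic
    function of p that peaks at p = 0.5 and vanishes at p = 0 and 1."""
    return p * (1.0 - p) * magnitude**2

for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(p, expected_value(p), reward_uncertainty(p))
```

Note that expected value and uncertainty are dissociable signals: p = 0.25 and p = 0.75 yield the same uncertainty but different expected values, which is why the model needs separate pathways for the two quantities.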