The problem of learning in the absence of external intelligence is discussed
in the context of a simple model. The model consists of a set of randomly
connected, or layered, integrate-and-fire neurons. Inputs to and outputs from
the environment are connected randomly to subsets of neurons. The connections
between firing neurons are strengthened or weakened according to whether the
action is successful or not. The model departs from the traditional
gradient-descent-based approaches to learning by operating at a highly
susceptible ``critical'' state, with low activity and sparse connections
between firing neurons. Quantitative studies on the performance of our model in
a simple association task show that by tuning our system close to this critical
state we can obtain dramatic gains in performance.

Comment: 9 pages (TeX), 3 figures supplied on request
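The reinforcement scheme described above can be illustrated with a minimal sketch: neurons fire when input from currently active neurons exceeds a threshold, and the synapses between consecutively firing neurons are strengthened or weakened according to a success signal. The network size, threshold, sparsity, and plasticity step below are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 20       # number of neurons (illustrative, not from the paper)
THETA = 1.0  # firing threshold (assumed)
DELTA = 0.02  # plasticity step size (assumed)

# Random, sparse excitatory weights: W[i, j] is the strength of
# the connection from neuron j to neuron i.
W = rng.uniform(0.0, 1.0, size=(N, N)) * (rng.random((N, N)) < 0.1)

def step(active):
    """One propagation step: a neuron fires if the summed input
    from the currently firing neurons exceeds the threshold."""
    drive = W[:, active].sum(axis=1)
    return np.flatnonzero(drive > THETA)

def learn(pre, post, success):
    """Reward-modulated plasticity: strengthen (on success) or
    weaken (on failure) only the synapses between neurons that
    fired in consecutive steps; weights stay non-negative."""
    sign = DELTA if success else -DELTA
    for j in pre:
        for i in post:
            if W[i, j] > 0.0:
                W[i, j] = max(W[i, j] + sign, 0.0)
```

A training loop would repeatedly activate an input subset, propagate with `step`, compare the firing output subset against the desired action, and call `learn` with the resulting success signal; only the synapses that actually participated in the action are modified.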