Another Look at Quantum Neural Computing
The term quantum neural computing indicates a unity in the functioning of the
brain. It assumes that the neural structures perform classical processing and
that the virtual particles associated with the dynamical states of the
structures define the underlying quantum state. We revisit the concept and also
summarize new arguments related to the learning modes of the brain in response
to sensory input that may be aggregated into three types: associative,
reorganizational, and quantum. The associative and reorganizational types are
quite apparent based on experimental findings; it is much harder to establish
that the brain as an entity exhibits quantum properties. We argue that the
reorganizational behavior of the brain may be viewed as inner adjustment
corresponding to its quantum behavior at the system level. Not only neural
structures but also their higher abstractions may be seen as whole entities. We
consider the dualities associated with the behavior of the brain and how these
dualities are bridged.
Comment: 10 pages, 4 figures; based on a lecture given at Czech Technical
University, Prague, on June 25, 2009. This revision adds clarifying remarks
and corrects typographical errors.
Probability and the Classical/Quantum Divide
This paper considers the problem of distinguishing between classical and
quantum domains in macroscopic phenomena using tests based on probability. It
presents a condition on the ratio of the probability that outcomes are the
same (Ps) to the probability that they are different (Pn). Given three events,
Ps/Pn for the classical case, where there are no 3-way coincidences, is
one-half, whereas for the quantum state it is one-third. For non-maximally
entangled objects, we find that so long as r < 5.83 they can be separated from
classical objects using a probability test.
maximally entangled particles (r = 1), we propose that the value of 5/12 be
used for Ps/Pn to separate classical and quantum states when no other
information is available and measurements are noisy.
Comment: 12 pages; 1 figure
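Since the proposed test reduces to simple arithmetic on observed counts, a minimal sketch may help; note that the threshold 5/12 is just the midpoint of the classical ratio 1/2 and the quantum ratio 1/3. The Python snippet below is illustrative only (the paper gives no code, and the function and variable names are hypothetical): it estimates Ps/Pn from counts of same and different outcomes and classifies against the 5/12 threshold.

```python
from fractions import Fraction

# Reference ratios from the abstract: with three events and no 3-way
# coincidences, Ps/Pn is 1/2 in the classical case and 1/3 for the
# quantum state.
CLASSICAL_RATIO = Fraction(1, 2)
QUANTUM_RATIO = Fraction(1, 3)

# The proposed threshold 5/12 is the midpoint of the two reference ratios.
THRESHOLD = (CLASSICAL_RATIO + QUANTUM_RATIO) / 2
assert THRESHOLD == Fraction(5, 12)

def classify(n_same: int, n_diff: int) -> str:
    """Classify noisy outcome counts as 'classical' or 'quantum'.

    n_same: trials in which the outcomes agreed (estimates Ps)
    n_diff: trials in which the outcomes differed (estimates Pn)
    """
    ratio = Fraction(n_same, n_diff)
    return "quantum" if ratio < THRESHOLD else "classical"

print(classify(100, 250))  # Ps/Pn = 0.400 < 5/12 ~ 0.417 -> quantum
print(classify(100, 210))  # Ps/Pn ~ 0.476 > 5/12        -> classical
```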
Neural Network Capacity for Multilevel Inputs
This paper examines the memory capacity of generalized neural networks.
Hopfield networks trained with a variety of learning techniques are
investigated for their capacity for both binary and non-binary alphabets. It is
shown that the capacity can be increased substantially when multilevel inputs
are used.
New learning strategies are proposed to increase Hopfield network capacity, and
the scalability of these methods is also examined with respect to the size of the
network. The ability to recall entire patterns from stimulation of a single
neuron is examined for the increased-capacity networks.
Comment: 24 pages, 17 figures
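For readers unfamiliar with the baseline being extended, here is a minimal sketch of a standard binary (bipolar) Hopfield network with Hebbian outer-product training and asynchronous recall, in Python with NumPy. It is a textbook reference point only: the paper's proposed learning strategies and multilevel (non-binary) extensions are not reproduced here, and all function and variable names are illustrative.

```python
import numpy as np

def train_hebbian(patterns: np.ndarray) -> np.ndarray:
    """Outer-product (Hebbian) weights for bipolar (+1/-1) patterns.

    patterns: array of shape (num_patterns, num_neurons)
    """
    _, n = patterns.shape
    w = patterns.T @ patterns / n   # sum of outer products, scaled by network size
    np.fill_diagonal(w, 0.0)        # no self-connections
    return w

def recall(w: np.ndarray, state: np.ndarray, max_passes: int = 100) -> np.ndarray:
    """Asynchronous updates in random order until the state stops changing."""
    state = state.copy()
    rng = np.random.default_rng(0)
    for _ in range(max_passes):
        prev = state.copy()
        for i in rng.permutation(len(state)):
            state[i] = 1 if w[i] @ state >= 0 else -1
        if np.array_equal(state, prev):
            break                    # reached a fixed point
    return state

# Tiny demo: store two orthogonal 8-neuron patterns, then recover the
# first one from a probe with a single flipped bit.
pats = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                 [1, 1, -1, -1, 1, 1, -1, -1]])
w = train_hebbian(pats)
probe = pats[0].copy()
probe[0] = -probe[0]                # corrupt one neuron of the stored pattern
print(np.array_equal(recall(w, probe), pats[0]))  # expected: True
```

With plain Hebbian training the capacity of a binary Hopfield network is famously only about 0.14N patterns for N neurons, which is the kind of baseline the paper's learning strategies and multilevel alphabets aim to improve on.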