Superpositional Quantum Network Topologies
We introduce superposition-based quantum networks composed of (i) the
classical perceptron model of multilayered, feedforward neural networks and
(ii) the algebraic model of evolving reticular quantum structures as described
in quantum gravity. The main feature of this model is moving from particular
neural topologies to a quantum metastructure which embodies many differing
topological patterns. Using quantum parallelism, training is possible on
superpositions of different network topologies. As a result, not only classical
transition functions but also the topology itself becomes a subject of
training. A further distinctive property of our model is that particular neural
networks, with different topologies, are quantum states. We consider
high-dimensional dissipative quantum structures as candidates for
implementation of the model.
Comment: 10 pages, LaTeX2e
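As a concrete illustration of the metastructure idea, a minimal sketch of such a state follows; the basis labels |T_i> and amplitudes c_i are assumed notation for this sketch, not the paper's own:

    \[
      \lvert \Psi \rangle \;=\; \sum_{i} c_i \, \lvert T_i \rangle ,
      \qquad \sum_{i} \lvert c_i \rvert^{2} = 1 ,
    \]

where each basis state \(\lvert T_i \rangle\) encodes one feedforward network topology. A training operator acting linearly on \(\lvert \Psi \rangle\) updates all topologies at once, which is what makes the topology itself trainable.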
Backpropagation training in adaptive quantum networks
We introduce a robust, error-tolerant adaptive training algorithm for
generalized learning paradigms in high-dimensional superposed quantum networks,
or \emph{adaptive quantum networks}. The formalized procedure applies standard
backpropagation training across a coherent ensemble of discrete topological
configurations of individual neural networks, each of which is formally merged
into an appropriate linear superposition within a predefined, decoherence-free
subspace. Quantum parallelism facilitates simultaneous training and revision of
the system within this coherent state space, resulting in accelerated
convergence to a stable network attractor under subsequent iterations of the
implemented backpropagation algorithm. Parallel evolution of linearly
superposed networks under backpropagation training provides quantitative,
numerical indications for optimizing both single-neuron activation functions
and the reconfiguration of whole-network quantum structure.
Comment: Talk presented at "Quantum Structures - 2008", Gdansk, Poland
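The quantum procedure itself cannot be reproduced classically, but a toy classical analogue helps fix the idea: train an ensemble of feedforward topologies in parallel with standard backpropagation and re-weight the ensemble toward lower-loss configurations. Everything below (the XOR task, the hidden-layer widths, the exponential re-weighting rule) is an illustrative assumption, not the paper's algorithm:

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def init_net(hidden, n_in=2, n_out=1):
        # One hidden layer of width `hidden`: a single point in topology space.
        return [rng.normal(0, 1, (n_in, hidden)),
                rng.normal(0, 1, (hidden, n_out))]

    def backprop_step(net, X, y, lr):
        W1, W2 = net
        h = sigmoid(X @ W1)                  # hidden activations
        out = sigmoid(h @ W2)                # network output
        err = out - y
        d_out = err * out * (1 - out)        # output-layer delta
        d_h = (d_out @ W2.T) * h * (1 - h)   # hidden-layer delta
        W2 -= lr * h.T @ d_out               # standard gradient updates
        W1 -= lr * X.T @ d_h
        return float((err ** 2).mean())      # MSE loss for this step

    # XOR as a toy task; the "ensemble" holds hidden widths 2..5.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
    y = np.array([[0], [1], [1], [0]], float)
    nets = [init_net(h) for h in (2, 3, 4, 5)]
    amps = np.full(len(nets), 1 / np.sqrt(len(nets)))  # uniform amplitudes

    for _ in range(2000):
        losses = np.array([backprop_step(n, X, y, lr=0.5) for n in nets])
        amps *= np.exp(-losses)          # favor lower-loss topologies
        amps /= np.linalg.norm(amps)     # renormalize like a quantum state

    print(np.round(amps ** 2, 3))        # probability mass per topology

The squared amplitudes play the role of the superposition weights: after training, the probability mass concentrates on the topologies that converge fastest, loosely mirroring convergence to a stable network attractor.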
Inhibition in multiclass classification
The role of inhibition is investigated in a multiclass support vector machine formalism inspired by the brain structure of insects. The so-called mushroom bodies have a set of output neurons, or classification functions,
that compete with each other to encode a particular input. Strongly active output neurons depress or inhibit the remaining outputs without knowing which is correct or incorrect. Accordingly, we propose to use a
classification function that embodies unselective inhibition and train it in the large-margin classifier framework. Inhibition leads to more robust classifiers in the sense that they perform well over larger regions of hyperparameter space when assessed with leave-one-out strategies. We also show that the classifier with inhibition is a tight bound on probabilistic exponential models and is Bayes consistent for 3-class problems.
These properties make this approach useful for data sets with a limited number of labeled examples. For larger data sets, there is no significant advantage over other multiclass SVM approaches.
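A minimal sketch of what unselective inhibition can look like at the decision stage, assuming one-vs-rest decision values and a hypothetical inhibition strength alpha (neither is specified in the abstract):

    import numpy as np

    def inhibited_scores(scores, alpha=0.5):
        # Each output is depressed by the summed activity of the
        # competing outputs, with no knowledge of the correct class.
        scores = np.asarray(scores, dtype=float)
        total = scores.sum(axis=-1, keepdims=True)
        return scores - alpha * (total - scores)

    # Three one-vs-rest SVM decision values for one input.
    raw = np.array([1.2, 0.9, -0.3])
    print(inhibited_scores(raw))                  # inhibited decision values
    print(int(np.argmax(inhibited_scores(raw))))  # predicted class

Note that the paper trains the inhibited classification function inside the large-margin framework rather than applying inhibition post hoc; the sketch only shows the competition mechanism.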
A Family of Maximum Margin Criterion for Adaptive Learning
In recent years, pattern analysis has played an important role in data mining
and recognition, and many variants have been proposed to handle complicated
scenarios. High dimensionality of data samples is familiar in the literature,
and both high dimensionality and large data volumes have become commonplace in
real-world applications. In this work, an improved maximum margin criterion
(MMC) method is first introduced. With the new definition of MMC, several
variants, including random MMC, layered MMC, and 2D^2 MMC, are designed to make
adaptive learning applicable. In particular, the MMC network is developed to
learn deep features of images in the manner of simple deep networks.
Experimental results on a diversity of data sets demonstrate that the proposed
MMC methods have sufficient discriminant ability to be adopted in complicated
application scenarios.
Comment: 14 pages
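For reference, a minimal sketch of the classical maximum margin criterion that the paper builds on (its improved definition and the random/layered/2D^2 variants are not reproduced here): MMC seeks a projection W maximizing tr(W^T (S_b - S_w) W), which is solved by the top eigenvectors of S_b - S_w:

    import numpy as np
    from scipy.linalg import eigh

    def mmc_projection(X, y, n_components):
        # Between-class (S_b) and within-class (S_w) scatter matrices.
        classes = np.unique(y)
        mean = X.mean(axis=0)
        d = X.shape[1]
        S_b = np.zeros((d, d))
        S_w = np.zeros((d, d))
        for c in classes:
            Xc = X[y == c]
            mc = Xc.mean(axis=0)
            S_b += len(Xc) * np.outer(mc - mean, mc - mean)
            S_w += (Xc - mc).T @ (Xc - mc)
        # Maximize tr(W^T (S_b - S_w) W): take the top eigenvectors.
        vals, vecs = eigh(S_b - S_w)
        order = np.argsort(vals)[::-1]
        return vecs[:, order[:n_components]]

Unlike Fisher discriminant analysis, no inversion of S_w is required, so the criterion remains well defined when S_w is singular, which is precisely the high-dimensional, small-sample setting the abstract mentions.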