Replica Symmetry Breaking and the Kuhn-Tucker Cavity Method in simple and multilayer Perceptrons
Within the Kuhn-Tucker cavity method introduced in a former paper, we study
optimal stability learning in situations where, in the replica formalism,
replica symmetry may be broken, namely
(i) the case of a simple perceptron above the critical loading, and
(ii) the case of two-layer AND-perceptrons learning with maximal stability.
We find that the deviation of our cavity solution from the replica-symmetric
one in these cases is a clear indication that replica symmetry breaking is
necessary. In either case the cavity solution tends to underestimate the
storage capabilities of the networks.
Comment: 32 pages, LaTeX source with 9 .eps files enclosed, accepted by J. Phys. I (France)
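The "optimal stability learning" the abstract analyzes can be illustrated with the classical minover rule for training a maximal-stability perceptron. This is only a stand-in sketch: the paper's Kuhn-Tucker cavity method is an analytical technique, not this algorithm, and all names here are illustrative.

```python
import numpy as np

def minover(X, y, n_iter=5000):
    """Minover-style learning of a maximal-stability perceptron.
    Illustrative stand-in, not the paper's cavity method.
    X: patterns of shape (P, N); y: labels in {-1, +1}."""
    P, N = X.shape
    w = y[0] * X[0].astype(float)         # start along the first pattern
    for _ in range(n_iter):
        stabilities = y * (X @ w)         # un-normalized pattern stabilities
        mu = int(np.argmin(stabilities))  # pattern with minimal stability
        w += y[mu] * X[mu] / N            # Hebbian step on that pattern
    return w
```

Repeatedly reinforcing the least-stable pattern drives the weight vector toward the solution of maximal margin, the quantity whose existence limit defines the critical loading discussed in the abstract.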
Corporation robots
Nowadays, various robots are built to perform multiple tasks, and having multiple
robots work together on a single task is becoming important. One key requirement
for multiple robots to cooperate is that a robot must be able to follow another
robot. This project concerns the design and construction of line-following robots,
focusing on building a leader robot and a slave robot. Both robots follow a line
while carrying a load. A single robot is limited in its load-handling capacity:
it cannot carry a heavy load or a long load. An easier way to overcome this
limitation is to have a group of mobile robots working together to accomplish
an aim that no single robot can achieve alone.
Empirical learning aided by weak domain knowledge in the form of feature importance
Standard hybrid learners that use domain knowledge require strong knowledge that is hard and expensive to acquire. Weaker domain knowledge, however, can still provide the benefits of prior knowledge while being cost-effective. We present weak knowledge in the form of feature relative importance (FRI): a real-valued approximation of a feature's importance provided by experts. The advantage of using this knowledge is demonstrated by IANN, a modified multilayer neural network algorithm. IANN is a very simple modification of the standard neural network algorithm, yet it attains significant performance gains. Experimental results in the field of molecular biology show higher performance than other empirical learning algorithms, including standard backpropagation and support vector machines. IANN's performance is even comparable to KBANN, a theory-refinement system that uses stronger domain knowledge. This shows that feature relative importance can significantly improve the performance of existing empirical learning algorithms with minimal effort.
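The abstract does not specify how IANN injects FRI into the network, so the following is only one plausible mechanism, sketched under the assumption that expert importances bias the first-layer weights; the function name and scaling rule are hypothetical, not the published IANN modification.

```python
import numpy as np

def fri_init(n_in, n_hidden, importance, rng=None):
    """Initialize first-layer MLP weights so that input columns the expert
    rates as more important start proportionally larger.
    Hypothetical illustration of using feature relative importance (FRI);
    not the actual IANN rule, which the abstract does not specify."""
    rng = np.random.default_rng(rng)
    imp = np.asarray(importance, dtype=float)
    scale = imp / imp.mean()                     # relative, mean-normalized
    W = rng.standard_normal((n_hidden, n_in)) * 0.1
    return W * scale                             # broadcast over input columns
```

Features the expert deems unimportant then start with near-zero fan-in, so gradient descent must actively justify giving them weight.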
Faster Training in Nonlinear ICA using MISEP
MISEP has been proposed as a generalization of the INFOMAX method in two directions: (1) handling of nonlinear mixtures, and (2) learning the nonlinearities to be used at the outputs, making the method suitable for separating components with a wide range of statistical distributions. In all implementations up to now, MISEP has used multilayer perceptrons (MLPs) to perform the nonlinear ICA operation. Use of MLPs sometimes leads to relatively slow training. This has been attributed, at least in part, to the non-local character of the MLP's units. This paper investigates the possibility of using a network of radial basis function (RBF) units to perform the nonlinear ICA operation. It shows that the local character of the RBF network's units allows a significant speedup in the training of the system. The paper gives a brief introduction to the basics of the MISEP method, and presents experimental results showing the speed advantage of using an RBF-based network to perform the ICA operation.
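The "local character" of RBF units that the abstract credits for the speedup can be seen in a minimal forward pass: each unit's activation is Gaussian in the distance to its center, so only units near the input respond and need updating. This is a generic RBF sketch, not the MISEP network itself.

```python
import numpy as np

def rbf_forward(x, centers, widths, weights):
    """Forward pass of a radial basis function network.
    Activations fall off with distance to each center, so each unit
    responds only locally -- unlike a sigmoidal MLP unit, which responds
    over an entire half-space."""
    d2 = ((x[None, :] - centers) ** 2).sum(axis=1)   # squared distances
    phi = np.exp(-d2 / (2.0 * widths ** 2))          # Gaussian activations
    return weights @ phi, phi
```

For an input sitting on one center and far from another, the first unit saturates near 1 while the second is effectively silent, which is why a training step for one region of input space barely disturbs the others.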
Regression modeling for digital test of ΣΔ modulators
The cost of analogue and mixed-signal circuit testing is an important
bottleneck in the industry, due to time-consuming verification of
specifications that requires state-of-the-art Automatic Test Equipment.
In this paper, we apply the concept of Alternate Test to achieve digital
testing of converters. By training an ensemble of regression models that
maps simple digital defect-oriented signatures onto the Signal to Noise
and Distortion Ratio (SNDR), an average error of 1.7% is achieved. Beyond
the inference of functional metrics, we show that the approach can provide
interesting diagnosis information.
Funding: Ministerio de Educación y Ciencia TEC2007-68072/MIC; Junta de Andalucía TIC 5386, CT 30
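The abstract does not name the regressors in its ensemble, so the sketch below stands in with a bagged ridge ensemble: several models fit on bootstrap resamples of (signature, SNDR) pairs, with predictions averaged. All names and the choice of ridge are assumptions for illustration only.

```python
import numpy as np

def fit_ridge(X, y, lam=1e-2):
    """Closed-form ridge regression with a bias column."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    A = Xb.T @ Xb + lam * np.eye(Xb.shape[1])
    return np.linalg.solve(A, Xb.T @ y)

def bagged_predict(X_train, y_train, X_test, n_models=25, seed=0):
    """Average an ensemble of ridge models fit on bootstrap resamples.
    A generic stand-in for the paper's (unspecified) regression ensemble
    mapping digital defect-oriented signatures onto SNDR."""
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X_test, np.ones((len(X_test), 1))])
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X_train), len(X_train))  # bootstrap sample
        preds.append(Xb @ fit_ridge(X_train[idx], y_train[idx]))
    return np.mean(preds, axis=0)
```

Averaging over resamples reduces the variance of any single regression fit, which matters when the signature-to-SNDR mapping is learned from a limited set of simulated or measured devices.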
Towards a learning-theoretic analysis of spike-timing dependent plasticity
This paper suggests a learning-theoretic perspective on how synaptic
plasticity benefits global brain functioning. We introduce a model, the
selectron, that (i) arises as the fast-time-constant limit of leaky
integrate-and-fire neurons equipped with spike-timing dependent plasticity
(STDP) and (ii) is amenable to theoretical analysis. We show that the selectron
encodes reward estimates into spikes and that an error bound on spikes is
controlled by a spiking margin and the sum of synaptic weights. Moreover, the
efficacy of spikes (their usefulness to other reward-maximizing selectrons)
also depends on total synaptic strength. Finally, based on our analysis, we
propose a regularized version of STDP, and show that the regularization
improves the robustness of neuronal learning when faced with multiple stimuli.
Comment: To appear in Adv. Neural Inf. Proc. Systems
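Since the abstract ties its error bound to the sum of synaptic weights, a regularized STDP rule plausibly shrinks total weight alongside the usual pair-based update. The sketch below is a generic stand-in under that assumption; the paper's exact regularizer and parameters are not given here.

```python
import numpy as np

def stdp_update(w, pre_t, post_t, a_plus=0.01, a_minus=0.012,
                tau=20.0, lam=0.001):
    """Pair-based STDP step with an L1-style penalty on total synaptic
    weight -- a generic stand-in for the paper's regularized STDP, whose
    exact form differs. pre_t, post_t: spike times (ms) per synapse."""
    dt = post_t - pre_t
    dw = np.where(dt > 0,
                  a_plus * np.exp(-dt / tau),    # pre before post: potentiate
                  -a_minus * np.exp(dt / tau))   # post before pre: depress
    dw -= lam * np.sign(w)                       # shrink total weight
    return np.clip(w + dw, 0.0, 1.0)             # keep weights bounded
```

The penalty term pulls every active synapse toward zero each step, so only synapses whose timing correlations consistently earn potentiation stay strong, which is the flavor of robustness the abstract claims under multiple stimuli.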
Functional Multi-Layer Perceptron: a Nonlinear Tool for Functional Data Analysis
In this paper, we study a natural extension of Multi-Layer Perceptrons (MLP)
to functional inputs. We show that fundamental results for classical MLP can be
extended to functional MLP. We obtain universal approximation results that show
the expressive power of functional MLP is comparable to that of numerical MLP.
We obtain consistency results which imply that the estimation of optimal
parameters for functional MLP is statistically well defined. We finally show on
simulated and real-world data that the proposed model performs in a very
satisfactory way.
Comment: http://www.sciencedirect.com/science/journal/0893608
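A common way to realize "functional inputs" is to let each hidden unit apply a linear functional, an integral of the input function against a weight function, before the nonlinearity. The discretization below (functions sampled on a grid, Riemann-sum integral) is an illustrative assumption, not necessarily the paper's construction.

```python
import numpy as np

def functional_layer(x_samples, grid, weight_funcs, bias):
    """First layer of a functional MLP: each hidden unit computes the
    linear functional <w_i, x> ~= sum_k w_i(t_k) x(t_k) dt (Riemann sum),
    then applies tanh. Grid-sampling is illustrative only."""
    dt = grid[1] - grid[0]
    integrals = (np.asarray(weight_funcs) * x_samples).sum(axis=1) * dt
    return np.tanh(integrals + bias)
```

With the input x(t) = sin(2πt) and weight functions sin(2πt) and cos(2πt), the two functionals evaluate to 1/2 and 0 respectively, showing the layer picking out the component of the input function along each weight function.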
Function Approximation With Multilayered Perceptrons Using L1 Criterion
The least-squares error, or L2 criterion, approach has been commonly used for
functional approximation and generalization in the error backpropagation
algorithm. The purpose of this study is to present a least absolute error
criterion for sigmoidal backpropagation, rather than the usual least-squares
error criterion. We present the structure of the error function to be minimized
and its derivatives with respect to the weights to be updated. The focus of the
study is on the multilayer perceptron (MLP) with a single hidden layer, but the
implementation may be extended to models with two or more hidden layers.
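The change from the L2 to the L1 criterion shows up in the output-layer error signal: the residual (t − y) of squared error is replaced by its sign, the (sub)gradient of |t − y|. The helper below sketches that single difference for a sigmoidal output unit; it is a minimal illustration, not the paper's full derivation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def output_delta(y_target, y_out, criterion="L1"):
    """Output-layer error signal for sigmoidal backpropagation.
    L2: delta = (t - y) * y * (1 - y)
    L1: delta = sign(t - y) * y * (1 - y), since d|t - y|/dy = -sign(t - y)."""
    err = y_target - y_out
    deriv = y_out * (1.0 - y_out)          # derivative of the sigmoid
    if criterion == "L1":
        return np.sign(err) * deriv
    return err * deriv
```

Because the L1 delta depends only on the sign of the residual, every misfit pattern pushes the weights with equal force regardless of error size, which is what makes the absolute-error criterion less sensitive to outliers than least squares.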