9 research outputs found

    Towards semen quality assessment using neural networks


    Design and evaluation of neural classifiers

    In this paper we propose a method for the design of feed-forward neural classifiers based on regularization and adaptive architectures. Using a penalized maximum likelihood scheme, we derive a modified form of the entropic error measure and an algebraic estimate of the test error. In conjunction with Optimal Brain Damage pruning, the test error estimate is used to optimize the network architecture. The scheme is evaluated on an artificial and a real-world problem.
    INTRODUCTION
    Pattern recognition is an important aspect of most scientific fields and indeed the objective of most neural network applications. Some of the by now classic applications of neural networks, like Sejnowski and Rosenberg's "NetTalk", concern classification of patterns into a finite number of categories. In modern approaches to pattern recognition, the objective is to produce class probabilities for a given pattern. Using Bayes decision theory, the "hard" classifier selects the class with the highest class probability, hence…
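
    The OBD criterion this abstract builds on scores each weight by the estimated increase in training error if that weight is deleted. As a minimal sketch in Python (ours, not the paper's code), assuming the usual diagonal-Hessian approximation s_k = 0.5 * h_kk * w_k^2, one pruning step looks like this:

    import numpy as np

    def obd_saliencies(weights, hessian_diag):
        # Estimated loss increase when each weight is removed:
        # s_k = 0.5 * h_kk * w_k^2 (diagonal-Hessian approximation).
        return 0.5 * hessian_diag * weights ** 2

    def prune_smallest(weights, hessian_diag, frac=0.1):
        # Zero out the fraction of weights with the smallest saliency.
        s = obd_saliencies(weights, hessian_diag)
        k = int(frac * weights.size)
        idx = np.argsort(s)[:k]            # least important weights first
        pruned = weights.copy()
        pruned[idx] = 0.0
        return pruned, idx

    rng = np.random.default_rng(0)
    w = rng.normal(size=20)
    h = rng.uniform(0.1, 1.0, size=20)     # stand-in for the Hessian diagonal
    w_pruned, removed = prune_smallest(w, h, frac=0.2)
    print("removed weight indices:", removed)

    In the paper's scheme each such pruning step is followed by retraining, and the algebraic test-error estimate decides which of the resulting architectures to keep.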

    Methods for Model Complexity Reduction for the Nonlinear Calibration of Amplifiers Using Volterra Kernels

    Volterra models allow modeling of nonlinear dynamical systems, even though they require the estimation of a large number of parameters and consequently have potentially large computational costs. The pruning of Volterra models is thus of fundamental importance to reduce the computational costs of nonlinear calibration and to improve stability and speed, while preserving accuracy. Several techniques (LASSO, DOMP and OBS) and their variants (WLASSO and OBD) are compared in this paper for the experimental calibration of an IF amplifier. The results show that Volterra models can be simplified, yielding models that are 4–5 times sparser with a limited impact on accuracy. An improvement of about 6 dB in Error Vector Magnitude (EVM) is obtained, improving the dynamic range of the amplifiers. The Symbol Error Rate (SER) is greatly reduced by calibration at large input power, and pruning reduces the model complexity without hindering SER. Hence, pruning improves the dynamic range of the amplifier with almost an order-of-magnitude reduction in model complexity. We propose using the OBS technique, borrowed from the neural network field, in conjunction with the better-known DOMP technique, to prune the model with the best accuracy. The simulations show, in fact, that the OBS and DOMP techniques outperform the others, while OBD, LASSO and WLASSO are, in turn, less efficient. A methodology for pruning in the complex domain is described, based on the Frisch–Waugh–Lovell (FWL) theorem, to separate the linear and nonlinear sections of the model. This is essential because linear models are used for equalization and must not be pruned, in order to preserve model generality vis-à-vis channel variations, whereas nonlinear models must be pruned as much as possible to minimize the computational overhead. This methodology can be extended to models other than Volterra, as the only conditions we impose on the nonlinear model are that it is feedforward and linear in the parameters.
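
    Since the Volterra model is linear in its parameters, the pruning problem can be illustrated with any sparse linear estimator. The sketch below is our illustration, not the paper's calibration code: it is real-valued (the paper works in the complex domain) and uses only LASSO, one of the several techniques compared:

    import numpy as np
    from sklearn.linear_model import Lasso

    def volterra_features(x, memory=3):
        # First-order terms x[n-i] and second-order terms x[n-i]*x[n-j], i <= j.
        cols = [np.roll(x, i) for i in range(memory)]
        for i in range(memory):
            for j in range(i, memory):
                cols.append(np.roll(x, i) * np.roll(x, j))
        return np.column_stack(cols)[memory:]    # drop wrapped-around samples

    rng = np.random.default_rng(0)
    x = rng.normal(size=500)
    y = (x + 0.2 * x ** 2 + 0.05 * np.roll(x, 1))[3:]   # toy nonlinear "amplifier"

    X = volterra_features(x, memory=3)
    fit = Lasso(alpha=1e-3).fit(X, y)
    kept = np.flatnonzero(np.abs(fit.coef_) > 1e-6)
    print(f"kept {kept.size} of {fit.coef_.size} parameters")

    The FWL-based separation described above would keep the first-order (linear) block intact and restrict such pruning to the higher-order terms.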

    A Comparison of FFNN and GARCH Models on Jakarta Stock Exchange IHSG Data

    This paper compares the Feed Forward Neural Network (FFNN) model with the Generalized Autoregressive Conditional Heteroscedasticity (GARCH) model on time series data. For the FFNN model, the training method used is Levenberg-Marquardt with a logistic sigmoid activation function. The method used to obtain the optimal network architecture for the FFNN model is Optimal Brain Damage (OBD) pruning. In the OBD method, weights with small saliency (influence on the change in error) are removed from the network. By removing unimportant weights, the network's performance, in terms of both training and test error, is expected to improve. A case study is carried out on time series data of the Jakarta Stock Exchange composite index (Indeks Harga Saham Gabungan, IHSG). Keywords: FFNN, Optimal Brain Damage, GARCH, IHSG
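
    For readers unfamiliar with the GARCH baseline, the GARCH(1,1) conditional-variance recursion sigma2[t] = omega + alpha * eps[t-1]^2 + beta * sigma2[t-1] can be sketched as follows (our illustration on synthetic returns, not the paper's IHSG fit):

    import numpy as np

    def garch11_variance(eps, omega=0.1, alpha=0.1, beta=0.8):
        # Filter a return/residual series into conditional variances.
        sigma2 = np.empty_like(eps)
        sigma2[0] = eps.var()              # a common initialization choice
        for t in range(1, len(eps)):
            sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
        return sigma2

    rng = np.random.default_rng(1)
    returns = rng.normal(scale=0.01, size=250)   # stand-in for IHSG returns
    print(garch11_variance(returns)[:5])

    In practice omega, alpha, and beta are estimated by maximum likelihood; the values above are placeholders for the recursion only.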

    Reduced HyperBF networks: practical optimization, regularization, and applications in bioinformatics

    A hyper basis function network (HyperBF) is a generalized radial basis function network (RBF) in which the activation function is a radial function of a weighted distance. The local weighting of the distance accounts for the variation in local scaling and discriminative power along each feature. Such generalization makes HyperBF networks capable of interpolating decision functions with high accuracy. However, this added complexity also makes HyperBF networks susceptible to overfitting. Moreover, training a HyperBF network requires the weights, centers, and local scaling factors to be optimized simultaneously. In the case of a relatively large dataset with a large network structure, such optimization becomes computationally challenging. In this work, a new regularization method that performs soft local dimension reduction and weight decay is presented. The regularized HyperBF (Reduced HyperBF) network is shown to provide classification accuracy comparable to a Support Vector Machine (SVM) while requiring a significantly smaller network structure. Furthermore, the soft local dimension reduction is shown to be informative for ranking features based on their localized discriminative power. In addition, a practical training approach for constructing HyperBF networks is presented. This approach uses hierarchical clustering to initialize neurons, followed by gradient optimization using a scaled Rprop algorithm with a localized partial backtracking step (iSRprop). Experimental results on a number of datasets show faster and smoother convergence than the regular Rprop algorithm.
    The proposed Reduced HyperBF network is applied to two problems in bioinformatics. The first is the detection of transcription start sites (TSS) in human DNA. A novel method for improving the accuracy of TSS recognition over recently published methods is proposed. This method incorporates a new metric feature based on oligonucleotide positional frequencies. The second application is the accurate classification of microarray samples. A new feature selection algorithm based on a Reduced HyperBF network is proposed. The method is applied to two microarray datasets and is shown to select a minimal subset of features with high discriminative information. The algorithm is compared to two widely used methods and is shown to provide competitive results. In both applications, the final Reduced HyperBF network is used for higher-level analysis. Significant neurons can indicate subpopulations, while locally active features provide insight into the characteristics of the subpopulation in particular and of the whole class in general.
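
    A minimal sketch of the HyperBF unit described above (ours; a Gaussian radial function is assumed): the distance to each center is weighted per feature, so a neuron can stretch, shrink, or effectively discard individual input dimensions:

    import numpy as np

    def hyperbf_forward(X, centers, scales, weights):
        # X: (n, d) inputs; centers: (m, d); scales: (m, d) nonnegative
        # per-feature scaling factors; weights: (m,) output-layer weights.
        diff = X[:, None, :] - centers[None, :, :]            # (n, m, d)
        d2 = np.sum(scales[None, :, :] * diff ** 2, axis=-1)  # weighted distance^2
        return np.exp(-d2) @ weights                          # (n,) outputs

    rng = np.random.default_rng(2)
    X = rng.normal(size=(5, 3))
    out = hyperbf_forward(X,
                          centers=rng.normal(size=(4, 3)),
                          scales=rng.uniform(0.1, 2.0, size=(4, 3)),
                          weights=rng.normal(size=4))
    print(out)

    Driving a neuron's scaling factors toward zero removes the corresponding features from its receptive field, which is the soft local dimension reduction the proposed regularizer encourages.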

    Optimization of recurrent neural networks for time series modeling


    A Quantitative Study of Pruning by Optimal Brain Damage

    The Optimal Brain Damage (OBD) scheme of Le Cun, Denker, and Solla for pruning feed-forward networks has been implemented and applied to the contiguity classification problem. It is shown that OBD improves the learning curve (the test error as a function of the number of examples). By inspecting the architectures obtained through pruning, it is found that the networks with fewer parameters have the smallest test error, in agreement with "Ockham's Razor". Based on this, we propose a heuristic which selects the smallest successful architecture among a group of pruned networks, and we show that it leads to very efficient optimization of the architecture. The validity of the approximations involved in OBD is discussed, and they are found to be surprisingly accurate for the problem studied.
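
    The selection heuristic can be made concrete with a short sketch (the "successful" threshold below is our assumption, not the paper's exact criterion): among the pruned candidates, keep those whose test error is within a tolerance of the best, then take the smallest network:

    def smallest_successful(candidates, tol=0.01):
        # candidates: list of (n_params, test_error) pairs.
        best = min(err for _, err in candidates)
        ok = [(n, err) for n, err in candidates if err <= best + tol]
        return min(ok)                     # smallest parameter count wins

    nets = [(120, 0.071), (80, 0.064), (45, 0.066), (30, 0.112)]
    print(smallest_successful(nets))       # -> (45, 0.066)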