60 research outputs found

    A Parallel Programming Model for Irregular Dynamic Neural Networks

    No full text
    The compilation of high-level programming languages for parallel machines faces two challenges: maximizing data/process locality and balancing load. No solutions for the general case are known that solve both problems at once. The present paper describes a programming model that solves both problems for the special case of neural network learning algorithms, even for irregular networks with dynamically changing topology (constructive neural algorithms). The model is based on the observation that such algorithms predominantly execute local operations (on nodes and connections of the network), reductions, and broadcasts. The model is concretized in an object-centered procedural language called CuPit. The specific properties of the model are introduced via (1) special categories of object types, analogous to the categories "record" or "array" in other languages: "connection", "node", and "network"; (2) three-fold nested parallelism, described via group procedure calls (levels: network replicates, node groups, connections at a node); and (3) special operations for manipulating the neural network topology. The language is completely abstract: no aspects of the parallel implementation, such as number of processors, data distribution, process distribution, or execution model, are visible in user programs. The compiler can derive most information relevant for the generation of efficient code from unannotated source code. Therefore, CuPit programs are efficiently portable. A compiler for CuPit has been built for the MasPar MP-1/MP-2 using compilation techniques that can also be applied to most other parallel machines. The paper briefly presents the main ideas of the techniques used and the results obtained by the various optimizations.
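    The three operation classes the model assumes — local operations on nodes and connections, reductions, and broadcasts — can be sketched in plain Python (this is an illustrative object model, not CuPit syntax; all class and method names are invented for the sketch):

    ```python
    # Hypothetical sketch (not CuPit): local operations, reductions, and
    # broadcasts on an irregular network held as an adjacency structure.

    class Connection:
        def __init__(self, weight):
            self.weight = weight

    class Node:
        def __init__(self):
            self.activation = 0.0
            self.in_conns = []   # (source_node, Connection) pairs

    class Network:
        def __init__(self):
            self.nodes = []

        def local_step(self, fn):
            # Local operation: applied independently per node, hence
            # parallelizable without communication.
            for node in self.nodes:
                fn(node)

        def reduce(self, fn, init):
            # Reduction over all nodes (e.g. accumulating an error measure).
            acc = init
            for node in self.nodes:
                acc = fn(acc, node)
            return acc

        def broadcast(self, name, value):
            # Broadcast: the same value is written to every node.
            for node in self.nodes:
                setattr(node, name, value)

    def propagate(node):
        # Purely local: reads only the node's own incoming connections.
        if node.in_conns:
            node.activation = sum(src.activation * c.weight
                                  for src, c in node.in_conns)

    # Topology manipulation: the structure is irregular and can change.
    net = Network()
    a, b = Node(), Node()
    net.nodes.extend([a, b])
    b.in_conns.append((a, Connection(0.5)))   # grow a connection
    net.broadcast("bias", 0.0)
    a.activation = 2.0
    net.local_step(propagate)                 # b becomes 2.0 * 0.5 = 1.0
    total = net.reduce(lambda acc, n: acc + n.activation, 0.0)
    ```

    In the abstract model each of these loops is a parallel group operation; the sequential loops here only stand in for that semantics.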

    Some Notes on Neural Learning Algorithm Benchmarking

    No full text
    New neural learning algorithms are often benchmarked only poorly. This article gathers some important DOs and DON'Ts for researchers in order to improve on that situation. The essential requirements are (1) Volume: benchmarking has to be broad enough, i.e., must use several problems; (2) Validity: common errors that invalidate the results have to be avoided; (3) Reproducibility: benchmarking has to be documented well enough to be completely reproducible; and (4) Comparability: benchmark results should, if possible, be directly comparable with the results achieved by others using different algorithms.

    A Study of Experimental Evaluations of Neural Network Learning Algorithms: Current Research Practice

    No full text
    113 articles about neural network learning algorithms published in 1993 and 1994 are examined for the amount of experimental evaluation they contain. Every third of them does not employ even a single realistic or real learning problem. Only 6% of all articles present results for more than one problem using real world data. Furthermore, one third of all articles does not present any quantitative comparison with a previously known algorithm. These results indicate that the quality of research in the area of neural network learning algorithms needs improvement. The publication standards should be raised and easily accessible collections of example problems be built.

    Comparing Adaptive and Non-Adaptive Connection Pruning With Pure Early Stopping

    No full text
    Neural network pruning methods on the level of individual network parameters (e.g. connection weights) can improve generalization, as is shown in this empirical study. However, an open problem in the pruning methods known today (OBD, OBS, autoprune, epsiprune) is the selection of the number of parameters to be removed in each pruning step (pruning strength). This work presents a pruning method lprune that automatically adapts the pruning strength to the evolution of weights and loss of generalization during training. The method requires no algorithm parameter adjustment by the user. Results of statistical significance tests comparing autoprune, lprune, and static networks with early stopping are given, based on extensive experimentation with 14 different problems. The results indicate that training with pruning is often significantly better and rarely significantly worse than training with early stopping without pruning. Furthermore, lprune is often superior to autoprune (which is superior to OBD) on diagnosis tasks unless severe pruning early in the training process is required.
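    The general scheme can be sketched as follows. This is a generic magnitude-based pruning loop with a made-up adaptive rule (halve the pruning strength when validation loss worsens); it is not the published lprune or autoprune criterion, which are statistics over the weight evolution:

    ```python
    # Illustrative sketch only: adaptive pruning strength, hypothetical rule.
    import random

    def prune_step(weights, fraction):
        """Remove the given fraction of weights smallest in magnitude."""
        k = int(len(weights) * fraction)
        return sorted(weights, key=abs)[k:]

    def adaptive_pruning(weights, val_loss, steps=5):
        fraction = 0.2                      # initial pruning strength
        prev_loss = val_loss(weights)
        for _ in range(steps):
            candidate = prune_step(weights, fraction)
            loss = val_loss(candidate)
            if loss <= prev_loss:           # generalization kept: accept
                weights, prev_loss = candidate, loss
            else:                           # generalization lost: back off
                fraction /= 2
        return weights

    random.seed(0)
    w = [random.uniform(-1, 1) for _ in range(20)]
    # Toy stand-in for a validation error: penalizes many small weights.
    pruned = adaptive_pruning(w, lambda ws: sum(abs(x) for x in ws
                                                if abs(x) < 0.3))
    ```

    The point of the adaptation is the feedback loop between pruning strength and generalization, not the particular back-off rule used here.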

    Connection Pruning with Static and Adaptive Pruning Schedules

    No full text
    Neural network pruning methods on the level of individual network parameters (e.g. connection weights) can improve generalization, as is shown in this empirical study. However, an open problem in the pruning methods known today (e.g. OBD, OBS, autoprune, epsiprune) is the selection of the number of parameters to be removed in each pruning step (pruning strength). This work presents a pruning method lprune that automatically adapts the pruning strength to the evolution of weights and loss of generalization during training. The method requires no algorithm parameter adjustment by the user. Results of statistical significance tests comparing autoprune, lprune, and static networks with early stopping are given, based on extensive experimentation with 14 different problems. The results indicate that training with pruning is often significantly better and rarely significantly worse than training with early stopping without pruning. Furthermore, lprune is often superior to autoprune (which is superior to OBD) on diagnosis tasks unless severe pruning early in the training process is required.

    PROBEN1 - a set of neural network benchmark problems and benchmarking rules

    No full text
    Proben1 is a collection of problems for neural network learning in the realm of pattern classification and function approximation, plus a set of rules and conventions for carrying out benchmark tests with these or similar problems. Proben1 contains 15 data sets from 12 different domains. All datasets represent realistic problems which could be called diagnosis tasks, and all but one consist of real world data. The datasets are all presented in the same simple format, using an attribute representation that can directly be used for neural network training. Along with the datasets, Proben1 defines a set of rules for how to conduct and how to document neural network benchmarking. The purpose of the problem and rule collection is to give researchers easy access to data for the evaluation of their algorithms and networks and to make direct comparison of the published results feasible. This report describes the datasets and the benchmarking rules. It also gives some basic performance measures indicating the difficulty of the various problems. These measures can be used as baselines for comparison.
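    The core convention such benchmarking rules mandate — a fixed train/validation/test partition, model selection by early stopping on the validation part, and reporting error only on the untouched test part — can be sketched on a toy one-parameter model (the split sizes, learning rate, and data here are illustrative, not the Proben1 specification):

    ```python
    # Sketch of the train/validation/test benchmarking convention.
    import random

    def mse(w, data):
        return sum((w * x - y) ** 2 for x, y in data) / len(data)

    def train_early_stopping(train, val, lr=0.1, max_epochs=100):
        w, best_w, best_val = 0.0, 0.0, float("inf")
        for _ in range(max_epochs):
            # One gradient step on the training partition for y ~ w*x.
            grad = sum(2 * (w * x - y) * x for x, y in train) / len(train)
            w -= lr * grad
            v = mse(w, val)
            if v < best_val:             # keep the best-on-validation weights
                best_w, best_val = w, v
        return best_w

    random.seed(1)
    # Toy data: y = 3x + noise, with a fixed partition (here 20/10/10).
    xs = [random.uniform(-1, 1) for _ in range(40)]
    data = [(x, 3 * x + random.gauss(0, 0.1)) for x in xs]
    train, val, test = data[:20], data[20:30], data[30:]

    w = train_early_stopping(train, val)
    test_error = mse(w, test)   # the reported figure: untouched test data
    ```

    Keeping the partition fixed across experiments is what makes results from different algorithms directly comparable.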