243 research outputs found

    The Technical Implementation of Neural Networks

    Like digital computers, neural networks, e.g. the brain, are information-processing systems. That, however, is almost the only similarity. As we can observe in ourselves, the two kinds of systems have fundamentally different capabilities: digital computers are excellent at fast and almost arbitrarily precise numerical computation, which is what they were originally built for. Other applications were added over time, e.g. the management of very large amounts of information in databases, i.e. storage and targeted retrieval based on simple search criteria. Both are extremely tedious for humans. In contrast, humans handle with ease tasks that are still unsolvable even for the most modern parallel computers. Every child can, within a fraction of a second, distinguish the faces of its parents from those of potential enemies, because this confers an evolutionary advantage. Besides excellent image-processing capabilities, our neural network lets us understand language and control the complex muscular apparatus of our body with fantastic precision. However, we have to learn most of these capabilities, whereas a digital computer performs its task as soon as a suitable program has been developed

    Duration of asynchronous operations in distributed systems

    A distributed asynchronous system is investigated. Its processing elements execute common operations concurrently and distributively. They are implemented as combinational circuits and exchange data via open-collector bus lines. A method is presented to determine and minimize the duration of an operation and thereby to increase the performance of the system. No hardware modifications are required
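
    As a toy illustration of the timing question, the sketch below models an operation as finished once the slowest processing element is done and the open-collector line has settled; the function names and delay values are invented, and this is not the minimization method of the paper.

```python
# Illustrative sketch only: a toy timing model for processing elements that signal
# completion on an open-collector bus line. All names and numbers are assumptions,
# not the method presented in the paper.

def bus_level(pulled_low):
    """Open-collector line: low (False) as soon as any element pulls it low."""
    return not any(pulled_low)

def operation_duration(element_delays_ns, bus_settling_ns):
    """The operation ends once the slowest element is done and the line has settled,
    so the duration is dominated by the worst-case element delay."""
    return max(element_delays_ns) + bus_settling_ns

if __name__ == "__main__":
    print(bus_level([False, False, True]))   # False: one element still holds the line low
    delays = [120, 95, 140, 110]             # hypothetical per-element delays in ns
    print(operation_duration(delays, 20))    # 160 ns
```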

    MONNET: a software system for modular neural networks based on object passing

    Modular neural networks integrate several neural networks and possibly standard processing methods. Handling such models is a challenge, since various modules have to be combined, either sequentially or in parallel, and the simulations are time-critical in many cases. This requires dedicated tools that are both flexible and efficient. We have developed the MONNET software system, which supports the investigation of complex modular models. The design of MONNET is based on the object-oriented paradigm; the environment is C++/UNIX. The basic concepts are dynamic modularity, object passing, scalability, reusability, and extensibility. MONNET features a flexible and compact definition of complex simulations and minimal overhead, so that computationally demanding simulations run efficiently
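
    The sketch below gives a rough feel for dynamic modularity and object passing. It is written in Python rather than the C++/UNIX environment of MONNET, and all class and method names are invented for illustration; it does not reflect the actual MONNET API.

```python
# Toy illustration of modules that transform a data object and pass it on;
# names and structure are invented, not taken from MONNET.

class Module:
    """A processing module that transforms a data object and passes it on."""
    def process(self, obj):
        raise NotImplementedError

class Scale(Module):
    def __init__(self, factor):
        self.factor = factor
    def process(self, obj):
        return [x * self.factor for x in obj]

class Threshold(Module):
    def __init__(self, level):
        self.level = level
    def process(self, obj):
        return [1.0 if x > self.level else 0.0 for x in obj]

class Pipeline(Module):
    """Sequential combination: the output object of one module is passed to the next."""
    def __init__(self, modules):
        self.modules = modules
    def process(self, obj):
        for m in self.modules:
            obj = m.process(obj)
        return obj

if __name__ == "__main__":
    net = Pipeline([Scale(2.0), Threshold(1.0)])
    print(net.process([0.2, 0.8, 0.6]))   # [0.0, 1.0, 1.0]
```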

    Improved 3-Line Hardware Synchronization

    A new procedure is proposed to synchronize the processors of a distributed system that concurrently execute a common process consisting of a sequence of operations. The procedure is an extension of the one used for the 1987 IEEE Futurebus Standard. It is based on global synchronization lines and a distributed synchronizer, and requires only minor modifications of existing hardware. The procedure supports two alternative synchronization protocols. As usual, an operation may be terminated by the last processor to finish its part of the operation. Alternatively, the operation may also be terminated by the first processor that is ready. Applying this second protocol, e.g., to bus arbitration reduces the arbitration time on average by a factor of 2
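
    The toy model below only contrasts the two termination rules described above (last processor done versus first processor ready); it is not the 3-line bus protocol itself, and all parameters are arbitrary.

```python
# Toy comparison of the two termination rules; delay distributions are invented.

import random

def last_finisher(completion_times):
    """Last-processor termination: the operation ends when every processor is done."""
    return max(completion_times)

def first_ready(completion_times):
    """First-processor termination: one ready processor terminates the operation."""
    return min(completion_times)

if __name__ == "__main__":
    random.seed(0)
    trials = [[random.uniform(0, 100) for _ in range(8)] for _ in range(10000)]
    avg_last = sum(map(last_finisher, trials)) / len(trials)
    avg_first = sum(map(first_ready, trials)) / len(trials)
    print(round(avg_last, 1), round(avg_first, 1))  # first-ready finishes much earlier on average
```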

    Quantifying a critical training set size for generalization and overfitting using teacher neural networks

    Teacher neural networks are a systematic experimental approach to studying neural networks. A teacher is a neural network that is employed to generate the examples of the training and the testing set. The weights of the teacher and the input parts of the examples are set according to some probability distribution. The input parts are then presented to the teacher neural network and recorded together with its response. A pupil neural network is then trained on these data. Hence, a neural network, instead of a real or synthetic application, defines the task against which the performance of the pupil is investigated. One issue is the dependence of the training success on the training set size. Surprisingly, there exists a critical value above which the training error drops to zero. This critical training set size is proportional to the number of weights in the neural network. A sudden transition exists for the generalization capability, too: the generalization error measured on a large independent testing set drops to zero, and the effect of overfitting vanishes. Thus, there are two regions with a sudden transition in between: below the critical training set size, training and generalization fail and severe overfitting occurs; above the critical training set size, training and generalization are perfect and there is no overfitting
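
    A minimal sketch of the teacher-pupil setup, with an invented perceptron teacher and arbitrary sizes, might look as follows; it only shows how training and testing sets are generated from a teacher network, not the experiments of the paper.

```python
# Sketch of teacher-generated data; teacher architecture and sizes are assumptions.

import numpy as np

rng = np.random.default_rng(0)

n_inputs = 20
teacher_w = rng.normal(size=n_inputs)           # teacher weights drawn at random

def teacher(x):
    return np.sign(x @ teacher_w)               # teacher response defines the task

def make_set(n_examples):
    x = rng.normal(size=(n_examples, n_inputs)) # random input parts
    return x, teacher(x)                        # recorded together with the response

x_train, y_train = make_set(5 * n_inputs)       # training set size relative to the weight count
x_test, y_test = make_set(10000)                # large independent testing set

# A pupil network would now be trained on (x_train, y_train) and its generalization
# error estimated on (x_test, y_test).
```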

    NERV: A Parallel Processor for Standard Genetic Algorithms

    This paper describes the implementation of a standard genetic algorithm (GA) on the MIMD multiprocessor system NERV. It discusses the special features of the NERV hardware which can be utilized for an efficient implementation of a GA without changing the structure of the algorithm
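
    For orientation, a minimal standard genetic algorithm (selection, one-point crossover, mutation) is sketched below; it says nothing about the mapping onto the NERV hardware, and the objective and parameters are placeholders.

```python
# Minimal standard GA as a point of reference; all parameters are illustrative.

import random

def fitness(bits):                     # toy objective: count of ones
    return sum(bits)

def evolve(pop_size=32, n_bits=64, generations=100, p_mut=0.01):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                   # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_bits)            # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < p_mut) for bit in child]  # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

if __name__ == "__main__":
    print(fitness(evolve()))
```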

    VLSI Implementation of a Parallel Hough Transform Processor with Dynamically Reloadable Patterns

    A processor for the parallel computation of a special Hough transform was developed in 1.0 µm CMOS technology. At the design clock frequency of 50 MHz, 6.4 × 10^10 object patterns can be detected per second. Up to 5 × 10^7 search patterns to be detected can be loaded into the processor per second. This opens up, for the first time, real-time image-processing applications in the microsecond range
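
    As general background, a software sketch of a plain straight-line Hough transform is given below; the processor described above implements a special Hough transform with reloadable patterns in hardware, which this example does not reproduce.

```python
# Generic (theta, rho) Hough transform for straight lines; sizes are arbitrary.

import numpy as np

def hough_lines(edge_points, n_theta=180, n_rho=200, rho_max=100.0):
    """Accumulate votes in (theta, rho) space for a list of (x, y) edge points."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_rho), dtype=np.int32)
    for x, y in edge_points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)   # one sinusoid per edge point
        idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        ok = (idx >= 0) & (idx < n_rho)
        acc[np.arange(n_theta)[ok], idx[ok]] += 1
    return acc, thetas

if __name__ == "__main__":
    pts = [(x, 2 * x + 5) for x in range(0, 40)]        # points on a straight line
    acc, thetas = hough_lines(pts)
    t, r = np.unravel_index(acc.argmax(), acc.shape)
    print(thetas[t], r)                                 # accumulator peak marks the line
```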

    Neural Classifier Systems for Histopathologic Diagnosis

    Neural network and statistical classification methods were applied to derive an objective grading for moderately and poorly differentiated lesions, based on characteristics of the nuclear placement patterns. Using a multilayer network after abbreviated training as a feature extractor, followed by a quadratic Bayesian classifier, allowed grade assignment that agreed with the visual diagnostic consensus in 96% of the 500 fields of the training set and in 77% of the 130 fields of a test set
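
    The sketch below illustrates only the second stage, a quadratic Bayesian (Gaussian) classifier, applied to synthetic feature vectors that stand in for the hidden-layer activations of the partially trained multilayer network; class names and data are assumptions.

```python
# Quadratic Bayesian classifier on synthetic features; not the paper's pipeline.

import numpy as np

class QuadraticBayes:
    """Fits one Gaussian per class and classifies by maximum posterior."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.params_ = {}
        for c in self.classes_:
            Xc = X[y == c]
            mean = Xc.mean(axis=0)
            cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])
            prior = len(Xc) / len(X)
            self.params_[c] = (mean, np.linalg.inv(cov),
                               np.linalg.slogdet(cov)[1], np.log(prior))
        return self

    def predict(self, X):
        scores = []
        for c in self.classes_:
            mean, inv_cov, logdet, logprior = self.params_[c]
            d = X - mean
            # log posterior up to a constant: -0.5*(Mahalanobis + log|Sigma|) + log prior
            scores.append(-0.5 * (np.einsum("ij,jk,ik->i", d, inv_cov, d) + logdet) + logprior)
        return self.classes_[np.argmax(scores, axis=0)]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (100, 4)), rng.normal(2, 1.5, (100, 4))])
    y = np.array([0] * 100 + [1] * 100)
    clf = QuadraticBayes().fit(X, y)
    print((clf.predict(X) == y).mean())     # training accuracy on the synthetic features
```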

    Fault Propagation Analysis on the Transaction-Level Model of an Acquisition System with Bus Fallback Modes

    Early fault analysis is mandatory for safety-critical systems, which are required to operate safely even in the presence of faults. System design methodologies tackle the early design and verification of systems by allowing several abstraction levels for their models, but still offer only digital bit faults as fault models. We therefore develop a signal fault model for Transaction-Level Modeling (TLM). We extend the TLM generic payload with the signal characteristics voltage level, delay, slope time, and glitches. In order to analyze and process these, a TLM bus model is created with which signal faults can be detected and translated into data failures. Furthermore, by inserting this bus into an acquisition system and implementing fallback modes for the bus operation, the propagation of the signal faults through the system can be assessed. Simulating this model using probability distributions for the different signal faults, 5516 faults were generated. Of these, 5143 were recovered, 239 were isolated, and 134 turned into failures
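
    The payload-extension idea can be illustrated roughly as follows. The actual model is a SystemC TLM-2.0 extension in C++; this Python sketch invents all field names and thresholds.

```python
# Illustration of attaching signal characteristics to a transaction and translating
# signal faults into data failures; all names and limits are assumptions.

from dataclasses import dataclass, field

@dataclass
class SignalExtension:
    """Analog signal characteristics attached to a transaction payload."""
    voltage_level_v: float = 3.3
    delay_ns: float = 0.0
    slope_time_ns: float = 1.0
    glitches: int = 0

@dataclass
class Transaction:
    data: bytes
    signal: SignalExtension = field(default_factory=SignalExtension)

def bus_check(tr, v_min=2.0, max_delay_ns=10.0, max_slope_ns=5.0):
    """Translate signal faults into data failures, in the spirit of the TLM bus model."""
    failures = []
    if tr.signal.voltage_level_v < v_min:
        failures.append("undervoltage -> corrupted data")
    if tr.signal.delay_ns > max_delay_ns:
        failures.append("excess delay -> missed deadline")
    if tr.signal.slope_time_ns > max_slope_ns or tr.signal.glitches > 0:
        failures.append("slope/glitch -> bit errors")
    return failures

if __name__ == "__main__":
    tr = Transaction(b"\x2a", SignalExtension(voltage_level_v=1.4, glitches=2))
    print(bus_check(tr))
```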

    Half-Optimal Error Diffusion for Binary Fourier Transform Holograms

    The error diffusion method was investigated with novel diffusion coefficients for generating binary Fourier transform holograms. By defining an error measure, the quality of the reconstruction from such holograms was estimated. Computer-simulated reconstructions are presented
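
    A rough sketch of error-diffusion binarization of a Fourier hologram is given below; it uses the standard Floyd-Steinberg coefficients and a plain RMS error as placeholders, not the novel coefficients or the error measure of the paper.

```python
# Error-diffusion binarization of the real part of a Fourier transform; the
# coefficients and the quality measure are generic placeholders.

import numpy as np

def error_diffuse_binary(h):
    """Binarize a real-valued hologram plane, diffusing the quantization error."""
    h = h.astype(float).copy()
    out = np.zeros_like(h)
    rows, cols = h.shape
    for i in range(rows):
        for j in range(cols):
            out[i, j] = 1.0 if h[i, j] >= 0.0 else -1.0
            err = h[i, j] - out[i, j]
            if j + 1 < cols:
                h[i, j + 1] += err * 7 / 16
            if i + 1 < rows and j > 0:
                h[i + 1, j - 1] += err * 3 / 16
            if i + 1 < rows:
                h[i + 1, j] += err * 5 / 16
            if i + 1 < rows and j + 1 < cols:
                h[i + 1, j + 1] += err * 1 / 16
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    obj = rng.random((64, 64))                        # toy object
    hologram = np.fft.fft2(obj).real
    hologram /= np.abs(hologram).max()                # normalize before binarizing
    binary = error_diffuse_binary(hologram)
    recon = np.abs(np.fft.ifft2(binary))              # simulated reconstruction
    print(np.sqrt(np.mean((recon / recon.max() - obj / obj.max()) ** 2)))  # crude RMS quality measure
```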