
The Shallow and the Deep: A biased introduction to neural networks and old school machine learning

The Shallow and the Deep is a collection of lecture notes that offers an accessible introduction to neural networks and machine learning in general. However, it was clear from the beginning that these notes could not cover this rapidly changing and growing field in its entirety. The focus lies on classical machine learning techniques, with a bias towards classification and regression. Other learning paradigms and many recent developments in, for instance, Deep Learning are not addressed or only briefly touched upon. Biehl argues that a solid knowledge of the foundations of the field is essential, especially for anyone who wants to explore the world of machine learning with an ambition that goes beyond the application of some software package to some data set. Therefore, The Shallow and the Deep places emphasis on fundamental concepts and theoretical background. This also involves delving into the history and pre-history of neural networks, where the foundations for most of the recent developments were laid. These notes aim to demystify machine learning and neural networks without losing the appreciation for their impressive power and versatility.

    Center for Space Microelectronics Technology. 1993 Technical Report

The 1993 Technical Report of the Jet Propulsion Laboratory Center for Space Microelectronics Technology summarizes the technical accomplishments, publications, presentations, and patents of the Center during the past year. The report lists 170 publications, 193 presentations, and 84 New Technology Reports and patents.

    A generalised feedforward neural network architecture and its applications to classification and regression

Shunting inhibition is a powerful computational mechanism that plays an important role in sensory neural information processing systems. It has been used extensively to model some important visual and cognitive functions. It equips neurons with a gain control mechanism that allows them to operate as adaptive non-linear filters. Shunting Inhibitory Artificial Neural Networks (SIANNs) are biologically inspired networks in which the basic synaptic computations are based on shunting inhibition. SIANNs were designed to solve difficult machine learning problems by exploiting the inherent non-linearity mediated by shunting inhibition. The aim was to develop powerful, trainable networks, with non-linear decision surfaces, for classification and non-linear regression tasks. This work enhances and extends the original SIANN architecture to a more general form called the Generalised Feedforward Neural Network (GFNN) architecture, which contains both the SIANN and the conventional Multilayer Perceptron (MLP) architectures as subsets. Because the neuron model used has a single direct excitatory input, the original SIANN structure requires the number of shunting neurons in the hidden layer to equal the number of inputs. This was found to be too restrictive, often resulting in inadequately small or inordinately large network structures.
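To make the shunting mechanism concrete, the following is a minimal NumPy sketch of a SIANN-style hidden layer, assuming the commonly cited shunting neuron form y_j = (x_j + b_j) / (a_j + f(c_j · x)). The parameter names a, b, C and the choice f = tanh are illustrative assumptions, not necessarily the exact formulation used in this thesis.

```python
import numpy as np

def shunting_layer(x, C, a, b, f=np.tanh):
    """Sketch of a SIANN hidden layer (one shunting neuron per input).

    For neuron j:  y_j = (x_j + b_j) / (a_j + f(C[j] @ x)).
    The j-th input is the neuron's direct excitatory drive; the weighted
    sum C[j] @ x of all inputs acts as divisive (shunting) inhibition,
    giving each neuron an input-dependent gain. a_j must keep the
    denominator positive (e.g. a_j > 1 when f = tanh).
    """
    inhibition = a + f(C @ x)      # per-neuron divisive term
    return (x + b) / inhibition    # elementwise shunting division

# Illustrative use: 4 inputs force exactly 4 shunting neurons,
# the rigidity that the GFNN generalisation removes.
rng = np.random.default_rng(0)
x = rng.normal(size=4)
y = shunting_layer(x, C=rng.normal(size=(4, 4)),
                   a=np.full(4, 2.0), b=np.zeros(4))
```

Because every neuron divides its excitatory input by its own inhibitory sum, the layer realises an adaptive non-linear filter rather than the fixed weighted sum of an MLP neuron.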

    First Annual Workshop on Space Operations Automation and Robotics (SOAR 87)

Several topics related to automation and robotics technology are discussed: automation of checkout, ground support, and logistics; automated software development; man-machine interfaces; neural networks; systems engineering and distributed/parallel processing architectures; and artificial intelligence/expert systems.

    Stepwise Evolutionary Training Strategies for Hardware Neural Networks

Analog and mixed-signal implementations of artificial neural networks usually lack an exact numerical model due to the unavoidable device variations introduced during manufacturing and the temporal fluctuations in the internal analog signals. Evolutionary algorithms are particularly well suited for training such networks, since they do not require detailed knowledge of the system being optimized. In order to make the best use of the high network speed, fast and simple training approaches are required. Within the scope of this thesis, a stepwise training approach has been devised that allows simple evolutionary algorithms to efficiently optimize the synaptic weights of a fast mixed-signal neural network chip. The training strategy is tested on a set of nine well-known classification benchmarks: the breast cancer, diabetes, heart disease, liver disorder, iris plant, wine, glass, E. coli, and yeast data sets. The obtained classification accuracies are shown to be more than competitive with those achieved by software-implemented neural networks and are comparable to the best results reported in the literature for other classification algorithms on these benchmarks. The presented training method is readily suited to a parallel implementation and to use in conjunction with a specialized coprocessor architecture that speeds up evolutionary algorithms by performing the time-consuming genetic operations within configurable logic. This way, the proposed strategy can fully benefit from the speed of the neural hardware and thus provides an efficient means for training large networks on the mixed-signal chip for demanding real-world classification tasks.
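As a rough illustration of this black-box setting, here is a minimal sketch of evolutionary weight training using a simple (1+λ) evolution strategy on a single perceptron. The algorithm choice, population size, and mutation scale are assumptions made for illustration; they do not reproduce the thesis's stepwise strategy or the chip interface.

```python
import numpy as np

def evolve_weights(X, y, offspring=20, gens=200, sigma=0.1, seed=0):
    """Minimal (1+lambda) evolution strategy for perceptron weights.

    Candidates are scored only by their classification accuracy, so no
    gradient or numerical model of the network is needed -- the same
    black-box property that makes evolutionary training suitable for
    analog hardware with unknown transfer characteristics.
    """
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((len(X), 1))])        # append bias input

    def fitness(w):
        return np.mean((Xb @ w > 0).astype(int) == y)

    parent = rng.normal(size=Xb.shape[1])
    parent_fit = fitness(parent)
    for _ in range(gens):
        # mutate the parent into a batch of offspring, keep the best
        pop = parent + sigma * rng.normal(size=(offspring, len(parent)))
        fits = np.array([fitness(w) for w in pop])
        k = int(fits.argmax())
        if fits[k] >= parent_fit:                    # plus-selection
            parent, parent_fit = pop[k], fits[k]
    return parent, parent_fit

# Toy linearly separable task as a smoke test.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
labels = (X[:, 0] + X[:, 1] > 0).astype(int)
w, acc = evolve_weights(X, labels)
```

On hardware, fitness() would be replaced by an evaluation on the chip itself, which is why keeping the genetic operations cheap, or moving them into configurable logic as proposed in the thesis, is important for overall training speed.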