3 research outputs found

    Optimizing Echo State Networks for Static Pattern Recognition

    Static pattern recognition requires a machine to classify an object on the basis of a combination of attributes and is typically performed using machine learning techniques such as support vector machines and multilayer perceptrons. Unusually, in this study, we applied a successful time-series processing neural network architecture, the echo state network (ESN), to a static pattern recognition task. The networks were presented with clamped input data patterns but were allowed to run until their output units delivered a stable set of output activations, in a similar fashion to previous work that focused on the behaviour of ESN reservoir units. Our aim was to see whether the short-term memory developed by the reservoir, together with the clamped inputs, could deliver improved overall classification accuracy. The study utilized a challenging, high-dimensional, real-world plant species spectroradiometry classification dataset with the objective of accurately detecting one of the world's top 100 invasive plant species. Surprisingly, the ESNs performed equally well with both unsettled and settled reservoirs. Delivering a classification accuracy of 96.60%, the clamped ESNs outperformed three widely used machine learning techniques, namely support vector machines, extreme learning machines and multilayer perceptrons. Contrary to past work, where inputs were clamped until reservoir stabilization, it was found that similar classification accuracy (96.49%) could be obtained by clamping the input patterns for just two repeats. The chief contribution of this work is showing that a recurrent architecture can achieve good classification accuracy even while the reservoir is still in an unstable state.
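    A minimal sketch of the idea described above: a fixed random reservoir is driven by a clamped static input pattern for a small number of update steps, and a linear readout is trained on the resulting reservoir states. The reservoir size, input scaling, spectral radius, ridge regularization, and the two-repeat default are illustrative assumptions, not the authors' actual configuration.

```python
import numpy as np

# Illustrative ESN for clamped static inputs; all sizes and scalings are assumptions.
rng = np.random.default_rng(0)

n_in, n_res = 20, 200            # input dimension, reservoir size (assumed)
spectral_radius = 0.9            # common ESN scaling choice (assumed)

W_in = rng.uniform(-0.1, 0.1, size=(n_res, n_in))          # fixed random input weights
W = rng.standard_normal((n_res, n_res))
W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))  # rescale recurrent weights


def reservoir_state(x, n_repeats=2):
    """Clamp the static pattern x for n_repeats update steps and return the
    final reservoir activation (the abstract reports that as few as two
    repeats already give accuracy close to the fully settled case)."""
    h = np.zeros(n_res)
    for _ in range(n_repeats):
        h = np.tanh(W_in @ x + W @ h)
    return h


def train_readout(X, Y, n_repeats=2, ridge=1e-3):
    """X: (N, n_in) patterns, Y: (N, n_classes) one-hot targets.
    Only the linear readout is trained, here with ridge regression."""
    H = np.stack([reservoir_state(x, n_repeats) for x in X])          # (N, n_res)
    return np.linalg.solve(H.T @ H + ridge * np.eye(n_res), H.T @ Y)  # (n_res, n_classes)


def predict(W_out, x, n_repeats=2):
    return np.argmax(reservoir_state(x, n_repeats) @ W_out)
```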

    Recurrence Enhances the Spatial Encoding of Static Inputs in Reservoir Networks

    Emmerich C, Reinhart F, Steil JJ. Recurrence Enhances the Spatial Encoding of Static Inputs in Reservoir Networks. In: Proc. Int. Conf. Artificial Neural Networks. Berlin, Heidelberg: Springer Berlin Heidelberg; 2010: 148-153.

    Reliability of Extreme Learning Machines

    Neumann K. Reliability of Extreme Learning Machines. Bielefeld: Bielefeld University Library; 2014.
    The reliable application of machine learning methods becomes increasingly important in challenging engineering domains. In particular, the application of extreme learning machines (ELMs) seems promising because of their apparent simplicity and their capability of processing large and high-dimensional data sets very efficiently. However, the ELM paradigm is based on single hidden-layer neural networks with randomly initialized and fixed input weights and is thus inherently unreliable. This black-box character usually repels engineers from applying it in potentially safety-critical tasks. The problem becomes even more severe since, in principle, only sparse and noisy data sets can be provided in such domains. The goal of this thesis is therefore to equip the ELM approach with the ability to perform in a reliable manner. This goal is approached in three respects: enhancing the robustness of ELMs to initialization, enabling ELMs to handle slow changes in the environment (i.e. input drifts), and allowing the incorporation of continuous constraints derived from prior knowledge. It is shown in several diverse scenarios that the novel ELM approach proposed in this thesis ensures safe and reliable application while simultaneously sustaining the full modeling power of data-driven methods.
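    A minimal sketch of the basic ELM setup that the abstract describes: a single hidden layer with randomly initialized, fixed input weights and an analytically computed linear readout. The hidden-layer size, weight ranges, activation function, and ridge term are illustrative assumptions and do not reflect the reliability extensions developed in the thesis.

```python
import numpy as np

# Illustrative basic ELM; sizes and hyperparameters are assumptions.
rng = np.random.default_rng(42)


def elm_fit(X, Y, n_hidden=500, ridge=1e-3):
    """X: (N, d) inputs, Y: (N, c) one-hot targets.
    Input weights are random and never trained; only the readout is solved for."""
    d = X.shape[1]
    W_in = rng.uniform(-1.0, 1.0, size=(d, n_hidden))   # fixed random input weights
    b = rng.uniform(-1.0, 1.0, size=n_hidden)            # fixed random biases
    H = np.tanh(X @ W_in + b)                            # hidden-layer activations
    # Regularized least squares for the output weights (the only trained parameters).
    W_out = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ Y)
    return W_in, b, W_out


def elm_predict(model, X):
    W_in, b, W_out = model
    return np.argmax(np.tanh(X @ W_in + b) @ W_out, axis=1)
```

    The random input layer is precisely the source of the unreliability discussed in the thesis: different initializations can yield noticeably different models, which the proposed extensions aim to mitigate.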