    Global convergence and limit cycle behavior of weights of perceptron

    In this paper, it is found that the weights of a perceptron are bounded for all initial weights if there exists a nonempty set of initial weights for which the weights of the perceptron are bounded. Hence, the boundedness condition of the weights of the perceptron is independent of the initial weights. Also, a necessary and sufficient condition for the weights of the perceptron to exhibit a limit cycle behavior is derived, and the range of the number of updates required for the weights to reach the limit cycle is estimated. Finally, it is suggested that a perceptron exhibiting the limit cycle behavior can be employed for solving a recognition problem when downsampled sets of bounded training feature vectors are linearly separable. Numerical computer simulation results show that the perceptron exhibiting the limit cycle behavior can achieve a better recognition performance compared to a multilayer perceptron.
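
    The limit cycle phenomenon the abstract describes can be observed directly with the classic perceptron update rule. Below is a minimal sketch, not the paper's experiment: it runs the standard perceptron correction on a small, deliberately non-separable integer dataset (the data, learning rate of 1, and iteration cap are illustrative assumptions) and reports when the weight vector revisits a previous state, i.e. has entered a limit cycle.

```python
import numpy as np

# Hypothetical XOR-like dataset: bounded, integer, and not linearly separable,
# so the perceptron weights cannot converge and must eventually cycle.
X = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]])
y = np.array([1, -1, -1, 1])

w = np.zeros(2, dtype=int)   # initial weights (one point of the bounded set)
seen = {}                    # weight state -> epoch index at first visit

for t in range(10_000):
    key = tuple(w)
    if key in seen:
        print(f"limit cycle: state {key} first seen at epoch {seen[key]}, "
              f"revisited at epoch {t} (period {t - seen[key]} epochs)")
        break
    seen[key] = t
    # one full pass of perceptron updates on misclassified samples
    for xi, yi in zip(X, y):
        if yi * (w @ xi) <= 0:   # misclassified (or on the boundary)
            w = w + yi * xi      # standard perceptron correction
```

    With integer inputs and bounded weights there are only finitely many reachable weight states, so a revisited state is guaranteed and pinpoints the cycle.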

    Properties of an invariant set of weights of perceptrons

    In this paper, the dynamics of the weights of perceptrons are investigated based on the perceptron training algorithm. In particular, the condition under which the system map is not injective is derived. Based on this condition, an invariant set that yields a bijective invariant map is characterized. It is also shown that some weights outside the invariant set are moved into the invariant set; hence, the invariant set is attracting. Computer numerical simulation results on various perceptrons exhibiting various behaviors, such as fixed point, limit cycle, and chaotic behaviors, are illustrated.
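
    The non-injectivity of the system map is easy to witness numerically. The sketch below is illustrative, not the paper's derivation: it defines the one-step perceptron map T for a single hypothetical training sample and probes a grid of integer weight states for two distinct states with the same image.

```python
import numpy as np
from collections import defaultdict

x = np.array([1, 2])   # one training input (assumed for illustration)
label = 1              # its label

def T(w):
    """One step of the perceptron map for the single sample (x, label)."""
    if label * (w @ x) <= 0:     # misclassified: apply the correction
        return w + label * x
    return w                     # correctly classified: weights unchanged

images = defaultdict(list)
for a in range(-3, 4):
    for b in range(-3, 4):
        w = np.array([a, b])
        images[tuple(T(w))].append((a, b))

# Any image with two or more preimages witnesses non-injectivity of T.
for img, preimages in images.items():
    if len(preimages) > 1:
        print(f"T is not injective: {preimages} all map to {img}")
        break
```

    Intuitively, a correctly classified weight w is a fixed point of T, while the distinct weight w - label*x is misclassified and is mapped onto the same w, so the two states collide; restricting to a suitable invariant set removes such collisions and makes the map bijective there.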

    Invariant set of weight of perceptron trained by perceptron training algorithm

    In this paper, an invariant set of the weights of a perceptron trained by the perceptron training algorithm is defined and characterized. The dynamic range of the steady-state values of the weights can be evaluated by finding the dynamic range of the weights inside the largest invariant set. Also, the necessary and sufficient condition for the forward dynamics of the weights to be injective, as well as the condition for the invariant set of the weights to be attractive, is derived.
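
    An empirical counterpart of the dynamic-range evaluation is sketched below under stated assumptions (a toy non-separable dataset and assumed transient length, neither from the paper): once the weight trajectory has settled into the invariant set, the steady-state dynamic range can be estimated as the component-wise minimum and maximum of the weights over the post-transient trajectory.

```python
import numpy as np

# Hypothetical bounded, non-separable integer data so the weights stay
# bounded and settle into a recurrent (invariant) set of states.
X = np.array([[2, 1], [1, -1], [-1, 2], [-1, -1]])
y = np.array([1, -1, -1, 1])

w = np.array([3, -2])            # arbitrary initial weights
trajectory = []
for t in range(2_000):           # iteration count is an assumption
    for xi, yi in zip(X, y):
        if yi * (w @ xi) <= 0:
            w = w + yi * xi      # standard perceptron correction
    trajectory.append(w.copy())

steady = np.array(trajectory[500:])   # discard an assumed transient
print("steady-state dynamic range per weight:",
      steady.min(axis=0), "to", steady.max(axis=0))
```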

    Modeling Financial Time Series with Artificial Neural Networks

    Financial time series convey the decisions and actions of a population of human actors over time. Econometric and regressive models have been developed over the past decades for analyzing these time series. More recently, biologically inspired artificial neural network models have been shown to overcome some of the main challenges of traditional techniques by better exploiting the non-linear, non-stationary, and oscillatory nature of noisy, chaotic human interactions. This review paper explores the options, benefits, and weaknesses of the various forms of artificial neural networks as compared with regression techniques in the field of financial time series analysis. CELEST, a National Science Foundation Science of Learning Center (SBE-0354378); SyNAPSE program of the Defense Advanced Research Projects Agency (HR001109-03-0001).
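
    The standard setup such reviews compare against regression is autoregressive forecasting with a feedforward network: slide a window over past values and regress the next value on that window. A minimal sketch follows, with a synthetic noisy oscillatory series, a tiny one-hidden-layer network, and all hyperparameters assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(600)
series = np.sin(0.07 * t) + 0.3 * rng.standard_normal(600)  # noisy oscillation

# Sliding-window supervised pairs: predict series[i+LAG] from the LAG prior values.
LAG = 8
X = np.array([series[i:i + LAG] for i in range(len(series) - LAG)])
y = series[LAG:]

# One-hidden-layer network trained by plain batch gradient descent on squared error.
H, lr = 16, 0.01
W1 = rng.standard_normal((LAG, H)) * 0.1
b1 = np.zeros(H)
W2 = rng.standard_normal(H) * 0.1
b2 = 0.0

for epoch in range(500):
    h = np.tanh(X @ W1 + b1)            # hidden activations
    err = (h @ W2 + b2) - y             # prediction error
    # Backpropagate the mean-squared-error gradient.
    gW2 = h.T @ err / len(y)
    gb2 = err.mean()
    dh = np.outer(err, W2) * (1 - h**2) # tanh derivative
    gW1 = X.T @ dh / len(y)
    gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

h = np.tanh(X @ W1 + b1)
print("final in-sample MSE:", float(np.mean((h @ W2 + b2 - y) ** 2)))
```

    The tanh hidden layer is what lets the model fit the non-linear, oscillatory structure that a purely linear autoregression would smooth over.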

    Hardware neuromorphic learning systems utilizing memristive devices

    As the efficiency of neuromorphic systems improves, biologically inspired learning techniques are becoming more and more appealing for various computing applications, ranging from pattern and character recognition to general-purpose reconfigurable logic. Due to their functional similarities to synapses in the brain, memristors are becoming a key element in the hardware realization of perceptron-based learning systems. By pairing memristive devices with a perceptron-based neuron model, previous work has shown that an efficient, low-area neural logic block (NLB) can be developed. However, the use of a simple threshold activation function has limited the set of functions learnable by a single block, resulting in the need for multiple layers to implement certain functions. This complicates the training process, decreases the scalability of the system, and increases the overall energy and delay of large networks. In this work, three novel NLB designs are presented that overcome the limitations of previous hardware NLBs. First, an Adaptive Neural Logic Block (ANLB) and a Robust Adaptive Neural Logic Block (RANLB) are proposed. By integrating an adaptive activation function into a perceptron model, these designs are capable of rapidly learning any function in a single layer. Next, a Multi Threshold Neural Logic Block (MTNLB) is proposed, in which a static activation function is used to obtain the same functionality with minimal overhead. Using a Verilog-AMS model of a physical memristor, the proposed NLBs are applied to implement both reconfigurable logic and an optical character recognition (OCR) system. When the MTNLB is used as a building block for ISCAS-85 benchmark circuits, it provides energy-delay product (EDP) improvements of over 90 percent over a standard LUT implementation on all benchmark circuits and up to a 99 percent improvement over a threshold NLB implementation. As a compromise, the ANLB and RANLB provide a smaller EDP improvement in a static system but achieve faster training convergence times for all functions. To show how the proposed design can simplify an OCR application, a simple 8x8 digit recognition system is developed. Using only four 16-input NLBs per digit, the system is able to develop a model of each digit in only 90 µs and correctly classify the majority of test images.
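
    The limitation the abstract attributes to a simple threshold activation is the classic one: a single threshold unit cannot realize linearly non-separable functions such as XOR, whereas a richer activation over the same weighted sum can. The sketch below is not the paper's memristor circuit; the weights and the two thresholds are hand-picked assumptions that illustrate the multi-threshold idea in one linear unit.

```python
# A single weighted sum s = x1 + x2 fed through two different activations.

def single_threshold(x1, x2):
    s = x1 + x2                       # weighted sum, weights = (1, 1)
    return 1 if s > 0.5 else 0        # one threshold: this realizes OR, never XOR

def multi_threshold(x1, x2):
    s = x1 + x2                       # same weighted sum
    return 1 if 0.5 < s < 1.5 else 0  # two thresholds bracket s == 1 -> XOR

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "single:", single_threshold(x1, x2),
              "multi:", multi_threshold(x1, x2), "xor:", x1 ^ x2)
```

    No choice of weights and single threshold separates XOR's positive and negative inputs with one hyperplane, which is why threshold-only NLBs need a second layer; adding a second threshold (or an adaptive activation, as in the ANLB and RANLB) expands the set of functions a single block can learn.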