
    Computational neural learning formalisms for manipulator inverse kinematics

    An efficient, adaptive neural learning paradigm for addressing the inverse kinematics of redundant manipulators is presented. The proposed methodology exploits the infinite local stability of terminal attractors, a new class of mathematical constructs which provide unique information-processing capabilities to artificial neural systems. For robotic applications, the synaptic elements of such networks can rapidly acquire the kinematic invariances embedded in the presented samples. Subsequently, the joint-space configurations required to follow arbitrary end-effector trajectories can readily be computed. In a significant departure from prior neuromorphic learning algorithms, this methodology provides mechanisms for incorporating an in-training skew to handle kinematic and environmental constraints.
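
    The defining property of a terminal attractor is a non-Lipschitz right-hand side, which lets trajectories reach the equilibrium in finite time rather than only asymptotically. The following minimal NumPy sketch illustrates that property for the classic example dx/dt = -x^(1/3) from the terminal-attractor literature; it is an illustration of the construct, not the paper's learning algorithm, and the step size and horizon are arbitrary choices.

        # Terminal attractor dx/dt = -x^(1/3): the Lipschitz condition fails
        # at x = 0, so the state settles in finite time t* = (3/2)|x0|^(2/3).
        # Compare with the ordinary attractor dx/dt = -x, which converges
        # only asymptotically. (Illustrative sketch, not the paper's method.)
        import numpy as np

        def integrate(f, x0, dt=1e-3, t_max=5.0):
            """Forward-Euler integration; returns times and states."""
            ts = np.arange(0.0, t_max, dt)
            xs = np.empty_like(ts)
            x = x0
            for i in range(len(ts)):
                xs[i] = x
                x = x + dt * f(x)
            return ts, xs

        terminal = lambda x: -np.sign(x) * np.abs(x) ** (1.0 / 3.0)
        ordinary = lambda x: -x

        x0 = 1.0
        ts, x_term = integrate(terminal, x0)
        _, x_ord = integrate(ordinary, x0)

        t_star = 1.5 * abs(x0) ** (2.0 / 3.0)   # analytic settling time: 1.5
        print(f"predicted settling time: {t_star:.3f}")
        print(f"terminal attractor at t = 2: {x_term[ts >= 2.0][0]:.2e}")  # ~0
        print(f"ordinary attractor at t = 2: {x_ord[ts >= 2.0][0]:.2e}")   # ~e^-2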

    Recurrent backpropagation and the dynamical approach to adaptive neural computation

    Error backpropagation in feedforward neural network models is a popular learning algorithm that has its roots in nonlinear estimation and optimization. It is used routinely to calculate error gradients in nonlinear systems with hundreds of thousands of parameters. However, the classical architecture for backpropagation has severe restrictions. The extension of backpropagation to networks with recurrent connections is reviewed. It is now possible to efficiently compute the error gradients for networks that have temporal dynamics, which opens applications to a host of problems in system identification and control.
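
    For networks that relax to a fixed point, the recurrent gradient computation has a compact form: settle the state to x* = f(Wx* + b), solve one linear adjoint system for the error signal, and combine the two into the weight gradient. The sketch below follows this standard Pineda-style recipe in NumPy; the tanh units, quadratic loss, and sizes are illustrative assumptions, not details taken from the review.

        # Fixed-point recurrent backpropagation (Pineda-style sketch).
        # Gradient: dE/dW[i, j] = z_i * f'(u_i) * x*_j, where x* is the
        # fixed point and z solves the adjoint system (I - W^T D) z = e.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 8
        W = rng.normal(scale=0.3 / np.sqrt(n), size=(n, n))  # small => contraction
        b = rng.normal(size=n)
        target = rng.normal(size=n)
        out_mask = np.zeros(n)
        out_mask[:3] = 1.0                    # first three units are outputs

        # 1) Relax to the fixed point x* = tanh(W x* + b).
        x = np.zeros(n)
        for _ in range(200):
            x = np.tanh(W @ x + b)

        # 2) Adjoint solve, with D = diag(tanh'(u)) and masked error e.
        d = 1.0 - x**2                        # tanh'(u) at the fixed point
        e = out_mask * (x - target)           # dE/dx* for E = 0.5 * sum(e^2)
        z = np.linalg.solve(np.eye(n) - W.T * d[np.newaxis, :], e)
        grad = (z * d)[:, None] * x[None, :]  # dE/dW

        # 3) Sanity check against a finite difference on one weight.
        def loss(Wp):
            xp = np.zeros(n)
            for _ in range(200):
                xp = np.tanh(Wp @ xp + b)
            return 0.5 * np.sum((out_mask * (xp - target)) ** 2)

        eps = 1e-6
        Wp = W.copy()
        Wp[2, 5] += eps
        print(f"analytic    {grad[2, 5]:+.6e}")
        print(f"finite diff {(loss(Wp) - loss(W)) / eps:+.6e}")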

    From Bidirectional Associative Memory to a noise-tolerant, robust Protein Processor Associative Memory

    The Protein Processor Associative Memory (PPAM) is a novel architecture for learning associations incrementally and online, and for performing fast, reliable, scalable hetero-associative recall. This paper presents a comparison of the PPAM with the Bidirectional Associative Memory (BAM), both with Kosko's original training algorithm and with the more popular Pseudo-Relaxation Learning Algorithm for BAM (PRLAB). It also compares the PPAM with a more recent associative memory architecture called SOIAM. Results of training for object avoidance are presented from simulations using Player/Stage and are verified by implementations on the e-puck mobile robot. Finally, we show how the PPAM is capable of achieving an increase in performance without using the typical weighted-sum arithmetic operations, or indeed any arithmetic operations at all.
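
    The PPAM's internals are not reproduced here, but the BAM baseline named in the abstract is easy to sketch. With Kosko's original rule, the weight matrix is a sum of outer products of bipolar pattern pairs, and recall alternates between the two layers until a stable pair is reached. The patterns and sizes below are illustrative.

        # Kosko-style Bidirectional Associative Memory (BAM) sketch: the
        # baseline architecture the paper compares PPAM against.
        import numpy as np

        def sgn(v):
            """Bipolar threshold; ties resolve to +1."""
            return np.where(v >= 0, 1, -1)

        # Illustrative association pairs: x in {-1,+1}^6, y in {-1,+1}^4.
        X = np.array([[1, -1,  1, -1, 1, -1],
                      [1,  1, -1, -1, 1,  1]])
        Y = np.array([[1,  1, -1, -1],
                      [1, -1,  1, -1]])

        M = sum(np.outer(x, y) for x, y in zip(X, Y))   # outer-product training

        def recall(x, steps=10):
            """Hetero-associative recall from a (possibly noisy) x cue."""
            for _ in range(steps):
                y = sgn(M.T @ x)
                x_next = sgn(M @ y)
                if np.array_equal(x_next, x):           # stable pair reached
                    return x, y
                x = x_next
            return x, y

        noisy = X[0].copy()
        noisy[2] *= -1                                  # corrupt one bit
        x_rec, y_rec = recall(noisy)
        print("x recovered:", np.array_equal(x_rec, X[0]))
        print("y recovered:", np.array_equal(y_rec, Y[0]))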

    NASA JSC neural network survey results

    A survey of artificial neural systems was conducted in support of the Automatic Perception for Mission Planning and Flight Control Research Program at NASA's Johnson Space Center. Several of the world's leading researchers contributed papers containing their most recent results on artificial neural systems. These papers were grouped into categories, and descriptive accounts of the results make up a large part of this report. Also included is material on sources of information on artificial neural systems, such as books, technical reports, and software tools.

    Complex Neural Networks for Audio

    Audio is represented in two mathematically equivalent ways: the real-valued time domain (i.e., waveform) and the complex-valued frequency domain (i.e., spectrum). The frequency-domain representation has advantages; e.g., the human auditory system is known to process sound in the frequency domain, and linear time-invariant systems convolve with sources in the time domain but factorize into products in the frequency domain. Neural networks have become rather useful when applied to audio tasks such as machine listening and audio synthesis, which are related by their dependence on high-quality acoustic models. Such models should ideally capture fine-scale temporal structure, such as that encoded in the phase of frequency-domain audio, yet there are no authoritative deep learning methods for complex-valued audio. This manuscript is dedicated to addressing that shortcoming. Chapter 2 motivates complex networks by their affinity with complex-domain audio, while Chapter 3 contributes methods for building and optimizing complex networks. We show that the naive implementation of Adam optimization is incorrect for complex random variables, and that the choice of input and output representation has a significant impact on the performance of a complex network. Experimental results with novel complex neural architectures are provided in the second half of the manuscript. Chapter 4 introduces a complex model for binaural audio source localization. We show that, like humans, the complex model can generalize to different anatomical filters, which is important in the context of machine listening. The complex model's performance is better than that of real-valued models, as well as real- and complex-valued baselines. Chapter 5 proposes a two-stage method for speech enhancement. In the first stage, a complex-valued stochastic autoencoder projects complex vectors to a discrete space. In the second stage, long-term temporal dependencies are modeled in the discrete space. The autoencoder raises the performance ceiling for state-of-the-art speech enhancement, but the dynamic enhancement model does not outperform other baselines. We discuss areas for improvement and note that the complex Adam optimizer improves training convergence over the naive implementation.
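
    The point about Adam and complex random variables can be made concrete. One common reading, and an assumption here since the thesis's exact formulation is not quoted in the abstract, is that the second-moment estimate must accumulate |g|^2 = g * conj(g), which is real and non-negative, whereas naively squaring a complex gradient yields a complex-valued "variance" and a meaningless step size. A minimal NumPy sketch of that corrected update:

        # Adam for a complex parameter (sketch). Assumption, not verbatim
        # from the thesis: the second moment uses |g|^2 = g * conj(g)
        # (real, non-negative) rather than the naive g**2 (complex).
        import numpy as np

        def complex_adam_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
            """One Adam step on complex parameter w with complex gradient g."""
            m = b1 * m + (1 - b1) * g                      # first moment: complex
            v = b2 * v + (1 - b2) * (g * np.conj(g)).real  # second moment: real
            m_hat = m / (1 - b1 ** t)                      # bias corrections
            v_hat = v / (1 - b2 ** t)
            return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

        # Toy problem: minimize |w - c|^2 toward a complex target c.
        c = 0.7 - 0.2j
        w, m, v = 0.0 + 0.0j, 0.0 + 0.0j, 0.0
        for t in range(1, 2001):
            g = w - c        # Wirtinger gradient of |w - c|^2 (up to a factor)
            w, m, v = complex_adam_step(w, g, m, v, t)
        print(f"target {c}, learned {w:.4f}")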

    Analysing and enhancing the performance of associative memory architectures

    This thesis investigates how information about the structure of a set of training data with 'natural' characteristics may be used to positively influence the design of associative memory neural network models of the Hopfield type, with a view to reducing the level of connectivity in such models. There are three strands to this work. Firstly, an empirical evaluation of the implementation of existing theory is given. Secondly, a number of existing theories are combined to produce novel network models and training regimes. Thirdly, new strategies for constructing and training associative memories based on knowledge of the structure of the training data are proposed. The first conclusion of this work is that, under certain circumstances, performance benefits may be gained by establishing the connectivity in a non-random fashion, guided by knowledge of the structure of the training data. These improvements are measured relative to networks in which sparse connectivity is established purely at random, with dilution in both cases occurring prior to the training of the network. Secondly, it is verified that, as predicted by existing theory, targeted post-training dilution of network connectivity provides greater performance than removing connections at random. Finally, an existing tool for analysing the attractor performance of neural networks of this type has been modified and improved, and a novel, comprehensive performance analysis tool is proposed.
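
    The targeted post-training dilution the thesis verifies has a standard minimal form: train a Hopfield-type network with the Hebbian outer-product rule, then remove a fixed fraction of connections either at random or according to a relevance criterion. The sketch below assumes smallest-magnitude-weight pruning as the targeting criterion, which is a common choice but not necessarily the thesis's own, and uses random rather than 'natural' patterns.

        # Targeted vs. random post-training dilution of a Hopfield-type
        # associative memory (sketch). Targeting heuristic assumed here:
        # keep the largest-magnitude weights. Recall quality is the overlap
        # with the stored pattern after retrieval from a noisy cue.
        import numpy as np

        rng = np.random.default_rng(1)
        n, p = 100, 8
        patterns = rng.choice([-1, 1], size=(p, n))

        W = (patterns.T @ patterns).astype(float) / n   # Hebbian training
        np.fill_diagonal(W, 0.0)

        def dilute(W, keep_frac, targeted):
            """Zero out a fraction of weights, keeping the matrix symmetric."""
            Wd = W.copy()
            iu = np.triu_indices_from(W, k=1)
            m = len(iu[0])
            k = int(keep_frac * m)
            if targeted:
                keep = np.argsort(-np.abs(W[iu]))[:k]   # largest |w| survive
            else:
                keep = rng.choice(m, size=k, replace=False)
            mask = np.zeros(m, dtype=bool)
            mask[keep] = True
            Wd[iu] = np.where(mask, W[iu], 0.0)
            Wd.T[iu] = Wd[iu]                           # mirror below diagonal
            return Wd

        def recall_overlap(W, pattern, flips=10, steps=20):
            """Noisy cue -> synchronous updates -> overlap with the pattern."""
            x = pattern.copy()
            idx = rng.choice(len(x), size=flips, replace=False)
            x[idx] *= -1
            for _ in range(steps):
                x = np.where(W @ x >= 0, 1, -1)
            return np.mean(x == pattern)

        for targeted in (True, False):
            Wd = dilute(W, keep_frac=0.3, targeted=targeted)
            score = np.mean([recall_overlap(Wd, q) for q in patterns])
            print(f"targeted={targeted}: mean recall overlap {score:.3f}")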

    A General Return-Mapping Framework for Fractional Visco-Elasto-Plasticity

    We develop a fractional return-mapping framework for power-law visco-elasto-plasticity. In our approach, fractional viscoelasticity is accounted for through canonical combinations of Scott-Blair elements, which construct a series of well-known fractional linear viscoelastic models, such as Kelvin-Voigt, Maxwell, Kelvin-Zener, and Poynting-Thomson. We also consider a fractional quasi-linear version of Fung's model to account for stress/strain nonlinearity. The fractional viscoelastic models are combined with a fractional visco-plastic device, itself involving serial combinations of Scott-Blair elements. We then develop a general return-mapping procedure, which is fully implicit for linear viscoelastic models and semi-implicit for the quasi-linear case. We find that, in the correction phase, the discrete stress projection and plastic slip have the same form for all the considered models, albeit with different property- and time-step-dependent projection terms. A series of numerical experiments is carried out with analytical and reference solutions to demonstrate the convergence and computational cost of the proposed framework, which is shown to be at least first-order accurate for general loading conditions. Our numerical results demonstrate that the developed framework is more flexible and preserves the numerical accuracy of existing approaches while being more computationally tractable in the visco-plastic range, owing to a 50% reduction in CPU time. Our formulation is especially suited for emerging applications of fractional calculus in bio-tissues that present the hallmark of multiple viscoelastic power laws coupled with visco-plasticity.
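
    The Scott-Blair element used as the building block above is simple to sketch numerically: it relates stress to a fractional derivative of strain, sigma(t) = E * D^alpha eps(t), interpolating between a spring (alpha = 0) and a dashpot (alpha = 1). Below is a Grunwald-Letnikov discretization of D^alpha, one standard scheme rather than the paper's return-mapping solver, with illustrative material constants and a ramp strain whose fractional derivative is known analytically.

        # Scott-Blair element sigma = E * D^alpha(eps) via a
        # Grunwald-Letnikov discretization (sketch; not the paper's
        # return-mapping algorithm). E, alpha, and the loading are
        # illustrative.
        import numpy as np
        from math import gamma

        def gl_weights(alpha, n):
            """GL weights: w_0 = 1, w_j = w_{j-1} * (1 - (alpha + 1) / j)."""
            w = np.ones(n + 1)
            for j in range(1, n + 1):
                w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
            return w

        def frac_derivative(eps, h, alpha):
            """D^alpha eps at every grid point (zero history before t = 0)."""
            n = len(eps) - 1
            w = gl_weights(alpha, n)
            d = np.empty(n + 1)
            for k in range(n + 1):
                d[k] = np.dot(w[: k + 1], eps[k::-1]) / h ** alpha
            return d

        alpha, E = 0.5, 1.0                  # illustrative constants
        h = 1e-3
        t = np.arange(0.0, 1.0 + h, h)
        eps = t                              # ramp strain, eps(0) = 0
        sigma = E * frac_derivative(eps, h, alpha)

        # Analytic check: D^alpha t = t^(1 - alpha) / Gamma(2 - alpha).
        exact = E * t ** (1 - alpha) / gamma(2 - alpha)
        print(f"max abs error vs analytic: {np.max(np.abs(sigma - exact)):.2e}")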

    Continuous-valued probabilistic neural computation in VLSI
