
    Computational neural learning formalisms for manipulator inverse kinematics

    An efficient, adaptive neural learning paradigm for addressing the inverse kinematics of redundant manipulators is presented. The proposed methodology exploits the infinite local stability of terminal attractors, a new class of mathematical constructs which provide unique information-processing capabilities to artificial neural systems. For robotic applications, the synaptic elements of such networks can rapidly acquire the kinematic invariances embedded within the presented samples. Subsequently, the joint-space configurations required to follow arbitrary end-effector trajectories can readily be computed. In a significant departure from prior neuromorphic learning algorithms, this methodology provides mechanisms for incorporating an in-training skew to handle kinematic and environmental constraints.
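
    For reference (not taken from the work itself), a terminal attractor in Zak's sense replaces the usual linear restoring term with a fractional power, for example dx/dt = -beta * x^(1/3); the equilibrium then violates the Lipschitz condition and is reached in finite time, which is the "infinite local stability" the abstract invokes. A minimal sketch of that property, with illustrative constants and no connection to the network or learning rule of the work:

```python
# Minimal illustration (not the paper's network): a terminal attractor
# dx/dt = -beta * x**(1/3) reaches x = 0 in finite time, whereas the ordinary
# linear attractor dx/dt = -beta * x only approaches it asymptotically.
# Finite-time convergence is the property described as "infinite local stability".
import numpy as np

def settle_time(fractional, x0=1.0, beta=1.0, dt=1e-4, tol=1e-6):
    """Euler-integrate until |x| < tol and return the elapsed time."""
    x, t = x0, 0.0
    while abs(x) > tol and t < 20.0:
        if fractional:
            dx = -beta * np.sign(x) * abs(x) ** (1.0 / 3.0)  # terminal attractor
        else:
            dx = -beta * x                                   # conventional linear decay
        x += dt * dx
        t += dt
    return t

print("terminal attractor settles near t =", round(settle_time(True), 3))          # ~1.5 (exact: 3/2)
print("linear attractor reaches tolerance near t =", round(settle_time(False), 3)) # ~13.8
```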

    Letting the Brain Speak for itself

    Metaphors of Computation and Information have tended to divert attention from the intrinsic modes of neural system function, uncontaminated by the observer's role in the collection and interpretation of experimental data. Recognizing this self-referential mode of function, and the propensity for self-organization toward critical states, requires a fundamental re-orientation with emphasis on the conceptual approaches of Complex System Dynamics. Accordingly, local cooperative processes, intrinsic to neural structures and fractal in nature, call for applying Fractional Calculus and models of Random Walks in Theoretical Neuroscience studies.
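
    As an illustration of the class of models named above (a sketch with assumed parameters, not material from the work), a continuous-time random walk with heavy-tailed waiting times produces subdiffusion, the regime that fractional diffusion equations describe: mean squared displacement grows sublinearly in time.

```python
# Minimal continuous-time random walk (CTRW) sketch: an illustration of the kind
# of model the abstract points to, not one taken from the work. Heavy-tailed
# (Pareto) waiting times with exponent alpha < 1 give subdiffusion, i.e. the
# mean squared displacement grows roughly like t**alpha rather than t.
import numpy as np

rng = np.random.default_rng(0)

def ctrw_positions(alpha, n_walkers=2000, t_max=1000.0):
    """Position of each walker at time t_max, with +/-1 jumps and Pareto waits."""
    pos = np.zeros(n_walkers)
    t = np.zeros(n_walkers)
    active = np.ones(n_walkers, dtype=bool)
    while active.any():
        waits = rng.pareto(alpha, n_walkers) + 1.0          # heavy-tailed waiting times
        t += np.where(active, waits, 0.0)
        jumps = rng.choice([-1.0, 1.0], n_walkers)
        pos += np.where(active & (t <= t_max), jumps, 0.0)  # jump only if still within t_max
        active &= t <= t_max
    return pos

for alpha in (0.6, 1.8):  # alpha < 1: subdiffusive; alpha > 1 behaves nearly normally here
    msd = np.mean(ctrw_positions(alpha) ** 2)
    print(f"alpha={alpha}: mean squared displacement at t=1000 is about {msd:.0f}")
```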

    Automatic analysis of electronic drawings using neural network

    Neural network techniques have been found to be a powerful tool in pattern recognition. They capture associations or discover regularities within a set of patterns where the types, number of variables, or diversity of the data are very great, the relationships between variables are vaguely understood, or the relationships are difficult to describe adequately with conventional approaches. In this dissertation, which concerns the research and system design aimed at recognizing the digital gate symbols and characters in electronic drawings, we have proposed: (1) a modified Kohonen neural network with a shift-invariant capability in pattern recognition; (2) an effective approach to optimizing the structure of the back-propagation neural network; and (3) candidate-searching and pre-processing techniques to facilitate the automatic analysis of electronic drawings. An analysis and the system performance reveal that when the shift of an image pattern is not large and the rotation is only by n×90° (n = 1, 2, 3), the modified Kohonen neural network is superior to the conventional Kohonen neural network in terms of shift-invariant and limited rotation-invariant capabilities. As a result, the dimensionality of the Kohonen layer can be reduced significantly compared with the conventional network for the same performance. Moreover, the size of the subsequent neural network, say a back-propagation feed-forward neural network, can be decreased dramatically. There are no known rules for specifying the number of nodes in the hidden layers of a feed-forward neural network: increasing the size of the hidden layer usually improves recognition accuracy, while decreasing it generally improves generalization capability. We determine the optimal size by simulation to strike a balance between accuracy and generalization. This optimized back-propagation neural network generally outperforms conventional ones designed by experience. To further reduce computational complexity and save the calculation time spent in the neural networks, pre-processing techniques have been developed to remove long circuit lines in the electronic drawings, which makes candidate searching more effective.
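
    As a generic sketch of the accuracy/generalization trade-off described above (synthetic data and a stand-in network, not the dissertation's drawing-recognition system), one can determine a reasonable hidden-layer size by sweeping it and comparing training accuracy against held-out accuracy:

```python
# Generic sketch of choosing a hidden-layer size by simulation, in the spirit of
# the trade-off the abstract describes (larger layers fit better, smaller ones
# generalize better). The data and network here are stand-ins, not the
# dissertation's drawing-recognition setup.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=64, n_informative=20,
                           n_classes=4, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

for hidden in (4, 16, 64, 256):
    net = MLPClassifier(hidden_layer_sizes=(hidden,), max_iter=500, random_state=0)
    net.fit(X_tr, y_tr)
    print(f"hidden={hidden:4d}  train acc={net.score(X_tr, y_tr):.3f}  "
          f"val acc={net.score(X_val, y_val):.3f}")
```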

    NEURAL NETWORKS FOR DECISION SUPPORT: PROBLEMS AND OPPORTUNITIES

    Neural networks offer an approach to computing which, unlike conventional programming, does not necessitate a complete algorithmic specification. Furthermore, neural networks provide inductive means for gathering, storing, and using experiential knowledge. Incidentally, these have also been some of the fundamental motivations for the development of decision support systems in general. Thus, the interest in neural networks for decision support is immediate and obvious. In this paper, we analyze the potential contribution of neural networks to decision support, on the one hand, and point out some inherent constraints that might inhibit their use, on the other. For the sake of completeness and organization, the analysis is carried out in the context of a general-purpose DSS framework that examines all the key factors that come into play in the design of any decision support system. (Information Systems Working Papers Series)

    Stream segregation in the anesthetized auditory cortex

    Auditory stream segregation describes the way that sounds are perceptually segregated into groups or streams on the basis of perceptual attributes such as pitch or spectral content. For sequences of pure tones, segregation depends on the tones' proximity in frequency and time. In the auditory cortex (and elsewhere), responses to sequences of tones are dependent on stimulus conditions in a similar way to the perception of these stimuli. However, although highly dependent on stimulus conditions, perception is also clearly influenced by factors unrelated to the stimulus, such as attention. Exactly how ‘bottom-up’ sensory processes and non-sensory ‘top-down’ influences interact is still not clear. Here, we recorded responses to alternating tones (ABAB …) of varying frequency difference (FD) and rate of presentation (PR) in the auditory cortex of anesthetized guinea-pigs. These data complement previous studies, in that top-down processing resulting from conscious perception should be absent or at least considerably attenuated. Under anesthesia, the responses of cortical neurons to the tone sequences adapted rapidly, in a manner sensitive to both the FD and PR of the sequences. While the responses to tones at frequencies more distant from neuron best frequencies (BFs) decreased as the FD increased, the responses to tones near BF increased, consistent with a release from adaptation, or forward suppression. Increases in PR resulted in reductions in responses to all tones, but the reduction was greater for tones further from BF. Although asymptotically adapted responses to tones showed behavior that was qualitatively consistent with perceptual stream segregation, responses reached asymptote within 2 s, and responses to all tones were very weak at high PRs (>12 tones per second). A signal-detection model, driven by the cortical population response, made decisions that were dependent on both FD and PR in ways consistent with perceptual stream segregation. This included showing a range of conditions over which decisions could be made either in favor of perceptual integration or segregation, depending on the model ‘decision criterion’. However, the rate of ‘build-up’ was more rapid than seen perceptually, and at high PR responses to tones were sometimes so weak as to be undetectable by the model. Under anesthesia, adaptation occurs rapidly, and at high PRs tones are generally poorly represented, which compromises the interpretation of the experiment. However, within these limitations, these results complement experiments in awake animals and humans. They generally support the hypothesis that ‘bottom-up’ sensory processing plays a major role in perceptual organization, and that processes underlying stream segregation are active in the absence of attention.
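
    For illustration only, since the abstract does not give the model's details, a signal-detection readout of this general kind can be sketched as follows: summarize the population responses evoked by the A and B tones, compute a sensitivity index d', and report segregation whenever d' exceeds a decision criterion. The numbers below are toy spike counts, not the recorded data.

```python
# Minimal signal-detection readout, illustrating the kind of decision rule the
# abstract describes (not the authors' model). Responses to A and B tones are
# summarized by a sensitivity index d'; if the two tones drive sufficiently
# different responses, the model reports "segregated", otherwise "integrated".
import numpy as np

def d_prime(resp_a, resp_b):
    """Standard d': separation of two response distributions in pooled-SD units."""
    pooled_sd = np.sqrt(0.5 * (np.var(resp_a) + np.var(resp_b)))
    return abs(np.mean(resp_a) - np.mean(resp_b)) / pooled_sd

def decide(resp_a, resp_b, criterion=1.0):
    """Report segregation when d' exceeds the decision criterion."""
    return "segregated" if d_prime(resp_a, resp_b) > criterion else "integrated"

rng = np.random.default_rng(1)
# Toy population responses (spike counts) for a small and a large frequency difference.
small_fd = (rng.poisson(20, 200), rng.poisson(18, 200))
large_fd = (rng.poisson(20, 200), rng.poisson(6, 200))
print("small FD:", decide(*small_fd))   # likely "integrated"
print("large FD:", decide(*large_fd))   # likely "segregated"
```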

    Action selection in the rhythmic brain: The role of the basal ganglia and tremor.

    Low-frequency oscillatory activity has been the target of extensive research both in cortical structures and in the basal ganglia (BG), due to numerous reports of associations with brain disorders and the normal functioning of the brain. Additionally, a plethora of evidence and theoretical work indicates that the BG might be the locus where conflicts between prospective actions are being resolved. Whereas a number of computational models of the BG investigate these phenomena, these models tend to focus on intrinsic oscillatory mechanisms, neglecting evidence that points to the cortex as the origin of this oscillatory behaviour. In this thesis, we construct a detailed neural model of the complete BG circuit based on fine-tuned spiking neurons, with both electrical and chemical synapses as well as short-term plasticity between structures. To do so, we build a complete suite of computational tools for the design, optimization and simulation of spiking neural networks. Our model successfully reproduces firing and oscillatory behaviour found in both the healthy and Parkinsonian BG, and it was used to make a number of biologically plausible predictions. First, we investigate the influence of various cortical frequency bands on the intrinsic effective connectivity of the BG, as well as the role of the latter in regulating cortical behaviour. We found that, indeed, effective connectivity changes dramatically for different cortical frequency bands and phase offsets, which are able to modulate (or even block) information flow in the three major BG pathways. Our results indicate the existence of a multimodal gating mechanism at the level of the BG that can be entirely controlled by cortical oscillations, and provide evidence for the hypothesis of cortically-entrained but locally-generated subthalamic beta activity. Next, we explore the relationship between the wave properties of entrained cortical inputs, dopamine, and the transient effectiveness of the BG when viewed as an action selection device. We found that cortical frequency, phase, dopamine, and the examined time scale all have a very important impact on the ability of our model to select. Our simulations resulted in a canonical profile of selectivity, which we termed selectivity portraits. Taken together, our results suggest that the cortex is the structure that determines whether action selection will be performed and what strategy will be utilized, while the role of the BG is to perform this selection. Some frequency ranges promote the exploitation of actions whose outcome is known, others promote the exploration of new actions with high uncertainty, while the remaining frequencies simply deactivate selection. Based on this behaviour, we propose a metaphor according to which the basal ganglia can be viewed as the "gearbox" of the cortex. Coalitions of rhythmic cortical areas are able to switch between a repertoire of available BG modes which, in turn, change the course of information flow back to and within the cortex. In the same context, dopamine can be likened to the "control pedals" of action selection that either stop or initiate a decision. Finally, the frequency of active cortical areas that project to the BG acts as a gear lever which, instead of controlling the type and direction of thrust that the throttle provides to an automobile, dictates the extent to which dopamine can trigger a decision, as well as what type of decision this will be.
    Finally, we identify a selection cycle with a period of around 200 ms, which was used to assess the biological plausibility of the most popular architectures in cognitive science. Using extensions of the BG model, we further propose novel mechanisms that provide explanations for (1) the two distinctive dynamical behaviours of neurons in the external globus pallidus, and (2) the generation of resting tremor in Parkinson's disease. Our findings agree well with experimental observations, suggest new insights into the pathophysiology of specific BG disorders, provide new justifications for oscillatory phenomena related to decision making and reaffirm the role of the BG as the selection centre of the brain.
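
    The following toy is deliberately schematic and is not the thesis' spiking basal ganglia model; it only illustrates the qualitative claim that cortical oscillation frequency and dopamine gain jointly determine whether one of two competing action channels can win by a fixed margin. The leaky-integrator channels, saliences, and threshold are all illustrative assumptions.

```python
# Deliberately schematic toy (not the thesis' spiking model): two leaky-integrator
# "action channels" receive an oscillatory cortical drive whose gain is scaled by
# dopamine. A channel is considered selected when it beats its competitor by a
# fixed margin; both the frequency of the drive and the dopamine gain decide
# whether that ever happens. All parameters are illustrative.
import numpy as np

def max_selection_margin(cortical_freq_hz, dopamine, saliences=(1.0, 0.4),
                         tau=0.05, t_max=1.0, dt=1e-3):
    """Peak difference between the two channels' outputs over the simulation."""
    t = np.arange(0.0, t_max, dt)
    gate = 0.5 * (1.0 + np.sin(2.0 * np.pi * cortical_freq_hz * t))  # cortical oscillation
    y = np.zeros((2, t.size))
    for k in range(1, t.size):
        for i, s in enumerate(saliences):
            drive = dopamine * s * gate[k]                           # dopamine scales input gain
            y[i, k] = y[i, k - 1] + dt * (drive - y[i, k - 1]) / tau
    return float(np.max(np.abs(y[0] - y[1])))

THRESHOLD = 0.4   # margin needed before one channel counts as "selected"
for freq in (4, 20, 60):
    for da in (0.2, 1.0):
        margin = max_selection_margin(freq, da)
        print(f"cortex {freq:2d} Hz, dopamine {da:.1f}: margin={margin:.2f}, "
              f"selected={margin > THRESHOLD}")
```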

    The investigation of variable Nernst equilibria on isolated neurons and coupled neurons forming discrete and continuous networks

    Since the introduction of the Hodgkin-Huxley equations, used to describe the excitation of neurons, the Nernst equilibria for individual ion channels have been assumed to be constant in time. Recent biological recordings call into question the validity of this assumption. Very little theoretical work has been done to address the issue of accounting for these non-static Nernst equilibria using the Hodgkin-Huxley formalism. This body of work incorporates non-static Nernst equilibria into the generalized Hodgkin-Huxley formalism by considering the first-order effects of the Nernst equation. It is further demonstrated that these effects likely dominate in neurons with diameters much smaller than that of the squid giant axon, which permeate important information-processing regions of the brain such as the hippocampus. Particular results of interest include single-cell bursting due to the interplay of spatially separated neurons, pattern formation via spiral waves within a soliton-like regime, and quantifiable shifts in the multifractality of hippocampal neurons under the administration of various drugs at varying dosages. This work provides a new perspective on the variability of Nernst equilibria and demonstrates its utility in areas such as pharmacology and information processing.
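
    For reference, the Nernst equilibrium potential for an ion of valence z is E = (RT / zF) ln([ion]_out / [ion]_in). The sketch below uses standard textbook concentrations (it does not reproduce the dissertation's coupling of these potentials back into the Hodgkin-Huxley equations) to show why small intracellular volumes make the equilibria non-static: a few percent change in intracellular concentration already shifts the reversal potential by a couple of millivolts.

```python
# Nernst equilibrium potentials and their sensitivity to concentration changes.
# The dissertation's Hodgkin-Huxley coupling is not reproduced here; this only
# illustrates why small intracellular volumes make E_ion non-static.
import math

R = 8.314      # gas constant, J / (mol K)
F = 96485.0    # Faraday constant, C / mol
T = 310.0      # temperature, K (approximately body temperature)

def nernst_mV(z, c_out_mM, c_in_mM):
    """Nernst potential E = (RT / zF) * ln(c_out / c_in), in millivolts."""
    return 1000.0 * (R * T) / (z * F) * math.log(c_out_mM / c_in_mM)

# Textbook mammalian concentrations (mM).
print(f"E_K  = {nernst_mV(+1, 5.0, 140.0):6.1f} mV")      # about -89 mV
print(f"E_Na = {nernst_mV(+1, 145.0, 12.0):6.1f} mV")     # about +66 mV

# A 5% loss of intracellular K+ (easy to produce in a thin dendrite or small
# soma) already shifts the reversal potential by roughly 1.4 mV.
print(f"E_K after 5% K+ loss = {nernst_mV(+1, 5.0, 133.0):6.1f} mV")
```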

    Image Description using Deep Neural Networks

    Current research in computer vision and machine learning has demonstrated considerable ability at detecting and recognizing objects in natural images. Current state-of-the-art results for object detection, classification, and localization in the ImageNet Challenges put the top-5 validation error for classification at 3.08%, while similar classification experiments run by trained humans report an error rate of 5.1%. While some might argue that human accuracy is a function of training time, it can be said with great confidence that automated classification models are at least as good as trained humans on classification problems. The ability of these models to analyze and describe complex images, however, is still an active area of research. Image description is a good starting point for imparting artificial intelligence to machines, allowing them to analyze and describe complex visual scenes. This thesis introduces a generic, end-to-end trainable Fusion-based Recurrent Multi-Modal (FRMM) architecture to address multi-modal applications. FRMM allows each input modality to be independent in terms of architecture, parameters, and length of input sequences. FRMM image description models seamlessly blend convolutional neural network feature descriptors with sequential language data in a recurrent framework. In addition to introducing FRMMs, this work also analyzes the impact of varying activation functions and vocabulary size. The Flickr8k, Flickr30K, and MSCOCO datasets have been used for training and testing, demonstrating state-of-the-art description results.
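
    The FRMM fusion mechanism itself is not specified in the abstract; as a minimal sketch of the general pattern it builds on (convolutional feature descriptors conditioning a recurrent language model), the snippet below feeds a projected image feature into an LSTM decoder in PyTorch. The sizes and the feature-as-first-token scheme are placeholder assumptions, not the FRMM design.

```python
# Generic CNN-feature + LSTM captioning sketch (PyTorch). This shows the broad
# pattern the abstract builds on (convolutional feature descriptors feeding a
# recurrent language model), not the FRMM fusion architecture itself; sizes and
# the simple feature-as-initial-input scheme are placeholder assumptions.
import torch
import torch.nn as nn

class CaptionDecoder(nn.Module):
    def __init__(self, feat_dim=2048, embed_dim=256, hidden_dim=512, vocab_size=10000):
        super().__init__()
        self.project = nn.Linear(feat_dim, embed_dim)   # map CNN features into word space
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.to_vocab = nn.Linear(hidden_dim, vocab_size)

    def forward(self, cnn_features, captions):
        # Prepend the projected image feature as the first "token" of the sequence.
        img_token = self.project(cnn_features).unsqueeze(1)        # (B, 1, E)
        words = self.embed(captions)                               # (B, T, E)
        seq = torch.cat([img_token, words], dim=1)                 # (B, T+1, E)
        hidden, _ = self.lstm(seq)
        return self.to_vocab(hidden)                               # next-word logits

decoder = CaptionDecoder()
feats = torch.randn(4, 2048)                 # e.g. pooled CNN descriptors
caps = torch.randint(0, 10000, (4, 12))      # toy token ids
print(decoder(feats, caps).shape)            # torch.Size([4, 13, 10000])
```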

    Applications of Artificial Neural Networks to Synthetic Aperture Radar for Feature Extraction in Noisy Environments

    It is often the case that images generated from Synthetic Aperture Radar (SAR) are noisy, distorted, or incomplete pictures of a target or target region. As the goal of most SAR research pertains to automatic target recognition (ATR), extensive filtering and image processing are required in order to extract the features necessary to carry out ATR. This thesis investigates the use of Artificial Neural Networks (ANNs) to improve the feature extraction process by laying the foundation for ANN SAR ATR algorithms and programs. The first technique investigated is an ANN edge detector designed to be invariant to multiplicative speckle noise. The designed algorithm uses the Back Propagation (BP) algorithm to train a multi-layer perceptron network to detect edges. To do so, several parameters within a Sliding Window (SW) are calculated as the inputs to the ANN. The ANN then outputs an edge map that includes the outer edge features of the target as well as some internal edge features. The next technique examined is a pattern recognition and target reconstruction algorithm based on the associative memory ANN known as the Hopfield Network (HN). For this version of the HN, the network is trained with a collection of varying geometric shapes. The output of the network is a nearest-fit representation of the incomplete image data input. Because of the versatility of this program, it is also able to reconstruct incomplete 3D models determined from SAR data. The final technique investigated is an automatic rotation procedure to detect the change in perspective relative to the platform. This type of detection can prove useful for target tracking or 3D modeling, where the direction vector or relative angle of the target is a desired piece of information.
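
    The Hopfield construction behind the reconstruction step has a standard form: Hebbian outer-product weights over the stored patterns and an iterative sign update that settles toward the nearest stored pattern. A minimal sketch with toy bipolar vectors rather than SAR imagery or the thesis' library of geometric shapes:

```python
# Minimal Hopfield associative memory, the standard construction behind the
# reconstruction scheme the abstract describes; the patterns here are toy
# bipolar vectors, not SAR imagery.
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product weights (zero diagonal) for +/-1 patterns."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, probe, n_iter=10):
    """Synchronous sign updates; settles toward the nearest stored pattern."""
    state = probe.copy()
    for _ in range(n_iter):
        state = np.where(W @ state >= 0.0, 1.0, -1.0)
    return state

rng = np.random.default_rng(2)
patterns = rng.choice([-1.0, 1.0], size=(3, 100))     # three stored "shapes"
W = train_hopfield(patterns)

probe = patterns[0].copy()
probe[:30] = 0.0                                      # simulate missing data
restored = recall(W, probe)
print("overlap with stored pattern:", np.mean(restored == patterns[0]))
```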