
    A Model of Emotion as Patterned Metacontrol

    Adaptive systems use feedback as a key strategy to cope with uncertainty and change in their environments. The information fed back from the sensorimotor loop into the control architecture can be used to change different elements of the controller at four different levels: parameters of the control model, the control model itself, the functional organization of the agent, and the functional components of the agent. The complexity of such a space of potential configurations is daunting. The only viable alternative for the agent (in practical, economical, evolutionary terms) is the reduction of the dimensionality of the configuration space. This reduction is achieved both by functionalisation (or, more precisely, by interface minimization) and by patterning, i.e. the selection among a predefined set of organisational configurations. This last analysis lets us state the central problem of how autonomy emerges from the integration of the cognitive, emotional and autonomic systems in strict functional terms: autonomy is achieved by the closure of functional dependency. In this paper we show a general model of how emotional biological systems operate following this theoretical analysis, and how this model also applies to a wide spectrum of artificial systems.
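
The "patterning" idea in this abstract can be sketched as a selection among a handful of predefined controller configurations. The following toy example is purely illustrative; the pattern names, fields, and thresholds are invented here and do not come from the paper:

```python
# Hypothetical illustration of "patterned metacontrol": rather than
# searching the full space of controller configurations, feedback from
# the sensorimotor loop selects one of a few predefined patterns.
CONFIG_PATTERNS = {
    "calm":   {"gain": 0.5, "planning_horizon": 20},
    "alert":  {"gain": 1.5, "planning_horizon": 5},
    "escape": {"gain": 3.0, "planning_horizon": 1},
}

def patterned_metacontrol(threat_level: float) -> dict:
    """Collapse a high-dimensional reconfiguration decision into a
    single discrete choice among predefined patterns."""
    if threat_level < 0.3:
        return CONFIG_PATTERNS["calm"]
    if threat_level < 0.7:
        return CONFIG_PATTERNS["alert"]
    return CONFIG_PATTERNS["escape"]
```

The point of the sketch is the dimensionality reduction: the metacontroller decides among three patterns instead of tuning every controller parameter independently.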

    Articulating: the neural mechanisms of speech production

    Speech production is a highly complex sensorimotor task involving tightly coordinated processing across large expanses of the cerebral cortex. Historically, the study of the neural underpinnings of speech suffered from the lack of an animal model. The development of non-invasive structural and functional neuroimaging techniques in the late 20th century has dramatically improved our understanding of the speech network. Techniques for measuring regional cerebral blood flow have illuminated the neural regions involved in various aspects of speech, including feedforward and feedback control mechanisms. In parallel, we have designed, experimentally tested, and refined a neural network model detailing the neural computations performed by specific neuroanatomical regions during speech. Computer simulations of the model account for a wide range of experimental findings, including data on articulatory kinematics and brain activity during normal and perturbed speech. Furthermore, the model is being used to investigate a wide range of communication disorders. (Funded by NIDCD NIH HHS grants R01 DC002852, R01 DC007683, and R01 DC016270. Accepted manuscript.)

    On the Speed of Neuronal Populations


    The morphofunctional approach to emotion modelling in robotics

    In this conceptual paper, we discuss two areas of research in robotics, robotic models of emotion and morphofunctional machines, and we explore the scope for potential cross-fertilization between them. We shift the focus in robot models of emotion from information-theoretic aspects of appraisal to the interactive significance of bodily dispositions. Typical emotional phenomena such as arousal and action readiness can be interpreted as morphofunctional processes, and their functionality may be replicated in robotic systems with morphologies that can be modulated for real-time adaptation. We investigate the control requirements for such systems, and present a possible bio-inspired architecture based on the division of control between neural and endocrine systems in humans and animals. We suggest that emotional episodes can be understood as emergent from the coordination of action control and action readiness. This stress on morphology complements existing research on the information-theoretic aspects of emotion.
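
The division of control between fast neural and slow endocrine channels mentioned in this abstract can be hinted at with a toy two-timescale controller. This is a sketch under invented assumptions (the class name, constants, and scalar "hormone" state are all hypothetical), not the architecture proposed in the paper:

```python
class HormoneModulatedController:
    """Toy two-timescale controller: a fast proportional control loop
    whose gain is slowly modulated by a hormone-like scalar state."""

    def __init__(self, base_gain=1.0, decay=0.95):
        self.hormone = 0.0        # slow, diffuse state (e.g. "arousal")
        self.base_gain = base_gain
        self.decay = decay

    def step(self, error, stress=0.0):
        # Slow endocrine-like dynamics: accumulate stress, decay over time.
        self.hormone = self.decay * self.hormone + stress
        # Fast neural loop: proportional action, gain raised by arousal.
        return (self.base_gain + self.hormone) * error
```

Because the hormone state decays slowly, a burst of stress keeps the effective gain elevated for many control cycles, loosely mirroring how endocrine modulation outlasts the event that triggered it.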

    Development of neural units with higher-order synaptic operations and their applications to logic circuits and control problems

    Neural networks play an important role in the execution of goal-oriented paradigms. They offer flexibility, adaptability and versatility, so that a variety of approaches may be used to meet a specific goal, depending upon the circumstances and the requirements of the design specifications. Development of higher-order neural units with higher-order synaptic operations will open a new window for complex problems such as control of aerospace vehicles, pattern recognition, and image processing. The neural models described in this thesis consider the behavior of a single neuron as the basic computing unit in neural information processing operations. Each computing unit in the network is based on the concept of an idealized neuron in the central nervous system (CNS). Recent mathematical models and architectures for neuro-control systems have generated considerable theoretical and industrial interest, and recent advances in static and dynamic neural networks have had a profound impact on the field of neuro-control. Neural networks consisting of several layers of neurons with linear synaptic operation have been extensively used in applications such as pattern recognition, system identification, and control of complex systems such as flexible structures and intelligent robotic systems. The conventional linear neural models are highly simplified models of the biological neuron. Using this model, many neural morphologies, usually referred to as multilayer feedforward neural networks (MFNNs), have been reported in the literature. The performance of the neurons is greatly affected when a layer of neurons is implemented for system identification, pattern recognition and control problems. Through simulation studies of the XOR logic it was concluded that neurons with linear synaptic operation are limited to linearly separable forms of pattern distribution.
However, they perform a variety of complex mathematical operations when implemented in the form of a network structure. Such networks suffer from limitations in computational efficiency and learning capability; moreover, these models ignore many salient features of biological neurons, such as time delays, cross- and self-correlations, and feedback paths, which are otherwise very important in neural activity. In this thesis an effort is made to develop new mathematical models of neurons that belong to the class of higher-order neural units (HONUs) with higher-order synaptic operations such as quadratic and cubic synaptic operations. The advantage of this type of neural unit is improved performance, but the improvement comes at the cost of an exponential increase in the number of parameters, which slows the training process. In this context, a novel method of representing the weight parameters without sacrificing neural performance is introduced. A generalised representation of the higher-order synaptic operation for these neural structures is proposed, and it is shown that many existing neural structures can be derived from this generalised representation. In 1943, McCulloch and Pitts modeled the stimulus-response behaviour of the primitive neuron using threshold logic, and it has since become common practice to implement logic circuits using neural structures. In this research, logic circuits such as OR, AND, and XOR were realised using the proposed neural structures. These neural structures were also implemented as neuro-controllers for control problems such as satellite attitude control and model reference adaptive control, and a comparative study of their performance against that of conventional linear controllers is presented.
The simulation results obtained in this research apply only to the simplified models presented in the simulation studies.
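
The claim that a single neuron with linear synaptic operation cannot represent XOR, while a higher-order unit can, is easy to illustrate. The following sketch shows a single neuron with quadratic synaptic operation; the weights are hand-picked for illustration and do not reproduce the thesis's training procedure:

```python
def qso_neuron(x1, x2, w1, w2, w12, bias=0.0, theta=0.5):
    """Single neural unit with quadratic synaptic operation: the
    aggregation includes the cross term x1*x2, which a linear
    synaptic operation lacks."""
    s = bias + w1 * x1 + w2 * x2 + w12 * x1 * x2
    return 1 if s >= theta else 0

def xor(x1, x2):
    # Hand-picked weights: s = x1 + x2 - 2*x1*x2, thresholded at 0.5.
    return qso_neuron(x1, x2, w1=1.0, w2=1.0, w12=-2.0)
```

No single linear-synaptic unit can realize this truth table, since XOR is not linearly separable; the quadratic cross term is what makes the single-unit solution possible.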

    Neural units with higher-order synaptic operations with applications to edge detection and control systems

    Biological sense organs hold enormous potential. Artificial neural structures have sought to emulate the capabilities of the central nervous system; however, most researchers have used a linear combination of inputs as the synaptic operation. In this thesis, such a structure is referred to as a neural unit with linear synaptic operation (LSO). The objective of the research reported in this thesis is to develop novel neural units with higher-order synaptic operations (HOSO) and to explore their potential applications. Neural units with quadratic synaptic operation (QSO) and cubic synaptic operation (CSO) are developed and reported in this thesis. A comparative analysis is performed on the neural units with LSO, QSO, and CSO. It is to be noted that neural units with lower-order synaptic operations are subsets of neural units with higher-order synaptic operations. It is found that for more complex problems, neural units with higher-order synaptic operations are much more efficient than those with lower-order synaptic operations. Motivated by the dynamics of biological neural systems, a dynamic neural structure is proposed and implemented using the neural unit with CSO. The dynamic structure makes the system response relatively insensitive to external disturbances and to internal variations in system parameters. With the success of these dynamic structures, researchers are inclined to replace the recurrent (feedback) neural networks (NNs) in their present systems with neural units with CSO. Applications of these novel dynamic neural structures are gaining traction in image processing for machine vision and in motion control. One machine-vision task that emulates a biological capability is edge detection. Edge detection is a significant component in the fields of computer vision, remote sensing and image analysis.
The neural units with HOSO replicate some of the biological attributes of edge detection. Furthermore, developments in robotics are gaining momentum in neural control applications with the introduction of mobile robots, which use neural units with HOSO; a CCD camera provides vision, and several photo-sensors are attached to the machine. In summary, it is demonstrated that neural units with HOSO provide advanced control capability for a mobile robot with neuro-vision and neuro-control systems.
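
As a rough illustration of how a quadratic synaptic operation can act as an edge detector, consider weighting the second-order terms over a 3x3 patch so that the unit's response equals the summed squared difference between the centre pixel and its neighbours. This weight choice is invented here for illustration and is not the configuration used in the thesis:

```python
def qso_edge_response(patch):
    """Response of a quadratic-synaptic unit over a flattened 3x3 patch.
    Each term (c - p_i)**2 expands to c*c - 2*c*p_i + p_i*p_i, i.e. a
    purely second-order function of the inputs, so this is one concrete
    choice of HOSO weights that measures local contrast."""
    p = [v for row in patch for v in row]   # flatten 3x3 -> 9 inputs
    c = p[4]                                # centre pixel
    return sum((c - p[i]) ** 2 for i in range(9) if i != 4)

flat = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]    # uniform region
edge = [[0, 0, 9], [0, 0, 9], [0, 0, 9]]    # vertical intensity edge
```

A uniform patch yields zero response, while a patch straddling an intensity edge yields a large one; thresholding the response over a sliding window would give a crude edge map.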

    Embodied Artificial Intelligence through Distributed Adaptive Control: An Integrated Framework

    In this paper, we argue that the future of Artificial Intelligence research resides in two keywords: integration and embodiment. We support this claim by analyzing the recent advances of the field. Regarding integration, we note that the most impactful recent contributions have been made possible through the integration of recent Machine Learning methods (based in particular on Deep Learning and Recurrent Neural Networks) with more traditional ones (e.g. Monte-Carlo tree search, goal babbling exploration or addressable memory systems). Regarding embodiment, we note that the traditional benchmark tasks (e.g. visual classification or board games) are becoming obsolete as state-of-the-art learning algorithms approach or even surpass human performance in most of them, which has recently encouraged the development of first-person 3D game platforms embedding realistic physics. Building upon this analysis, we first propose an embodied cognitive architecture integrating heterogeneous sub-fields of Artificial Intelligence into a unified framework. We demonstrate the utility of our approach by showing how major contributions of the field can be expressed within the proposed framework. We then claim that benchmarking environments need to reproduce ecologically valid conditions for bootstrapping the acquisition of increasingly complex cognitive skills through the concept of a cognitive arms race between embodied agents. Comment: updated version of the paper accepted to the ICDL-Epirob 2017 conference (Lisbon, Portugal).

    Adaptive, fast walking in a biped robot under neuronal control and learning

    Human walking is a dynamic, partly self-stabilizing process relying on the interaction of the biomechanical design with its neuronal control. The coordination of this process is a very difficult problem, and it has been suggested that it involves a hierarchy of levels, where the lower ones, e.g., interactions between muscles and the spinal cord, are largely autonomous, and where higher-level control (e.g., cortical) arises only pointwise, as needed. This requires an architecture of several nested sensorimotor loops, where the walking process provides feedback signals to the walker's sensory systems, which can be used to coordinate its movements. To complicate the situation, at a maximal walking speed of more than four leg-lengths per second, the cycle period available to coordinate all these loops is rather short. In this study we present a planar biped robot which uses the design principle of nested loops to combine the self-stabilizing properties of its biomechanical design with several levels of neuronal control. Specifically, we show how to adapt control by including online learning mechanisms based on simulated synaptic plasticity. This robot can walk at high speed (> 3.0 leg lengths/s), self-adapting to minor disturbances and reacting robustly to abruptly induced gait changes. At the same time, it can learn to walk on different terrains, requiring only a few learning experiences. This study shows that the tight coupling of physical with neuronal control, guided by sensory feedback from the walking pattern itself and combined with synaptic learning, may be a way forward to better understand and solve coordination problems in other complex motor tasks.
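
The online learning mechanism described in this abstract can be hinted at with a toy correlation-based plasticity rule, in which the weight of an early, predictive input grows with the temporal change of a late reflex signal. This is a deliberately simplified sketch (the rule's form, constants, and the single-trial loop are all invented for illustration), not the robot's actual learning circuit:

```python
def correlation_update(w, x_predictive, reflex_now, reflex_prev, mu=0.05):
    """Toy correlation-based plasticity: the weight of an early,
    predictive input grows with the product of that input and the
    temporal change of a late reflex signal."""
    return w + mu * x_predictive * (reflex_now - reflex_prev)

# As the predictive pathway takes over, the reflex (and hence further
# weight change) dies away, so learning self-terminates.
w = 0.0
for trial in range(50):
    x_pred = 1.0                          # predictive sensor fires early
    reflex = max(0.0, 1.0 - w * x_pred)   # reflex shrinks as w grows
    w = correlation_update(w, x_pred, reflex_now=reflex, reflex_prev=0.0)
```

After a few dozen trials the weight approaches a fixed point at which the reflex is fully pre-empted, which is the qualitative behaviour one wants from self-terminating sensorimotor learning.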

    A Model of Emotion as Patterned Metacontrol

    Adaptive agents use feedback as a key strategy to cope with uncertainty and change in their environments. The information fed back from the sensorimotor loop into the control subsystem can be used to change four different elements of the controller: parameters associated with the control model, the control model itself, the functional organization of the agent, and the functional realization of the agent. There are many change alternatives, and hence the complexity of the agent's space of potential configurations is daunting. The only viable alternative for space- and time-constrained agents (in practical, economical, evolutionary terms) is to achieve a reduction of the dimensionality of this configuration space. Emotions play a critical role in this reduction. The reduction is achieved by functionalization, interface minimization and by patterning, i.e. by selection among a predefined set of organizational configurations. This analysis lets us state how autonomy emerges from the integration of cognitive, emotional and autonomic systems in strict functional terms: autonomy is achieved by the closure of functional dependency. Emotion-based morphofunctional systems are able to exhibit complex adaptation patterns at a reduced cognitive cost. In this article we show a general model of how emotion supports functional adaptation and how emotional biological systems operate following this theoretical model. We also show how this model applies to the construction of a wide spectrum of artificial systems.