55,273 research outputs found

    Structure Learning of a Behavior Network for Context Dependent Adaptability

    One mechanism by which an intelligent agent can adapt to substantial environmental changes is to change its decision-making structure. Previous work in this area developed a context-dependent behavior selection architecture that uses structure change, i.e., changing the mutual inhibition structures of a behavior network, as the main mechanism to generate different behavior patterns according to different behavioral contexts. Given the importance of network structure, this work investigates how the structure of a behavior network can be learned. We developed a structure learning method based on a genetic algorithm (GA) and applied it to a model crayfish that must survive in a simulated environment. The model crayfish is controlled by a mutual inhibition behavior network whose structures are learned using the GA-based algorithm for different environment configurations. The results show that it is possible to learn robust and consistent network structures that allow intelligent agents to behave adaptively in a particular environment.
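
    To make the GA-based structure learning concrete, here is a minimal sketch under stated assumptions: the genome is a flattened binary inhibition matrix, and the fitness function is only a placeholder for the crayfish survival simulation used in the paper. None of the names or numbers below come from the original work.

```python
# Minimal sketch (not the paper's implementation): a genetic algorithm that
# evolves the mutual-inhibition structure of a small behavior network.
import random

N_BEHAVIORS = 5          # hypothetical behaviors, e.g. forage, flee, hide, explore, rest
POP_SIZE = 30
GENERATIONS = 50
MUT_RATE = 0.05

def random_structure():
    # Genome: flattened binary matrix; entry (i, j) = 1 means behavior i inhibits j.
    return [random.randint(0, 1) for _ in range(N_BEHAVIORS * N_BEHAVIORS)]

def fitness(genome):
    # Placeholder objective rewarding sparse, one-way inhibition; in the paper
    # this would be the survival performance of the model crayfish.
    score = 0.0
    for i in range(N_BEHAVIORS):
        for j in range(N_BEHAVIORS):
            if i == j:
                continue
            a = genome[i * N_BEHAVIORS + j]
            b = genome[j * N_BEHAVIORS + i]
            score += 1.0 if a != b else -0.2
    return score

def crossover(p1, p2):
    point = random.randrange(1, len(p1))
    return p1[:point] + p2[point:]

def mutate(genome):
    return [1 - g if random.random() < MUT_RATE else g for g in genome]

population = [random_structure() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    elite = population[: POP_SIZE // 2]
    children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                for _ in range(POP_SIZE - len(elite))]
    population = elite + children

best = max(population, key=fitness)
for i in range(N_BEHAVIORS):
    print(best[i * N_BEHAVIORS:(i + 1) * N_BEHAVIORS])
```
    Swapping the placeholder fitness for a behavioral simulation score is the step that would turn this toy loop into the setup the abstract describes.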

    Practopoiesis: Or how life fosters a mind

    The mind is a biological phenomenon. Thus, biological principles of organization should also be the principles underlying mental operations. Practopoiesis states that the key to achieving intelligence through adaptation is an arrangement in which mechanisms lying at a lower level of organization, by their operations and interaction with the environment, enable the creation of mechanisms lying at a higher level of organization. When such an organizational advance of a system occurs, it is called a traverse. One case of a traverse is when plasticity mechanisms (at a lower level of organization), by their operations, create a neural network anatomy (at a higher level of organization). Another is the actual production of behavior by that network, whereby the mechanisms of neuronal activity operate to create motor actions. Practopoietic theory explains why the adaptability of a system increases with each increase in the number of traverses. With a larger number of traverses, a system can be relatively small and yet produce a higher degree of adaptive/intelligent behavior than a system with a lower number of traverses. The present analyses indicate that the two well-known traverses, neural plasticity and neural activity, are not sufficient to explain human mental capabilities. At least one additional traverse is needed, which is named anapoiesis for its contribution to reconstructing knowledge, e.g., from long-term memory into working memory. The conclusions bear implications for brain theory, the mind-body explanatory gap, and the development of artificial intelligence technologies.
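
    Purely as an illustration of the two traverses named above (every rule, number, and variable here is invented rather than taken from the paper), the sketch below lets a plasticity rule at a lower level of organization create network weights, i.e., anatomy, and then lets the activity of that created network produce behavior.

```python
# Illustrative sketch only: two "traverses" rendered as code.
import numpy as np

rng = np.random.default_rng(0)

# Traverse 1: plasticity mechanisms, operating on experience, create anatomy.
inputs = rng.standard_normal((100, 4))               # stream of sensory samples
targets = (inputs @ np.array([1.0, -1.0, 0.5, 0.0]) > 0).astype(float)
weights = np.zeros(4)
for x, t in zip(inputs, targets):                    # simple error-driven rule
    y = 1.0 / (1.0 + np.exp(-weights @ x))
    weights += 0.1 * (t - y) * x                     # plasticity shapes the network

# Traverse 2: the created network's activity produces behavior (motor output).
def behave(stimulus):
    activation = 1.0 / (1.0 + np.exp(-weights @ stimulus))
    return "approach" if activation > 0.5 else "withdraw"

print("learned weights:", np.round(weights, 2))
print("behavior for a novel stimulus:", behave(np.array([0.8, -0.3, 0.2, 0.1])))
```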

    Identifying and addressing adaptability and information system requirements for tactical management


    Dynamic reconfiguration of human brain networks during learning

    Human learning is a complex phenomenon requiring flexibility to adapt existing brain function and precision in selecting new neurophysiological activities to drive desired behavior. These two attributes, flexibility and selection, must operate over multiple temporal scales as performance of a skill changes from being slow and challenging to being fast and automatic. Such selective adaptability is naturally provided by modular structure, which plays a critical role in evolution, development, and optimal network function. Using functional connectivity measurements of brain activity acquired from initial training through mastery of a simple motor skill, we explore the role of modularity in human learning by identifying dynamic changes of modular organization spanning multiple temporal scales. Our results indicate that flexibility, which we measure by the allegiance of nodes to modules, in one experimental session predicts the relative amount of learning in a future session. We also develop a general statistical framework for the identification of modular architectures in evolving systems, which is broadly applicable to disciplines where network adaptability is crucial to the understanding of system performance.
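
    A minimal sketch of the flexibility measure mentioned above, assuming it is computed as the fraction of consecutive time windows in which a node changes its module assignment; the module labels below are toy data rather than the output of the paper's multilayer community detection.

```python
# Toy computation of node flexibility from module assignments over time.
import numpy as np

# assignments[t, n] = community label of node n in time window t (invented data)
assignments = np.array([
    [0, 0, 1, 1, 2],
    [0, 1, 1, 1, 2],
    [0, 1, 1, 2, 2],
    [1, 1, 1, 2, 0],
])

def node_flexibility(assignments):
    """Fraction of adjacent-window transitions in which each node switches module."""
    changes = assignments[1:] != assignments[:-1]    # shape (T-1, N), boolean
    return changes.mean(axis=0)                      # per-node flexibility in [0, 1]

flex = node_flexibility(assignments)
for n, f in enumerate(flex):
    print(f"node {n}: flexibility = {f:.2f}")
print("mean flexibility:", round(float(flex.mean()), 2))
```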

    Improving the adaptability of simulated evolutionary swarm robots in dynamically changing environments

    One of the important challenges in the field of evolutionary robotics is the development of systems that can adapt to a changing environment. However, the ability to adapt to unknown and fluctuating environments is not straightforward. Here, we explore the adaptive potential of simulated swarm robots that contain a genomic encoding of a bio-inspired gene regulatory network (GRN). An artificial genome is combined with a flexible agent-based system, representing the activated part of the regulatory network that transduces environmental cues into phenotypic behaviour. Using an artificial life simulation framework that mimics a dynamically changing environment, we show that separating the static from the conditionally active part of the network contributes to better adaptive behaviour. Furthermore, in contrast with most ANN-based systems developed to date, which need to re-optimize their complete controller network from scratch each time they are subjected to novel conditions, our system uses its genome to store GRNs whose performance was optimized under a particular environmental condition for a sufficiently long time. When subjected to a new environment, the previous condition-specific GRN may become inactivated, but it remains present. This ability to store 'good behaviour' and to disconnect it from the novel rewiring that is essential under a new condition allows faster re-adaptation if any of the previously observed environmental conditions is re-encountered. As we show here, applying these evolution-based principles leads to accelerated and improved adaptive evolution in a non-stable environment.
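
    The store-and-reactivate idea can be sketched as follows; the class, the names, and the stand-in optimizer are assumptions made for illustration, not the authors' implementation.

```python
# Sketch: a genome keeps condition-specific controller parameters dormant
# instead of overwriting them, so re-adaptation is fast when a previously
# seen environment returns.
import random

class Genome:
    def __init__(self):
        self.stored = {}                              # condition -> dormant controller params

    def controller_for(self, condition):
        if condition in self.stored:
            return dict(self.stored[condition])       # reactivate the stored network
        return {f"w{i}": random.uniform(-1, 1) for i in range(4)}  # fresh start otherwise

    def store(self, condition, params):
        self.stored[condition] = dict(params)         # keep it even when the environment changes

def optimize(params, steps=100):
    # Stand-in for evolutionary optimization under one environmental condition.
    for _ in range(steps):
        k = random.choice(list(params))
        params[k] += random.gauss(0, 0.01)
    return params

genome = Genome()
for condition in ["dry", "wet", "dry"]:               # the environment fluctuates; "dry" recurs
    params = optimize(genome.controller_for(condition))
    genome.store(condition, params)
    print(condition, "->", {k: round(v, 2) for k, v in params.items()})
```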

    EC-CENTRIC: An Energy- and Context-Centric Perspective on IoT Systems and Protocol Design

    The radio transceiver of an IoT device is often where most of the energy is consumed. For this reason, most research so far has focused on low-power circuits and energy-efficient physical layer designs, with the goal of reducing the average energy per information bit required for communication. While these efforts are valuable per se, their actual effectiveness can be partially neutralized by ill-designed network, processing and resource management solutions, which can become a primary factor of performance degradation, in terms of throughput, responsiveness and energy efficiency. The objective of this paper is to describe an energy-centric and context-aware optimization framework that accounts for the energy impact of the fundamental functionalities of an IoT system and that proceeds along three main technical thrusts: 1) balancing signal-dependent processing techniques (compression and feature extraction) against communication tasks; 2) jointly designing channel access and routing protocols to maximize the network lifetime; 3) providing self-adaptability to different operating conditions through the adoption of suitable learning architectures and of flexible/reconfigurable algorithms and protocols. After discussing this framework, we present some preliminary results that validate the effectiveness of our proposed line of action, and show how the use of adaptive signal processing and channel access techniques allows an IoT network to dynamically trade lifetime for signal distortion, according to the requirements dictated by the application.
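
    A toy sketch of the trade-off behind thrust 1): pick the compression setting that minimizes total (radio plus processing) energy while keeping distortion under an application-imposed cap. All constants and the cost model are invented for illustration and are not taken from the paper.

```python
# Toy energy model: compression cuts transmitted bits (saving radio energy)
# at the price of extra CPU energy and some signal distortion.
E_TX_PER_BIT = 50e-9      # J/bit spent by the radio (assumed)
E_CPU_PER_BIT = 5e-9      # J/bit of processing for compression (assumed)
RAW_BITS = 8000           # bits in one uncompressed sensor frame

# (compression ratio, relative distortion) for a few hypothetical settings
LEVELS = [(1.0, 0.00), (0.5, 0.02), (0.25, 0.05), (0.10, 0.12)]

def total_energy(ratio):
    tx = RAW_BITS * ratio * E_TX_PER_BIT
    cpu = RAW_BITS * E_CPU_PER_BIT if ratio < 1.0 else 0.0
    return tx + cpu

def pick_level(max_distortion):
    """Most energy-efficient setting whose distortion stays within the limit."""
    feasible = [(total_energy(r), r, d) for r, d in LEVELS if d <= max_distortion]
    return min(feasible)

for cap in (0.0, 0.05, 0.2):
    energy, ratio, distortion = pick_level(cap)
    print(f"distortion cap {cap:.2f}: ratio={ratio}, "
          f"distortion={distortion}, energy={energy * 1e6:.1f} uJ")
```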

    The role of career adaptability in skills supply


    Dynamical principles in neuroscience

    Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades. Dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) Even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters. These make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of addressing the stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience? This work was supported by NSF Grant No. NSF/EIA-0130708 and Grant No. PHY 0414174; NIH Grant No. 1 R01 NS50945 and Grant No. NS40110; MEC BFI2003-07276; and Fundación BBVA.
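
    As one concrete instance of the dynamical models the review surveys, the sketch below integrates the FitzHugh-Nagumo neuron, a classic two-variable system that produces rhythmic spiking under constant drive. The parameters are standard textbook values and are not taken from this review.

```python
# FitzHugh-Nagumo neuron integrated with forward Euler; counts rhythmic spikes.
import numpy as np

def fitzhugh_nagumo(I_ext=0.5, a=0.7, b=0.8, tau=12.5, dt=0.01, T=200.0):
    steps = int(T / dt)
    v, w = -1.0, 1.0                      # fast (voltage-like) and slow (recovery) variables
    trace = np.empty(steps)
    for k in range(steps):
        dv = v - v ** 3 / 3 - w + I_ext
        dw = (v + a - b * w) / tau
        v += dt * dv
        w += dt * dw
        trace[k] = v
    return trace

trace = fitzhugh_nagumo()
# Count upward crossings of a threshold as a crude measure of rhythmic spiking.
spikes = int(np.sum((trace[:-1] < 1.0) & (trace[1:] >= 1.0)))
print(f"spikes in 200 time units: {spikes}")
```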

    Team Learning: A Theoretical Integration and Review

    With the increasing emphasis on work teams as the primary architecture of organizational structure, scholars have begun to focus attention on team learning, the processes that support it, and the important outcomes that depend on it. Although the literature addressing learning in teams is broad, it is also messy and fraught with conceptual confusion. This chapter presents a theoretical integration and review. The goal is to organize theory and research on team learning, identify actionable frameworks and findings, and emphasize promising targets for future research. We emphasize three theoretical foci in our examination of team learning, treating it as multilevel (individual and team, not individual or team), dynamic (iterative and progressive; a process, not an outcome), and emergent (outcomes of team learning can manifest in different ways over time). The integrative theoretical heuristic distinguishes team learning process theories, supporting emergent states, team knowledge representations, and their respective influences on team performance and effectiveness. Promising directions for theory development and research are discussed.