
    Dynamic reconfiguration of functional brain networks during working memory training


    Contract Aware Components, 10 years after

    The notion of contract aware components was published roughly ten years ago and is now becoming mainstream in several fields where the use of software components is seen as critical. The goal of this paper is to survey domains such as Embedded Systems or Service Oriented Architecture where the notion of contract aware components has been influential. For each of these domains we briefly describe what has been done with this idea and we discuss the remaining challenges. Comment: In Proceedings WCSI 2010, arXiv:1010.233
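
    A contract-aware component exposes explicit pre- and postconditions alongside its interface, so that an assembly can check at composition or run time that the component is used as intended. As a rough illustration of that design-by-contract idea, here is a minimal Python sketch; the contract decorator and the BoundedBuffer component are invented for this example and are not taken from the surveyed domains.

        # Minimal design-by-contract sketch (hypothetical; not code from the survey).
        # A component operation advertises a precondition and a postcondition,
        # and a small wrapper enforces both at call time.

        def contract(pre=None, post=None):
            """Attach a precondition and a postcondition to a component operation."""
            def decorate(op):
                def checked(self, *args, **kwargs):
                    if pre is not None:
                        assert pre(self, *args, **kwargs), f"precondition of {op.__name__} violated"
                    result = op(self, *args, **kwargs)
                    if post is not None:
                        assert post(self, result), f"postcondition of {op.__name__} violated"
                    return result
                return checked
            return decorate

        class BoundedBuffer:
            """Toy component whose contract bounds its occupancy."""
            def __init__(self, capacity):
                self.capacity = capacity
                self.items = []

            @contract(pre=lambda self, item: len(self.items) < self.capacity,
                      post=lambda self, _result: len(self.items) <= self.capacity)
            def put(self, item):
                self.items.append(item)

        buf = BoundedBuffer(capacity=2)
        buf.put("a")
        buf.put("b")
        # buf.put("c") would raise AssertionError: the precondition rejects the call.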

    Session Types with Runtime Adaptation: Overview and Examples

    In recent work, we have developed a session types discipline for a calculus that features the usual constructs for session establishment and communication, but also two novel constructs that enable communicating processes to be stopped, duplicated, or discarded at runtime. The aim is to understand whether known techniques for the static analysis of structured communications scale up to the challenging setting of context-aware, adaptable distributed systems, in which disciplined interaction and runtime adaptation are intertwined concerns. In this short note, we summarize the main features of our session-typed framework with runtime adaptation and recall its basic correctness properties. We illustrate our framework by means of examples; in particular, we present a session representation of supervision trees, a mechanism for building fault-tolerant applications in the Erlang language. Comment: In Proceedings PLACES 2013, arXiv:1312.221
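
    Supervision trees organize processes so that a supervisor stops, discards, or restarts failing children, which is the kind of runtime behaviour the stop/duplicate/discard constructs above are meant to capture in the calculus. The following is a hypothetical, thread-based approximation in Python; it is neither the paper's calculus nor Erlang's actual supervisor API, and the Supervisor class and flaky_worker function are invented for illustration.

        # Toy one-for-one supervisor (hypothetical sketch). A failed child run is
        # discarded and a fresh copy is started in its place, up to a restart budget.
        import threading
        import random

        class Supervisor:
            def __init__(self, child_factories, max_restarts=3):
                self.child_factories = child_factories   # name -> callable child body
                self.max_restarts = max_restarts

            def _run_child(self, name, body):
                restarts = 0
                while restarts <= self.max_restarts:
                    try:
                        body()                           # run the child to completion
                        return
                    except Exception as exc:
                        restarts += 1                    # discard failed run, start a duplicate
                        print(f"child {name} failed ({exc!r}); restart {restarts}")
                print(f"child {name} exceeded its restart budget; giving up")

            def start(self):
                threads = [threading.Thread(target=self._run_child, args=(name, body))
                           for name, body in self.child_factories.items()]
                for t in threads:
                    t.start()
                for t in threads:
                    t.join()

        def flaky_worker():
            if random.random() < 0.5:                    # fail roughly half the time
                raise RuntimeError("transient fault")

        Supervisor({"worker": flaky_worker}).start()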

    Adaptable processes

    We propose the concept of adaptable processes as a way of overcoming the limitations that process calculi have for describing patterns of dynamic process evolution. Such patterns rely on direct ways of controlling the behavior and location of running processes, and so they are at the heart of the adaptation capabilities present in many modern concurrent systems. Adaptable processes have a location and are subject to actions of dynamic update at runtime; this allows us to express a wide range of evolvability patterns for concurrent processes. We introduce a core calculus of adaptable processes and propose two verification problems for them: bounded and eventual adaptation. While the former ensures that the number of consecutive erroneous states that can be traversed during a computation is bounded by a given number k, the latter ensures that if the system enters a state with errors then a state without errors will eventually be reached. We study the (un)decidability of these two problems in several variants of the calculus, which result from considering dynamic and static topologies of adaptable processes as well as different evolvability patterns. Rather than a specification language, our calculus is intended as a basis for investigating the fundamental properties of evolvable processes and for developing richer languages with evolvability capabilities.
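
    On a finite execution trace the two properties have a simple reading: bounded adaptation requires that no run of consecutive erroneous states exceeds k, and eventual adaptation requires that every erroneous state is eventually followed by an error-free one. The Python sketch below checks these trace-level predicates on an explicit list of error flags; it is a hypothetical illustration of what the properties mean, not the paper's (un)decidability analysis of the calculus.

        # Trace-level reading of the two adaptation properties (hypothetical sketch).
        # A trace is a finite list of booleans; True marks an erroneous state.

        def bounded_adaptation(trace, k):
            """No more than k consecutive erroneous states anywhere in the trace."""
            run = 0
            for erroneous in trace:
                run = run + 1 if erroneous else 0
                if run > k:
                    return False
            return True

        def eventual_adaptation(trace):
            """Every erroneous state is eventually followed by an error-free state."""
            for i, erroneous in enumerate(trace):
                if erroneous and all(later for later in trace[i + 1:]):
                    return False
            return True

        trace = [False, True, True, False, True, False]
        print(bounded_adaptation(trace, k=2))   # True: the longest error run has length 2
        print(eventual_adaptation(trace))       # True: every error is followed by recovery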

    Seven properties of self-organization in the human brain

    The principle of self-organization has acquired a fundamental significance in the newly emerging field of computational philosophy. Self-organizing systems have been described in various domains in science and philosophy, including physics, neuroscience, biology and medicine, ecology, and sociology. While system architectures and their general purpose may depend on domain-specific concepts and definitions, there are (at least) seven key properties of self-organization clearly identified in brain systems: 1) modular connectivity, 2) unsupervised learning, 3) adaptive ability, 4) functional resiliency, 5) functional plasticity, 6) from-local-to-global functional organization, and 7) dynamic system growth. These are defined here in light of insights from neurobiology, cognitive neuroscience, Adaptive Resonance Theory (ART), and physics, to show that self-organization achieves stability and functional plasticity while minimizing structural system complexity. A specific example informed by empirical research is discussed to illustrate how modularity, adaptive learning, and dynamic network growth enable stable yet plastic somatosensory representation for human grip force control. Implications for the design of “strong” artificial intelligence in robotics are brought forward.
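
    Several of the listed properties, in particular unsupervised learning, the stability-plasticity balance, and dynamic system growth, are mechanisms that ART-style networks make concrete. As a loose, hypothetical illustration (not a model from the paper, and a strong simplification of ART), the Python sketch below clusters binary inputs: existing categories are only refined, never erased, and a new category is grown whenever no existing one resonates with the input.

        # Toy ART-1-style unsupervised clusterer (hypothetical, heavily simplified).
        # Illustrates unsupervised learning, stability/plasticity, and dynamic growth.
        import numpy as np

        def art_like(inputs, vigilance=0.6):
            categories = []                               # binary prototypes
            labels = []
            for x in inputs:
                x = np.asarray(x, dtype=bool)
                assigned = None
                order = sorted(range(len(categories)),
                               key=lambda j: -int(np.sum(categories[j] & x)))
                for j in order:
                    match = np.sum(categories[j] & x) / max(int(np.sum(x)), 1)
                    if match >= vigilance:                # resonance: refine the prototype
                        categories[j] = categories[j] & x
                        assigned = j
                        break
                if assigned is None:                      # no resonance: grow a new category
                    categories.append(x.copy())
                    assigned = len(categories) - 1
                labels.append(assigned)
            return categories, labels

        data = [[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1], [0, 1, 1, 1]]
        prototypes, labels = art_like(data)
        print(labels)   # [0, 0, 1, 1]: two categories formed without supervision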

    Aspects of Assembly and Cascaded Aspects of Assembly: Logical and Temporal Properties

    Highly dynamic computing environments, like ubiquitous and pervasive computing environments, require frequent adaptation of applications. This has to be done in a timely fashion: the adaptation process must be as fast as possible and kept under control. Moreover, the adaptation process has to ensure a consistent result when it finishes, even though the adaptations to be implemented cannot be anticipated at design time. In this paper we present our mechanism for self-adaptation, called Aspects of Assembly (AAs), which is based on the aspect-oriented programming paradigm. Using AAs: (1) the adaptation process is fast and its duration is controlled; (2) adaptation entities are independent of each other thanks to the weaver's logical merging mechanism; and (3) the high variability of the software infrastructure can be managed using a mono- or multi-cycle weaving approach. Comment: 14 pages, published in International Journal of Computer Science, Volume 8, issue 4, Jul 2011, ISSN 1694-081
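
    In aspect-oriented self-adaptation, each adaptation is written as an independent aspect whose advice the weaver merges around the join points of the running application. The Python sketch below is a deliberately small, hypothetical weaver, not the Aspects of Assembly implementation; the Weaver class, join-point names, and the two example aspects are invented for illustration.

        # Minimal aspect-weaving sketch (hypothetical; not the AA weaver). Independent
        # aspects register advice for named join points; the weaver merges all advice
        # around the base behaviour in a single pass.
        from collections import defaultdict

        class Weaver:
            def __init__(self):
                self.advice = defaultdict(list)          # join point -> list of wrappers

            def add_aspect(self, joinpoint, wrapper):
                """Register advice independently of any other aspect."""
                self.advice[joinpoint].append(wrapper)

            def weave(self, joinpoint, base):
                """Merge all registered advice around the base behaviour."""
                woven = base
                for wrapper in self.advice[joinpoint]:
                    woven = wrapper(woven)
                return woven

        # Base behaviour of the running application.
        def render(msg):
            return f"display: {msg}"

        # Two independently written adaptation aspects.
        uppercase_aspect = lambda nxt: (lambda msg: nxt(msg.upper()))
        battery_saver = lambda nxt: (lambda msg: nxt(msg) + " [dimmed]")

        weaver = Weaver()
        weaver.add_aspect("render", uppercase_aspect)
        weaver.add_aspect("render", battery_saver)

        adapted_render = weaver.weave("render", render)
        print(adapted_render("hello"))                   # display: HELLO [dimmed]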

    A morphospace of functional configuration to assess configural breadth based on brain functional networks

    The best approach to quantify human brain functional reconfigurations in response to varying cognitive demands remains an unresolved topic in network neuroscience. We propose that such functional reconfigurations may be categorized into three different types: i) Network Configural Breadth, ii) Task-to-Task transitional reconfiguration, and iii) Within-Task reconfiguration. In order to quantify these reconfigurations, we propose a mesoscopic framework focused on functional networks (FNs) or communities. To do so, we introduce a 2D network morphospace that relies on two novel mesoscopic metrics, Trapping Efficiency (TE) and Exit Entropy (EE), which capture the topology and integration of information within and between a reference set of FNs. In this study, we use this framework to quantify Network Configural Breadth across different tasks. We show that the metrics defining this morphospace can differentiate FNs, cognitive tasks, and subjects. We also show that network configural breadth significantly predicts behavioral measures, such as episodic memory, verbal episodic memory, fluid intelligence, and general intelligence. In essence, we put forth a framework to explore the cognitive space in a comprehensive manner, for each individual separately, and at different levels of granularity. This tool can also quantify the FN reconfigurations that result from the brain switching between mental states. Comment: main article: 24 pages, 8 figures, 2 tables. supporting information: 11 pages, 5 figure
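
    The abstract does not spell out how Trapping Efficiency (TE) and Exit Entropy (EE) are computed, so the sketch below is only one plausible random-walk reading invented for illustration and should not be taken as the paper's definitions: TE as the average per-step probability that a walker starting inside a functional network stays inside it, and EE as the Shannon entropy over which other community the walker exits into.

        # Hypothetical random-walk reading of community-level metrics (invented here;
        # the paper's TE and EE are defined in the full text, not reproduced in this sketch).
        import numpy as np

        def community_metrics(adjacency, membership, community):
            A = np.asarray(adjacency, dtype=float)
            P = A / A.sum(axis=1, keepdims=True)         # row-stochastic transition matrix
            inside = [i for i, c in enumerate(membership) if c == community]

            # "Trapping efficiency": mean probability of staying inside the community.
            te = float(np.mean([P[i, inside].sum() for i in inside]))

            # "Exit entropy": entropy over which community a walker exits into.
            exit_mass = {}
            for i in inside:
                for j, p in enumerate(P[i]):
                    if membership[j] != community and p > 0:
                        exit_mass[membership[j]] = exit_mass.get(membership[j], 0.0) + p
            total = sum(exit_mass.values())
            probs = np.array([m / total for m in exit_mass.values()]) if total else np.array([1.0])
            ee = float((probs * np.log2(1.0 / probs)).sum())
            return te, ee

        # Two 3-node communities joined by a single edge (node 2 -- node 3).
        A = [[0, 1, 1, 0, 0, 0],
             [1, 0, 1, 0, 0, 0],
             [1, 1, 0, 1, 0, 0],
             [0, 0, 1, 0, 1, 1],
             [0, 0, 0, 1, 0, 1],
             [0, 0, 0, 1, 1, 0]]
        membership = [0, 0, 0, 1, 1, 1]
        print(community_metrics(A, membership, community=0))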