33,074 research outputs found

    Degeneracy: a link between evolvability, robustness and complexity in biological systems

    A full accounting of biological robustness remains elusive, both in terms of the mechanisms by which robustness is achieved and the forces that have caused robustness to grow over evolutionary time. Although its importance to topics such as ecosystem services and resilience is well recognized, the broader relationship between robustness and evolution is only starting to be fully appreciated. A renewed interest in this relationship has been prompted by evidence that mutational robustness can play a positive role in the discovery of adaptive innovations (evolvability) and by evidence of an intimate relationship between robustness and complexity in biology. This paper offers a new perspective on the mechanics of evolution and the origins of complexity, robustness, and evolvability. Here we explore the hypothesis that degeneracy, a partial overlap in the functioning of multi-functional components, plays a central role in the evolution and robustness of complex forms. In support of this hypothesis, we present evidence that degeneracy is a fundamental source of robustness, that it is intimately tied to multi-scaled complexity, and that it establishes conditions necessary for system evolvability.
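
    The hypothesis lends itself to a small illustration. The toy model below is our own sketch, not the paper's: components are represented as the sets of functions they can perform, and robustness is measured as tolerance to single-component knockouts. It shows that degenerate components, which overlap only partially, can secure every required function without resorting to exact duplication.

```python
# Toy model (not from the paper): components are sets of functions;
# robustness is the fraction of single-component knockouts the system survives.
import itertools

def covered(components, required):
    """True if the remaining components' functions jointly cover `required`."""
    return required <= set(itertools.chain.from_iterable(components))

def knockout_robustness(components, required):
    """Fraction of single-component removals after which every required
    function is still covered by the remaining components."""
    survivals = sum(
        covered(components[:i] + components[i + 1:], required)
        for i in range(len(components))
    )
    return survivals / len(components)

REQUIRED = {"A", "B", "C"}

# Pure redundancy: exact copies of one multi-functional component.
redundant = [{"A", "B", "C"}, {"A", "B", "C"}]
# Degeneracy: structurally distinct components with partial functional overlap.
degenerate = [{"A", "B"}, {"B", "C"}, {"C", "A"}]

print(knockout_robustness(redundant, REQUIRED))   # 1.0
print(knockout_robustness(degenerate, REQUIRED))  # 1.0, with no exact duplicates
```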

    Aspects of Assembly and Cascaded Aspects of Assembly: Logical and Temporal Properties

    Highly dynamic computing environments, like ubiquitous and pervasive computing environments, require frequent adaptation of applications. This has to be done in a timely fashion: the adaptation process must be as fast as possible and its duration kept under control. Moreover, the adaptation process has to ensure a consistent result when finished, even though the adaptations to be implemented cannot be anticipated at design time. In this paper we present our mechanism for self-adaptation based on the aspect-oriented programming paradigm, called Aspects of Assembly (AAs). Using AAs: (1) the adaptation process is fast and its duration is controlled; (2) adaptation entities are independent of one another thanks to the weaver's logical merging mechanism; and (3) the high variability of the software infrastructure can be managed using a mono- or multi-cycle weaving approach.
    Comment: 14 pages; published in the International Journal of Computer Science, Volume 8, Issue 4, Jul 2011, ISSN 1694-081
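
    A minimal sketch may help picture the mechanism. It is our illustration only; the Aspect and Weaver classes and the example aspects below are invented, not the authors' API. Independent adaptation aspects advise a named assembly point, and the weaver logically merges all matching advice within a single weaving cycle.

```python
# Hypothetical sketch of aspect-style adaptation: aspects registered
# independently are logically merged by a weaver at a named pointcut.
from typing import Callable, List

class Aspect:
    def __init__(self, name: str, pointcut: str, advice: Callable[[str], str]):
        self.name, self.pointcut, self.advice = name, pointcut, advice

class Weaver:
    def __init__(self):
        self.aspects: List[Aspect] = []

    def add(self, aspect: Aspect) -> None:
        # Aspects are registered independently; they need not know each other.
        self.aspects.append(aspect)

    def weave(self, pointcut: str, message: str) -> str:
        # Logical merge: every aspect matching the pointcut is applied in
        # registration order within a single weaving cycle.
        for aspect in self.aspects:
            if aspect.pointcut == pointcut:
                message = aspect.advice(message)
        return message

weaver = Weaver()
weaver.add(Aspect("upper", "port.out", str.upper))
weaver.add(Aspect("tag", "port.out", lambda m: f"[adapted] {m}"))
print(weaver.weave("port.out", "hello"))  # "[adapted] HELLO"
```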

    Models of everywhere revisited: a technological perspective

    The concept of ‘models of everywhere’ was first introduced in the mid-2000s as a means of reasoning about the environmental science of a place. It changes the nature of the underlying modelling process from one in which general model structures are used to one in which modelling becomes a learning process about specific places, in particular capturing the idiosyncrasies of those places. At one level this is a straightforward concept, but at another it is a rich, multi-dimensional conceptual framework involving the following key dimensions: models of everywhere, models of everything, and models at all times, constantly re-evaluated against the most current evidence. This is a compelling approach with the potential to deal with epistemic uncertainties and nonlinearities. However, the approach has not yet been fully utilised or explored. This paper examines the concept of models of everywhere in the light of recent advances in technology. The paper argues that, when the concept was first proposed, technology was a limiting factor, but now, with advances in areas such as the Internet of Things, cloud computing and data analytics, many of the barriers have been removed. Consequently, it is timely to revisit the concept of models of everywhere under practical conditions, as part of a trans-disciplinary effort to tackle the remaining research questions. The paper concludes by identifying the key elements of a research agenda that should underpin such experimentation and deployment.
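
    As a rough illustration of the core loop, the hypothetical sketch below (candidate models, places and data are all invented) keeps one model per place and re-scores every candidate against each new local observation, so that the idiosyncrasies of a place select the model that represents it.

```python
# Hypothetical "models of everywhere" loop: per-place model selection,
# constantly re-evaluated against the most current local evidence.
from collections import defaultdict
from statistics import mean

# Candidate model structures (deliberately simple): map rainfall to runoff.
CANDIDATES = {
    "linear_0.3": lambda rain: 0.3 * rain,
    "linear_0.6": lambda rain: 0.6 * rain,
}

errors = defaultdict(lambda: defaultdict(list))  # place -> model -> abs errors

def assimilate(place: str, rain: float, observed_runoff: float) -> str:
    """Score every candidate against the newest local evidence and return
    the model currently best suited to this particular place."""
    for name, model in CANDIDATES.items():
        errors[place][name].append(abs(model(rain) - observed_runoff))
    return min(errors[place], key=lambda n: mean(errors[place][n]))

# A wet upland site behaves like the 0.6 model rather than the 0.3 one.
for rain, runoff in [(10, 6.1), (20, 11.8), (5, 3.2)]:
    best = assimilate("upland", rain, runoff)
print(best)  # "linear_0.6": the idiosyncrasies of the place select the model
```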

    Towards critical event monitoring, detection and prediction for self-adaptive future Internet applications

    The Future Internet (FI) will be composed of a multitude of diverse types of services that offer flexible, remote access to software features, content, computing resources, and middleware solutions through different cloud delivery models such as IaaS, PaaS and SaaS. Ultimately, this means that loosely coupled Internet services will form a comprehensive base for developing value-added applications in an agile way. Unlike traditional application development, which uses computing resources and software components under local administrative control, FI applications will thus strongly depend on third-party services. To maintain their quality of service, these applications therefore need to adapt dynamically and autonomously to an unprecedented level of change that may occur at runtime. In this paper, we present our recent experiences with monitoring, detection, and prediction of critical events for both software services and multimedia applications. Based on these findings, we outline potential directions for future research on self-adaptive FI applications.
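
    The monitoring-detection-prediction loop can be sketched in a few lines. The sketch below is ours, not the paper's system: the response-time metric, its threshold, and the trend-based predictor are illustrative assumptions.

```python
# Invented sketch: detect a critical event when a QoS metric crosses its
# threshold, and predict one by extrapolating the trend of recent samples.
from collections import deque

THRESHOLD_MS = 200.0   # maximum tolerable response time
WINDOW = 5             # samples used for trend prediction

samples = deque(maxlen=WINDOW)

def predict_next(values) -> float:
    """Naive one-step-ahead prediction: last value plus average increment."""
    vals = list(values)
    if len(vals) < 2:
        return vals[-1]
    increments = [b - a for a, b in zip(vals, vals[1:])]
    return vals[-1] + sum(increments) / len(increments)

def adapt(reason: str) -> None:
    print(f"adaptation triggered: {reason}")  # e.g. switch to a backup service

def on_sample(response_time_ms: float) -> None:
    samples.append(response_time_ms)
    if response_time_ms > THRESHOLD_MS:
        adapt("critical event detected")
    elif predict_next(samples) > THRESHOLD_MS:
        adapt("critical event predicted")

for rt in [120, 140, 165, 185, 195]:  # rising trend, still under threshold
    on_sample(rt)
```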

    A New Approach for Quality Management in Pervasive Computing Environments

    This paper provides an extension of MDA called Context-aware Quality Model Driven Architecture (CQ-MDA), which can be used for quality control in pervasive computing environments. The proposed CQ-MDA approach, based on ContextualArchRQMM (Contextual ARCHitecture Quality Requirement MetaModel) and itself an extension of MDA, allows quality and resource awareness to be taken into account throughout the design process. The contributions of this paper are a meta-model for architecture quality control of context-aware applications and a model-driven approach that separates architecture concerns from context and quality concerns and configures reconfigurable software architectures of distributed systems. To demonstrate the utility of our approach, we use a videoconference system.
    Comment: 10 pages, 10 figures; oral presentation at ECSA 201
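
    To make the separation of concerns concrete, here is a hypothetical sketch; all class and attribute names are invented and are not the ContextualArchRQMM metamodel. Architecture, context and quality are modeled separately, and a reconfiguration step combines them, loosely echoing the paper's videoconference example.

```python
# Invented sketch of CQ-MDA's separation of concerns: architecture,
# context, and quality requirements modeled apart, combined at reconfiguration.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class QualityRequirement:          # quality concern
    metric: str
    maximum: float

@dataclass
class Context:                     # context concern (e.g. available resources)
    resources: Dict[str, float]

@dataclass
class Component:                   # architecture concern
    name: str
    demands: Dict[str, float]      # resource demand per metric

@dataclass
class Architecture:
    components: List[Component] = field(default_factory=list)

    def reconfigure(self, ctx: Context, req: QualityRequirement) -> "Architecture":
        """Keep only components whose demand fits both the context and the
        quality requirement."""
        budget = min(req.maximum, ctx.resources.get(req.metric, 0.0))
        kept = [c for c in self.components
                if c.demands.get(req.metric, 0.0) <= budget]
        return Architecture(kept)

arch = Architecture([Component("hd_video", {"bandwidth": 8.0}),
                     Component("sd_video", {"bandwidth": 1.5})])
mobile = Context(resources={"bandwidth": 2.0})
req = QualityRequirement(metric="bandwidth", maximum=4.0)
print([c.name for c in arch.reconfigure(mobile, req).components])  # ['sd_video']
```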

    Degeneracy: a design principle for achieving robustness and evolvability

    Robustness, the insensitivity of some of a biological system's functionalities to a set of distinct conditions, is intimately linked to fitness. Recent studies suggest that it may also play a vital role in enabling the evolution of species. Increasing robustness, it is proposed, can lead to the emergence of evolvability if evolution proceeds over a neutral network that extends far throughout the fitness landscape. Here, we show that the design principles used to achieve robustness dramatically influence whether robustness leads to evolvability. In simulation experiments, we find that purely redundant systems have remarkably low evolvability, while degenerate, i.e. partially redundant, systems tend to be orders of magnitude more evolvable. Surprisingly, the magnitude of the observed variation in evolvability can be explained neither by differences in the size nor by differences in the topology of the neutral networks. This suggests that degeneracy, a ubiquitous characteristic of biological systems, may be an important enabler of natural evolution. More generally, our study provides valuable new clues about the origin of innovations in complex adaptive systems.
    Comment: Accepted in the Journal of Theoretical Biology (Nov 2009)
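
    A toy genotype-phenotype model, ours rather than the paper's simulation setup, conveys the comparison: with a purely redundant allele repertoire the neutral network is tiny and single mutations reach almost nothing new, whereas a degenerate repertoire yields both a larger neutral network and many more novel phenotypes.

```python
# Toy comparison (invented): genotypes are triples of alleles, each allele
# contributes a set of functions, and the phenotype is their union.
from itertools import product

def evolvability(table, wild_type):
    """Count neutral genotypes expressing `wild_type`, and the distinct new
    phenotypes reachable from them by a single point mutation."""
    alleles = list(table)
    def phenotype(genotype):
        return frozenset().union(*(table[a] for a in genotype))
    neutral = [g for g in product(alleles, repeat=3)
               if phenotype(g) == wild_type]
    novel = set()
    for g in neutral:
        for locus in range(3):
            for a in alleles:                      # single point mutation
                p = phenotype(g[:locus] + (a,) + g[locus + 1:])
                if p != wild_type:
                    novel.add(p)
    return len(neutral), len(novel)

WILD = frozenset("ABC")

# Purely redundant repertoire: one allele does everything, one is novel.
redundant = {"P": set("ABC"), "N": set("D")}
# Degenerate repertoire: alleles with partially overlapping functions.
degenerate = {"p": set("AB"), "q": set("BC"), "r": set("CA"), "n": set("D")}

print(evolvability(redundant, WILD))   # (1, 1): tiny neutral net, one novelty
print(evolvability(degenerate, WILD))  # (24, 7): larger net, far more novelty
```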

    Adaptive planning for distributed systems using goal accomplishment tracking

    Goal accomplishment tracking is the process of monitoring the progress of a task or series of tasks towards completing a goal. It is used to monitor goal progress in a variety of domains, including workflow processing, teleoperation and industrial manufacturing. In practice, it involves the constant monitoring of task execution, analysis of this data to determine task progress, and notification of interested parties. This information is usually used passively, to observe goal progress; however, responding to it may prevent goal failures, and responding proactively and opportunistically can lead to goals being completed faster. This paper proposes an architecture that supports the adaptive planning of tasks, for fault tolerance or opportunistic task execution, based on goal accomplishment tracking. It argues that dramatically increased performance can be gained by monitoring task execution and altering plans dynamically.
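
    The proposed use of tracking data can be sketched as follows; the task structure, the rate-based completion estimate, and the fallback mechanism are our own illustrative assumptions, not the paper's architecture.

```python
# Invented sketch: goal accomplishment tracking driving adaptive planning.
# A task predicted to miss its deadline is replaced by a fallback (fault
# tolerance); early completion is exploited opportunistically.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    progress: float      # fraction complete, 0.0 - 1.0
    elapsed: float       # seconds spent so far
    deadline: float      # seconds allowed
    fallback: str = ""   # cheaper alternative task, if any

def eta(task: Task) -> float:
    """Projected total duration assuming the observed rate continues."""
    rate = task.progress / task.elapsed if task.elapsed else 0.0
    return task.elapsed + (1.0 - task.progress) / rate if rate else float("inf")

def track(plan):
    for i, task in enumerate(plan):
        if task.progress >= 1.0 and task.elapsed < task.deadline:
            print(f"{task.name}: done early, starting {plan[i + 1].name} now"
                  if i + 1 < len(plan) else f"{task.name}: done early")
        elif eta(task) > task.deadline and task.fallback:
            print(f"{task.name}: predicted to miss deadline, "
                  f"switching to fallback '{task.fallback}'")

plan = [Task("render_hq", 0.25, 30.0, 60.0, fallback="render_lq"),
        Task("upload", 1.0, 10.0, 30.0),
        Task("notify", 0.0, 0.0, 5.0)]
track(plan)
```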

    INDEMICS: An Interactive High-Performance Computing Framework for Data Intensive Epidemic Modeling

    We describe the design and prototype implementation of Indemics (Interactive Epidemic Simulation), a modeling environment that uses high-performance computing technologies to support complex epidemic simulations. Indemics supports policy analysts and epidemiologists interested in the planning and control of pandemics. It goes beyond traditional epidemic simulations by providing a simple and powerful way to represent and analyze policy-based as well as individual-based adaptive interventions. Users can also stop the simulation at any point, assess the state of the simulated system, and add further interventions. Indemics is available to end users via a web-based interface. Detailed performance analysis shows that Indemics greatly enhances the capability and productivity of simulating complex intervention strategies with only a marginal decrease in performance. We also demonstrate how Indemics was applied in real case studies where complex interventions were implemented.
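
    The interaction style the abstract describes, pausing a running simulation to inspect state and inject an intervention, can be sketched with a deliberately simple discrete SIR model; the update rule and all parameter values below are invented for illustration and are not Indemics' actual model.

```python
# Invented sketch: an epidemic simulation paused mid-run so an analyst can
# inspect the state and apply an adaptive intervention before resuming.

def step(s, i, r, beta, gamma, n):
    """One discrete SIR update: transmission rate beta, recovery rate gamma."""
    new_inf = beta * s * i / n
    new_rec = gamma * i
    return s - new_inf, i + new_inf - new_rec, r + new_rec

s, i, r = 9900.0, 100.0, 0.0
n = s + i + r
beta, gamma = 0.5, 0.2

for day in range(1, 31):
    s, i, r = step(s, i, r, beta, gamma, n)
    if day == 10:
        # Interactive intervention: pause, inspect the state, then apply a
        # distancing policy that halves transmission for the rest of the run.
        print(f"day {day}: infected={i:.0f}, intervening")
        beta *= 0.5
print(f"day 30: infected={i:.0f}, recovered={r:.0f}")
```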