19 research outputs found

    On autonomic platform-as-a-service: characterisation and conceptual model

    In this position paper, we envision a Platform-as-a-Service (PaaS) conceptual and architectural solution for large-scale and data-intensive applications. Our architectural approach is based on autonomic principles; its ultimate goal is therefore to reduce human intervention, cost, and perceived complexity by enabling the autonomic platform to manage such applications itself in accordance with high-level policies. Such policies allow the platform to (i) interpret the application specifications; (ii) map those specifications onto the target computing infrastructure, so that the applications are executed and their Quality of Service (QoS), as specified in their SLAs, is enforced; and, most importantly, (iii) adapt such previously established mappings automatically when unexpected behaviours violate what was expected. These adaptations may involve modifications to the arrangement of the computational infrastructure, e.g. re-designing the communication network topology that dictates how computational resources interact, or even live migration to a different computational infrastructure. The ultimate goal is to (de)provision computational machines, storage and networking links, together with their required topologies, in order to supply the application with the virtualised infrastructure that best meets its SLAs. Generic architectural blueprints and principles have been provided for designing and implementing autonomic computing systems. We revisit them in order to provide a customised and specific view for PaaS platforms, and we integrate emerging paradigms such as DevOps for automated deployments, Monitoring as a Service for accurate and large-scale monitoring, and well-known formalisms such as Petri nets for building performance models.
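    The policy-driven behaviour described above follows the monitor-analyse-plan-execute loop typical of autonomic systems. As a minimal sketch only, the snippet below illustrates such a loop: the metric names, thresholds and adaptation actions are invented for the example and are not taken from the paper.

```python
# Minimal sketch of an autonomic adaptation loop for a PaaS platform.
# Policy thresholds, metric names and actions are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Policy:
    metric: str          # QoS metric from the SLA, e.g. response time in ms
    threshold: float     # maximum tolerated value
    action: str          # adaptation to trigger when the SLA is violated

def monitor(app: str) -> dict:
    """Stand-in for a Monitoring-as-a-Service query (values are made up)."""
    return {"response_time_ms": 420.0, "cpu_utilisation": 0.93}

def adapt(app: str, policies: list[Policy]) -> list[str]:
    """Compare observed metrics against SLA policies and plan adaptations."""
    observed = monitor(app)
    plan = []
    for p in policies:
        if observed.get(p.metric, 0.0) > p.threshold:
            plan.append(f"{p.action} for {app} ({p.metric}={observed[p.metric]})")
    return plan

if __name__ == "__main__":
    policies = [
        Policy("response_time_ms", 300.0, "re-provision resources / change network topology"),
        Policy("cpu_utilisation", 0.85, "live-migrate to a larger infrastructure"),
    ]
    for step in adapt("data-intensive-app", policies):
        print(step)
```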

    Construction of data streams applications from functional, non-functional and resource requirements for electric vehicle aggregators: the COSMOS vision

    COSMOS (Computer Science for Complex System Modeling) is a research team whose mission is to bridge the gap between formal methods and real problems. The goal is twofold: (1) better management of the growing complexity of current systems; and (2) high-quality implementations that reduce time to market. The COSMOS vision is to prove this approach on non-trivial industrial problems, leveraging technologies such as software engineering, cloud computing, and workflows. In particular, we are interested in the technological challenges arising from the Electric Vehicle (EV) industry, around the EV-charging and control IT infrastructure.

    A hierarchical one-to-one mapping solution for semantic interoperability

    The importance of interoperability among computer systems has been increasing steadily over recent years. The tendency of current cataloguing systems is to interchange metadata in XML according to the specific standard required by each user on demand. According to the research literature, there are two main approaches to tackling this problem: solutions based on the use of ontologies, and solutions based on the creation of specific crosswalks for one-to-one mapping. This paper proposes a hierarchical one-to-one mapping solution for improving semantic interoperability.
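    As an illustration of the crosswalk-based approach, the hypothetical sketch below maps metadata fields with a schema-specific crosswalk first and falls back to a shared parent crosswalk when no specific rule exists; the schemas, field names and mappings are invented and do not come from the paper.

```python
# Hypothetical sketch of a hierarchical one-to-one crosswalk between metadata
# schemas: look for a schema-specific mapping first, then fall back to a more
# general (parent) crosswalk. Field names and mappings are invented.

PARENT_CROSSWALK = {          # generic rules shared by all target schemas
    "title": "dc:title",
    "author": "dc:creator",
    "date": "dc:date",
}

SPECIFIC_CROSSWALKS = {       # refinements for a particular target schema
    "marc21": {"author": "100$a", "title": "245$a"},
}

def convert(record: dict, target: str) -> dict:
    specific = SPECIFIC_CROSSWALKS.get(target, {})
    out = {}
    for field, value in record.items():
        # hierarchical lookup: specific crosswalk first, parent crosswalk next
        mapped = specific.get(field) or PARENT_CROSSWALK.get(field)
        if mapped:
            out[mapped] = value
    return out

print(convert({"title": "Metadata interoperability", "author": "Doe, J."}, "marc21"))
```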

    El papel del Dublin Core en el desarrollo de las infraestructuras de datos espaciales

    Current trends in characterising geographic information resources for publication through Spatial Data Infrastructures (SDIs) focus on the more traditional kinds of geographic information (maps, coverages, digital terrain models, etc.), using the work of ISO technical committee TC 211 (essentially ISO 19115) as the descriptive framework. Nevertheless, a vast amount of more heterogeneous information and services could still be offered through an SDI. The challenge now is to define a strategy that enables these information sources and services to be incorporated while guaranteeing interoperability between systems, without compromising an adequate, complete and sufficient description or characterisation of the resources. It is in this context that Dublin Core, as a general-purpose metadata standard (ISO 15836), can play a fundamental role, fostering interoperability across different information domains, including geospatial information. The aim of this chapter is to present a model for using the Dublin Core element set and principles as the basis for assigning metadata to all kinds of resources in the context of an SDI, as well as the basic technical decisions that should be taken to support information creation and search services on top of this proposed general model. This work has been partially funded by project TIC2003-09365-C02-01 of the Spanish National Plan for Scientific Research and Technological Development of the Ministry of Education and Science of Spain.
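    To make the element set concrete, the snippet below sketches what a Dublin Core (ISO 15836) description of a geospatial resource might look like; all values, including the identifier, are invented for illustration and are not taken from the chapter.

```python
# Illustrative Dublin Core (ISO 15836) description of a geospatial resource.
# All values are invented; only the element names come from the standard.

record = {
    "dc:title": "Digital terrain model of the Ebro basin",
    "dc:creator": "Example mapping agency",
    "dc:subject": "elevation; terrain; hydrography",
    "dc:description": "25 m resolution DTM derived from aerial photogrammetry.",
    "dc:date": "2005-06-01",
    "dc:type": "Dataset",
    "dc:format": "GeoTIFF",
    "dc:identifier": "https://example.org/ide/dtm-ebro",
    "dc:language": "es",
    "dc:coverage": "Ebro basin, Spain",
    "dc:rights": "Open access for non-commercial use",
}

for element, value in record.items():
    print(f"{element}: {value}")
```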

    Towards an Architecture Proposal for Federation of Distributed DES Simulators

    The simulation of large and complex Discrete Event Systems (DESs) imposes increasingly demanding and urgent requirements on two aspects widely accepted as critical: (1) intensive use of models of the simulated system that can be exploited in every phase of its life cycle where simulation is useful, together with methodologies for these purposes; and (2) adaptation of simulation techniques to HPC infrastructures, as a way to improve simulation efficiency and obtain scalable simulation environments. This paper proposes a Model-Driven Engineering (MDE) approach based on Petri nets (PNs) as the formal model. The approach defines a domain-specific language based on modular PNs from which efficient distributed simulation code is generated automatically. The distributed simulator is built on generic PN simulation engines, each containing a data structure that represents a piece of the net and its simulation state. The simulation engine is called a simbot, and versions of it are available for different platforms. The proposed architecture allows efficient dynamic load balancing of the simulation work, because a PN piece can be moved by transferring only a small number of integers representing the subnet and its state.
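    As a minimal sketch of the idea behind the simbots, the snippet below simulates a tiny Petri net whose entire state is a vector of integers (the marking), which is what makes moving a subnet between engines cheap; the net, names and structure are assumptions for illustration and not the paper's design.

```python
# Minimal sketch of a Petri net simulation engine: the whole simulation state
# is the marking, a small vector of integers, which is why a subnet and its
# state can be moved between engines cheaply. Net and names are illustrative.

import random

class PetriNet:
    def __init__(self, marking, transitions):
        self.marking = list(marking)          # tokens currently in each place
        self.transitions = transitions        # name -> (input arcs, output arcs)

    def enabled(self):
        return [t for t, (ins, _) in self.transitions.items()
                if all(self.marking[p] >= w for p, w in ins)]

    def fire(self, name):
        ins, outs = self.transitions[name]
        for p, w in ins:
            self.marking[p] -= w              # consume input tokens
        for p, w in outs:
            self.marking[p] += w              # produce output tokens

    def step(self):
        choices = self.enabled()
        if not choices:
            return False
        self.fire(random.choice(choices))
        return True

# Two places and one transition moving tokens from place 0 to place 1.
net = PetriNet([3, 0], {"t0": ([(0, 1)], [(1, 1)])})
while net.step():
    pass
print(net.marking)    # ends with all tokens in place 1: [0, 3]
```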

    Message From The MGC 2012 Editors

    [No abstract available]

    Modelling performance & resource management in kubernetes

    Containers are rapidly replacing Virtual Machines (VMs) as the compute instance of choice in cloud-based deployments; the significantly lower overhead of deploying containers (compared to VMs) is often cited as one reason for this. We analyse the performance of the Kubernetes system and develop a Reference net-based model of resource management within it. Our model is characterised using real data from a Kubernetes deployment and can be used as a basis for designing scalable applications that make use of Kubernetes.
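    As a rough illustration of what a resource-management model captures, and explicitly not the Reference net-based model of the paper, the sketch below treats each node's allocatable CPU and memory as a pool consumed by pod requests, with pods left pending when nothing fits; all node names and figures are invented.

```python
# Rough, hypothetical sketch of Kubernetes-style resource management: each
# node exposes allocatable CPU (millicores) and memory (MiB), pods consume
# them by their requests, and pods stay pending when no node has room.
# This is NOT the Reference net-based model developed in the paper.

nodes = {"node-a": {"cpu": 4000, "mem": 8192},
         "node-b": {"cpu": 2000, "mem": 4096}}

pods = [("web", {"cpu": 1500, "mem": 2048}),
        ("db", {"cpu": 2500, "mem": 4096}),
        ("batch", {"cpu": 3000, "mem": 1024})]

placements = {}
for pod, req in pods:
    for name, free in nodes.items():
        if free["cpu"] >= req["cpu"] and free["mem"] >= req["mem"]:
            free["cpu"] -= req["cpu"]       # consume the node's capacity
            free["mem"] -= req["mem"]
            placements[pod] = name
            break
    else:
        placements[pod] = "Pending"         # nothing fits: pod waits

print(placements)
```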

    Adaptive exception handling for scientific workflows

    Scientific workflow systems often operate in highly unreliable, heterogeneous and dynamic environments, and have accordingly incorporated various fault-tolerance techniques. We propose an exception-handling mechanism, based on techniques adopted in programming languages, for modifying the structure of a workflow at run time. In contrast to other proposals that achieve the required flexibility through the infrastructure, ours expresses the exception-handling mechanism within the workflow language, primarily as two exception-handling patterns that are based exclusively on the Reference Nets (nets-within-nets) formalism, a specific type of Petri nets. When an exception is detected, a workflow in our approach can be re-written (replaced) according to the particular failure condition that has been detected. This gives workflow users better control and understanding of the behaviour of their workflow without requiring them to be aware of the underlying infrastructure.
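    The paper expresses its two patterns in Reference Nets; purely as a language-neutral analogy, the sketch below shows the general idea of catching a failure and replacing the remainder of a workflow with an alternative structure selected by the failure condition. The workflow steps and the replacement table are invented for the example.

```python
# Language-neutral analogy only (not Reference Nets): when a step fails, the
# remaining workflow is replaced by an alternative sequence chosen by the kind
# of failure. All step names and the replacement table are invented.

def fetch_data():
    raise TimeoutError("remote service did not respond")

def fetch_cached():
    print("using cached input data")

def analyse():
    print("analysing data")

def publish():
    print("publishing results")

primary  = [fetch_data, analyse, publish]
rewrites = {TimeoutError: [fetch_cached, analyse, publish]}   # replacement workflows

def run(workflow):
    for step in workflow:
        try:
            step()
        except Exception as exc:
            replacement = rewrites.get(type(exc))
            if replacement is None:
                raise                       # unhandled exception: propagate
            print(f"exception '{exc}': re-writing the workflow")
            return run(replacement)         # replace the workflow and continue

run(primary)
```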

    Client-side scheduling based on application characterization on kubernetes

    In container management systems such as Kubernetes, the scheduler has to place containers on physical machines, and it should be aware of the performance degradation caused by co-locating containers that are only weakly isolated. We propose that clients provide a characterization of their applications, allowing the scheduler to evaluate which configuration best handles the workload at a given moment. The default Kubernetes scheduler takes into account only the sum of the resources requested on each machine, which is insufficient to deal with this performance degradation. In this paper, we show that specifying resource limits is not enough to avoid resource contention, and we propose the architecture of a scheduler, based on the clients' application characterization, that avoids it.
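    To make the idea concrete, the hypothetical sketch below scores candidate nodes not only by free resources but also by an interference penalty derived from a client-supplied characterisation of the applications already running there; the profile classes, penalty values and node data are invented, and this is not the scheduler proposed in the paper.

```python
# Hypothetical sketch of characterisation-aware scoring: besides free
# resources, penalise nodes already running applications whose client-supplied
# profile (e.g. "cpu-bound", "io-bound") interferes with the new container.
# Profiles, penalties and node data are invented; not the paper's scheduler.

INTERFERENCE = {                       # penalty when two profiles share a node
    ("cpu-bound", "cpu-bound"): 0.6,
    ("cpu-bound", "io-bound"): 0.1,
    ("io-bound", "io-bound"): 0.5,
}

def penalty(profile, resident_profiles):
    return sum(INTERFERENCE.get(tuple(sorted((profile, r))), 0.0)
               for r in resident_profiles)

def score(node, profile, cpu_request):
    if node["free_cpu"] < cpu_request:
        return float("-inf")           # container does not fit on this node
    # trade free capacity against expected interference with resident apps
    return node["free_cpu"] - 2000 * penalty(profile, node["profiles"])

nodes = {
    "node-a": {"free_cpu": 2500, "profiles": ["cpu-bound"]},
    "node-b": {"free_cpu": 1800, "profiles": ["io-bound"]},
}

best = max(nodes, key=lambda n: score(nodes[n], "cpu-bound", cpu_request=1000))
print("place the new cpu-bound container on", best)   # prefers the io-bound node
```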