    On autonomic platform-as-a-service: characterisation and conceptual model

    In this position paper, we envision a Platform-as-a-Service conceptual and architectural solution for large-scale and data-intensive applications. Our architectural approach is based on autonomic principles; its ultimate goal is therefore to reduce human intervention, cost, and perceived complexity by enabling the autonomic platform to manage such applications itself in accordance with high-level policies. Such policies allow the platform to (i) interpret the application specifications; (ii) map the specifications onto the target computing infrastructure, so that the applications are executed and their Quality of Service (QoS), as specified in their SLA, is enforced; and, most importantly, (iii) automatically adapt such previously established mappings when unexpected behaviours violate expectations. Such adaptations may involve modifications in the arrangement of the computational infrastructure, i.e. re-designing the communication network topology that dictates how computational resources interact, or even live-migration to a different computational infrastructure. The ultimate goal is to (de)provision computational machines, storage, and networking links, together with their required topologies, in order to provide the application with the virtualised infrastructure that best meets its SLAs. Generic architectural blueprints and principles have been provided for designing and implementing an autonomic computing system. We revisit them in order to provide a customised and specific view for PaaS platforms, and integrate emerging paradigms such as DevOps for automated deployments, Monitoring as a Service for accurate and large-scale monitoring, and well-known formalisms such as Petri Nets for building performance models.
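
    As a rough illustration of the policy-driven control loop this vision implies, the following minimal Python sketch follows the classic monitor-analyse-plan-execute pattern; the Policy, Monitor, and Executor names and the latency metric are invented for the example, not taken from the paper.

    ```python
    # Minimal sketch of an autonomic (MAPE-K-style) control loop for a PaaS
    # platform. All names (Policy, Monitor, Executor) and the metric used are
    # illustrative assumptions, not the paper's actual implementation.
    from dataclasses import dataclass

    @dataclass
    class Policy:
        metric: str        # e.g. "p95_latency_ms", taken from the SLA
        threshold: float   # QoS target the platform must enforce
        action: str        # e.g. "scale_out", "migrate"

    def control_loop(policies, monitor, executor):
        """One iteration: monitor metrics, detect SLA violations, adapt mapping."""
        metrics = monitor.sample()                     # Monitor
        for p in policies:                             # Analyse
            if metrics.get(p.metric, 0.0) > p.threshold:
                plan = executor.plan(p.action)         # Plan: e.g. new topology
                executor.apply(plan)                   # Execute: (de)provision

    class Monitor:
        def sample(self):
            # Stub: a real platform would query its monitoring service here.
            return {"p95_latency_ms": 420.0}

    class Executor:
        def plan(self, action):
            return {"action": action, "delta_vms": +1}
        def apply(self, plan):
            print("adapting mapping:", plan)

    control_loop([Policy("p95_latency_ms", 300.0, "scale_out")],
                 Monitor(), Executor())
    ```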

    Model-driven development of data intensive applications over cloud resources

    The proliferation of sensors over recent years has generated large amounts of raw data, forming data streams that need to be processed. In many cases, cloud resources are used for such processing, exploiting their flexibility, but these sensor streaming applications often need to support operational and control actions with real-time and low-latency requirements that go beyond the cost-effective and flexible solutions supported by existing cloud frameworks, such as Apache Kafka, Apache Spark Streaming, or Map-Reduce streams. In this paper, we describe a model-driven, stepwise-refinement methodological approach for streaming applications executed over clouds. The central role is assigned to a set of Petri Net models for specifying functional and non-functional requirements. They support model reuse, and a way to combine formal analysis, simulation, and approximate computation of minimal and maximal bounds of non-functional requirements when the problem is either mathematically or computationally intractable. We show how our proposal can assist developers in their design and implementation decisions from a performance perspective. The methodology is intended for all stages of the engineering process and supports performance analysis: we can (i) analyse how an application can be mapped onto cloud resources, and (ii) obtain key performance indicators, including throughput and economic cost, so that developers are assisted in their development tasks and in their decision making. In order to illustrate our approach, we make use of the pipelined wavefront array.
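
    The bottleneck analysis below is a deliberately simplified stand-in for the paper's Petri Net models: it computes only an upper throughput bound for a pipeline from per-stage service rates. The stage rates and replica counts are invented for the example.

    ```python
    # Simplified illustration of bounding pipeline throughput, in the spirit of
    # the min/max non-functional bounds discussed above. Rates are invented;
    # the paper itself derives such bounds from Petri Net models.

    def throughput_bounds(stage_rates, replicas):
        """Upper bound: the slowest stage (bottleneck) limits the pipeline.
        stage_rates: items/s one replica of each stage can process.
        replicas:    number of parallel replicas provisioned per stage."""
        effective = [r * n for r, n in zip(stage_rates, replicas)]
        return min(effective)  # items/s the whole pipeline can sustain

    # Three stages (ingest, transform, sink): 100, 40, 120 items/s per replica.
    print(throughput_bounds([100.0, 40.0, 120.0], [1, 2, 1]))  # -> 80.0
    ```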

    Modelling social aspects of human behaviour to improve smart-grid simulations

    Demand response is receiving growing interest as a new form of flexibility within renewable energy systems. It is a new paradigm of electrical grid management that consists of managing and controlling demand. Energy models are an important tool for assessing the potential capacity of demand-side contributions. Current activity-based models explicitly assume a unidirectional causal relationship between activities and devices. However, there is a clear lack of data on which to base this assumption, with the risk of incorrectly simulated device profiles. There is a need to collect and understand data describing the relationship between activities and the energy use of household appliances, and how this varies within and between households. To date, the focus has been placed mainly on technological issues, downplaying the fact that adopting this technology ultimately requires interaction with human beings. Moreover, demand response systems are not yet widely deployed, so models are used to run simulations. In this project, the socio-technical assumptions underpinning energy demand models will be modelled, together with comfort profiles, so that the degree of adoption of demand response, and of satisfaction with respect to thermal comfort, can be simulated.
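
    To make the criticised assumption concrete, the toy sketch below hard-codes the unidirectional activity-to-appliance mapping that current activity-based models rely on; the activities, appliances, and wattages are invented.

    ```python
    # Toy illustration of the unidirectional activity -> appliance mapping that
    # activity-based energy models assume (and that this project questions).
    ACTIVITY_TO_APPLIANCES = {
        "cooking":  [("oven", 2000), ("hood", 150)],
        "laundry":  [("washing_machine", 800)],
        "watch_tv": [("tv", 120)],
    }

    def household_load(active_activities):
        """Aggregate demand (W) implied by the activities under way."""
        return sum(watts
                   for activity in active_activities
                   for _, watts in ACTIVITY_TO_APPLIANCES.get(activity, []))

    print(household_load({"cooking", "watch_tv"}))  # -> 2270
    ```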

    Development of a scientific workflow system allowing flexible execution of tasks on a computer cluster

    Scientific workflows have emerged as a technology that provides computational support for carrying out scientific experiments. On the one hand, a workflow can be seen as an abstract specification of a set of tasks and the dependencies between them. These dependencies establish which steps must be carried out to conduct a scientific experiment. On the other hand, a workflow can be seen as a program, and a workflow management system as a specialised programming environment whose goal is to simplify the programming tasks that scientists have to perform. Scientific workflow management systems must manage computational resources efficiently, handle failures, supervise intermediate and final results, and guarantee full reproducibility of the experiment. The usual approach for executing a workflow in different execution environments, such as clusters, grids, or clouds, consists of translating an abstract workflow specification into a concrete one, taking data and resources into account. However, these approaches are usually tied to specific solutions, such as generating a DAG (directed acyclic graph) and executing it in a high-throughput computing environment such as HTCondor. As a result, monitoring, failure handling, and resource management are tied to the execution environment rather than to aspects of the original abstract specification. The goal of this project is to develop a prototype scientific workflow system that allows resource management and failure recovery policies to be defined at the application level. To this end, it provides a workflow specification that is independent of the execution environment and that offers fault-tolerance mechanisms for handling application failures (e.g. exception handling). The specification is written using Petri Nets and Renew, and its design takes into account that it must be executable on any infrastructure: a Condor cluster, containers, or even microservices.
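
    As a hint of what an application-level fault-tolerance policy can look like independently of the backend, here is a minimal Python sketch; the retry/backoff policy is an invented example, not the project's actual Petri Net and Renew specification.

    ```python
    # Minimal sketch of an application-level fault-tolerance policy for a
    # workflow task, independent of the execution backend (cluster, containers,
    # microservices). Policy parameters are illustrative assumptions.
    import time

    def run_with_policy(task, max_retries=3, backoff_s=1.0):
        """Retry a failing task at the application level instead of
        delegating failure handling to the execution environment."""
        for attempt in range(1, max_retries + 1):
            try:
                return task()
            except Exception:                    # application-level exception handling
                if attempt == max_retries:
                    raise                        # policy exhausted: propagate
                time.sleep(backoff_s * attempt)  # linear backoff before retrying

    # Usage: any callable task, regardless of where it actually executes.
    print(run_with_policy(lambda: 21 * 2))  # -> 42
    ```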

    Feedback-control & queueing theory-based resource management for streaming applications

    Recent advances in sensor technologies and instrumentation have led to an extraordinary growth of data sources and streaming applications. A wide variety of devices, from smart phones to dedicated sensors, have the capability of collecting and streaming large amounts of data at unprecedented rates. A number of distinct streaming data models have been proposed. Typical applications include smart cities and built environments, for instance, where sensor-based infrastructures continue to increase in scale and variety. Understanding how such streaming content can be processed within some time threshold remains a non-trivial and important research topic. We investigate how a cloud-based computational infrastructure can autonomically respond to such streaming content, offering Quality of Service guarantees. We propose an autonomic controller, based on feedback control and queueing theory, to elastically provision virtual machines to meet performance targets associated with a particular data stream. Evaluation is carried out using a federated cloud-based infrastructure (implemented using CometCloud), where the allocation of new resources can be based on: (i) differences between sites, i.e. the types of resources supported (e.g. GPU vs. CPU only); (ii) cost of execution; (iii) failure rate and likely resilience, etc. In particular, we demonstrate how Little's Law, a widely used result in queueing theory, can be adapted to support dynamic control in the context of such resource provisioning.
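
    A minimal sketch of how Little's Law (L = λW) can drive a provisioning decision is shown below; the rates, queue length, and latency target are invented, and the paper's actual controller also incorporates feedback control.

    ```python
    # Minimal sketch of a Little's-Law-based provisioning rule, in the spirit
    # of the autonomic controller above. All numbers are invented examples.
    import math

    def vms_needed(arrival_rate, per_vm_rate, queue_len, latency_target_s):
        """Little's Law: L = lambda * W, so observed latency W ~= L / lambda.
        Scale out when the implied latency exceeds the target."""
        implied_latency = queue_len / arrival_rate if arrival_rate > 0 else 0.0
        base = math.ceil(arrival_rate / per_vm_rate)   # keep up with arrivals
        if implied_latency > latency_target_s:         # drain the backlog too
            backlog_rate = queue_len / latency_target_s
            base = math.ceil((arrival_rate + backlog_rate) / per_vm_rate)
        return base

    # 50 items/s arriving, 12 items/s per VM, 600 items queued, 5 s target.
    print(vms_needed(50.0, 12.0, 600, 5.0))  # -> 15
    ```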

    Enforcing QoS in scientific workflow systems enacted over Cloud infrastructures

    The ability to support Quality of Service (QoS) constraints is an important requirement in some scientific applications. With the increasing use of Cloud computing infrastructures, where access to resources is shared, dynamic, and provisioned on demand, identifying how QoS constraints can be supported becomes an important challenge. However, access to dedicated resources is often not possible in existing Cloud deployments, and only limited QoS guarantees are provided by many commercial providers (often restricted to error rate and availability, rather than particular QoS metrics such as latency or access time). We propose a workflow system architecture which enforces QoS for the simultaneous execution of multiple scientific workflows over a shared infrastructure (such as a Cloud environment). Our approach involves multiple pipeline workflow instances, with each instance having its own QoS requirements. These workflows are composed of a number of stages, with each stage being mapped to one or more physical resources. A stage involves a combination of data access, computation, and data transfer capability. A token-bucket-based data throttling framework is embedded into the workflow system architecture. Each workflow instance stage regulates the amount of data that is injected into the shared resources, allowing for bursts of data to be injected while at the same time providing isolation of workflow streams. We demonstrate our approach by using the Montage workflow, and develop a Reference net model of the workflow.
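
    The following minimal Python sketch illustrates the token bucket mechanism underlying the throttling framework: data can be injected in bursts up to the bucket capacity, but the sustained rate is capped by the refill rate. The concrete rates are invented for the example.

    ```python
    # Minimal token bucket sketch: each workflow stage may inject data only
    # while it holds tokens, which refill at a sustained rate but allow short
    # bursts up to the bucket capacity. Parameters are invented examples.
    import time

    class TokenBucket:
        def __init__(self, rate, capacity):
            self.rate = rate            # tokens (e.g. MB) added per second
            self.capacity = capacity    # maximum burst size
            self.tokens = capacity
            self.last = time.monotonic()

        def try_send(self, amount):
            """Consume tokens for `amount` units of data; False means throttle."""
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if amount <= self.tokens:
                self.tokens -= amount
                return True
            return False

    bucket = TokenBucket(rate=10.0, capacity=50.0)  # 10 MB/s sustained, 50 MB burst
    print(bucket.try_send(40))  # True: within the burst allowance
    print(bucket.try_send(40))  # False: bucket nearly empty, stage is throttled
    ```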

    A Specification Language for Performance and Economical Analysis of Short Term Data Intensive Energy Management Services

    Requirements of Energy Management Services include short- and long-term processing of data in a massively interconnected scenario. The complexity and variety of short-term applications call for methodologies that allow designers to reason about the models, taking into account functional and non-functional requirements. In this paper we present a component-based specification language for building trustworthy continuous dataflow applications. Component behaviour is defined by Petri Nets, in order to bring to the methodology all the advantages derived from a mathematically based executable model supporting analysis, verification, simulation, and performance evaluation. The paper illustrates how to model and reason with specifications of advanced dataflow abstractions such as smart grids.
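
    The toy Petri net interpreter below hints at why an executable, mathematically based model supports simulation and analysis: markings and transition firings are ordinary data that can be inspected and explored. The places and transitions are invented, not taken from the paper's specification language.

    ```python
    # Tiny executable Petri net sketch. Marking: tokens per place; transitions
    # consume tokens from input places and produce tokens in output places.
    marking = {"raw_readings": 3, "validated": 0, "aggregated": 0}
    transitions = {
        "validate":  ({"raw_readings": 1}, {"validated": 1}),
        "aggregate": ({"validated": 2},    {"aggregated": 1}),
    }

    def enabled(name):
        pre, _ = transitions[name]
        return all(marking[p] >= n for p, n in pre.items())

    def fire(name):
        pre, post = transitions[name]
        for p, n in pre.items():
            marking[p] -= n
        for p, n in post.items():
            marking[p] += n

    while enabled("validate"):
        fire("validate")
    if enabled("aggregate"):
        fire("aggregate")
    print(marking)  # -> {'raw_readings': 0, 'validated': 1, 'aggregated': 1}
    ```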

    Modelling human behaviour for demand response in smart electrical grids using multi-agent systems

    People's lives have changed since the arrival of electrical energy, which brought comfort, convenience, and improvements in quality of life to households. But it has the drawback of being a resource that cannot be stored. For this reason, electrical energy is nowadays produced to follow variations in user demand, whose estimate always oscillates between a minimum and a maximum. Electrical grids cannot exceed that maximum energy consumption, and we are already very close to it. To avoid that situation, what is known as demand response has emerged, whose objective is to control users' energy consumption and adapt it to the availability of electrical energy. Existing demand response models, conceived from a technical point of view, show certain shortcomings in modelling user behaviour, failing to reflect how its adoption may affect users' routines; they are therefore not an entirely realistic approximation. This final-year project proposes the design of a system that does take into account the social and behavioural aspects ignored in current models, using multi-agent systems and fuzzy logic to represent users and their practical reasoning, with the aim of offering a more objective vision, closer to reality. In addition, a proof of concept is carried out that exemplifies how certain factors can influence users' decisions and their interaction with demand response.
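
    The sketch below gives a flavour of the fuzzy practical reasoning attributed to agents: a household accepts a load-curtailment request only when the offered incentive outweighs its fuzzy thermal discomfort. The membership function and decision rule are invented for the example.

    ```python
    # Minimal fuzzy-logic sketch of an agent deciding whether to accept a
    # demand-response request. Membership function and weights are invented.

    def too_cold(temp_c, comfort_c=21.0, span=4.0):
        """Fuzzy degree (0..1) to which the home feels too cold."""
        return min(1.0, max(0.0, (comfort_c - temp_c) / span))

    def accepts_curtailment(temp_c, incentive):
        """Agent accepts shifting its heating load when the fuzzy discomfort
        is outweighed by the (normalised, 0..1) incentive offered."""
        discomfort = too_cold(temp_c)
        return incentive * (1.0 - discomfort) > discomfort

    print(accepts_curtailment(20.5, incentive=0.6))  # mild discomfort -> True
    print(accepts_curtailment(17.0, incentive=0.6))  # home already cold -> False
    ```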

    Construction of data streams applications from functional, non-functional and resource requirements for electric vehicle aggregators. The COSMOS vision

    COSMOS (Computer Science for Complex System Modeling) is a research team whose mission is to bridge the gap between formal methods and real problems. The goal is twofold: (1) better management of the growing complexity of current systems; (2) a high-quality implementation that reduces time to market. The COSMOS vision is to prove this approach on non-trivial industrial problems, leveraging technologies such as software engineering, cloud computing, and workflows. In particular, we are interested in the technological challenges arising from the Electric Vehicle (EV) industry, around the EV-charging and control IT infrastructure.