18 research outputs found

    Ant Colony Optimization Algorithm to Dynamic Energy Management in Cloud Data Center

    With the wide deployment of cloud computing data centers, the problem of power consumption has become increasingly prominent. This work investigates dynamic energy management for energy efficiency in cloud data centers. Specifically, a dynamic energy management system model for cloud data centers is built, composed of a DVS Management Module, a Load Balancing Module, and a Task Scheduling Module. Within the Task Scheduling Module, the scheduling process is analyzed with a Stochastic Petri Net, and a task-oriented resource allocation method (LET-ACO) is proposed, which optimizes the system's running time and energy consumption through task scheduling. Simulation studies confirm the effectiveness of the proposed system model. The results also show that, compared with the ACO, Min-Min, and RR scheduling strategies, the proposed LET-ACO method can save up to 28%, 31%, and 40% of energy consumption, respectively, while meeting performance constraints.
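    The abstract does not spell out LET-ACO's update rules, so the sketch below is only a minimal, generic ant-colony-style task-to-machine assignment in Python under assumed inputs: pheromone trails plus an energy heuristic bias each ant's placement choice, and the best assignment found so far reinforces the trails. The task sizes, machine (speed, power) pairs, and parameters such as alpha, beta, and rho are illustrative assumptions, not the paper's values.

```python
import random

# Minimal ant-colony-style scheduler sketch (not the paper's actual LET-ACO).
# tasks[i] = workload size; machines[j] = (speed, power) -- illustrative units.
tasks = [4.0, 2.0, 6.0, 3.0]
machines = [(2.0, 50.0), (1.0, 20.0)]           # (ops/sec, watts), assumed values

def energy_cost(task, machine):
    speed, power = machine
    return (task / speed) * power                # energy = runtime * power

n_t, n_m = len(tasks), len(machines)
pheromone = [[1.0] * n_m for _ in range(n_t)]    # tau[i][j]
alpha, beta, rho, n_ants, n_iters = 1.0, 2.0, 0.1, 10, 50

best_assign, best_energy = None, float("inf")
for _ in range(n_iters):
    for _ant in range(n_ants):
        assign = []
        for i, task in enumerate(tasks):
            # Choice probability ~ pheromone^alpha * (1/energy)^beta
            weights = [pheromone[i][j] ** alpha *
                       (1.0 / energy_cost(task, machines[j])) ** beta
                       for j in range(n_m)]
            assign.append(random.choices(range(n_m), weights=weights)[0])
        total = sum(energy_cost(tasks[i], machines[j]) for i, j in enumerate(assign))
        if total < best_energy:
            best_assign, best_energy = assign, total
    # Evaporate all trails, then reinforce along the best assignment found so far
    for i in range(n_t):
        for j in range(n_m):
            pheromone[i][j] *= (1.0 - rho)
    for i, j in enumerate(best_assign):
        pheromone[i][j] += 1.0 / best_energy

print(best_assign, round(best_energy, 2))
```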

    A manifesto for future generation cloud computing: research directions for the next decade

    The Cloud computing paradigm has revolutionised the computer science horizon during the past decade and has enabled the emergence of computing as the fifth utility. It has captured the significant attention of academia, industry, and government bodies. Now, it has emerged as the backbone of the modern economy by offering subscription-based services anytime, anywhere, following a pay-as-you-go model. This has instigated (1) shorter establishment times for start-ups, (2) creation of scalable global enterprise applications, (3) better cost-to-value associativity for scientific and high performance computing applications, and (4) different invocation/execution models for pervasive and ubiquitous applications. Recent technological developments and paradigms such as serverless computing, software-defined networking, Internet of Things, and processing at the network edge are creating new opportunities for Cloud computing. However, they are also posing several new challenges and creating the need for new approaches and research strategies, as well as the re-evaluation of the models that were developed to address issues such as scalability, elasticity, reliability, security, sustainability, and application models. The proposed manifesto addresses them by identifying the major open challenges in Cloud computing, emerging trends, and impact areas. It then offers research directions for the next decade, thus helping in the realisation of Future Generation Cloud Computing.

    RAIDX: RAID Extended for Heterogeneous Arrays

    The computer hard drive market has diversified with the establishment of solid state disks (SSDs) as an alternative to magnetic hard disks (HDDs). Each technology has its advantages: SSDs are faster than HDDs, but HDDs are cheaper. Our goal is to construct a parallel storage system with HDDs and SSDs such that the parallel system is as fast as the SSDs. Achieving this goal is challenging since the slow HDDs store more data and become bottlenecks while the SSDs remain idle. RAIDX is a parallel storage system designed for disks of different speeds, capacities, and technologies. The RAIDX hardware consists of an array of disks; the RAIDX software consists of data structures and algorithms that allow the disks to be viewed as a single storage unit whose capacity equals the sum of the capacities of its disks, whose failure rate is lower than that of its individual disks, and whose speed is close to that of its faster disks. RAIDX achieves its performance goals with the aid of a novel parallel data organization technique that allows storage data to be moved on the fly without impacting the upper-level file system. We show that storage data accesses satisfy the locality-of-reference principle, whereby only a small fraction of storage data is accessed frequently. RAIDX has a monitoring program that identifies frequently accessed blocks and a migration program that moves frequently accessed blocks to faster disks. The faster disks act as caches that store the solo copy of frequently accessed data. Experimental evaluation has shown that an HDD+SSD RAIDX array is as fast as an all-SSD array when the workload shows locality of reference.
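    As a rough illustration of the monitoring-and-migration idea described above, the sketch below counts per-block accesses and periodically keeps the most frequently accessed blocks on the SSD. It is a toy model under assumed data structures (a Python dict of block locations and a Counter of accesses), not RAIDX's actual data organization, and the capacities and workload are invented.

```python
from collections import Counter

# Illustrative hot-block migration sketch (not RAIDX's actual code).
# Blocks live on "hdd" or "ssd"; the SSD holds the solo copy of hot blocks.
SSD_CAPACITY = 2                       # assumed number of block slots on the SSD
location = {b: "hdd" for b in range(16)}
access_count = Counter()

def read_block(block):
    access_count[block] += 1           # monitoring: track access frequency
    return location[block]             # device that would serve this I/O

def migrate_hot_blocks():
    # migration: keep the most frequently accessed blocks on the faster disk
    hot = {b for b, _ in access_count.most_common(SSD_CAPACITY)}
    for b in location:
        location[b] = "ssd" if b in hot else "hdd"

# Simulated workload with locality of reference: block 3 is accessed often
for b in [3, 3, 7, 3, 1, 3, 3, 9, 3]:
    read_block(b)
migrate_hot_blocks()
print(location[3], location[9])        # hot block 3 -> ssd, cold block 9 -> hdd
```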

    Investigations into Elasticity in Cloud Computing

    The pay-as-you-go model supported by existing cloud infrastructure providers is appealing to most application service providers wishing to deliver their applications in the cloud. Within this context, elasticity of applications has become one of the most important features of cloud computing. Elasticity enables the real-time acquisition and release of compute resources to meet application performance demands. In this thesis we investigate the problem of delivering cost-effective elasticity services for cloud applications. Traditionally, application-level elasticity addresses the question of how to scale applications up and down to meet their performance requirements, but does not adequately address issues relating to minimising the cost of using the service. With this limitation in mind, we propose a scaling approach that uses cost-aware criteria to detect the bottlenecks within multi-tier cloud applications and scales these applications only at their bottleneck tiers, reducing the cost incurred by consuming cloud infrastructure resources. Our approach is generic for a wide class of multi-tier applications, and we demonstrate its effectiveness by studying the behaviour of an example electronic commerce site. Furthermore, we consider the characteristics of the algorithms that implement the business logic of cloud applications and investigate elasticity at the algorithm level: when dealing with large-scale data under resource and time constraints, the algorithm's output should be elastic with respect to the resources consumed. We propose a novel framework to guide the development of elastic algorithms that adapt to the available budget while guaranteeing that the quality of the output, e.g. prediction accuracy for classification tasks, improves monotonically with the budget used. Comment: 211 pages, 27 tables, 75 figures.
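    The thesis's cost-aware criteria are not detailed in this abstract; the sketch below merely illustrates the general idea of scaling only at a bottleneck tier, flagging tiers whose measured utilisation exceeds an assumed threshold and adding an instance to the cheapest such tier. The tier names, utilisation figures, prices, and threshold are all invented for the example and do not come from the thesis.

```python
# Illustrative bottleneck-tier scaling sketch (assumptions, not the thesis's algorithm).
# Each tier: current instance count, measured CPU utilisation, hourly instance price.
tiers = {
    "web": {"instances": 2, "utilisation": 0.55, "price": 0.10},
    "app": {"instances": 2, "utilisation": 0.92, "price": 0.20},
    "db":  {"instances": 1, "utilisation": 0.88, "price": 0.40},
}
SCALE_UP_THRESHOLD = 0.80              # assumed utilisation threshold

def scaling_decision(tiers):
    # Only tiers above the threshold are bottlenecks; pick the cheapest one to scale
    bottlenecks = {name: t for name, t in tiers.items()
                   if t["utilisation"] > SCALE_UP_THRESHOLD}
    if not bottlenecks:
        return None
    return min(bottlenecks, key=lambda name: bottlenecks[name]["price"])

tier = scaling_decision(tiers)
if tier is not None:
    tiers[tier]["instances"] += 1      # add one instance at the bottleneck tier only
print(tier, tiers[tier]["instances"])  # -> app 3
```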

    Wide-Area Situation Awareness based on a Secure Interconnection between Cyber-Physical Control Systems

    Modern critical infrastructures are vast, highly complex systems that rely on information technologies to manage, control, and monitor their operation. Because of their essential functions, the protection and security of critical infrastructures, and therefore of their control systems, has become a priority task for governmental and academic institutions worldwide. The interoperability of critical infrastructures, and especially of their control systems, becomes a key property if these systems are to coordinate and carry out control and security tasks cooperatively. The objective of this thesis is therefore to provide tools for the secure interoperability of different control systems, especially cyber-physical control systems, strengthening the intercommunication and coordination among them so that the various infrastructures can perform cooperative control and security tasks, and building a secure interoperability platform capable of serving multiple critical infrastructures within a wide-area situational awareness environment. To that end, we first review the most sophisticated threats to the operation of critical systems, focusing in particular on stealth cyber-attacks against the control systems of critical infrastructures such as the smart grid. We direct our research towards the analysis and understanding of this new type of attack on critical systems, and towards possible countermeasures and tools to mitigate their effects. Subsequently, we examine and identify the special requirements that constrain the design and operation of a secure interoperability architecture for the control systems, particularly the cyber-physical control systems, of the smart grid. We focus on modelling the non-functional requirements that shape this infrastructure, following the NFR methodology to extract essential requirements, techniques for satisfying them, and metrics for our architectural model. We then study the services needed for the secure interoperability of smart-grid control systems, reviewing security mechanisms in depth, from basic services to advanced procedures capable of coping with sophisticated threats against control systems, such as intrusion detection, protection, and response systems. Our analysis is divided into the areas of prevention, awareness and reaction, and restoration, which together yield a robust security model for the protection of critical systems. We provide the design of an architectural model for the secure interoperability and interconnection of the cyber-physical control systems of the smart grid. This scenario considers the interconnection of a federation of smart-grid energy providers that interact through the secure interoperability platform to manage and control their infrastructures cooperatively. The platform takes into account the inherent characteristics of these systems as well as the new services and technologies that accompany the Industry 4.0 movement. Finally, we present a proof of concept of our architectural model, which helps validate the proposed design through experimentation. We create a set of validation cases that exercise some of the main functionalities offered by the secure interoperability architecture, providing information on its performance and capabilities.

    Cyber-Physical Threat Intelligence for Critical Infrastructures Security

    Modern critical infrastructures comprise many interconnected cyber and physical assets and, as such, are large-scale cyber-physical systems. Hence, the conventional approach of securing these infrastructures by addressing cyber security and physical security separately is no longer effective. Rather, more integrated approaches that address the security of cyber and physical assets at the same time are required. This book presents integrated (i.e. cyber and physical) security approaches and technologies for the critical infrastructures that underpin our societies. Specifically, it introduces advanced techniques for threat detection, risk assessment, and security information sharing, based on leading-edge technologies such as machine learning, security knowledge modelling, IoT security, and distributed ledger infrastructures. Likewise, it presents how established security technologies such as Security Information and Event Management (SIEM), pen-testing, vulnerability assessment, and security data analytics can be used in the context of integrated Critical Infrastructure Protection. The novel methods and techniques of the book are exemplified in case studies involving critical infrastructures in four industrial sectors, namely finance, healthcare, energy, and communications. The peculiarities of critical infrastructure protection in each of these sectors are discussed and addressed through sector-specific solutions. The advent of the fourth industrial revolution (Industry 4.0) is expected to increase the cyber-physical nature of critical infrastructures as well as their interconnection in the scope of sectorial and cross-sector value chains. Therefore, the demand for solutions that foster the interplay between cyber and physical security, and enable Cyber-Physical Threat Intelligence, is likely to explode. In this book, we have shed light on the structure of such integrated security systems, as well as on the technologies that will underpin their operation. We hope that Security and Critical Infrastructure Protection stakeholders will find the book useful when planning their future security strategies.
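    As a toy stand-in for the machine-learning-based threat detection techniques the book covers, the sketch below flags readings from a cyber-physical sensor stream that deviate strongly from a recent baseline using a simple z-score rule. The data, threshold, and rule are invented for illustration and are far simpler than the detection methods described in the book.

```python
import statistics

# Toy anomaly detector for a cyber-physical sensor stream (illustrative only).
baseline = [50.1, 49.8, 50.2, 50.0, 49.9, 50.3, 50.1, 49.7]  # assumed recent readings
mean = statistics.mean(baseline)
stdev = statistics.pstdev(baseline)
Z_THRESHOLD = 3.0                      # assumed alerting threshold

def is_anomalous(reading):
    # Flag readings more than Z_THRESHOLD standard deviations from the baseline mean
    return abs(reading - mean) > Z_THRESHOLD * stdev

for value in [50.2, 49.9, 57.5]:
    if is_anomalous(value):
        print(f"ALERT: reading {value} deviates strongly from the baseline")
```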

    Certifications of Critical Systems – The CECRIS Experience

    In recent years, a considerable amount of effort has been devoted, both in industry and academia, to the development, validation, and verification of critical systems, i.e. those systems whose malfunctions or failures reach a critical level both in terms of risks to human life and in terms of economic impact. Certifications of Critical Systems – The CECRIS Experience documents the main insights on cost-effective verification and validation processes that were gained during work in the European research project CECRIS (an acronym for Certification of Critical Systems). The objective of the research was to tackle the challenges of certification by focusing on those aspects that turn out to be most difficult and important for the current and future critical systems industry: the effective use of methodologies, processes, and tools. The CECRIS project took a step forward in the growing field of development, verification and validation, and certification of critical systems, focusing on the more difficult and important aspects of critical system development, verification and validation, and the certification process. Starting from both the scientific and industrial state-of-the-art methodologies for system development and the impact of their usage on the verification, validation, and certification of critical systems, the project aimed at developing strategies and techniques, supported by automatic or semi-automatic tools and methods, for these activities, setting guidelines to support engineers during the planning of the verification and validation phases.