32 research outputs found

    Task Consolidation Algorithm for Heterogeneous Cloud Computing

    With recent advances in computer technologies such as network devices, hardware capacity, and software applications, cloud computing has emerged as an important paradigm that provides scalable and dynamic virtual resources to users through the Internet. The energy consumed by modern computer systems, particularly by servers in a cloud, has almost reached an unacceptable level. Moreover, the energy wasted through under-utilization of resources accounts for almost 60% of the energy consumed at peak load. This results in reduced system reliability, extremely large electricity bills, and environmental concerns due to the resulting carbon emissions, so there is a great need to optimize energy consumption. Methods such as memory compression, request discrimination, and task consolidation among virtual machines have been developed to enhance resource utilization. Task consolidation, which maps user service requests to appropriate resources, is addressed here as an optimization problem in a heterogeneous cloud computing environment. The resource allocation problem in cloud computing is NP-complete. This thesis formulates resource allocation as a linear programming problem (LPP) to optimize the energy consumed by computing resources, and uses greedy algorithms to obtain sub-optimal solutions to the task consolidation problem. The performance of the task consolidation algorithm has been simulated with an in-house simulator developed in Matlab. The simulation is carried out with three different arrival patterns, and the results favor our proposed EATC (Energy Aware Task Consolidation) algorithm.
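    The abstract does not give the EATC pseudocode, but the greedy idea it describes, mapping each request to the resource that costs the least extra energy, can be sketched as follows. The function names and the utilisation-proportional power model are illustrative assumptions, not the thesis's actual formulation.

```python
# Hypothetical sketch of a greedy task-consolidation heuristic in the spirit
# of EATC. The linear power model (idle base cost plus utilisation-
# proportional term) is an assumption for illustration only.

def energy_increase(machine_load, task_load, p_idle=0.6, p_peak=1.0):
    """Marginal energy of adding a task to a machine whose power grows
    linearly from p_idle (empty) to p_peak (fully utilised)."""
    def power(u):
        return p_idle + (p_peak - p_idle) * u
    # A machine that wakes from idle also pays the idle base cost.
    before = power(machine_load) if machine_load > 0 else 0.0
    return power(machine_load + task_load) - before

def greedy_consolidate(tasks, n_machines, capacity=1.0):
    """Assign each task (a utilisation fraction) to the machine with the
    least marginal energy. Assumes every task fits on some machine."""
    loads = [0.0] * n_machines
    assignment = []
    for t in tasks:
        candidates = [m for m in range(n_machines) if loads[m] + t <= capacity]
        best = min(candidates, key=lambda m: energy_increase(loads[m], t))
        loads[best] += t
        assignment.append(best)
    return assignment, loads
```

With tasks `[0.5, 0.4, 0.3]` and two unit-capacity machines, the sketch packs the first two tasks onto one machine and only then activates the second, which is the consolidation behaviour that avoids paying two idle base costs.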

    Resilient Service Embedding in IoT Networks

    The Internet of Things (IoT) can support a significant number of services, including those in smart homes and the automation of industries and public utilities. However, the growth of these deployments has posed a significant challenge, especially in terms of how to build them in a highly resilient manner. IoT devices are prone to unpredicted failures and cyber-attacks: they are exposed to various types of damage and unreliable wireless connections, and they have limited transmission power, computing ability, and storage space. Resilience is therefore essential in IoT networks and in the services they support. In this paper, we introduce a new approach to resilience in IoT service embedding based on traffic splitting. Our study assesses the power consumption associated with the embedded services and the data delivery time, and compares the results to recent approaches to resilience, including redundancy and replication. We constructed an optimization model whose goal is to determine the optimum physical resources, IoT links and devices, to be used to embed the IoT virtual topology, where the latter is derived from a business process (BP). The embedding process makes use of the service-oriented architecture (SOA) paradigm. The model uses mixed integer linear programming (MILP) with an objective function that minimizes both total power consumption and traffic latency. The optimization results show that both power consumption and data delivery time are reduced when the proposed traffic splitting approach is employed, since it selects energy-efficient nodes and routes in the IoT network and optimizes traffic flow to minimize latency. Our methods achieved up to 35% power savings compared to current methods and reduced the average traffic latency by up to 37%.
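    As a toy illustration of the traffic-splitting idea (not the paper's MILP model, whose variables and constraints are far richer), one can search for the fraction of a demand sent over each of two disjoint paths that minimises a weighted sum of power and delivery time. All path costs below are invented.

```python
# Toy traffic-splitting search: a demand is divided across two disjoint
# paths, and we brute-force the split fraction that minimises a weighted
# power-plus-latency objective. Linear per-unit costs are assumptions.

def evaluate(split, demand, paths, alpha=1.0, beta=1.0):
    """Objective for sending `split` of the demand on paths[0] and the
    rest on paths[1]. Each path is (power_per_unit, seconds_per_unit)."""
    flows = (split * demand, (1 - split) * demand)
    power = sum(f * p for f, (p, _) in zip(flows, paths))
    # Sub-flows travel in parallel; delivery ends when the slower one does.
    latency = max(f * s for f, (_, s) in zip(flows, paths))
    return alpha * power + beta * latency

def best_split(demand, paths, steps=100):
    """Grid search over split fractions in [0, 1]."""
    candidates = [i / steps for i in range(steps + 1)]
    return min(candidates, key=lambda s: evaluate(s, demand, paths))
```

For two paths with complementary costs, e.g. `[(1.0, 2.0), (2.0, 1.0)]`, the optimum is an interior split rather than all-or-nothing routing, which is the intuition behind splitting for both energy and latency.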

    Slowing down for performance and energy: an OS-centric study in network driven workloads

    This paper studies three fundamental aspects of an OS that impact the performance and energy efficiency of network processing: 1) batching, 2) processor energy settings, and 3) the logic and instructions of the OS networking paths. A network device's interrupt delay feature is used to induce batching, and processor frequency is manipulated to control the speed of instruction execution. A bare-metal library OS is used to explore OS path specialization. This study shows how careful use of batching and interrupt delay yields 2X energy and performance improvements across different workloads. Surprisingly, we find that polling can be made energy efficient and can result in gains of up to 11X over baseline Linux. We developed a methodology and a set of tools to collect system data in order to understand how energy is impacted at a fine granularity. This paper identifies a number of other novel findings that have implications for OS design for networked applications and suggests a path forward that makes energy a focal point of systems research. First author draft.
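    A back-of-the-envelope model shows why interrupt-delay batching helps: each interrupt carries a fixed handling cost that a batch amortises over many packets. The microjoule figures below are invented for illustration; the paper measures real hardware.

```python
# Amortisation model for interrupt batching. One interrupt delivers a batch
# of packets; the fixed interrupt-handling cost is spread across the batch.
# Both energy figures are made-up illustrative constants.

def energy_per_packet(batch_size, interrupt_cost_uj=50.0, per_packet_uj=5.0):
    """Average microjoules per packet when one interrupt delivers a batch."""
    return interrupt_cost_uj / batch_size + per_packet_uj
```

With these toy numbers, delivering packets one per interrupt costs 55 uJ each, while a batch of ten drops that to 10 uJ, a 5.5x reduction from amortisation alone, before any frequency scaling is considered.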

    Multiprocessor System-on-Chips based Wireless Sensor Network Energy Optimization

    A Wireless Sensor Network (WSN) is an integral part of the Internet of Things (IoT), used to monitor physical or environmental conditions without human intervention. One of the major challenges in WSNs is reducing energy consumption at both the sensor-node and network levels: high energy consumption not only causes an increased carbon footprint but also limits the lifetime (LT) of the network. Network-on-Chip (NoC) based Multiprocessor System-on-Chips (MPSoCs) are becoming the de facto computing platform for computationally intensive real-time applications in IoT due to their high performance and exceptional quality-of-service. In this thesis, a task scheduling problem is investigated on an MPSoC architecture for tasks with precedence and deadline constraints, with the goal of minimizing processing energy consumption while guaranteeing the timing constraints. Moreover, energy-aware node clustering is performed to reduce the transmission energy consumption of the sensor nodes. Three distinct energy-optimization problems are investigated. First, contention-aware energy-efficient static scheduling on a NoC-based heterogeneous MPSoC is performed for real-time tasks with individual deadline and precedence constraints. An offline meta-heuristic based contention-aware energy-efficient task scheduler is developed that performs task ordering, mapping, and voltage assignment in an integrated manner; compared to state-of-the-art schedulers, our proposed algorithm significantly improves energy efficiency. Second, energy-aware scheduling is investigated for a set of tasks with precedence constraints on Voltage Frequency Island (VFI) based heterogeneous NoC-MPSoCs. A novel population-based algorithm called ARSH-FATI is developed that can dynamically switch between explorative and exploitative search modes at run-time; its performance is superior to existing task schedulers developed for homogeneous VFI-NoC-MPSoCs. Third, the transmission energy consumption of the sensor nodes in the WSN is reduced by developing an ARSH-FATI based Cluster Head Selection (ARSH-FATI-CHS) algorithm integrated with a heuristic called Novel Ranked Based Clustering (NRC). During cluster formation, parameters such as residual energy, distance, and workload on the CHs are considered to improve the network lifetime. The results show that ARSH-FATI-CHS outperforms other state-of-the-art clustering algorithms in terms of LT. University of Derby, Derby, UK.
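    The thesis names residual energy, distance, and CH workload as the cluster-formation criteria. A minimal weighted-fitness sketch of such a selection rule might look as follows; the weights, normalisation, and function names are assumptions, not the actual ARSH-FATI-CHS formulation.

```python
# Illustrative cluster-head (CH) fitness: reward residual energy, penalise
# average distance to cluster members and existing workload. All weights
# are invented; inputs are assumed pre-normalised to [0, 1].

def ch_fitness(node, w_energy=0.5, w_dist=0.3, w_load=0.2):
    """Higher is better: prefer energetic, central, lightly loaded nodes."""
    return (w_energy * node["residual_energy"]
            - w_dist * node["avg_distance"]
            - w_load * node["workload"])

def select_cluster_head(nodes):
    """Pick the candidate node with the highest fitness score."""
    return max(nodes, key=ch_fitness)
```

Rotating the CH role toward high-fitness nodes in each round is what spreads the transmission burden and extends network lifetime.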

    New techniques to model energy-aware I/O architectures based on SSD and hard disk drives

    For years, performance improvements at the computer I/O subsystem and at other subsystems have advanced at their own pace, with smaller improvements at the I/O subsystem, making the overall system speed dependent on the I/O subsystem speed. One of the main factors in this imbalance is the inherent nature of disk drives, which has allowed big advances in disk density but far smaller ones in disk performance. Thus, to improve I/O subsystem performance, disk drives have become an object of study for many researchers, who in some cases must use different kinds of models. Other research studies aim to improve I/O subsystem performance by tuning more abstract I/O levels; since disk drives lie behind those levels, either real disk drives or models need to be used. One of the most common techniques to evaluate the performance of a computer I/O subsystem relies on detailed simulation models that include specific features of storage devices such as disk geometry, zone splitting, caching, read-ahead buffers, and request reordering. However, as soon as a new technological innovation is added, those models need to be reworked to include the new characteristics, which makes it difficult to keep general models up to date. Our alternative is to model a storage device as a black-box probabilistic model, where the storage device itself, its interface, and the interconnection mechanisms are modeled as a single stochastic process, defining the service time as a random variable with an unknown distribution. This approach generates disk service times with less computational power by means of a variate generator included in a simulator, and thereby allows greater scalability in simulation-based I/O subsystem performance evaluations.
    Lately, energy saving in computing systems has become an important need. In mobile computers, battery life is limited, and not wasting energy in certain parts would extend the usable time of the computer. Here again, the computer I/O subsystem stands out as a field of study, because disk drives, a main part of it, are among the most power-consuming elements due to their mechanical nature. In server or enterprise computers, where the number of disks increases considerably, power saving may reduce the cooling requirements for heat dissipation and thus large monetary costs. This dissertation also considers the question of saving energy in the disk drive by taking advantage of diverse devices in hybrid storage systems composed of Solid State Disks (SSDs) and disk drives. SSDs and disk drives offer different power characteristics, with SSDs consuming much less power than disk drives. In this thesis, several techniques that use SSDs as supporting devices for disk drives are proposed. Various options for managing SSD and disk devices in such hybrid systems are examined, and it is shown that the proposed methods save energy and monetary costs in diverse scenarios. A simulator composed of disk and SSD devices was implemented, and this thesis studies the design and evaluation of the proposed approaches with the help of realistic workloads.
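    The black-box model treats the whole device as one stochastic process whose service time is a random variable. A minimal sketch of such a variate generator, here simply resampling from measured service times rather than fitting an analytic distribution as the thesis does, could be:

```python
# Black-box storage model sketch: instead of simulating geometry, caches,
# and request reordering, draw service times from a distribution built
# from measurements. Resampling an empirical trace is a simplification;
# the dissertation fits variate generators to the unknown distribution.

import random

def make_service_time_generator(measured_times_ms, seed=None):
    """Return a zero-argument sampler over observed service times (ms)."""
    rng = random.Random(seed)
    def sample():
        return rng.choice(measured_times_ms)
    return sample
```

Because each call is a single random draw, a simulator using this generator avoids the per-request cost of a detailed device model, which is the source of the scalability gain claimed above.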

    New simulation techniques for energy aware cloud computing environments

    In this thesis we propose a new simulation platform specifically designed for modelling cloud computing environments, their underlying architectures, and the energy consumed by hardware devices. The server models are divided into five basic subsystems: processing, memory, network, storage, and the power supply unit. Each of these subsystems has been built with new strategies to simulate energy awareness. On top of these models, virtualization models are deployed to simulate the hypervisor and its scheduling policies. In addition, the cloud manager, the core of the simulation platform, is responsible for the resource-provisioning management policies. Its design offers APIs that allow researchers to perform studies on the scheduling policies of cloud computing systems. This simulation platform aims to model both existing and new designs of cloud computing architectures, with a customizable environment in which the energy consumption of the different components can be configured. The main characteristics of this platform are flexibility, allowing a wide range of designs; scalability, to study large environments; and a good compromise between accuracy and performance. The simulation platform has been validated by comparing results from real experiments with results from simulations that model those experiments. To evaluate the platform's ability to foresee the energy consumption of a real cloud environment, the deployment of a model of a real application has been studied. Finally, scalability experiments have been performed to study the behaviour of the simulation platform with large-scale environments. The main aim of the scalability tests is to calculate the amount of time and memory needed to execute large simulations, depending on the size of the simulated environment and on the hardware resources available to execute them.
    This work has been partially funded under grant TIN2013-41350-P of the Spanish Ministry of Economics and Competitiveness, the COST Action IC1305 "Network on Sustainable Ultrascale Computing (NESUS)", ESTuDIo (TIN2012-36812-C02-01), SICOMORo-CM (S2013/ICE-3006), the SEPE (Servicio Público de Empleo Estatal), commonly known as INEM, my entire savings, and in part by my parents. Programa Oficial de Doctorado en Ciencia y Tecnología Informática. Committee chair: Félix García Carballeira. Secretary: Jorge Enrique Pérez Martínez. Member: Manuel Núñez García.
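    A per-server energy model of the kind the platform composes from its five subsystems can be sketched as below. The linear models and every coefficient are illustrative assumptions; in the platform itself these values come from the configurable component models.

```python
# Hedged sketch of a composed server power model: total power is the sum
# of the subsystem models, scaled by power-supply conversion losses.
# All coefficients are invented placeholders, not measured values.

def server_power_w(cpu_util, mem_util, net_mbps, disk_iops,
                   psu_overhead=0.10):
    """Watts for one simulated server; inputs are instantaneous loads."""
    cpu = 40.0 + 60.0 * cpu_util        # idle base + utilisation term
    mem = 8.0 + 4.0 * mem_util
    net = 2.0 + 0.01 * net_mbps
    disk = 5.0 + 0.02 * disk_iops
    subtotal = cpu + mem + net + disk
    return subtotal * (1.0 + psu_overhead)  # PSU conversion losses
```

Summing independent, individually configurable subsystem models is what lets a platform like this swap in new hardware profiles without changing the simulator core.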

    DESIGN METHODOLOGIES FOR RELIABLE AND ENERGY-EFFICIENT MULTIPROCESSOR SYSTEM

    Ph.D. (Doctor of Philosophy) thesis.

    Power-aware routing in multi-hop wireless networks
