
    Highly scalable algorithms for scheduling tasks and provisioning machines on heterogeneous computing systems

    As high performance computing systems increase in size, new and more efficient algorithms are needed to schedule work on the machines, understand the performance trade-offs inherent in the system, and determine which machines to provision. The extreme scale of these newer systems requires task scheduling algorithms capable of handling millions of tasks and thousands of machines. A highly scalable scheduling algorithm is developed that computes high quality schedules, especially for large problem sizes. Large-scale computing systems also consume vast amounts of electricity, leading to high operating costs. Through the use of novel resource allocation techniques, system administrators can examine this trade-off space to quantify how much a given performance level will cost in electricity, or see what kind of performance can be expected given an energy budget. Trading off energy and makespan is often difficult for companies because it is unclear how each affects profit. A monetary-based model of high performance computing is presented, and a highly scalable algorithm is developed to quickly find the schedule that maximizes the profit per unit time. As more high performance computing needs are being met with cloud computing, algorithms are needed to determine the types of machines that are best suited to a particular workload. An algorithm is designed to find the best set of computing resources to allocate to the workload, taking into account the uncertainty in task arrival rates, task execution times, and power consumption. Reward rate, cost, failure rate, and power consumption can be optimized, as desired, to trade off these conflicting objectives.
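The abstract does not reproduce the thesis's scheduling algorithm, so as a purely illustrative baseline for the heterogeneous task-mapping setting it describes, the sketch below implements the standard min-completion-time greedy heuristic: each task is assigned to the machine that would finish it earliest according to a machine-specific execution-time matrix. The function names and the example matrix are hypothetical.

```python
# Illustrative baseline: greedy min-completion-time (MCT) mapping of tasks
# onto heterogeneous machines. This is a standard textbook heuristic, not the
# scalable algorithm developed in the thesis.

def mct_schedule(etc):
    """etc[t][m] = estimated time to compute task t on machine m."""
    num_machines = len(etc[0])
    ready_time = [0.0] * num_machines          # when each machine becomes free
    assignment = []                            # task index -> machine index
    for task_times in etc:
        # pick the machine that finishes this task earliest
        best_m = min(range(num_machines),
                     key=lambda m: ready_time[m] + task_times[m])
        ready_time[best_m] += task_times[best_m]
        assignment.append(best_m)
    makespan = max(ready_time)
    return assignment, makespan

if __name__ == "__main__":
    # 4 tasks x 3 heterogeneous machines (arbitrary example times)
    etc = [[3.0, 5.0, 9.0],
           [2.0, 1.0, 4.0],
           [6.0, 8.0, 3.0],
           [4.0, 4.0, 4.0]]
    mapping, makespan = mct_schedule(etc)
    print("mapping:", mapping, "makespan:", makespan)
```

A scan like this costs O(tasks x machines) time per schedule, which is one reason simple greedy heuristics remain attractive at the scales mentioned above.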

    Optimal demand-supply energy management in smart grids

    Everything goes down if you do not have power: the financial sector, refineries, and water supply all depend on it. The grid underlies the rest of the country's critical infrastructure. This thesis focuses on four specific problems aimed at balancing the demand-supply gap while improving the reliability, efficiency, and economical operation of the modern power grid. The first part investigates the economic dispatch problem with uncertain power sources. The classic economic dispatch problem seeks the thermal power generation that meets demand most efficiently. This work incorporates two additional power sources, wind and solar generation, into the standard optimal power flow framework. The stochastic nature of renewable energy sources (RES) is modeled using Weibull and Lognormal probability density functions. The system-wide economic aspect is examined with additional cost functions, namely penalty and reserve costs for under- and overestimating the imbalance of RES power outputs. A carbon tax is also imposed on carbon emissions as a separate objective function to enhance the contribution of green energy. The best power dispatch is then computed from the resulting cost function.

The second part investigates demand-side management (DSM) strategies to minimize energy wastage by changing the time pattern and magnitude of the utility load at the consumer side. The main objective of DSM is to flatten the demand curve by encouraging end-users to shift energy consumption to off-peak hours or to consume less power during peak times. In many cases, however, it is more appropriate for demand to follow the generation pattern rather than to flatten the demand curve; this becomes more challenging as the future grid accommodates greater penetration of distributed energy resources. In both scenarios, there is an ultimate need to control energy consumption. Effective DSM strategies help balance the supply side and the demand side, reducing peak demand and enabling more efficient operation of the whole system. The gap between power demand and supply can be narrowed if peak loads are minimized.

The third part of the thesis focuses on modeling the consumption behavior of end-users. For this purpose, a novel artificial intelligence and machine learning-based forecasting model is developed to analyze big data in the smart grid. Three modules, namely feature selection, feature extraction, and classification, are proposed to address big data problems such as feature redundancy and high dimensionality, generating quality data for classifier training and better prediction results.

The last part of this thesis investigates the problem of electricity theft, with the goal of minimizing non-technical losses and power disruptions in the power grid. Electricity theft, in its many forms, typically costs utilities far more than non-payment because of energy wastage and power quality problems. With the adoption of Internet of Things (IoT) technologies and data-driven approaches, power utilities now have the tools to combat electricity theft and fraud. An integrated framework is proposed that combines three modules, data preprocessing, data class balancing, and final classification, to make accurate predictions of electricity consumption theft in smart grids. The results of our solutions for balancing the electricity demand-supply gap can provide helpful information to grid planners seeking to improve the resilience of the power grid to outages and disturbances. All parts of this thesis include extensive experimental results on case studies, including realistic large-scale instances.
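As a rough, hypothetical illustration of the idea in the first part, where wind uncertainty is modeled with a Weibull distribution and penalty and reserve costs account for under- and overestimation, the sketch below samples Weibull wind speeds, converts them to power with a simple turbine curve, and estimates the expected imbalance cost for a scheduled wind output. The cost coefficients, turbine curve, and parameter values are illustrative assumptions, not values from the thesis.

```python
# Hypothetical sketch: expected imbalance cost for a scheduled wind output,
# with wind speed modelled as a Weibull random variable. All numbers are
# illustrative placeholders, not the thesis's formulation.
import random

def wind_power(v, v_cut_in=3.0, v_rated=12.0, v_cut_out=25.0, p_rated=2.0):
    """Very simple turbine curve (MW) for wind speed v (m/s)."""
    if v < v_cut_in or v >= v_cut_out:
        return 0.0
    if v >= v_rated:
        return p_rated
    return p_rated * (v - v_cut_in) / (v_rated - v_cut_in)

def expected_imbalance_cost(p_scheduled, shape=2.0, scale=9.0,
                            c_reserve=30.0, c_penalty=15.0, samples=20000):
    """Monte Carlo estimate of reserve cost (shortfall) plus penalty cost
    (surplus), in arbitrary $/h units."""
    total = 0.0
    for _ in range(samples):
        v = random.weibullvariate(scale, shape)  # Weibull-distributed wind speed
        p_actual = wind_power(v)
        if p_actual < p_scheduled:               # shortfall -> reserve cost
            total += c_reserve * (p_scheduled - p_actual)
        else:                                    # surplus -> penalty cost
            total += c_penalty * (p_actual - p_scheduled)
    return total / samples

if __name__ == "__main__":
    for p in (0.5, 1.0, 1.5):
        print(f"scheduled {p:.1f} MW -> expected imbalance cost "
              f"{expected_imbalance_cost(p):.2f}")
```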

    Contributions to energy-aware demand-response systems using SDN and NFV for fog computing

    Ever-increasing energy consumption, the depletion of non-renewable resources, the climate impact associated with energy generation, and finite energy-production capacity are important concerns worldwide that drive the urgent creation of new energy management and consumption schemes. By leveraging the massive connectivity provided by emerging communications systems such as 5G, this thesis proposes a long-term, sustainable Demand-Response solution for the adaptive and efficient management of available energy consumption in Internet of Things (IoT) infrastructures, in which energy utilization is optimized based on the available supply. In the proposed approach, energy management focuses on consumer devices (e.g., appliances such as a light bulb or a screen). By making each consumer device part of an IoT infrastructure, it is feasible to control its respective consumption. The proposal includes an architecture that uses Network Functions Virtualization (NFV) and Software Defined Networking technologies as enablers to promote the primary use of energy from renewable sources. Associated with the architecture, this thesis presents a novel consumption model conditioned on availability, in which consumers are part of the management process. To use energy from renewable and non-renewable sources efficiently, several management strategies are proposed, such as prioritization of the energy supply, workload scheduling using time-shifting capabilities, and quality degradation to decrease the power demanded by consumers if needed. The adaptive energy management solution is modeled as an Integer Linear Program, and its complexity has been shown to be NP-Hard. To verify the improvements in energy utilization, an optimal algorithmic solution based on a brute-force search has been implemented and evaluated. Because of the hardness of the adaptive energy management problem and the non-polynomial growth of its optimal solution, which limits it to small numbers of energy demands (e.g., 10 demands) and small numbers of management mechanisms, several faster suboptimal algorithmic strategies have been proposed and implemented. At the first stage, we implemented three heuristic strategies: a greedy strategy (GreedyTs), a genetic-algorithm-based solution (GATs), and a dynamic programming approach (DPTs). We then incorporated into both the optimal and heuristic strategies a prepartitioning method, in which the total set of analyzed services is divided into subsets of smaller size and complexity that are solved iteratively. As a result, this thesis presents eight strategies, one optimal and seven heuristic, that, when deployed in communications infrastructures such as the NFV domain, seek the best possible scheduling of demands and lead to efficient energy utilization. The performance of the algorithmic strategies has been validated through extensive simulations in several scenarios, demonstrating improvements in energy consumption and the processing of energy demands. Additionally, the simulation results revealed that the heuristic approaches produce high-quality solutions close to the optimal while executing between two and seven orders of magnitude faster, with applicability to scenarios with thousands to hundreds of thousands of energy demands.
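The abstract names the heuristic strategies but does not spell them out, so the following is only an assumed sketch of what a greedy time-shifting scheduler with prepartitioning might look like, loosely in the spirit of GreedyTs: demands are processed in fixed-size partitions, and each one is shifted to the hour within its allowed window that has the most renewable supply remaining. The data model and parameter names are invented for illustration.

```python
# Assumed sketch of a greedy time-shifting scheduler with prepartitioning.
# The demand tuples, hourly renewable profile, and partition size are invented
# for illustration and are not the thesis's formulation.

def greedy_time_shift(demands, renewable, partition_size=100):
    """demands: list of (energy_kwh, earliest_hour, latest_hour).
    renewable: available renewable energy per hour (kWh).
    Returns the assigned hour for each demand, in input order."""
    remaining = list(renewable)                 # renewable budget left per hour
    schedule = []
    # Prepartition: handle smaller subsets iteratively to bound complexity.
    for start in range(0, len(demands), partition_size):
        for energy, earliest, latest in demands[start:start + partition_size]:
            window = range(earliest, latest + 1)
            # Greedy choice: hour in the window with most renewable left.
            hour = max(window, key=lambda h: remaining[h])
            remaining[hour] -= energy           # may go negative -> grid supply
            schedule.append(hour)
    return schedule

if __name__ == "__main__":
    renewable = [0, 0, 1, 3, 6, 8, 9, 7, 4, 2, 1, 0]   # toy 12-hour profile
    demands = [(2.0, 2, 6), (1.5, 4, 9), (3.0, 0, 11), (1.0, 5, 7)]
    print(greedy_time_shift(demands, renewable, partition_size=2))
```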
This thesis also explores possible application scenarios of both the proposed architecture for adaptive energy management and the algorithmic strategies. We present several examples, including adaptive energy management for home systems and 5G network slicing, energy-aware management solutions for unmanned aerial vehicles (drones), and the efficient allocation of spectrum in flex-grid optical networks. Finally, this thesis presents open research problems and discusses other application scenarios and future work.

    Biomedical applications of belief networks

    Biomedicine is an area in which computers have long been expected to play a significant role. Although many of the early claims have proved unrealistic, computers are gradually becoming accepted in the biomedical, clinical and research environment. Within these application areas, expert systems appear to have met with the most resistance, especially when applied to image interpretation.

In order to improve the acceptance of computerised decision support systems it is necessary to provide the information needed to make rational judgements concerning the inferences the system has made. This entails an explanation of what inferences were made, how the inferences were made and how the results of the inference are to be interpreted. Furthermore, there must be a consistent approach to the combining of information from low level computational processes through to high level expert analyses.

Until recently, ad hoc formalisms were seen as the only tractable approach to reasoning under uncertainty. A review of some of these formalisms suggests that they are less than ideal for the purposes of decision making. Belief networks provide a tractable way of utilising probability theory as an inference formalism by combining the theoretical consistency of probability for inference and decision making with the ability to use the knowledge of domain experts.

The potential of belief networks in biomedical applications has already been recognised, and there has been substantial research into the use of belief networks for medical diagnosis and methods for handling large, interconnected networks. In this thesis the use of belief networks is extended to include detailed image model matching, to show how, in principle, feature measurement can be undertaken in a fully probabilistic way. The belief networks employed are usually cyclic and have strong influences between adjacent nodes, so new techniques for probabilistic updating based on a model of the matching process have been developed.

An object-oriented inference shell called FLAPNet has been implemented and used to apply the belief network formalism to two application domains. The first application is model-based matching in fetal ultrasound images. The imaging modality and biological variation in the subject make model matching a highly uncertain process. A dynamic, deformable model, similar to active contour models, is used. A belief network combines constraints derived from local evidence in the image with global constraints derived from trained models, to control the iterative refinement of an initial model cue.

In the second application a belief network is used for the incremental aggregation of evidence occurring during the classification of objects on a cervical smear slide, as part of an automated pre-screening system. A belief network provides both an explicit domain model and a mechanism for the incremental aggregation of evidence, two attributes important in pre-screening systems.

Overall, it is argued that belief networks combine the necessary quantitative features required of a decision support system with desirable qualitative features that will lead to improved acceptability of expert systems in the biomedical domain.
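As a hypothetical, much simplified illustration of incremental probabilistic evidence aggregation of the kind used in the second application (this is not FLAPNet and does not use the cyclic networks developed in the thesis), the sketch below updates a class posterior one piece of evidence at a time using Bayes' rule under a naive conditional-independence assumption.

```python
# Hypothetical sketch: incremental aggregation of evidence with Bayes' rule
# under a naive independence assumption. It illustrates the general idea of
# probabilistic evidence aggregation, not the belief networks or FLAPNet shell
# developed in the thesis.

def update_posterior(prior, likelihoods, observation):
    """prior: dict class -> P(class).
    likelihoods: dict class -> dict observation -> P(observation | class).
    Returns the posterior after observing one piece of evidence."""
    unnorm = {c: prior[c] * likelihoods[c][observation] for c in prior}
    z = sum(unnorm.values())
    return {c: p / z for c, p in unnorm.items()}

if __name__ == "__main__":
    # Toy example: classify an object as 'cell' or 'artifact' from two features.
    belief = {"cell": 0.5, "artifact": 0.5}
    size_model = {"cell": {"large": 0.7, "small": 0.3},
                  "artifact": {"large": 0.2, "small": 0.8}}
    texture_model = {"cell": {"smooth": 0.6, "rough": 0.4},
                     "artifact": {"smooth": 0.1, "rough": 0.9}}
    belief = update_posterior(belief, size_model, "large")      # first evidence
    belief = update_posterior(belief, texture_model, "smooth")  # second evidence
    print(belief)
```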

    The Architecture of an Autonomic, Resource-Aware, Workstation-Based Distributed Database System

    Distributed software systems that are designed to run over workstation machines within organisations are termed workstation-based. Workstation-based systems are characterised by dynamically changing sets of machines that are used primarily for other, user-centric tasks. They must be able to adapt to and utilize spare capacity when and where it is available, and ensure that the non-availability of an individual machine does not affect the availability of the system. This thesis focuses on the requirements and design of a workstation-based database system, which is motivated by an analysis of existing database architectures that are typically run over static, specially provisioned sets of machines. A typical clustered database system -- one that is run over a number of specially provisioned machines -- executes queries interactively, returning a synchronous response to applications, with its data made durable and resilient to the failure of machines. There are no existing workstation-based databases. Furthermore, other workstation-based systems do not attempt to achieve the requirements of interactivity and durability, because they are typically used to execute asynchronous batch processing jobs that tolerate data loss -- results can be re-computed. These systems use external servers to store the final results of computations rather than workstation machines. This thesis describes the design and implementation of a workstation-based database system and investigates its viability by evaluating its performance against existing clustered database systems and testing its availability during machine failures.
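Durability on a dynamically changing set of workstations is typically obtained by replicating data across machines. As a back-of-envelope illustration of that trade-off, and not the replication scheme actually designed in the thesis, the sketch below picks the smallest replication factor whose probability of losing every replica at once stays below a target, assuming independent workstation failures.

```python
# Back-of-envelope sketch: choosing a replication factor so that the chance of
# all replicas of a data item being unavailable at once stays below a target.
# Assumes independent workstation failures; an illustration only, not the
# replication design of the thesis.

def min_replication_factor(availability, max_loss_prob, k_max=10):
    """availability: probability a single workstation is up.
    max_loss_prob: acceptable probability that every replica is down."""
    p_down = 1.0 - availability
    for k in range(1, k_max + 1):
        if p_down ** k <= max_loss_prob:
            return k
    return None  # not achievable within k_max replicas

if __name__ == "__main__":
    # e.g. workstations up 90% of the time, tolerate 0.1% unavailability
    print(min_replication_factor(availability=0.90, max_loss_prob=0.001))  # -> 3
```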

    Management and modelling of battery storage systems in microGrids and virtual power plants

    In the novel smart grid configuration of power networks, Energy Storage Systems (ESSs) are emerging as one of the most effective and practical solutions to improve the stability, reliability and security of electricity power grids, especially in the presence of a high penetration of intermittent Renewable Energy Sources (RESs). This PhD dissertation proposes a number of approaches to deal with some typical issues of future active power systems, including optimal ESS sizing and modelling problems, power flow management strategies, and the minimisation of investment and operating costs. In the first part of the thesis, several algorithms and methodologies for the management of microgrids and Virtual Power Plants, integrating RES generators and battery ESSs, are proposed and analysed in four case studies, aimed at highlighting the potential of integrating ESSs in different smart grid architectures. The management strategies presented are based on rule-based and optimal management approaches. The promising results obtained in the energy management of power systems have highlighted the importance of reliable component models in the implementation of the control strategies: the performance of an energy management approach is only as accurate as the data provided by its models, and batteries are the most challenging element in the presented case studies. Therefore, in the second part of the thesis, the issues in modelling battery technologies are addressed, with particular reference to Lithium-Iron Phosphate (LFP) and Sodium-Nickel Chloride (SNB) systems. In the first case, a simplified and unified model of lithium batteries is proposed for the accurate prediction of charging-process evolution in EV applications, based on experimental tests on a 2.3 Ah LFP battery. Finally, a dynamic electrical model is presented for a high-temperature Sodium-Nickel Chloride battery. The proposed model is developed from extensive experimental testing and characterisation of a commercial 23.5 kWh SNB, and is validated using a measured current-voltage profile spanning the whole battery operating range.
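The battery modelling work described above fits electrical models to experimental data. As a generic, illustrative stand-in (the parameter values and model structure are assumptions, not the LFP or SNB models identified in the thesis), the sketch below simulates a first-order equivalent-circuit model: an open-circuit voltage that depends on state of charge, a series resistance, and one RC branch.

```python
# Illustrative first-order equivalent-circuit battery model: open-circuit
# voltage as a function of state of charge, a series resistance, and one RC
# branch. Parameters are placeholders, not the LFP/SNB models of the thesis.
import math

def simulate(current_profile, dt, capacity_ah,
             r0=0.01, r1=0.015, c1=2000.0, soc0=0.5):
    """current_profile: list of currents in A (positive = discharge).
    Returns lists of state of charge and terminal voltage over time."""
    soc, v_rc = soc0, 0.0
    socs, voltages = [], []
    for i in current_profile:
        soc -= i * dt / (3600.0 * capacity_ah)          # coulomb counting
        # RC branch voltage (exact discretisation for constant current over dt)
        tau = r1 * c1
        v_rc = v_rc * math.exp(-dt / tau) + r1 * (1 - math.exp(-dt / tau)) * i
        ocv = 3.2 + 0.7 * soc                           # crude linear OCV(SoC)
        v_term = ocv - r0 * i - v_rc
        socs.append(soc)
        voltages.append(v_term)
    return socs, voltages

if __name__ == "__main__":
    # 2.3 Ah cell discharged at 1 C for ten 60 s steps
    socs, volts = simulate([2.3] * 10, dt=60.0, capacity_ah=2.3)
    print(f"final SoC {socs[-1]:.3f}, final voltage {volts[-1]:.3f} V")
```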

    Adaptation-Aware Architecture Modeling and Analysis of Energy Efficiency for Software Systems

    This thesis presents an approach for the design-time analysis of energy efficiency for static and self-adaptive software systems. The quality characteristics of a software system, such as performance and operating costs, strongly depend upon its architecture. Software architecture is a high-level view on software artifacts that reflects essential quality characteristics of a system under design. Design decisions made on an architectural level have a decisive impact on the quality of a system. Revising architectural design decisions late in development requires significant effort. Architectural analyses allow software architects to reason about the impact of design decisions on quality, based on an architectural description of the system. An essential quality goal is the reduction of cost while maintaining other quality goals. Power consumption accounts for a significant part of the Total Cost of Ownership (TCO) of data centers. In 2010, data centers contributed 1.3% of the world-wide power consumption. However, reasoning about the energy efficiency of software systems is excluded from the systematic analysis of software architectures at design time; energy efficiency can only be evaluated once the system is deployed and operational. One approach to reduce power consumption or cost is the introduction of self-adaptivity to a software system. Self-adaptive software systems execute adaptations to provision costly resources dependent on user load. The execution of reconfigurations can increase energy efficiency and reduce cost. If performed improperly, however, the additional resources required to execute a reconfiguration may outweigh its positive effect. Existing architecture-level energy analysis approaches offer limited accuracy or only consider a limited set of system features, e.g., the communication style used. Predictive approaches from the embedded systems and Cloud Computing domains operate on an abstraction that is not suited for architectural analysis. The execution of adaptations can consume additional resources, and this additional consumption can reduce performance and energy efficiency. Design-time quality analyses for self-adaptive software systems ignore this transient effect of adaptations. This thesis makes the following contributions to enable the systematic consideration of energy efficiency in the architectural design of self-adaptive software systems: First, it presents a modeling language that captures power consumption characteristics on an architectural abstraction level. Second, it introduces an energy efficiency analysis approach that uses instances of our power consumption modeling language in combination with existing performance analyses for architecture models. The developed analysis supports reasoning on energy efficiency for static and self-adaptive software systems. Third, to ease the specification of power consumption characteristics, we provide a method for extracting power models for server environments. The method encompasses an automated profiling of servers based on a set of restrictions defined by the user. A model training framework extracts a set of power models specified in our modeling language from the resulting profile. The method ranks the trained power models based on their predicted accuracy. Lastly, this thesis introduces a systematic modeling and analysis approach for considering transient effects in design-time quality analyses. The approach explicitly models inter-dependencies between reconfigurations, performance, and power consumption.
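As a deliberately simple stand-in for the power-model extraction described in the third contribution (the thesis's modeling language and training framework are far richer), the sketch below fits a linear utilisation-to-power model to measured samples and uses it to predict the energy of a utilisation trace. All names and numbers are illustrative.

```python
# Simplified stand-in for an architecture-level power model: fit a linear
# utilisation-to-power model from measured samples, then predict the energy of
# a workload. This only illustrates the underlying idea of power-model
# extraction, not the thesis's method.

def fit_linear_power_model(samples):
    """samples: list of (utilisation in [0,1], measured power in W).
    Least-squares fit of power = p_idle + slope * utilisation."""
    n = len(samples)
    mean_u = sum(u for u, _ in samples) / n
    mean_p = sum(p for _, p in samples) / n
    var_u = sum((u - mean_u) ** 2 for u, _ in samples)
    cov = sum((u - mean_u) * (p - mean_p) for u, p in samples)
    slope = cov / var_u
    p_idle = mean_p - slope * mean_u
    return p_idle, slope

def predict_energy(p_idle, slope, utilisation_trace, dt_s):
    """Energy in joules for a trace of utilisation samples taken dt_s apart."""
    return sum((p_idle + slope * u) * dt_s for u in utilisation_trace)

if __name__ == "__main__":
    samples = [(0.0, 70.0), (0.25, 95.0), (0.5, 120.0), (1.0, 170.0)]
    p_idle, slope = fit_linear_power_model(samples)
    trace = [0.2, 0.8, 0.6, 0.4]                 # one-minute utilisation samples
    print(f"idle {p_idle:.1f} W, dynamic {slope:.1f} W/util, "
          f"energy {predict_energy(p_idle, slope, trace, 60.0)/1000:.1f} kJ")
```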
We provide a formalization of the execution semantics of the model. Additionally, we discuss how our approach can be integrated with existing quality analyses of self-adaptive software systems. We validated the accuracy, applicability, and appropriateness of our approach in a variety of case studies. The first two case studies investigated the accuracy and appropriateness of our modeling and analysis approach. The first study evaluated the impact of design decisions on the energy efficiency of a media hosting application. The energy consumption predictions achieved an absolute error lower than 5.5% across different user loads. Our approach predicted the relative impact of the design decision on energy efficiency with an error of less than 18.94%. The second case study used two variants of the Spring-based community case study system PetClinic, complementing the accuracy and appropriateness evaluation of our modeling and analysis approach. We were able to predict the energy consumption of both variants with an absolute error of no more than 2.38%. In contrast to the first case study, we derived all models automatically, using our power model extraction framework as well as an extraction framework for performance models. The third case study applied our model-based prediction to evaluate the effect of different self-adaptation algorithms on energy efficiency. It involved scientific workloads executed in a virtualized environment. Our approach predicted the energy consumption with an error below 7.1%, even though we used coarse-grained measurement data of low accuracy to train the input models. The fourth case study evaluated the appropriateness and accuracy of the automated model extraction method using a set of Big Data and enterprise workloads. Our method produced power models with prediction errors below 5.9%. A secondary study evaluated the accuracy of extracted power models for different Virtual Machine (VM) migration scenarios. The results of the fifth case study showed that our approach for modeling transient effects improved the prediction accuracy for a horizontally scaling application. Leveraging the improved accuracy, we were able to identify design deficiencies of the application that would otherwise have remained unnoticed.