
    SLA-Oriented Resource Provisioning for Cloud Computing: Challenges, Architecture, and Solutions

    Cloud computing systems promise to offer subscription-oriented, enterprise-quality computing services to users worldwide. With the increased demand for delivering services to a large number of users, providers need to offer differentiated services and meet users' quality expectations. Existing resource management systems in data centers do not yet support Service Level Agreement (SLA)-oriented resource allocation, and thus need to be enhanced to realize cloud computing and utility computing. In addition, no prior work has collectively incorporated customer-driven service management, computational risk management, and autonomic resource management into a market-based resource management system targeting the rapidly changing enterprise requirements of Cloud computing. This paper presents the vision, challenges, and architectural elements of SLA-oriented resource management. The proposed architecture supports the integration of market-based provisioning policies and virtualisation technologies for flexible allocation of resources to applications. The performance results obtained from our working prototype system show the feasibility and effectiveness of SLA-based resource provisioning in Clouds.
    Comment: 10 pages, 7 figures, Conference Keynote Paper: 2011 IEEE International Conference on Cloud and Service Computing (CSC 2011, IEEE Press, USA), Hong Kong, China, December 12-14, 2011
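
    As an illustration of the market-based provisioning idea, the sketch below shows a toy SLA-aware admission test that weighs expected revenue against the risk-weighted SLA penalty before accepting a request. All names and figures are hypothetical; the paper's actual policies are richer and operate within its virtualised architecture.

```python
from dataclasses import dataclass

@dataclass
class SlaRequest:
    """A hypothetical service request with its SLA terms."""
    revenue: float      # payment if the SLA is met
    penalty: float      # refund owed if the SLA is violated
    p_violation: float  # estimated violation probability at current load

def admit(request: SlaRequest, reserve_capacity: float, demand: float) -> bool:
    """Accept a request only if capacity remains and the expected
    profit (revenue minus risk-weighted penalty) is positive."""
    if demand > reserve_capacity:
        return False
    expected_profit = ((1 - request.p_violation) * request.revenue
                       - request.p_violation * request.penalty)
    return expected_profit > 0

# Example: a $10 request with a $25 penalty and a 20% risk of violation
r = SlaRequest(revenue=10.0, penalty=25.0, p_violation=0.2)
print(admit(r, reserve_capacity=8.0, demand=4.0))  # True: expected profit is $3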

    Real-Time Active-Reactive Optimal Power Flow with Flexible Operation of Battery Storage Systems

    In this paper, a multi-phase multi-time-scale real-time dynamic active-reactive optimal power flow (RT-DAR-OPF) framework is developed to optimally deal with spontaneous changes in wind power in distribution networks (DNs) with battery storage systems (BSSs). The most challenging issue here is that a large-scale 'dynamic' (i.e., with differential/difference equations rather than only algebraic equations) mixed-integer nonlinear programming (MINLP) problem has to be solved in real time. Moreover, considering the active-reactive power capabilities of BSSs with flexible operation strategies, as well as minimizing the expended life costs of BSSs, further increases the complexity of the problem. To solve this problem, in the first phase, we implement simultaneous optimization of a huge number of mixed-integer decision variables to compute optimal operations of BSSs on a day-to-day basis. In the second phase, based on the forecasted wind power values for short prediction horizons, wind power scenarios are generated to describe uncertain wind power with a non-Gaussian distribution. Then, MINLP AR-OPF problems corresponding to the scenarios are solved and reconciled in advance of each prediction horizon. In the third phase, based on the measured actual values of wind power, one of the solutions is selected, modified, and applied to the network for very short intervals. The applicability of the proposed RT-DAR-OPF is demonstrated using a medium-voltage DN.
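
    The three-phase structure can be pictured with the minimal Python sketch below. The scenario generator and the per-scenario AR-OPF stub are simplified stand-ins for the paper's MINLP machinery; only the control flow (generate scenarios, pre-solve, then pick the closest solution when the measurement arrives) mirrors the description above.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_scenarios(forecast: float, n: int) -> np.ndarray:
    """Phase 2, step 1: sample wind-power scenarios around a short-horizon
    forecast. A skewed (non-Gaussian) perturbation stands in for the
    paper's scenario generator."""
    return np.clip(forecast + rng.gamma(2.0, 0.5, n) - 1.0, 0.0, None)

def solve_ar_opf(scenario: float) -> float:
    """Phase 2, step 2 (stub): one AR-OPF solved per scenario.
    A real implementation would call a MINLP solver here."""
    return 0.9 * scenario  # placeholder dispatch decision

def dispatch(measured: float, scenarios: np.ndarray,
             solutions: np.ndarray) -> float:
    """Phase 3: select the pre-computed solution whose scenario best
    matches the measured wind power and apply it for the next interval."""
    return solutions[np.argmin(np.abs(scenarios - measured))]

scenarios = generate_scenarios(forecast=5.0, n=50)
solutions = np.array([solve_ar_opf(s) for s in scenarios])
print(dispatch(measured=5.3, scenarios=scenarios, solutions=solutions))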

    Probabilistic Distance for Mixtures of Independent Component Analyzers

    Independent component analysis (ICA) is a blind source separation technique in which data are modeled as linear combinations of several independent non-Gaussian sources. The independence and linearity restrictions are relaxed using several ICA mixture models (ICAMM), obtaining a two-layer artificial neural network structure. This allows for dependence between sources of different classes, and thus a myriad of multidimensional probability density functions (PDFs) can be accurately modeled. This paper proposes a new probabilistic distance (PDI) between the parameters learned for two ICA mixture models. The PDI is computed explicitly, unlike the popular Kullback-Leibler divergence (KLD) and other similar metrics, removing the need for numerical integration. Furthermore, the PDI is symmetric and bounded within 0 and 1, which enables its use as a posterior probability in fusion approaches. In this work, the PDI is employed for change detection by measuring the distance between two ICA mixture models learned in consecutive time windows. The changes might be associated with relevant states of a process under analysis that are explicitly reflected in the learned ICAMM parameters. The proposed distance was tested in two challenging applications using simulated and real data: (i) detecting flaws in materials using ultrasounds and (ii) detecting changes in electroencephalography signals from humans performing neuropsychological tests. The results demonstrate that the PDI outperforms the KLD in change-detection capabilities.
    This work was supported by the Spanish Administration and European Union under grant TEC2014-58438-R, and Generalitat Valenciana under Grant PROMETEO II/2014/032 and Grant GV/2014/034.
    Safont Armero, G.; Salazar Afanador, A.; Vergara Domínguez, L.; Gomez, E.; Villanueva, V. (2018). Probabilistic Distance for Mixtures of Independent Component Analyzers. IEEE Transactions on Neural Networks and Learning Systems, 29(4), 1161-1173. https://doi.org/10.1109/TNNLS.2017.2663843
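
    The abstract does not reproduce the closed-form PDI, but its use for change detection can be sketched with a generic stand-in distance that shares the PDI's two stated properties, symmetry and boundedness in [0, 1]. The distance below is an assumption for illustration only, not the paper's formula.

```python
import numpy as np

def symmetric_bounded_distance(params_a: np.ndarray,
                               params_b: np.ndarray) -> float:
    """Stand-in for the paper's PDI: symmetric in its arguments and
    squashed into [0, 1] so it can act like a posterior probability."""
    d = np.linalg.norm(params_a - params_b)  # symmetric by construction
    return 1.0 - np.exp(-d)                  # maps [0, inf) onto [0, 1)

def detect_changes(windows: list, threshold: float = 0.5) -> list:
    """Flag a change whenever the parameters learned in consecutive
    time windows are farther apart than the threshold."""
    return [t for t in range(1, len(windows))
            if symmetric_bounded_distance(windows[t - 1], windows[t]) > threshold]

# Three stationary windows followed by an abrupt parameter shift
params = [np.array([1.0, 0.2]), np.array([1.02, 0.21]),
          np.array([1.01, 0.19]), np.array([3.0, 1.5])]
print(detect_changes(params))  # [3]: the shift is detected at the last window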

    Safe Reinforcement Learning

    This dissertation proposes and presents solutions to two new problems that fall within the broad scope of reinforcement learning (RL) research. The first problem, high confidence off-policy evaluation (HCOPE), requires an algorithm to use historical data from one or more behavior policies to compute a high confidence lower bound on the performance of an evaluation policy. This allows us to, for the first time, provide the user of any RL algorithm with confidence that a newly proposed policy (which has never actually been used) will perform well. The second problem is to construct what we call a safe reinforcement learning algorithm: an algorithm that searches for new and improved policies while ensuring that the probability that a bad policy is proposed is low. Importantly, the user of the RL algorithm may tune the meaning of bad (in terms of a desired performance baseline) and how low the probability of a bad policy being deployed should be, in order to capture the level of risk that is acceptable for the application at hand. We show empirically that our solutions to these two critical problems require surprisingly little data, making them practical for real problems. While our methods allow us to, for the first time, produce convincing statistical guarantees about the performance of a policy without requiring its execution, the primary contribution of this dissertation is not the methods that we propose. The primary contribution of this dissertation is a compelling argument that these two problems, HCOPE and safe reinforcement learning, which at first may seem out of reach, are actually tractable. We hope that this will inspire researchers to propose their own methods, which improve upon our own, and that the development of increasingly data-efficient safe reinforcement learning algorithms will catalyze the widespread adoption of reinforcement learning algorithms for suitable real-world problems.
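
    A minimal sketch of the HCOPE idea, assuming trajectory returns rescaled to [0, 1]: importance-weight the behavior-policy returns toward the evaluation policy, clip the weights to tame heavy tails (for nonnegative returns clipping can only lower the estimate, so the bound remains a valid lower bound), and subtract a Hoeffding-style confidence margin. The dissertation's actual estimators are more data-efficient than this illustration, and all variable names here are hypothetical.

```python
import numpy as np

def hcope_lower_bound(returns, behavior_probs, eval_probs,
                      delta=0.05, c=1.0):
    """High-confidence lower bound on an evaluation policy's performance
    from behavior-policy trajectories, via clipped importance sampling
    and a Hoeffding bound. Simplified illustration, not the
    dissertation's exact estimator.

    returns        : per-trajectory returns, assumed rescaled to [0, 1]
    behavior_probs : trajectory probabilities under the behavior policy
    eval_probs     : trajectory probabilities under the evaluation policy
    delta          : 1 - confidence level
    c              : clipping threshold for the importance weights
    """
    weights = np.minimum(eval_probs / behavior_probs, c)  # clip heavy tails
    estimates = weights * returns                         # bounded in [0, c]
    n = len(estimates)
    # Hoeffding: with probability >= 1 - delta, the true mean is at least
    # the empirical mean minus this margin.
    margin = c * np.sqrt(np.log(1.0 / delta) / (2.0 * n))
    return estimates.mean() - margin

rng = rng = np.random.default_rng(1)
n = 10_000
behavior = rng.uniform(0.2, 0.8, n)    # synthetic trajectory probabilities
evaluation = rng.uniform(0.2, 0.8, n)
returns = rng.uniform(0.0, 1.0, n)
print(hcope_lower_bound(returns, behavior, evaluation))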

    Measurement-based network clustering for active distribution systems

    This paper presents a network clustering (NC) method for active distribution networks (ADNs). Following the outage of a section of an ADN, the method identifies and forms an optimum cluster of microgrids within the section. The optimum cluster is determined from a set of candidate microgrid clusters by estimating the following metrics: total power loss, voltage deviations, and minimum load shedding. To compute these metrics, equivalent circuits of the clusters are estimated using measured data provided by phasor measurement units (PMUs). Hence, the proposed NC method determines the optimum microgrid cluster without requiring information about the network's topology and its components. The proposed method is tested by simulating a study network in a real-time simulator coupled to physical PMUs and a prototype algorithm implementation, also executing in real time.
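
    A schematic of the cluster-selection step: score each candidate cluster by its three estimated metrics and keep the best. The weighted-sum combination and all numbers below are assumptions for illustration; the paper estimates the metrics from PMU-based equivalent circuits rather than from a known network model.

```python
from dataclasses import dataclass

@dataclass
class ClusterMetrics:
    """Metrics estimated for one candidate microgrid cluster from
    PMU-based equivalent circuits (illustrative field names)."""
    power_loss: float         # total power loss, kW
    voltage_deviation: float  # aggregate per-unit voltage deviation
    load_shed: float          # load shedding needed, kW

def pick_optimum(candidates: dict,
                 w_loss=1.0, w_volt=100.0, w_shed=10.0) -> str:
    """Rank candidate clusters by a weighted sum of the three metrics
    and return the name of the best one. The weights are hypothetical;
    the paper does not state how the metrics are combined."""
    def cost(m: ClusterMetrics) -> float:
        return (w_loss * m.power_loss
                + w_volt * m.voltage_deviation
                + w_shed * m.load_shed)
    return min(candidates, key=lambda name: cost(candidates[name]))

candidates = {
    "cluster_A": ClusterMetrics(12.0, 0.03, 0.0),
    "cluster_B": ClusterMetrics(9.5, 0.02, 40.0),
}
print(pick_optimum(candidates))  # cluster_A: avoiding load shedding wins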