
    From Packet to Power Switching: Digital Direct Load Scheduling

    At present, the power grid exercises tight control over its dispatchable generation capacity but only very coarse control over demand. Energy consumers are shielded from making price-aware decisions, which degrades the efficiency of the market. This state of affairs tends to favor fossil fuel generation over renewable sources. Because of the technological difficulty of storing electric energy, the quest for mechanisms that would make the demand for electricity controllable on a day-to-day basis is gaining prominence. The goal of this paper is to provide one such mechanism, which we call Digital Direct Load Scheduling (DDLS). DDLS is a direct load control mechanism in which we unbundle individual requests for energy and digitize them so that they can be automatically scheduled in a cellular architecture. Specifically, rather than storing energy or interrupting the jobs of appliances, we choose to hold requests for energy in queues and optimize the service time of individual appliances belonging to a broad class we refer to as "deferrable loads". The function of each neighborhood scheduler is to optimize the time at which these appliances start to function. This process is intended to shape the aggregate load profile of the neighborhood so as to optimize an objective function that incorporates the spot price of energy, and it also allows distributed energy resources to supply part of the generation dynamically.
    Comment: Accepted by the IEEE Journal on Selected Areas in Communications (JSAC), Smart Grid Communications series; to appear.
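
    A minimal sketch of the queueing-and-scheduling idea described above: energy requests wait in a queue, and each deferrable load is greedily assigned the cheapest feasible start slot against a spot-price curve, subject to a neighborhood capacity budget. The function name, the greedy rule, and the capacity model are illustrative assumptions, not the paper's actual DDLS algorithm.

        # Sketch only: a greedy stand-in for the paper's neighborhood scheduler.
        def schedule_deferrable_loads(requests, spot_price, capacity_kw):
            """requests: (duration_slots, power_kw, deadline_slot) per appliance.
            spot_price: per-slot energy price; capacity_kw: per-slot budget."""
            n_slots = len(spot_price)
            load = [0.0] * n_slots                 # aggregate profile being shaped
            schedule = []
            # Hold requests in a queue (earliest deadline first) instead of
            # interrupting appliances once they have started.
            for duration, power, deadline in sorted(requests, key=lambda r: r[2]):
                best_start, best_cost = None, float("inf")
                for start in range(0, min(deadline, n_slots) - duration + 1):
                    window = range(start, start + duration)
                    if any(load[t] + power > capacity_kw for t in window):
                        continue                   # would exceed the capacity budget
                    cost = sum(spot_price[t] * power for t in window)
                    if cost < best_cost:
                        best_start, best_cost = start, cost
                if best_start is not None:         # otherwise the request keeps waiting
                    for t in range(best_start, best_start + duration):
                        load[t] += power
                    schedule.append((best_start, duration, power))
            return schedule, load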

    ATP: a Datacenter Approximate Transmission Protocol

    Many datacenter applications, such as machine learning and streaming systems, do not need the complete set of data to perform their computation. Current approximate applications in datacenters run on a reliable network layer such as TCP. To improve performance, they either let the sender select a subset of the data and transmit it to the receiver, or transmit all the data and let the receiver drop some of it. These approaches are network oblivious and transmit more data than necessary, affecting both application runtime and network bandwidth usage. On the other hand, running approximate applications over a lossy network with UDP cannot guarantee the accuracy of the application's computation. We propose to run approximate applications on a lossy network and to allow packet loss in a controlled manner. Specifically, we designed a new network protocol called the Approximate Transmission Protocol, or ATP, for datacenter approximate applications. ATP opportunistically exploits as much of the available network bandwidth as possible, while performing a loss-based rate control algorithm to avoid bandwidth waste and retransmission. It also ensures fair bandwidth sharing across flows and improves accurate applications' performance by leaving more switch buffer space to accurate flows. We evaluated ATP with both simulation and a real implementation, using two macro-benchmarks and two real applications, Apache Kafka and Apache Flink. Our evaluation results show that ATP reduces application runtime by 13.9% to 74.6% compared to a TCP-based solution that drops packets at the sender, and improves accuracy by up to 94.0% compared to UDP.
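
    The loss-based rate control that the abstract sketches can be illustrated with a simple additive-increase/multiplicative-decrease loop driven by the measured loss fraction. The constants, the loss budget, and the per-interval structure below are assumptions for illustration, not ATP's published algorithm.

        # Illustrative loss-driven rate controller in the spirit of ATP.
        def update_rate(rate_mbps, loss_fraction, target_loss=0.05,
                        step_mbps=1.0, backoff=0.8,
                        min_rate=1.0, max_rate=10_000.0):
            """Probe for spare bandwidth while keeping loss near a tolerated budget."""
            if loss_fraction <= target_loss:
                rate_mbps += step_mbps    # loss within the app's tolerance: probe up
            else:
                rate_mbps *= backoff      # too many losses: back off to avoid waste
            return max(min_rate, min(rate_mbps, max_rate))

        # Per-interval control loop: derive loss from (sent, acked) counters.
        rate = 100.0
        for sent, acked in [(1000, 990), (1100, 970), (1200, 1198)]:
            rate = update_rate(rate, 1.0 - acked / sent)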

    A switching mechanism framework for optimal coupling of predictive scheduling and reactive control in manufacturing hybrid control architectures

    Nowadays, manufacturing systems are seeking control architectures that offer efficient production performance and reactivity to disruptive events. Dynamic hybrid control architectures are a promising approach: not only can they switch dynamically between hierarchical, heterarchical and semi-heterarchical structures, they can also switch the level of coupling between predictive scheduling and reactive control techniques. However, few approaches address an efficient switching process in terms of both structure and coupling. This paper presents a switching mechanism framework for dynamic hybrid control architectures, which exploits the advantages of hierarchical manufacturing scheduling systems and heterarchical manufacturing execution systems while mitigating their respective reactivity and optimality drawbacks. The main feature of this framework is that it monitors the system dynamics online and shifts between different operating modes to attain the most suitable production control strategy. Experiments were carried out on an emulation of a real manufacturing system to illustrate the benefits of including a switching mechanism in simulated scenarios. The results show that the switching mechanism improves the response to disruptions, as measured by a global performance indicator, because it permits selecting the best alternative from several operating modes.
    This article was supported by COLCIENCIAS, Departamento Administrativo de Ciencia, Tecnología e Innovación (10.13039/100007637) [Grant Number: Convocatoria 568, Doctorados en el exterior], and by Pontificia Universidad Javeriana [Grant Number: Programa de Formación de Posgrados].
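
    As a toy illustration of the online monitoring-and-switching idea, the sketch below maps two observed indicators to one of three operating modes. The thresholds and indicator names are hypothetical, and the paper's switching mechanism is considerably richer than this rule.

        # Hypothetical mode selector for a dynamic hybrid control architecture.
        HIERARCHICAL, SEMI_HETERARCHICAL, HETERARCHICAL = (
            "hierarchical", "semi-heterarchical", "heterarchical")

        def select_mode(disruption_rate, schedule_deviation):
            """Pick a control structure from online measurements of system dynamics."""
            if disruption_rate > 0.5:
                return HETERARCHICAL       # frequent disruptions: favor local reactivity
            if schedule_deviation > 0.2:
                return SEMI_HETERARCHICAL  # moderate drift: loosen predictive coupling
            return HIERARCHICAL            # stable: exploit the optimized global schedule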

    Truth and Regret in Online Scheduling

    We consider a scheduling problem where a cloud service provider has multiple units of a resource available over time. Selfish clients submit jobs, each with an arrival time, deadline, length, and value. The service provider's goal is to implement a truthful online mechanism for scheduling jobs so as to maximize the social welfare of the schedule. Recent work shows that, under a stochastic assumption on job arrivals, there is a single-parameter family of mechanisms that achieves near-optimal social welfare. We show that, given any such family of near-optimal online mechanisms, there exists an online mechanism that in the worst case performs nearly as well as the best of the given mechanisms. Our mechanism is truthful whenever the mechanisms in the given family are truthful and prompt, and it achieves optimal (within constant factors) regret. We model the problem of competing against a family of online scheduling mechanisms as one of learning from expert advice. A primary challenge is that any scheduling decision we make affects not only the payoff at the current step, but also the resource availability and payoffs in future steps. Furthermore, switching from one algorithm (a.k.a. expert) to another in an online fashion is challenging, both because it requires synchronization with the state of the latter algorithm and because it affects the incentive structure of the algorithms. We further show how to adapt our algorithm to a non-clairvoyant setting where job lengths are unknown until jobs are run to completion. Once again, in this setting, we obtain truthfulness along with asymptotically optimal regret (within poly-logarithmic factors).
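
    The "learning from expert advice" framing can be made concrete with a standard multiplicative-weights sketch in which each mechanism in the family is one expert. This omits exactly the hard parts the paper addresses (state synchronization when switching, and the effect on incentives), so treat it as background rather than as the paper's mechanism.

        import math, random

        # Standard multiplicative-weights over a family of mechanisms ("experts").
        def mw_select(weights):
            """Sample an expert index with probability proportional to its weight."""
            r, acc = random.random() * sum(weights), 0.0
            for i, w in enumerate(weights):
                acc += w
                if r <= acc:
                    return i
            return len(weights) - 1

        def mw_update(weights, payoffs, eta=0.1, payoff_max=1.0):
            """Reweight each expert by its observed per-step payoff."""
            return [w * math.exp(eta * p / payoff_max)
                    for w, p in zip(weights, payoffs)]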

    A Study on the Improvement of Data Collection in Data Centers and Its Analysis on Deep Learning-based Applications

    Big data are usually stored in data center networks for processing and analysis through various cloud applications. Such applications are collections of data-intensive jobs which often involve many parallel flows and are network-bound in the distributed environment. The recent networking abstraction for the data-parallel programming paradigm, the coflow, which expresses an application's communication requirements, has opened new opportunities for network scheduling in such applications. I therefore propose a coflow-based network scheduling algorithm, Coflourish, to improve the job completion time of data-parallel applications in the presence of increased background traffic that mimics the cloud infrastructure environment. It outperforms Varys, the state-of-the-art coflow scheduling technique, by 75.5% under various workload conditions. However, such techniques often require customized operating systems, customized computing frameworks, or external proprietary software-defined networking (SDN) switches. Consequently, to minimize application completion time through coflow scheduling, coflow routing, and a per-rate, per-flow scheduling paradigm with minimal customization of hosts and switches, I propose another scheduling technique, MinCOF, which exploits OpenFlow SDN. MinCOF provides faster deployability and has no proprietary system requirements. It also decreases the average coflow completion time by 12.94% compared to the latest OpenFlow-based coflow scheduling and routing framework. Although the challenges of analyzing and processing big data can be handled effectively by addressing these network issues, there are sometimes also challenges in analyzing data effectively due to limited data size. To further analyze such collected data, I use various deep learning approaches. Specifically, I design a framework to collect Twitter data during natural disaster events and then deploy a deep learning model to detect fake news spreading during such crisis situations. The spread of fake news during disaster events disrupts rescue missions and recovery activities, costing human lives and delaying response. My deep learning model classifies such fake events with 91.47% accuracy and an F1 score of 90.89, helping emergency managers during a crisis. This study therefore focuses on providing network solutions that decrease application completion time in the cloud environment, and on analyzing the data collected with the deployed network framework to solve real-world problems using various deep learning approaches.
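
    For context on what coflow-aware scheduling computes, the sketch below orders coflows by their bottleneck completion time, in the spirit of Varys' smallest-effective-bottleneck-first heuristic. Coflourish and MinCOF refine this with background-traffic awareness and OpenFlow-based routing, so the rule shown is illustrative only.

        from collections import defaultdict

        def order_coflows(coflows, link_gbps):
            """coflows: {coflow_id: [(src_host, dst_host, bytes_left), ...]}.
            Serve first the coflow whose slowest port would finish earliest."""
            def bottleneck_seconds(flows):
                per_port = defaultdict(float)      # bytes queued per NIC direction
                for src, dst, size in flows:
                    per_port[("tx", src)] += size
                    per_port[("rx", dst)] += size
                return max(per_port.values()) / (link_gbps * 125e6)  # Gbps -> bytes/s
            return sorted(coflows, key=lambda cid: bottleneck_seconds(coflows[cid]))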

    Adaptive Real-Time Scheduling for Legacy Multimedia Applications

    Multimedia applications are often executed on standard personal computers. The absence of established standards has hindered the adoption of real-time scheduling solutions in this class of applications. Developers have adopted a wide range of heuristic approaches to achieve acceptable timing behaviour, but the result is often unreliable. We propose a mechanism to extend the benefits of real-time scheduling to legacy applications based on the combination of two techniques: 1) a real-time monitor that observes and infers the activation period of the application, and 2) a feedback mechanism that adapts the scheduling parameters to improve its real-time performance.
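
    A minimal sketch of the two techniques, with assumed names and constants (the authors' monitor and feedback law are not reproduced here): the activation period is inferred as the median inter-activation gap, and a CPU budget is steered toward observed demand plus headroom.

        import statistics

        def infer_period(activation_times):
            """Estimate the activation period as the median inter-arrival gap."""
            gaps = [b - a for a, b in zip(activation_times, activation_times[1:])]
            return statistics.median(gaps) if gaps else None

        def adapt_budget(budget, observed_usage, period, headroom=1.1, floor=1e-3):
            """Feedback rule: track demand with a margin, capped by the period."""
            target = min(headroom * observed_usage, period)
            budget += 0.5 * (target - budget)      # first-order convergence to target
            return max(budget, floor)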