
    A Model Predictive Control Approach for Low-Complexity Electric Vehicle Charging Scheduling: Optimality and Scalability

    With the increasing adoption of plug-in electric vehicles (PEVs), it is critical to develop efficient charging coordination mechanisms that minimize the cost and impact of PEV integration into the power grid. In this paper, we consider optimal PEV charging scheduling, where the non-causal information about future PEV arrivals is not known in advance, but its statistical information can be estimated. This leads to an "online" charging scheduling problem that is naturally formulated as a finite-horizon dynamic program with continuous state and action spaces. To avoid the prohibitively high complexity of solving such a dynamic program, we provide a Model Predictive Control (MPC) based algorithm with computational complexity O(T^3), where T is the total number of time stages. We rigorously analyze the performance gap between the near-optimal solution of the MPC-based approach and the optimal solution for any distribution of the exogenous random variables. Furthermore, our analysis shows that when the random process describing the arrival of charging demands is first-order periodic, the complexity of the proposed algorithm can be reduced to O(1), independent of T. Extensive simulations show that the proposed online algorithm performs very close to the optimal online algorithm, with a performance gap smaller than 0.4% in most cases.
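
    As a rough illustration of the receding-horizon idea described above, the sketch below re-solves, at every stage, a deterministic problem in which the unknown future arrivals are replaced by their expected values, and applies only the first decision. The quadratic grid-cost model, the aggregate-backlog state, and the use of cvxpy are illustrative assumptions, not the paper's exact formulation or its O(T^3) algorithm.

        import numpy as np
        import cvxpy as cp

        T = 24                                        # number of time stages
        rng = np.random.default_rng(0)
        expected_future = rng.uniform(1.0, 3.0, T)    # forecast of per-stage arriving EV demand (kWh)
        base_load = rng.uniform(5.0, 8.0, T)          # non-EV background load

        def mpc_stage(t, backlog):
            """Solve a deterministic QP over stages t..T-1 and return the stage-t charging rate.

            Unknown future arrivals are replaced by their expected values (certainty
            equivalence); backlog is energy admitted but not yet delivered."""
            H = T - t
            x = cp.Variable(H, nonneg=True)           # EV charging power per remaining stage
            demand = backlog + float(expected_future[t:].sum())
            cost = cp.sum_squares(base_load[t:] + x)  # quadratic cost of the total load
            cp.Problem(cp.Minimize(cost), [cp.sum(x) == demand]).solve()
            return float(x.value[0])                  # apply only the first decision

        backlog = 4.0                                 # energy owed to EVs already present
        for t in range(3):                            # a few illustrative stages
            rate = min(mpc_stage(t, backlog), backlog)    # cannot deliver more than is owed
            arrivals = float(rng.uniform(1.0, 3.0))       # realized (previously unknown) arrivals
            backlog = backlog - rate + arrivals
            print(f"stage {t}: charge {rate:.2f} kWh, new backlog {backlog:.2f} kWh")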

    Online Coordinated Charging Decision Algorithm for Electric Vehicles without Future Information

    The large-scale integration of plug-in electric vehicles (PEVs) into the power grid spurs the need for efficient charging coordination mechanisms. It can be shown that the optimal charging schedule smooths out the energy consumption over time so as to minimize the total energy cost. In practice, however, it is hard to smooth out the energy consumption perfectly, because the future PEV charging demand is unknown at the moment when the charging rate of an existing PEV needs to be determined. In this paper, we propose an Online cooRdinated CHARging Decision (ORCHARD) algorithm, which minimizes the energy cost without knowing the future information. Through rigorous proof, we show that ORCHARD is strictly feasible in the sense that it is guaranteed to fulfill all charging demands before their due times. Meanwhile, it achieves the best known competitive ratio of 2.39. To further reduce the computational complexity of the algorithm, we propose a novel reduced-complexity algorithm to replace the standard convex optimization techniques used in ORCHARD. Through extensive simulations, we show that the average performance gap between ORCHARD and the offline optimal solution, which utilizes the complete future information, is as small as 14%. By setting a proper speeding factor, the average performance gap can be further reduced to less than 6%.
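
    The load-smoothing intuition above can be illustrated with a minimal online sketch: at each time slot, the controller flattens the total load over the EVs currently present as if no further EVs will arrive, and commits only the current slot's rates. This is a generic online heuristic in the spirit of the description, not the ORCHARD algorithm itself; the EV data, the absence of rate limits, and the use of cvxpy are assumptions, and neither the speeding factor nor the 2.39 competitive analysis is reproduced.

        import numpy as np
        import cvxpy as cp

        T = 12
        base = np.full(T, 2.0)                            # background load per slot
        evs = [[0, 5, 6.0], [2, 9, 8.0], [3, 7, 4.0]]     # hypothetical [arrival, deadline, energy]

        ev_load = np.zeros(T)
        for t in range(T):
            active = [v for v in evs if v[0] <= t < v[1] and v[2] > 1e-9]
            if not active:
                continue
            H = T - t
            R = cp.Variable((len(active), H), nonneg=True)    # planned rates for remaining slots
            cons = []
            for i, (_, d, e) in enumerate(active):
                k = min(d - t, H)                             # slots left before this EV's deadline
                cons.append(cp.sum(R[i, :k]) == e)            # finish the remaining energy in time
                if k < H:
                    cons.append(R[i, k:] == 0)                # no charging after the deadline
            total = base[t:] + cp.sum(R, axis=0)
            cp.Problem(cp.Minimize(cp.sum_squares(total)), cons).solve()   # flatten the load
            for i, v in enumerate(active):
                delivered = float(R.value[i, 0])              # commit only the current slot
                v[2] -= delivered
                ev_load[t] += delivered

        print("total load per slot:", np.round(base + ev_load, 2))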

    Bandit Change-Point Detection for Real-Time Monitoring High-Dimensional Data Under Sampling Control

    In many real-world problems involving real-time monitoring of high-dimensional streaming data, one wants to detect an undesired event or change quickly once it occurs, but under a sampling control constraint: in resource-constrained environments, only a subset of the components may be observed and used for decision-making at each time step. In this paper, we propose to incorporate multi-armed bandit approaches into sequential change-point detection to develop an efficient bandit change-point detection algorithm. Our proposed algorithm, termed Thompson-Sampling-Shiryaev-Roberts-Pollak (TSSRP), consists of two policies per time step: the adaptive sampling policy applies the Thompson Sampling algorithm to balance exploration for acquiring long-term knowledge against exploitation for immediate reward gain, and the statistical decision policy fuses the local Shiryaev-Roberts-Pollak statistics via sum-shrinkage techniques to determine whether to raise a global alarm. Extensive numerical simulations and case studies demonstrate the statistical and computational efficiency of our proposed TSSRP algorithm.
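
    A simplified sketch of the two ingredients named above is given below: Thompson sampling chooses which coordinates to observe at each step, local Shiryaev-Roberts statistics are updated on the observed coordinates, and a global alarm is raised when a sum of soft-thresholded local statistics exceeds a threshold. The Gaussian data model, the Beta-posterior bookkeeping, and all constants are illustrative assumptions rather than the exact TSSRP procedure.

        import numpy as np

        rng = np.random.default_rng(1)
        p, k = 20, 5                     # dimension and number of coordinates observed per step
        delta = 1.0                      # assumed post-change mean shift
        change_time, affected = 100, [0, 1, 2]
        threshold, shrink = 30.0, 3.0    # illustrative alarm threshold and shrinkage level

        R = np.zeros(p)                  # local Shiryaev-Roberts statistics
        alpha = np.ones(p)               # Beta(alpha, beta) posterior on "this coordinate changed"
        beta = np.ones(p)

        for t in range(1, 600):
            # adaptive sampling: observe the k coordinates with the largest posterior draws
            obs = np.argsort(rng.beta(alpha, beta))[-k:]
            mean = np.zeros(p)
            if t >= change_time:
                mean[affected] = delta
            x = rng.normal(mean, 1.0)    # full vector simulated; only x[obs] is used below
            # Shiryaev-Roberts update with the Gaussian likelihood ratio exp(delta*x - delta^2/2)
            R[obs] = (1.0 + R[obs]) * np.exp(delta * x[obs] - delta ** 2 / 2.0)
            # posterior update: an observation above delta/2 counts as evidence of a change
            evidence = (x[obs] > delta / 2.0).astype(float)
            alpha[obs] += evidence
            beta[obs] += 1.0 - evidence
            # statistical decision: sum-shrinkage fusion of the local statistics
            global_stat = np.sum(np.maximum(np.log1p(R) - shrink, 0.0))
            if global_stat > threshold:
                print(f"alarm at t={t} (true change at t={change_time})")
                break
        else:
            print("no alarm raised")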

    Profit-Maximizing Planning and Control of Battery Energy Storage Systems for Primary Frequency Control

    We consider a two-level profit-maximizing strategy, including planning and control, for battery energy storage system (BESS) owners that participate in the primary frequency control (PFC) market. Specifically, the optimal BESS control minimizes the operating cost by keeping the state of charge (SoC) in an optimal range. Through rigorous analysis, we prove that the optimal BESS control is a "state-invariant" strategy in the sense that the optimal SoC range does not vary with the state of the system. As such, the optimal control strategy can be computed offline once and for all with very low complexity. Regarding the BESS planning, we prove that the minimum operating cost is a decreasing convex function of the BESS energy capacity. This leads to the optimal BESS sizing that strikes a balance between the capital investment and the operating cost. Our work here provides a useful theoretical framework for understanding the planning and control strategies that maximize the economic benefits of BESSs in ancillary service markets.
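
    The idea of a fixed optimal SoC band can be illustrated with the toy simulation below: the battery follows a random PFC signal, corrective energy is purchased whenever the SoC leaves a candidate band [lo, hi], a penalty is charged when a request cannot be served, and a brute-force search picks the cheapest band. The signal model, prices, penalty, and grid search are illustrative assumptions, not the paper's offline derivation.

        import numpy as np

        rng = np.random.default_rng(2)
        capacity = 10.0                          # MWh usable energy capacity
        price = 50.0                             # $/MWh for corrective (re)charging energy
        penalty = 500.0                          # $/MWh for PFC energy that cannot be served
        signal = rng.normal(0.0, 0.4, 2000)      # per-step PFC energy requests (MWh)

        def operating_cost(lo, hi):
            """Simulate the band policy [lo, hi] and return its total operating cost."""
            soc, cost = 0.5 * capacity, 0.0
            for s in signal:
                target = soc - s                                 # deliver (s>0) or absorb (s<0)
                unserved = max(target - capacity, 0.0) + max(-target, 0.0)
                cost += unserved * penalty                       # saturation: request not fully met
                soc = float(np.clip(target, 0.0, capacity))
                if soc < lo:                                     # corrective action back to the band
                    cost += (lo - soc) * price
                    soc = lo
                elif soc > hi:
                    cost += (soc - hi) * price
                    soc = hi
            return cost

        # brute-force search over candidate bands (the paper shows the optimal band
        # can be computed offline once and for all)
        grid = np.linspace(0.0, capacity, 11)
        best = min((operating_cost(lo, hi), lo, hi) for lo in grid for hi in grid if lo < hi)
        print(f"cheapest band: [{best[1]:.0f}, {best[2]:.0f}] MWh, operating cost ${best[0]:,.0f}")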

    Differentially Private Change-Point Detection

    The change-point detection problem seeks to identify distributional changes at an unknown change-point k* in a stream of data. This problem appears in many important practical settings involving personal data, including biosurveillance, fault detection, finance, signal detection, and security systems. The field of differential privacy offers data analysis tools that provide powerful worst-case privacy guarantees. We study the statistical problem of change-point detection through the lens of differential privacy. We give private algorithms for both online and offline change-point detection, analyze these algorithms theoretically, and provide empirical validation of our results.
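
    One standard way to make an offline change-point estimator differentially private, sketched below, is to compute a log-likelihood-ratio statistic for every candidate change point and release the argmax via report noisy max, with Laplace noise scaled to the per-record sensitivity of the statistic. The Bernoulli data model with known pre- and post-change rates, and the choice of epsilon, are illustrative assumptions and not necessarily the paper's exact mechanism.

        import numpy as np

        rng = np.random.default_rng(3)
        n, k_true = 200, 120
        p0, p1 = 0.2, 0.6                        # known pre- and post-change Bernoulli rates
        x = np.concatenate([rng.binomial(1, p0, k_true), rng.binomial(1, p1, n - k_true)])

        # per-record log-likelihood-ratio terms; ell(k) = sum_{i >= k} step[i] is
        # maximized (in expectation) at the true change point
        step = np.where(x == 1, np.log(p1 / p0), np.log((1 - p1) / (1 - p0)))
        ell = np.cumsum(step[::-1])[::-1]

        # changing one record moves every ell(k) by at most A, so report noisy max
        # with Laplace(2A/eps) noise gives an eps-differentially-private argmax
        A = abs(np.log(p1 / p0) - np.log((1 - p1) / (1 - p0)))
        eps = 1.0
        noisy = ell + rng.laplace(scale=2.0 * A / eps, size=n)
        k_hat = int(np.argmax(noisy))
        print(f"true change point {k_true}, private estimate {k_hat}")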

    PAPRIKA: Private Online False Discovery Rate Control

    In hypothesis testing, a false discovery occurs when a hypothesis is incorrectly rejected due to noise in the sample. When adaptively testing multiple hypotheses, the probability of a false discovery increases as more tests are performed. Thus, the problem of False Discovery Rate (FDR) control is to find a procedure for testing multiple hypotheses that accounts for this effect in determining the set of hypotheses to reject. The goal is to minimize the number (or fraction) of false discoveries while maintaining a high true positive rate (i.e., correct discoveries). In this work, we study FDR control in multiple hypothesis testing under the constraint of differential privacy for the sample. Unlike previous work in this direction, we focus on the online setting, meaning that a decision about each hypothesis must be made immediately after the test is performed, rather than waiting for the output of all tests as in the offline setting. We provide new private algorithms based on state-of-the-art results in non-private online FDR control. Our algorithms have strong provable guarantees for privacy and statistical performance as measured by FDR and power. We also provide experimental results to demonstrate the efficacy of our algorithms in a variety of data environments.
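
    The toy sketch below combines the two ingredients in the description: each hypothesis's test statistic is privatized with Laplace noise, a conservative p-value is derived from Hoeffding and Laplace tail bounds, and rejections follow a simple online alpha-spending rule. This is not the PAPRIKA procedure (which builds on state-of-the-art non-private online FDR rules); the spending series used here is far more conservative, and the batch size, epsilon, and data model are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(4)
        n, eps, alpha = 2000, 1.0, 0.1
        b = 1.0 / (n * eps)                    # Laplace scale: a Bernoulli mean has sensitivity 1/n
        true_rates = [0.5, 0.5, 0.65, 0.5, 0.7, 0.5, 0.5, 0.6]     # nulls are exactly 0.5

        rejections = []
        for t, rate in enumerate(true_rates, start=1):
            data = rng.binomial(1, rate, n)                    # this hypothesis's private batch
            noisy_mean = data.mean() + rng.laplace(scale=b)    # eps-DP release of the statistic
            z = noisy_mean - 0.5                               # evidence that the rate exceeds 0.5
            if z <= 0:
                p = 1.0
            else:   # conservative p-value: Hoeffding tail for the data + Laplace tail for the noise
                p = min(1.0, np.exp(-2 * n * (z / 2) ** 2) + 0.5 * np.exp(-(z / 2) / b))
            level = alpha * 6.0 / (np.pi ** 2 * t ** 2)        # summable online alpha-spending rule
            if p <= level:
                rejections.append(t)
            print(f"test {t}: p={p:.3g}, level={level:.4f}, {'reject' if p <= level else 'accept'}")
        print("rejected hypotheses:", rejections)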

    The Cost of Regulatory Inaction: Evidence from IFRS Non-adoption

    Numerous countries adopted IFRS in 2005 to obtain a more detailed and comparable financial reporting regime, but many others did not. We study the consequences of regulatory inaction by non-adopting countries. We first show that IFRS adoption by other countries does not affect the liquidity of S&P 1500 US firms. Using S&P 1500 US firms as the control group, we find that the liquidity of firms in non-US countries that did not adopt IFRS significantly declined after the fourth quarter of 2005, suggesting a deteriorating information environment. To search for the forces behind the liquidity drop, we further show that analysts and institutional investors migrated away from non-adopting countries to adopting countries after 2005. Overall, our findings suggest that regulatory inaction can be costly: valuable information-production resources can shift their attention to cover companies in the new regime, resulting in a worse information environment for companies that remain in the old regime.

    A Novel Low Power UWB Cascode SiGe BiCMOS LNA with Current Reuse and Zero-Pole Cancellation

    A low power cascode SiGe BiCMOS low noise amplifier (LNA) with current reuse and zero-pole cancellation is presented for ultra-wideband (UWB) applications. The LNA is composed of a cascode input stage and a common emitter (CE) output stage with dual loop feedbacks. The novel cascode-CE current reuse topology replaces the traditional two-stage topology so as to obtain low power consumption. The emitter degeneration inductor in the input stage is adopted to achieve good input impedance matching and noise performance. The two poles introduced by the emitter inductor, which would degrade the gain performance, are cancelled by the dual loop feedbacks of the resistance-inductor (RL) shunt-shunt feedback and the resistance-capacitor (RC) series-series feedback in the output stage. Meanwhile, output impedance matching is also achieved. Based on the TSMC 0.35 μm SiGe BiCMOS process, the topology and chip layout of the proposed LNA are designed and post-simulated. The LNA achieves a noise figure of 2.3-4.1 dB, a gain of 18.9-20.2 dB, a gain flatness of ±0.65 dB, and an input third-order intercept point (IIP3) of -7 dBm at 6 GHz; it exhibits less than 16 ps of group delay variation, good input and output impedance matching, and unconditional stability over the whole band. The power consumption is only 18 mW.
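
    The zero-pole cancellation principle invoked above can be illustrated numerically with a toy transfer function, shown below: an unwanted pole rolls the gain off early, and a feedback zero placed at the same frequency restores a flat response up to the next pole. The pole frequencies and gain are hypothetical and do not model the actual circuit.

        import numpy as np
        from scipy import signal

        w_p = 2 * np.pi * 3e9      # unwanted pole from the degeneration inductor (rad/s), hypothetical
        w_2 = 2 * np.pi * 12e9     # remaining high-frequency pole (rad/s), hypothetical
        gain = 10.0                # 20 dB low-frequency gain

        # uncompensated stage: H(s) = gain / ((1 + s/w_p)(1 + s/w_2))
        den = np.polymul([1 / w_p, 1], [1 / w_2, 1])
        plain = signal.TransferFunction([gain], den)
        # with feedback: a zero at w_p cancels the pole, leaving H(s) = gain / (1 + s/w_2)
        cancelled = signal.TransferFunction([gain / w_p, gain], den)

        w = 2 * np.pi * np.logspace(8, 10.5, 6)    # spot frequencies from 0.1 GHz to about 32 GHz
        for name, sys in [("uncompensated", plain), ("zero-pole cancelled", cancelled)]:
            _, mag, _ = signal.bode(sys, w)
            print(f"{name:22s} gain (dB) at spot frequencies:", np.round(mag, 1))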

    Attribute Privacy: Framework and Mechanisms

    Ensuring the privacy of training data is a growing concern, since many machine learning models are trained on confidential and potentially sensitive data. Much attention has been devoted to methods for protecting individual privacy during analyses of large datasets. However, in many settings, global properties of the dataset may also be sensitive (e.g., the mortality rate in a hospital rather than the presence of a particular patient in the dataset). In this work, we depart from individual privacy to initiate the study of attribute privacy, where a data owner is concerned about revealing sensitive properties of a whole dataset during analysis. We propose definitions to capture attribute privacy in two relevant cases where global attributes may need to be protected: (1) properties of a specific dataset and (2) parameters of the underlying distribution from which the dataset is sampled. We also provide two efficient mechanisms and one inefficient mechanism that satisfy attribute privacy for these settings. We base our results on a novel use of the Pufferfish framework to account for correlations across attributes in the data, thus addressing "the challenging problem of developing Pufferfish instantiations and algorithms for general aggregate secrets" that was left open by Kifer and Machanavajjhala (2014).
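
    A toy sketch of the flavor of protection described above is given below: the secret is a global property (which of two candidate mortality rates generated a clinic's data), and the released count is perturbed with Laplace noise scaled to an approximation of the infinity-Wasserstein distance between the count's distributions under the two candidate secrets, in the spirit of the Wasserstein mechanism for Pufferfish privacy. The rates, sample size, epsilon, and quantile-grid approximation are illustrative assumptions, not the paper's mechanisms.

        import numpy as np
        from scipy.stats import binom

        rng = np.random.default_rng(5)
        n, eps = 1000, 1.0
        rate_a, rate_b = 0.02, 0.05        # the two candidate values of the secret global attribute

        # grid approximation of the infinity-Wasserstein distance between the released
        # statistic's distributions (number of deaths) under the two candidate secrets
        q = np.linspace(0.001, 0.999, 999)
        w_inf = float(np.max(np.abs(binom.ppf(q, n, rate_a) - binom.ppf(q, n, rate_b))))

        deaths = int(rng.binomial(n, rate_b))              # data generated under secret rate_b
        release = deaths + rng.laplace(scale=w_inf / eps)  # noise scaled to the secret-level gap
        print(f"W_inf (grid approx.) = {w_inf:.0f}, true count = {deaths}, release = {release:.1f}")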

    Leakage of Dataset Properties in Multi-Party Machine Learning

    Secure multi-party machine learning allows several parties to build a model on their pooled data to increase utility while not explicitly sharing data with each other. We show that such multi-party computation can cause leakage of global dataset properties between the parties even when parties obtain only black-box access to the final model. In particular, a "curious" party can infer the distribution of sensitive attributes in other parties' data with high accuracy. This raises concerns regarding the confidentiality of properties pertaining to the whole dataset, as opposed to individual data records. We show that our attack can leak population-level properties in datasets of different types, including tabular, text, and graph data. To understand and measure the source of leakage, we consider several models of correlation between a sensitive attribute and the rest of the data. Using multiple machine learning models, we show that leakage occurs even if the sensitive attribute is not included in the training data and has a low correlation with other attributes or the target variable.
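
    A generic shadow-model property-inference sketch of the kind of leakage described above is given below: the attacker trains shadow models on synthetic datasets whose sensitive-attribute ratio is known to be low or high, queries each model on a fixed probe set using black-box access only, and fits a meta-classifier on those output vectors to predict the ratio in the victim's training data. The synthetic data model and the scikit-learn classifiers are illustrative assumptions, not the paper's exact attack.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(6)

        def make_dataset(ratio, m=400):
            """Synthetic party data; the sensitive attribute s is never a training feature."""
            s = rng.binomial(1, ratio, m)                       # sensitive attribute
            X = rng.normal(0.0, 1.0, (m, 4))
            X[:, 0] += 1.0 * s                                  # s is correlated with one feature
            y = (X[:, 1] + 1.5 * s + rng.normal(0, 0.5, m) > 0).astype(int)   # and with the label
            return X, y

        probe = rng.normal(0.0, 1.0, (50, 4))                   # fixed black-box query points

        def black_box_signature(ratio):
            """Train a model on data with the given ratio and return only its probe outputs."""
            X, y = make_dataset(ratio)
            model = LogisticRegression(max_iter=1000).fit(X, y)
            return model.predict_proba(probe)[:, 1]

        # attacker: shadow models with known property values feed a meta-classifier
        ratios = [0.1] * 20 + [0.6] * 20
        sigs = np.array([black_box_signature(r) for r in ratios])
        labels = np.array([0] * 20 + [1] * 20)                  # 0 = low ratio, 1 = high ratio
        meta = LogisticRegression(max_iter=1000).fit(sigs, labels)

        # victim: an unseen model trained on high-ratio data, observed only via the probe outputs
        victim = black_box_signature(0.6).reshape(1, -1)
        print("inferred probability that the victim's data has a high ratio:",
              round(float(meta.predict_proba(victim)[0, 1]), 3))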