
    Catching Cheats: Detecting Strategic Manipulation in Distributed Optimisation of Electric Vehicle Aggregators

    Given the rapid rise of electric vehicles (EVs) worldwide, and the ambitious targets set for the near future, the management of large EV fleets must be seen as a priority. Specifically, we study a scenario where EV charging is managed through self-interested EV aggregators who compete in the day-ahead market in order to purchase the electricity needed to meet their clients' requirements. With the aim of reducing electricity costs and lowering the impact on electricity markets, a centralised bidding coordination framework employing a coordinator has been proposed in the literature. In order to improve privacy and limit the need for the coordinator, we propose a reformulation of the coordination framework as a decentralised algorithm, employing the Alternating Direction Method of Multipliers (ADMM). However, given the self-interested nature of the aggregators, they can deviate from the algorithm in order to reduce their energy costs. Hence, we study the strategic manipulation of the ADMM algorithm and, in doing so, describe and analyse different possible attack vectors and propose a mathematical framework to quantify and detect manipulation. Importantly, this detection framework is not limited to the considered EV scenario and can be applied to general ADMM algorithms. Finally, we test the proposed decentralised coordination and manipulation detection algorithms in realistic scenarios using real market and driver data from Spain. Our empirical results show that the decentralised algorithm's convergence to the optimal solution can be effectively disrupted by manipulative attacks, which achieve convergence to a different, non-optimal solution that benefits the attacker. With respect to the detection algorithm, results indicate that it achieves very high accuracy and significantly outperforms a naive benchmark.
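    As a rough illustration of the decentralised scheme (not the paper's exact formulation), the sketch below runs a generic consensus ADMM iteration in which each aggregator holds a private quadratic cost and a lightweight coordinator only averages the reported local variables; the cost parameters, penalty rho, and agent count are illustrative assumptions.

    # Minimal consensus-ADMM sketch: agents agree on a common decision z
    # while keeping their cost parameters private. Illustrative values only.
    import numpy as np

    rng = np.random.default_rng(0)
    n_agents, rho, n_iters = 5, 1.0, 200
    a = rng.uniform(0.5, 2.0, n_agents)   # private cost curvatures (assumed)
    c = rng.uniform(-1.0, 1.0, n_agents)  # private cost minimisers (assumed)

    x = np.zeros(n_agents)   # local copies held by the aggregators
    u = np.zeros(n_agents)   # scaled dual variables
    z = 0.0                  # consensus variable held by the coordinator

    for k in range(n_iters):
        # Local update: closed form for f_i(x) = 0.5 * a_i * (x - c_i)^2
        x = (a * c + rho * (z - u)) / (a + rho)
        # Coordinator update: average of the reported x_i + u_i
        z = np.mean(x + u)
        # Dual update performed locally by each aggregator
        u = u + x - z

    print("ADMM consensus:", z)
    print("closed form   :", np.sum(a * c) / np.sum(a))  # a-weighted mean of c

    In this picture, a manipulative aggregator is one that reports distorted x or u values in the updates above, which is the kind of deviation the detection framework is designed to flag.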

    Dynamic Factor Analysis for Measuring Money

    Technological innovations in the financial industry pose major problems for the measurement of monetary aggregates. The authors describe work on a new measure of money that has a more satisfactory means of identifying and removing the effects of financial innovations. The new method distinguishes between the measured data (currency and deposit balances) and the underlying phenomena of interest (the intended use of money for transactions and savings). Although the classification scheme used for monetary aggregates was originally designed to provide a proxy for the phenomena of interest, it is breaking down. The authors feel it is beneficial to move to an explicit attempt to measure an index of intended use. The distinction is only a preliminary step. It provides a mechanism that allows for financial innovations to affect measured data without fundamentally altering the underlying phenomena being measured, but it does not automatically accommodate financial innovations. To achieve that step will require further work. At least intuitively, however, the focus on an explicit measurement model provides a better framework for identifying when financial innovations change the measured data. Although the work is preliminary, and there are many outstanding problems, if the approach proves successful it will result in the most fundamental reformulation in the way money is measured since the introduction of monetary aggregates half a century ago. The authors review previous methodologies and describe a dynamic factor approach that makes an explicit distinction between the measured data and the underlying phenomena. They present some preliminary estimates using simulated and real data.
    Keywords: Econometric and statistical methods; Monetary aggregates; Monetary and financial indicators
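    Purely as an illustrative sketch of the kind of measurement model described (a generic two-factor state-space model recovered with a Kalman filter, not the authors' specification), the example below treats latent "transactions" and "savings" intent as factors driving observed currency and deposit balances; the loadings, dynamics, and noise levels are assumptions.

    # Hypothetical two-factor dynamic measurement model: latent intent
    # drives the measured currency/deposit data. Illustrative values only.
    import numpy as np

    rng = np.random.default_rng(1)
    T = 200
    A = np.array([[0.95, 0.0], [0.0, 0.98]])   # factor dynamics (assumed)
    H = np.array([[0.8, 0.1], [0.3, 0.9]])     # loadings: data = H @ factors + noise
    Q, R = 0.01 * np.eye(2), 0.02 * np.eye(2)  # innovation / measurement covariances

    # Simulate latent factors and the measured aggregates
    f, y = np.zeros((T, 2)), np.zeros((T, 2))
    for t in range(1, T):
        f[t] = A @ f[t - 1] + rng.multivariate_normal(np.zeros(2), Q)
        y[t] = H @ f[t] + rng.multivariate_normal(np.zeros(2), R)

    # Kalman filter: recover the index of intended use from the measured data
    m, P, est = np.zeros(2), np.eye(2), np.zeros((T, 2))
    for t in range(T):
        m, P = A @ m, A @ P @ A.T + Q                 # predict
        S = H @ P @ H.T + R                           # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
        m = m + K @ (y[t] - H @ m)                    # update with observed data
        P = (np.eye(2) - K @ H) @ P
        est[t] = m

    # How well the filtered index tracks each latent factor
    print([round(float(np.corrcoef(f[:, i], est[:, i])[0, 1]), 3) for i in range(2)])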

    An Iterative Scheme for Leverage-based Approximate Aggregation

    The current data explosion poses great challenges to approximate aggregation in terms of both efficiency and accuracy. To address this problem, we propose a novel approach that computes aggregation answers with high accuracy using only a small portion of the data. We introduce leverages to reflect individual differences among the samples from a statistical perspective. Two kinds of estimators, the leverage-based estimator and the sketch estimator (a "rough picture" of the aggregation answer), constrain each other and are iteratively improved according to the actual conditions until their difference falls below a threshold. Thanks to the iteration mechanism and the leverages, our approach achieves high accuracy. Moreover, features such as not needing to record the sampled data and being easy to extend to various execution modes (e.g., the online mode) make our approach well suited to big data. Experiments show that our approach performs extremely well: compared with uniform sampling, it achieves high-quality answers with only 1/3 of the sample size.
    Comment: 17 pages, 9 figures
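    As an illustrative sketch of the iterative principle only (the paper's actual estimators and stopping rule are not reproduced here), the example below compares a uniform-sampling "sketch" estimate of a mean with an importance-sampling estimate driven by a leverage-like proxy, and enlarges the sample until the two agree within a relative threshold.

    # Hypothetical iterative scheme: refine a sketch estimator and a
    # leverage-weighted estimator until they agree. Illustrative only.
    import numpy as np

    rng = np.random.default_rng(2)
    data = rng.lognormal(mean=0.0, sigma=1.5, size=100_000)  # skewed population
    N = data.size

    # Leverage-like proxy: probabilities from a cheap, noisy surrogate of influence
    proxy = data * rng.lognormal(0.0, 0.3, size=N)
    p = proxy / proxy.sum()

    def sketch_estimate(n):
        """Uniform-sample mean: a rough picture of the aggregation answer."""
        idx = rng.integers(0, N, size=n)
        return data[idx].mean()

    def leverage_estimate(n):
        """Hansen-Hurwitz mean estimate under the proxy-proportional design."""
        idx = rng.choice(N, size=n, replace=True, p=p)
        return (data[idx] / (N * p[idx])).mean()

    # Iterate: grow the sample until the two estimators agree within 2%
    n = 200
    while True:
        sk, lv = sketch_estimate(n), leverage_estimate(n)
        if abs(sk - lv) <= 0.02 * abs(sk) or n >= N:
            break
        n *= 2

    print(f"n={n}  sketch={sk:.3f}  leverage={lv:.3f}  true={data.mean():.3f}")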