5,124 research outputs found

    A game theoretical randomized method for large-scale systems partitioning

    In this paper, a game-theory-based partitioning algorithm for large-scale systems (LSS) is proposed. More specifically, a game over nodes is introduced in a model predictive control framework. The Shapley value of this game is used to rank the communication links of the control network by their impact on overall system performance. To relieve the combinatorial explosion inherent to LSS, a randomized method is used to estimate the Shapley value of each node, and the resulting value is efficiently redistributed to the links involved. Once a partitioning solution is obtained, a sensitivity analysis is proposed to give a measure of its performance. Likewise, a greedy fine-tuning procedure is considered to further improve the partitioning results. The full Barcelona drinking water network (DWN) is analyzed as a real LSS case study, showing the effectiveness of the proposed approach in comparison with other partitioning schemes available in the literature.
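
    As a rough, hedged sketch of the randomized estimation step described above: the permutation-sampling Monte Carlo estimator below approximates the Shapley value of each node from marginal contributions along random orderings. The function `coalition_value` is a hypothetical placeholder for the closed-loop MPC performance index the paper evaluates; it is not taken from the source.

```python
import random

def estimate_shapley(nodes, coalition_value, n_samples=1000, seed=None):
    """Monte Carlo Shapley estimation by sampling node permutations.

    `coalition_value` maps a frozenset of nodes to a scalar payoff;
    here it stands in (as an assumption) for the overall system
    performance index of the MPC-based partitioning game.
    """
    rng = random.Random(seed)
    shapley = {v: 0.0 for v in nodes}
    for _ in range(n_samples):
        perm = list(nodes)
        rng.shuffle(perm)
        coalition = frozenset()
        prev = coalition_value(coalition)
        for v in perm:
            coalition = coalition | {v}
            curr = coalition_value(coalition)
            shapley[v] += curr - prev  # marginal contribution of v
            prev = curr
    # Average marginal contributions over the sampled permutations.
    return {v: s / n_samples for v, s in shapley.items()}
```

    Redistributing each node's estimated value to its incident links, as the abstract describes, would then yield the link ranking used for partitioning; that step depends on the network model and is omitted here.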

    Algorithmic and Statistical Perspectives on Large-Scale Data Analysis

    In recent years, ideas from statistics and scientific computing have begun to interact, in increasingly sophisticated and fruitful ways, with ideas from computer science and the theory of algorithms, aiding the development of improved worst-case algorithms for large-scale scientific and Internet data analysis problems. In this chapter, I describe two recent examples that drew on ideas from both areas: one concerns selecting good columns or features from a (DNA single nucleotide polymorphism) data matrix, and the other concerns selecting good clusters or communities from a data graph representing a social or information network. These examples may serve as a model for exploiting complementary algorithmic and statistical perspectives to solve applied large-scale data analysis problems.
    Comment: 33 pages. To appear in Uwe Naumann and Olaf Schenk, editors, "Combinatorial Scientific Computing," Chapman and Hall/CRC Press, 201
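
    As a hedged illustration of the column-selection example: a standard recipe in this line of work (not necessarily the exact variant used in the chapter) samples columns with probability proportional to their rank-k statistical leverage scores, computed from the top right singular vectors of the data matrix. The sketch below assumes a dense NumPy matrix; the parameters `k` and `c` are illustrative.

```python
import numpy as np

def leverage_score_columns(A, k, c, seed=None):
    """Sample c columns of A with probability proportional to
    rank-k statistical leverage scores.

    A : (m, n) data matrix (e.g., individuals x SNPs)
    k : target rank for the leverage-score computation
    c : number of columns to draw
    """
    rng = np.random.default_rng(seed)
    # Top-k right singular vectors; column i of Vt[:k] reflects how
    # strongly column i of A loads on the dominant rank-k subspace.
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    lev = np.sum(Vt[:k, :] ** 2, axis=0)  # leverage scores, sum to k
    probs = lev / lev.sum()               # normalized sampling distribution
    cols = rng.choice(A.shape[1], size=c, replace=True, p=probs)
    return cols, A[:, cols]
```

    For an individuals-by-SNPs matrix, the returned indices pick out, in expectation, the SNPs that carry most of the top-k subspace, which is the sense in which leverage-score sampling selects "good" columns.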