
    LP-based Covering Games with Low Price of Anarchy

    We present a new class of vertex cover and set cover games. The price of anarchy bounds match the best known constant-factor approximation guarantees for the centralized optimization problems, for linear and also for submodular costs -- in contrast to all previously studied covering games, where the price of anarchy cannot be bounded by a constant (e.g., [6, 7, 11, 5, 2]). In particular, we describe a vertex cover game with a price of anarchy of 2. The rules of the games capture the structure of the linear programming relaxations of the underlying optimization problems, and our bounds are established by analyzing these relaxations. Furthermore, for linear costs we exhibit linear-time best-response dynamics that converge to these almost optimal Nash equilibria. These dynamics mimic the classical greedy approximation algorithm of Bar-Yehuda and Even [3].
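    The Bar-Yehuda and Even algorithm referenced above is the classical local-ratio 2-approximation for weighted vertex cover. Below is a minimal sketch of it; the graph encoding, function name, and toy instance are illustrative assumptions, not taken from the paper.

```python
# Bar-Yehuda & Even style local-ratio greedy for weighted vertex cover.
def vertex_cover_2approx(edges, weights):
    """Return a vertex cover of total weight at most twice the optimum.

    edges   : iterable of (u, v) pairs
    weights : dict mapping vertex -> nonnegative cost
    """
    residual = dict(weights)  # remaining uncharged cost of each vertex
    cover = set()
    for u, v in edges:
        if u in cover or v in cover:
            continue  # edge already covered
        # Charge the smaller residual cost to both endpoints (local-ratio step).
        delta = min(residual[u], residual[v])
        residual[u] -= delta
        residual[v] -= delta
        # A vertex whose residual cost reaches zero is bought into the cover.
        if residual[u] == 0:
            cover.add(u)
        if residual[v] == 0:
            cover.add(v)
    return cover

# Toy run on the path a-b-c with unit weights: the greedy may return
# {'a', 'b'} (weight 2), within a factor 2 of the optimum {'b'} (weight 1).
print(vertex_cover_2approx([("a", "b"), ("b", "c")], {"a": 1, "b": 1, "c": 1}))
```

    The factor of 2 follows because each charge delta is paid by two endpoints, and any feasible cover, including the optimal one, must pay it on at least one of them.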

    Online Distributed Sensor Selection

    A key problem in sensor networks is to decide which sensors to query when, in order to obtain the most useful information (e.g., for performing accurate prediction), subject to constraints (e.g., on power and bandwidth). In many applications the utility function is not known a priori, must be learned from data, and can even change over time. Furthermore, for large sensor networks, solving a centralized optimization problem to select sensors is not feasible, and thus we seek a fully distributed solution. In this paper, we present Distributed Online Greedy (DOG), an efficient, distributed algorithm for repeatedly selecting sensors online, receiving feedback only about the utility of the selected sensors. We prove very strong theoretical no-regret guarantees that apply whenever the (unknown) utility function satisfies a natural diminishing-returns property called submodularity. Our algorithm has extremely low communication requirements and scales well to large sensor deployments. We extend DOG to allow observation-dependent sensor selection, and we empirically demonstrate the effectiveness of our algorithm on several real-world sensing tasks.
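    DOG itself is distributed and runs online from feedback; as a point of reference, the following is a hedged sketch of the centralized offline greedy baseline whose near-optimality (a (1 - 1/e) guarantee for monotone submodular utilities under a cardinality constraint) is what submodularity buys. The function names and the toy coverage utility are illustrative assumptions; this is not the DOG algorithm.

```python
# Centralized offline greedy for monotone submodular sensor selection.
# A textbook baseline illustrating diminishing returns; not DOG itself.
def greedy_select(sensors, utility, k):
    """Pick k sensors, each time adding the one with the largest marginal gain.

    utility : function on frozensets of sensors, assumed monotone submodular.
    """
    selected = set()
    for _ in range(k):
        best, best_gain = None, float("-inf")
        for s in sensors:
            if s in selected:
                continue
            gain = utility(frozenset(selected | {s})) - utility(frozenset(selected))
            if gain > best_gain:
                best, best_gain = s, gain
        if best is None:
            break  # fewer than k sensors available
        selected.add(best)
    return selected

# Toy utility: each sensor observes some regions; utility counts distinct
# regions covered, which is monotone submodular.
coverage = {"s1": {1, 2}, "s2": {2, 3}, "s3": {4}}
def util(S):
    return len(set().union(*(coverage[s] for s in S))) if S else 0
print(greedy_select(coverage, util, 2))  # -> {'s1', 's2'}, covering {1, 2, 3}
```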

    Multiagent Maximum Coverage Problems: The Trade-off Between Anarchy and Stability

    The price of anarchy and the price of stability are two well-studied performance metrics that seek to characterize the inefficiency of equilibria in distributed systems. The distinction between these two metrics centers on the equilibria they focus on: the price of anarchy characterizes the quality of the worst-performing equilibria, while the price of stability characterizes the quality of the best-performing equilibria. While much of the literature studies these metrics from an analysis perspective, in this work we consider them from a design perspective. Specifically, we focus on the setting where a system operator is tasked with designing local utility functions to optimize these performance metrics in a class of games termed covering games. Our main result characterizes a fundamental trade-off between the price of anarchy and the price of stability in the form of a fully explicit Pareto frontier: within this setup, optimizing the price of anarchy comes directly at the expense of the price of stability (and vice versa). Our second result demonstrates how a system operator can incorporate an additional piece of system-level information, namely the performance of the worst-performing agent in the system, into the design of the agents' utility functions to circumvent these limitations and improve the system's performance. Comment: 14 pages, 4 figures
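    To make the two metrics concrete, here is a toy brute-force computation; the instance and the equal-share utility rule are invented for illustration and are not the paper's design. It enumerates the pure Nash equilibria of a two-agent coverage game and reads off the price of anarchy (worst equilibrium welfare over the optimum) and the price of stability (best equilibrium welfare over the optimum).

```python
# Brute-force price of anarchy / stability in a toy 2-agent coverage game.
from itertools import product

values = {"a": 2, "b": 1, "c": 1}            # value of each resource
strategies = {0: ["a", "b"], 1: ["a", "c"]}  # resources each agent may pick

def welfare(profile):
    return sum(values[r] for r in set(profile))

def utility(i, profile):
    # Equal-share rule: a resource's value is split among agents selecting it.
    r = profile[i]
    return values[r] / profile.count(r)

def is_nash(profile):
    for i in (0, 1):
        for dev in strategies[i]:
            alt = list(profile)
            alt[i] = dev
            if utility(i, tuple(alt)) > utility(i, profile):
                return False  # a profitable unilateral deviation exists
    return True

profiles = list(product(strategies[0], strategies[1]))
opt = max(welfare(p) for p in profiles)
eqs = [p for p in profiles if is_nash(p)]
print("PoA:", min(welfare(p) for p in eqs) / opt)  # 2/3, from equilibrium (a, a)
print("PoS:", max(welfare(p) for p in eqs) / opt)  # 1.0, from equilibrium (a, c)
```

    Under this utility rule the wasteful profile (a, a) survives as an equilibrium, so the PoA is 2/3 while the PoS is 1; redesigning the utility functions changes the equilibrium set, which is exactly the design lever the paper studies.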

    Multiwinner Voting with Fairness Constraints

    Multiwinner voting rules are used to select a small representative subset of candidates or items from a larger set, given the preferences of voters. However, if candidates have sensitive attributes such as gender or ethnicity (when selecting a committee), or specified types such as political leaning (when selecting a subset of news items), an algorithm that chooses a subset by optimizing a multiwinner voting rule may be unbalanced in its selection -- it may under- or over-represent a particular gender or political orientation in the examples above. We introduce an algorithmic framework for multiwinner voting problems with the additional requirement that the selected subset be "fair" with respect to a given set of attributes. Our framework provides the flexibility to (1) specify fairness with respect to multiple, non-disjoint attributes (e.g., ethnicity and gender) and (2) specify a score function. We study the computational complexity of this constrained multiwinner voting problem for monotone and submodular score functions, and we present several approximation algorithms and matching hardness-of-approximation results for various attribute-group structures and types of score functions. We also present simulations suggesting that adding fairness constraints may not affect the scores significantly when compared to the unconstrained case. Comment: The conference version of this paper appears in IJCAI-ECAI 2018
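    As one simplified reading of such constraints: when the attribute groups happen to be disjoint, per-group upper bounds act like a partition matroid, and a greedy rule retains constant-factor guarantees for monotone submodular scores. The sketch below assumes disjoint groups and an invented approval-based score; it is an illustration, not the paper's algorithm (which also handles non-disjoint groups).

```python
# Greedy committee selection under per-group caps (disjoint groups assumed).
def fair_greedy(candidates, group, caps, k, score):
    """Greedily add the candidate with the highest marginal score whose
    attribute group still has room, until k seats are filled.

    score : monotone submodular function on frozensets of candidates.
    """
    committee, used = set(), {g: 0 for g in caps}
    while len(committee) < k:
        feasible = [c for c in candidates
                    if c not in committee and used[group[c]] < caps[group[c]]]
        if not feasible:
            break  # group caps exhausted before k seats were filled
        best = max(feasible, key=lambda c: score(frozenset(committee | {c}))
                                           - score(frozenset(committee)))
        committee.add(best)
        used[group[best]] += 1
    return committee

# Toy approval score: number of voters who approve of someone selected
# (a coverage function, hence monotone submodular).
approvals = {"v1": {"A", "B"}, "v2": {"A"}, "v3": {"C"}, "v4": {"D"}}
score = lambda S: sum(1 for voters in approvals.values() if voters & S)
group = {"A": "g1", "B": "g1", "C": "g2", "D": "g2"}
print(fair_greedy(group.keys(), group, {"g1": 1, "g2": 1}, 2, score))  # {'A', 'C'}
```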

    Models, Theoretical Properties, and Solution Approaches for Stochastic Programming with Endogenous Uncertainty

    In a typical optimization problem, uncertainty does not depend on the decisions being made in the optimization routine. However, in many application areas, decisions affect the underlying uncertainty (endogenous uncertainty), either altering the probability distributions or the timing at which the uncertainty is resolved. Stochastic programming is a widely used method in optimization under uncertainty. Though plenty of research exists on stochastic programming where decisions affect the timing at which uncertainty is resolved, much less work has been done on stochastic programming where decisions alter the probability distributions of uncertain parameters. Therefore, we propose methodologies for the latter category of optimization under endogenous uncertainty and demonstrate their benefits in several application areas. First, we develop a data-driven stochastic program (one that integrates a supervised machine learning algorithm to estimate the probability distributions of uncertain parameters) for a wildfire risk reduction problem, where resource allocation decisions probabilistically affect uncertain human behavior. The nonconvex model is linearized using a reformulation approach. To solve a realistic-sized problem, we introduce a simulation program that efficiently computes the recourse objective value for a large number of scenarios. We present managerial insights derived from results based on Santa Fe National Forest data. Second, we develop a data-driven stochastic program with both endogenous and exogenous uncertainties, with an application to a combined infrastructure protection and network design problem. In the proposed model, some first-stage decision variables affect probability distributions, whereas others do not. We propose an exact reformulation for linearizing the nonconvex model and provide a theoretical justification for it. We design an accelerated L-shaped decomposition algorithm to solve the linearized model. Results obtained using transportation networks created based on the southeastern U.S. provide several key insights for practitioners using this proposed methodology. Finally, we study submodular optimization under endogenous uncertainty with an application to complex system reliability. Specifically, we prove that our stochastic program's reliability maximization objective function is submodular under some probability distributions commonly used in the reliability literature. Utilizing this submodularity, we implement a continuous approximation algorithm capable of solving large-scale problems. We conduct a case study demonstrating the computational efficiency of the algorithm and providing insights.
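    To illustrate the endogenous-uncertainty idea in isolation: a first-stage decision changes the probability of a bad scenario, and the objective trades the decision's cost against the shift in expected recourse loss. All numbers below are invented, and the tiny decision space is solved by enumeration; this is a toy, not the dissertation's reformulation or its accelerated L-shaped method.

```python
# Toy two-stage problem with endogenous (decision-dependent) uncertainty.
MITIGATION_COST = 4.0
LOSS = {"fire": 20.0, "ok": 0.0}  # second-stage (recourse) loss per scenario

def scenario_probs(mitigate):
    # Endogenous part: the first-stage action lowers the bad scenario's probability.
    p_fire = 0.15 if mitigate else 0.40
    return {"fire": p_fire, "ok": 1.0 - p_fire}

def expected_total_cost(mitigate):
    first_stage = MITIGATION_COST if mitigate else 0.0
    probs = scenario_probs(mitigate)
    return first_stage + sum(p * LOSS[s] for s, p in probs.items())

# Enumerate the (tiny) first-stage decision space.
for decision in (False, True):
    print(f"mitigate={decision}: expected cost {expected_total_cost(decision):.1f}")
# mitigate=False: 0 + 0.40 * 20 = 8.0; mitigate=True: 4 + 0.15 * 20 = 7.0,
# so mitigating is the optimal first-stage decision in this toy.
```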

    Smart Decision-Making via Edge Intelligence for Smart Cities

    Smart cities are an ambitious vision for future urban environments. The ultimate aim of smart cities is to use modern technology to optimize city resources and operations while improving the overall quality of life of their citizens. Realizing this ambitious vision will require embracing advancements in information communication technology, data analysis, and other technologies. Because smart cities naturally produce vast amounts of data, recent artificial intelligence (AI) techniques are of interest due to their ability to transform raw data into insightful knowledge that informs decisions (e.g., using live road traffic data to control traffic lights based on current conditions). However, training and serving these AI applications is non-trivial and requires sufficient computing resources. Traditionally, cloud computing infrastructure has been used to process computationally intensive tasks; however, due to the time-sensitivity of many smart city applications, novel computing hardware and technologies are required. The recent advent of edge computing provides a promising computing infrastructure to support the needs of the smart cities of tomorrow. Edge computing pushes compute resources close to end users to provide reduced latency and improved scalability, making it a viable candidate to support smart cities. However, it comes with hardware limitations that must be considered. This thesis explores the use of the edge computing paradigm for smart city applications and how to make efficient, smart decisions about the available resources while accounting for the quality of service delivered to end users. The work has four parts. First, it addresses how to optimally place and serve AI-based applications on edge computing infrastructure to maximize the quality of service to end users; this is cast as an optimization problem and solved with efficient algorithms that approximate the optimal solution. Second, it investigates the applicability of compression techniques to reduce offloading costs for AI-based applications in edge computing systems. Finally, it demonstrates how edge computing can support AI-based solutions for smart city applications, namely smart energy and smart traffic, using the recent paradigm of federated learning. The contributions of this thesis include the design of novel algorithms and system design strategies for the placement and scheduling of AI-based services on edge computing systems, a formal formulation of the trade-off between delivered AI model performance and latency, the use of compression in offloading decisions to reduce communication, and an evaluation of federated learning-based approaches for smart city applications.
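    A minimal sketch of the placement flavor of problem described above, assuming an invented QoS model and a density-greedy rule (QoS gained per unit of capacity consumed); this is illustrative only, not the thesis's algorithm.

```python
# Greedy placement of AI services onto capacity-limited edge nodes.
def place_services(services, nodes, demand, capacity, qos):
    """Assign each service to at most one edge node, greedily by
    QoS gained per unit of node capacity consumed."""
    placement, free = {}, dict(capacity)
    # All (service, node) pairs ranked by QoS density, best first.
    pairs = sorted(((s, n) for s in services for n in nodes),
                   key=lambda sn: qos[sn] / demand[sn[0]], reverse=True)
    for s, n in pairs:
        if s not in placement and demand[s] <= free[n]:
            placement[s] = n      # place the service on this node
            free[n] -= demand[s]  # consume its capacity
    return placement

# Toy instance: three services, two edge nodes.
services = ["detect", "asr", "route"]
nodes = ["edge1", "edge2"]
demand = {"detect": 2, "asr": 1, "route": 2}
capacity = {"edge1": 3, "edge2": 2}
qos = {("detect", "edge1"): 9, ("detect", "edge2"): 7,
       ("asr", "edge1"): 4, ("asr", "edge2"): 3,
       ("route", "edge1"): 5, ("route", "edge2"): 4}
print(place_services(services, nodes, demand, capacity, qos))
# -> {'detect': 'edge1', 'asr': 'edge1', 'route': 'edge2'}
```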