34 research outputs found

    Gerechte Zuordnungen: Kollektive Entscheidungsprobleme aus der Perspektive von Mathematik und theoretischer Informatik

    We investigate questions from social choice theory from the viewpoint of computational social choice. We consider the setting in which a group of agents faces a collective decision problem (e.g., resource allocation or the choice of a representative): they have to choose among various alternatives. A crucial aspect is the agents' individual preferences over these alternatives. The quality of the solutions is measured by criteria from the social sciences (fairness), game theory (stability), and economics (efficiency). In computational social choice, such problems are analyzed and assessed via methods of mathematics (e.g., logic and combinatorics) and theoretical computer science (e.g., complexity theory and algorithms). The question of so-called 'fair assignments' runs like a common thread through most parts of this dissertation. Regarding allocations of goods to agents, we show how to achieve allocations with minimal inequality by means of a distributed approach. We analyze the behavior of this approach on worst-case instances, using an innovative proof technique that relies on implicit recursive constructions and insights from basic calculus. For assignments of agents to activities, we consider a simplified scenario where the agents express preferences over activities and the set of feasible assignments is restricted by the number of agents that can participate in each activity. We introduce several solution concepts and elucidate the connections and differences between these concepts. Furthermore, we provide a detailed complexity analysis of the associated decision problems addressing the existence and maximality of the corresponding solution concepts. Assignment problems can also be seen as auctions. We consider a scenario where the agents bid on transformations of goods. In our model, each transformation requires the existence of a 'tool good' which is not consumed by the transformation. We are interested in combinations of transformations that maximize the total utility. We study the computational complexity of this model in detail, using methods from both classical and parameterized complexity theory. Slightly off topic are our investigations of combined competitions. We interpret these as a voting problem, i.e., as the aggregation of orders. We investigate the susceptibility of these competitions to manipulation by the athletes.
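    The assignment-of-agents-to-activities setting described above can be illustrated with a minimal sketch. The data model below (dictionaries for assignments, capacity intervals, and approval-style preference lists) is a hypothetical simplification for illustration; the dissertation's formal definitions and solution concepts may differ.

```python
from collections import Counter

def is_feasible(assignment, capacities):
    """An assignment is feasible if every used activity's participant
    count lies within that activity's capacity interval (lo, hi)."""
    counts = Counter(act for act in assignment.values() if act is not None)
    return all(capacities[act][0] <= n <= capacities[act][1]
               for act, n in counts.items())

def is_individually_rational(assignment, preferences):
    """No agent is assigned an activity outside their acceptable set;
    agents mapped to None stay unassigned, which is always acceptable."""
    return all(act is None or act in preferences[agent]
               for agent, act in assignment.items())

# Tiny example instance (hypothetical data).
assignment = {"a1": "chess", "a2": "chess", "a3": None}
capacities = {"chess": (2, 3), "hiking": (4, 10)}
preferences = {"a1": ["chess"], "a2": ["chess", "hiking"], "a3": ["hiking"]}
```

Checking existence or maximality of such assignments is where the complexity analysis mentioned above comes in; the sketch only verifies properties of a given assignment.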

    Mechanism Design and Analysis Using Simulation-Based Game Models.

    As agent technology matures, it becomes easier to envision electronic marketplaces teeming with autonomous agents. Since agents are explicitly programmed to (nearly) optimally compete in these marketplaces, and markets themselves are designed with specific objectives in mind, tools are necessary for systematic analyses of strategic interactions among autonomous agents. While traditional game-theoretic approaches to the analysis of multi-agent systems can provide much insight, they are often inadequate, as they rely heavily on analytic tractability of the problem at hand; however, even mildly realistic models of electronic marketplaces contain enough complexity to render a fully analytic approach hopeless. To address questions not amenable to traditional theoretical approaches, I develop methods that allow systematic computational analysis of game-theoretic models in which the players' payoff functions are represented using simulations (i.e., simulation-based games). I develop a globally convergent algorithm for Nash equilibrium approximation in infinite simulation-based games, which I instantiate in the context of infinite games of incomplete information. Additionally, I use statistical learning techniques to improve the quality of Nash equilibrium approximation based on data collected from a game simulator. I also derive probabilistic confidence bounds and present convergence results about solutions of finite games modeled using simulations. The former allow an analyst to make statistically-founded statements about results based on game-theoretic simulations, while the latter provide formal justification for approximating game-theoretic solutions using simulation experiments. To address the broader mechanism design problem, I introduce an iterative algorithm for search in the design space, which requires a game solver as a subroutine. 
As a result, I enable computational mechanism design using simulation-based models of games by availing the designer of a set of solution tools geared specifically towards games modeled using simulations. I apply the developed computational techniques to analyze strategic procurement and answer design questions in a supply-chain simulation, as well as to analyze dynamic bidding strategies in sponsored search auctions. Indeed, the techniques I develop have broad potential applicability beyond electronic marketplaces: they are geared towards any system that features competing strategic players who respond to incentives in a way that can be reasonably predicted via a game-theoretic analysis.
    Ph.D. thesis, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/60786/1/yvorobey_1.pd
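    The core idea of a simulation-based game can be sketched very simply: estimate an empirical payoff matrix by repeatedly sampling a (possibly noisy) simulator, then solve the empirical game. The sketch below finds pure-strategy Nash equilibria of a two-player empirical game; the function names and the restriction to pure equilibria are illustrative assumptions, not the dissertation's algorithms (which target mixed equilibria in infinite games).

```python
import itertools
import statistics

def empirical_game(simulator, strategies, samples=100):
    """Estimate a two-player payoff matrix by averaging repeated
    simulator draws for every pure-strategy profile."""
    payoffs = {}
    for profile in itertools.product(strategies, repeat=2):
        draws = [simulator(*profile) for _ in range(samples)]
        payoffs[profile] = (statistics.mean(d[0] for d in draws),
                            statistics.mean(d[1] for d in draws))
    return payoffs

def pure_nash(payoffs, strategies):
    """Profiles where neither player gains by unilateral deviation."""
    equilibria = []
    for (s1, s2), (u1, u2) in payoffs.items():
        if all(payoffs[(d, s2)][0] <= u1 for d in strategies) and \
           all(payoffs[(s1, d)][1] <= u2 for d in strategies):
            equilibria.append((s1, s2))
    return equilibria
```

With a noisy simulator, the averaged payoffs are only estimates, which is exactly why the probabilistic confidence bounds discussed in the abstract matter.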

    Learning with Structured Sparsity: From Discrete to Convex and Back.

    In modern data-analysis applications, the abundance of data makes extracting meaningful information from it challenging in terms of computation, storage, and interpretability. In this setting, exploiting sparsity in data has been essential to the development of scalable methods for problems in machine learning, statistics, and signal processing. However, in various applications, the input variables exhibit structure beyond simple sparsity. This motivated the introduction of structured sparsity models, which capture such sophisticated structures, leading to significant performance gains and better interpretability. Structured sparse approaches have been successfully applied in a variety of domains including computer vision, text processing, medical imaging, and bioinformatics. The goal of this thesis is to improve on these methods and expand their success to a wider range of applications. We thus develop novel methods to incorporate general structure a priori in learning problems, which balance computational and statistical efficiency trade-offs. To achieve this, our results bring together tools from the rich areas of discrete and convex optimization. Applying structured sparsity approaches in general is challenging because structures encountered in practice are naturally combinatorial. An effective approach to circumvent this computational challenge is to employ continuous convex relaxations. We thus start by introducing a new class of structured sparsity models, able to capture a large range of structures, which admit tight convex relaxations amenable to efficient optimization. We then present an in-depth study of the geometric and statistical properties of convex relaxations of general combinatorial structures. In particular, we characterize which structure is lost by imposing convexity and which is preserved. We then focus on the optimization of the convex composite problems that result from the convex relaxations of structured sparsity models.
We develop efficient algorithmic tools to solve these problems in a non-Euclidean setting, leading to faster convergence in some cases. Finally, to handle structures that do not admit meaningful convex relaxations, we propose to use, as a heuristic, a non-convex proximal gradient method that is efficient for several classes of structured sparsity models. We further extend this method to a probabilistic structured sparsity model, which we introduce to model approximately sparse signals.
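    The proximal gradient methods mentioned above reduce, in the simplest unstructured case, to iterative soft-thresholding (ISTA) for the l1-regularized least-squares problem. The sketch below shows only this plain l1 case as a baseline illustration; the thesis works with proximal operators of far more general structured penalties.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1: shrink each entry toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, steps=500):
    """Proximal gradient (ISTA) for min_x 0.5*||Ax - b||^2 + lam*||x||_1,
    using the fixed step size 1/L with L the gradient's Lipschitz constant."""
    L = np.linalg.norm(A, 2) ** 2      # squared spectral norm of A
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ x - b)       # gradient of the smooth part
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

For structured models, only the proximal step changes: `soft_threshold` is replaced by the (possibly combinatorial) proximal operator of the structured penalty, which is where the discrete-optimization tools come in.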

    Proceedings of the 8th Cologne-Twente Workshop on Graphs and Combinatorial Optimization

    The Cologne-Twente Workshop (CTW) on Graphs and Combinatorial Optimization started off as a series of workshops organized bi-annually by either Köln University or Twente University. As its importance grew over time, it re-centered its geographical focus by including northern Italy (CTW04 in Menaggio, on Lake Como, and CTW08 in Gargnano, on Lake Garda). This year, CTW (in its eighth edition) will be staged in France for the first time: more precisely in the heart of Paris, at the Conservatoire National d'Arts et Métiers (CNAM), between 2nd and 4th June 2009, by a mixed organizing committee with members from LIX, École Polytechnique and CEDRIC, CNAM.

    Negotiated resource brokering for quality of service provision of grid applications

    Grid Computing is a distributed computing paradigm in which many computers, often from different organisations, work together so that their computing power may be aggregated. Grids are often heterogeneous, and resources vary significantly in CPU power, available RAM, disk space, operating system, architecture, and installed software. Added to this lack of uniformity, services are usually offered on a best-effort basis, as opposed to services that guarantee completion times via Service Level Agreements (SLAs). This lack of guarantees stifles the uptake of Grids: users are understandably reluctant to pay for, or contribute to, services whose quality carries no guarantees. Grid resources are also finite, hence priorities need to be established in order to best meet any guarantees placed upon the limited resources available. An economic approach is therefore adopted to ensure end users reveal their true priorities for jobs, while also adding an incentive to provision services via a service charge. An economically oriented model is proposed that provides SLAs with bicriteria constraints upon time and cost. This model is tested via discrete event simulation, and a simulator capable of testing the model is presented. An architecture is then established that utilises the economic model for negotiating SLAs. Finally, experimentation from the deployment of the developed software on a testbed is reported, including admission control and steering of jobs within the Grid. Results are presented that show the interactions and relationship between the time and cost constraints within the model, including transitions in the dominance of one constraint over the other, and effects such as the impact of rescheduling upon the market.
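    The bicriteria SLA idea can be sketched as a simple admission-control check: a job is admitted only if both the time constraint (deadline) and the cost constraint (budget) can be met on the offered resource. All field names and the linear time/cost estimates below are hypothetical simplifications, not the thesis's negotiation protocol.

```python
def admit(job, resource):
    """Admit a job only if the estimated completion time meets the
    SLA deadline AND the resulting service charge fits the budget."""
    est_time = job["work"] / resource["speed"] + resource["queue_delay"]
    est_cost = est_time * resource["price_per_hour"]
    return est_time <= job["deadline"] and est_cost <= job["budget"]

# Hypothetical resource and jobs (hours of work, hours, currency units).
fast = {"speed": 10.0, "queue_delay": 1.0, "price_per_hour": 2.0}
job_ok = {"work": 20.0, "deadline": 5.0, "budget": 10.0}
job_late = {"work": 20.0, "deadline": 2.0, "budget": 10.0}
```

Which of the two constraints binds first, time or cost, is exactly the kind of transition the reported experiments explore.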

    LIPIcs, Volume 244, ESA 2022, Complete Volume

    LIPIcs, Volume 244, ESA 2022, Complete Volume

    LIPIcs, Volume 274, ESA 2023, Complete Volume

    LIPIcs, Volume 274, ESA 2023, Complete Volume

    Large-scale unit commitment under uncertainty: an updated literature survey

    The Unit Commitment problem in energy management aims at finding the optimal production schedule of a set of generation units while meeting various system-wide constraints. It has always been a large-scale, non-convex, difficult problem, especially in view of the fact that, due to operational requirements, it has to be solved in an unreasonably small time for its size. Recently, growing renewable energy shares have strongly increased the level of uncertainty in the system, making the (ideal) Unit Commitment model a large-scale, non-convex, and uncertain (stochastic, robust, chance-constrained) program. We provide a survey of the literature on methods for the Uncertain Unit Commitment problem, in all its variants. We start with a review of the main contributions on solution methods for the deterministic versions of the problem, focusing on those based on mathematical programming techniques that are most relevant for the uncertain versions. We then present and categorize the approaches to the latter, while providing entry points to the relevant literature on optimization under uncertainty. This is an updated version of the paper "Large-scale Unit Commitment under uncertainty: a literature survey" that appeared in 4OR 13(2), 115–171 (2015); this version has over 170 more citations, most of which appeared in the last three years, demonstrating how fast the literature on uncertain Unit Commitment evolves, and therefore the continued interest in this subject.
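    The deterministic core of the problem can be illustrated on a toy single-period instance: choose which units to switch on, paying each committed unit's fixed cost, and dispatch the committed capacity to cover demand at minimum marginal cost. Real instances are multi-period with ramping, minimum up/down times, and uncertainty; the brute-force sketch below (with a hypothetical `(capacity, fixed_cost, marginal_cost)` unit encoding) only conveys the combinatorial on/off structure.

```python
import itertools

def unit_commitment(units, demand):
    """Brute-force single-period unit commitment.
    units: list of (capacity, fixed_cost, marginal_cost) tuples.
    Returns (on/off vector, total cost) of the cheapest feasible choice."""
    best = None
    for on in itertools.product([0, 1], repeat=len(units)):
        committed = [u for u, s in zip(units, on) if s]
        if sum(u[0] for u in committed) < demand:
            continue  # committed capacity cannot cover demand
        # Dispatch cheapest marginal-cost units first, up to demand.
        cost = sum(u[1] for u in committed)           # fixed costs
        remaining = demand
        for cap, _, mc in sorted(committed, key=lambda u: u[2]):
            gen = min(cap, remaining)
            cost += gen * mc
            remaining -= gen
        if best is None or cost < best[1]:
            best = (on, cost)
    return best
```

The exponential enumeration over on/off vectors is precisely what the surveyed mathematical-programming techniques (Lagrangian relaxation, MILP formulations, decomposition) are designed to avoid at scale.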