9 research outputs found

    On Randomized Fictitious Play for Approximating Saddle Points Over Convex Sets

    No full text
    Given two bounded convex sets X\subseteq\RR^m and Y\subseteq\RR^n, specified by membership oracles, and a continuous convex-concave function F:X\times Y\to\RR, we consider the problem of computing an \eps-approximate saddle point, that is, a pair (x^*,y^*)\in X\times Y such that \sup_{y\in Y} F(x^*,y)\le \inf_{x\in X}F(x,y^*)+\eps. Grigoriadis and Khachiyan (1995) gave a simple randomized variant of fictitious play for computing an \eps-approximate saddle point for matrix games, that is, when F is bilinear and the sets X and Y are simplices. In this paper, we extend their method to the general case. In particular, we show that, for functions of constant "width", an \eps-approximate saddle point can be computed using O^*(\frac{n+m}{\eps^2}\ln R) random samples from log-concave distributions over the convex sets X and Y. It is assumed that X and Y have inscribed balls of radius 1/R and circumscribing balls of radius R. As a consequence, we obtain a simple randomized polynomial-time algorithm that computes such an approximation faster than known methods for problems with bounded width and when \eps\in(0,1) is a fixed, but arbitrarily small, constant. Our main tool for achieving this result is the combination of randomized fictitious play with recently developed results on sampling from convex sets.
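    To make the scheme concrete, here is a minimal Python (NumPy) sketch of randomized fictitious play for the matrix-game special case (bilinear F, simplex strategy sets) treated by Grigoriadis and Khachiyan; the Gibbs sampling distributions, the iteration count, and the assumption that payoffs lie in [-1, 1] are illustrative simplifications, not the constants or the general convex-set setting of this paper.

```python
import numpy as np

def randomized_fictitious_play(A, eps=0.1, rng=None):
    """Approximate a saddle point of the matrix game max_x min_y x^T A y.

    Minimal sketch of a Grigoriadis--Khachiyan-style randomized fictitious
    play for the bilinear/simplex special case; constants are illustrative.
    Assumes payoffs A[i, j] lie in [-1, 1] ("constant width").
    """
    rng = np.random.default_rng(rng)
    m, n = A.shape
    T = int(np.ceil(np.log(m + n) / eps**2))   # illustrative iteration count
    row_payoff = np.zeros(m)                   # cumulative payoff of each row
    col_payoff = np.zeros(n)                   # cumulative payoff of each column
    x_count = np.zeros(m)
    y_count = np.zeros(n)
    for _ in range(T):
        # Each player samples a pure strategy from a Gibbs distribution
        # over its cumulative payoffs (a smoothed best response).
        px = np.exp(eps / 2 * (row_payoff - row_payoff.max()))
        py = np.exp(-eps / 2 * (col_payoff - col_payoff.min()))
        i = rng.choice(m, p=px / px.sum())
        j = rng.choice(n, p=py / py.sum())
        x_count[i] += 1
        y_count[j] += 1
        row_payoff += A[:, j]                  # row player maximizes
        col_payoff += A[i, :]                  # column player minimizes
    return x_count / T, y_count / T            # empirical mixed strategies

if __name__ == "__main__":
    A = np.array([[0.0, 1.0], [1.0, 0.0]])     # matching-pennies-like game
    x, y = randomized_fictitious_play(A, eps=0.05)
    print("x* ~", x, "y* ~", y)                # both close to (0.5, 0.5)
```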

    Multi-dimensional Rankings, Program Termination, and Complexity Bounds of Flowchart Programs

    Get PDF
    Proving the termination of a flowchart program can be done by exhibiting a ranking function, i.e., a function from the program states to a well-founded set, which strictly decreases at each program step. A standard method to automatically generate such a function is to compute invariants for each program point and to search for a ranking in a restricted class of functions that can be handled with linear programming techniques. Previous algorithms based on affine rankings either are applicable only to simple loops (i.e., single-node flowcharts) and rely on enumeration, or are not complete in the sense that they are not guaranteed to find a ranking in the class of functions they consider, if one exists. Our first contribution is to propose an efficient algorithm to compute ranking functions: it can handle flowcharts of arbitrary structure, the class of candidate rankings it explores is larger, and our method, although greedy, is provably complete. Our second contribution is to show how to use the ranking functions we generate to get upper bounds for the computational complexity (number of transitions) of the source program. This estimate is a polynomial, which means that we can handle programs with more than linear complexity. We applied the method to a collection of test cases from the literature. We also show the links and differences with previous techniques based on the insertion of counters.
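    As a toy illustration of the LP-based search for an affine ranking (a one-variable example, not the multi-dimensional algorithm of the paper), consider the loop "while (x >= 1) x = x - 2;": a ranking rho(x) = a*x + b must be nonnegative on the loop invariant x >= 1 and decrease by at least 1 on every transition, and for this loop both conditions reduce to linear constraints on (a, b) that any LP solver can check. The sketch below assumes SciPy's linprog is available.

```python
# Toy illustration (not the paper's algorithm): find an affine ranking
# rho(x) = a*x + b for the loop  "while (x >= 1) x = x - 2;".
#
# Requirements, reduced by hand to linear constraints on (a, b):
#   decrease : rho(x) - rho(x - 2) = 2a >= 1          for every transition
#   bounded  : a*x + b >= 0 for all x >= 1, i.e. a >= 0 and a + b >= 0
from scipy.optimize import linprog

c = [1, 1]                       # any objective; we only need a feasible point
A_ub = [[-2, 0],                 # -2a    <= -1   (decrease by at least 1)
        [-1, 0],                 #  -a    <=  0   (a >= 0)
        [-1, -1]]                # -a - b <=  0   (value at x = 1 is nonnegative)
b_ub = [-1, 0, 0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 2)
assert res.success, "no affine ranking exists in this class"
a, b = res.x                     # any feasible point is a valid ranking
print(f"ranking rho(x) = {a:.2f}*x {b:+.2f}")   # e.g. rho(x) = 0.50*x -0.50
```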

    TRACO: Source-to-Source Parallelizing Compiler

    Get PDF
    The paper presents a source-to-source compiler, TRACO, for the automatic extraction of both coarse- and fine-grained parallelism available in C/C++ loops. Parallelization techniques implemented in TRACO are based on the transitive closure of a relation describing all the dependences in a loop. Coarse- and fine-grained parallelism is represented with synchronization-free slices (space partitions) and a legal loop statement instance schedule (time partitions), respectively. TRACO also supports scalar and array variable privatization as well as parallel reduction. As output, TRACO produces compilable parallel OpenMP C/C++ and/or OpenACC C/C++ code. The effectiveness of TRACO, the efficiency of the parallel code it produces, and the time needed to generate that code are evaluated by means of the NAS Parallel Benchmark and Polyhedral Benchmark suites. These features of TRACO are compared with those of closely related compilers such as ICC, Pluto, Par4All, and Cetus. Future work is also outlined.
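    As a rough illustration of what synchronization-free slices are (a toy model, not the transitive-closure machinery TRACO implements), the sketch below builds the dependence graph of a small loop and groups its iterations into connected components: iterations in different components share no dependence and can be assigned to different threads without synchronization, while iterations within a component still run in dependence order.

```python
# Toy model of "synchronization-free slices" (space partitions), not the
# transitive-closure algorithm TRACO actually implements.
# Loop:  for (i = 2; i < N; i++)  a[i] = a[i - 2] + 1;
# Iteration i depends on iteration i - 2, so even and odd iterations form
# two independent slices that can run on different threads.
from collections import defaultdict

N = 12
iterations = range(2, N)
edges = [(i - 2, i) for i in iterations if i - 2 >= 2]   # flow dependences

# Union-find over iterations: connected components = slices.
parent = {i: i for i in iterations}

def find(v):
    while parent[v] != v:
        parent[v] = parent[parent[v]]
        v = parent[v]
    return v

def union(u, v):
    parent[find(u)] = find(v)

for u, v in edges:
    union(u, v)

slices = defaultdict(list)
for i in iterations:
    slices[find(i)].append(i)
for k, its in enumerate(slices.values()):
    print(f"slice {k}: iterations {its}")   # two synchronization-free slices
```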

    Computations of Uniform Recurrence Equations Using Minimal Memory Size

    Get PDF
    We consider a system of uniform recurrence equations (URE) of dimension one. We show how its computation can be carried out using minimal memory size with several synchronous processors. This result is then applied to register minimization for digital circuits and to the parallel computation of task graphs.
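    For intuition, a one-dimensional toy case (a single-processor simplification, not the multi-processor construction of the paper): a URE such as x(i) = x(i-1) + x(i-3) only ever reads the last three values, so a circular buffer of three cells suffices, matching the maximal dependence distance.

```python
# One-dimensional toy case (not the paper's multi-processor construction):
# the URE  x(i) = x(i-1) + x(i-3),  with x(0..2) given,  only reads the last
# three values, so a circular buffer of 3 cells suffices on one processor.
def compute_ure(n, init=(1, 1, 1)):
    if n < 3:
        return init[n]
    buf = list(init)                       # buf[k % 3] holds x(k) for recent k
    for i in range(3, n + 1):
        buf[i % 3] = buf[(i - 1) % 3] + buf[(i - 3) % 3]
    return buf[n % 3]

print([compute_ure(n) for n in range(8)])  # [1, 1, 1, 2, 3, 4, 6, 9]
```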

    Application of multiplicative weights update method in algorithmic game theory

    Get PDF
    In this thesis, we apply the Multiplicative Weights Update Method (MWUM) to the design of approximation algorithms for some optimization problems in game-theoretic settings. Lavi and Swamy {LS05,LS11} introduced a randomized mechanism for combinatorial auctions that uses an approximation algorithm for the underlying optimization problem, so-called social welfare maximization, and converts the approximation algorithm into a randomized mechanism that is truthful-in-expectation, which means that each player maximizes its expected utility by telling the truth. The mechanism is powerful (e.g., see {LS05,LS11,CEF10,HKV11} for applications), but unlikely to be efficient in practice, because it uses the Ellipsoid method. In Chapter 2, we follow the general scheme suggested by Lavi and Swamy and replace the Ellipsoid method with MWUM. This results in a faster and simpler approximately truthful-in-expectation mechanism. We also relax their assumption regarding the existence of an exact solution for the LP relaxation of social welfare maximization: we assume only that there exists an approximation algorithm for the LP and establish a new randomized approximation mechanism. In Chapter 3, we consider the problem of computing an approximate saddle point, or equivalently an equilibrium, of a convex-concave function F:X\times Y\to\RR, where X and Y are convex sets of arbitrary dimensions. Our main contribution is the design of a randomized algorithm for computing an \eps-approximate saddle point of F. Our algorithm is based on combining a technique developed by Grigoriadis and Khachiyan {GK95}, which is a randomized variant of Brown's fictitious play {B51}, with recent results on random sampling from convex sets (see, e.g., {LV06,V05}). The algorithm finds an \eps-approximate saddle point in an expected number of O\left(\frac{\rho^2(n+m)}{\eps^{2}}\ln\frac{R}{\eps}\right) iterations, where in each iteration two points are sampled from log-concave distributions over the strategy sets. It is assumed that X and Y have inscribed balls of radius 1/R and circumscribing balls of radius R, and \rho=\max_{x\in X, y\in Y} |F(x,y)|. In particular, the algorithm requires O^*\left(\frac{\rho^2(n+m)^6}{\eps^{2}}\ln R\right) calls to a membership oracle, where O^*(\cdot) suppresses polylogarithmic factors that depend on n, m, and \eps.
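    The Multiplicative Weights Update Method itself is easy to state; the sketch below is the generic "experts" form of MWUM (a standard textbook variant, not the specific mechanism of Chapter 2), in which each weight is multiplied by (1 - eta * cost) per round and the average regret decays like sqrt(ln n / T).

```python
import numpy as np

def mwum(cost_rounds, eta=0.05):
    """Generic multiplicative-weights update over n 'experts'.

    Textbook form of MWUM (not the Chapter 2 mechanism): costs are assumed
    to lie in [-1, 1]; the average regret after T rounds is O(sqrt(ln n / T)).
    """
    n = len(cost_rounds[0])
    w = np.ones(n)
    total_cost = 0.0
    for costs in cost_rounds:
        p = w / w.sum()                        # current distribution over experts
        total_cost += p @ costs                # expected cost this round
        w *= 1.0 - eta * np.asarray(costs)     # penalize costly experts
    return total_cost, w / w.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, n = 2000, 5
    rounds = rng.uniform(0, 1, size=(T, n))
    rounds[:, 2] *= 0.3                        # expert 2 is consistently cheaper
    cost, p = mwum(rounds)
    print("weights concentrate on expert 2:", np.round(p, 3))
```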

    On the synthesis of integral and dynamic recurrences

    Get PDF
    Synthesis techniques for regular arrays provide a disciplined and well-founded approach to the design of classes of parallel algorithms. The design process is guided by a methodology which is based upon a formal notation and transformations. The mathematical model underlying synthesis techniques is that of affine Euclidean geometry with embedded lattice spaces. Because of this model, computationally powerful methods are provided as an effective way of engineering regular arrays. However, at present the applicability of such methods is limited to so-called affine problems. The work presented in this thesis aims at widening the applicability of standard synthesis methods to more general classes of problems. The major contributions of this thesis are the characterisation of classes of integral and dynamic problems, and the provision of techniques for their systematic treatment within the framework of established synthesis methods. The basic idea is the transformation of the initial algorithm specification into a specification with data dependencies of increased regularity, so that corresponding regular arrays can be obtained by a direct application of the standard mapping techniques. We complement the formal development of the techniques with the illustration of a number of case studies from the literature.
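    A classic instance of transforming a specification into one with more regular data dependencies is uniformization: a broadcast (a non-uniform dependence) is replaced by a propagation chain (a uniform one). The sketch below shows this on matrix-vector product; it is a standard textbook example, not one taken from the thesis.

```python
import numpy as np

# Standard uniformization example (not taken from the thesis): in
#   y[i] = sum_j A[i][j] * x[j]
# every computation point (i, j) reads x[j] directly -- a broadcast, i.e. a
# non-uniform dependence (i, j) <- (*, j).  Pipelining x along i replaces it
# by the uniform dependence (i, j) <- (i - 1, j).

def matvec_broadcast(A, x):
    n = len(x)
    y = np.zeros(n)
    for i in range(n):
        for j in range(n):
            y[i] += A[i, j] * x[j]            # non-uniform: reads x[j] directly
    return y

def matvec_uniform(A, x):
    n = len(x)
    y = np.zeros(n)
    xp = np.zeros((n, n))                     # xp[i, j]: copy of x[j] seen at row i
    for i in range(n):
        for j in range(n):
            xp[i, j] = x[j] if i == 0 else xp[i - 1, j]   # uniform: from row i-1
            y[i] += A[i, j] * xp[i, j]
    return y

A = np.arange(9.0).reshape(3, 3)
x = np.array([1.0, 2.0, 3.0])
assert np.allclose(matvec_broadcast(A, x), matvec_uniform(A, x))
print(matvec_uniform(A, x))                   # [ 8. 26. 44.]
```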

    Linear scheduling is nearly optimal

    No full text
    No abstract available.

    LINEAR SCHEDULING IS NEARLY OPTIMAL

    No full text

    Linear Scheduling is Nearly Optimal

    No full text
    This paper deals with the problem of finding optimal schedulings for uniform dependence algorithms. Given a convex domain, let T_f be the total time needed to execute all computations using the free (greedy) schedule and let T_l be the total time needed to execute all computations using the optimal linear schedule. Our main result is to bound T_l/T_f and T_l - T_f for sufficiently "fat" domains. Keywords: uniform dependence algorithms; convex domain; free schedule; linear schedule; optimal schedule; path packing.
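    Both quantities can be computed directly on a small example. The sketch below (an illustration of the definitions, not of the paper's bounds) takes the uniform dependence vectors {(1,0), (0,1), (1,1)} on an N x N domain, computes the free schedule T_f by longest-path dynamic programming, and evaluates the linear schedule induced by the valid vector lambda = (1,1); on this fat rectangular domain the two latencies coincide.

```python
import numpy as np

# Illustration of the definitions (not the paper's bounds): uniform
# dependences D on an N x N domain, free schedule vs. a linear schedule.
N = 6
D = [(1, 0), (0, 1), (1, 1)]                  # uniform dependence vectors

# Free (greedy) schedule: each point starts as soon as its predecessors end.
t_free = np.zeros((N, N), dtype=int)
for i in range(N):
    for j in range(N):
        preds = [t_free[i - a, j - b] for (a, b) in D
                 if i - a >= 0 and j - b >= 0]
        t_free[i, j] = 1 + max(preds) if preds else 0
T_f = t_free.max() + 1                        # total number of time steps

# Linear schedule given by lambda = (1, 1): valid since lambda . d >= 1 for d in D.
lam = np.array([1, 1])
times = np.array([[lam @ (i, j) for j in range(N)] for i in range(N)])
T_l = times.max() - times.min() + 1

print(f"T_f = {T_f}, T_l = {T_l}")            # here T_l = T_f = 2N - 1
```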