
    Designing Networks with Good Equilibria under Uncertainty

    We consider the problem of designing network cost-sharing protocols with good equilibria under uncertainty. The underlying game is a multicast game in a rooted undirected graph with nonnegative edge costs. A set of k terminal vertices or players needs to establish connectivity with the root. The social optimum is the Minimum Steiner Tree. We are interested in situations where the designer has incomplete information about the input. We propose two different models, the adversarial and the stochastic. In both models, the designer has prior knowledge of the underlying metric, but the requested subset of players is not known and is either activated in an adversarial manner (adversarial model) or drawn from a known probability distribution (stochastic model). In the adversarial model, the designer's goal is to choose a single, universal protocol that has low Price of Anarchy (PoA) for all possible requested subsets of players. The main question we address is: to what extent can prior knowledge of the underlying metric help in the design? We first demonstrate that there exist graphs (outerplanar) where knowledge of the underlying metric can dramatically improve the performance of good network design. Then, in our main technical result, we show that there exist graph metrics for which knowing the underlying metric does not help: any universal protocol has a PoA of Ω(log k), which is tight. We attack this problem by developing new techniques that employ powerful tools from extremal combinatorics, and more specifically Ramsey Theory in high-dimensional hypercubes. We then switch to the stochastic model, where each player is independently activated. We show that there exists a randomized ordered protocol that achieves constant PoA. By using standard derandomization techniques, we produce a deterministic ordered protocol with constant PoA. (Comment: this version has additional results about stochastic input.)
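
    To make the notion of an ordered protocol concrete, here is a minimal sketch, not the paper's construction: players connect to the root one at a time in a fixed order, and each pays only for the edges it is the first to buy, so edges bought by earlier players are free for later ones. The functions `cheapest_path` and `ordered_protocol`, the toy graph, and the ordering are all illustrative assumptions.

```python
import heapq

def cheapest_path(adj, cost, bought, sources, target):
    """Dijkstra in which already-bought edges cost 0."""
    dist = {v: float("inf") for v in adj}
    prev = {}
    pq = []
    for s in sources:
        dist[s] = 0.0
        heapq.heappush(pq, (0.0, s))
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v in adj[u]:
            e = frozenset((u, v))
            w = 0.0 if e in bought else cost[e]
            if d + w < dist[v]:
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(pq, (d + w, v))
    edges, v = [], target          # walk back from the target to a source
    while v in prev:
        edges.append(frozenset((v, prev[v])))
        v = prev[v]
    return dist[target], edges

def ordered_protocol(adj, cost, root, order):
    """Each player, in the given order, buys a cheapest path to the network
    built so far; its cost share is the cost of the newly bought edges."""
    bought, connected, shares = set(), {root}, {}
    for player in order:
        d, edges = cheapest_path(adj, cost, bought, connected, player)
        shares[player] = d             # only newly bought edges contribute to d
        bought |= set(edges)
        connected.add(player)
    return shares

# Hypothetical 3-vertex metric: root r and terminals a, b.
cost = {frozenset(e): w for e, w in
        [(("r", "a"), 2.0), (("r", "b"), 2.0), (("a", "b"), 1.0)]}
adj = {"r": ["a", "b"], "a": ["r", "b"], "b": ["r", "a"]}
print(ordered_protocol(adj, cost, "r", ["a", "b"]))   # {'a': 2.0, 'b': 1.0}
```

    On this instance the protocol recovers the Minimum Steiner Tree cost of 3; in general, the quality of the equilibrium depends on the order, which is exactly what the paper's adversarial and stochastic models probe.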

    Space sharing job scheduling policies for parallel computers

    The distinguishing characteristic of space-sharing parallel job scheduling policies is that applications are allocated non-overlapping processor subsets. The interference among jobs is reduced, the synchronization delays and message latencies can be predictable, and distinct processors may be allocated to cooperating processes so as to avoid the overhead of context switches associated with traditional time-multiplexing.

    The processor allocation strategy, the job selection criteria, and workload characteristics are fundamental factors that influence system performance under space sharing. Allocation can be static or dynamic. The processor subset allocated to an application is fixed under static space sharing, whereas it can change during execution under dynamic space sharing. Static allocation can produce more predictable run times, permits a wide range of compiler optimizations (e.g., static data distribution and binding), and avoids the processor releases and reallocations associated with dynamic allocation. Its major problem is that it can induce high processor fragmentation.

    In this dissertation, alternative static and dynamic space-sharing policies that differ in the allocation discipline and the job selection criteria are studied. The results show that significantly superior performance can be achieved under static space sharing if applications can be folded (i.e., allocated fewer processors than they requested). Folding typically increases program efficiency and can reduce processor fragmentation. Policies that increase folding with the system load are proposed and compared to schemes that use unconstrained folding, no folding, and fixed maximum folding factors. The adaptive policies produced higher and more stable system utilization, significantly shorter mean response times, and good fairness curves. However, unconstrained folding resulted in considerably more severe processor fragmentation than no folding. Its advantage is that it exploits the efficiency improvement that typically results when an application is allocated fewer processors. Consequently, it can produce shorter mean response times than no folding under medium to heavy loads.

    Also because of this efficiency improvement, dynamic policies that reduce waiting times by executing a large number of jobs simultaneously are more promising than schemes that limit the number of active jobs. However, limiting the number of active applications can be the superior approach when folding does not improve application efficiency.
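
    As a toy illustration of adaptive folding (hypothetical logic, not the dissertation's simulator): the allocator caps the folding factor by the current load, so a lightly loaded system never folds while a congested one may fold a request down to a quarter of its size. The function `allocate` and all thresholds are assumptions for the sketch.

```python
def allocate(requested, free, queue_length):
    """Static space sharing with adaptive folding: the maximum folding factor
    grows with the number of waiting jobs (the system load)."""
    max_fold = min(4, 1 + queue_length // 2)   # 1 = no folding, up to 4
    fold = 1
    while fold <= max_fold:
        alloc = max(1, requested // fold)
        if alloc <= free:
            return alloc   # static: the subset is fixed for the job's lifetime
        fold *= 2          # fold the job onto half as many processors
    return None            # even maximal folding does not fit; the job waits

# Example: 10 free processors and 6 waiting jobs; a 32-processor request
# is folded 32 -> 16 -> 8 and receives 8 processors.
print(allocate(32, 10, 6))   # 8
```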

    The cost of conservative synchronization in parallel discrete event simulations

    The performance of a synchronous conservative parallel discrete-event simulation protocol is analyzed. The class of simulation models considered is oriented around a physical domain and possesses a limited ability to predict future behavior. A stochastic model is used to show that as the volume of simulation activity in the model increases relative to a fixed architecture, the complexity of the average per-event overhead due to synchronization, event-list manipulation, lookahead calculations, and processor idle time approaches the complexity of the average per-event overhead of a serial simulation. The method is therefore within a constant factor of optimal. The analysis demonstrates that on large problems--those for which parallel processing is ideally suited--there is often enough parallel workload that processors are not usually idle. The viability of the method is also demonstrated empirically: good performance is achieved on large problems using a thirty-two-node Intel iPSC/2 distributed-memory multiprocessor.
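
    The round structure of such a protocol can be sketched as follows (a deliberate simplification under assumed semantics, not the paper's exact algorithm): all logical processes agree on a safe window using the minimum pending timestamp plus the model's lookahead, then each processes its events inside that window. The `run` function, the ring topology, and the lookahead value are illustrative assumptions.

```python
import heapq

LOOKAHEAD = 1.0   # the model's limited ability to predict future behavior

def run(lps, end_time):
    """lps: one heap of event timestamps per logical process (LP)."""
    now = 0.0
    while now < end_time:
        pending = [h[0] for h in lps if h]
        if not pending:
            break
        # Synchronous step: a global min-reduction (a barrier on a real
        # machine) fixes a window in which every LP may safely execute.
        window_end = min(pending) + LOOKAHEAD
        for i, heap in enumerate(lps):
            while heap and heap[0] < window_end:
                t = heapq.heappop(heap)
                # Executing an event sends a message to the right neighbor,
                # timestamped at least LOOKAHEAD in the future, so it cannot
                # land inside the current window.
                heapq.heappush(lps[(i + 1) % len(lps)], t + LOOKAHEAD)
        now = window_end
    return now

# Example: four LPs in a ring, each seeded with one event at time 0.
print(run([[0.0] for _ in range(4)], end_time=10.0))   # 10.0
```

    The synchronization cost here is one global reduction per window; the paper's analysis shows that with enough simulation activity per processor, this per-event overhead matches the serial case up to a constant.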

    Designing and Valuating System on Dependability Analysis of Cluster-Based Multiprocessor System

    Dependability analysis is a significant stage in designing and examining the safety of protection systems and computer systems. The introduction of virtual machines and multiprocessors increases the number of system faults, particularly software-induced failures, affecting overall dependability. Moreover, because software-induced and hardware-induced failures occur at very different rates, they affect the successful operation of the safety system differently at any dynamic stage. This paper therefore presents a review of the different dependability analysis techniques employed in multiprocessor systems.
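
    As a minimal worked example of the kind of computation such techniques support (all numbers hypothetical, not from the paper): the steady-state availability of a k-out-of-n cluster whose nodes suffer both hardware- and software-induced failures at very different rates.

```python
from math import comb

def node_availability(mtbf, mttr):
    """Steady-state availability from mean time between failures / to repair."""
    return mtbf / (mtbf + mttr)

def cluster_availability(n, k, a):
    """Probability that at least k of n independent nodes are up."""
    return sum(comb(n, i) * a**i * (1 - a)**(n - i) for i in range(k, n + 1))

# Hardware faults: rare but slow to repair. Software-induced faults: frequent
# but quickly cleared, e.g. by restarting a virtual machine. (Hours, made up.)
a_hw = node_availability(mtbf=10_000.0, mttr=24.0)
a_sw = node_availability(mtbf=500.0, mttr=0.5)
a_node = a_hw * a_sw   # a node is up only if neither failure mode is active
print(f"node availability:       {a_node:.5f}")
print(f"6-of-8 cluster survives: {cluster_availability(8, 6, a_node):.6f}")
```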

    Algorithms for Scheduling Problems and Integer Programming

    The first part of this thesis gives approximation results for scheduling problems. The classical makespan minimization problem on identical parallel machines asks for a distribution of a set of jobs to a set of machines such that the latest job completion time is minimized. For this strongly NP-complete problem we give a new EPTAS algorithm. In fact, it admits a practical implementation which beats the currently best approximation ratio of the MULTIFIT algorithm. A well-studied extension of the problem is the partition of the jobs into classes which impose a class-specific setup time on a machine whenever the processing switches to a job of a different class. For these so-called scheduling problems with batch setup times we present a 1.5-approximation algorithm for each of the three major settings. We achieve similar results for the likewise natural variant of many shared resources scheduling (MSRS), where instead of imposing a setup time each class is identified by a resource which can be occupied by at most one of its jobs at a time. For MSRS we present a 1.5-approximation and two EPTAS results.

    The second part provides results for fixed-priority uniprocessor real-time scheduling and variants of block-structured integer programming. We give a new approach to compute worst-case response times which admits a polynomial-time algorithm for harmonic periods, even in the presence of task release jitters. In more detail, we prove a duality between Response Time Computation (RTC) and the Mixing Set problem. Furthermore, both problems can be expressed as block-structured integer programs which are closely related to simultaneous congruences. However, the setting of the famous Chinese Remainder Theorem requires each congruence to have a fixed remainder. We relax this setting so that the remainder of each congruence may lie in a given interval. We show that the smallest solution to these congruences can be computed in polynomial time if the set of divisors is harmonic.
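
    For context, the classical worst-case response-time computation that the thesis builds on is a fixed-point iteration; the sketch below shows the textbook version with release jitter (the thesis's duality with the Mixing Set problem and its polynomial-time algorithm for harmonic periods are not reproduced here). The task parameters are hypothetical.

```python
from math import ceil

def response_time(tasks, i, horizon=10**6):
    """Fixed-point iteration for task i's worst-case response time.
    tasks: (C, T, J) tuples sorted from highest to lowest priority,
    i.e. (execution time, period, release jitter)."""
    C, T, J = tasks[i]
    r = C
    while r <= horizon:
        # Interference from higher-priority jobs released before r, where
        # jitter Jj lets up to ceil((r + Jj) / Tj) jobs of task j arrive.
        demand = C + sum(ceil((r + Jj) / Tj) * Cj for Cj, Tj, Jj in tasks[:i])
        if demand == r:
            return r + J   # the task's own jitter adds to its WCRT
        r = demand
    return None            # no fixed point below the horizon: unschedulable

tasks = [(1, 4, 0), (2, 8, 1), (3, 16, 0)]   # harmonic periods: 4 | 8 | 16
for i in range(len(tasks)):
    print(f"task {i}: WCRT = {response_time(tasks, i)}")
```

    Each iteration can only grow r, so the loop either reaches a fixed point or exceeds the horizon; the thesis's contribution is, in part, to replace this pseudo-polynomial iteration with a polynomial-time method when the periods are harmonic.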