
    Prior-Independent Mechanisms for Scheduling

    We study the makespan minimization problem with unrelated selfish machines under the assumption that job sizes are stochastic. We design simple truthful mechanisms that under various distributional assumptions provide constant and sublogarithmic approximations to expected makespan. Our mechanisms are prior-independent in that they do not rely on knowledge of the job size distributions. Prior-independent approximation mechanisms have been previously studied for the objective of revenue maximization [Dhangwatnotai, Roughgarden and Yan'10, Devanur, Hartline, Karlin and Nguyen'11, Roughgarden, Talgam-Cohen and Yan'12]. In contrast to our results, in prior-free settings no truthful anonymous deterministic mechanism for the makespan objective can provide a sublinear approximation [Ashlagi, Dobzinski and Lavi'09]. Comment: This paper will appear in Proceedings of the ACM Symposium on Theory of Computing 2013 (STOC'13).
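
    A quick sketch of the objective these mechanisms approximate may help: the makespan of an allocation on unrelated machines is the maximum total load assigned to any machine. The snippet below is illustrative only; the variable names are not from the paper.

        # Makespan of an allocation on unrelated machines.
        # t[i][j] is the (reported) processing time of job j on machine i;
        # alloc[j] is the machine that job j is assigned to.
        def makespan(t, alloc):
            loads = [0.0] * len(t)
            for j, i in enumerate(alloc):
                loads[i] += t[i][j]
            return max(loads)

        # Example: 2 machines, 3 jobs; machine 0 gets jobs 0 and 1, machine 1 gets job 2.
        t = [[2.0, 1.0, 4.0],
             [3.0, 2.0, 1.0]]
        print(makespan(t, alloc=[0, 0, 1]))  # loads are 3.0 and 1.0, so makespan is 3.0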

    The VCG Mechanism for Bayesian Scheduling

    We study the problem of scheduling m tasks to n selfish, unrelated machines in order to minimize the makespan, in which the execution times are independent random variables, identical across machines. We show that the VCG mechanism, which myopically allocates each task to its best machine, achieves an approximation ratio of O(ln n / ln ln n). This improves significantly on the previously best known bound of O(m/n) for prior-independent mechanisms, given by Chawla et al. [7] under the additional assumption of Monotone Hazard Rate (MHR) distributions. Although we demonstrate that this is tight in general, if we do maintain the MHR assumption, then we get improved (small) constant bounds for m ≥ n ln n i.i.d. tasks. We also identify a sufficient condition on the distribution that yields a constant approximation ratio regardless of the number of tasks.
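
    The myopic allocation described above is easy to illustrate: each task goes to the machine reporting the lowest execution time, and (in the standard VCG payment for such a separable decision) that machine is paid the second-lowest report. The sketch below assumes this single-task second-price form and uses illustrative names.

        # Myopic VCG allocation sketch: assign each task to the machine with the
        # lowest reported time and pay that machine the second-lowest report.
        def vcg_allocate(reports):
            # reports[i][j]: machine i's reported execution time for task j
            n, m = len(reports), len(reports[0])
            allocation = [None] * m          # task j -> winning machine
            payments = [0.0] * n             # total payment to each machine
            for j in range(m):
                order = sorted(range(n), key=lambda i: reports[i][j])
                winner, runner_up = order[0], order[1]
                allocation[j] = winner
                payments[winner] += reports[runner_up][j]
            return allocation, payments

        alloc, pay = vcg_allocate([[1.0, 5.0], [2.0, 3.0]])
        print(alloc, pay)   # tasks go to machines [0, 1]; payments [2.0, 5.0]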

    Abstract Timers and their Implementation onto the ARM Cortex-M family of MCUs

    Presented at Embed with Linux Workshop (EWiLi 2015), 4 to 9 Oct 2015, Amsterdam, Netherlands. Real-Time For the Masses (RTFM) is a set of languages and tools being developed to facilitate embedded software development and provide highly efficient implementations geared to static verification. The RTFM-kernel is an architecture designed to provide highly efficient and predictable Stack Resource Policy based scheduling, targeting bare metal (single-core) platforms. We contribute beyond prior work by introducing a platform-independent timer abstraction that relies on existing RTFM-kernel primitives. We develop two alternative implementations for the ARM Cortex-M family of MCUs: a generic implementation, using the ARM-defined SysTick/DWT hardware; and a target-specific implementation, using the match compare/free running timers. While sacrificing generality, the latter is more flexible and may reduce overall overhead. Invariants for correctness are presented, and methods for static and run-time verification are discussed. Overhead is bounded and characterized. In both cases the critical section from release time to dispatch is less than 2 us on a 100 MHz MCU. Queue and timer mechanisms are directly implemented in the RTFM-core language and can be included in system-wide scheduling analysis.
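
    As a language-agnostic illustration of the timer abstraction described above (not the RTFM API), a platform-independent timer layer can be thought of as a queue of tasks keyed on absolute release times taken from a monotonic counter, with the head of the queue driving the next hardware compare interrupt:

        import heapq

        # Illustrative timer queue: release times are absolute values of a
        # monotonic counter; dispatch_due releases every task whose release
        # time has passed. Names are illustrative, not the RTFM primitives.
        class TimerQueue:
            def __init__(self):
                self._heap = []   # (release_time, task) pairs

            def schedule(self, release_time, task):
                heapq.heappush(self._heap, (release_time, task))

            def next_release(self):
                return self._heap[0][0] if self._heap else None

            def dispatch_due(self, now):
                released = []
                while self._heap and self._heap[0][0] <= now:
                    released.append(heapq.heappop(self._heap)[1])
                return released

        q = TimerQueue()
        q.schedule(100, "sample_sensor")
        q.schedule(250, "send_report")
        print(q.dispatch_due(now=120))   # ['sample_sensor']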

    Truth and Regret in Online Scheduling

    We consider a scheduling problem where a cloud service provider has multiple units of a resource available over time. Selfish clients submit jobs, each with an arrival time, deadline, length, and value. The service provider's goal is to implement a truthful online mechanism for scheduling jobs so as to maximize the social welfare of the schedule. Recent work shows that under a stochastic assumption on job arrivals, there is a single-parameter family of mechanisms that achieves near-optimal social welfare. We show that given any such family of near-optimal online mechanisms, there exists an online mechanism that in the worst case performs nearly as well as the best of the given mechanisms. Our mechanism is truthful whenever the mechanisms in the given family are truthful and prompt, and achieves optimal (within constant factors) regret. We model the problem of competing against a family of online scheduling mechanisms as one of learning from expert advice. A primary challenge is that any scheduling decisions we make affect not only the payoff at the current step, but also the resource availability and payoffs in future steps. Furthermore, switching from one algorithm (a.k.a. expert) to another in an online fashion is challenging, both because it requires synchronization with the state of the latter algorithm and because it affects the incentive structure of the algorithms. We further show how to adapt our algorithm to a non-clairvoyant setting where job lengths are unknown until jobs are run to completion. Once again, in this setting, we obtain truthfulness along with asymptotically optimal regret (within poly-logarithmic factors).
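
    The expert-advice framing above can be illustrated with a standard multiplicative-weights sketch; this ignores the state-synchronization and incentive issues that the paper actually handles, and assumes payoffs in [0, 1].

        import random

        # Multiplicative weights over a family of mechanisms ("experts").
        # payoff_matrix[t][k] is the payoff of expert k at step t, assumed in [0, 1].
        def multiplicative_weights(payoff_matrix, eta=0.1):
            n_experts = len(payoff_matrix[0])
            weights = [1.0] * n_experts
            total_payoff = 0.0
            for round_payoffs in payoff_matrix:
                s = sum(weights)
                probs = [w / s for w in weights]
                choice = random.choices(range(n_experts), weights=probs)[0]
                total_payoff += round_payoffs[choice]
                # Reward experts that did well this round.
                weights = [w * (1.0 + eta * p) for w, p in zip(weights, round_payoffs)]
            return total_payoff

        print(multiplicative_weights([[0.2, 0.9], [0.1, 0.8], [0.3, 0.7]]))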

    Average-case Approximation Ratio of Scheduling without Payments

    Apart from the principles and methodologies inherited from Economics and Game Theory, the studies in Algorithmic Mechanism Design typically employ the worst-case analysis and approximation schemes of Theoretical Computer Science. For instance, the approximation ratio, which is the canonical measure of evaluating how well an incentive-compatible mechanism approximately optimizes the objective, is defined in the worst-case sense. It compares the performance of the optimal mechanism against the performance of a truthful mechanism, over all possible inputs. In this paper, we take the average-case analysis approach, and tackle one of the primary motivating problems in Algorithmic Mechanism Design -- the scheduling problem [Nisan and Ronen 1999]. One version of this problem which includes a verification component is studied by [Koutsoupias 2014]. It was shown that the problem has a tight approximation ratio bound of (n+1)/2 for the single-task setting, where n is the number of machines. We show, however, that when the machines' costs for executing the task are drawn independently from any identical distribution, the average-case approximation ratio of the mechanism given in [Koutsoupias 2014] is upper bounded by a constant. This positive result asymptotically separates the average-case ratio from the worst-case ratio, and indicates that the optimal mechanism for the problem actually works well on average, although in the worst case the expected cost of the mechanism is Theta(n) times that of the optimal cost.
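
    One way to see the average-case quantity in question is a Monte Carlo estimate: draw the machines' costs i.i.d., then compare a mechanism's expected cost to the expected optimal (minimum) cost. The sketch below uses a placeholder mechanism, not the verification-based mechanism of [Koutsoupias 2014], and one natural ratio-of-expectations definition.

        import random

        # Illustrative Monte Carlo estimate of an average-case approximation ratio
        # for the single-task setting: n machine costs drawn i.i.d. each trial.
        def estimate_ratio(n, draw_cost, mechanism_cost, trials=100_000):
            mech_total, opt_total = 0.0, 0.0
            for _ in range(trials):
                costs = [draw_cost() for _ in range(n)]
                mech_total += mechanism_cost(costs)
                opt_total += min(costs)      # optimum: run the task on the cheapest machine
            return mech_total / opt_total

        # Placeholder mechanism: always run the task on the cheapest machine,
        # so the estimated ratio is 1.0 by construction.
        print(estimate_ratio(5, random.random, min))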

    Collision Helps - Algebraic Collision Recovery for Wireless Erasure Networks

    Current medium access control mechanisms are based on collision avoidance, and collided packets are discarded. The recent work on ZigZag decoding departs from this approach by recovering the original packets from multiple collisions. In this paper, we present an algebraic representation of collisions which allows us to view each collision as a linear combination of the original packets. The transmitted, colliding packets may themselves be a coded version of the original packets. We propose a new acknowledgment (ACK) mechanism for collisions based on the idea that if a set of packets collide, the receiver can afford to ACK exactly one of them and still decode all the packets eventually. We analytically compare delay and throughput performance of such collision recovery schemes with other collision avoidance approaches in the context of a single hop wireless erasure network. In the multiple receiver case, the broadcast constraint calls for combining collision recovery methods with network coding across packets at the sender. From the delay perspective, our scheme, without any coordination, outperforms not only ALOHA-type random access mechanisms but also centralized scheduling. For the case of streaming arrivals, we propose a priority-based ACK mechanism and show that its stability region coincides with the cut-set bound of the packet erasure network.
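
    The algebraic view of collisions described above can be reduced, in the simplest case, to linear algebra over GF(2): each collision is the XOR of a known subset of packets, and Gaussian elimination recovers the originals once the collision equations have full rank. This is a simplification of the paper's model, with illustrative names.

        # Decode k original packets from collisions modeled over GF(2).
        # equations: list of (mask, value) pairs, where mask is a k-bit mask of the
        # packets present in the collision and value is the XOR of their payloads.
        def decode_collisions(equations, k):
            rows = list(equations)
            solution = [None] * k
            for col in range(k):
                # Find a row with a 1 in this column and use it as the pivot.
                pivot = next((r for r in range(col, len(rows)) if rows[r][0] >> col & 1), None)
                if pivot is None:
                    continue
                rows[col], rows[pivot] = rows[pivot], rows[col]
                for r in range(len(rows)):
                    if r != col and rows[r][0] >> col & 1:
                        rows[r] = (rows[r][0] ^ rows[col][0], rows[r][1] ^ rows[col][1])
            for col in range(k):
                if col < len(rows) and rows[col][0] == 1 << col:
                    solution[col] = rows[col][1]
            return solution

        # Packets p0 = 0b1010 and p1 = 0b0110; receptions are p0^p1 and p1 alone.
        print(decode_collisions([(0b11, 0b1010 ^ 0b0110), (0b10, 0b0110)], k=2))  # [10, 6]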

    Improving DRAM Performance by Parallelizing Refreshes with Accesses

    Modern DRAM cells are periodically refreshed to prevent data loss due to leakage. Commodity DDR DRAM refreshes cells at the rank level. This degrades performance significantly because it prevents an entire rank from serving memory requests while being refreshed. DRAM designed for mobile platforms, LPDDR DRAM, supports an enhanced mode, called per-bank refresh, that refreshes cells at the bank level. This enables a bank to be accessed while another in the same rank is being refreshed, alleviating part of the negative performance impact of refreshes. However, there are two shortcomings of per-bank refresh. First, the per-bank refresh scheduling scheme does not exploit the full potential of overlapping refreshes with accesses across banks because it restricts the banks to be refreshed in a sequential round-robin order. Second, accesses to a bank that is being refreshed have to wait. To mitigate the negative performance impact of DRAM refresh, we propose two complementary mechanisms, DARP (Dynamic Access Refresh Parallelization) and SARP (Subarray Access Refresh Parallelization). The goal is to address the drawbacks of per-bank refresh by building more efficient techniques to parallelize refreshes and accesses within DRAM. First, instead of issuing per-bank refreshes in a round-robin order, DARP issues per-bank refreshes to idle banks in an out-of-order manner. Furthermore, DARP schedules refreshes during intervals when a batch of writes are draining to DRAM. Second, SARP exploits the existence of mostly-independent subarrays within a bank. With minor modifications to DRAM organization, it allows a bank to serve memory accesses to an idle subarray while another subarray is being refreshed. Extensive evaluations show that our mechanisms improve system performance and energy efficiency compared to state-of-the-art refresh policies, and the benefit increases as DRAM density increases. Comment: The original paper published in the International Symposium on High-Performance Computer Architecture (HPCA) contains an error. The arXiv version has an erratum that describes the error and the fix for it.
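
    The core of DARP described above is a scheduling decision: instead of refreshing banks in a fixed round-robin order, refresh a bank that currently has no pending requests. The sketch below illustrates that selection policy only; it is not the memory-controller implementation from the paper.

        # DARP-style out-of-order per-bank refresh selection (illustrative only).
        # pending_refresh: set of bank ids still owing a refresh in this window.
        # pending_requests: dict mapping bank id -> number of queued memory requests.
        def pick_bank_to_refresh(pending_refresh, pending_requests):
            idle = [b for b in pending_refresh if pending_requests.get(b, 0) == 0]
            if idle:
                return min(idle)   # refresh an idle bank so no access has to wait
            # No idle bank: fall back to the least-loaded bank.
            return min(pending_refresh, key=lambda b: pending_requests.get(b, 0))

        print(pick_bank_to_refresh({0, 1, 2}, {0: 3, 1: 0, 2: 5}))   # -> 1 (the idle bank)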