471 research outputs found

    Approximating the longest path length of a stochastic DAG by a normal distribution in linear time

    Abstract: This paper presents a linear-time algorithm for approximating, in the sense below, the longest path length of a given directed acyclic graph (DAG), where each edge length is given as a normally distributed random variable. Let F(x) be the distribution function of the longest path length of the DAG. Our algorithm computes the mean and the variance of a normal distribution whose distribution function F̃(x) satisfies F̃(x) ⩽ F(x) as long as F(x) ⩾ a, for a given constant a (1/2 ⩽ a < 1). In other words, it computes an upper bound 1 − F̃(x) on the tail probability 1 − F(x), provided x ⩾ F⁻¹(a). To evaluate the accuracy of the approximation of F(x) by F̃(x), we first conduct two experiments using ITC'99, a standard benchmark set of logic circuits, since a typical application of the algorithm is the delay analysis of logic circuits. We also perform a worst-case analysis to derive an upper bound on the difference F̃⁻¹(a) − F⁻¹(a).
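    The abstract does not spell out the algorithm, but a classical building block for this kind of linear-time Gaussian approximation (familiar from statistical timing analysis) is Clark's moment matching for the maximum of two normals, propagated over the DAG in one topological pass. The sketch below assumes independent edge lengths and is illustrative background, not the paper's actual construction:

```python
import math

def pdf(x):
    """Standard normal density."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def cdf(x):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def clark_max(m1, v1, m2, v2):
    """Moment-matched normal approximation of max(X, Y) for
    independent X ~ N(m1, v1), Y ~ N(m2, v2) (Clark, 1961)."""
    theta = math.sqrt(v1 + v2)
    if theta == 0.0:
        return max(m1, m2), 0.0
    a = (m1 - m2) / theta
    mean = m1 * cdf(a) + m2 * cdf(-a) + theta * pdf(a)
    second = ((m1 * m1 + v1) * cdf(a) + (m2 * m2 + v2) * cdf(-a)
              + (m1 + m2) * theta * pdf(a))
    return mean, max(second - mean * mean, 0.0)

def longest_path_moments(edges, topo_order, source):
    """One pass over the DAG: edges maps (u, v) -> (mean, var) of that
    edge's length; returns an approximate (mean, var) of the longest
    path from source to every node, visiting nodes in topological order."""
    preds = {}
    for (u, v) in edges:
        preds.setdefault(v, []).append(u)
    arrival = {source: (0.0, 0.0)}
    for v in topo_order:
        if v == source:
            continue
        acc = None
        for u in preds.get(v, []):
            em, ev = edges[(u, v)]
            mu, vu = arrival[u]
            cand = (mu + em, vu + ev)  # sum of normals along this predecessor
            acc = cand if acc is None else clark_max(*acc, *cand)
        arrival[v] = acc
    return arrival
```

Each node is visited once and each edge contributes one constant-time `clark_max` merge, which is what makes the pass linear in the size of the DAG.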

    Network Interdiction Using Adversarial Traffic Flows

    Traditional network interdiction refers to the problem of an interdictor trying to reduce the throughput of network users by removing network edges. In this paper, we propose a new paradigm for network interdiction that models scenarios, such as a stealth DoS attack, in which the interdiction is performed by injecting adversarial traffic flows. Under this paradigm, we first study the deterministic flow interdiction problem, where the interdictor has perfect knowledge of the operation of network users. We show that the problem is highly inapproximable on general networks and is NP-hard even when the network is acyclic. We then propose an algorithm that achieves a logarithmic approximation ratio and quasi-polynomial time complexity for acyclic networks by harnessing the submodularity of the problem. Next, we investigate the robust flow interdiction problem, which adopts the robust optimization framework to capture the case where definitive knowledge of the operation of network users is not available. We design an approximation framework that integrates the aforementioned algorithm, yielding a quasi-polynomial-time procedure with a poly-logarithmic approximation ratio for the more challenging robust flow interdiction. Finally, we evaluate the performance of the proposed algorithms through simulations, showing that they can be efficiently implemented and yield near-optimal solutions.
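    The logarithmic approximation above rests on submodularity. The paper's actual algorithm is more involved, but the standard greedy routine for monotone submodular maximization under a cardinality constraint, shown here as background with illustrative names, is the usual starting point:

```python
def greedy_submodular(ground_set, f, k):
    """Classic greedy for maximizing a monotone submodular set function f
    under the cardinality constraint |S| <= k; achieves a (1 - 1/e)
    approximation (Nemhauser, Wolsey, Fisher)."""
    selected = set()
    for _ in range(k):
        best, best_gain = None, 0.0
        for e in ground_set - selected:
            gain = f(selected | {e}) - f(selected)
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:  # no element adds positive marginal value
            break
        selected.add(best)
    return selected

# Toy coverage instance: f(S) = number of elements covered by chosen sets.
sets = {0: {1, 2, 3}, 1: {3, 4}, 2: {5}}

def cover(S):
    return len(set().union(*(sets[i] for i in S))) if S else 0
```

The coverage function is submodular (adding a set helps less the more is already covered), so the greedy choice of the largest marginal gain carries the (1 − 1/e) guarantee.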

    Parallel Real-Time Scheduling for Latency-Critical Applications

    In order to provide safety guarantees or quality-of-service guarantees, many of today's systems consist of latency-critical applications, i.e., applications with timing constraints. The problem of scheduling multiple latency-critical jobs on a multiprocessor or multicore machine has been extensively studied for sequential (non-parallelizable) jobs, and different system models and objectives have been considered. However, the computational requirement of a single job is still limited by the capacity of a single core. To provide the increasingly complex functionalities of applications and to complete their higher computational demands within the same or even more stringent timing constraints, we must exploit the internal parallelism of jobs, where individual jobs are parallel programs that can potentially utilize more than one core in parallel. However, there is little work considering scheduling multiple parallel jobs that are latency-critical. This dissertation focuses on developing new scheduling strategies, analysis tools, and practical platform design techniques to enable efficient and scalable parallel real-time scheduling for latency-critical applications on multicore systems. In particular, the research focuses on two types of systems: (1) static real-time systems for tasks with deadlines, where the temporal properties of the tasks that need to execute are known a priori and the goal is to guarantee the temporal correctness of the tasks prior to their execution; and (2) online systems for latency-critical jobs, where multiple jobs arrive over time and the goal is to optimize a performance objective of the jobs during execution. For static real-time systems for parallel tasks, several scheduling strategies, including global earliest deadline first, global rate monotonic, and a novel federated scheduling, are proposed, analyzed, and implemented.
These scheduling strategies have the best known theoretical performance for parallel real-time tasks under any global strategy, any fixed-priority scheduling, and any scheduling strategy, respectively. In addition, federated scheduling is generalized to systems with multiple criticality levels and to systems with stochastic tasks. Both numerical and empirical experiments show that federated scheduling and its variants have good schedulability performance and are efficient in practice. For online systems with multiple latency-critical jobs, different online scheduling strategies are proposed and analyzed for different objectives, including maximizing the number of jobs meeting a target latency, maximizing the profit of jobs, minimizing the maximum latency, and minimizing the average latency. For example, a simple first-in-first-out scheduler is proven to be scalable for minimizing the maximum latency. Based on this theoretical intuition, a more practical work-stealing scheduler is developed, analyzed, and implemented. Empirical evaluations indicate that, on both real-world and synthetic workloads, this work-stealing implementation performs almost as well as an optimal scheduler.
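    In the published federated scheduling analyses, a high-utilization parallel task is given a dedicated cluster of cores sized from its total work C_i, critical-path length L_i, and deadline D_i. A minimal sketch of that core-allocation rule (a common formulation from the literature, not necessarily the exact variant in this dissertation):

```python
import math

def federated_cores(work, span, deadline):
    """Dedicated cores for one parallel task under federated scheduling:
    a high-utilization task (work > deadline) gets
    n_i = ceil((C_i - L_i) / (D_i - L_i)) cores; a low-utilization task
    is treated as sequential and shares the remaining cores."""
    if span > deadline or (work > deadline and span == deadline):
        raise ValueError("infeasible: critical path too long for the deadline")
    if work <= deadline:
        return 1  # low-utilization: a single core meets the deadline
    return math.ceil((work - span) / (deadline - span))
```

Intuitively, beyond the critical path the remaining work − span units must be finished within the deadline − span slack, which fixes the core count.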

    The value of information in shortest path optimization

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Cataloged from PDF version of thesis. Includes bibliographical references (p. 92-93). Information about a random event (termed the source) is typically treated as a (possibly noisy) function of that event. Information has a destination: an agent that uses the information to make a decision. In traditional communication-systems design, it is usually assumed that the agent uses the information to produce an estimate of the source, and that estimate is in turn used to make the decision. Consequently, the typical objective of communication-systems design is to construct the communication system so that the joint distribution between the source and the information is "optimal" in the sense that it minimizes the average error of the estimate. Due to resource limitations such as cost, power, or time, estimation quality is constrained in the sense that the set of allowable joint distributions is bounded in mutual information. In the context of an agent using information to make decisions, however, such metrics may not be appropriate. In particular, the true value of information is determined by how it impacts the average payoff of the agent's decisions, not by its estimation accuracy. To this end, mutual information may not be the most convenient measure of information quantity, since its relationship to decision quality may be very complicated, making it difficult to develop algorithms for information optimization. In this thesis, we study the value of information in an instance of an uncertain decision framework: shortest-path optimization on a graph with random edge weights. (cont.) Specifically, we consider an agent that seeks to traverse the shortest path of a graph subject to some side information it receives about the edge weights, in advance of and during its travel.
In this setting, decision quality is determined by the average length of the paths the agent chooses, not by how often the agent decodes the optimal path. For this application, we define and quantify a notion of information that is compatible with this problem, bound the performance of the agent subject to a bound on the amount of information available to it, study the impact of spreading information sequentially over partial decisions, and provide algorithms for information optimization. Meaningful analytic performance bounds and practical algorithms for information optimization are obtained by leveraging a new type of geometric graph reduction for shortest-path optimization, as well as an abstraction of the geometry of sequential decision making. By Michael David Rinehart. Ph.D.
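    The thesis's framing (payoff of decisions rather than estimation accuracy) can be illustrated on a toy two-edge network: an informed agent observes both random edge weights before choosing a route, while an uninformed agent commits to the edge with the lower prior mean. The Monte Carlo sketch below is purely illustrative and all names are hypothetical:

```python
import random

def value_of_information(sample_a, sample_b, n=200_000, seed=1):
    """Expected savings from observing two random edge weights before
    choosing a route, versus committing to the lower-mean edge up front."""
    rng = random.Random(seed)
    a = [sample_a(rng) for _ in range(n)]
    b = [sample_b(rng) for _ in range(n)]
    informed = sum(min(x, y) for x, y in zip(a, b)) / n  # choose after observing
    uninformed = min(sum(a) / n, sum(b) / n)             # commit in advance
    return uninformed - informed                          # >= 0 in expectation
```

With both edges Uniform(0, 1), the informed agent pays E[min] = 1/3 on average while the uninformed agent pays 1/2, so the observation is worth about 1/6 per trip regardless of how many bits it takes to convey.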

    Persistency and Stein's Identity: Applications in Stochastic Discrete Optimization Problems

    Ph.D. (Doctor of Philosophy)

    A Simulation Technique Guaranteeing Functional/Temporal Correctness for Automotive Systems on a Multicore Simulator

    Thesis (M.S.)--Seoul National University Graduate School, College of Engineering, Dept. of Computer Science and Engineering, February 2019. Advisor: Chang-Gun Lee. This dissertation presents a functionally and temporally correct simulation method for the cyber side of an automotive system on a multicore simulator. To overcome the limitations of existing simulation methods, which do not correctly model temporal behaviours such as varying execution times and task preemptions, a novel simulation technique assuming a single-core simulator was previously proposed. In this work, we extend the single-core simulator to a multicore simulator while keeping all of the previously proposed key ideas that guarantee correct simulation. We introduce a heuristic task-partitioning algorithm based on the memory usage and the approximated task-wise blocking of the simulated tasks. As a result, we improve simulation capacity by up to 97%p and 15%p compared to the single-core simulator and to other task-partitioning algorithms, respectively.
    In addition, simulation performance is measured on a number of randomly generated workloads, confirming these capacity gains. Consequently, the proposed multicore simulator guarantees the same functional/temporal correctness as the prior work while providing higher simulation capacity, and can thus be effectively used for simulating the entire automotive system.
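    One plausible reading of the memory- and blocking-aware partitioning is the greedy sketch below: tasks are considered in smallest-approximated-blocking-first order and each is placed on the core with the most remaining memory. This is an illustrative interpretation, not the dissertation's exact algorithm, and all names are hypothetical:

```python
def partition_tasks(tasks, num_cores, mem_capacity):
    """Greedy sketch of a memory-constrained task partition.
    tasks: iterable of (name, memory_usage, approx_blocking) tuples.
    Tasks are placed smallest-approximated-blocking first, each on the
    feasible core with the most remaining memory (worst-fit)."""
    cores = [{"mem": mem_capacity, "tasks": []} for _ in range(num_cores)]
    for name, mem, blocking in sorted(tasks, key=lambda t: t[2]):
        feasible = [c for c in cores if c["mem"] >= mem]
        if not feasible:
            raise ValueError(f"task {name!r} does not fit on any core")
        target = max(feasible, key=lambda c: c["mem"])
        target["mem"] -= mem
        target["tasks"].append(name)
    return [c["tasks"] for c in cores]
```

Worst-fit on memory keeps the per-core memory headroom balanced, which is one simple way to respect a hard memory constraint while the blocking-based ordering decides placement priority.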

    Variability-Aware VLSI Design Automation For Nanoscale Technologies

    As technology scaling enters the nanometer regime, the design of large-scale ICs becomes more challenging due to shrinking feature sizes and increasing design complexity. Aggressive scaling causes significant degradation in reliability, increased susceptibility to fabrication and environmental randomness, and increased dynamic and leakage power dissipation. In this work, we investigate these scaling issues in large-scale integrated systems. This dissertation develops variability-aware design methodologies comprising design analysis, design-time optimization, post-silicon tunability, and runtime-adaptivity-based optimization techniques for handling variability. We discuss our research in the area of variability-aware analysis, focusing specifically on the problem of statistical timing analysis. The first technique presents the concept of error budgeting, which achieves significant runtime speedups during statistical timing analysis. The second presents a general framework for non-linear, non-Gaussian statistical timing analysis considering correlations. Further, we present our work on design-time optimization schemes that are applicable during physical synthesis. First, we present a buffer-insertion technique that considers wire-length uncertainty and propose algorithms to perform probabilistic buffer insertion. Second, we present a stochastic optimization framework based on the Monte Carlo technique considering fabrication variability. This optimization framework can be applied to problems that can be modeled as linear programs, without imposing any assumptions on the nature of the variability. Subsequently, we present our work on post-silicon-tunability-based design optimization. This work presents a design management framework that can be used to balance the effort spent on pre-silicon optimization (through gate sizing) and post-silicon optimization (through tunable clock-tree buffers) while maximizing the yield gains.
Lastly, we present our work on variability-aware runtime optimization techniques. We look at the problem of runtime supply-voltage scaling for dynamic power optimization and propose a framework to consider the impact of variability on the reliability of such designs. We propose a probabilistic design synthesis technique in which the reliability of the design is a primary optimization metric.
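    Monte Carlo analysis of the kind underlying the stochastic optimization framework can be sketched as a simple timing-yield estimator: sample each stage delay from its variability model and count how often the path meets the clock. The Gaussian stage model and all names below are illustrative assumptions, not the dissertation's actual models:

```python
import random

def timing_yield(stage_delays, clock_period, n=100_000, seed=0):
    """Fraction of sampled die instances whose path delay meets the
    clock period. Each stage delay is drawn as an independent Gaussian
    (nominal, sigma), truncated at zero as a physical sanity check."""
    rng = random.Random(seed)
    met = 0
    for _ in range(n):
        delay = sum(max(rng.gauss(mean, sigma), 0.0)
                    for mean, sigma in stage_delays)
        met += delay <= clock_period
    return met / n
```

A four-stage path with nominal delay 1.0 and sigma 0.1 per stage has a total mean of 4.0, so clocking it at exactly 4.0 yields roughly half of the instances; real flows would trade such yield estimates against sizing and tuning cost.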