
    Sciduction: Combining Induction, Deduction, and Structure for Verification and Synthesis

    Even with impressive advances in automated formal methods, certain problems in system verification and synthesis remain challenging. Examples include the verification of quantitative properties of software involving constraints on timing and energy consumption, and the automatic synthesis of systems from specifications. The major challenges include environment modeling, incompleteness in specifications, and the complexity of underlying decision problems. This position paper proposes sciduction, an approach to tackle these challenges by integrating inductive inference, deductive reasoning, and structure hypotheses. Deductive reasoning, which leads from general rules or concepts to conclusions about specific problem instances, includes techniques such as logical inference and constraint solving. Inductive inference, which generalizes from specific instances to yield a concept, includes algorithmic learning from examples. Structure hypotheses are used to define the class of artifacts, such as invariants or program fragments, generated during verification or synthesis. Sciduction constrains inductive and deductive reasoning using structure hypotheses and actively combines the two: for instance, deductive techniques generate examples for learning, and inductive reasoning is used to guide the deductive engines. We illustrate this approach with three applications: (i) timing analysis of software, (ii) synthesis of loop-free programs, and (iii) controller synthesis for hybrid systems. Some future applications are also discussed.
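    The inductive/deductive interplay described above is essentially a counterexample-guided loop: a learner proposes a candidate drawn from the hypothesized structure class, a deductive engine either certifies it or returns a counterexample, and that counterexample becomes a new training example. The Python sketch below is only a minimal illustration of that loop under assumed ingredients (a toy affine structure hypothesis, a hard-coded specification, and a bounded exhaustive check standing in for an SMT solver); it is not the paper's algorithms for timing analysis, program synthesis, or controller synthesis.

```python
# Minimal sketch of an inductive/deductive (counterexample-guided) loop.
# The structure hypothesis, spec, and "verifier" below are illustrative
# assumptions, not the sciduction paper's actual instantiations.

from itertools import product

# Structure hypothesis: candidates are affine programs f(x) = a*x + b
# with small integer coefficients.
CANDIDATES = list(product(range(-5, 6), repeat=2))

def spec(x):
    """Black-box specification the synthesized program must match."""
    return 3 * x + 1

def verify(a, b, domain=range(-20, 21)):
    """Deductive step (stand-in for an SMT/constraint query): check a bounded
    domain and return a counterexample input, or None if the candidate passes."""
    for x in domain:
        if a * x + b != spec(x):
            return x
    return None

def synthesize():
    examples = [0]          # seed inputs; counterexamples accumulate here
    while True:
        # Inductive step: keep any candidate consistent with the examples so far.
        consistent = [(a, b) for a, b in CANDIDATES
                      if all(a * x + b == spec(x) for x in examples)]
        if not consistent:
            return None     # the structure hypothesis cannot express the spec
        a, b = consistent[0]
        cex = verify(a, b)  # deduction supplies the next learning example
        if cex is None:
            return a, b     # verified against the whole (bounded) domain
        examples.append(cex)

print(synthesize())  # -> (3, 1) for the toy spec above
```

    In the paper's applications the learner, the deductive engine, and the structure hypothesis are each much richer, but the feedback cycle (deduction producing examples, induction steering deduction) has this shape.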

    Automatic prediction of computational resource consumption for efficient task migration in cloud

    Ph.D. dissertation, Seoul National University Graduate School, Department of Electrical and Computer Engineering, February 2015 (advisor: Yunheung Paek). To meet the high performance demands placed on smartphones, mobile cloud computing techniques have been proposed that enhance a smartphone's performance by utilizing powerful cloud servers. Among such techniques, execution offloading, which migrates a thread between a mobile device and a server, is often employed. Execution offloading techniques typically rely on decision-making algorithms to dynamically choose which code parts to offload. To achieve optimal offloading performance, however, these algorithms must accurately predict the gain and cost of offloading. Previous work largely avoided this because accurate prediction is usually expensive. Moreover, existing schemes completely ignore the cost of cloud resources by assuming that idle servers are always available free of charge. Under these unrealistic assumptions, each server runs only a small load in order to guarantee high offload performance. Such schemes therefore cannot be applied to real-world commercial clouds, which aim to minimize operating costs by maximizing server throughput and which charge users for their resource usage. In this dissertation, I first present Mantis, a framework for accurately and efficiently predicting the computational resource consumption (CRC) of Android applications on given inputs. Mantis synergistically combines techniques from program analysis and machine learning. It constructs concise CRC models by choosing, from many program execution features, only a handful that are most correlated with the program's CRC metric yet can be evaluated efficiently from the program's input. I apply program slicing to reduce feature evaluation time and automatically generate executable code snippets for efficiently evaluating features. I empirically show that these techniques enhance offloading performance. Lastly, I propose CMcloud, a novel cost-effective mobile-to-cloud offloading platform that works well in real-world cloud environments. CMcloud minimizes both server costs and user service fees by offloading as many mobile applications to a single server as possible while satisfying the target performance of all applications.
    Table of contents:
    Abstract.
    Chapter 1 Introduction: 1.1 Mobile Execution Offloading; 1.2 Dynamic Code Partitioning; 1.3 Cost-effectiveness of Mobile Execution Offloading; 1.4 Dissertation Contributions and Outline.
    Chapter 2 Mantis: Efficient Predictions of Execution Time, Energy Usage, Memory Usage and Network Usage on Smart Mobile Devices: 2.1 Introduction; 2.2 Architecture; 2.3 Feature Instrumentation; 2.4 CRC Modeling; 2.5 Predictor Code Generation (2.5.1 Rationale; 2.5.2 Slicer Challenges; 2.5.3 Slicer Design); 2.6 Implementations; 2.7 Evaluation (2.7.1 Evaluation Environment; 2.7.2 Experiment Results); 2.8 Related Work; 2.9 Conclusion.
    Chapter 3 Precise Execution Offloading for Applications with Dynamic Behavior in Mobile Cloud Computing: 3.1 Introduction; 3.2 Background & Motivation (3.2.1 Background; 3.2.2 Motivation); 3.3 fMantis: Automatic Generation of an Accurate and Efficient Performance Predictor for Mobile Execution Offloading (3.3.1 Performance Predictor Generation Overview; 3.3.2 Profiler; 3.3.3 Predictor Generator); 3.4 Dynamic Code Partitioning with the Predictor Generated by fMantis (3.4.1 Architecture for Our Solver); 3.5 Evaluation (3.5.1 Implementation; 3.5.2 Evaluation Environment; 3.5.3 Experimental Results); 3.6 Related Work; 3.7 Conclusion.
    Chapter 4 CMcloud: Cloud Platform for Cost-Effective Offloading of Mobile Applications: 4.1 Introduction; 4.2 Backgrounds and Limitations (4.2.1 Basic Offload Mechanisms; 4.2.2 Limitations of Existing Schemes); 4.3 CMcloud Offloading (4.3.1 Design Goals; 4.3.2 Operation Model; 4.3.3 Architecture Model); 4.4 CMcloud Mechanism (4.4.1 Reference-Model Server Profiling; 4.4.2 Performance Estimation; 4.4.3 Performance Monitoring; 4.4.4 Migration; 4.4.5 Cost-Aware Application Scheduling in Cloud); 4.5 Evaluation (4.5.1 Estimating Target CPI Stack; 4.5.2 Predicting Instruction Count; 4.5.3 Cost Effectiveness with QoS Requirements; 4.5.4 Offloading/Migration Overhead); 4.6 Related Work; 4.7 Conclusions.
    Chapter 5 Conclusion.
    Abstract in Korean.
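    The Mantis idea summarized above (profile many cheap input features, keep only the few that actually track the resource metric, then evaluate just those to feed the offload decision) can be pictured with a small sparse-regression sketch. Everything below is a synthetic stand-in assumed for illustration: the random feature matrix, the scikit-learn Lasso-based selection, and the toy should_offload rule are not the dissertation's models, slicer, or partitioning algorithm.

```python
# Illustrative sketch only: sparse regression picks the few input features that
# explain a resource metric, and the resulting predictor drives a toy offload
# decision. Data, alpha, and the decision rule are assumptions for the example.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Hypothetical per-run features extracted from program inputs
# (e.g. input sizes, loop bounds, branch counts); most are irrelevant.
n_runs, n_features = 200, 30
X = rng.uniform(0, 100, size=(n_runs, n_features))
exec_time_ms = 5.0 * X[:, 0] + 0.8 * X[:, 3] + rng.normal(0, 0.5, n_runs)

# L1 regularization drives most coefficients to zero, leaving the handful of
# features that a generated predictor snippet would need to evaluate.
model = Lasso(alpha=5.0).fit(X, exec_time_ms)
selected = np.flatnonzero(model.coef_)
print("features kept:", selected)   # expected: the informative indices 0 and 3

def should_offload(features, net_latency_ms=80.0, server_speedup=4.0):
    """Toy gain/cost rule (assumed): offload only if predicted remote time plus
    network latency beats the predicted local execution time."""
    local_ms = model.predict(features.reshape(1, -1))[0]
    remote_ms = local_ms / server_speedup + net_latency_ms
    return remote_ms < local_ms

print(should_offload(rng.uniform(0, 100, n_features)))
```

    In the dissertation, the selected features are additionally turned into executable slices so they can be evaluated cheaply from the input before execution; this sketch omits that step.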

    Computer Aided Verification

    This open access two-volume set, LNCS 11561 and 11562, constitutes the refereed proceedings of the 31st International Conference on Computer Aided Verification, CAV 2019, held in New York City, USA, in July 2019. The 52 full papers presented together with 13 tool papers and 2 case studies were carefully reviewed and selected from 258 submissions. The papers were organized in the following topical sections: Part I: automata and timed systems; security and hyperproperties; synthesis; model checking; cyber-physical systems and machine learning; probabilistic systems, runtime techniques; dynamical, hybrid, and reactive systems. Part II: logics, decision procedures, and solvers; numerical programs; verification; distributed systems and networks; verification and invariants; and concurrency.