10,256 research outputs found

    ๊ฐ€์ƒ๋จธ์‹ ์˜ ๋ฉ”๋ชจ๋ฆฌ ๊ด€๋ฆฌ ์ตœ์ ํ™”

    Get PDF
    ํ•™์œ„๋…ผ๋ฌธ (๋ฐ•์‚ฌ)-- ์„œ์šธ๋Œ€ํ•™๊ต ๋Œ€ํ•™์› : ์ „๊ธฐยท์ปดํ“จํ„ฐ๊ณตํ•™๋ถ€, 2014. 2. ๋ฌธ์ˆ˜๋ฌต.Memory management is one of key components in virtual machine and also affects overall performance of virtual machine itself. Modern programming languages for virtual machine use dynamic memory allocation and objects are allocated dynamically to heap at a higher rate, such as Java. These allocated objects are reclaimed later when objects are not used anymore to secure free room in the heap for future objects allocation. Many virtual machines adopt garbage collection technique to reclaim dead objects in the heap. The heap can be also expanded itself to allocate more objects instead. Therefore overall performance of memory management is determined by object allocation technique, garbage collection and heap management technique. In this paper, three optimizing techniques are proposed to improve overall performance of memory management in virtual machine. First, a lazy-worst-fit object allocator is suggested to allocate small objects with little overhead in virtual machine which has a garbage collector. Then a biased allocator is proposed to improve the performance of garbage collector itself by reducing extra overhead of garbage collector. Finally an ahead-of-time heap expansion technique is suggested to improve user responsiveness as well as overall performance of memory management by suppressing invocation of garbage collection. Proposed optimizations are evaluated in various devices including desktop, embedded and mobile, with different virtual machines including Java virtual machine for Java runtime and Dalvik virtual machine for Android platform. A lazy-worst-fit allocator outperform other allocators including first-fit and lazy-worst-fit allocator and shows good fragmentation as low as rst-t allocator which is known to have the lowest fragmentation. A biased allocator reduces 4.1% of pause time caused by garbage collections in average. Ahead-of-time heap expansion reduces both number of garbage collections and total pause time of garbage collections. Pause time of GC reduced up to 31% in default applications of Android platform.Abstract i Contents iii List of Figures vi List of Tables viii Chapter 1 Introduction 1 1.1 The need of optimizing memory management . . . . . . . . . . . 2 1.2 Outline of the Dissertation . . . . . . . . . . . . . . . . . . . . . . 3 Chapter 2 Backgrounds 4 2.1 Virtual Machine . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 2.2 Memory management in virtual machine . . . . . . . . . . . . . . 5 Chapter 3 Lazy Worst Fit Allocator 7 3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 3.2 Allocation with fits . . . . . . . . . . . . . . . . . . . . . . . . . . 9 3.3 Lazy fits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 3.3.1 Lazy worst fit . . . . . . . . . . . . . . . . . . . . . . . . . 13 iii 3.4 Experimental results . . . . . . . . . . . . . . . . . . . . . . . . . 14 3.4.1 LWF implementation in the LaTTe Java virtual machine 14 3.4.2 Experimental environment . . . . . . . . . . . . . . . . . . 16 3.4.3 Performance of LWF . . . . . . . . . . . . . . . . . . . . . 17 3.4.4 Fragmentation of LWF . . . . . . . . . . . . . . . . . . . . 20 3.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 Chapter 4 Biased Allocator 24 4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 4.2 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
27 4.3 Biased allocator . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 4.3.1 When to choose an allocator . . . . . . . . . . . . . . . . 28 4.3.2 How to choose an allocator . . . . . . . . . . . . . . . . . 30 4.4 Analyses and implementation . . . . . . . . . . . . . . . . . . . . 32 4.5 Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35 4.5.1 Total pause time of garbage collections . . . . . . . . . . . 36 4.5.2 Eect of each analysis . . . . . . . . . . . . . . . . . . . . 38 4.5.3 Pause time of each garbage collection . . . . . . . . . . . 38 4.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40 Chapter 5 Ahead-of-time Heap Management 42 5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 5.2 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45 5.3 Android . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48 5.3.1 Garbage Collection . . . . . . . . . . . . . . . . . . . . . . 48 5.3.2 Heap expansion heuristic . . . . . . . . . . . . . . . . . . 49 5.4 Ahead-of-time heap expansion . . . . . . . . . . . . . . . . . . . . 51 5.4.1 Spatial heap expansion . . . . . . . . . . . . . . . . . . . . 53 iv 5.4.2 Temporal heap expansion . . . . . . . . . . . . . . . . . . 55 5.4.3 Launch-time heap expansion . . . . . . . . . . . . . . . . 56 5.5 Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 5.5.1 Spatial heap expansion . . . . . . . . . . . . . . . . . . . . 58 5.5.2 Comparision of spatial heap expansion . . . . . . . . . . . 61 5.5.3 Temporal heap expansion . . . . . . . . . . . . . . . . . . 70 5.5.4 Launch-time heap expansion . . . . . . . . . . . . . . . . 72 5.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73 Chapter 6 Conculsion 74 Bibliography 75 ์š”์•ฝ 84 Acknowledgements 86Docto
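
    For illustration, here is a minimal sketch of the lazy-worst-fit idea described in the abstract: allocation is a single pointer increment inside the current free block, and a worst-fit search of the free list happens only when that block is exhausted. The class and its free-list representation are assumptions made for this sketch, not the dissertation's implementation.

```python
# Minimal sketch of a lazy-worst-fit allocator: bump-pointer allocation inside
# the current allocation block, with a worst-fit search of the free list only
# when the block cannot satisfy a request. Illustrative free-list model, not
# the dissertation's implementation.
class LazyWorstFitAllocator:
    def __init__(self, free_blocks):
        # free_blocks: list of (start_address, size) pairs describing free heap space
        self.free_list = list(free_blocks)
        self.bump = None    # next free address inside the current allocation block
        self.limit = None   # end of the current allocation block

    def allocate(self, size):
        # Fast path (the "lazy" part): a single pointer increment, no list search.
        if self.bump is not None and self.bump + size <= self.limit:
            addr = self.bump
            self.bump += size
            return addr
        # Slow path: retire the remainder of the current block to the free list,
        # then take the largest free block (worst fit) as the new allocation block.
        if self.bump is not None and self.limit > self.bump:
            self.free_list.append((self.bump, self.limit - self.bump))
        self.bump = self.limit = None
        if not self.free_list:
            return None  # a real VM would trigger GC or heap expansion here
        start, length = max(self.free_list, key=lambda block: block[1])
        if length < size:
            return None
        self.free_list.remove((start, length))
        self.bump, self.limit = start + size, start + length
        return start

# Usage: one worst-fit refill followed by two fast-path allocations.
heap = LazyWorstFitAllocator([(0, 64), (128, 256)])
print(heap.allocate(32), heap.allocate(16), heap.allocate(100))  # 128 160 176
```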

    Overcommitment in Cloud Services -- Bin packing with Chance Constraints

    Full text link
    This paper considers a traditional resource allocation problem: scheduling jobs on machines. One recent application is cloud computing, where jobs arrive online with capacity requirements and must be scheduled immediately on physical machines in data centers. The requested capacities are often not fully utilized, which offers an opportunity to employ an overcommitment policy, i.e., selling resources beyond capacity. Setting the right overcommitment level can yield a significant cost reduction for the cloud provider while inducing only a very low risk of violating capacity constraints. We introduce and study a model that quantifies the value of overcommitment by casting the problem as bin packing with chance constraints. We then propose an alternative formulation that transforms each chance constraint into a submodular function. We show that our model captures the risk-pooling effect and can guide scheduling and overcommitment decisions. We also develop a family of online algorithms that are intuitive, easy to implement, and provide a constant-factor guarantee relative to the optimum. Finally, we calibrate our model using realistic workload data and test our approach in a practical setting. Our analysis and experiments illustrate the benefit of overcommitment in cloud services and suggest a cost reduction of 1.5% to 17%, depending on the provider's risk tolerance.
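
    To illustrate the flavor of such a policy, the sketch below places arriving jobs with a first-fit rule subject to an approximate chance constraint, assuming independent Gaussian utilizations. It is not the paper's submodular formulation or its constant-factor online algorithm; the function and parameter names are illustrative.

```python
# Sketch in the spirit of chance-constrained bin packing: place each arriving
# job on the first machine whose chance constraint P(load > capacity) <= epsilon
# still holds, approximated with a Gaussian tail bound over independent job
# utilizations. Not the paper's formulation or its guaranteed algorithms.
import math
from statistics import NormalDist

def first_fit_with_chance_constraint(jobs, capacity, epsilon):
    """jobs: iterable of (mean_usage, variance) pairs arriving online."""
    z = NormalDist().inv_cdf(1.0 - epsilon)   # quantile for the risk tolerance
    machines = []                             # each machine: [sum of means, sum of variances]
    for mean, var in jobs:
        placed = False
        for m in machines:
            # Chance constraint: mean load plus z pooled standard deviations must fit.
            if m[0] + mean + z * math.sqrt(m[1] + var) <= capacity:
                m[0] += mean
                m[1] += var
                placed = True
                break
        if not placed:
            machines.append([mean, var])      # open a new machine
    return len(machines)

# Usage: 20 jobs, each with mean utilization 0.2 and variance 0.01, unit-capacity machines.
print(first_fit_with_chance_constraint([(0.2, 0.01)] * 20, capacity=1.0, epsilon=0.05))
```

    With these numbers the pooled constraint packs three jobs per machine (7 machines in total), whereas reserving mean plus 1.645 standard deviations for every job separately would allow only two per machine; that gap is the risk-pooling effect the model is meant to capture.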

    GNA: new framework for statistical data analysis

    Full text link
    We report on the status of GNA, a new framework for fitting large-scale physical models. GNA utilizes the data-flow concept, within which a model is represented by a directed acyclic graph. Each node is an operation on an array (matrix multiplication, derivative or cross-section calculation, etc.). The framework enables the user to create flexible and efficient large-scale lazily evaluated models, handle large numbers of parameters, propagate parameter uncertainties while taking possible correlations between them into account, fit models, and perform statistical analysis. The main goal of the paper is to give an overview of the main concepts and methods, as well as the reasons behind their design. Detailed technical information is to be published in further works. (Comment: 9 pages, 3 figures, CHEP 2018, submitted to EPJ Web of Conferences)
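
    A minimal sketch of the lazily evaluated data-flow idea: each node caches its output and is recomputed only on demand, after an upstream parameter change marks it stale. The classes below are assumptions made for illustration and are not GNA's API.

```python
# Lazily evaluated data-flow graph sketch: nodes cache results, invalidation
# propagates downstream, and recomputation happens only when a value is pulled.
class Node:
    def __init__(self, func, *inputs):
        self.func = func
        self.inputs = list(inputs)
        self.dependents = []
        self._cache = None
        self._stale = True
        for node in self.inputs:
            node.dependents.append(self)

    def invalidate(self):
        # Propagate staleness downstream; nothing is recomputed yet (laziness).
        if not self._stale:
            self._stale = True
            for node in self.dependents:
                node.invalidate()

    def value(self):
        # Recompute on demand, pulling values from upstream nodes as needed.
        if self._stale:
            self._cache = self.func(*(n.value() for n in self.inputs))
            self._stale = False
        return self._cache

class Parameter(Node):
    def __init__(self, value):
        super().__init__(lambda: value)

    def set(self, value):
        self.func = lambda: value
        self.invalidate()

# Usage: only nodes downstream of a changed parameter are recomputed.
norm = Parameter(2.0)
spectrum = Node(lambda n: [n * x for x in (1.0, 2.0, 3.0)], norm)
print(spectrum.value())   # computed once: [2.0, 4.0, 6.0]
norm.set(3.0)             # marks spectrum stale; recomputed on next access
print(spectrum.value())   # [3.0, 6.0, 9.0]
```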

    A Lazy Approach for Supporting Nested Transactions

    Get PDF
    Transactional memory (TM) is a compelling alternative to traditional synchronization, and implementing TM primitives directly in hardware offers a potential performance advantage over software-based methods. In this paper, we demonstrate that many of the actions associated with transaction abort and commit may be performed lazily -- that is, incrementally and on demand. This technique is well suited to hardware, since it requires little space or work; in addition, it can improve performance by sparing accesses to committing or aborting locations from having to stall until the commit or abort completes. We further show that our lazy abort and commit technique supports open nesting and orElse, two language-level proposals that rely on transactional nesting. We also provide design notes for supporting lazy abort and commit on our own hardware TM system, based on VTM.
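
    A rough software analogue of lazy abort and commit is sketched below: committing or aborting a transaction only flips a status flag, and each written location is finalized on demand the next time it is touched. This is purely illustrative; the paper describes a hardware design based on VTM, and none of the classes here come from it.

```python
# Software analogue of lazy abort/commit: each word a transaction wrote keeps
# the old and the speculative value plus a pointer to the writer's descriptor.
# A later access resolves the word from the writer's final status instead of
# waiting for an eager cleanup pass. Illustrative only, not a hardware design.
ACTIVE, COMMITTED, ABORTED = "active", "committed", "aborted"

class TxDescriptor:
    def __init__(self):
        self.status = ACTIVE

class Word:
    def __init__(self, value):
        self.value = value          # last finalized value
        self.writer = None          # descriptor of the pending transactional writer
        self.old = None             # value to restore if the writer aborted
        self.speculative = None     # value to install if the writer committed

    def tx_write(self, tx, new_value):
        self.resolve()              # lazily finalize any previous writer first
        self.writer, self.old, self.speculative = tx, self.value, new_value

    def resolve(self):
        # The lazy step: commit or abort is applied only when the word is touched.
        if self.writer is not None and self.writer.status != ACTIVE:
            self.value = self.speculative if self.writer.status == COMMITTED else self.old
            self.writer = None

    def read(self):
        self.resolve()
        return self.speculative if self.writer is not None else self.value

# Usage: commit is a single status flip; the word is cleaned up on later access.
w = Word(0)
tx = TxDescriptor()
w.tx_write(tx, 42)
tx.status = COMMITTED   # O(1) commit; no eager write-back
print(w.read())         # 42, finalized lazily on this access
```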

    Automated data reduction workflows for astronomy

    Full text link
    Data from complex modern astronomical instruments often consist of a large number of different science and calibration files, and their reduction requires a variety of software tools. The execution chain of the tools represents a complex workflow that needs to be tuned and supervised, often by individual researchers who are not necessarily experts in any specific instrument. The efficiency of data reduction can be improved by using automatic workflows to organise data and execute the sequence of data reduction steps. To realize such efficiency gains, we designed a system that allows intuitive representation, execution and modification of the data reduction workflow, and has facilities for inspection of and interaction with the data. The European Southern Observatory (ESO) has developed Reflex, an environment to automate data reduction workflows. Reflex is implemented as a package of customized components for the Kepler workflow engine. Kepler provides the graphical user interface to create an executable flowchart-like representation of the data reduction process. Key features of Reflex are a rule-based data organiser, infrastructure to re-use results, thorough book-keeping, data progeny tracking, interactive user interfaces, and a novel concept to exploit information created during data organisation for the workflow execution. Reflex includes novel concepts to increase the efficiency of astronomical data processing. While Reflex is a specific implementation of astronomical scientific workflows within the Kepler workflow engine, the overall design choices and methods can also be applied to other environments for running automated science workflows. (Comment: 12 pages, 7 figures)
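
    As a small illustration of the result re-use idea, the sketch below keys each workflow step by a hash of its inputs and parameters and skips recomputation when a matching product already exists. The cache location, file naming, and the example step are assumptions for the sketch, not Reflex's implementation.

```python
# Result re-use in an automated reduction workflow: each step's product is
# stored under a hash of (step name, inputs, parameters), so re-running the
# workflow skips steps whose products already exist. Illustrative only.
import hashlib, json, os, pickle

CACHE_DIR = "reflex_like_cache"   # hypothetical cache location

def run_step(name, func, inputs, params):
    os.makedirs(CACHE_DIR, exist_ok=True)
    key = hashlib.sha256(
        json.dumps([name, inputs, params], sort_keys=True, default=str).encode()
    ).hexdigest()
    path = os.path.join(CACHE_DIR, f"{name}-{key[:12]}.pkl")
    if os.path.exists(path):          # book-keeping hit: re-use the earlier product
        with open(path, "rb") as f:
            return pickle.load(f)
    result = func(inputs, params)     # run the actual reduction recipe
    with open(path, "wb") as f:       # record the product for later re-use
        pickle.dump(result, f)
    return result

# Usage: a trivial "bias subtraction" step, re-used unchanged on a second run.
product = run_step("bias_subtract",
                   lambda ins, ps: [x - ps["bias"] for x in ins],
                   inputs=[10, 12, 15], params={"bias": 2})
print(product)  # [8, 10, 13]
```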