8 research outputs found

    Characterization and reduction of memory usage in 64-bit Java Virtual Machines


    Profiling Techniques for Performance Analysis and Optimization of Java Programs (Profileringstechnieken voor prestatieanalyse en optimalisatie van Javaprogramma's)


    Subheap-Augmented Garbage Collection

    Automated memory management avoids the tedium and danger of manual techniques. However, as no programmer input is required, no widely available interface exists to permit principled control over sometimes unacceptable performance costs. This dissertation explores the idea that performance-oriented languages should give programmers greater control over where and when the garbage collector (GC) expends effort. We describe an interface and implementation to expose heap partitioning and collection decisions without compromising type safety. We show that our interface allows the programmer to encode a form of reference counting using Hayes' notion of key objects. Preliminary experimental data suggests that our proposed mechanism can avoid high overheads suffered by tracing collectors in some scenarios, especially with tight heaps. However, for other applications, the costs of applying subheaps, in human effort and runtime overheads, remain daunting.
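
    A minimal, purely illustrative Java sketch of what such a programmer-facing subheap interface could look like is given below. The names Subheap, allocate, and collect are hypothetical, not the dissertation's actual API, and a real implementation would live inside the VM rather than in a library; here a subheap is merely simulated as a tracked group of objects that the programmer releases all at once.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Toy, library-level sketch of the idea only: the names and behavior are assumptions,
// not the interface described in the dissertation.
final class Subheap {
    private final List<Object> contents = new ArrayList<>();

    // Route an allocation into this subheap by remembering the new object here.
    <T> T allocate(Supplier<T> ctor) {
        T obj = ctor.get();
        contents.add(obj);
        return obj;
    }

    // Programmer-chosen collection point: drop the whole partition at once,
    // making everything in it unreachable so the underlying GC can reclaim it.
    void collect() {
        contents.clear();
    }
}

class SubheapDemo {
    public static void main(String[] args) {
        Subheap temp = new Subheap();
        byte[] scratch = temp.allocate(() -> new byte[1 << 20]); // short-lived scratch data
        System.out.println("scratch length = " + scratch.length);
        temp.collect(); // reclaim the whole partition when the phase ends
    }
}
```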

    Optimizing Memory Management in Virtual Machines (가상머신의 메모리 관리 최적화)

    Doctoral dissertation, Seoul National University Graduate School, Department of Electrical and Computer Engineering, February 2014 (advisor: Soo-Mook Moon). Memory management is one of the key components of a virtual machine and strongly affects the virtual machine's overall performance. Modern languages that run on virtual machines, such as Java, rely on dynamic memory allocation, and objects are allocated on the heap at a high rate. Allocated objects are later reclaimed once they are no longer used, to free room in the heap for future allocations. Many virtual machines adopt garbage collection to reclaim dead objects in the heap; alternatively, the heap can be expanded to make room for more objects. The overall performance of memory management is therefore determined by the object allocation technique, the garbage collector, and the heap management policy. This dissertation proposes three optimizations that improve the overall performance of memory management in a virtual machine. First, a lazy-worst-fit object allocator allocates small objects with little overhead in a virtual machine that has a garbage collector. Second, a biased allocator improves the performance of the garbage collector itself by reducing its extra overhead. Finally, an ahead-of-time heap expansion technique improves user responsiveness as well as the overall performance of memory management by suppressing garbage collection invocations. The proposed optimizations are evaluated on desktop, embedded, and mobile devices, using both the Java virtual machine for the Java runtime and the Dalvik virtual machine for the Android platform. The lazy-worst-fit allocator outperforms other allocators such as first-fit, and shows fragmentation as low as the first-fit allocator, which is known to have the lowest fragmentation. The biased allocator reduces garbage collection pause time by 4.1% on average. Ahead-of-time heap expansion reduces both the number of garbage collections and their total pause time; GC pause time is reduced by up to 31% in the default applications of the Android platform.

    Contents: Chapter 1 Introduction; Chapter 2 Backgrounds; Chapter 3 Lazy Worst Fit Allocator; Chapter 4 Biased Allocator; Chapter 5 Ahead-of-time Heap Management; Chapter 6 Conclusion.
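
    As a rough illustration of the lazy-worst-fit allocation described above, the Java sketch below bump-allocates inside the current allocation area and, only when a request does not fit, falls back to the largest remaining free chunk (worst fit). It is a simplified model under stated assumptions, not the VM-internal allocator evaluated in the dissertation.

```java
import java.util.PriorityQueue;

// Simplified sketch of lazy worst fit: fast bump-pointer allocation, with a lazy
// fallback to the largest free chunk only when the current area is exhausted.
final class LazyWorstFitAllocator {
    private static final class Chunk {
        final long start, end;
        Chunk(long start, long end) { this.start = start; this.end = end; }
        long size() { return end - start; }
    }

    // Worst fit: always hand out the largest free chunk first.
    private final PriorityQueue<Chunk> freeChunks =
            new PriorityQueue<>((a, b) -> Long.compare(b.size(), a.size()));
    private long top, limit;   // current allocation area: bump pointer and its end

    LazyWorstFitAllocator(long areaStart, long areaEnd) {
        top = areaStart;
        limit = areaEnd;
    }

    void addFreeChunk(long start, long end) { freeChunks.add(new Chunk(start, end)); }

    // Returns the address of the new object, or -1 if no chunk is large enough
    // (which is where a real VM would trigger garbage collection or heap expansion).
    long allocate(long size) {
        if (top + size <= limit) {              // fast path: just bump the pointer
            long addr = top;
            top += size;
            return addr;
        }
        Chunk next = freeChunks.peek();         // lazy fallback: largest free chunk
        if (next == null || next.size() < size) return -1;
        freeChunks.poll();
        if (limit - top > 0) addFreeChunk(top, limit);  // return the unused leftover area
        top = next.start;
        limit = next.end;
        return allocate(size);                  // retry inside the new allocation area
    }

    public static void main(String[] args) {
        LazyWorstFitAllocator heap = new LazyWorstFitAllocator(0, 1024);
        heap.addFreeChunk(4096, 8192);           // a second, larger free chunk
        System.out.println(heap.allocate(512));  // 0: served by bump allocation
        System.out.println(heap.allocate(1024)); // 4096: falls back to the worst-fit chunk
    }
}
```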

    A Method of Efficient and Verifiable Program Annotation for the Transport of Escape Information (Eine Methode der effizienten und verifizierbaren Programmannotation für den Transport von Escape-Informationen)

    JIT compilation is frequently employed to speed up the execution of platform-independent and dynamically extensible mobile code applications. Since the time required for dynamic compilation directly adds to a program's execution time, JIT compilers usually rely on simple and fast techniques for program analysis and optimization. Program annotations can improve a JIT compiler's analysis and optimization process: they allow a mobile code system to derive information about a program on the producer side and transmit that information along with the program to the consumer side. In this work, we present an inherently safe annotation technique for the transmission of escape information. The annotation technique described in this work is built on the SafeTSA mobile code format and is implemented as a simple extension of SafeTSA's type system. The space required for these annotations is minimal, and measurements of compilation time show that using the results of an offline escape analysis in the form of program annotations is noticeably faster than performing the escape analysis at runtime.
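
    For illustration, the hypothetical Java example below shows the kind of allocation site that an offline escape analysis would identify and that such annotations would describe to the JIT compiler; the SafeTSA annotation encoding itself is not reproduced here, and all names are made up for this sketch.

```java
// Hypothetical example only: it illustrates what escape information expresses,
// not how SafeTSA encodes it.
final class Distance {
    private record Point(double x, double y) { }   // small helper object

    static double between(double x1, double y1, double x2, double y2) {
        Point a = new Point(x1, y1);   // 'a' and 'b' never escape this method, so a
        Point b = new Point(x2, y2);   // compiler that trusts the escape annotation may
        double dx = a.x() - b.x();     // stack-allocate or scalar-replace them instead of
        double dy = a.y() - b.y();     // allocating them on the garbage-collected heap
        return Math.sqrt(dx * dx + dy * dy);
    }

    public static void main(String[] args) {
        System.out.println(between(0, 0, 3, 4));   // prints 5.0
    }
}
```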

    Garbage Collection Hints

    This paper shows that Appel-style garbage collectors often make suboptimal decisions both in terms of when and how to collect.

    GCH: Hints for triggering garbage collection

    This paper shows that Appel-style garbage collectors often make suboptimal decisions both in terms of when and how to collect. We argue that garbage collection should be done when the amount of live bytes is low (in order to minimize the collection cost) and when the amount of dead objects is high (in order to maximize the available heap size after collection). In addition, we observe that Appel-style collectors sometimes trigger a nursery collection in cases where a full-heap collection would have been better. Based on these observations, we propose garbage collection hints (GCH), a profile-directed method for guiding garbage collection. Offline profiling is used to identify favorable collection points in the program code. At those favorable collection points, the garbage collector dynamically chooses between nursery and full-heap collections based on an analytical garbage collector cost-benefit model. By doing so, GCH guides the collector in terms of when and how to collect. Experimental results using the SPECjvm98 benchmarks and two generational garbage collectors show that substantial reductions can be obtained in garbage collection time (up to 29X) and that the overall execution time can be reduced by more than 10%. In addition, we also show that GCH reduces the maximum pause times and outperforms user-inserted forced garbage collections.
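
    As a sketch of the decision GCH makes at a favorable collection point, the Java snippet below compares a rough cost-benefit estimate for a nursery collection against one for a full-heap collection; it is a simplified stand-in for the paper's analytical model, and all names, inputs, and thresholds here are assumptions for illustration.

```java
// Simplified stand-in for the GCH cost-benefit decision; the paper's analytical model
// and its profiled inputs are more detailed than this sketch.
final class GcHint {
    enum Decision { SKIP, NURSERY, FULL_HEAP }

    // Estimates for the current favorable collection point (e.g. derived from offline profiling).
    static Decision decide(long liveNurseryBytes, long liveOldBytes,
                           long reclaimableNurseryBytes, long reclaimableOldBytes,
                           double copyCostPerByte, double traceCostPerByte) {
        double nurseryCost  = liveNurseryBytes * copyCostPerByte;                   // copy survivors
        double fullHeapCost = (liveNurseryBytes + liveOldBytes) * traceCostPerByte; // trace whole heap

        // Benefit: bytes freed per unit of collection work (guard against division by zero).
        double nurseryBenefit  = reclaimableNurseryBytes / Math.max(nurseryCost, 1.0);
        double fullHeapBenefit = (reclaimableNurseryBytes + reclaimableOldBytes)
                                 / Math.max(fullHeapCost, 1.0);

        if (Math.max(nurseryBenefit, fullHeapBenefit) < 1.0) return Decision.SKIP;  // not worth it yet
        return fullHeapBenefit > nurseryBenefit ? Decision.FULL_HEAP : Decision.NURSERY;
    }

    public static void main(String[] args) {
        // Little live data but many dead old-generation objects favors a full-heap collection.
        System.out.println(decide(2_000_000, 10_000_000, 30_000_000, 200_000_000, 2.0, 1.5));
    }
}
```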