
    Concurrent Compaction in JVM Garbage Collection

    This paper provides a brief overview of both garbage collection (GC) of memory and parallel processing. We then cover how parallel processing applies to GC. Specifically, these concepts are focused within the context of the Java Virtual Machine (JVM). With that foundation, we look at various algorithms that perform compaction of fragmented memory during the GC process. These algorithms are designed to run concurrently with the running application. Such concurrently compacting GC behavior stems from a desire to reduce "stop-the-world" pauses of an application.

    Numerical model for granular compaction under vertical tapping

    A simple numerical model is used to simulate the effect of vertical taps on a packing of monodisperse hard spheres. Our results are in agreement with an experimental work done in Chicago and with other previous models, especially concerning the dynamics of the compaction, the influence of the excitation strength on the compaction efficiency, and some ageing effects. The principal asset of the model is that it allows a local analysis of the packings. Vertical and transverse density profiles are used as well as size and volume distributions of the pores. An interesting result concerns the appearance of a vertical gradient in the density profiles during compaction. Furthermore, the volume distribution of the pores suggests that the smallest pores, ranging in size between a tetrahedral and an octahedral site, are not strongly affected by the tapping process, in contrast to the largest pores, which are more sensitive to the compaction of the packing. Comment: 8 pages, 15 figures (eps), to be published in Phys. Rev. E. Some corrections have been made, especially in paragraph IV
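    The slow compaction dynamics summarized above can be illustrated with a toy model. The sketch below is not the paper's packing model but the simpler one-dimensional "parking-lot" model often used to mimic tapped granular compaction; the function name and all parameter values are illustrative assumptions.

```python
import random

def simulate_tapping(length=100.0, taps=200, desorb_frac=0.1,
                     attempts_per_tap=200, seed=1):
    """Parking-lot model of tapped compaction (illustrative sketch).

    Unit-length 'grains' adsorb at random positions on a line of the
    given length, provided they do not overlap grains already parked.
    Each 'tap' ejects a random fraction of the grains and then lets
    grains rain back down, so the packing slowly reorganizes toward
    higher density.  Returns the packing fraction after each tap."""
    rng = random.Random(seed)
    grains = []  # left endpoints of the unit-length grains

    def fits(x):
        # A grain at x fits if it overlaps no parked grain.
        return all(abs(x - g) >= 1.0 for g in grains)

    densities = []
    for _ in range(taps):
        # Desorption: the tap ejects a fraction of the grains.
        grains = [g for g in grains if rng.random() > desorb_frac]
        # Adsorption: grains rain back and park wherever they fit.
        for _ in range(attempts_per_tap):
            x = rng.uniform(0.0, length - 1.0)
            if fits(x):
                grains.append(x)
        densities.append(len(grains) / length)
    return densities
```

    With the desorption step playing the role of a tap, the packing fraction creeps upward far beyond a single adsorption run, echoing the slow approach to dense packing that tapping experiments and models report.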

    LogBase: A Scalable Log-structured Database System in the Cloud

    Numerous applications such as financial transactions (e.g., stock trading) are write-heavy in nature. The shift from reads to writes in web applications has also been accelerating in recent years. Write-ahead logging is a common approach for providing recovery capability while improving performance in most storage systems. However, the separation of log and application data incurs write overheads in write-heavy environments and hence adversely affects the write throughput and recovery time of the system. In this paper, we introduce LogBase - a scalable log-structured database system that adopts log-only storage for removing the write bottleneck and supporting fast system recovery. LogBase is designed to be dynamically deployed on commodity clusters to take advantage of the elastic scaling property of cloud environments. LogBase provides in-memory multiversion indexes for supporting efficient access to data maintained in the log. LogBase also supports transactions that bundle read and write operations spanning multiple records. We implemented the proposed system and compared it with HBase and a disk-based log-structured record-oriented system modeled after RAMCloud. The experimental results show that LogBase is able to provide sustained write throughput, efficient data access out of the cache, and effective system recovery. Comment: VLDB201

    A Fully Parallel LISP2 Compactor with Preservation of the Sliding Properties

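    For context, the classic serial LISP2 compactor named in the title slides live objects toward one end of the heap in three passes while preserving their address order (the "sliding" property). The sketch below shows that serial base algorithm on a toy heap model; the paper's fully parallel version is not reproduced here, and the object representation is an illustrative assumption.

```python
def lisp2_compact(objects):
    """Serial LISP2 sliding compaction on a toy heap model (a sketch
    of the base algorithm, not the paper's parallel compactor).

    `objects` is a list of dicts {'addr', 'size', 'live', 'refs'},
    sorted by address, where 'refs' holds addresses of other objects.
    Returns the compacted list of live objects."""
    # Pass 1: compute forwarding addresses in address order, so live
    # objects keep their relative order -- the sliding property.
    forward, free = {}, 0
    for obj in objects:
        if obj['live']:
            forward[obj['addr']] = free
            free += obj['size']
    # Pass 2: rewrite every reference through the forwarding table.
    for obj in objects:
        if obj['live']:
            obj['refs'] = [forward[r] for r in obj['refs']]
    # Pass 3: slide each live object down to its forwarding address.
    compacted = []
    for obj in objects:
        if obj['live']:
            obj['addr'] = forward[obj['addr']]
            compacted.append(obj)
    return compacted
```

    Parallelizing these passes while keeping the sliding order intact is exactly the difficulty the paper's title points at; this sketch only fixes the serial baseline in mind.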

    Utilizing the Linux Userfaultfd System Call in a Compaction Phase of a Garbage Collection Process

    This publication describes techniques for utilizing the Linux userfaultfd system call in a garbage collection process performed concurrently with the execution of application threads (mutators) in a software application. During the garbage collection process, a stop-the-world pause occurs where currently mapped physical pages of a heap are moved to a temporary location (e.g., temp-space) and a new memory range of the heap is registered with userfaultfd. During a concurrent compaction phase of the garbage collection process, if a mutator accesses an area (e.g., a to-space page) in the heap that has not yet been processed by the garbage collector thread, and thus does not have a page allocated, the mutator will receive a SIGBUS signal (bus error) indicating a page fault. In response to the registered page fault, a page buffer (e.g., a 4KB page buffer) is created. All the reachable objects that should be located on the missing page are copied to the page buffer, and the references inside these objects are updated to the corresponding new addresses. Finally, the userfaultfd input/output control (ioctl) system call is invoked from user space to hand over the page buffer to the kernel, including an indication of the page to make visible on the faulting address. In response, the kernel can copy the contents of the page buffer to a page and map that page at the faulting address.
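    The steps above can be mimicked in plain code. The following is a pure-Python simulation of the protocol's shape - lazy, exactly-once materialization of to-space pages on first access - and deliberately does not invoke the real mechanism (the Linux userfaultfd system call, SIGBUS delivery, and the UFFDIO_COPY ioctl); the class and method names are hypothetical.

```python
class SimulatedConcurrentCompactor:
    """Simulation of the userfaultfd-driven compaction flow described
    above: pages of the new heap range start unmapped, the first
    access to a page plays the role of the SIGBUS page fault, and
    'handing the page buffer to the kernel' is modeled by installing
    the buffer into the mapping."""

    PAGE_SIZE = 4096  # granularity of the page buffer described above

    def __init__(self, temp_space):
        self.temp_space = temp_space  # old page contents moved to temp-space
        self.mapped = {}              # page index -> materialized bytes

    def _handle_fault(self, page_idx):
        # Stand-in for the handler's GC work: copy the reachable
        # objects belonging to this page into a fresh page buffer and
        # fix up their references (elided in this toy model).
        page_buffer = bytes(self.temp_space[page_idx])
        # Stand-in for the UFFDIO_COPY ioctl: the buffer is mapped at
        # the faulting address, resolving the fault exactly once.
        self.mapped[page_idx] = page_buffer

    def read_byte(self, page_idx, offset):
        if page_idx not in self.mapped:  # unallocated page: the 'SIGBUS' path
            self._handle_fault(page_idx)
        return self.mapped[page_idx][offset]
```

    The key property the simulation preserves is that a mutator never observes a half-built page: the page becomes visible only after its buffer is fully assembled, which is what the real handoff to the kernel guarantees.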

    On-line construction of position heaps

    We propose a simple linear-time on-line algorithm for constructing a position heap for a string [Ehrenfeucht et al, 2011]. Our definition of position heap differs slightly from the one proposed in [Ehrenfeucht et al, 2011] in that it considers the suffixes ordered from left to right. Our construction is based on classic suffix pointers and resembles Ukkonen's algorithm for suffix trees [Ukkonen, 1995]. Using suffix pointers, the position heap can be extended into the augmented position heap that allows for a linear-time string matching algorithm [Ehrenfeucht et al, 2011]. Comment: to appear in Journal of Discrete Algorithm
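    The construction just described can be sketched naively: suffixes are inserted left to right, and each insertion walks down existing edges and adds exactly one node labelled with the suffix's starting position. This is a quadratic sketch, not the paper's linear-time suffix-pointer algorithm; the sentinel character and dictionary representation are implementation choices of the sketch.

```python
def build_position_heap(s):
    """Naive left-to-right construction of a position heap for s
    (quadratic sketch; the paper achieves linear time with suffix
    pointers).  A sentinel is appended so every insertion adds
    exactly one node, labelled with the suffix's start position."""
    t = s + "\0"  # unique terminator (an assumption of this sketch)
    root = {"label": None, "children": {}}
    for i in range(len(t)):
        node, j = root, i
        while t[j] in node["children"]:  # walk down existing edges
            node = node["children"][t[j]]
            j += 1
        node["children"][t[j]] = {"label": i, "children": {}}
    return root

def find_occurrences(heap, s, p):
    """All start positions of p in s via the position heap: labels met
    while walking down the pattern are candidates verified by direct
    comparison; once the whole pattern is consumed, every label in the
    remaining subtree is an occurrence, no verification needed."""
    occs, node = [], heap
    for depth, c in enumerate(p, 1):
        node = node["children"].get(c)
        if node is None:
            return sorted(occs)          # walk fell off: only candidates
        i = node["label"]
        if depth < len(p) and s[i:i + len(p)] == p:
            occs.append(i)               # verified path candidate
    stack = [node]                       # pattern fully consumed here
    while stack:
        v = stack.pop()
        occs.append(v["label"])          # subtree labels all match
        stack.extend(v["children"].values())
    return sorted(occs)
```

    Path labels must be verified because their nodes sit at depth smaller than the pattern length, so only a prefix of the pattern is certified by the trie; subtree labels sit at depth at least the pattern length, so their suffixes are guaranteed to start with the pattern.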