3 research outputs found

    Exploiting the Weak Generational Hypothesis for Write Reduction and Object Recycling

    Programming languages with automatic memory management continue to grow in popularity because of their ease of programming. However, these languages tend to allocate objects excessively, leading to inefficient use of memory and large garbage collection and allocation overheads. The weak generational hypothesis notes that objects tend to die young in languages with automatic dynamic memory management. Much work has been done to optimize allocation and garbage collection algorithms based on this observation, but it has largely focused on efficient software algorithms; far less work has studied architectural solutions. In this work, we propose and evaluate architectural support for assisting allocation and garbage collection. We first study the effects of languages with automatic memory management on the memory system. Because objects often die young, it is likely that many objects die while still resident in the processor's caches. Writing dead data back to main memory is unnecessary, since the data will never be used again. To study this, we develop and present architectural support that identifies dead objects while they remain resident in cache and eliminates the unnecessary writes. We show that many writes out of the caches are unnecessary and can be avoided using our hardware additions. Next, we study how dead data in the cache can assist allocation and garbage collection. We develop and present logic that reuses cache space found to be dead to satisfy future allocation requests, and we show that dead cache space can be recycled at a high rate, reducing pressure on the allocator and reducing cache miss rates. However, a full implementation of our initial approach proves unscalable, so we propose and study restricted variants that trade object coverage for scalability. Third, we present a new approach for identifying objects that die young, motivated by a limitation of our previous approach. This approach has much lower storage and logic requirements and is scalable, while only slightly decreasing overall object coverage.
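
    To make the write-elimination idea concrete, the following is a minimal software sketch of the mechanism described above, not the paper's hardware design; all names (DeadWriteEliminationModel, CacheLine, markDead, evict) are illustrative. A dirty cache line whose resident objects are all known to be dead is silently invalidated on eviction instead of being written back to memory.

        // Hypothetical software model of dead-write elimination; in the paper
        // this tracking would be done in hardware alongside the cache.
        import java.util.HashMap;
        import java.util.Map;

        class DeadWriteEliminationModel {
            static final class CacheLine {
                final long tag;
                boolean dirty;
                boolean allObjectsDead; // set when every object on the line has died
                CacheLine(long tag) { this.tag = tag; }
            }

            private final Map<Long, CacheLine> lines = new HashMap<>();
            long writebacks, silentDrops;

            void write(long addr) {
                // 64-byte lines: the tag is the address shifted right by 6.
                CacheLine line = lines.computeIfAbsent(addr >>> 6, CacheLine::new);
                line.dirty = true;
                line.allObjectsDead = false;
            }

            // Called when the runtime (or the proposed hardware) learns that
            // every object resident on the line is dead.
            void markDead(long addr) {
                CacheLine line = lines.get(addr >>> 6);
                if (line != null) line.allObjectsDead = true;
            }

            // On eviction, a dirty line holding only dead data is dropped
            // silently; the write back to main memory is avoided.
            void evict(long addr) {
                CacheLine line = lines.remove(addr >>> 6);
                if (line == null || !line.dirty) return;
                if (line.allObjectsDead) silentDrops++;
                else writebacks++;
            }

            public static void main(String[] args) {
                DeadWriteEliminationModel cache = new DeadWriteEliminationModel();
                cache.write(0x1000);    // young object written into the cache
                cache.markDead(0x1000); // the object dies while still cached
                cache.evict(0x1000);    // eviction drops the line, no writeback
                System.out.println("writebacks=" + cache.writebacks
                        + " silentDrops=" + cache.silentDrops);
            }
        }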

    Exploration of Dynamic Memory

    Since the advent of the Java programming language and the development of real-time garbage collection, Java has become an option for implementing real-time applications. The memory management choices provided by real-time garbage collection allow real-time Java developers to spend more of their time implementing real-time solutions. Unfortunately, the real-time community is not convinced that real-time garbage collection works in managing memory for Java applications deployed in a real-time context. Consequently, the Real-Time for Java Expert Group formulated the Real-Time Specification for Java (RTSJ) standard to make Java a real-time programming language. In lieu of garbage collection, the RTSJ proposed a new memory model called scopes, and a new type of thread called the NoHeapRealTimeThread (NHRT), which takes advantage of scopes. While scopes and NHRTs promise predictable allocation and deallocation behavior, no asymptotic studies have been conducted to investigate the costs associated with these technologies. To understand those costs, computations and analyses of the time and space overheads associated with scopes and NHRTs are presented. These results provide a framework for comparing the RTSJ's memory management model with real-time garbage collection. Another facet of this research concerns the optimization of novel approaches to garbage collection on multiprocessor systems. Such approaches yield features that are suitable for real-time systems. Although multiprocessor, concurrent garbage collection is not the same as real-time garbage collection, advances in multiprocessor concurrent garbage collection have demonstrated the feasibility of building low-latency multiprocessor real-time garbage collectors. In the nineteen-sixties, only three garbage collection schemes were available: reference counting, mark-sweep, and copying garbage collection. These classical approaches gave new insight into the discipline of memory management and inspired researchers to develop new, more elaborate memory-management techniques, resulting in a plethora of automatic memory management algorithms and a lack of uniformity in the language used to reason about garbage collection. To bring a sense of uniformity to that language, a taxonomy for comparing garbage collection technologies is presented.
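
    For readers unfamiliar with the constructs named above, the following is a minimal sketch of the scoped-memory pattern, assuming the standard javax.realtime API (RTSJ 1.0); the sizes and priority value are illustrative, and exact behaviour varies across RTSJ implementations.

        // Hedged sketch: allocate inside a scope from a NoHeapRealtimeThread.
        // Objects created in the scope are reclaimed in bulk when the scope's
        // reference count drops to zero -- no garbage collector is involved.
        import javax.realtime.ImmortalMemory;
        import javax.realtime.LTMemory;
        import javax.realtime.NoHeapRealtimeThread;
        import javax.realtime.PriorityParameters;
        import javax.realtime.ScopedMemory;

        public class ScopeDemo {
            public static void main(String[] args) {
                // An NHRT must itself live outside the garbage-collected heap,
                // so it is constructed from within immortal memory.
                ImmortalMemory.instance().enter(new Runnable() {
                    @Override public void run() {
                        final ScopedMemory scope = new LTMemory(16 * 1024, 64 * 1024);
                        NoHeapRealtimeThread nhrt = new NoHeapRealtimeThread(
                                new PriorityParameters(30), scope) {
                            @Override public void run() {
                                // run() executes with `scope` as its allocation
                                // context: this buffer lives in scoped memory
                                // and is reclaimed when the last thread leaves.
                                byte[] buffer = new byte[1024];
                                buffer[0] = 42;
                            }
                        };
                        nhrt.start();
                    }
                });
            }
        }

    Because the NHRT never references heap objects, it can preempt the collector at any time; this is the predictability argument the abstract weighs against the measured time and space overheads of scopes.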

    Investigation of design and execution alternatives for the committed choice non-deterministic logic languages

    The general area of developing, applying, and studying new and parallel models of computation is motivated by the need to overcome the limits of current von Neumann-based architectures. A key area of research in understanding how new technology can be applied to AI problem solving is the use of logic languages. Logic programming languages provide a procedural interpretation for sentences of first-order logic, mainly using a class of sentence called Horn clauses. Horn clauses are open to a wide variety of parallel evaluation models, offering possible speed-ups and alternative parallel models of execution. The research in this thesis investigates one class of parallel logic language known as the Committed Choice Non-Deterministic (CCND) languages. The investigation considers the inherent parallel behaviour of AI programs implemented in the CCND languages and the effect of the various alternatives open to language implementors and designers. This is achieved by considering how various AI programming techniques map to alternative language designs, and how these AI programs behave on alternative implementations of these languages. The aim of this work is to investigate how AI programming techniques are affected, qualitatively and quantitatively, by particular language features. The qualitative evaluation considers how AI programs can be mapped to the various CCND languages. The applications considered are general search algorithms (which focus on the committed-choice nature of the languages); chart parsing (which focuses on the differences between safe and unsafe languages); and meta-level inference (which focuses on the difference between deep and flat languages). The quantitative evaluation considers the inherent parallel behaviour of the resulting programs and the effect of possible implementation alternatives on this behaviour. To carry out this quantitative evaluation we have implemented a system which improves on current interpreter-based evaluation systems. The new system has an improved model of execution and allows several…
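
    To make the committed-choice evaluation rule concrete, here is a hypothetical sketch (none of these names come from the thesis): each candidate clause pairs a guard with a body, guards may be evaluated in parallel, and the goal commits irrevocably to one clause whose guard succeeds, discarding the remaining candidates rather than backtracking into them.

        import java.util.List;
        import java.util.Optional;
        import java.util.function.Predicate;

        // A guarded clause: `guard` is tested against the goal; on commitment
        // the `body` runs and all sibling clauses are abandoned.
        final class GuardedClause<G> {
            final Predicate<G> guard;
            final Runnable body;
            GuardedClause(Predicate<G> guard, Runnable body) {
                this.guard = guard;
                this.body = body;
            }
        }

        final class CommittedChoice {
            // Evaluate guards (possibly concurrently) and commit to any one
            // clause whose guard succeeds -- there is no backtracking.
            static <G> void solve(G goal, List<GuardedClause<G>> clauses) {
                Optional<GuardedClause<G>> committed = clauses.parallelStream()
                        .filter(c -> c.guard.test(goal))
                        .findAny(); // non-deterministic, irrevocable choice
                committed.ifPresentOrElse(
                        c -> c.body.run(),
                        () -> { throw new IllegalStateException("no clause commits"); });
            }

            public static void main(String[] args) {
                solve(7, List.of(
                        new GuardedClause<Integer>(n -> n % 2 == 0,
                                () -> System.out.println("even branch")),
                        new GuardedClause<Integer>(n -> n % 2 != 0,
                                () -> System.out.println("odd branch"))));
            }
        }

    findAny() on a parallel stream mirrors the don't-care non-determinism of the CCND languages: any clause whose guard succeeds may be chosen, and the choice is never revisited.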