    Oil and Water? High Performance Garbage Collection in Java with MMTk

    Increasingly popular languages such as Java and C# require efficient garbage collection. This paper presents the design, implementation, and evaluation of MMTk, a Memory Management Toolkit for and in Java. MMTk is an efficient, composable, extensible, and portable framework for building garbage collectors. MMTk uses design patterns and compiler cooperation to combine modularity and efficiency. The resulting system is more robust, easier to maintain, and has fewer defects than monolithic collectors. Experimental comparisons with monolithic Java and C implementations reveal that MMTk has significant performance advantages as well. Performance-critical system software typically uses monolithic C at the expense of flexibility. Our results refute the common wisdom that only this approach attains efficiency, and suggest that performance-critical software can embrace modular design and high-level languages.
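
    As a rough illustration of the composition the abstract describes, the following Java sketch separates an allocation mechanism from a collection policy behind small interfaces. The interface and class names are hypothetical, not the real MMTk API.

        // Illustrative sketch only: a plan composed from an allocator and a
        // collection policy, in the spirit of the toolkit design described above.
        interface Allocator {
            int alloc(int bytes);                 // offset of a fresh cell, or -1 if full
        }

        interface Policy {
            boolean poll(int bytesRequested);     // true when a collection should run
            void collect();
        }

        final class Plan {
            private final Allocator allocator;
            private final Policy policy;
            Plan(Allocator allocator, Policy policy) {
                this.allocator = allocator;
                this.policy = policy;
            }
            int allocate(int bytes) {
                if (policy.poll(bytes)) policy.collect();
                return allocator.alloc(bytes);    // the same Plan works for any pairing
            }
        }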

    Memory Management in the PoSSo Solver

    A uniform general-purpose garbage collector may not always provide optimal performance. Sometimes an algorithm exhibits a predictable pattern of memory usage that could be exploited, delaying as much as possible the intervention of the collector. This requires a collector whose strategy can be customized to the needs of an algorithm. We present a dynamic memory management framework which allows such customization, while preserving the convenience of automatic collection in the normal case. The Customizable Memory Management (CMM) framework organizes memory in multiple heaps, each one encapsulating a particular storage discipline. The default heap for collectable objects uses the technique of mostly-copying garbage collection, providing good performance and memory compaction. Customization of the collector is achieved through object orientation, by specialising the collector methods for each heap class. We describe how the CMM has been exploited in the implementation of the Buchberger algorithm, by using a special heap for temporary objects created during polynomial reduction. This solution drastically reduces the overall cost of memory allocation in the algorithm.
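
    A minimal sketch of the per-heap customization described above, assuming a simplified model in which each heap class overrides the collection strategy; the class names are illustrative, and the sketch is in Java although the real CMM is a C++ framework.

        // Each heap encapsulates its own storage discipline; a temporary heap used
        // during polynomial reduction can discard its contents wholesale instead of
        // tracing them.
        abstract class Heap {
            abstract void collect();
        }

        class DefaultHeap extends Heap {
            @Override void collect() {
                // general-purpose collection (mostly-copying in the CMM default heap)
            }
        }

        class TemporaryHeap extends Heap {
            @Override void collect() { reset(); }            // no tracing needed
            void reset() { /* drop everything allocated since the last reset */ }
        }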

    A garbage collection design and bakeoff in JMTk: An efficient extensible Java memory management toolkit

    In this paper, we describe the design, implementation, and evaluation of a new garbage collection framework called the Java Memory Management Toolkit (JMTk). The goals of JMTk are to provide an efficient, composable, extensible, and portable toolkit for quickly building and evaluating new and existing garbage collection algorithms. Our design clearly demarcates the external interface between the collector and the compiler for portability. For extensibility, JMTk provides a selection of allocators, garbage identification, collection, pointer tracking, and other mechanisms that are efficient and that a wide variety of garbage collection algorithms can compose and share. For instance, our mark-sweep and reference counting collectors share the free-list implementation. We perform a comprehensive and detailed study of collectors including copying, mark-sweep, reference counting, copying generational, and hybrid generational collectors using JMTk in Jikes RVM on a uniprocessor. We find that the performance of collectors in JMTk is comparable to the highly tuned original Jikes collectors. In a study of full-heap and generational collectors, we confirm the significant benefits of generational collectors on a wide variety of heap sizes, and reveal that on very small heaps, collection time is enormous. These experiments add other new insights, such as firmly establishing that, for a variety of generational collectors, a variable-size nursery which is allowed to grow to fill all the space not in use by the older generation performs better than a fixed-size nursery. We thus show the utility of extensive collector comparisons and establish the benefits of our design.
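
    The sharing mentioned above (mark-sweep and reference counting built on one free-list implementation) can be pictured with the following Java sketch; the FreeList type and its methods are hypothetical stand-ins rather than the JMTk classes.

        // Two collectors reclaim cells through the same free-list allocator.
        class FreeList {
            int alloc(int sizeClass) { return 0; }    // placeholder: pop a free cell
            void free(int cell) { }                   // placeholder: push it back
        }

        class MarkSweep {
            final FreeList cells = new FreeList();
            void sweep(int deadCell) { cells.free(deadCell); }       // after tracing
        }

        class ReferenceCounting {
            final FreeList cells = new FreeList();
            void decrement(int cell, int newCount) {
                if (newCount == 0) cells.free(cell);                 // reclaim at once
            }
        }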

    Subheap-Augmented Garbage Collection

    Automated memory management avoids the tedium and danger of manual techniques. However, as no programmer input is required, no widely available interface exists to permit principled control over sometimes unacceptable performance costs. This dissertation explores the idea that performance-oriented languages should give programmers greater control over where and when the garbage collector (GC) expends effort. We describe an interface and implementation to expose heap partitioning and collection decisions without compromising type safety. We show that our interface allows the programmer to encode a form of reference counting using Hayes' notion of key objects. Preliminary experimental data suggests that our proposed mechanism can avoid high overheads suffered by tracing collectors in some scenarios, especially with tight heaps. However, for other applications, the costs of applying subheaps, in both human effort and runtime overheads, remain daunting.
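
    The style of interface the dissertation argues for can be hinted at with the Java sketch below: the program creates a subheap, directs allocation into it, and decides when it is collected. The method names are hypothetical, not the interface from the dissertation.

        // Programmer-controlled partitioning and collection, sketched as an API.
        interface Subheap {
            void activate();      // subsequent allocations go into this subheap
            void collectNow();    // collect only this subheap, on the program's schedule
        }

        final class BatchProcessor {
            static void processBatch(Subheap scratch) {
                scratch.activate();
                // ... allocate short-lived batch state here ...
                scratch.collectNow();   // reclaim it eagerly, before the global GC runs
            }
        }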

    Problems and Challenges when Building a Manager for Unused Objects

    Large object-oriented applications may occupy hundreds of megabytes or even gigabytes of memory. During program execution, a large graph of objects is created and constantly changed. Most object runtimes support some kind of automatic memory management based on garbage collectors (GCs), which automatically destroy unreferenced objects. However, some referenced objects are not used for long periods of time, or are used just once. These are not garbage-collected because they are still reachable and might be used in the future. Due to these unused objects, applications use far more resources than they actually need. In this paper we present the challenges and possible approaches towards an unused object manager for Pharo. The goal is to use less memory by swapping unused objects out to secondary memory, leaving in primary memory only those objects which are needed and used. When one of the unused objects is needed, it is brought back into primary memory.
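
    One common way to realise the swapping described above is a proxy that reloads the real object from secondary memory on first use. The Java sketch below is a generic illustration of that idea, not the Pharo mechanism from the paper.

        // A swapped-out object is represented by a proxy holding only the location
        // of its serialised state; the first access brings it back into memory.
        interface Record { String value(); }

        final class SwappedOutRecord implements Record {
            private final String path;     // where the serialised object lives
            private Record loaded;         // filled in on first access
            SwappedOutRecord(String path) { this.path = path; }
            public String value() {
                if (loaded == null) loaded = loadFromDisk(path);   // swap back in
                return loaded.value();
            }
            private static Record loadFromDisk(String path) {
                return () -> "contents of " + path;   // placeholder for real I/O
            }
        }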

    Unwoven Aspect Analysis

    Various languages and tools supporting advanced separation of concerns (such as aspect-oriented programming) provide a software developer with the ability to separate functional and non-functional programmatic intentions. Once these separate pieces of the software have been specified, the tools automatically handle interaction points between separate modules, relieving the developer of this chore and permitting more understandable, maintainable code. Many approaches have left traditional compiler analysis and optimization until after the composition has been performed; unfortunately, analyses performed after composition cannot make use of the logical separation present in the original program. Further, for modular systems that can be configured with different sets of features, testing under every possible combination of features may be necessary to avoid bugs in production software, and is time-consuming. To solve this testing problem, we investigate a feature-aware compiler analysis that runs during composition and discovers features that are strongly independent of each other. When their independence can be established, the number of feature combinations that must be separately tested can be reduced. We develop this approach and discuss our implementation. We look forward to future programming languages in two ways: we implement solutions to problems that are conceptually aspect-oriented but for which current aspect languages and tools fail, and we study these cases and consider what language designs might provide even more information to a compiler. We describe some features that such a future language might have, based on our observations of current language deficiencies and our experience with compilers for these languages.
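
    A toy version of the independence test such an analysis relies on is sketched below in Java: if two features touch disjoint sets of program elements, their combinations need not be tested jointly. Representing a feature as a set of touched elements is an assumption made purely for illustration.

        import java.util.Collections;
        import java.util.Set;

        final class FeatureIndependence {
            // Features that read or write disjoint program elements are treated as
            // strongly independent, so the product of their configurations shrinks.
            static boolean independent(Set<String> touchedByA, Set<String> touchedByB) {
                return Collections.disjoint(touchedByA, touchedByB);
            }
        }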

    The Future Smart-City: An Analysis of the Effects of Global and Technological Innovation on the Evolution of Economic Systems

    In the 21st century, the economy is rapidly utilizing globalization to create a vastly different future. As new technology merges with entrepreneurs who use it effectively, the economic model is changing. Faster, sleeker, more effective forms of communication and information transfer drive the process of globalization. Production of a single product can happen in multiple countries, companies can operate virtually 24/7 through call centers halfway around the globe, and preliminary smart cities are beginning to emerge to give us a glimpse of the future world. A new category of businesspersons called “prosumers” has emerged, creating a new sharing and soon-to-be self-service economic structure. Analysis of the two drivers of economic change, globalization and technological innovation, will reveal how close civilization is to the city of the future.

    SwitchWare: Accelerating Network Evolution (White Paper)

    We propose the development of a set of software technologies (SwitchWare) which will enable rapid development and deployment of new network services. The key insight is that by making the basic network service selectable on a per-user (or even per-packet) basis, the need for formal standardization is eliminated. Additionally, by making the basic network service programmable, the deployment times, today constrained by capital funding limitations, are tremendously reduced (to the order of software distribution times). Finally, by constructing an advanced, robust programming environment, even the service development time can be reduced. A SwitchWare switch consists of input and output ports controlled by a software-programmable element; programs are contained in sequences of messages sent to the SwitchWare switch's input ports, which interpret the messages as programs. We call these programs Switchlets. This accelerates the pace of network evolution, as evolving user needs can be immediately reflected in the network infrastructure. Immediate reconfigurability enhances the adaptability of the network infrastructure in the face of unexpected situations. We call a network built from SwitchWare switches an active network.
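
    The "messages interpreted as programs" idea can be pictured with the Java sketch below, in which each arriving Switchlet is a small program evaluated by the switch. The interfaces are illustrative only and do not reflect the SwitchWare implementation.

        // An active switch runs the program carried by each message to decide
        // how the message is handled.
        interface SwitchPort {
            void forward(int outputPort, byte[] payload);
            void drop();
        }

        interface Switchlet {
            void run(SwitchPort port);
        }

        final class ActiveSwitch {
            void receive(Switchlet program, SwitchPort port) {
                program.run(port);   // the packet's own program selects the behaviour
            }
        }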
