
    Static Determination of Allocation Rates to Support Real-Time Garbage Collection

    While it is generally accepted that garbage-collected languages offer advantages over languages in which objects must be explicitly deallocated, real-time developers are leery of the adverse effects a garbage collector might have on real-time performance. Semiautomatic approaches based on regions have been proposed, but incorrect usage could cause unbounded storage leaks or program failure. Moreover, correct usage cannot be guaranteed at compile time. Recently, real-time garbage collectors have been developed that provide a guaranteed fraction of the CPU to the application, and the correct operation of those collectors has been proven, subject only to the specification of certain statistics related to the type and rate of objects allocated by the application. However, unless those statistics are provided or estimated appropriately, the collector may fail to collect dead storage at a rate sufficient to pace the application's need. Overspecification of those statistics is safe, but leaves the application with less than its possible share of the CPU, which may prevent the application from meeting its deadlines. In this thesis, we present a dynamic and static analysis of one such statistic, namely the real-time application's memory allocation rate. The dynamic analysis highlights the variability of a program's allocation rate. It also serves to quantify the conservatism of the statically computed upper bound. The static analysis is based on a data flow framework that requires interprocedural evaluation. We present the framework and results from analyzing some Java benchmarks from the jvm98 suite. Our work is a necessary step toward making real-time garbage collectors attractive to the hard real-time community. By guaranteeing a bound on the statistics provided to a real-time collector, we can guarantee the operation of the collector for a given application.
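    The allocation-rate statistic at the heart of this thesis can be observed dynamically on a stock JVM. The sketch below is not the thesis's instrumentation; it is a minimal illustration using HotSpot's com.sun.management.ThreadMXBean extension, with an arbitrary 100 ms sampling interval and a stand-in allocation loop (both assumptions for illustration).

```java
import java.lang.management.ManagementFactory;

// Minimal sketch: sample one thread's allocation rate over fixed intervals.
// Uses the HotSpot-specific com.sun.management.ThreadMXBean extension; the
// interval length and the allocation loop are illustrative assumptions only.
public class AllocationRateSampler {
    public static void main(String[] args) throws InterruptedException {
        com.sun.management.ThreadMXBean mx =
                (com.sun.management.ThreadMXBean) ManagementFactory.getThreadMXBean();
        long tid = Thread.currentThread().getId();

        long prevBytes = mx.getThreadAllocatedBytes(tid);
        long prevTime  = System.nanoTime();

        for (int i = 0; i < 5; i++) {
            // Stand-in for real-time application work that allocates objects.
            byte[][] work = new byte[1024][];
            for (int j = 0; j < work.length; j++) {
                work[j] = new byte[64];
            }

            Thread.sleep(100);  // sampling interval (illustrative)

            long bytes = mx.getThreadAllocatedBytes(tid);
            long time  = System.nanoTime();
            double mbPerSec = (bytes - prevBytes) / 1e6
                              / ((time - prevTime) / 1e9);
            System.out.printf("interval %d: %.2f MB/s%n", i, mbPerSec);
            prevBytes = bytes;
            prevTime  = time;
        }
    }
}
```
    Sampling like this exposes the rate's variability; a static analysis, by contrast, must bound the rate conservatively for all executions.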

    Unwoven Aspect Analysis

    Various languages and tools supporting advanced separation of concerns (such as aspect-oriented programming) give a software developer the ability to separate functional and non-functional programmatic intentions. Once these separate pieces of the software have been specified, the tools automatically handle the interaction points between separate modules, relieving the developer of this chore and permitting more understandable, maintainable code. Many approaches have left traditional compiler analysis and optimization until after the composition has been performed; unfortunately, analyses performed after composition cannot make use of the logical separation present in the original program. Further, for modular systems that can be configured with different sets of features, testing under every possible combination of features may be necessary to avoid bugs in production software, yet doing so is time-consuming. To address this testing problem, we investigate a feature-aware compiler analysis that runs during composition and discovers features that are strongly independent of each other. When their independence can be established, the number of feature combinations that must be separately tested can be reduced. We develop this approach and discuss our implementation. We also look forward to future programming languages in two ways: we implement solutions to problems that are conceptually aspect-oriented but for which current aspect languages and tools fail, and we study these cases to consider what language designs might provide even more information to a compiler. We describe some features that such a future language might have, based on our observations of current language deficiencies and our experience with compilers for these languages.
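    A toy illustration (not the thesis's analysis) of why discovering independent features matters for testing: with n optional features, exhaustive testing needs 2^n configurations, but if an analysis can partition the features into mutually independent groups, each group can be tested in isolation. The feature count and group sizes below are hypothetical.

```java
// Toy illustration only: configuration counts before and after an analysis
// partitions features into mutually independent groups.
public class FeatureCombinations {
    static long exhaustive(int nFeatures) {
        return 1L << nFeatures;            // every on/off combination
    }

    static long grouped(int[] groupSizes) {
        long total = 0;
        for (int size : groupSizes) {
            total += 1L << size;           // test each independent group alone
        }
        return total;
    }

    public static void main(String[] args) {
        // Hypothetical system: 12 features split into three independent groups of 4.
        System.out.println("exhaustive: " + exhaustive(12));              // 4096
        System.out.println("grouped:    " + grouped(new int[]{4, 4, 4})); // 48
    }
}
```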

    Data cache organization for accurate timing analysis


    Ada (trademark) projects at NASA. Runtime environment issues and recommendations

    Ada practitioners should use this document to discuss and establish common short-term requirements for Ada runtime environments. The major current Ada runtime environment issues are identified through analysis of some of the Ada efforts at NASA and other research centers. The runtime environment characteristics of major compilers are compared, and alternate runtime implementations are reviewed. Modifications and extensions to the Ada Language Reference Manual to address some of these runtime issues are proposed. Three classes of projects focusing on the most critical runtime features of Ada are recommended, including a range of immediately feasible full-scale Ada development projects. A list of runtime features and procurement issues is also proposed for consideration by vendors, contractors and the government.

    Worst-case analysis of heap allocations

    In object-oriented languages, dynamic memory allocation is a fundamental concept. When using such a language in hard real-time systems, it becomes important to bound both the worst-case execution time and the worst-case memory consumption. In this paper, we present an analysis to determine the worst-case heap allocations of tasks. The analysis builds upon techniques that are well established for worst-case execution time analysis. The difference is that the cost function is not the execution time of instructions in clock cycles, but the allocation in bytes. In contrast to worst-case execution time analysis, worst-case heap allocation analysis is not processor dependent. However, the cost function depends on the object layout of the runtime system. The analysis is evaluated with several real-time benchmarks to establish its usefulness and to compare the memory consumption of different object layouts.
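    A minimal sketch of the flavor of such an analysis, assuming a tree-shaped control-flow abstraction with explicit loop bounds and an 8-byte object header; the paper's actual analysis reuses worst-case execution time machinery and the runtime's concrete object layout, so the node classes and constants here are illustrative assumptions only.

```java
import java.util.List;

// Sketch of a worst-case heap allocation bound over a tree-shaped control-flow
// abstraction: allocation sites cost bytes (layout-dependent), branches take
// the worse arm, loops multiply by a known bound -- mirroring WCET analysis
// with a different cost function. Header size and node classes are assumed.
public abstract class WchaNode {
    abstract long worstCaseBytes();

    static final long HEADER_BYTES = 8;  // assumed per-object header

    static final class Alloc extends WchaNode {      // single allocation site
        final long payloadBytes;
        Alloc(long payloadBytes) { this.payloadBytes = payloadBytes; }
        long worstCaseBytes() { return HEADER_BYTES + payloadBytes; }
    }

    static final class Seq extends WchaNode {        // straight-line sequence
        final List<WchaNode> parts;
        Seq(List<WchaNode> parts) { this.parts = parts; }
        long worstCaseBytes() {
            long sum = 0;
            for (WchaNode n : parts) sum += n.worstCaseBytes();
            return sum;
        }
    }

    static final class Branch extends WchaNode {     // if/else: take the worse arm
        final WchaNode thenArm, elseArm;
        Branch(WchaNode t, WchaNode e) { thenArm = t; elseArm = e; }
        long worstCaseBytes() {
            return Math.max(thenArm.worstCaseBytes(), elseArm.worstCaseBytes());
        }
    }

    static final class Loop extends WchaNode {       // loop with a known bound
        final WchaNode body;
        final int maxIterations;
        Loop(WchaNode body, int maxIterations) {
            this.body = body; this.maxIterations = maxIterations;
        }
        long worstCaseBytes() { return (long) maxIterations * body.worstCaseBytes(); }
    }

    public static void main(String[] args) {
        // Loop of 100 iterations: branch allocating 32 or 16 bytes, then a 16-byte object.
        WchaNode body = new Seq(List.of(
                new Branch(new Alloc(32), new Alloc(16)),
                new Alloc(16)));
        System.out.println(new Loop(body, 100).worstCaseBytes());  // 100 * (40 + 24) = 6400
    }
}
```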

    Assets, Markets and Poverty in Brazil

    This paper establishes a basis for research on the relationships among poverty, the distribution of resources and the operation of asset markets. The main objective is to help the implementation of capital-enhancing policies towards the poor. The strategy followed is to analyze three different types of impact that increasing the assets of the poor may have on social welfare. The first part of the paper evaluates the possession of different types of capital along the income distribution. This exercise can be seen as augmenting income-based poverty measures by incorporating the direct effect exerted by asset holdings on social welfare. The second part of the paper describes the income-generating impact that asset holdings may have on poverty. It studies how the accumulation of different types of capital impacts income-based poverty outcomes, using logistic regressions. The third part studies the effect that increasing the asset holdings of the poor has on improving poor individuals' ability to deal with adverse income shocks. This consists of studying the interactions among earnings dynamics, capital market imperfections and financial behavior, taking into account different time horizons. Long-run issues are related to the study of low-frequency income fluctuations and life-cycle asset holdings using cohort analysis. Short-run issues are related to assessing the behavior of the poor and their welfare losses in dealing with high-frequency gaps between income and desired consumption. The analysis of earnings and poverty dynamics is conducted with panel data, while qualitative data is used for the analysis of short-run household financial behavior.

    Performance analysis and optimization of the Java memory system


    Subheap-Augmented Garbage Collection

    Automated memory management avoids the tedium and danger of manual techniques. However, as no programmer input is required, no widely available interface exists to permit principled control over sometimes unacceptable performance costs. This dissertation explores the idea that performance-oriented languages should give programmers greater control over where and when the garbage collector (GC) expends effort. We describe an interface and implementation that expose heap partitioning and collection decisions without compromising type safety. We show that our interface allows the programmer to encode a form of reference counting using Hayes' notion of key objects. Preliminary experimental data suggests that our proposed mechanism can avoid the high overheads suffered by tracing collectors in some scenarios, especially with tight heaps. However, for other applications, the costs of applying subheaps, in human effort and runtime overheads, remain daunting.
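    The dissertation's actual interface is not reproduced here; the following is a hypothetical sketch of what programmer-facing subheap control could look like. The names Subheap, SubheapRuntime, activate and collect are invented for illustration, and the example simply shows a request handler confining short-lived allocations to a subheap that it collects eagerly.

```java
// Hypothetical sketch only: NOT the dissertation's API. It illustrates the
// general shape of a programmer-facing subheap interface, where the
// application chooses where allocations land and when a partition is
// collected, while the runtime remains responsible for type safety.
interface Subheap {
    /** Direct subsequent allocations on the current thread into this subheap. */
    void activate();

    /** Ask the runtime to collect only this subheap, tracing in from its roots. */
    void collect();
}

interface SubheapRuntime {
    Subheap createSubheap(String label);
    Subheap defaultSubheap();
}

final class RequestHandler {
    private final SubheapRuntime runtime;

    RequestHandler(SubheapRuntime runtime) { this.runtime = runtime; }

    void handle(Object request) {
        // Confine short-lived, request-local objects to their own subheap so the
        // collector's effort is focused where garbage is known to accumulate.
        Subheap scratch = runtime.createSubheap("request-scratch");
        scratch.activate();
        try {
            process(request);            // allocations land in 'scratch'
        } finally {
            runtime.defaultSubheap().activate();
            scratch.collect();           // reclaim the request's garbage eagerly
        }
    }

    private void process(Object request) { /* application work */ }
}
```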