
    MULTIPLE TIER LOW OVERHEAD MEMORY LEAK DETECTOR

    A memory leak detector system can be used to detect memory leaks, which occur when a computer program fails to release memory allocations it no longer needs, in a computer that executes multiple programs. The system uses a multiple-tier methodology to detect leaks. In the first tier, the system collects a histogram of allocation counts for the different allocation sizes observed at the computer. If the number of allocations for one or more of the allocation sizes increases above a threshold, the system marks those allocation sizes as suspected leaks and proceeds to the second tier. In the second tier, the system collects a histogram keyed on the call stacks that led to the above-threshold allocation sizes detected in the first tier. Call stacks whose trace counts increase above a threshold are marked as prospective leaks, and the system proceeds to the third tier of the leak detection method. In the third tier, the system records the allocation time of each memory allocation that fits the suspected leak profile, i.e., the leak sizes found in the first tier and the call stacks found in the second tier. If the oldest of these allocations are not freed and persist over a period of time, the system marks the allocation(s), the allocation size(s), and the originating call stack(s) as a probable memory leak.
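
    A minimal sketch of the tiered escalation described above, written in Python. The observation window, the growth and age thresholds, and the way allocation events are captured are assumptions for illustration, not the patented implementation.

        from collections import Counter
        import time

        class MultiTierLeakDetector:
            """Toy three-tier leak detector with illustrative thresholds."""

            def __init__(self, growth_factor=2.0, age_threshold_s=300.0):
                self.growth_factor = growth_factor    # assumed per-window growth ratio
                self.age_threshold_s = age_threshold_s
                self.size_prev, self.size_curr = Counter(), Counter()     # tier 1
                self.stack_prev, self.stack_curr = Counter(), Counter()   # tier 2
                self.suspect_sizes, self.suspect_stacks = set(), set()
                self.live = {}                        # tier 3: alloc id -> (size, stack, time)

            def on_alloc(self, alloc_id, size, stack):
                self.size_curr[size] += 1                                 # tier 1 histogram
                if size in self.suspect_sizes:
                    self.stack_curr[stack] += 1                           # tier 2 histogram
                    if stack in self.suspect_stacks:
                        self.live[alloc_id] = (size, stack, time.time())  # tier 3 record

            def on_free(self, alloc_id):
                self.live.pop(alloc_id, None)

            def end_window(self):
                """Escalate sizes and call stacks whose counts grew above the threshold."""
                for size, n in self.size_curr.items():
                    if n > self.growth_factor * self.size_prev.get(size, 1):
                        self.suspect_sizes.add(size)
                for stack, n in self.stack_curr.items():
                    if n > self.growth_factor * self.stack_prev.get(stack, 1):
                        self.suspect_stacks.add(stack)
                self.size_prev, self.size_curr = self.size_curr, Counter()
                self.stack_prev, self.stack_curr = self.stack_curr, Counter()

            def probable_leaks(self):
                """Tier 3 verdict: suspect-profile allocations that persist too long."""
                now = time.time()
                return [(aid, size, stack) for aid, (size, stack, t) in self.live.items()
                        if now - t > self.age_threshold_s]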

    Lazy Spilling for a Time-Predictable Stack Cache: Implementation and Analysis

    The growing complexity of modern computer architectures increasingly complicates the prediction of the run-time behavior of software. For real-time systems, where a safe estimation of the program's worst-case execution time is needed, time-predictable computer architectures promise to resolve this problem. A stack cache, for instance, allows the compiler to efficiently cache a program's stack, while static analysis of its behavior remains easy. Likewise, its implementation requires little hardware overhead. This work introduces an optimization of the standard stack cache to avoid redundant spilling of the cache content to main memory if the content was not modified in the meantime. At first sight, this appears to be an average-case optimization. Indeed, measurements show that the number of cache blocks spilled is reduced to about 17% and 30% in the mean, depending on the stack cache size. Furthermore, we show that lazy spilling can be analyzed with little extra effort, which benefits the worst-case spilling behavior that is relevant for a real-time system.
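
    A rough sketch of the lazy-spilling idea in Python: the cache keeps track of how many of its bottom-most occupied blocks are already coherent with main memory (for example, because they were filled back from memory and not modified since), and skips writing those back when a reserve forces a spill. The block-level bookkeeping and the reserve/free/ensure interface are simplifying assumptions for illustration, not the paper's hardware design or its analysis.

        class LazyStackCache:
            """Toy block-level model of a stack cache with lazy spilling."""

            def __init__(self, capacity_blocks):
                self.capacity = capacity_blocks
                self.occupied = 0        # blocks currently held in the cache
                self.coherent = 0        # bottom-most occupied blocks matching main memory
                self.blocks_written = 0  # statistic: blocks actually written back

            def reserve(self, n):
                """Reserve n blocks (n <= capacity), spilling from the bottom if needed."""
                overflow = max(0, self.occupied + n - self.capacity)
                if overflow:
                    # Standard spilling would write all `overflow` blocks; lazy spilling
                    # skips the ones already known to be coherent with main memory.
                    self.blocks_written += max(0, overflow - self.coherent)
                    self.coherent = max(0, self.coherent - overflow)
                    self.occupied -= overflow
                self.occupied += n       # the new frame occupies the top of the cache

            def free(self, n):
                """Pop the topmost n blocks; freed data never needs to reach memory."""
                self.occupied = max(0, self.occupied - n)
                self.coherent = min(self.coherent, self.occupied)

            def ensure(self, n):
                """Refill so the current frame's n blocks are cached; refills are coherent."""
                missing = max(0, n - self.occupied)
                self.occupied += missing
                self.coherent += missing   # data copied from memory trivially matches it

            def store(self):
                """A store into the cached frame invalidates coherence (coarse model)."""
                self.coherent = 0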

    Impact of an IMA software architecture on legacy avionic software

    The paper discusses the performance and timing issues encountered when migrating legacy avionics software to an Integrated Modular Avionics (IMA) architecture. The software in question runs on a mission computer equipped with several Motorola 68020 processors and two dual-redundant databusses. New hardware was introduced due to obsolescence problems. To reduce risk, the legacy software was to be migrated to the new hardware ideally without any changes. Therefore, a software stack with standardised software interfaces following IMA concepts was introduced, which additionally provides, on top of IMA, the software interfaces required by the legacy software. On the original mission computer the software accesses the databusses directly via memory-mapped I/O. This is no longer possible with a layered software architecture. With the implemented IMA software stack, I/O is transmitted to a dedicated module via the VME backplane. Calls to the hardware-specific I/O drivers are handled on that module, and the responses are returned to the application software. The paper presents the results of timing and performance measurements with both the legacy and new software architectures on the respective target hardware. The points that need special attention when specifying the hardware and the supplier-provided software, and when implementing the IMA software stack, are discussed.
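
    As a loose illustration of the layered I/O path described above, the Python sketch below replaces direct memory-mapped accesses with requests that are forwarded to a separate I/O handling component, which performs the driver call and returns a response. All class and message names here are invented for illustration; they do not correspond to the actual IMA stack, its interfaces, or the VME transport.

        import queue
        import threading

        class RemoteIoModule:
            """Stands in for the dedicated I/O module that owns the databus drivers."""
            def handle(self, request):
                bus, op, payload = request
                # A real module would invoke the hardware-specific driver here.
                return {"bus": bus, "op": op, "status": "ok", "echo": payload}

        class IoForwardingLayer:
            """Legacy-style I/O calls are serialized into requests and answered remotely."""
            def __init__(self, module):
                self.requests = queue.Queue()
                self.module = module
                threading.Thread(target=self._serve, daemon=True).start()

            def _serve(self):
                while True:                       # stands in for the backplane transport
                    request, reply_box = self.requests.get()
                    reply_box.put(self.module.handle(request))

            def bus_write(self, bus, payload):
                """Replaces a direct memory-mapped write with a forwarded request."""
                reply_box = queue.Queue(maxsize=1)
                self.requests.put(((bus, "write", payload), reply_box))
                return reply_box.get()            # block until the I/O module replies

        layer = IoForwardingLayer(RemoteIoModule())
        print(layer.bus_write(1, b"\x01\x02"))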

    On the error probability of general tree and trellis codes with applications to sequential decoding

    An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.
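
    The stack algorithm used in those simulations can be illustrated with a short Python sketch: partial paths through the code tree are kept in an ordered stack keyed by the Fano metric, and the best partial path is repeatedly extended until one path reaches the end of the tree. The toy rate-1/2 convolutional code, the binary symmetric channel with its crossover probability, and the absence of a stack-size limit are simplifying assumptions for illustration and are not taken from the paper.

        import heapq
        from math import log2

        # Toy rate-1/2 convolutional code, constraint length 3 (generators 7 and 5 octal).
        G = [(1, 1, 1), (1, 0, 1)]

        def branch(state, u):
            """Code bits and next state when input bit u enters the shift register."""
            reg = (u,) + state
            out = tuple(sum(g * b for g, b in zip(gen, reg)) % 2 for gen in G)
            return out, reg[:-1]

        def fano_increment(received, hypothesis, p, rate=0.5):
            """Fano metric contribution of one branch over a binary symmetric channel."""
            return sum((log2(2 * (1 - p)) if r == c else log2(2 * p)) - rate
                       for r, c in zip(received, hypothesis))

        def stack_decode(received_branches, p=0.05):
            """Stack (Zigangirov-Jelinek) algorithm: always extend the best partial path."""
            # Entries are (-metric, depth, state, decoded bits); heapq yields the top metric.
            stack = [(0.0, 0, (0, 0), ())]
            while stack:
                neg_metric, depth, state, bits = heapq.heappop(stack)
                if depth == len(received_branches):
                    return bits                        # best path reached the end of the tree
                for u in (0, 1):                       # extend the top path along both branches
                    out, nxt = branch(state, u)
                    inc = fano_increment(received_branches[depth], out, p)
                    heapq.heappush(stack, (neg_metric - inc, depth + 1, nxt, bits + (u,)))

        # Example: encode the message 1 0 1 1 and decode the noise-free received sequence.
        state, rx = (0, 0), []
        for u in (1, 0, 1, 1):
            out, state = branch(state, u)
            rx.append(out)
        print(stack_decode(rx))   # -> (1, 0, 1, 1)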

    SimpleSSD: Modeling Solid State Drives for Holistic System Simulation

    Existing solid state drive (SSD) simulators unfortunately lack hardware and/or software architecture models. Consequently, they are far from capturing the critical features of contemporary SSD devices. More importantly, while the performance of modern systems that adopt SSDs can vary based on their numerous internal design parameters and storage-level configurations, a full system simulation with traditional SSD models often requires unreasonably long runtimes and excessive computational resources. In this work, we propose SimpleSSD, a high-fidelity simulator that models all detailed characteristics of hardware and software, while simplifying the nondescript features of storage internals. In contrast to existing SSD simulators, SimpleSSD can easily be integrated into publicly available full system simulators. In addition, it can accommodate a complete storage stack and evaluate the performance of SSDs along with diverse memory technologies and microarchitectures. Thus, it facilitates simulations that explore the full design space at different levels of system abstraction. Comment: This paper has been accepted at IEEE Computer Architecture Letters (CAL).

    Common data buffer system

    A high speed common data buffer system is described for providing an interface and communications medium between a plurality of computers utilized in a distributed computer complex forming part of a checkout, command and control system for space vehicles and associated ground support equipment. The system includes the capability for temporarily storing data to be transferred between computers, for transferring a plurality of interrupts between computers, for monitoring and recording these transfers, and for correcting errors incurred in these transfers. Validity checks are made on each transfer and appropriate error notification is given to the computer associated with that transfer.
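
    Purely as an illustration of the kind of mediation such a buffer performs, the Python sketch below stores transfers in per-destination slots, attaches a checksum as a validity check, flags failed checks to the receiving computer, and keeps a monitoring log of every transfer. The names, the CRC choice, and the interface are invented for illustration and do not reflect the described system's actual design.

        import zlib

        class CommonDataBuffer:
            """Toy inter-computer buffer: stores, validates, and logs transfers."""

            def __init__(self):
                self.slots = {}   # destination computer id -> pending (source, payload, crc)
                self.log = []     # monitoring record of every transfer

            def send(self, src, dst, payload):
                self.slots.setdefault(dst, []).append((src, payload, zlib.crc32(payload)))
                self.log.append(("send", src, dst, len(payload)))

            def receive(self, dst):
                """Deliver pending transfers; a failed validity check is flagged to the receiver."""
                delivered = []
                for src, payload, crc in self.slots.pop(dst, []):
                    ok = zlib.crc32(payload) == crc
                    self.log.append(("recv", src, dst, "ok" if ok else "error"))
                    delivered.append({"from": src, "data": payload, "valid": ok})
                return delivered

        buf = CommonDataBuffer()
        buf.send("computer_a", "computer_b", b"vehicle checkout telemetry")
        print(buf.receive("computer_b"))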

    Divided we stand: Parallel distributed stack memory management

    We present an overview of the stack-based memory management techniques that we used in our non-deterministic and-parallel Prolog systems: &-Prolog and DASWAM. We believe that the problems associated with non-deterministic and-parallel systems are more general than those encountered in or-parallel and deterministic and-parallel systems, which can be seen as subsets of this more general case. We build on the previously proposed "marker scheme", lifting some of the restrictions associated with the selection of goals while keeping (virtual) memory consumption down. We also review some of the other problems associated with the stack-based management scheme, such as handling of forward and backward execution, cut, and roll-backs.
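
    As a very loose illustration of marker-based stack management (and not &-Prolog's or DASWAM's actual scheme), the Python sketch below records a marker for each parallel goal's stack section and reclaims sections only when they form a contiguous, finished region at the top of the stack; finished sections trapped below an unfinished one must wait, which is roughly the difficulty that makes memory recovery in and-parallel systems delicate.

        class MarkedStack:
            """Toy worker stack in which each parallel goal's section is delimited by a marker."""

            def __init__(self):
                self.cells = []       # allocation cells belonging to goal sections
                self.sections = []    # markers: [goal_id, start index, finished?] in push order

            def start_goal(self, goal_id):
                self.sections.append([goal_id, len(self.cells), False])

            def allocate(self, goal_id, value):
                # In this toy model a goal only allocates while its section is topmost.
                assert self.sections and self.sections[-1][0] == goal_id
                self.cells.append(value)

            def finish_goal(self, goal_id):
                for section in self.sections:
                    if section[0] == goal_id:
                        section[2] = True
                self._reclaim()

            def _reclaim(self):
                """Pop finished sections from the top only; trapped sections wait."""
                while self.sections and self.sections[-1][2]:
                    _, start, _ = self.sections.pop()
                    del self.cells[start:]

        stack = MarkedStack()
        stack.start_goal("g1"); stack.allocate("g1", "binding-1")
        stack.start_goal("g2"); stack.allocate("g2", "binding-2")
        stack.finish_goal("g1")   # g1 is trapped below g2; nothing is reclaimed yet
        stack.finish_goal("g2")   # now both finished sections are popped
        print(stack.cells)        # -> []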

    Relating goal scheduling, precedence, and memory management in and-parallel execution of logic programs

    The interactions among three important issues involved in the implementation of logic programs in parallel (goal scheduling, precedence, and memory management) are discussed. A simplified, parallel memory management model and an efficient, load-balancing goal scheduling strategy are presented. It is shown how, for systems which support "don't know" non-determinism, special care has to be taken during goal scheduling if the space recovery characteristics of sequential systems are to be preserved. A solution based on selecting only "newer" goals for execution is described, and an algorithm is proposed for efficiently maintaining and determining precedence relationships and variable ages across parallel goals. It is argued that the proposed schemes and algorithms make it possible to extend the storage performance of sequential systems to parallel execution without the considerable overhead previously associated with it. The results are applicable to a wide class of parallel and coroutining systems, and they represent an efficient alternative to "all heap" or "spaghetti stack" allocation models.
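
    To make the "newer goals" idea a bit more concrete, here is a hedged Python sketch (not the paper's algorithm or data structures): every pushed goal receives a monotonically increasing age stamp, and an idle worker always takes the newest ready goal, so stolen work is never older than the work the generating computation still depends on; the age stamps also serve as a simple precedence tag.

        import heapq
        import itertools

        class NewestFirstScheduler:
            """Toy and-parallel goal scheduler: idle workers always pick the newest goal."""

            _age = itertools.count(1)    # global, monotonically increasing age stamps

            def __init__(self):
                self.ready = []          # max-heap on age, implemented with negated keys

            def push_goal(self, goal):
                age = next(self._age)
                heapq.heappush(self.ready, (-age, age, goal))
                return age               # the age stamp doubles as a precedence tag

            def steal(self):
                """Return the newest ready goal (or None); older goals stay put."""
                if not self.ready:
                    return None
                _, age, goal = heapq.heappop(self.ready)
                return age, goal

        sched = NewestFirstScheduler()
        sched.push_goal("p(X)")
        sched.push_goal("q(X, Y)")
        print(sched.steal())    # -> (2, 'q(X, Y)'), the newer of the two goals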