
    DYNAMIC MEMORY MANAGEMENT WITH REDUCED FRAGMENTATION USING THE BEST-FIT APPROACH

    This disclosure relates to the field of dynamic memory management in general. The disclosed idea uses a best-fit approach based on balanced trees whose nodes are sorted on key values corresponding to the sizes of free memory portions. Also disclosed is a method to efficiently coalesce freed memory. The idea addresses the disadvantages of the sequential search for available free space and of the first-fit policy used by the current flat-memory allocators based on the approach of [1]. The mechanism for dynamic memory management currently in use in most systems performs a sequential search for all operations, which leads to a worst-case time complexity of O(N), and follows the first-fit approach of allocating the first available free block for any request, which leads to fragmentation issues.
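    The balanced-tree best-fit scheme described above can be sketched as follows. This is an illustrative model only, not the disclosed implementation: a sorted list with `bisect` stands in for the balanced tree (giving O(log N) search by size), and all names are hypothetical.

```python
import bisect

class BestFitAllocator:
    """Toy best-fit allocator over a flat address space.

    Free blocks are kept as (size, start) pairs sorted by size, so the
    smallest block satisfying a request is found in O(log N), standing
    in for the balanced tree keyed on free-block sizes."""

    def __init__(self, size):
        self.free = [(size, 0)]  # one initial free block at offset 0

    def alloc(self, n):
        # Best fit: smallest free block with size >= n.
        i = bisect.bisect_left(self.free, (n, 0))
        if i == len(self.free):
            return None  # no block large enough
        size, start = self.free.pop(i)
        if size > n:
            # Return the unused remainder to the free structure.
            bisect.insort(self.free, (size - n, start + n))
        return start

    def free_block(self, start, n):
        # Coalesce with adjacent free blocks before reinserting,
        # mirroring the disclosed coalescing step.
        merged = True
        while merged:
            merged = False
            for i, (sz, st) in enumerate(self.free):
                if st + sz == start:        # left neighbour is free
                    start, n = st, n + sz
                    self.free.pop(i)
                    merged = True
                    break
                if start + n == st:         # right neighbour is free
                    n += sz
                    self.free.pop(i)
                    merged = True
                    break
        bisect.insort(self.free, (n, start))
```

    Here `alloc` picks the smallest sufficient block and splits off the remainder, while `free_block` merges adjacent free regions so that two neighbouring frees become one allocatable block again.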

    A dynamic case-based planning system for space station application

    We are currently investigating the use of a case-based reasoning approach to develop a dynamic planning system. The dynamic planning system (DPS) is designed to perform resource management, i.e., to efficiently schedule tasks both with and without failed components. This approach deviates from related work on scheduling and on planning in AI in several respects. In particular, an attempt is made to equip the planner with an ability to cope with a changing environment by dynamic replanning, to handle resource constraints and feedback, and to achieve some robustness and autonomy through plan learning by dynamic memory techniques. We briefly describe the proposed architecture of DPS and its four major components: the PLANNER, the plan EXECUTOR, the dynamic REPLANNER, and the plan EVALUATOR. The planner, which is implemented in Smalltalk, is being evaluated for use in connection with the Space Station Mobile Service System (MSS).

    PRADA: Predictable Allocations by Deferred Actions

    Modern hard real-time systems still employ static memory management. However, dynamic storage allocation (DSA) can improve the flexibility and readability of programs as well as drastically shorten their development times. But allocators introduce unpredictability that makes deriving tight bounds on an application's worst-case execution time even more challenging. In particular, their statically unpredictable influence on the cache, paired with zero knowledge about the cache-set mapping of dynamically allocated objects, leads to prohibitively large overestimations of execution times when dynamic memory allocation is employed. Recently, a cache-aware memory allocator, called CAMA, was proposed that gives strong guarantees about its cache influence and the cache-set mapping of allocated objects. CAMA itself is rather complex due to its cache-aware implementations of split and merge operations. This paper proposes PRADA, a lighter but less general dynamic memory allocator with equally strong guarantees about its influence on the cache. We compare the memory consumption of PRADA and CAMA for a small set of real-time applications as well as synthetic (de-)allocation sequences to investigate whether a simpler approach to cache awareness is still sufficient for the current generation of real-time applications.
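    The "deferred actions" idea in the title can be illustrated with a minimal sketch: a free() call only records the block in O(1), and the potentially cache-disruptive bookkeeping is batched into an explicit maintenance step that can be scheduled at a predictable time. This is an invented illustration, not PRADA's actual implementation; all names are hypothetical.

```python
from collections import defaultdict

class DeferredActionAllocator:
    """Sketch of deferring allocator bookkeeping to a later, explicitly
    scheduled step, rather than doing it inside free()."""

    def __init__(self):
        self.bins = defaultdict(list)   # size class -> free block addresses
        self.pending = []               # frees recorded but not yet processed

    def free(self, block, size):
        # O(1) and cache-friendly at call time: just record the action.
        self.pending.append((block, size))

    def maintain(self):
        # Deferred step: process pending frees into their size bins.
        while self.pending:
            block, size = self.pending.pop()
            self.bins[size].append(block)

    def alloc(self, size):
        if self.bins[size]:
            return self.bins[size].pop()
        return None  # a real allocator would fall back to a slower path
```

    The design point this sketches is predictability: the expensive work happens in `maintain()`, whose cache effects can be accounted for at a known time, instead of unpredictably inside every `free()`.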

    Memory Management in the PoSSo Solver

    A uniform general-purpose garbage collector may not always provide optimal performance. Sometimes an algorithm exhibits a predictable pattern of memory usage that could be exploited, delaying the intervention of the collector as long as possible. This requires a collector whose strategy can be customized to the needs of an algorithm. We present a dynamic memory management framework which allows such customization while preserving the convenience of automatic collection in the normal case. The Customizable Memory Management (CMM) framework organizes memory in multiple heaps, each one encapsulating a particular storage discipline. The default heap for collectable objects uses the technique of mostly-copying garbage collection, providing good performance and memory compaction. Customization of the collector is achieved through object orientation, by specialising the collector methods for each heap class. We describe how the CMM has been exploited in the implementation of the Buchberger algorithm, by using a special heap for temporary objects created during polynomial reduction. This solution drastically reduces the overall cost of memory allocation in the algorithm.
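    The special heap for temporaries can be sketched as an arena that is discarded wholesale once a phase ends, rather than collecting its objects individually. This is a minimal illustration of the per-heap storage-discipline idea; the class, its methods, and the toy `reduce_polynomial` are hypothetical, not the CMM API.

```python
class TempHeap:
    """Arena-style heap for short-lived temporaries: objects placed here
    are never collected individually; the whole heap is reset once the
    phase (e.g. one polynomial reduction) is finished."""

    def __init__(self):
        self.objects = []

    def alloc(self, obj):
        self.objects.append(obj)
        return obj

    def reset(self):
        # Reclaim every temporary at once, conceptually O(1).
        self.objects.clear()

def reduce_polynomial(terms):
    tmp = TempHeap()
    # Intermediate results live only in the temporary heap.
    acc = tmp.alloc(list(terms))
    acc.sort()
    result = list(acc)   # survivors are copied out of the arena
    tmp.reset()          # all temporaries reclaimed in one step
    return result
```

    The appeal of this discipline is that the collector never has to trace the temporaries at all: their lifetime is known to end with the phase, so the heap is simply thrown away.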

    Dynamic Resource Management in a Static Network Operating System

    We present novel approaches to managing three key resources in an event-driven sensornet OS: memory, energy, and peripherals. We describe the factors that necessitate using these new approaches rather than existing ones. A combination of static allocation and compile-time virtualization isolates resources from one another, while dynamic management provides the flexibility and sharing needed to minimize worst-case overheads. We evaluate the effectiveness and efficiency of these management policies in comparison to those of TinyOS 1.x, SOS, MOS, and Contiki. We show that by making memory, energy, and peripherals first-class abstractions, an OS can quickly, efficiently, and accurately adjust itself to the lowest possible power state, enable high-performance applications when active, prevent memory corruption with little RAM overhead, and be flexible enough to support a broad range of devices and uses.

    Power Aware Tuning of Dynamic Memory Management for Embedded Real-Time Multimedia Applications

    In the near future, portable embedded devices must run multimedia applications with enormous computational requirements at low energy consumption. These applications demand an extensive memory footprint and must rely on dynamic memory due to the unpredictability of input data (e.g. 3D stream features) and system behaviour (e.g. a variable number of applications running concurrently). Within this context, the dynamic memory subsystem is one of the main sources of power consumption, and embedded systems have very limited batteries with which to provide efficient general-purpose dynamic memory management. As a result, consistent design methodologies that can efficiently tackle the complex dynamic memory behaviour of these new applications on low-power embedded systems are greatly needed. In this paper we propose a step-wise system-level approach that allows the design of platform-specific dynamic memory management mechanisms with low power consumption for this kind of dynamic application. The experimental results in real-life case studies show that our approach reduces power consumption by up to 89% compared with current state-of-the-art dynamic memory managers for complex applications.

    vCAT: Dynamic Cache Management Using CAT Virtualization

    This paper presents vCAT, a novel design for dynamic shared cache management on multicore virtualization platforms based on Intel's Cache Allocation Technology (CAT). Our design achieves strong isolation at both task and VM levels through cache partition virtualization, which works in a similar way to memory virtualization but faces challenges that are unique to caches and CAT. To demonstrate the feasibility and benefits of our design, we provide a prototype implementation of vCAT, and we present an extensive set of microbenchmarks and performance evaluation results on the PARSEC benchmarks and synthetic workloads, for both static and dynamic allocations. The evaluation results show that (i) vCAT can be implemented with minimal overhead; (ii) it can be used to mitigate shared cache interference, which could otherwise have increased task WCETs by up to 7.2x; (iii) static management in vCAT can increase system utilization by up to 7x compared to a system without cache management; and (iv) dynamic management substantially outperforms static management in terms of schedulable utilization (an increase of up to 3x in our multi-mode example use case).
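    The capacity bitmasks (CBMs) that CAT uses to partition cache ways can be illustrated with a toy carving function. This is a hedged sketch, not vCAT's API: the only CAT fact it encodes is that each partition is a contiguous run of set bits over the available ways; the function name and return convention are invented.

```python
def carve_cbm(total_ways, requests):
    """Hand out non-overlapping, contiguous capacity bitmasks over
    total_ways cache ways, one mask per requested partition size.
    Returns the list of masks, or None if the requests do not fit."""
    masks, start = [], 0
    for ways in requests:
        if start + ways > total_ways:
            return None  # not enough ways left for this partition
        # A contiguous run of `ways` set bits, shifted to its slot.
        masks.append(((1 << ways) - 1) << start)
        start += ways
    return masks
```

    For example, carving a 20-way cache into a 4-way and an 8-way partition yields the masks 0b1111 and 0b111111110000; the remaining ways stay unassigned. Virtualizing these masks per VM, as vCAT does, additionally requires remapping each guest's view onto the host's physical ways.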