14 research outputs found

    Precomputing Memory Locations for Parametric Allocations

    Current worst-case execution time (WCET) analyses do not support programs using dynamic memory allocation. This is mainly due to the unpredictability of cache performance introduced by standard memory allocators. To overcome this problem, algorithms have been proposed that precompute static allocations for dynamically allocating programs with known numeric bounds on the number and sizes of allocated memory blocks. In this paper, we present a novel algorithm for computing such static allocations that can cope with parametric bounds on the number and sizes of allocated blocks. To demonstrate the usefulness of our approach, we precompute static allocations for a set of existing real-time applications and academic examples.
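
    As a rough illustration of what such a precomputed allocation looks like in code (the names and the bound below are hypothetical, not taken from the paper): once a bound on an allocation site is known, every request at that site can be served from a statically reserved arena, so each block's address, and hence its cache behavior, is fixed before run time.

        #include <stddef.h>

        /* Hypothetical allocation site: the program builds a list of at
         * most MAX_NODES nodes, a bound the analysis has established.
         * The dynamic malloc at this site is replaced by indexing into
         * a statically reserved arena with fixed, analyzable addresses. */
        #define MAX_NODES 32                 /* known bound on allocations */

        struct node { int key; struct node *next; };

        static struct node arena[MAX_NODES]; /* the precomputed arena */
        static size_t next_slot;             /* bump index into arena */

        /* Drop-in replacement for malloc(sizeof(struct node)) here. */
        static struct node *alloc_node(void)
        {
            if (next_slot >= MAX_NODES)      /* unreachable if the static
                                                bound is correct */
                return NULL;
            return &arena[next_slot++];
        }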

    Automated Object Layout Optimization in a Portable Microkernel

    Remote Memory References at Block Granularity

    Efficient parameterized algorithms for data packing

    There is a huge gap between the speeds of modern caches and main memories; cache misses therefore account for a considerable loss of efficiency in programs. The predominant technique to address this issue has been Data Packing: data elements that are frequently accessed within close time proximity are packed into the same cache block, thereby minimizing accesses to the main memory. We consider the algorithmic problem of Data Packing on a two-level memory system. Given a reference sequence R of accesses to data elements, the task is to partition the elements into cache blocks such that the number of cache misses on R is minimized. The problem is notoriously difficult: it is NP-hard even when the cache has size 1, and is hard to approximate for any cache size larger than 4. Therefore, all existing techniques for Data Packing are based on heuristics and lack theoretical guarantees. In this work, we present the first positive theoretical results for Data Packing, along with new and stronger negative results. We consider the problem under the lens of the underlying access hypergraphs, which are hypergraphs of affinities between the data elements, where the order of an access hypergraph corresponds to the size of the affinity group. We study the problem parameterized by the treewidth of access hypergraphs, a standard notion in graph theory measuring how close a graph is to a tree. Our main results are as follows: we show that there is a number q*, depending on the cache parameters, such that (a) if the access hypergraph of order q* has constant treewidth, then there is a linear-time algorithm for Data Packing, and (b) the Data Packing problem remains NP-hard even if the access hypergraph of order q*-1 has constant treewidth. Thus, we establish a fine-grained dichotomy depending on a single parameter, namely the highest order among access hypergraphs that have constant treewidth, and we establish the optimal value q* of this parameter. Finally, we present an experimental evaluation of a prototype implementation of our algorithm. Our results demonstrate that, in practice, the access hypergraphs of many commonly used algorithms have small treewidth. We compare our approach with several state-of-the-art heuristic-based algorithms and show that our algorithm leads to significantly fewer cache misses.
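
    To make the optimization objective concrete, the sketch below (illustrative parameters and names, not the paper's implementation) counts the LRU cache misses a given packing incurs on a reference sequence; the cache is set to hold a single block, the regime in which the abstract notes the problem is already NP-hard.

        #include <stdio.h>

        #define CACHE_BLOCKS 1   /* m: blocks the cache holds */

        /* Count LRU cache misses of reference sequence refs under the
         * packing block_of (mapping element id -> cache block id). */
        static int count_misses(const int *refs, int n, const int *block_of)
        {
            int cache[CACHE_BLOCKS];      /* block ids, most recent first */
            int used = 0, misses = 0;

            for (int i = 0; i < n; i++) {
                int b = block_of[refs[i]];
                int hit = -1;
                for (int j = 0; j < used; j++)
                    if (cache[j] == b) { hit = j; break; }
                if (hit < 0) {            /* miss: make room if needed */
                    misses++;
                    if (used < CACHE_BLOCKS) used++;
                    hit = used - 1;       /* LRU slot gets overwritten */
                }
                for (int j = hit; j > 0; j--) /* move block b to front */
                    cache[j] = cache[j - 1];
                cache[0] = b;
            }
            return misses;
        }

        int main(void)
        {
            int refs[] = {0, 1, 0, 1, 0, 1}; /* reference sequence R */
            int together[] = {0, 0};         /* both elements in one block */
            int apart[]    = {0, 1};         /* one element per block */
            printf("together: %d misses\n", count_misses(refs, 6, together));
            printf("apart:    %d misses\n", count_misses(refs, 6, apart));
            return 0;
        }

    On this toy sequence, packing the two co-accessed elements into one block yields a single cold miss, while splitting them misses on every access.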

    Garbage Collection Algorithms

    This thesis focuses on an implementation of automatic memory management in the C programming language. The mark-sweep method was modified for use in an uncooperative programming language, which does not share the data-type information of the memory slots accessible by the mutator. Consequently, decisions on pointer identity are conservative, which guarantees safe collector operation: if a value looks sufficiently like a pointer, it is considered a pointer (although it might not actually be one). Mark bits were moved from the objects' headers to bitmaps stored in a separate part of memory, to prevent accidental writes to the user's data by the collector. Finally, the use of the garbage collector was evaluated in practice.
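
    As a hedged sketch of the two mechanisms described above (hypothetical names, not the thesis's code): conservative pointer identification treats any suitably aligned value inside the heap range as a pointer, and mark bits live in a bitmap kept outside the heap so the collector never writes into the mutator's objects.

        #include <stdint.h>
        #include <stddef.h>

        /* The transitive scan of marked objects is omitted; only the
         * conservative root scan is shown. */
        #define HEAP_WORDS 1024

        static uintptr_t heap[HEAP_WORDS];          /* managed heap */
        static uint8_t   mark_bits[HEAP_WORDS / 8]; /* marks live outside
                                                       the heap itself */

        /* Conservative test: any properly aligned value that falls
         * inside the heap is treated as a pointer, which is safe but
         * may retain some garbage. */
        static int looks_like_pointer(uintptr_t v)
        {
            uintptr_t lo = (uintptr_t)heap;
            uintptr_t hi = (uintptr_t)(heap + HEAP_WORDS);
            return v >= lo && v < hi &&
                   (v - lo) % sizeof(uintptr_t) == 0;
        }

        static void set_mark(uintptr_t v)  /* set bit in the bitmap */
        {
            size_t idx = (v - (uintptr_t)heap) / sizeof(uintptr_t);
            mark_bits[idx / 8] |= (uint8_t)(1u << (idx % 8));
        }

        /* Mark phase over a root area, e.g. the scanned stack. */
        static void mark_roots(const uintptr_t *roots, size_t n)
        {
            for (size_t i = 0; i < n; i++)
                if (looks_like_pointer(roots[i]))
                    set_mark(roots[i]);
        }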

    Cache-conscious Splitting of MapReduce Tasks and its Application to Stencil Computations

    Modern cluster systems are typically composed of nodes with multiple processing units and memory hierarchies comprising multiple cache levels of various sizes. To leverage the full potential of these architectures, it is necessary to explore concepts such as parallel programming and the layout of data onto the memory hierarchy. However, the inherent complexity of these concepts and the heterogeneity of the target architectures raise several challenges at the application-development and performance-portability levels, respectively. Concerning parallel programming, several models and frameworks are available, of which MapReduce [16] is one of the most popular. It was developed at Google [16] for the parallel and distributed processing of large amounts of data on large clusters of commodity machines. Although they are very powerful tools, the reference MapReduce frameworks, such as Hadoop and Spark, do not leverage the characteristics of the underlying memory hierarchy. This shortcoming is particularly noticeable in computations that benefit from temporal locality, such as stencil computations. In this context, the goal of this thesis is to improve the performance of MapReduce computations that benefit from temporal locality. To that end, we optimize the mapping of MapReduce computations onto a machine's cache hierarchy by applying cache-aware tiling techniques. We prototyped our solution on top of the Hadoop MapReduce framework, incorporating cache-awareness into the splitting stage. To validate our solution and assess its benefits, we developed an API for expressing stencil computations on top of the developed framework. The experimental results show that, for a typical stencil computation, our solution delivers an average speed-up of 1.77 while reaching a peak speed-up of 3.2. These findings allow us to conclude that cache-aware decomposition considerably boosts the execution of this class of MapReduce computations.
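
    A brief sketch of the splitting idea under assumed constants (the names and numbers are illustrative, not the thesis's Hadoop code): the input split handed to one map task is sized from the cache budget instead of a fixed block size, so a task's working set can stay cache-resident across the repeated accesses a stencil makes.

        #include <stdio.h>

        #define CACHE_BYTES    (256 * 1024) /* assumed per-core budget */
        #define WORKING_FACTOR 2            /* e.g. input + output arrays */

        /* Records per split so one task's working set fits in cache. */
        static long records_per_split(long record_bytes)
        {
            long n = CACHE_BYTES / (WORKING_FACTOR * record_bytes);
            return n > 0 ? n : 1;
        }

        int main(void)
        {
            /* e.g. one grid point of a stencil stored as a double */
            printf("records per split: %ld\n",
                   records_per_split((long)sizeof(double)));
            return 0;
        }

    A real implementation would probe the cache sizes of the machine and account for the stencil halo; the point is only that the split size is derived from the cache, not from a fixed block size.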

    Timing-predictable memory allocation in hard real-time systems

    For hard real-time applications, tight provable bounds on the application's worst-case execution time (WCET) must be derivable. Employing dynamic memory allocation, in general, significantly decreases an application's timing predictability. In consequence, current hard real-time applications rely on static memory management. This thesis studies how the predictability issues of dynamic memory allocation can be overcome and dynamic memory allocation be enabled for hard real-time applications. We give a detailed analysis of the predictability challenges that dynamic memory allocation imposes on current state-of-the-art timing analyses. We propose two approaches to overcome these issues and enable dynamic memory allocation for hard real-time systems: automatically transforming dynamic into static allocation, and using a novel, cache-aware and predictable memory allocator. Statically transforming dynamic into static memory allocation allows for very precise WCET bounds, as all accessed memory addresses are completely known. However, this approach requires much information about the application's allocation behavior to be available statically. For programs where a static precomputation of a suitable allocation scheme is not applicable, we investigate approaches to constructing predictable dynamic memory allocators that can replace the standard, general-purpose allocators in real-time applications. We present evaluations of the proposed approaches to demonstrate their practical applicability.
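
    One way such a predictable allocator can be organized, sketched below under assumptions (segregated free lists over a static arena; the thesis's actual allocator design may differ): every size class has its own free list, so allocation and deallocation are single list operations with constant, analyzable cost.

        #include <stddef.h>
        #include <stdint.h>

        #define NUM_CLASSES     4
        #define SLOTS_PER_CLASS 64
        #define MAX_BLOCK       128

        static const size_t class_size[NUM_CLASSES] = {16, 32, 64, 128};

        struct free_node { struct free_node *next; };

        /* Every slot is MAX_BLOCK bytes; the union ensures alignment. */
        static union slot {
            struct free_node node;
            uint8_t          bytes[MAX_BLOCK];
        } arena[NUM_CLASSES][SLOTS_PER_CLASS];

        static struct free_node *free_list[NUM_CLASSES];

        static int class_for(size_t size)  /* bounded loop: 4 iterations */
        {
            for (int c = 0; c < NUM_CLASSES; c++)
                if (size <= class_size[c])
                    return c;
            return -1;
        }

        void rt_init(void)                 /* build free lists once at boot */
        {
            for (int c = 0; c < NUM_CLASSES; c++)
                for (int s = 0; s < SLOTS_PER_CLASS; s++) {
                    arena[c][s].node.next = free_list[c];
                    free_list[c] = &arena[c][s].node;
                }
        }

        void *rt_malloc(size_t size)       /* O(1): pop from one list */
        {
            int c = class_for(size);
            if (c < 0 || free_list[c] == NULL)
                return NULL;               /* fail predictably, never search */
            struct free_node *n = free_list[c];
            free_list[c] = n->next;
            return n;
        }

        /* O(1): push back; size must equal the original request size. */
        void rt_free(void *p, size_t size)
        {
            int c = class_for(size);
            struct free_node *n = p;
            n->next = free_list[c];
            free_list[c] = n;
        }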