
    Linux kernel compaction through cold code swapping

    There is a growing trend to use general-purpose operating systems like Linux in embedded systems. Previous research focused on using compaction and specialization techniques to adapt a general-purpose OS to the memory-constrained environment presented by most embedded systems. However, there is still room for improvement: it has been shown that even after application of the aforementioned techniques, more than 50% of the kernel code remains unexecuted under normal system operation. We introduce a new technique that reduces the Linux kernel code memory footprint through on-demand code loading of infrequently executed code, for systems that support virtual memory. In this paper, we describe our general approach, and we study code placement algorithms to minimize the performance impact of the code loading. A code size reduction of 68% is achieved, with a 2.2% speedup of the system-mode execution time, for a case study based on the MediaBench II benchmark suite.
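    The placement problem the abstract alludes to can be illustrated with a simple greedy heuristic: functions that profiling shows to be hot stay resident, while cold functions are packed into fixed-size loadable pages, ordered so that a single page fault tends to bring in code that is needed together. The sketch below is a minimal illustration of that idea under assumed inputs (the function names, page size, and hotness threshold are invented for the example); it is not the placement algorithm studied in the paper.

        import java.util.*;

        // Toy cold-code placement: pack cold functions into fixed-size pages so
        // that a page fault for one cold function also loads related cold code.
        class ColdCodePlacement {
            record Func(String name, int size, long execCount) {}

            static final int PAGE_SIZE = 4096;        // assumed page size
            static final long HOT_THRESHOLD = 1_000;  // assumed hotness cutoff

            static List<List<Func>> place(List<Func> funcs) {
                // Hot functions stay resident; only cold code is swapped out.
                List<Func> cold = new ArrayList<>();
                for (Func f : funcs)
                    if (f.execCount() < HOT_THRESHOLD) cold.add(f);

                // Heuristic: order cold functions by execution count so the
                // "warmest" cold code clusters on the same pages, reducing the
                // number of faults taken once it is eventually executed.
                cold.sort(Comparator.comparingLong(Func::execCount).reversed());

                List<List<Func>> pages = new ArrayList<>();
                List<Func> page = new ArrayList<>();
                int used = 0;
                for (Func f : cold) {
                    if (used + f.size() > PAGE_SIZE && !page.isEmpty()) {
                        pages.add(page);
                        page = new ArrayList<>();
                        used = 0;
                    }
                    page.add(f);
                    used += f.size();
                }
                if (!page.isEmpty()) pages.add(page);
                return pages;
            }

            public static void main(String[] args) {
                System.out.println(place(List.of(
                        new Func("usb_error_recover", 512, 3),     // cold
                        new Func("fs_mount", 900, 40),             // cold
                        new Func("net_rx_fast_path", 700, 50_000)  // stays resident
                )));
            }
        }

    Packing by descending execution count is only one possible affinity heuristic; the paper's point is precisely that the choice of placement determines how many page faults the on-demand loading incurs.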

    Memory Footprint Reduction for Operating System Kernels

    In embedded systems, often only a limited amount of memory is available. Much attention is therefore paid to producing compact programs for these systems, and a wide range of techniques has been developed to automatically reduce the memory usage of programs. So far, those techniques have mainly targeted the application software running on the system, while the operating system was overlooked. This dissertation describes a number of techniques that make it possible to substantially reduce the memory usage of an operating system kernel in an automated way. Initially, compaction transformations are applied at link time. When the hardware and software the system is composed of are known, further reductions can be achieved. To that end, the kernel is specialized for a particular hardware-software combination. Superfluous functionality is detected and removed from the kernel, while the remaining functionality is adapted to the specific usage patterns that can be derived from the hardware and software. Finally, techniques are presented that make it possible to remove rarely or never executed code (for example, code for handling error conditions that occur only rarely) from memory. This code is then loaded only at the moment it is actually needed. For our test system, the combined techniques reduce the memory usage of a Linux 2.4 kernel by more than 48%.
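    The link-time compaction mentioned in the summary ultimately rests on whole-program reachability: once all entry points of the kernel image are known, any function that no entry point can reach is dead weight and can be removed. A minimal sketch of that analysis, assuming a call graph is already available (the function names and graph are hypothetical; the dissertation's transformations operate on the binary itself):

        import java.util.*;

        // Minimal unreachable-function elimination over a call graph: anything
        // not reachable from an entry point can be dropped from the image.
        class DeadFunctionElimination {
            static Set<String> reachable(Map<String, List<String>> callGraph,
                                         Set<String> entryPoints) {
                Set<String> seen = new HashSet<>(entryPoints);
                Deque<String> work = new ArrayDeque<>(entryPoints);
                while (!work.isEmpty()) {
                    String f = work.pop();
                    for (String callee : callGraph.getOrDefault(f, List.of()))
                        if (seen.add(callee)) work.push(callee);
                }
                return seen;
            }

            public static void main(String[] args) {
                // Hypothetical fragment of a kernel call graph.
                Map<String, List<String>> cg = Map.of(
                        "start_kernel", List.of("init_mm", "init_sched"),
                        "init_mm", List.of("alloc_pages"),
                        "scsi_probe", List.of("scsi_error"));  // no SCSI on this board
                // Only start_kernel is an entry point, so the SCSI code is dead.
                System.out.println("kept: " + reachable(cg, Set.of("start_kernel")));
            }
        }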

    BinRec: Attack surface reduction through dynamic binary recovery

    Compile-time specialization and feature pruning through static binary rewriting have been proposed repeatedly as techniques for reducing the attack surface of large programs and for minimizing the trusted computing base. We propose a new approach to attack surface reduction: dynamic binary lifting and recompilation. We present BinRec, a binary recompilation framework that lifts binaries to a compiler-level intermediate representation (IR) to allow complex transformations on the captured code. After transformation, BinRec lowers the IR back to a "recovered" binary, which is semantically equivalent to the input binary but has its unnecessary features removed. Unlike existing approaches, which are mostly based on static analysis and rewriting, our framework analyzes and lifts binaries dynamically. The crucial advantage is that we can not only observe the full program, including all of its dependencies, but also determine which program features the end user actually uses. We evaluate the correctness and performance of BinRec, and show that our approach enables aggressive pruning of unwanted features in COTS binaries.
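    The pruning policy described above is coverage-driven: code observed while tracing the program's actual use is kept, and everything else can be removed or replaced with a trap so that an unexpected path fails safely instead of running pruned code. A toy sketch of that policy (the block representation and trap behavior are assumptions made for illustration; BinRec itself lifts machine code to a compiler IR rather than operating on objects like these):

        import java.util.*;

        // Toy coverage-guided pruning: keep only basic blocks observed in traces;
        // replace everything else with a trap that aborts instead of executing.
        class CoveragePruner {
            record Block(int id, Runnable body) {}

            static Map<Integer, Runnable> prune(List<Block> program,
                                                Set<Integer> coveredIds) {
                Map<Integer, Runnable> recovered = new HashMap<>();
                for (Block b : program) {
                    if (coveredIds.contains(b.id())) {
                        recovered.put(b.id(), b.body());   // observed in use: keep
                    } else {
                        recovered.put(b.id(), () -> {      // never observed: trap
                            throw new IllegalStateException("pruned block " + b.id());
                        });
                    }
                }
                return recovered;
            }
        }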

    Graphulo Implementation of Server-Side Sparse Matrix Multiply in the Accumulo Database

    The Apache Accumulo database excels at distributed storage and indexing and is ideally suited for storing graph data. Many big data analytics compute on graph data and persist their results back to the database. These graph calculations are often best performed inside the database server. The GraphBLAS standard provides a compact and efficient basis for a wide range of graph applications through a small number of sparse matrix operations. In this article, we implement GraphBLAS sparse matrix multiplication server-side by leveraging Accumulo's native, high-performance iterators. We compare the mathematics and performance of inner and outer product implementations, and show how an outer product implementation achieves optimal performance near Accumulo's peak write rate. We offer our work as a core component to the Graphulo library that will deliver matrix math primitives for graph analytics within Accumulo. (Comment: to be presented at IEEE HPEC 2015: http://www.ieee-hpec.org)
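    The inner/outer product contrast the article compares can be stated compactly. The inner-product form computes each output cell C[i][j] as a dot product of a row of A with a column of B, which requires intersecting two sparse vectors per cell; the outer-product form instead streams column k of A against row k of B and emits partial products C[i][j] += A[i][k] * B[k][j], which can be written out immediately and summed later, a good match for a write-optimized store. Below is a minimal sketch of the outer-product form over in-memory row maps (taking A pre-transposed so its columns can be read row-by-row; this representation is an assumption for the example, not Graphulo's iterator-based implementation):

        import java.util.*;

        // Outer-product sparse multiply: C = sum over k of A[:,k] (x) B[k,:].
        class OuterProductMatmul {
            // Sparse matrix as row -> (col -> value); only nonzeros are stored.
            // aT is A transposed, so row k of aT is column k of A.
            static Map<Integer, Map<Integer, Double>> multiply(
                    Map<Integer, Map<Integer, Double>> aT,
                    Map<Integer, Map<Integer, Double>> b) {
                Map<Integer, Map<Integer, Double>> c = new HashMap<>();
                for (var colA : aT.entrySet()) {
                    Map<Integer, Double> rowB = b.get(colA.getKey());
                    if (rowB == null) continue;  // column k of A meets empty row k of B
                    // Emit the outer product A[:,k] (x) B[k,:] as partial products;
                    // each one could be flushed to storage immediately and the sums
                    // finished later, matching a write-optimized ingest path.
                    for (var ai : colA.getValue().entrySet())
                        for (var bj : rowB.entrySet())
                            c.computeIfAbsent(ai.getKey(), x -> new HashMap<>())
                             .merge(bj.getKey(), ai.getValue() * bj.getValue(), Double::sum);
                }
                return c;
            }
        }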

    Securing Virtualized System via Active Protection

    Virtualization is the predominant enabling technology of current cloud infrastructure.

    Java for Cost Effective Embedded Real-Time Software


    Fast and Lean Immutable Multi-Maps on the JVM based on Heterogeneous Hash-Array Mapped Tries

    An immutable multi-map is a many-to-many, thread-friendly map data structure with expected fast insert and lookup operations. This data structure is used by applications that process graphs or many-to-many relations, as in static analysis of object-oriented systems. When processing such big data sets, the memory overhead of the data structure encoding itself becomes a memory usage bottleneck. Motivated by reuse and type safety, libraries for Java, Scala and Clojure typically implement immutable multi-maps as trie maps that nest sets as their values. Based on our measurements, the expected byte overhead of such a sparse multi-map adds up to around 65B per stored entry, which renders it infeasible to compute with effectively on the JVM. In this paper we propose a general framework for Hash-Array Mapped Tries on the JVM which can store type-heterogeneous keys and values: a Heterogeneous Hash-Array Mapped Trie (HHAMT). Among other applications, this allows for a highly efficient multi-map encoding by (a) not reserving space for empty value sets and (b) inlining the values of singleton sets, while maintaining a (c) type-safe API. We detail the necessary encoding and optimizations to mitigate the overhead of storing and retrieving heterogeneous data in a hash-trie. Furthermore, we evaluate HHAMT specifically for the application to multi-maps, comparing it to state-of-the-art encodings of multi-maps in Java, Scala and Clojure. We isolate key differences using microbenchmarks and validate the resulting conclusions on a real-world case in static analysis. The new encoding brings the per key-value storage overhead down to 30B: a 2x improvement. With additional inlining of primitive values it reaches a 4x improvement.
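    The singleton-inlining idea in (b) can be shown in isolation: a slot holds either a bare value or a set of values, and each access type-checks the slot before widening it. A stripped-down sketch of the idea over a plain hash map (the class and method names are invented; HHAMT applies the same trick inside trie nodes, where the node's bitmap encoding rather than an instanceof test records each slot's type):

        import java.util.*;

        // Multi-map that inlines singleton value sets instead of allocating a Set.
        // Assumes values are never themselves Sets; otherwise the type test below
        // would be ambiguous (HHAMT avoids this by encoding slot kinds out-of-band).
        class InliningMultiMap<K, V> {
            // Slot holds either a bare V (one value) or a Set<V> (two or more),
            // so singleton entries pay no per-Set object overhead.
            private final Map<K, Object> slots = new HashMap<>();

            @SuppressWarnings("unchecked")
            void put(K key, V value) {
                Object slot = slots.get(key);
                if (slot == null) {
                    slots.put(key, value);             // inline the singleton
                } else if (slot instanceof Set<?>) {
                    ((Set<V>) slot).add(value);
                } else {                               // widen singleton -> Set
                    Set<V> set = new HashSet<>();
                    set.add((V) slot);
                    set.add(value);
                    slots.put(key, set);
                }
            }

            @SuppressWarnings("unchecked")
            Set<V> get(K key) {
                Object slot = slots.get(key);
                if (slot == null) return Set.of();
                if (slot instanceof Set<?>) return (Set<V>) slot;
                return Set.of((V) slot);               // widen on read only
            }
        }

    The sketch trades a type test on each access for one less object per singleton entry, which is where the per-entry overhead reduction reported in the abstract comes from when most keys map to a single value.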