3 research outputs found

    A Comparison of Kernel Memory Protection for Docker Containers Across Host Operating Systems

    Object-oriented programming concepts have been widely adopted in the design of modern enterprise applications, which rely on heap memory mapping and the reuse of pre-built class libraries. Computing resource sharing, such as containerization, is a popular way to reduce operational overhead by extending kernel accessibility across distributed computer systems. Proper isolation between processes, containers, and host operating systems is therefore critical to assuring system-wide information security. This study compares the effectiveness of kernel-level memory management and protection for Docker container systems running on Ubuntu Linux and on Microsoft Windows as the host operating system. The literature research covers the fundamentals of kernel memory management designs, policies, and enforcement modules, as well as container architectures as they vary with the host operating system. The experimental design focuses on whether unauthorized access can be discovered between containers, kernel spaces, and file systems. Research results aim to determine a better approach to securing Docker container implementations and code deployment.
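
    One way to probe the experimental question above (whether unauthorized access between containers, kernel spaces, and file systems can be discovered) is to compare the kernel namespaces of a containerized process with those of the host. The sketch below is a minimal illustration, assuming a Linux host running Docker, Python 3, root privileges, and a hypothetical container named "testbox"; it is not the study's actual methodology, only one plausible isolation check on the Linux side.

        # Minimal sketch: compare the kernel namespaces of a container's init process
        # with the host's. Shared namespace identifiers would indicate weaker isolation.
        # Assumptions: Linux host, Docker CLI available, run as root, and a running
        # container named "testbox" (hypothetical).
        import os
        import subprocess

        NAMESPACES = ["pid", "mnt", "net", "ipc", "uts", "user", "cgroup"]

        def ns_links(pid: int) -> dict:
            """Return /proc/<pid>/ns/* symlink targets, which identify the namespaces."""
            links = {}
            for ns in NAMESPACES:
                try:
                    links[ns] = os.readlink(f"/proc/{pid}/ns/{ns}")
                except OSError:
                    links[ns] = None  # namespace not visible or not supported
            return links

        def container_init_pid(name: str) -> int:
            """Ask the Docker daemon for the host PID of the container's init process."""
            out = subprocess.check_output(
                ["docker", "inspect", "--format", "{{.State.Pid}}", name], text=True
            )
            return int(out.strip())

        if __name__ == "__main__":
            host = ns_links(1)  # PID 1: the host's init process
            cont = ns_links(container_init_pid("testbox"))
            for ns in NAMESPACES:
                shared = host[ns] is not None and host[ns] == cont[ns]
                print(f"{ns:7s} shared with host: {shared}")

    A well-isolated container would share few or none of these namespace identifiers with PID 1, whereas a container started with flags such as --pid=host would show the corresponding namespace as shared.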

    Near-Memory Address Translation

    Virtual memory (VM) is a crucial abstraction in modern computer systems at any scale, from handheld devices to datacenters. VM provides programmers with the illusion of an always sufficiently large, linear memory, making programming easier. Although the core components of VM have remained largely unchanged since early VM designs, the design constraints and usage patterns of VM have shifted radically from when it was invented. Today, computer systems integrate hundreds of gigabytes to a few terabytes of memory, while tightly integrated heterogeneous computing platforms (e.g., CPUs, GPUs, FPGAs) are becoming increasingly ubiquitous. As there is a clear trend towards extending the CPU's VM to all computing elements in the system for an efficient and easy-to-use programming model, the continuous demand for faster memory accesses calls for fast translations to terabytes of memory for any computing element in the system. Unfortunately, conventional translation mechanisms fall short of providing fast translations as contemporary memories exceed the reach of today's translation caches, such as TLBs. In this thesis, we provide fundamental insights into why address translation sits on the critical path of accessing memory. We observe that the traditional fully associative flexibility to map any virtual page to any page frame precludes accessing memory before translating. We study the associativity in VM across a variety of scenarios by classifying page faults using the 3C model developed for caches. Our study demonstrates that the full associativity of VM is unnecessary, and only modest associativity is required. We conclude that capacity and compulsory misses, which are unaffected by associativity, dominate, while conflict misses rapidly disappear as the associativity of VM increases. Building on the modest associativity requirements, we propose a distributed memory management unit close to where the data resides to reduce or eliminate the TLB miss penalty.
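
    To make the 3C classification described above concrete, the following is a minimal sketch (not the thesis's actual simulator): it replays a synthetic virtual-page trace against a set-associative placement of page frames with an LRU policy, uses a fully associative LRU placement of the same capacity as the reference model, and labels each fault as compulsory, capacity, or conflict. The trace, frame count, and associativities are assumptions chosen only for illustration.

        # Minimal 3C-style classifier for page faults under restricted VM associativity.
        # Compulsory: first reference to a page. Capacity: would also miss in a fully
        # associative placement of the same size. Conflict: caused purely by the
        # restricted (set-associative) placement. Trace and sizes are illustrative.
        from collections import OrderedDict

        def classify_faults(trace, num_frames, associativity):
            num_sets = num_frames // associativity
            sets = [OrderedDict() for _ in range(num_sets)]  # set-associative placement, LRU
            full = OrderedDict()                             # fully associative reference, LRU
            seen = set()
            counts = {"compulsory": 0, "capacity": 0, "conflict": 0}

            for vpn in trace:
                placement = sets[vpn % num_sets]
                full_hit = vpn in full
                if full_hit:
                    full.move_to_end(vpn)
                else:
                    if len(full) >= num_frames:
                        full.popitem(last=False)  # evict LRU page from the reference model
                    full[vpn] = True

                if vpn in placement:
                    placement.move_to_end(vpn)
                    continue
                if vpn not in seen:
                    counts["compulsory"] += 1
                elif not full_hit:
                    counts["capacity"] += 1
                else:
                    counts["conflict"] += 1
                seen.add(vpn)
                if len(placement) >= associativity:
                    placement.popitem(last=False)  # evict LRU page from this set
                placement[vpn] = True
            return counts

        if __name__ == "__main__":
            # 13 distinct pages cycled through 8 frames, with 0/8/16 colliding at low associativity.
            trace = (list(range(12)) + [0, 8, 16, 0]) * 50
            for assoc in (1, 2, 4, 8):
                print(assoc, classify_faults(trace, num_frames=8, associativity=assoc))

    Running such a sketch shows conflict misses shrinking as associativity grows, while compulsory misses are unchanged, echoing the abstract's conclusion that only modest associativity is needed.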

    Address Translation for Throughput-Oriented Accelerators
