
    Modal Abstractions for Virtualizing Memory Addresses

    Full text link
    Operating system kernels employ virtual memory management (VMM) subsystems to virtualize the addresses of memory regions in order to isolate untrusted processes, ensure process isolation, and implement demand-paging and copy-on-write behaviors for performance and resource controls. Bugs in these systems can lead to kernel crashes. VMM code is a critical piece of general-purpose OS kernels, but its verification is challenging due to the hardware interface: mappings are updated via writes to memory locations, using addresses which are themselves virtualized. Prior work on VMM verification has either handled only a single address space, trusted significant pieces of assembly code, or resorted to direct reasoning over machine semantics rather than exposing a clean logical interface. In this paper, we introduce a modal abstraction to describe the truth of assertions relative to a specific virtual address space, allowing different address spaces to refer to each other and enabling verification of instruction sequences that manipulate multiple address spaces. Using these modal assertions effectively requires treating other assertions, such as the points-to assertions of our separation logic, as relative to a given address space. We therefore define virtual points-to assertions, which mimic hardware address translation relative to a page table root. We demonstrate our approach on challenging fragments of VMM code, showing that it handles examples beyond what prior work can address, including reasoning about a sequence of instructions as it changes address spaces. All definitions and theorems mentioned in this paper, including the operational model of a RISC-like fragment of supervisor-mode x86-64 and a logic instantiated in the Iris framework, are mechanized in Coq.
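
    To make concrete the translation that virtual points-to assertions mimic, here is a minimal C sketch of a 4-level x86-64 page-table walk for 4 KiB pages. It ignores large pages and permission bits, and the accessor read_phys and the constant names are illustrative assumptions, not definitions from the paper.

        #include <stdbool.h>
        #include <stdint.h>

        #define PAGE_PRESENT    0x1ULL
        #define ENTRY_ADDR_MASK 0x000FFFFFFFFFF000ULL   /* bits 51..12 */

        /* Assumed host-provided accessor: read a 64-bit entry at a physical address. */
        extern uint64_t read_phys(uint64_t paddr);

        /* Walk the tables rooted at `root` (the CR3 value) and translate `vaddr`.
         * Returns true and writes the physical address on success. */
        bool translate(uint64_t root, uint64_t vaddr, uint64_t *paddr)
        {
            uint64_t table = root & ENTRY_ADDR_MASK;
            /* PML4, PDPT, PD, PT: 9 index bits per level, starting at bit 39. */
            for (int shift = 39; shift >= 12; shift -= 9) {
                uint64_t idx   = (vaddr >> shift) & 0x1FF;
                uint64_t entry = read_phys(table + idx * 8);
                if (!(entry & PAGE_PRESENT))
                    return false;                 /* unmapped: no points-to fact holds */
                table = entry & ENTRY_ADDR_MASK;  /* next-level table, or the final frame */
            }
            *paddr = table | (vaddr & 0xFFF);     /* frame base plus page offset */
            return true;
        }

    On this reading, a virtual points-to assertion relative to a page table root says that translate(root, vaddr, &paddr) succeeds and the physical location paddr holds the asserted value.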

    Do You Want to Build with Snowman? : Positioning Twine Story Formats Through Critical Code Study

    Get PDF
    Using critical code studies, this dissertation examines the Twine story format Snowman. Although books exist on the authoring tool Twine, a central part of its functionality, what it calls story formats, is rarely covered. This study steps into this gap and, based on my own experiences working on story formats and documenting examples using Twine, explores the greater social context of the story format Snowman by examining its source code. This dissertation consists of three chapters, each using a different set of research methods. First, the metaphor of a stack is used to better understand how software like Snowman is built on a past of other, older concepts and functionality. Second, the concept of a network is applied to better understand how software projects often rely on relationships of trust and hidden labor. Third, two other story formats, which are based on Snowman, are compared, first using a distant reading approach to find structures and then a closer reading to review how they differ. This research not only places a greater emphasis on story formats, which is missing from existing scholarship, but also positions the story format Snowman as an important, but often overlooked, part of Twine's history.

    System noise, OS clock ticks, and fine-grained parallel applications

    Full text link
    As parallel jobs get bigger in size and finer in granularity, “system noise” is increasingly becoming a problem. In fact, fine-grained jobs on clusters with thousands of SMP nodes run faster if a processor is intentionally left idle (per node), thus enabling a separation of “system noise” from the computation. Paying a cost in average processing speed at a node for the sake of eliminating occasional process delays is (unfortunately) beneficial, as such delays are enormously magnified when one late process holds up thousands of peers with which it synchronizes. We provide a probabilistic argument showing that, under certain conditions, the effect of such noise is linearly proportional to the size of the cluster (as is often empirically observed). We then identify a major source of noise to be the indirect overhead of periodic OS clock interrupts (“ticks”), which are used by all general-purpose OSs as a means of maintaining control. This is shown for various grain sizes, platforms, tick frequencies, and OSs. To eliminate such noise, we suggest replacing ticks with an alternative mechanism we call “smart timers”. This turns out to also be in line with the needs of desktop and mobile computing, increasing the chances that the suggested change will be accepted.
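
    The linear-scaling claim can be sketched as follows (the notation here is mine, not the paper's): suppose each of the N nodes independently suffers a noise-induced delay during a given compute phase with probability p. A barrier waits for the slowest node, so the phase is delayed whenever at least one node is hit:

        \Pr[\text{phase delayed}] \;=\; 1 - (1 - p)^{N} \;\approx\; N p \qquad \text{when } Np \ll 1 .

    For small per-node delay probabilities, the expected number of delayed phases, and hence the overall slowdown, therefore grows roughly linearly with the cluster size N.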

    A UNIFIED HARDWARE/SOFTWARE PRIORITY SCHEDULING MODEL FOR GENERAL PURPOSE SYSTEMS

    Get PDF
    Migrating functionality from software to hardware has historically held the promise of enhancing performance by exploiting the inherent parallel nature of hardware. Many early exploratory efforts in repartitioning traditional software-based services into hardware were hampered by expensive ASIC development costs. Recent advancements in FPGA technology have made it more economically feasible to explore migrating functionality across the hardware/software boundary. The flexibility of the FPGA fabric and the availability of configurable soft IP components have opened the potential to rapidly and economically investigate different hardware/software partitions. Within the real-time operating systems community, there has been continued interest in applying hardware/software co-design approaches to address scheduling issues such as latency and jitter. Many hardware-based approaches have been reported to reduce the latency of computing the scheduling decision function itself. However, continued adherence to classic scheduler invocation mechanisms can still allow variable latencies to creep into the time taken to make the scheduling decision, and ultimately into application timelines. This dissertation explores how hardware/software co-design can be applied past the scheduling decision itself to also reduce the non-predictable delays associated with interrupts and timers. By expanding the window of hardware/software co-design to these invocation mechanisms, we seek to understand whether the jitter introduced by classical hardware/software partitionings can be removed from the timelines of critical real-time user processes. This dissertation makes a case for resetting the classic boundaries of software thread-level scheduling, software timers, hardware timers, and interrupts. We show that reworking the boundaries of the scheduling invocation mechanisms helps to rectify the current imbalance between traditional hardware invocation mechanisms (timers and interrupts) and software scheduling policy (the operating system scheduler). We re-factor these mechanisms into a unified hardware/software priority scheduling model to facilitate improvements in performance, timeliness, and determinism in all domains of computing. This dissertation demonstrates and prototypes the creation of a new framework that effects this basic policy change. The advantage of this approach lies in its ability to unify, simplify, and allow for more control within the operating system's scheduling policy.
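
    As one concrete illustration of why hardware can make the scheduling decision itself constant-time, here is a minimal C model of a bitmap-based priority pick; in an FPGA the same decision reduces to a priority encoder with fixed latency. The names are illustrative sketches of mine, not from the dissertation, and __builtin_clzll assumes GCC or Clang.

        #include <stdint.h>

        /* Bit i set means at least one thread at priority i is ready to run. */
        static uint64_t ready_bitmap;

        void mark_ready(int prio)   { ready_bitmap |=  (1ULL << prio); }
        void mark_blocked(int prio) { ready_bitmap &= ~(1ULL << prio); }

        /* Scheduling decision: the highest set bit wins; returns -1 when idle.
         * In hardware this is a priority encoder, so the decision latency is
         * fixed and jitter-free regardless of how many priorities are ready. */
        int pick_next(void)
        {
            if (ready_bitmap == 0)
                return -1;
            return 63 - __builtin_clzll(ready_bitmap);
        }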

    Graphics Programming in Icon

    Get PDF
    This book was originally published by Peer-to-Peer Communications. It is out of print and the rights have reverted to the authors, who hereby place it in the public domain. Publisher's Cataloging-in-Publication data provided by Quality Books, Inc.

    The Tiger Vol. 91 Issue 2 1997-09-05

    Get PDF
    https://tigerprints.clemson.edu/tiger_newspaper/2931/thumbnail.jp

    Shards: a system for systems

    Get PDF
    Operating system construction is often focused on the internal operation and architecture of a general-purpose system. This thesis instead focuses on systems built in response to a specific purpose, design intent, application load, and platform. These are referred to as custom systems in the thesis. These focused systems have known demands, constraints, and requirements that provide a target for system design and optimisation, and they can perform valuable and demanding tasks which may encourage optimisation effort. The first challenge was to discover and capture these attributes in an encoding that can be machine-manipulated. The second challenge was to use this information in a way which makes custom system construction economical, thereby widening the range of systems for which such efforts are appropriate. Bespoke, manual system construction is too expensive for the more narrowly deployed systems being considered, and the operating systems field generally assumes a long-lived and widely deployed general system which can afford significant up-front design effort, which is not applicable in this case. The proposed solution was to balance the advantages of modular functionality with automated configuration, construction, and tailoring based on the captured demands of the proposed system. Effectively, the operating system is compiled as an integrated part of the system. In such an approach, new inputs not relevant to general systems, such as application code and design intent, are known in advance and can inform the system generation process. This leads to an operating system structure that is determined by, and optimised to, the needs of the proposed system. A clean architecture is often a design goal for system construction; in this case the ideal is an operating system so integrated into the overall system that there is no clearly identifiable run-time structure. The operating system could become part of the hardware, the system's operation, or its applications. The final goal was to build a foundation in which construction work and advances can be captured and reused. Building a complete "system of systems" in a single project would be an impractical undertaking, so the effort was to build an approach and framework which could grow as a side effect of its use and application. This allowed the lessons learnt and work done in one project to potentially enrich both this approach and the domain of operating systems.

    Music analysis and the computer: developing a computer operating system to analyse music, using Johann Sebastian Bach's "Well Tempered Clavier" Book II to test the methodology

    Get PDF
    "Most computerised and computer-aided musicological projects are written to achieve specific goals. Once achieved or not achieved as the case may be, the projects and their tools are frequently discarded because their dependency upon specific computer hardware and software prevents them from being utilised by other researchers for other projects. What is needed is a system that, using small tools to accomplish small tasks, can be expanded and customized to suit specific needs. This thesis proposes the creation of a music-analysis computer operating system that contains simple commands to perform simple musicological tasks such as the removal of repeated notes from a score or the audible rendition of a melodic line. The tools can be bolted together to form larger tools that perform larger tasks. New tools can be created and added to the operating system with relative ease, and these in turn can be bolted onto old tools. The thesis suggests a basic set of tools derived from old and new analytical methods, proposes a standard for their implementation based on the UNIX computer operating system, and discusses the benefits of using the system and its tools in an analysis of the twenty-four fugues of Johann Sebastian Bach from the "Well Tempered Clavier", Book II.

    Maine Campus March 29 2006

    Get PDF