
    The Legacy of Multics and Secure Operating Systems Today

    This paper examines the legacy of Multics, begun in 1963, and its influence on computer security. It discusses kernel-based and virtualization-based containment in projects such as SELinux and Qubes, respectively, and notes the importance of collaborative, research-driven projects like Qubes and the Tor Project.

    50 years of isolation

    The traditional means of isolating applications from one another is the operating system's “process” abstraction. However, as applications now consist of multiple fine-grained components, the traditional process model is proving insufficient for ensuring this isolation. Statistics indicate that a high percentage of software failures occur through the propagation of component failures. These observations are further bolstered by the efforts of modern Internet browser developers, for example, to adopt multi-process architectures in order to increase robustness. A fresh look at the available options for isolating program components is therefore necessary, and this paper provides an overview of previous and current research in the area.
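    The process-level containment the abstract describes can be illustrated with a minimal sketch (not from the paper): each "component" runs in its own OS process, so a crash in one is reported as a nonzero exit status rather than propagating into its siblings.

    ```python
    # Sketch of process-based fault isolation: two hypothetical components,
    # each in its own process. The faulty one crashes; the sibling is unaffected.
    import subprocess
    import sys

    # A component that fails: the uncaught exception is confined to its process.
    faulty = subprocess.run(
        [sys.executable, "-c", "raise RuntimeError('component crashed')"],
        capture_output=True,
    )

    # A sibling component in a separate process still produces its result.
    healthy = subprocess.run(
        [sys.executable, "-c", "print('ok')"],
        capture_output=True, text=True,
    )

    assert faulty.returncode != 0          # failure stayed inside the child
    assert healthy.stdout.strip() == "ok"  # sibling was not affected
    ```

    Multi-process browsers apply the same idea at a larger scale: one renderer per site, so a renderer crash takes down a tab rather than the whole application.
    
    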

    Major Trends in Operating Systems Development

    Operating systems have changed in nature in response to the demands of users, and in response to advances in hardware and software technology. The purpose of this paper is to trace the development of major themes in operating system design from their beginnings through the present. This is not an exhaustive history of operating systems, but is instead intended to give the reader the flavor of the different periods in operating systems' development. To this end, the paper is organized by topic in approximate order of development. Each chapter starts with an introduction to the factors behind the rise of the period. This is followed by a survey of the state-of-the-art systems and the conditions influencing them. The chapters close with a summation of the significant hardware and software contributions of the period.

    On The Hourglass Model, The End-to-End Principle and Deployment Scalability

    The hourglass model is widely used as a means of describing the design of the Internet, and can be found in the introduction of many modern textbooks. It arguably also applies to the design of other successful spanning layers, notably the Unix operating system kernel interface, meaning the primitive system calls and the interactions between user processes and the kernel. The impressive success of the Internet has led to a wider interest in applying the hourglass model to other layered systems, with the goal of achieving similar results. However, application of the hourglass model has often led to controversy, perhaps in part because the language in which it has been expressed has been informal and the arguments for its validity have not been precise. Making a start on formalizing such an argument is the goal of this paper.
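    The Unix kernel interface's role as a "narrow waist" can be sketched in a few lines (an illustrative example, not taken from the paper; the file name is made up): very different resources, here a regular file and a pipe, are all driven through the same few primitive system calls.

    ```python
    # The narrow waist of the Unix interface: open/read/write/close work
    # uniformly across resource types (POSIX semantics assumed).
    import os

    # A regular file...
    fd = os.open("demo.txt", os.O_CREAT | os.O_WRONLY | os.O_TRUNC)
    os.write(fd, b"hello")
    os.close(fd)

    # ...and a pipe answer to the same read/write primitives.
    r, w = os.pipe()
    os.write(w, b"hello")
    os.close(w)
    data = os.read(r, 5)
    os.close(r)
    print(data)  # b'hello'
    ```

    Everything above the waist (shells, databases, browsers) and everything below it (disks, terminals, sockets) meets at this small, stable set of calls, which is the spanning-layer property the paper sets out to formalize.
    
    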

    The Use of UNIX in a Real-Time Environment

    This paper describes a project to evaluate the feasibility of using commercial off-the-shelf hardware and the UNIX (trademark of AT&T Bell Laboratories) operating system to implement a real-time control and monitor system. A functional subset of the Checkout, Control and Monitor System (CCMS) was chosen as the testbed for the project. The project consists of three separate architecture implementations: a local-area bus network, a star network, and a central host. The motivation for this project stemmed from the need to find a way to implement real-time systems without the cost burden of developing and maintaining custom hardware and unique software. This had always been accepted as the only option because of the need to optimize the implementation for performance. However, with the cost/performance of today's hardware, the inefficiencies of high-level languages and portable operating systems can be effectively overcome.

    Garbage Collection in a Very Large Address Space

    This research was done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology and was supported by the Office of Naval Research under contract number N00014-75-C-0522. The address space is broken into areas that can be garbage collected separately. An area is analogous to a file on current systems. Each process has a local computation area for its stack and temporary storage that is roughly analogous to a job core image. A mechanism is introduced for maintaining lists of inter-area links, the key to separate garbage collection. This mechanism is designed to be placed in hardware and does not create much overhead. It could be used in a practical computer system that uses the same address space for all users for the life of the system. It is necessary for the hardware to implement a reference count scheme that is adequate for handling stack frames. The hardware also facilitates implementation of protection by capabilities without the use of unique codes. This is due to the elimination of dangling references: areas can be deleted without creating dangling references. (MIT Artificial Intelligence Laboratory; Department of Defense; Office of Naval Research.)
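    The inter-area link lists that enable separate collection can be modeled with a toy sketch (hypothetical names and a software simulation, not the paper's hardware design): each object lives in one area, references that cross an area boundary are recorded in the target area's link list, and an area is then collected alone using its local roots plus its incoming links.

    ```python
    # Toy model of area-based garbage collection with inter-area link lists.
    class Obj:
        def __init__(self, area, name):
            self.area, self.name, self.refs = area, name, []

    class Area:
        def __init__(self, name):
            self.name = name
            self.objects = []   # everything allocated in this area
            self.roots = []     # local roots (e.g. a process's stack)
            self.inlinks = []   # objects here referenced from OTHER areas

        def new(self, name):
            o = Obj(self, name)
            self.objects.append(o)
            return o

        def collect(self):
            # Mark from local roots plus incoming inter-area links;
            # stop at area boundaries -- other areas collect themselves.
            marked = set()
            stack = list(self.roots) + list(self.inlinks)
            while stack:
                o = stack.pop()
                if id(o) in marked or o.area is not self:
                    continue
                marked.add(id(o))
                stack.extend(o.refs)
            # Sweep unmarked objects in this area only.
            self.objects = [o for o in self.objects if id(o) in marked]

    def link(src, dst):
        src.refs.append(dst)
        if src.area is not dst.area:
            dst.area.inlinks.append(dst)  # record the inter-area link

    a, b = Area("A"), Area("B")
    x, y, z = a.new("x"), a.new("y"), b.new("z")
    a.roots.append(x)
    link(x, z)    # cross-area reference, recorded in B's inlink list
    a.collect()   # y is unreachable and reclaimed; x survives
    b.collect()   # z survives solely via the recorded inter-area link
    print([o.name for o in a.objects], [o.name for o in b.objects])
    ```

    The paper's point is that maintaining these link lists in hardware keeps the bookkeeping cheap enough that each area, like each file, can be reclaimed independently in a single shared address space.
    
    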