
    Evolving Mach 3.0 to use migrating threads (draft: work in progress, comments solicited)

    Technical report. Like most operating systems, Mach 3.0 views threads as statically associated with a single task. An alternative model is that of migrating threads, in which a single thread abstraction moves between tasks with the logical flow of control, and "server" code is passively executed. We have compatibly replaced Mach's static threads with migrating threads, isolating that aspect of operating system design and implementation. The key element of our design is a decoupling of the thread abstraction into the controllable execution context and the schedulable thread of control, consisting of a chain of contexts. A key element of our implementation is that threads are now "based" in the kernel, and temporarily make excursions into tasks via upcalls. The new system provides cleaner and more powerful semantics for thread manipulation, allows scheduling and accounting attributes to follow threads, simplifies both kernel and server code, and improves RPC performance. We have retained the old thread and IPC interfaces for backwards compatibility, with no changes required to existing client programs and only a minimal change to servers, as demonstrated by a functional Unix single server and clients. Code size along the critical RPC path has been reduced by a factor of three, while its logical complexity has been reduced by an order of magnitude. Initial timings show that the performance of local RPC, doing normal marshaling, has also improved by a factor of three. We conclude that a migrating thread model is superior to a static model, and that it is feasible to improve existing operating systems in this manner.
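    A minimal sketch of the decoupling described above, with hypothetical names (Activation, Thread, rpc_call, rpc_return) rather than the actual Mach 3.0 interfaces: the schedulable thread is a chain of per-task execution contexts, and an RPC pushes a new context onto the chain instead of switching to a separate server thread.

    ```python
    # Sketch only: hypothetical names, not the Mach 3.0 implementation.

    class Activation:
        """Controllable execution context: a (task, stack) pair."""
        def __init__(self, task, stack):
            self.task = task      # address space the context executes in
            self.stack = stack    # execution stack within that task
            self.caller = None    # previous context in the chain

    class Thread:
        """Schedulable thread of control: a chain of activations.

        Scheduling and accounting attributes live here, so they follow
        the logical flow of control across task boundaries.
        """
        def __init__(self, initial_activation, priority):
            self.top = initial_activation
            self.priority = priority  # follows the thread into server code

    def rpc_call(thread, server_task, server_stack):
        """Migrate the thread into the server: push a new activation."""
        act = Activation(server_task, server_stack)
        act.caller = thread.top
        thread.top = act          # same schedulable entity, new context

    def rpc_return(thread):
        """Return to the client: pop the server activation."""
        thread.top = thread.top.caller
    ```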

    Notes on thread models in Mach 3.0

    Journal article. During the Mach In-Kernel Servers work, we explored two alternate thread models that could be used to support traps to in-kernel servers. In the "migrating threads" model we used, the client's thread temporarily moves into the server's task for the duration of the call. In the "thread switching" model, an actual server thread is dispatched to handle client traps. Based on our experience, we find that the migrating threads model is quite complex and difficult to implement in the context of the current design of Mach and the Unix single server. The thread switching model would fit more naturally and would probably be much simpler and more robust than migrating threads, making it a valuable approach to explore in the near future. However, we believe migrating threads to be inherently faster than thread switching, and ultimately the best long-term direction.

    The Amoeba Distributed Operating System - A Status Report

    As the price of CPU chips continues to fall rapidly, it will soon be economically feasible to build computer systems containing a large number of processors. The question then arises of how this computing power should be organized, and what kind of operating system is appropriate. Our research during the past decade has focused on these issues and led to the design of a distributed operating system, called Amoeba, that is intended for systems with large numbers of computers. In this paper we describe Amoeba, its philosophy, its design, its applications, and some experience with it.

    Using annotated interface definitions to optimize RPC

    Journal article. In RPC-based communication, it is useful to distinguish the RPC interface, which is the "network contract" between the client and the server, from the presentation, which is the "programmer's contract" between the RPC stubs and the code that calls or is called by them. Presentation is usually a fixed function of the RPC interface, but some RPC systems, such as DCE and Concert, support the notion of a flexible presentation or endpoint modifier, allowing controlled modification of the behavior of the stubs on each side without affecting the contract between the client and the server. Until now, the primary motivation for flexible presentation has been programmer convenience and improved interoperability. However, we have found flexible presentation also to be useful for optimizing RPC, and in many cases necessary for achieving maximal performance without throwing out the RPC system and resorting to hand-coded stubs. In this paper we provide examples demonstrating this point for a number of different operating systems and IPC transport mechanisms, with RPC performance improvements ranging from 5% to an order of magnitude. In general, we observe that the more efficient the underlying IPC transport mechanism, the more important it is for the RPC system to support flexible presentation, in order to avoid unnecessary user-space overhead.
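    As a hedged illustration of the point above, consider a stub generator in which an annotation controls only the presentation. The names below (make_write_stub, copy_on_send, transport.send) are hypothetical; real systems such as DCE express such modifiers as attributes in the IDL rather than as code.

    ```python
    # Illustrative sketch, not a real RPC system's API.

    def make_write_stub(transport, copy_on_send=True):
        """Build a client stub for one fixed interface: write(handle, data).

        The network contract never changes; copy_on_send only changes the
        programmer's contract, i.e. whether the stub defensively copies the
        buffer or hands the caller's buffer straight to the transport
        (callers then must not mutate it until the call completes).
        """
        def write(handle, data):
            buf = bytes(data) if copy_on_send else data   # presentation
            return transport.send(("write", handle, buf)) # interface
        return write
    ```

    On a fast transport, skipping the defensive copy is the kind of per-presentation optimization the paper argues the RPC system itself, rather than hand-coded stubs, should provide.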

    Separating presentation from interface in RPC and IDLs

    Journal article. In RPC-based communication, we term the interface the set of remote procedures and the types of their arguments; the presentation is the way these procedures and types are mapped to the target language environment in a particular client or server, including semantic requirements. For example, presentation includes the local names assigned to RPC stubs, the physical representation of a logical block of data (e.g., in-line, out-of-line, linked blocks), and trust requirements (e.g., integrity, security). In existing systems, the presentation of a given RPC construct is largely fixed. Separating presentation from interface, both in the interface definition language (IDL) itself and in the RPC implementation, is the key to interoperability, with many benefits in the area of elegance as well. This separation and the resulting cleanliness make it manageable to generate specialized kernel code paths for each type of client-server pair, a key element of end-to-end optimization. The separation should also allow the integration of disparate RPC optimization techniques, such as those applied in LRPC [2] and fbufs [6], into a single system, in a uniform and fully interoperable way. In initial work we demonstrate a variant of threaded code generation and two presentation-based optimizations, transparently activated by the RPC system. Each of these optimizations speeds up local RPC by approximately 25%.
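    The threaded-code-generation idea mentioned above can be sketched as follows; the names and wire encoding are illustrative only, not the paper's implementation. The interface (the field types on the wire) is fixed once, and a specialized marshaling path is composed at bind time for each client-server pairing instead of re-interpreting the signature on every call.

    ```python
    # Toy sketch of threaded-code stub generation; illustrative names only.
    import struct

    ENCODERS = {
        "i32": lambda v: struct.pack("<i", v),
        "str": lambda v: struct.pack("<I", len(v)) + v.encode(),
    }

    def compile_marshaler(signature):
        """Resolve per-field encoders once, at bind time, so each call runs
        a short chain of direct calls instead of a generic interpreter."""
        ops = [ENCODERS[t] for t in signature]
        def marshal(*args):
            return b"".join(op(a) for op, a in zip(ops, args))
        return marshal

    # One interface, specialized once per client-server pair:
    marshal_write = compile_marshaler(["i32", "str"])
    packet = marshal_write(42, "hello")
    ```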

    Program specialization using the OMOS system

    Technical report. Abstraction and modularity provide many software engineering benefits. Hiding the details of module internals can, however, prevent system implementors from providing anything but a highly general implementation of a given module. We describe OMOS, a programmable linker/loader and system server that manages module implementations. OMOS allows system builders to describe system architectures in high-level terms, via a module construction scripting language. Using scripts, system implementors can provide modules that test and react to both their static and run-time environments. These modules, which we refer to as electric libraries, can produce implementations that are optimized at link or run time, without sacrificing modularity, expanding interfaces, or requiring changes in client programs. We identify and implement three types of specializations that OMOS can perform, and quantify the impact of two of them on a few standard Unix utilities: performance improvements ranged from 6% to 47%.
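    The "electric library" idea lends itself to a small sketch. OMOS scripts are written in its own module-construction language, so the Python below is only an analogy, with hypothetical names throughout: a module probes its environment when loaded and binds a specialized implementation behind an unchanged interface.

    ```python
    # Analogy only: OMOS uses its own scripting language, not Python.
    import os

    def _copy_portable(dst, src):
        dst[:len(src)] = src          # always-correct baseline

    def _copy_wide(dst, src):
        dst[:len(src)] = src          # stand-in for a machine-tuned variant

    def load_copy():
        """Choose an implementation at load time; clients see one fixed
        interface, so no client program needs to change."""
        if os.uname().machine in ("x86_64", "aarch64"):  # hypothetical probe
            return _copy_wide
        return _copy_portable

    copy_block = load_copy()
    ```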

    Operating-system support for distributed multimedia

    Multimedia applications place new demands upon processors, networks, and operating systems. While some network designers, through ATM for example, have considered revolutionary approaches to supporting multimedia, the same cannot be said for operating system designers. Most work is evolutionary in nature, attempting to identify additional features that can be added to existing systems to support multimedia. Here we describe the Pegasus project's attempt to build an integrated hardware and operating system environment from the ground up, specifically targeted towards multimedia.

    Software management techniques for translation lookaside buffers

    Thesis (M.S.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995. By Kavita Bala. Includes bibliographical references (leaves 67-70).

    Machine learning approaches for epitope prediction

    The identification and characterization of epitopes in antigenic sequences is critical for understanding disease pathogenesis, for identifying potential autoantigens, and for designing vaccines and immune-based cancer therapies. As the number of fully or partially sequenced pathogen genomes rapidly increases, experimental methods for epitope mapping become prohibitive in terms of time and expense. Therefore, computational methods for reliably identifying potential vaccine candidates (i.e., epitopes that invoke a strong response from both T-cells and B-cells) are highly desirable. Machine learning offers one of the most cost-effective and widely used approaches to developing epitope prediction tools. In the last few years, several advances in machine learning research have emerged; we utilize these advances to provide epitope prediction tools with improved predictive performance. First, we introduce two methods, BCPred and FBCPred, for predicting linear B-cell epitopes and flexible-length linear B-cell epitopes, respectively, using string kernel based support vector machine (SVM) classifiers. Second, we introduce three scoring matrix methods and show that they are highly competitive with a broad class of machine learning methods, including SVM, in predicting major histocompatibility complex class I (MHC-I) binding peptides. Finally, we formulate the problems of qualitatively and quantitatively predicting flexible-length major histocompatibility complex class II (MHC-II) peptides as multiple instance learning and multiple instance regression problems, respectively. Based on this formulation, we introduce MHCMIR, a novel method for predicting MHC-II binding affinity using multiple instance regression. The development of reliable epitope prediction tools is not feasible in the absence of high-quality data sets. Unfortunately, most existing epitope benchmark data sets consist of epitope sequences that share a high degree of similarity with other peptide sequences in the same data set. We demonstrate the pitfalls of these commonly used data sets for evaluating the performance of machine learning approaches to epitope prediction. Finally, we propose a similarity reduction procedure that is more stringent than currently used similarity reduction methods.
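    A minimal sketch of the string-kernel SVM idea mentioned above, using a generic k-mer (spectrum) kernel with scikit-learn; this is an illustration, not the BCPred/FBCPred implementation, and the peptides and labels below are made up.

    ```python
    from collections import Counter
    import numpy as np
    from sklearn.svm import SVC

    def spectrum_features(seq, k=3):
        """Count all overlapping k-mers in an amino-acid sequence."""
        return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

    def spectrum_kernel(seqs_a, seqs_b, k=3):
        """Gram matrix of inner products between k-mer count vectors."""
        fa = [spectrum_features(s, k) for s in seqs_a]
        fb = [spectrum_features(s, k) for s in seqs_b]
        return np.array([[float(sum(ca[m] * cb[m] for m in ca)) for cb in fb]
                         for ca in fa])

    # Toy usage with made-up peptides and binary epitope labels:
    train = ["ACDEFGHIK", "LMNPQRSTV", "ACDEFGHIV", "LMNPQRSTK"]
    y = [1, 0, 1, 0]
    clf = SVC(kernel="precomputed").fit(spectrum_kernel(train, train), y)
    pred = clf.predict(spectrum_kernel(["ACDEFGHIN"], train))
    ```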