
    Deceit: A flexible distributed file system

    Deceit, a distributed file system (DFS) being developed at Cornell, focuses on flexible file semantics in relation to efficiency, scalability, and reliability. Deceit servers are interchangeable and collectively provide the illusion of a single, large server machine to any clients of the Deceit service. Non-volatile replicas of each file are stored on a subset of the file servers. The user is able to set parameters on a file to achieve different levels of availability, performance, and one-copy serializability. Deceit also supports a file version control mechanism. In contrast with many recent DFS efforts, Deceit can behave like a plain Sun Network File System (NFS) server and can be used by any NFS client without modifying any client software. The current Deceit prototype uses the ISIS Distributed Programming Environment for all communication and process group management, an approach that reduces system complexity and increases system robustness.
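
    The per-file parameters described above (availability, performance, one-copy serializability, versioning) can be pictured as a small settings record attached to each file. The sketch below is an illustrative assumption in Python, not Deceit's actual interface; the FileSettings fields are invented names.

        # Hypothetical sketch of Deceit-style per-file parameters (not the real API):
        # each file carries replication, consistency, and versioning settings that
        # the interchangeable servers would use when placing non-volatile replicas.
        from dataclasses import dataclass

        @dataclass
        class FileSettings:
            replicas: int        # servers holding a non-volatile copy of the file
            write_quorum: int    # replicas that must acknowledge an update
            serializable: bool   # enforce one-copy serializability if True
            keep_versions: int   # old versions retained by the version control mechanism

        # A critical file: high availability and strict consistency.
        payroll = FileSettings(replicas=5, write_quorum=3, serializable=True, keep_versions=10)

        # A scratch file: single copy, relaxed semantics, no version history.
        scratch = FileSettings(replicas=1, write_quorum=1, serializable=False, keep_versions=0)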

    Operating-system support for distributed multimedia

    Multimedia applications place new demands upon processors, networks and operating systems. While some network designers, through ATM for example, have considered revolutionary approaches to supporting multimedia, the same cannot be said for operating systems designers. Most work is evolutionary in nature, attempting to identify additional features that can be added to existing systems to support multimedia. Here we describe the Pegasus project's attempt to build an integrated hardware and operating system environment from the ground up, specifically targeted towards multimedia.

    Distributed Virtual System (DIVIRS) Project

    As outlined in our continuation proposal 92-ISI-50R (revised) on contract NCC 2-539, we are (1) developing software, including a system manager and a job manager, that will manage available resources and that will enable programmers to program parallel applications in terms of a virtual configuration of processors, hiding the mapping to physical nodes; (2) developing communications routines that support the abstractions implemented in item one; (3) continuing the development of file and information systems based on the virtual system model; and (4) incorporating appropriate security measures to allow the mechanisms developed in items 1 through 3 to be used on an open network. The goal throughout our work is to provide a uniform model that can be applied to both parallel and distributed systems. We believe that multiprocessor systems should exist in the context of distributed systems, allowing them to be more easily shared by those that need them. Our work provides the mechanisms through which nodes on multiprocessors are allocated to jobs running within the distributed system and the mechanisms through which files needed by those jobs can be located and accessed.
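
    To make item (1) concrete, the sketch below shows one way a system manager could hide the virtual-to-physical mapping from a job. The class and method names are hypothetical illustrations of the virtual system model, not the project's actual software.

        # Illustrative sketch (not the DIVIRS implementation): a job addresses a fixed
        # set of virtual processors; the system manager maps them onto physical nodes.
        class SystemManager:
            def __init__(self, physical_nodes):
                self.physical_nodes = list(physical_nodes)

            def allocate(self, n_virtual):
                # Round-robin placement; the job never sees the physical topology.
                return {v: self.physical_nodes[v % len(self.physical_nodes)]
                        for v in range(n_virtual)}

        class JobManager:
            def __init__(self, mapping):
                self.mapping = mapping

            def send(self, virtual_dest, message):
                node = self.mapping[virtual_dest]   # resolved at run time
                print(f"deliver {message!r} to virtual processor {virtual_dest} on {node}")

        system = SystemManager(["node-a", "node-b", "node-c"])
        job = JobManager(system.allocate(n_virtual=8))
        job.send(5, "partial result")               # the program never names node-c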

    The Performability Manager

    The authors describe the performability manager, a distributed system component that contributes to a more effective and efficient use of system components and prevents quality of service (QoS) degradation. The performability manager dynamically reconfigures distributed systems whenever needed, to recover from failures and to permit the system to evolve over time and include new functionality. Large systems require dynamic reconfiguration to support dynamic change without shutting down the complete system. A distributed system monitor is needed to verify QoS. Monitoring a distributed system is difficult because of synchronization problems and minor differences in clock speeds. The authors describe the functionality and the operation of the performability manager, both informally and formally. Throughout the paper they illustrate the approach with an example distributed application: an ANSAware-based number translation service (NTS) from the intelligent networks (IN) area.
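
    The monitor-then-reconfigure behaviour described above can be sketched as a simple control loop. The QoS threshold, component names, and functions below are assumptions for illustration, not the performability manager's actual design.

        # Hypothetical control-loop sketch: observe QoS per component and trigger a
        # reconfiguration when the measured value violates the agreed bound.
        import random
        import time

        QOS_LATENCY_BOUND_MS = 200      # assumed bound for the number translation service

        def measure_latency_ms(component):
            # Stand-in for the distributed system monitor; a real monitor must cope
            # with synchronization problems and differences in clock speeds.
            return random.uniform(50, 400)

        def reconfigure(component):
            print(f"QoS degraded on {component}: starting a replacement instance")

        def control_loop(components, rounds=3):
            for _ in range(rounds):
                for c in components:
                    if measure_latency_ms(c) > QOS_LATENCY_BOUND_MS:
                        reconfigure(c)
                time.sleep(0.1)

        control_loop(["number-translation-service", "signalling-gateway"])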

    H2O: An Autonomic, Resource-Aware Distributed Database System

    This paper presents the design of an autonomic, resource-aware distributed database which enables data to be backed up and shared without complex manual administration. The database, H2O, is designed to make use of unused resources on workstation machines. Creating and maintaining highly-available, replicated database systems can be difficult for untrained users, and costly for IT departments. H2O reduces the need for manual administration by autonomically replicating data and load-balancing across machines in an enterprise. Provisioning hardware to run a database system can be unnecessarily costly, as most organizations already possess large quantities of idle resources in workstation machines. H2O is designed to utilize this unused capacity by using resource availability information to place data and plan queries over workstation machines that are already being used for other tasks. This paper discusses the requirements for such a system and presents the design and implementation of H2O. Comment: Presented at the SICSA PhD Conference 2010 (http://www.sicsaconf.org/).
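
    The resource-aware placement H2O aims at could look roughly like the following sketch. The scoring weights and workstation attributes are invented for illustration and are not H2O's actual placement algorithm.

        # Illustrative sketch (not H2O itself): rank workstations by idle capacity and
        # place replicas on the machines with the most unused resources.
        def placement_score(machine):
            # Arbitrary weights over spare CPU, memory, and disk.
            return (machine["idle_cpu"] * 0.5
                    + machine["free_mem_gb"] * 0.3
                    + machine["free_disk_gb"] * 0.2)

        def choose_replica_sites(machines, n_replicas):
            ranked = sorted(machines, key=placement_score, reverse=True)
            return [m["name"] for m in ranked[:n_replicas]]

        workstations = [
            {"name": "ws-01", "idle_cpu": 0.9, "free_mem_gb": 4, "free_disk_gb": 120},
            {"name": "ws-02", "idle_cpu": 0.2, "free_mem_gb": 8, "free_disk_gb": 300},
            {"name": "ws-03", "idle_cpu": 0.7, "free_mem_gb": 2, "free_disk_gb": 80},
        ]
        print(choose_replica_sites(workstations, n_replicas=2))   # -> ['ws-02', 'ws-01']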

    Policy Conflict Analysis in Distributed System Management


    Unifying Distributed Processing and Open Hypertext through a Heterogeneous Communication Model

    A successful distributed open hypermedia system can be characterised by a scalable architecture which is inherently distributed. While the architects of distributed hypermedia systems have addressed the issues of providing and retrieving distributed resources, they have often neglected to design systems with the inherent capability to exploit the distributed processing of this information. The research presented in this paper describes the construction and use of an open hypermedia system concerned equally with both of these facets.