10 research outputs found

    File Access Performance of Diskless Workstations

    Get PDF
    This paper studies the performance of single-user workstations that access files remotely over a local area network. From the environmental, economic, and administrative points of view, workstations that are diskless or that have limited secondary storage are desirable at the present time. Even with changing technology, access to shared data will continue to be important. It is likely that some performance penalty must be paid for remote rather than local file access. Our objectives are to assess this penalty and to explore a number of design alternatives that can serve to minimize it. Our approach is to use the results of measurement experiments to parameterize queuing network performance models. These models are then used to assess performance under load and to evaluate design alternatives. The major conclusions of our study are: (1) A system of diskless workstations with a shared file server can have satisfactory performance. By this, we mean performance comparable to that of a local disk in the lightly loaded case, and the ability to support substantial numbers of client workstations without significant degradation. As with any shared facility, good design is necessary to minimize queuing delays under high load. (2) The key to efficiency is protocols that allow volume transfers at every interface (e.g., between client and server, and between disk and memory at the server) and at every level (e.g., between client and server at the level of logical request/response and at the level of local area network packet size). However, the benefits of volume transfers are limited to moderate sizes (8-16 kbytes) by several factors. (3) From a performance point of view, augmenting the capabilities of the shared file server may be more cost effective than augmenting the capabilities of the client workstations. (4) Network contention should not be a performance problem for a 10-Mbit network and 100 active workstations in a software development environment.
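
    The abstract's queuing-network approach can be illustrated with a small worked example. The sketch below is not the paper's model; it is a minimal exact mean-value analysis (MVA) of a closed system with one queueing center (the shared file server) and N client workstations, and the service-demand and think-time values are assumptions chosen only for illustration.

        def mva_response_time(service_demand, think_time, n_clients):
            """Exact MVA for one queueing server shared by n_clients
            workstations that 'think' between file requests."""
            q = 0.0                               # mean server queue length with 0 clients
            r = service_demand
            for n in range(1, n_clients + 1):
                r = service_demand * (1.0 + q)    # an arrival sees q requests ahead of it
                x = n / (r + think_time)          # throughput, by the response time law
                q = x * r                         # Little's law applied at the server
            return r

        # Illustrative numbers: 20 ms of server demand per request, 1 s think time.
        for n in (1, 10, 50, 100):
            print(n, round(mva_response_time(0.02, 1.0, n), 4))

    Under these assumed parameters the server saturates near (S + Z) / S = 51 clients, beyond which response time grows roughly linearly; this is the qualitative behavior that conclusion (1) describes for a well-designed shared server.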

    On the Performance of Copying Large Files Across a Contention-Based Network

    Full text link
    Analytical and simulation models of interconnected local area networks, because of the large scale involved, are often constrained to represent only the most ideal conditions for tractability's sake. Consequently, many of the important causes of network delay are not accounted for. In this study, experimental evidence is presented to show how delay time in local area networks is significantly affected by hardware limitations in the connected workstations, software overhead, and network contention. Our method is a controlled experiment with two VAX workstations on an Ethernet. We investigate the network delays for large file transfers, taking into account the VAX workstation disk transfer limitations; generalized file transfer software such as NFS, FTP, and rcp; and the effect of contention on this simple network by the introduction of substantial workload from competing workstations. A comparison is made between the experimental data and a network modeling tool, and the limitations of the tool are explained. Insights from these experiments have increased our understanding of how more complex networks are likely to perform under heavy workloads.
    http://deepblue.lib.umich.edu/bitstream/2027.42/107873/1/citi-tr-89-3.pd
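
    As a rough illustration of the bottleneck effects these experiments measure, the sketch below estimates transfer time from the slower of the disk and the station's contended network share, plus a per-packet software cost. The function, its parameters, and the example values are all assumptions for illustration, not quantities from the paper.

        import math

        def transfer_time(file_bytes, disk_rate, net_rate,
                          pkt_size, per_pkt_overhead, competing_stations):
            """Crude bottleneck model: the slower of the disk and the
            station's (idealized, fair) share of the network sets the pace,
            and protocol software adds a fixed cost per packet."""
            net_share = net_rate / (1 + competing_stations)   # bytes/second
            bottleneck = min(disk_rate, net_share)
            n_pkts = math.ceil(file_bytes / pkt_size)
            return file_bytes / bottleneck + n_pkts * per_pkt_overhead

        # e.g. a 10 MB file over 10 Mbit/s Ethernet (1.25e6 B/s), an assumed
        # 0.5 MB/s effective disk rate, 1500-byte packets, 1 ms of software
        # overhead per packet, and three competing stations:
        print(transfer_time(10e6, 5e5, 1.25e6, 1500, 1e-3, 3))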

    Deceit: A flexible distributed file system

    Get PDF
    Deceit, a distributed file system (DFS) being developed at Cornell, focuses on flexible file semantics in relation to efficiency, scalability, and reliability. Deceit servers are interchangeable and collectively provide the illusion of a single, large server machine to any clients of the Deceit service. Non-volatile replicas of each file are stored on a subset of the file servers. The user is able to set parameters on a file to achieve different levels of availability, performance, and one-copy serializability. Deceit also supports a file version control mechanism. In contrast with many recent DFS efforts, Deceit can behave like a plain Sun Network File System (NFS) server and can be used by any NFS client without modifying any client software. The current Deceit prototype uses the ISIS Distributed Programming Environment for all communication and process group management, an approach that reduces system complexity and increases system robustness.
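
    The per-file tradeoff the abstract describes, between availability, performance, and one-copy serializability, can be framed with classic read/write quorums. Deceit's actual mechanism is not specified here; the sketch below only shows the Gifford-style quorum conditions as one way such parameters constrain replica access, with all names assumed.

        def quorums_one_copy_serializable(n_replicas, read_quorum, write_quorum):
            """One-copy serializability needs every read quorum to intersect
            every write quorum (r + w > n) and any two write quorums to
            intersect (2w > n), so conflicting writes cannot both proceed."""
            return (read_quorum + write_quorum > n_replicas
                    and 2 * write_quorum > n_replicas)

        # With 5 replicas: r=3, w=3 balances reads and writes, while
        # r=1, w=5 makes reads fast but requires every replica up to write.
        assert quorums_one_copy_serializable(5, 3, 3)
        assert quorums_one_copy_serializable(5, 1, 5)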

    Dynamic Load-Sharing Using Predicted Process Resource Requirements

    Get PDF
    Coordinated Science Laboratory was formerly known as Control Systems Laboratory.
    National Aeronautics and Space Administration / NASA NAG-1-61

    Protocols for Large Data Transfers over Local Networks

    Get PDF
    Protocols for transmitting large amounts of data over a local area network are analyzed. These protocols are different from most other forms of large-scale transfer protocols in three ways: the definition of the protocol requires the recipient to have sufficient buffers available to receive the data before the transfer takes place; it is assumed that the source and the destination machine are more or less matched in speed; and the protocol is implemented at the network interrupt level and therefore not slowed down by process scheduling delays. The results are based on measurements collected on SUN workstations connected to a 10-Mb Ethernet network using 3Com interfaces. The derivation of the elapsed time in terms of the network packet error rate is based on the assumption of statistically independent errors.
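
    The final claim can be made concrete with a short derivation. Assuming each packet fails independently with probability p and is retransmitted until it succeeds, the number of transmissions per packet is geometric, so for N packets of transmission time t_p each the expected elapsed time is, as a first-order sketch that ignores timeout and scheduling costs:

        \[
          \mathbb{E}[\text{transmissions per packet}]
            = \sum_{k \ge 1} k\,p^{\,k-1}(1-p) = \frac{1}{1-p},
          \qquad
          \mathbb{E}[T] \;\approx\; \frac{N\,t_p}{1-p}.
        \]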

    Resource sharing across heterogeneous networks

    Get PDF
    Sharing resources on a computer network, especially in heterogeneous environments, has many benefits: new applications become possible, and technology can be used more cheaply. This dissertation investigates how resources, in particular printing resources, may be shared. While still incomplete, an existing theoretical framework for data communication and resource sharing, the ISO-OSI Reference Model, provides useful background information and tools for analysis. A discussion of this framework complements a survey of the principles and current state of file and printer servers, and distributed systems. An analysis of the design and implementation of a printer server acting as a bridge between two networks illustrates problems and results found in distributed systems generally. The dissertation concludes by analyzing the strengths and shortcomings of the Reference Model and of distributed systems. This analysis, together with developments in technology, leads to a proposal of an extended model for printer services and a clarification of printer servers' needs and requirements.

    Redundant disk arrays: Reliable, parallel secondary storage

    Get PDF
    During the past decade, advances in processor and memory technology have given rise to increases in computational performance that far outstrip increases in the performance of secondary storage technology. Coupled with emerging small-disk technology, disk arrays can provide the cost, volume, and capacity of current disk subsystems and, by leveraging parallelism, many times their performance. Unfortunately, arrays of small disks may have much higher failure rates than the single large disks they replace. Redundant arrays of inexpensive disks (RAID) use simple redundancy schemes to provide high data reliability. The data encoding, performance, and reliability of redundant disk arrays are investigated. Organizing redundant data into a disk array is treated as a coding problem. Among the alternatives examined, codes as simple as parity are shown to effectively correct single, self-identifying disk failures.
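
    The closing claim, that parity alone corrects single self-identifying disk failures, follows from the XOR relation among blocks. The short sketch below demonstrates why; the data values are arbitrary.

        def parity_block(blocks):
            """Byte-wise XOR across equal-length blocks."""
            out = bytearray(len(blocks[0]))
            for block in blocks:
                for i, byte in enumerate(block):
                    out[i] ^= byte
            return bytes(out)

        # 'Self-identifying' means we know which disk failed, so the lost
        # block is simply the XOR of the parity block with all survivors.
        data = [b'\x12\x34', b'\xab\xcd', b'\x0f\xf0']
        parity = parity_block(data)                       # written to the parity disk
        recovered = parity_block([data[0], data[2], parity])
        assert recovered == data[1]                       # disk 1's contents rebuilt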

    Performance studies of file system design choices for two concurrent processing paradigms

    Get PDF

    From online social network analysis to a user-centric private sharing system

    Get PDF
    Online social networks (OSNs) have become a massive repository of data constructed from individuals' inputs: posts, photos, feedback, locations, etc. By analyzing such data, meaningful knowledge is generated that can affect individuals' beliefs, desires, happiness and choices: a circulation of data that begins with individuals and ends with individuals. The OSN owners, as the sole authority with full control over the stored data, make the data available for research, advertisement and other purposes. The individuals, however, are left out of this circle even though they generate the data and shape the OSN structure. In this thesis, we start by introducing approximation algorithms for finding the most influential individuals in a social graph and for modeling the spread of information. To do so, we consider the communities of individuals that form in the social graph. The social graph is extracted from data that is stored and controlled centrally, which can cause privacy breaches and raise individuals' concerns. We therefore introduce UPSS, the user-centric private sharing system, which treats individuals as the real data owners and provides secure and private data sharing on untrusted servers. The UPSS public API allows application developers to implement applications as diverse as OSNs, document redaction systems with integrity properties, censorship-resistant systems, health care auditing systems, distributed version control systems with flexible access controls, and a filesystem in userspace. Accessing users' data is possible only with explicit user consent. We implemented the two latter cases to show the applicability of UPSS. UPSS's support for different storage models gives us local, remote and global filesystems in userspace from one core filesystem implementation, mounted over different block stores. By designing and implementing UPSS, we show that security and privacy can be addressed at the same time in systems that need selective, secure and collaborative information sharing, without requiring complete trust.
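
    The "one core filesystem, many block stores" design the abstract describes can be sketched as a narrow storage interface. UPSS's real API is not given here; the interface and class names below are hypothetical, showing only how content-addressed blocks let local, remote, or global backends be swapped beneath a single filesystem core.

        import hashlib
        from abc import ABC, abstractmethod

        class BlockStore(ABC):
            """Hypothetical backend interface: the filesystem core reads and
            writes immutable blocks named by their content hash, so any
            backend satisfying this interface can be mounted beneath it."""
            @abstractmethod
            def put(self, data: bytes) -> str: ...
            @abstractmethod
            def get(self, name: str) -> bytes: ...

        class InMemoryStore(BlockStore):
            """Stand-in for a local store; a remote or global store would
            implement the same two methods over the network."""
            def __init__(self):
                self._blocks = {}
            def put(self, data: bytes) -> str:
                name = hashlib.sha256(data).hexdigest()   # content address
                self._blocks[name] = data
                return name
            def get(self, name: str) -> bytes:
                return self._blocks[name]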
