4 research outputs found

    NFSv4 and High Performance File Systems: Positioning to Scale

    Full text: http://deepblue.lib.umich.edu/bitstream/2027.42/107881/1/citi-tr-04-2.pd
    The avant-garde of high performance computing is building petabyte storage systems. At CITI, we are investigating the use of NFSv4 as a standard for fast and secure access to this data, both across a WAN and within a (potentially massive) cluster. An NFSv4 server manages a great deal of state, which hampers exporting objects via multiple servers and lets the server become a bottleneck as load increases. This paper introduces Parallel NFSv4, extending the NFSv4 protocol with a new server-to-server protocol and a new file description and location mechanism for increased scalability.
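
    The abstract leaves the "file description and location mechanism" unspecified; a minimal sketch of one common approach follows: round-robin striping, where a byte offset maps to the data server holding that stripe unit. The server names, stripe size, and locate helper are illustrative assumptions, not the paper's protocol.

    # Illustrative sketch: a striped file-location map of the kind a
    # parallel NFS design might hand to clients. All names and parameters
    # here are assumptions, not the mechanism defined in the paper.

    from dataclasses import dataclass

    @dataclass
    class StripedLayout:
        servers: list        # data servers holding the file's stripe units
        stripe_size: int     # bytes per stripe unit

        def locate(self, offset):
            """Map a byte offset to (server, offset in that server's store)."""
            stripe_index = offset // self.stripe_size
            server = self.servers[stripe_index % len(self.servers)]
            # Local offset: full rounds across all servers, plus the
            # position inside the current stripe unit.
            rounds = stripe_index // len(self.servers)
            local = rounds * self.stripe_size + offset % self.stripe_size
            return server, local

    layout = StripedLayout(servers=["ds1", "ds2", "ds3"], stripe_size=64 * 1024)
    print(layout.locate(200 * 1024))   # stripe 3 wraps back to ('ds1', 73728)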

    Not Quite NFS, Soft Cache Consistency for NFS

    No full text
    There are some constraints inherent in the NFS™ protocol that result in performance limitations for high performance workstation environments. This paper discusses an NFS-like protocol named Not Quite NFS (NQNFS), designed to address some of these limitations. This protocol provides full cache consistency during normal operation, while permitting more effective client-side caching in an effort to improve performance. There are also a variety of minor protocol changes to resolve various NFS issues. The emphasis is on observed performance of a preliminary implementation of the protocol, in order to show how well this design works and to suggest possible areas for further improvement.
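
    The "full cache consistency during normal operation" that NQNFS provides is typically achieved with short-term leases: a client may serve reads from its cache only while it holds an unexpired lease from the server, and must revalidate afterward. A minimal sketch of that idea follows; the class, method names, and the 30-second duration are illustrative assumptions, not NQNFS's actual wire protocol.

    # Illustrative lease-based client cache; names and durations are
    # assumptions, not NQNFS's protocol.
    import time

    LEASE_SECONDS = 30  # assumed lease duration

    class LeasedCache:
        def __init__(self):
            self._data = {}     # path -> cached file contents
            self._expiry = {}   # path -> lease expiration (monotonic time)

        def grant(self, path, contents):
            """Record data handed out by the server along with a read lease."""
            self._data[path] = contents
            self._expiry[path] = time.monotonic() + LEASE_SECONDS

        def read(self, path):
            """Serve from cache only while the lease is valid."""
            if path in self._data and time.monotonic() < self._expiry[path]:
                return self._data[path]   # consistent: lease still held
            return None                   # lease expired; refetch from server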

    Cooperative caching and prefetching in parallel/distributed file systems

    If we examine the structure of the applications that run on parallel machines, we observe that their I/O needs increase tremendously every day. These applications work with very large data sets which, in most cases, do not fit in memory and have to be kept on disk. The input and output data files are also very large and have to be accessed very fast. These large applications also want to be able to checkpoint themselves without wasting too much time. These facts constantly raise the expectations placed on parallel and distributed file systems, which have to improve their performance to avoid becoming the bottleneck in parallel/distributed environments. On the other hand, while the performance of new processors, interconnection networks and memory increases very rapidly, no such improvement happens with disk performance. This lack of improvement is due to the mechanical parts used to build disks. These components are slow and limit both the latency and t..
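
    Cooperative caching, as named in the title, lets a node fetch a block from a peer's memory before falling back to the much slower disk, since a network hop to another cache is cheaper than a mechanical seek. A rough sketch of that lookup order is below; the function and its parameters are illustrative, not the thesis's implementation.

    # Rough sketch of the cooperative-caching lookup order; everything
    # here is illustrative, not the design evaluated in the thesis.

    def read_block(block_id, local_cache, peer_caches, disk_read):
        # 1. Local memory first: the fastest path.
        if block_id in local_cache:
            return local_cache[block_id]
        # 2. Peer caches next: a network hop, but far cheaper than disk.
        for peer in peer_caches:
            if block_id in peer:
                data = peer[block_id]
                local_cache[block_id] = data   # keep a local copy
                return data
        # 3. Disk last: the component whose speed has not kept pace.
        data = disk_read(block_id)
        local_cache[block_id] = data
        return data

    # Example: block 7 is found in the second peer's cache, not on disk.
    local, peers = {}, [{3: b"a"}, {7: b"b"}]
    print(read_block(7, local, peers, lambda b: b"from-disk"))   # b'b'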