    A 500 megabyte/second disk array

    Applications at the Army High Performance Computing Research Center's (AHPCRC) Graphics and Visualization Laboratory (GVL) at the University of Minnesota require a tremendous amount of I/O bandwidth, and this appetite for data is growing. Silicon Graphics workstations are used to perform the post-processing, visualization, and animation of multi-terabyte datasets produced by scientific simulations run on AHPCRC supercomputers. The M.A.X. (Maximum Achievable Xfer) was designed to find the maximum achievable I/O performance of the Silicon Graphics CHALLENGE/Onyx-class machines that run these applications. Running a fully configured Onyx machine with twelve 150 MHz R4400 processors, 512 MB of 8-way interleaved memory, and 31 fast/wide SCSI-2 channels, each with a Ciprico disk array controller, we were able to achieve a maximum sustained transfer rate of 509.8 megabytes per second. However, after analyzing the results it became clear that the true maximum transfer rate lies somewhat beyond this figure, and further testing with more disk array controllers will be needed to find it.
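
    As a point of reference only, the sketch below shows a minimal, single-stream version of this kind of sustained-throughput measurement. It is not the M.A.X. benchmark itself: the real tests drove 31 channels in parallel, and here the request size and total read volume are illustrative placeholders, with the device or file path supplied by the user.

        /*
         * Minimal single-stream throughput sketch (not the M.A.X. benchmark):
         * time large sequential reads from one device or file and report
         * sustained MB/s. Request size and total volume are placeholders.
         */
        #include <fcntl.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/time.h>
        #include <unistd.h>

        #define REQ_SIZE (4 * 1024 * 1024)   /* 4 MB per read request */
        #define TOTAL_MB 1024L               /* stop after 1 GB */

        int main(int argc, char **argv)
        {
            if (argc < 2) {
                fprintf(stderr, "usage: %s <device-or-file>\n", argv[0]);
                return 1;
            }

            char *buf = malloc(REQ_SIZE);
            int fd = open(argv[1], O_RDONLY);
            struct timeval t0, t1;
            long bytes = 0;

            if (fd < 0 || buf == NULL) {
                perror("setup");
                return 1;
            }

            gettimeofday(&t0, NULL);
            while (bytes < TOTAL_MB * 1024 * 1024) {
                ssize_t n = read(fd, buf, REQ_SIZE);
                if (n <= 0)                  /* EOF or error ends the run */
                    break;
                bytes += n;
            }
            gettimeofday(&t1, NULL);

            double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
            if (bytes > 0 && secs > 0.0)
                printf("%.1f MB/s sustained (%ld bytes in %.2f s)\n",
                       bytes / (1024.0 * 1024.0) / secs, bytes, secs);
            free(buf);
            close(fd);
            return 0;
        }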

    File System Benchmarks, Then, Now, and Tomorrow

    With the growing popularity of storage area networks (SANs) and clustered, shared file systems, the file system is becoming a distinct and critical part of a system environment. Because the file system mediates access to data on a mass storage subsystem, it has certain behavioral and functional characteristics that affect I/O performance from an application and/or system point of view. Measuring file system performance is significantly more complicated than measuring that of the underlying disk subsystem because of the many types of higher-level operations that can be performed (allocations, deletions, directory searches, etc.). The task of measuring and characterizing the performance of a file system is further complicated by SANs and emerging clustering technologies that add a distributed aspect to the file systems themselves. Similarly, as the cluster/SAN grows in size, so does the task of performance measurement. The objective of this study is to identify some of the more significant issues involved with file system benchmarking in a highly scalable clustered environment.
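
    As an illustration of the higher-level operations mentioned above, the sketch below is a minimal metadata micro-benchmark that times file creations and deletions and reports operations per second. It is not drawn from the study; the file count and naming scheme are arbitrary choices.

        /*
         * Minimal metadata micro-benchmark sketch: time a burst of file
         * creations and deletions in the current directory and report
         * operations per second for each phase.
         */
        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/time.h>
        #include <unistd.h>

        #define NFILES 1000

        static double now(void)
        {
            struct timeval tv;
            gettimeofday(&tv, NULL);
            return tv.tv_sec + tv.tv_usec / 1e6;
        }

        int main(void)
        {
            char name[64];
            double t0, t1;
            int i;

            t0 = now();
            for (i = 0; i < NFILES; i++) {            /* allocation phase */
                snprintf(name, sizeof(name), "bench_%d.tmp", i);
                int fd = open(name, O_CREAT | O_WRONLY, 0644);
                if (fd >= 0)
                    close(fd);
            }
            t1 = now();
            printf("create: %.0f ops/s\n", NFILES / (t1 - t0));

            t0 = now();
            for (i = 0; i < NFILES; i++) {            /* deletion phase */
                snprintf(name, sizeof(name), "bench_%d.tmp", i);
                unlink(name);
            }
            t1 = now();
            printf("delete: %.0f ops/s\n", NFILES / (t1 - t0));
            return 0;
        }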

    The Global File System

    The Global File System (GFS) is a prototype design for a distributed file system in which cluster nodes physically share storage devices connected via a network such as Fibre Channel. Networks and network-attached storage devices have advanced to such a level of performance and extensibility that the disadvantages once attributed to "shared disk" architectures are no longer valid. This shared storage architecture attempts to exploit the sophistication of device technologies, whereas the client-server architecture diminishes a device's role to that of a simple component. GFS distributes the file system responsibilities across the processing nodes, storage across the devices, and file system resources across the entire storage pool. GFS caches data on the storage devices instead of in the main memories of the machines. Consistency is established by using a locking mechanism maintained by the storage device controllers to facilitate atomic read-modify-write operations. The locking mechanism is being prototyped on Seagate disk drives and Ciprico disk arrays. GFS is implemented in the Silicon Graphics IRIX operating system and is accessed using standard Unix commands and utilities.
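
    The consistency scheme above boils down to a lock / read / modify / write / unlock ordering against shared storage. The sketch below illustrates only that ordering and is not GFS code; the dlock_acquire/dlock_release helper names are invented for this example, and the controller-maintained locks and on-disk block are simulated with in-memory stand-ins so it is self-contained.

        /*
         * Illustrative sketch only, not GFS source code: the lock / read /
         * modify / write / unlock ordering described above. The locks that
         * GFS keeps on the storage device controllers and the on-disk
         * metadata block are simulated with in-memory stand-ins.
         */
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        #define NLOCKS  8
        #define BLKSIZE 64

        static int  lock_table[NLOCKS];           /* stand-in for controller-held locks */
        static char shared_block[BLKSIZE] = "0";  /* stand-in for a shared metadata block */

        static int dlock_acquire(int lock_id)     /* GFS would send a lock command to the device */
        {
            if (lock_table[lock_id])
                return -1;                        /* already held; a real client would retry */
            lock_table[lock_id] = 1;
            return 0;
        }

        static void dlock_release(int lock_id)
        {
            lock_table[lock_id] = 0;
        }

        /* Update a shared block atomically: lock, read, modify, write, unlock. */
        static int update_block(int lock_id, void (*modify)(char *))
        {
            char buf[BLKSIZE];

            if (dlock_acquire(lock_id) != 0)
                return -1;
            memcpy(buf, shared_block, BLKSIZE);   /* read the current copy */
            modify(buf);                          /* modify it locally */
            memcpy(shared_block, buf, BLKSIZE);   /* write it back */
            dlock_release(lock_id);
            return 0;
        }

        static void bump(char *buf)               /* example modification: increment a counter */
        {
            int v = atoi(buf);
            snprintf(buf, BLKSIZE, "%d", v + 1);
        }

        int main(void)
        {
            update_block(3, bump);
            printf("block now holds: %s\n", shared_block);
            return 0;
        }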