Interfacing a high performance disk array file server to a Gigabit LAN
Our previous prototype, RAID-1, identified several bottlenecks in typical file server architectures. The most important was the lack of a high-bandwidth path between disk, memory, and the network. Workstation servers, such as the Sun-4/280, have very slow access to peripherals on busses far from the CPU. For the RAID-2 system, we addressed this problem by designing a crossbar interconnect, the Xbus board, which provides a 40 MB/s path between disk, memory, and the network interfaces. However, this interconnect does not provide the system CPU with low-latency access for controlling the various interfaces. To deliver a high data rate to clients on the network, we therefore had to design the network software carefully and efficiently. A block diagram of the system hardware architecture is given. In the following subsections, we describe the pieces of the RAID-2 file server hardware that had a significant impact on the design of the network interface.
A Server of Distributed Disk Pages Using a Configurable Software Bus
As network latency drops below disk latency, access time to a remote disk will
begin to approach local disk access time. The performance of I/O may then be
improved by spreading disk pages across several remote disk servers and
accessing disk pages in parallel. To research this we have prototyped a data
page server called a Page File. This persistent data type provides a set of
methods to access disk pages stored on a cluster of remote machines acting as
disk servers. The goal is to improve the throughput of a database management
system or other I/O-intensive application by accessing pages from remote disks
and incurring disk latency in parallel. This report describes the conceptual
foundation and the methods of access for our prototype.
(Also cross-referenced as UMIACS-TR-93-47)
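The parallel-access idea above can be sketched as follows. This is an illustrative sketch only: the round-robin striping policy, the in-memory "servers", and the function names are assumptions for demonstration, not the Page File's actual interface.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical disk-server cluster, modeled as dictionaries mapping page
# numbers to page contents. Pages are striped round-robin across servers
# (an assumed placement policy).
SERVERS = [
    {0: b"page0", 3: b"page3"},
    {1: b"page1", 4: b"page4"},
    {2: b"page2", 5: b"page5"},
]

def read_page(page_no):
    # With round-robin striping, page i lives on server i mod N.
    # In a real system this would be a network request to a remote disk server.
    server = SERVERS[page_no % len(SERVERS)]
    return server[page_no]

def read_pages_parallel(page_nos):
    # Issue all page reads concurrently so that the per-server disk
    # latencies overlap instead of accumulating serially.
    with ThreadPoolExecutor(max_workers=len(SERVERS)) as pool:
        return list(pool.map(read_page, page_nos))

pages = read_pages_parallel([0, 1, 2, 3])
```

Because the four reads touch three different servers, their latencies are incurred in parallel; the elapsed time is bounded by the slowest server rather than the sum of all requests.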
Low latency via redundancy
Low latency is critical for interactive networked applications. But while we
know how to scale systems to increase capacity, reducing latency --- especially
the tail of the latency distribution --- can be much more difficult. In this
paper, we argue that the use of redundancy is an effective way to convert extra
capacity into reduced latency. By initiating redundant operations across
diverse resources and using the first result which completes, redundancy
improves a system's latency even under exceptional conditions. We study the
tradeoff with added system utilization, characterizing the situations in which
replicating all tasks reduces mean latency. We then demonstrate empirically
that replicating all operations can result in significant mean and tail latency
reduction in real-world systems including DNS queries, database servers, and
packet forwarding within networks.
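The core mechanism, initiating redundant operations and keeping the first result that completes, can be sketched as below. The replica function and its simulated latencies are illustrative assumptions; a real deployment would send the same DNS query or database request to diverse servers.

```python
import concurrent.futures
import random
import time

def query_replica(replica_id, payload):
    # Stand-in for a remote replica with variable response time.
    # The occasional slow sleep models a latency-tail straggler.
    time.sleep(random.uniform(0.001, 0.02))
    return replica_id, f"answer:{payload}"

def redundant_request(payload, replicas=(0, 1, 2)):
    # Send the identical operation to several diverse replicas and
    # return whichever result completes first; the stragglers' answers
    # are simply discarded, trading extra utilization for lower latency.
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(replicas)) as pool:
        futures = [pool.submit(query_replica, r, payload) for r in replicas]
        done, _ = concurrent.futures.wait(
            futures, return_when=concurrent.futures.FIRST_COMPLETED)
        return next(iter(done)).result()

replica, answer = redundant_request("example.com")
```

Since the caller waits only for the fastest of the N replicas, the tail of the latency distribution is governed by the minimum of N samples rather than a single draw, which is where the tail-latency reduction comes from.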
An Experiment on Bare-Metal BigData Provisioning
Many BigData customers use on-demand platforms in the cloud, where they can get a dedicated virtual cluster in a couple of minutes and pay only for the time they use. Increasingly, there is a demand for bare-metal BigData solutions for applications that cannot tolerate the unpredictability and performance degradation of virtualized systems. Existing bare-metal solutions can introduce delays of tens of minutes to provision a cluster by installing operating systems and applications on the local disks of servers. This has motivated recent research developing sophisticated mechanisms to optimize this installation. These approaches assume that using network-mounted boot disks incurs unacceptable run-time overhead. Our analysis suggests that while this assumption is true for application data, it is incorrect for operating systems and applications: network mounting the boot disk and applications results in negligible run-time impact while leading to faster provisioning time.
This research was supported in part by the MassTech Collaborative Research Matching Grant Program, NSF awards 1347525 and 1414119, and several commercial partners of the Massachusetts Open Cloud, who may be found at http://www.massopencloud.or