78 research outputs found

    Minimizing buffer requirements in video-on-demand servers

    23rd Euromicro Conference EUROMICRO 97: 'New Frontiers of Information Technology', Budapest, Hungary, 1-4 Sept 1997. Memory management is a key issue when designing cost-effective video-on-demand servers. State-of-the-art techniques, such as double buffering, allocate buffers on a per-stream basis and require huge amounts of memory. We propose a buffering policy, Single Pair of Buffers, that dramatically reduces server memory requirements by reserving one pair of buffers per storage device. By considering disk and network interaction in detail, we have also identified the particular conditions under which this policy can be successfully applied to engineer video-on-demand servers. Reduction factors of two orders of magnitude compared with the double-buffering approach can be obtained. Current disk and network parameters make this technique feasible.
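The memory saving claimed in this abstract can be sketched with a back-of-the-envelope comparison. All parameters below (buffer size, stream count, disk count) are illustrative assumptions, not figures from the paper:

```python
# Rough memory-footprint comparison of double buffering vs. Single Pair of
# Buffers (SPB). Double buffering reserves a pair of buffers per stream;
# SPB reserves a pair of buffers per storage device.

BUFFER_SIZE = 256 * 1024      # bytes per buffer (assumed disk transfer unit)
STREAMS = 1000                # concurrent video streams served (assumed)
DISKS = 10                    # storage devices in the server (assumed)

double_buffering = 2 * STREAMS * BUFFER_SIZE   # one pair of buffers per stream
single_pair = 2 * DISKS * BUFFER_SIZE          # one pair of buffers per disk

reduction = double_buffering / single_pair
print(f"double buffering: {double_buffering // 2**20} MiB")   # 500 MiB
print(f"single pair of buffers: {single_pair // 2**20} MiB")  # 5 MiB
print(f"reduction factor: {reduction:.0f}x")                  # 100x
```

With these made-up parameters the ratio is streams/disks = 100, i.e. the two orders of magnitude the abstract mentions.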

    Implementing and Evaluating Jukebox Schedulers Using JukeTools

    Scheduling jukebox resources is important for building efficient and flexible hierarchical storage systems. JukeTools is a toolbox that helps with the complex tasks of implementing and evaluating jukebox schedulers. It allows the fast development of jukebox schedulers, which can be tested in numerous environments, both real and simulated. JukeTools helps the developer easily detect errors in the schedules. Analyzer tools create detailed reports on the behavior and performance of any scheduler and provide comparisons between different schedulers. This paper describes the functionality offered by JukeTools, with special emphasis on how the toolbox can be used to develop jukebox schedulers.

    Enhancement of Writeback Caching by changes in flush and its parameters

    Achieving high performance in computing and data access is the aim of any system. Reducing the time needed to access data stored on a device is essential for improving performance, and caching is implemented to that end. A cache device and a virtual device are grouped into a cache group to enhance system performance. The system is not in the same condition at all times: the application's I/O rate always varies, and this variation is typically not exploited to its full extent. These differences in I/O rates can be used effectively to enhance system performance. When the system is idle or under light I/O, the flush is forced so that data inconsistency is reduced; when the system is being bombarded with I/O, fewer threads are given to flush I/O. Varying the number of threads assigned to flush I/O in this way enhances the overall performance of the system.
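The load-dependent flush policy described above can be sketched as a simple function of the observed I/O rate. The thresholds and thread counts below are invented for illustration; the paper's actual parameters are not given in the abstract:

```python
# Hedged sketch: vary the number of threads devoted to flushing dirty
# writeback-cache blocks with the observed application I/O rate.

def flush_threads(io_rate, idle_threshold=100, busy_threshold=1000,
                  max_threads=8, min_threads=1):
    """Return how many flush threads to run for a given I/O rate (ops/s)."""
    if io_rate <= idle_threshold:
        return max_threads          # system is idle: flush aggressively
    if io_rate >= busy_threshold:
        return min_threads          # system is busy: keep flush out of the way
    # Scale linearly between the two thresholds.
    span = busy_threshold - idle_threshold
    fraction = (busy_threshold - io_rate) / span
    return max(min_threads,
               round(min_threads + fraction * (max_threads - min_threads)))

print(flush_threads(50))    # idle system -> 8 flush threads
print(flush_threads(2000))  # heavily loaded system -> 1 flush thread
```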

    An approximation algorithm for a generalized assignment problem with small resource requirements.

    We investigate a generalized assignment problem in which the resource requirements are either 1 or 2. This problem is motivated by a question that arises when data blocks are to be retrieved from parallel disks as efficiently as possible. The resulting problem is to assign jobs to machines of given capacity, where each job takes either one or two units of machine capacity and must satisfy certain assignment restrictions, such that the total weight of the assigned jobs is maximized. We derive a 2/3-approximation result for this problem based on relaxing a formulation of the problem so that the resulting constraint matrix is totally unimodular. Further, we prove that the LP-relaxation of a special case of the problem is half-integral, and we derive a weak persistency property.
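A tiny brute-force instance makes the problem statement concrete: each job has a weight, a size of 1 or 2, and a set of machines it may be assigned to. The instance below is made up, and the paper's 2/3-approximation algorithm is not implemented here:

```python
# Brute-force illustration of the assignment problem: maximize total weight
# of assigned jobs subject to machine capacities and assignment restrictions.
from itertools import product

jobs = [  # (weight, size, allowed machines) -- a hypothetical instance
    (5, 1, {0, 1}),
    (4, 2, {0}),
    (3, 1, {1}),
]
capacity = [2, 2]

best = 0
for choice in product([None, 0, 1], repeat=len(jobs)):  # None = unassigned
    load = [0] * len(capacity)
    weight = 0
    feasible = True
    for (w, s, allowed), m in zip(jobs, choice):
        if m is None:
            continue
        if m not in allowed:
            feasible = False
            break
        load[m] += s
        weight += w
    if feasible and all(l <= c for l, c in zip(load, capacity)):
        best = max(best, weight)

print(best)  # -> 12: all three jobs fit (job 1 on machine 0, jobs 0 and 2 on machine 1)
```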

    Random redundant storage in disk arrays: Complexity of retrieval problems

    Random redundant data storage strategies have proven to be a good choice for efficient data storage in multimedia servers. These strategies lead to a retrieval problem in which one must decide, for each requested data block, which disk to use for its retrieval. In this paper, we give a complexity classification of retrieval problems for random redundant storage.
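The retrieval problem mentioned above can be sketched as follows: each requested block is replicated on a few disks, and one copy per block must be chosen so that the maximum number of blocks read from any single disk stays low. The greedy heuristic below (pick the currently least-loaded disk holding the block) is only an illustration; the paper classifies the complexity of the exact problem, which this greedy does not solve optimally in general:

```python
# Greedy sketch of retrieval assignment under random redundant storage.

def assign_retrievals(requests, num_disks):
    """requests: list of sets of disk ids holding each block's replicas.
    Returns (assignment, per-disk load)."""
    load = [0] * num_disks
    assignment = []
    for replicas in requests:
        # Pick the least-loaded disk among the replicas (ties -> lowest id).
        disk = min(sorted(replicas), key=lambda d: load[d])
        load[disk] += 1
        assignment.append(disk)
    return assignment, load

# Hypothetical batch of four block requests over three disks.
requests = [{0, 1}, {0, 1}, {1, 2}, {0, 2}]
assignment, load = assign_retrievals(requests, 3)
print(load)  # -> [2, 1, 1]: no disk serves more than 2 of the 4 blocks
```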

    The Architecture and Performance Evaluation of iSCSI-Based United Storage Network Merging NAS and SAN

    With the ever-increasing volume of data in networks, the traditional storage architecture is greatly challenged, and more and more attention is being paid to network storage. Currently, the main network storage technologies are NAS (Network Attached Storage) and SAN (Storage Area Network). They are different but mutually complementary and are used under different circumstances; however, both NAS and SAN may be needed in the same company. To reduce the TCO (total cost of ownership), for easier implementation, and for other reasons, people hope to merge the two technologies. Additionally, the main internetworking technology of SAN is Fibre Channel, whose major obstacles are poor interoperability, a lack of trained staff, and high implementation costs. To solve the above-mentioned issues, this paper introduces a novel storage architecture called USN (United Storage Networks), which uses iSCSI to build the storage network and merges the NAS and SAN techniques, supplying the virtues and overcoming the drawbacks of both, and provides both file I/O and block I/O services simultaneously.

    Characterizing Synchronous Writes in Stable Memory Devices

    Distributed algorithms that operate in the fail-recovery model rely on the state stored in stable memory to guarantee the irreversibility of operations even in the presence of failures. The performance of these algorithms leans heavily on the performance of stable memory. Current storage technologies have a well-defined performance profile: data is accessed in blocks of hundreds or thousands of bytes, random access to these blocks is expensive, and sequential access is somewhat better. File system implementations hide some of the performance limitations of the underlying storage devices using buffers and caches. However, fail-recovery distributed algorithms bypass some of these techniques and perform synchronous writes to be able to tolerate a failure during the write itself. Assuming the distributed system designer is able to buffer the algorithm's writes, we ask how buffer size and latency complement each other. In this paper we start to answer this question by characterizing the performance (throughput and latency) of typical stable memory devices using a representative set of current file systems.
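The kind of measurement this abstract describes can be sketched by timing small synchronous writes (write followed by fsync), which is the latency a fail-recovery algorithm pays per stable-memory update. The file name, write size, and round count below are arbitrary choices, and the numbers depend entirely on the device and file system being measured:

```python
# Minimal sketch: measure average latency of synchronous (write + fsync)
# appends to a file, the building block of stable-memory logging.
import os
import tempfile
import time

def sync_write_latency(path, size=512, rounds=100):
    """Average seconds per synchronous write of `size` bytes to `path`."""
    buf = os.urandom(size)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    latencies = []
    try:
        for _ in range(rounds):
            start = time.perf_counter()
            os.write(fd, buf)
            os.fsync(fd)          # force the data to stable storage
            latencies.append(time.perf_counter() - start)
    finally:
        os.close(fd)
    return sum(latencies) / len(latencies)

with tempfile.TemporaryDirectory() as d:
    avg = sync_write_latency(os.path.join(d, "stable.log"), rounds=10)
    print(f"avg synchronous write latency: {avg * 1e6:.0f} us")
```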