    Random Duplicate Storage Strategies for Load Balancing in Multimedia Servers

    In this paper we use randomization and data redundancy to enable good load balancing. We focus on duplicate storage strategies, i.e., each data block is stored twice, which means that a request for a block can be serviced by either of two disks. A consequence of such a storage strategy is that we have to decide for each block which disk to use for its retrieval. This results in a so-called retrieval selection problem. We describe a graph model for duplicate storage strategies and derive polynomial-time optimization algorithms for the retrieval selection problems of several storage strategies. Our model unifies and generalizes chained declustering and random duplicate assignment strategies. Simulation results and a probabilistic analysis complete this paper.
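    The paper's graph model is not reproduced in the abstract; one standard way to solve a retrieval selection problem of this kind is to binary-search on the maximum disk load and test each candidate bound with a max-flow computation. The Python sketch below (using networkx) is an illustration under that assumption; the `copies` map and all other names are ours, not the paper's notation.

    import networkx as nx

    def min_max_load(requests, copies, num_disks):
        # Retrieval selection for duplicate storage: every requested block
        # has copies on two disks; pick one disk per request so that the
        # maximum number of requests any single disk serves is minimized.
        def feasible(bound):
            # Feasibility of a load bound is a bipartite flow problem:
            # source -> request (cap 1) -> candidate disk -> sink (cap bound).
            g = nx.DiGraph()
            for r in requests:
                g.add_edge("s", ("req", r), capacity=1)
                for d in copies[r]:
                    g.add_edge(("req", r), ("disk", d), capacity=1)
            for d in range(num_disks):
                g.add_edge(("disk", d), "t", capacity=bound)
            value, _ = nx.maximum_flow(g, "s", "t")
            return value == len(requests)

        lo, hi = 0, len(requests)
        while lo < hi:                    # binary search on the load bound
            mid = (lo + hi) // 2
            if feasible(mid):
                hi = mid
            else:
                lo = mid + 1
        return lo

    For example, with requests = [0, 1, 2], copies = {0: (0, 1), 1: (0, 1), 2: (1, 2)} and three disks, the minimum achievable maximum load is 1.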

    Scan chain design for test time reduction in core-based ICs

    The size of the test vector set is a significant factor in the overall production cost of ICs, as it determines the test application time and the required pin memory size of the test equipment. Large core-based ICs often require a very large test vector set to achieve high test coverage. This paper deals with the design of scan chains as a transport mechanism for test patterns from IC pins to embedded cores and vice versa. The number of pins available to accommodate scan test is given, as well as the number of scan test patterns and scannable flip-flops of each core. We present and analyze three scan chain architectures for core-based ICs, which aim at a minimum test vector set size. We give experimental results of the three architectures for an industrial IC. Furthermore, we analyze the test time consequences of reusing cores with fixed internal scan chains in multiple ICs with varying design parameters.
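    The abstract does not spell out the three architectures; as a rough illustration of the underlying balancing problem, the sketch below distributes cores over a fixed number of scan chains with a longest-processing-time greedy, since test application time grows with the number of patterns times the length of the longest chain. The heuristic and all names are illustrative assumptions, not the paper's method.

    import heapq

    def assign_cores_to_chains(core_flip_flops, num_chains):
        # Greedy longest-processing-time heuristic: repeatedly give the
        # core with the most scannable flip-flops to the currently
        # shortest scan chain, so that chain lengths stay balanced and
        # the longest chain (which bounds test time) stays short.
        chains = [(0, i, []) for i in range(num_chains)]  # (length, id, cores)
        heapq.heapify(chains)
        for core, n_ff in sorted(core_flip_flops.items(),
                                 key=lambda kv: -kv[1]):
            length, i, members = heapq.heappop(chains)
            heapq.heappush(chains, (length + n_ff, i, members + [core]))
        return sorted(chains, key=lambda chain: chain[1])

    With core_flip_flops = {'A': 400, 'B': 300, 'C': 300, 'D': 200} and two chains, the greedy yields two chains of 600 flip-flops each.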

    On the Guaranteed Throughput of Multi-Zone Disks

    We derive the guaranteed throughput of a multi-zone disk that repeatedly handles batches of n requests of constant size. Using this guaranteed throughput in the design of multimedia systems, one can admit more streams or obtain smaller buffer requirements and guaranteed response times than when an existing lower bound is used. We consider the case that nothing can be assumed about the location of the requests on the disk. Furthermore, we assume that successive batches are handled one after the other, where the n requests in a batch are retrieved using a SCAN-based sweep strategy. We show that we only have to consider two successive batches to determine the guaranteed throughput. Using this, we can compute the guaranteed throughput by determining a maximum-weighted path in a directed acyclic graph in O(z_max) time, where z_max is the number of zones of the disk.
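    The DAG construction itself (whose nodes encode how a sweep over one batch constrains the next) is specific to the paper; the final step, finding a maximum-weighted path in a DAG, is standard dynamic programming over a topological order. A minimal sketch, with an illustrative edge-list representation:

    from collections import defaultdict

    def max_weight_path(edges, source):
        # Maximum-weighted path from `source` in a DAG; `edges` maps a
        # node to a list of (successor, weight) pairs.
        order, seen = [], set()

        def visit(u):                     # DFS post-order = reverse topo order
            seen.add(u)
            for v, _ in edges.get(u, ()):
                if v not in seen:
                    visit(v)
            order.append(u)

        visit(source)
        best = defaultdict(lambda: float("-inf"))
        best[source] = 0
        for u in reversed(order):         # process in topological order
            for v, w in edges.get(u, ()):
                best[v] = max(best[v], best[u] + w)
        return max(best.values())

    For edges = {'a': [('b', 3), ('c', 5)], 'b': [('c', 1)]}, the heaviest path from 'a' has weight 5.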

    Improving disk efficiency in video servers by random redundant storage

    Random redundant storage strategies have proven to be an interesting solution to the problem of storing data in a video server. Several papers describe how a good load balance is obtained by using the freedom of choice for the data blocks that are stored more than once. We improve on these results by exploiting the multi-zone character of hard disks. In our model of the load balancing problem we incorporate the actual transfer times of the blocks, which depend on the zones in which the blocks are stored. We give an MILP model of the load balancing problem, which we use to derive a number of good load balancing algorithms. We show that, by using these algorithms, the amount of data that is read from the fast zones is substantially larger than with conventional strategies, so that the disks are used more efficiently.
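    The abstract does not state the MILP itself; the sketch below shows one plausible shape for such a model, written with the PuLP library: each block is read from exactly one of the disks holding a copy, the transfer time of a copy reflects its zone, and the maximum disk busy time is minimized. The data layout and all names are assumptions for illustration.

    import pulp

    def zone_aware_retrieval(transfer_time):
        # transfer_time[(block, disk)] is the time to read `block` from
        # `disk`, reflecting the zone that holds the copy (outer zones
        # transfer faster).  Pick one disk per block so that the busiest
        # disk finishes as early as possible.
        blocks = {b for b, _ in transfer_time}
        disks = {d for _, d in transfer_time}

        prob = pulp.LpProblem("zone_aware_load_balancing", pulp.LpMinimize)
        x = {bd: pulp.LpVariable(f"x_{bd[0]}_{bd[1]}", cat="Binary")
             for bd in transfer_time}
        makespan = pulp.LpVariable("makespan", lowBound=0)
        prob += makespan                  # objective: minimize the makespan

        for b in blocks:                  # every block is read exactly once
            prob += pulp.lpSum(x[b, d] for d in disks if (b, d) in x) == 1
        for d in disks:                   # busy time of each disk <= makespan
            prob += pulp.lpSum(t * x[b2, d2]
                               for (b2, d2), t in transfer_time.items()
                               if d2 == d) <= makespan

        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        return {b: d for (b, d), var in x.items() if var.value() == 1}

    Minimizing the busiest disk's finishing time, rather than the total transfer time, is what drives reads toward copies in fast zones without letting any single disk become a bottleneck.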