
    Does science need computer science?

    No full text
    IBM Hursley Talks Series 3An afternoon of talks, to be held on Wednesday March 10 from 2:30pm in Bldg 35 Lecture Room A, arranged by the School of Chemistry in conjunction with IBM Hursley and the Combechem e-Science Project.The talks are aimed at science students (undergraduate and post-graduate) from across the faculty. This is the third series of talks we have organized, but the first time we have put them together in an afternoon. The talks are general in nature and knowledge of computer science is certainly not necessary. After the talks there will be an opportunity for a discussion with the lecturers from IBM.Does Science Need Computer Science?Chair and Moderator - Jeremy Frey, School of Chemistry.- 14:00 "Computer games for fun and profit" (*) - Andrew Reynolds - 14:45 "Anyone for tennis? The science behind WIBMledon" (*) - Matt Roberts - 15:30 Tea (Chemistry Foyer, Bldg 29 opposite bldg 35) - 15:45 "Disk Drive physics from grandmothers to gigabytes" (*) - Steve Legg - 16:35 "What could happen to your data?" (*) - Nick Jones - 17:20 Panel Session, comprising the four IBM speakers and May Glover-Gunn (IBM) - 18:00 Receptio

    Self-Repairing Disk Arrays

    Full text link
    As the prices of magnetic storage continue to decrease, the cost of replacing failed disks becomes increasingly dominated by the cost of the service call itself. We propose to eliminate these calls by building disk arrays that contain enough spare disks to operate without any human intervention during their whole lifetime. To evaluate the feasibility of this approach, we have simulated the behavior of two-dimensional disk arrays with n parity disks and n(n-1)/2 data disks under realistic failure and repair assumptions. Our conclusion is that having n(n+1)/2 spare disks is more than enough to achieve a 99.999 percent probability of not losing data over four years. We observe that the same objectives cannot be reached with RAID level 6 organizations and would require RAID stripes that could tolerate triple disk failures. Comment: Part of ADAPT Workshop proceedings, 2015 (arXiv:1412.2347).
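    The feasibility claim rests on simulating failures and rebuilds against a pool of spares. The sketch below is not the authors' two-dimensional array simulator; it is a minimal Monte Carlo illustration of the same kind of estimate, assuming exponential disk lifetimes, a fixed rebuild time, and invented parameter values (disk count, spare count, MTTF, fault tolerance).

        import random

        def prob_no_data_loss(n_data=45, n_spares=55, fault_tolerance=2,
                              mttf_hours=1.0e6, rebuild_hours=24.0,
                              mission_years=4, trials=20_000, seed=1):
            """Toy estimate of P(no data loss over the mission time): every failed
            disk is rebuilt onto a spare in `rebuild_hours`; data is lost if a
            failure strikes while spares are exhausted or while `fault_tolerance`
            earlier failures are still being rebuilt. All parameters are invented."""
            rng = random.Random(seed)
            mission_hours = mission_years * 365.0 * 24.0
            survived = 0
            for _ in range(trials):
                t, spares, rebuilding = 0.0, n_spares, []   # rebuild completion times
                ok = True
                while True:
                    t += rng.expovariate(n_data / mttf_hours)      # next disk failure
                    if t >= mission_hours:
                        break
                    rebuilding = [r for r in rebuilding if r > t]  # drop finished rebuilds
                    if spares == 0 or len(rebuilding) >= fault_tolerance:
                        ok = False                                 # failure cannot be masked
                        break
                    spares -= 1
                    rebuilding.append(t + rebuild_hours)           # rebuild onto a spare
                survived += ok
            return survived / trials

        print(f"P(no data loss over 4 years) ~ {prob_no_data_loss():.5f}")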

    Introduction to Multiprocessor I/O Architecture

    Get PDF
    The computational performance of multiprocessors continues to improve by leaps and bounds, fueled in part by rapid improvements in processor and interconnection technology. I/O performance thus becomes ever more critical if it is not to become the bottleneck of overall system performance. In this paper we provide an introduction to I/O architectural issues in multiprocessors, with a focus on disk subsystems. While we discuss examples from actual architectures and provide pointers to interesting research in the literature, we do not attempt to provide a comprehensive survey. We concentrate instead on the architectural design issues and the effects of different design alternatives.

    SIMULATION AND MODELLING OF RAID 0 SYSTEM PERFORMANCE

    No full text
    RAID systems are fundamental components of modern storage infrastructures. It is therefore important to model their performance effectively. This paper describes a simulation model which predicts the cumulative distribution function of I/O request response time in a RAID 0 system consisting of homogeneous zoned disk drives. The model is constructed in a bottom-up manner, starting by abstracting a single disk drive as an M/G/1 queue. This is then extended to model a RAID 0 system using a split-merge queueing network. Simulation results of I/O request response time for RAID 0 systems with various numbers of disks are computed and compared against device measurements.
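    The split-merge abstraction can be sketched directly: each request forks a sub-request to every disk and the array is blocked until the slowest one completes, so the array behaves like a single FCFS queue whose effective service time is the maximum of the per-disk service times. The toy simulation below illustrates how the response-time CDF can be estimated this way; it assumes Poisson arrivals and an invented uniform per-disk service time in place of the paper's zoned-disk M/G/1 model, and all names and parameters are illustrative.

        import random

        def raid0_response_cdf(n_disks=4, arrival_rate=0.02, n_requests=200_000,
                               seed=42):
            """Split-merge view of RAID 0: the array acts as one FCFS queue whose
            service time is the max of the per-disk sub-request times, so waiting
            times follow Lindley's recursion. Returns an empirical CDF of response time."""
            rng = random.Random(seed)
            sub_service = lambda: rng.uniform(4.0, 12.0)   # stand-in per-disk service time (ms)
            responses, wait, prev_service = [], 0.0, 0.0
            for i in range(n_requests):
                if i:
                    inter = rng.expovariate(arrival_rate)             # Poisson arrivals
                    wait = max(0.0, wait + prev_service - inter)      # Lindley recursion
                service = max(sub_service() for _ in range(n_disks))  # slowest sub-request
                responses.append(wait + service)                      # response (sojourn) time
                prev_service = service
            responses.sort()
            return lambda t: sum(r <= t for r in responses) / len(responses)

        cdf = raid0_response_cdf()
        print(f"P(response time <= 20 ms) ~ {cdf(20.0):.3f}")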

    Enhancements in data redistribution strategies to increase efficiency of large data volumes in Scientific Clouds using FastScale

    Get PDF
    In many scientific Clouds, storing very large amounts of application data remains a great challenge. To provide the necessary storage and performance support, one strategy is to distribute data over multiple disks using RAID technologies, which are widely available and very robust. Adding new storage disks to cope with large amounts of application data requires proper parallel data redistribution techniques to maintain the performance of the entire system. In this paper we describe various techniques and algorithms that leverage redistribution strategies aimed at increasing the performance of a scalable parallel disk array. We summarize several recent methods and approaches such as SCADDAR, SLAS, ALV and FastScale. We describe the FastScale implementation and propose an algorithm that takes parity block positions into account to enable parallel reads/writes on the extended volumes. Numerical results show that FastScale outperforms SLAS under the same workloads. We conclude with a discussion of the expected performance of the proposed algorithm and future work on performance evaluation.
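    For illustration, the sketch below contrasts naive round-robin restriping with a minimal-migration block mapping in the spirit of FastScale. It is not FastScale's actual addressing algorithm and it ignores parity placement, which the proposed algorithm addresses; disk counts and function names are invented.

        def round_robin(block, old_disks, new_disks):
            """Naive restriping: block b lives on disk b mod N, so growing the array
            changes the home of almost every existing block (heavy migration)."""
            return block % new_disks

        def minimal_migration(block, old_disks, new_disks):
            """Sketch of the minimal-migration idea: within every window of
            `new_disks` consecutive blocks, relocate only `added` blocks onto the
            new disks and leave the rest exactly where the old layout put them."""
            added = new_disks - old_disks
            pos = block % new_disks
            if pos < added:
                return old_disks + pos          # one of the few blocks that moves
            return block % old_disks            # stays on its original disk

        OLD, NEW, N = 4, 6, 12_000
        for name, layout in [("round-robin restripe", round_robin),
                             ("minimal migration", minimal_migration)]:
            moved = sum(b % OLD != layout(b, OLD, NEW) for b in range(N))
            print(f"{name}: {moved / N:.0%} of existing blocks migrate")

    With these toy numbers, round-robin restriping relocates about two thirds of the existing blocks, while the minimal-migration mapping moves only the fraction needed to populate the new disks (added/new = 1/3 here) while keeping the load balanced across all six disks.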