
    Data allocation in disk arrays with multiple RAID levels

    There has been an explosion in the amount of generated data, which has to be stored reliably because it is not easily reproducible. Some datasets require frequent read and write access, as in online transaction processing applications. Others just need to be stored safely and read once in a while, as in data mining. These different access requirements can be met using the RAID (redundant array of inexpensive disks) paradigm: RAID1 for the first situation and RAID5 for the second. Furthermore, rather than providing two disk arrays with RAID1 and RAID5 capabilities, a single controller can be postulated to emulate both; this is referred to as a heterogeneous disk array (HDA). Dedicating a subset of disks to RAID1 results in poor disk utilization, since RAID1 versus RAID5 capacity and bandwidth requirements are not known a priori. Balancing disk loads when disk space is shared among allocation requests, referred to as virtual arrays (VAs), poses a difficult problem. RAID1 disk arrays have a higher access rate per gigabyte than RAID5 disk arrays. The goal of this study is to allocate more VAs while keeping disk utilizations balanced and within acceptable bounds. Given its size and access rate, a VA's width, i.e., the number of its virtual disks (VDs), is determined. Allocating VDs onto physical disks using vector-packing heuristics, with disk capacity and bandwidth as the two dimensions, is shown to perform best. An allocation is acceptable if it does not exceed disk capacity or overload disks, even in the presence of disk failures. When disk bandwidth rather than capacity is the bottleneck, the clustered RAID paradigm is applied, which offers a tradeoff between disk space and bandwidth. Another scenario is also considered, where the RAID level is determined by a classification algorithm utilizing the access characteristics of the VA, i.e., the fractions of small versus large accesses and of write versus read accesses. The effect of RAID1 organization on its reliability and performance is studied as well. The effect of disk failures on the X-code two-disk-failure-tolerant array is analyzed, and it is shown that the load across disks is highly unbalanced unless, in an NxN array, groups of N stripes are randomly rotated.
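    The vector-packing step can be made concrete with a small sketch. The following is a minimal, hypothetical illustration (not the heuristics evaluated in the study) of greedy two-dimensional vector packing, assuming each VD's capacity and bandwidth demands are expressed as fractions of a single physical disk:

        # Hypothetical sketch of two-dimensional vector packing: each virtual disk
        # (VD) demands a (capacity, bandwidth) vector, expressed as fractions of one
        # physical disk, and must fit within a disk's remaining vector.
        from dataclasses import dataclass, field

        @dataclass
        class Disk:
            capacity: float = 1.0             # fraction of capacity still free
            bandwidth: float = 1.0            # fraction of bandwidth still free
            placed: list = field(default_factory=list)

        def allocate(vds, disks):
            """Greedy best-fit: largest-demand VDs first, each placed on the feasible
            disk whose leftover (capacity, bandwidth) vector stays most balanced."""
            for name, cap, bw in sorted(vds, key=lambda v: max(v[1], v[2]), reverse=True):
                feasible = [d for d in disks if cap <= d.capacity and bw <= d.bandwidth]
                if not feasible:
                    return None               # reject: the VD would overload every disk
                best = min(feasible, key=lambda d: abs((d.capacity - cap) - (d.bandwidth - bw)))
                best.capacity -= cap
                best.bandwidth -= bw
                best.placed.append(name)
            return disks

        # Example: three VDs from two VAs packed onto two physical disks.
        allocate([("va1-vd0", 0.5, 0.2), ("va1-vd1", 0.5, 0.2), ("va2-vd0", 0.3, 0.6)],
                 [Disk(), Disk()])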

    Multi-Terabyte EIDE Disk Arrays running Linux RAID5

    High-energy physics experiments are currently recording large amounts of data and in a few years will be recording prodigious quantities of data. New methods must be developed to handle this data and make analysis at universities possible. Grid Computing is one method; however, the data must be cached at the various Grid nodes. We examine some storage techniques that exploit recent developments in commodity hardware. Disk arrays using RAID level 5 (RAID-5) include both parity and striping. The striping improves access speed. The parity protects data in the event of a single disk failure, but not in the case of multiple disk failures. We report on tests of dual-processor Linux Software RAID-5 arrays and Hardware RAID-5 arrays using a 12-disk 3ware controller, in conjunction with 250 and 300 GB disks, for use in offline high-energy physics data analysis. The price of IDE disks is now less than $1/GB. These RAID-5 disk arrays can be scaled to sizes affordable to small institutions and used when fast random access at low cost is important.
    Comment: Talk from the 2004 Computing in High Energy and Nuclear Physics (CHEP04), Interlaken, Switzerland, 27th September - 1st October 2004, 4 pages, LaTeX, uses CHEP2004.cls. ID 47, Poster Session 2, Track
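    As a minimal illustration of the striping-plus-parity idea described above (not code from the paper), a RAID-5 parity block is the XOR of the data blocks in its stripe, which is exactly what allows a single failed disk to be rebuilt from the survivors:

        # Illustrative sketch of RAID-5 parity: XOR the data blocks of a stripe to
        # form the parity block; any single missing block (data or parity) can then
        # be recovered by XOR-ing the surviving blocks of that stripe.
        from functools import reduce

        def xor_blocks(blocks):
            return bytes(reduce(lambda a, b: [x ^ y for x, y in zip(a, b)], blocks))

        data = [b"\x01\x02", b"\x10\x20", b"\xaa\x55"]   # one stripe across 3 data disks
        parity = xor_blocks(data)                        # parity block on a 4th disk

        # Simulate losing the second data disk and rebuilding its block.
        rebuilt = xor_blocks([data[0], data[2], parity])
        assert rebuilt == data[1]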

    Redundant Arrays of IDE Drives

    The next generation of high-energy physics experiments is expected to gather prodigious amounts of data. New methods must be developed to handle this data and make analysis at universities possible. We examine some techniques that use recent developments in commodity hardware. We test redundant arrays of integrated drive electronics (IDE) disk drives for use in offline high-energy physics data analysis. IDE redundant array of inexpensive disks (RAID) prices now equal the cost per terabyte of million-dollar tape robots! The arrays can be scaled to sizes affordable to institutions without robots and used when fast random access at low cost is important. We also explore three methods of moving data between sites: internet transfers, hot-pluggable IDE disks in FireWire cases, and writable digital video disks (DVD-R).
    Comment: Submitted to IEEE Transactions on Nuclear Science, for the 2001 IEEE Nuclear Science Symposium and Medical Imaging Conference, 8 pages, 1 figure, uses IEEEtran.cls. Revised March 19, 2002 and published August 200

    Does science need computer science?

    IBM Hursley Talks Series 3
    An afternoon of talks, to be held on Wednesday March 10 from 2:30pm in Bldg 35 Lecture Room A, arranged by the School of Chemistry in conjunction with IBM Hursley and the Combechem e-Science Project. The talks are aimed at science students (undergraduate and post-graduate) from across the faculty. This is the third series of talks we have organized, but the first time we have put them together in an afternoon. The talks are general in nature and knowledge of computer science is certainly not necessary. After the talks there will be an opportunity for a discussion with the lecturers from IBM.
    Does Science Need Computer Science?
    Chair and Moderator - Jeremy Frey, School of Chemistry.
    - 14:00 "Computer games for fun and profit" (*) - Andrew Reynolds
    - 14:45 "Anyone for tennis? The science behind WIBMledon" (*) - Matt Roberts
    - 15:30 Tea (Chemistry Foyer, Bldg 29, opposite Bldg 35)
    - 15:45 "Disk Drive physics from grandmothers to gigabytes" (*) - Steve Legg
    - 16:35 "What could happen to your data?" (*) - Nick Jones
    - 17:20 Panel Session, comprising the four IBM speakers and May Glover-Gunn (IBM)
    - 18:00 Reception

    Robo-line storage: Low latency, high capacity storage systems over geographically distributed networks

    Rapid advances in high performance computing are making possible more complete and accurate computer-based modeling of complex physical phenomena, such as weather front interactions, dynamics of chemical reactions, numerical aerodynamic analysis of airframes, and ocean-land-atmosphere interactions. Many of these 'grand challenge' applications are as demanding of the underlying storage system, in terms of their capacity and bandwidth requirements, as they are of the computational power of the processor. A global view of the Earth's ocean chlorophyll and land vegetation requires over 2 terabytes of raw satellite image data. In this paper, we describe our planned research program in high capacity, high bandwidth storage systems. The project has four overall goals. First, we will examine new methods for high capacity storage systems, made possible by low cost, small form factor magnetic and optical tape systems. Second, access to the storage system will be low latency and high bandwidth. To achieve this, we must interleave data transfer at all levels of the storage system, including devices, controllers, servers, and communications links. Latency will be reduced by extensive caching throughout the storage hierarchy. Third, we will provide effective management of a storage hierarchy, extending the techniques already developed for the Log Structured File System. Finally, we will construct a prototype high capacity file server, suitable for use on the National Research and Education Network (NREN). Such research must be a cornerstone of any coherent program in high performance computing and communications.
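    A minimal sketch, assuming a two-level hierarchy with a fast disk cache staged in front of a slow tape tier, of the read-through caching the paragraph envisions for latency reduction; the class name and parameters below are illustrative assumptions, not part of the proposed design:

        # Hypothetical two-tier storage hierarchy: reads are served from an LRU disk
        # cache when possible and staged in from the slower tape tier on a miss.
        from collections import OrderedDict

        class HierarchicalStore:
            def __init__(self, tape, cache_blocks=1024):
                self.tape = tape                    # dict-like slow tier: block id -> data
                self.cache = OrderedDict()          # fast tier, kept in LRU order
                self.cache_blocks = cache_blocks

            def read(self, block_id):
                if block_id in self.cache:          # hit: served at disk-cache latency
                    self.cache.move_to_end(block_id)
                    return self.cache[block_id]
                data = self.tape[block_id]          # miss: stage the block in from tape
                self.cache[block_id] = data
                if len(self.cache) > self.cache_blocks:
                    self.cache.popitem(last=False)  # evict the least recently used block
                return data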