Tracking Back References in a Write-Anywhere File System
Many file systems reorganize data on disk, for example to defragment storage, shrink volumes, or migrate data between different classes of storage. Advanced file system features such as snapshots, writable clones, and deduplication make these tasks complicated, as moving a single block may require finding and updating dozens, or even hundreds, of pointers to it.
We present Backlog, an efficient implementation of explicit back references, to address this problem. Back references are file system metadata that map physical block numbers to the data objects that use them. We show that by using LSM-trees and exploiting the write-anywhere behavior of modern file systems such as NetApp® WAFL® or btrfs, we can maintain back reference metadata with minimal overhead (one extra disk I/O per 10² block operations) and provide excellent query performance for the common case of queries covering ranges of physically adjacent blocks.
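The idea of buffering back-reference updates in memory and flushing them as sorted runs, LSM-tree style, can be sketched as follows. This is only an illustration of the general technique, not the Backlog implementation; the class and method names (`BackRefTable`, `add_ref`, `owners_in_range`) and the flush policy are hypothetical.

```python
# Toy back-reference table in the spirit of an LSM-tree: updates are
# buffered in memory and flushed as sorted runs, so the cost of one
# sequential write is amortized over many block operations. Range
# queries over physically adjacent blocks scan each sorted run once.
import bisect
from collections import defaultdict

class BackRefTable:
    def __init__(self, flush_threshold=4):
        self.memtable = defaultdict(set)   # block -> {(file_id, offset)}
        self.runs = []                     # sorted runs of (block, ref) pairs
        self.flush_threshold = flush_threshold

    def add_ref(self, block, file_id, offset):
        """Record that (file_id, offset) points at physical block."""
        self.memtable[block].add((file_id, offset))
        if len(self.memtable) >= self.flush_threshold:
            self.flush()

    def flush(self):
        # One sorted run per flush: in a real system this is a single
        # sequential write covering many buffered block operations.
        run = sorted((b, r) for b, refs in self.memtable.items() for r in refs)
        self.runs.append(run)
        self.memtable.clear()

    def owners_in_range(self, lo, hi):
        """Return {block: {refs}} for all blocks in [lo, hi]."""
        out = defaultdict(set)
        for run in self.runs:
            i = bisect.bisect_left(run, (lo,))   # first entry with block >= lo
            while i < len(run) and run[i][0] <= hi:
                block, ref = run[i]
                out[block].add(ref)
                i += 1
        for b, refs in self.memtable.items():    # include unflushed refs
            if lo <= b <= hi:
                out[b] |= refs
        return dict(out)
```

A real design would also need compaction of runs and tombstones for deleted references; the sketch shows only why range queries over adjacent blocks are cheap, since each run is scanned sequentially.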
Multifaceted Faculty Network Design and Management: Practice and Experience Report
We report on our experience with multidimensional aspects of our faculty's network design and management, including some unique aspects such as campus-wide VLANs and ghosting, security and monitoring, switching and routing, and others. We outline a historical perspective on certain research, design, and development decisions and discuss the network topology, its scalability, and its management in detail, along with the services our network provides and their evolution. We also cover the security aspects of management, data management and automation, and the use of the data by other members of the IT group in the faculty.
Comment: 19 pages, 11 figures, TOC and index; a short version presented at C3S2E'11; v6: more proofreading, index, TOC, reference
Co-operative authoring and collaboration over the World Wide Web : a thesis presented in partial fulfilment of the requirements for the degree of Master of Technology in Computer Systems Engineering at Massey University, Palmerston North, New Zealand
Co-operative authoring and collaboration over the World Wide Web looks at a future development of the Web. One of Berners-Lee's reasons for creating the Web in 1989 was collaboration and collaborative design. As the Web offers only limited collaboration at present, this thesis looks specifically at co-operative authoring (the actual creation and editing of web pages) and more generally at the collaboration surrounding this authoring. The goal of this thesis is to create an engine capable of supporting co-operative authoring and collaboration over the Web. In addition, it would be a major advantage if the engine were flexible enough to allow the future development of other access methods, especially web-related ones such as WebDAV, WAP, etc.
Improving block sharing in the Write Anywhere File Layout file system
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (p. 41).
It is often useful in modern file systems for several files to share one or more data blocks. Block sharing improves storage utilization by storing only one copy of a block shared by multiple files or volumes. This thesis proposes an approach, called Space Maker, which uses garbage collection techniques to reduce the up-front cost of file system operations, moving some of the more difficult block-tracking work, such as the work required to clean up after a file delete, to a back-end garbage collector. Space Maker was developed on top of the WAFL file system used in NetApp hardware. Space Maker is shown to have fast scan performance while decreasing the front-end time to delete files. Other operations, like file creates and writes, have performance similar to a baseline system. Under Space Maker, block sharing is simplified, making it possible for new file system features that rely on sharing to be implemented more quickly with good performance.
by Travis R. Grusecki. M.Eng
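The deferred-reclamation idea behind this approach can be illustrated with a toy model: deleting a file only drops its pointer, and a background sweep later frees blocks no longer referenced by any file. All names here (`ToyVolume`, `sweep`) are illustrative, not the WAFL or Space Maker implementation, and the model ignores on-disk layout entirely.

```python
# Toy model of garbage-collected block sharing: files are lists of block
# numbers, blocks may be shared between files, and freeing is deferred
# to a back-end sweep instead of being done per-block at delete time.

class ToyVolume:
    def __init__(self):
        self.files = {}          # file name -> list of block numbers
        self.allocated = set()   # blocks currently marked in use

    def write(self, name, blocks):
        self.files[name] = list(blocks)
        self.allocated |= set(blocks)

    def delete(self, name):
        # Front-end delete is cheap: drop the pointer, no per-block work,
        # even if some of the file's blocks are shared with other files.
        del self.files[name]

    def sweep(self):
        # Back-end garbage collector: scan all files, keep every block
        # still referenced anywhere, and free the rest.
        live = set()
        for blocks in self.files.values():
            live |= set(blocks)
        freed = self.allocated - live
        self.allocated = live
        return freed
```

The point of the sketch is the asymmetry: `delete` does constant work regardless of how widely the file's blocks are shared, while `sweep` handles the reference accounting in one sequential pass.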
Integrity static analysis of COTS/SOUP
This paper describes the integrity static analysis approach developed to support the justification of commercial off-the-shelf (COTS) software used in a safety-related system. The static analysis was part of an overall software qualification programme, which also included the work reported in our paper presented at Safecomp 2002. Integrity static analysis focuses on unsafe language constructs and "covert" flows, where one thread can affect the data or control flow of another thread. The analysis addressed two main aspects: the internal integrity of the code (especially for the more critical functions), and the intra-component integrity, checking for covert channels. The analysis process was supported by an aggregation of tools, combined and engineered to support the checks done and to scale as necessary. Integrity static analysis proved feasible for industrial-scale software and did not require unreasonable resources; we provide data illustrating its contribution to the software qualification programme.
Computer vision based two-wheel self-balancing Rover featuring Arduino and Raspberry Pi
A holistic control system for a two-wheel self-balancing robot, with several added functionalities such as remote terminal control and computer-vision-based algorithms.
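The core balancing loop of such a rover is typically a PID controller mapping tilt-angle error to a wheel motor command. The sketch below assumes that structure; the gains, the `balance_step` helper, and the sensor/motor interfaces are hypothetical, not taken from this project.

```python
# Minimal PID balance loop for a two-wheel robot: the controller turns
# tilt error (degrees from upright) into a motor command. Positive
# output drives the wheels toward the lean to catch the fall.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def balance_step(pid, tilt_deg, dt=0.01, setpoint=0.0):
    """One control tick: read tilt, return motor command."""
    return pid.update(setpoint - tilt_deg, dt)
```

In a real rover this loop runs at a fixed rate (e.g. on the Arduino), with the tilt angle fused from gyro and accelerometer readings and the Raspberry Pi handling the vision and remote-control layers on top.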
Data production models for the CDF experiment
The data production for the CDF experiment is conducted on a large Linux PC farm designed to meet the needs of data collection at a maximum rate of 40 MByte/sec. We present two data production models that exploit advances in computing and communication technology. The first production farm is a centralized system that has achieved a stable data processing rate of approximately 2 TByte per day. The recently upgraded farm has been migrated to the SAM (Sequential Access to data via Metadata) data handling system. The software and hardware of the CDF production farms have been successful in providing large computing and data throughput capacity to the experiment.
Comment: 8 pages, 9 figures; presented at HPC Asia2005, Beijing, China, Nov 30 - Dec 3, 200