38,619 research outputs found

    Multifaceted Faculty Network Design and Management: Practice and Experience Report

    We report on our experience with multiple aspects of our faculty's network design and management, including some unique aspects such as campus-wide VLANs and ghosting, security and monitoring, and switching and routing. We outline a historical perspective on certain research, design, and development decisions and discuss in detail the network topology, its scalability and management, the services our network provides, and its evolution. We also give an overview of the security aspects of management, of data management and automation, and of how other members of the faculty's IT group use the data. Comment: 19 pages, 11 figures, TOC and index; a short version presented at C3S2E'11; v6: more proofreading, index, TOC, references

    Co-operative authoring and collaboration over the World Wide Web: a thesis presented in partial fulfilment of the requirements for the degree of Master of Technology in Computer Systems Engineering at Massey University, Palmerston North, New Zealand

    Co-operative authoring and collaboration over the World Wide Web looks at a future development of the Web. One of the reasons Berners-Lee created the Web in 1989 was to support collaboration and collaborative design. As the Web currently offers only limited collaboration, this thesis looks specifically at co-operative authoring (the actual creation and editing of web pages) and more generally at the collaboration surrounding this authoring. The goal of this thesis is to create an engine capable of supporting co-operative authoring and collaboration over the Web. In addition, it would be a major advantage if the engine were flexible enough to allow the future development of other access methods, especially web-related ones such as WebDAV and WAP.

    Improving block sharing in the Write Anywhere File Layout file system

    Thesis (M. Eng.) -- Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (p. 41). It is often useful in modern file systems for several files to share one or more data blocks. Block sharing improves storage utilization by storing only one copy of a block shared by multiple files or volumes. This thesis proposes an approach, called Space Maker, which uses garbage collection techniques to reduce the up-front cost of file system operations, moving some of the more difficult block-tracking work, such as the clean-up required after a file delete, to a back-end garbage collector. Space Maker was developed on top of the WAFL file system used in NetApp hardware. It is shown to have fast scan performance while decreasing the front-end time to delete files; other operations, such as file creates and writes, perform comparably to a baseline system. Under Space Maker, block sharing is simplified, making it possible to implement new file system features that rely on sharing more quickly and with good performance. By Travis R. Grusecki. M.Eng.
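
    The core idea in this abstract, deferring per-block bookkeeping from the delete path to a background collector, can be illustrated with a small sketch. The sketch below is a toy model and an assumption on our part, not the WAFL or Space Maker implementation; names such as ToyFS and space_maker_scan are hypothetical. Deletes only drop the file entry, and a later scan frees any allocated block that no remaining file references, which is what keeps block sharing cheap to manage.

        # Toy sketch (hypothetical, not NetApp's code): cheap front-end deletes,
        # with reclamation of unreferenced blocks deferred to a back-end scan.

        class ToyFS:
            def __init__(self):
                self.files = {}          # file name -> list of block ids (blocks may be shared)
                self.allocated = set()   # block ids handed out and not yet reclaimed
                self.next_block = 0

            def write(self, name, nblocks, share_from=None):
                """Create a file; optionally share the blocks of an existing file."""
                if share_from is not None:
                    blocks = list(self.files[share_from])   # block sharing: no data copy
                else:
                    blocks = list(range(self.next_block, self.next_block + nblocks))
                    self.next_block += nblocks
                    self.allocated.update(blocks)
                self.files[name] = blocks

            def delete(self, name):
                """Front-end delete: constant time, no per-block bookkeeping."""
                del self.files[name]

            def space_maker_scan(self):
                """Back-end collector: free every allocated block no file references."""
                reachable = {b for blocks in self.files.values() for b in blocks}
                garbage = self.allocated - reachable
                self.allocated -= garbage
                return garbage

        fs = ToyFS()
        fs.write("base.img", 4)
        fs.write("clone.img", 0, share_from="base.img")  # shares all four blocks
        fs.delete("base.img")                            # cheap; shared blocks survive
        print(fs.space_maker_scan())                     # set(): clone still uses them
        fs.delete("clone.img")
        print(fs.space_maker_scan())                     # {0, 1, 2, 3} reclaimed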

    Computer vision based two-wheel self-balancing Rover featuring Arduino and Raspberry Pi

    A holistic control system for a two-wheeled self-balancing robot, with several added functionalities such as remote terminal control and computer vision based algorithms.
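
    As a rough illustration of the balancing part of such a system, the sketch below shows the kind of PID loop a two-wheeled rover typically runs: read the tilt angle, compute a correction, drive the wheels. It is an assumption on our part, not the project's code; read_tilt_deg and set_motor_speed are hypothetical stand-ins for the IMU and motor driver, replaced here by a tiny simulated pendulum so the script runs without hardware.

        # Minimal PID balance-loop sketch (assumed, not the project's actual code).

        class PID:
            def __init__(self, kp, ki, kd, setpoint=0.0):
                self.kp, self.ki, self.kd = kp, ki, kd
                self.setpoint = setpoint
                self.integral = 0.0
                self.prev_error = None

            def update(self, measurement, dt):
                error = self.setpoint - measurement
                self.integral += error * dt
                derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
                self.prev_error = error
                return self.kp * error + self.ki * self.integral + self.kd * derivative

        # Hypothetical stand-ins for the IMU and motor driver.
        tilt_deg = 5.0    # initial lean; a real rover would read this from an IMU
        tilt_rate = 0.0

        def read_tilt_deg():
            return tilt_deg

        def set_motor_speed(u, dt):
            """Crude inverted-pendulum update: gravity destabilizes, the motors correct."""
            global tilt_deg, tilt_rate
            tilt_rate += (9.81 * tilt_deg + u) * dt
            tilt_deg += tilt_rate * dt

        pid = PID(kp=40.0, ki=5.0, kd=8.0, setpoint=0.0)  # gains chosen for the toy model
        dt = 0.01                                          # 100 Hz control loop
        for step in range(200):                            # two simulated seconds
            correction = pid.update(read_tilt_deg(), dt)
            set_motor_speed(correction, dt)
            if step % 50 == 0:
                print(f"t={step * dt:4.2f}s  tilt={tilt_deg:6.2f} deg")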

    Data production models for the CDF experiment

    Data production for the CDF experiment is conducted on a large Linux PC farm designed to meet the needs of data collection at a maximum rate of 40 MByte/sec. We present two data production models that exploit advances in computing and communication technology. The first production farm is a centralized system that has achieved a stable data processing rate of approximately 2 TByte per day. The recently upgraded farm has been migrated to the SAM (Sequential Access to data via Metadata) data handling system. The software and hardware of the CDF production farms have been successful in providing large computing and data throughput capacity to the experiment. Comment: 8 pages, 9 figures; presented at HPC Asia 2005, Beijing, China, Nov 30 - Dec 3, 2005
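
    The farm model described above, a dispatcher feeding raw data files to many worker nodes in parallel, can be sketched generically. The snippet below is only an illustration under our own assumptions, not the CDF production software or the SAM system; process_file and the file list are hypothetical placeholders.

        # Generic farm-style production sketch (illustrative only, not CDF's framework):
        # a pool of worker processes consumes a queue of raw data files in parallel.

        from multiprocessing import Pool

        def process_file(raw_file):
            """Stand-in for reconstructing one raw data file on a farm node."""
            return raw_file["name"], raw_file["size_mb"]

        if __name__ == "__main__":
            # A toy batch of raw files queued for production.
            raw_files = [{"name": f"run042_{i:04d}.raw", "size_mb": 1024} for i in range(32)]

            with Pool(processes=8) as farm:          # eight "farm nodes"
                results = farm.map(process_file, raw_files)

            total_gb = sum(size for _, size in results) / 1024
            print(f"processed {len(results)} files, {total_gb:.1f} GB total")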