
    MSF-Model: Modeling Metastable Failures in Replicated Storage Systems

    Metastable failure is a recently proposed abstraction for a pattern of failures that occurs frequently in real-world distributed storage systems. In this paper, we propose a formal analysis and modeling of metastable failures in replicated storage systems. We focus on a foundational problem in distributed systems -- consensus -- so that our results apply to a large class of systems. Our main contribution is a queuing-based analytical model, MSF-Model, that can be used to characterize and predict metastable failures. MSF-Model integrates novel modeling concepts that make it possible to model metastable failures, which were intractable to model prior to our work. We also perform real experiments to reproduce metastable failures and validate the model. Comparing these experiments with the predictions of the queuing-based model shows that MSF-Model predicts metastable failures with high accuracy.
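The retry-driven feedback loop behind metastable failure can be illustrated with a toy discrete-time queue. This is an invented sketch, not the paper's MSF-Model, and all parameters are hypothetical:

```python
def simulate(capacity, base_load, spike_load, spike_len, retry_frac, steps):
    """Toy discrete-time queue: a temporary load spike plus retry
    amplification can leave the backlog growing after the spike ends."""
    queue = 0.0
    trace = []
    for t in range(steps):
        queue += spike_load if t < spike_len else base_load  # new arrivals
        served = min(queue, capacity)
        queue -= served
        # Backlog beyond one step of service capacity times out and is
        # retried, re-entering the queue as extra load: the feedback loop
        # that sustains the degraded (metastable) state.
        queue += retry_frac * max(0.0, queue - capacity)
        trace.append(queue)
    return trace
```

With `retry_frac=0` the backlog drains once the spike ends; with retries enabled, retried work alone can exceed capacity, so the queue keeps growing even after the triggering spike is gone.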

    Error-correction coding for high-density magnetic recording channels.

    Future high-density magnetic recording channels (MRCs) are subject to more noise contamination and intersymbol interference, which make error-correction codes (ECCs) increasingly important. Replacing current Reed-Solomon (RS)-coded ECC systems with low-density parity-check (LDPC)-coded systems has attracted considerable research attention because of the large decoding gain LDPC codes offer under random noise. This dissertation instead investigates systems that retain RS coding by using recently proposed soft-decision RS decoding techniques, and presents the resulting performance improvements. The soft-decision RS decoding algorithms and their performance on magnetic recording channels are studied, and implementation and hardware-architecture issues are discussed. Several novel variations of the KV algorithm are proposed, including a soft Chase algorithm, a re-encoded Chase algorithm, and a forward recursive algorithm. The performance of nested codes using RS and LDPC component codes is investigated for bursty-noise magnetic recording channels. Finally, a promising algorithm that combines RS and LDPC decoding is investigated, together with a reduced-complexity modification that not only improves decoding performance substantially but also performs well at high signal-to-noise ratio (SNR), the region where LDPC codes exhibit an error floor.
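LDPC decoding can be illustrated in its simplest hard-decision form, Gallager's bit-flipping algorithm: repeatedly flip the bit involved in the most violated parity checks. The small (7,4) parity-check matrix below is a toy chosen for illustration; it is not one of the dissertation's KV/Chase variants or its RS-LDPC nested codes:

```python
def bit_flip_decode(H, received, max_iters=10):
    """Gallager-style hard-decision bit-flipping decoding.
    H: parity-check matrix (list of 0/1 rows); received: 0/1 word."""
    word = list(received)
    m, n = len(H), len(H[0])
    for _ in range(max_iters):
        # Syndrome: which parity checks are currently violated.
        syndrome = [sum(H[i][j] * word[j] for j in range(n)) % 2
                    for i in range(m)]
        if not any(syndrome):
            return word  # all checks satisfied: a valid codeword
        # Count the unsatisfied checks each bit participates in.
        counts = [sum(H[i][j] * syndrome[i] for i in range(m))
                  for j in range(n)]
        word[counts.index(max(counts))] ^= 1  # flip the most suspect bit
    return word

# Toy (7,4) parity-check matrix, used here only for illustration.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
```

Real LDPC decoders use soft-decision message passing over much larger, sparser matrices; this sketch only shows the check/flip iteration that underlies them.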

    Toward the development and implementation of object-oriented extensions for discrete-event simulation in a strongly-typed procedural language

    The primary emphasis of this research is computer simulation. Computer simulations are used to model and analyze systems. To date, computer simulations have almost exclusively been written in procedural, strongly-typed languages such as FORTRAN or Pascal. Recent advancements in simulation research suggest that an object-oriented approach to simulation languages may provide key benefits. The goal of this research is to combine the advantages of a simulation language written in a procedural, strongly-typed language with the benefits of the object-oriented programming paradigm. This research presents a review of the methods of computer simulation. A significant portion is devoted to describing the development of the object-oriented simulation software in a strongly-typed, procedural language. The software developed in this research can simulate systems with multiple servers and queues. Arrival and service distributions may be selected from the uniform, exponential, and normal families of distributions. Resource usage is not supported in the simulation program.
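The object-oriented discrete-event approach described above can be sketched with two classes: an event-list simulator core and a single-server queue whose arrival and service events schedule each other. The class and parameter names are invented for illustration (the dissertation's software targets a strongly-typed procedural language, not Python):

```python
import heapq
import random

class Simulator:
    """Minimal event-driven simulation core: a clock plus an event list."""
    def __init__(self):
        self.now = 0.0
        self._events = []
        self._seq = 0  # tie-breaker so callables are never compared
    def schedule(self, delay, action):
        heapq.heappush(self._events, (self.now + delay, self._seq, action))
        self._seq += 1
    def run(self, until):
        while self._events and self._events[0][0] <= until:
            self.now, _, action = heapq.heappop(self._events)
            action()

class SingleServerQueue:
    """FIFO queue, one server, exponential interarrival/service times."""
    def __init__(self, sim, arrival_rate, service_rate):
        self.sim, self.lam, self.mu = sim, arrival_rate, service_rate
        self.waiting = []   # arrival times of customers not yet in service
        self.busy = False
        self.waits = []     # observed waiting times
        sim.schedule(random.expovariate(self.lam), self.arrive)
    def arrive(self):
        self.waiting.append(self.sim.now)
        self.sim.schedule(random.expovariate(self.lam), self.arrive)
        if not self.busy:
            self.start_service()
    def start_service(self):
        arrived = self.waiting.pop(0)
        self.waits.append(self.sim.now - arrived)
        self.busy = True
        self.sim.schedule(random.expovariate(self.mu), self.finish)
    def finish(self):
        self.busy = False
        if self.waiting:
            self.start_service()

random.seed(1)
sim = Simulator()
q = SingleServerQueue(sim, arrival_rate=0.8, service_rate=1.0)
sim.run(until=1000.0)
print(len(q.waits), sum(q.waits) / len(q.waits))
```

Each entity (simulator, queue) owns its state and behavior, which is the key benefit the abstract attributes to the object-oriented paradigm over a monolithic procedural event loop.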

    On-line mass storage system functional design document

    A functional system definition for an on-line high density magnetic tape data storage system is provided. This system can be implemented in a multi-purpose, multi-host environment and satisfy the requirements of economical data storage in the range of 2 to 50 billion bytes.

    Collaborative web whiteboard based on the SVG technology

    As innovation on web standards proceeds, web browsers have the option of becoming the generic user interface for any computer application, even those involving extensive user interaction. This thesis is placed in the domain of web graphics, where we tried a standards-based approach, relying on the SVG standard for web vector graphics. SVG is less widely supported than its proprietary counterpart, Adobe Flash, but we use a recent shim library to bring SVG support to all browsers capable of running Flash content; this way, the advantages of the two technologies can be combined. The library, called ``SVG Web'', is unobtrusive with respect to the application code, simply requiring a certain style of JavaScript handling of SVG elements; it sometimes also simplifies handling of the SVG tree and improves its portability even among native renderers. To investigate our approach, we developed a shared, web-based whiteboard that allows different users to interact across the network using standard web browsers and servers. The whiteboard is intended for quick sketches and is built with the requirement of being lightweight and portable on both the client and the server side. The resulting application provides a good level of user interaction while remaining fairly simple and maintainable in its code. An important outcome is the possibility of developing all client-side logic in the same language (JavaScript), with strong integration between the graphic environment and the rest of the client-side functions, including AJAX communications.
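The core of such a whiteboard is turning user strokes into SVG markup that any client can render. The thesis's implementation is client-side JavaScript; as a language-neutral sketch, the helpers below (invented names, not the thesis's code) serialize point lists into SVG `path` elements:

```python
def stroke_to_path(points):
    """Convert a list of (x, y) points into an SVG path 'd' attribute."""
    head = "M {} {}".format(*points[0])
    rest = " ".join("L {} {}".format(x, y) for x, y in points[1:])
    return (head + " " + rest).strip()

def strokes_to_svg(strokes, width=400, height=300):
    """Wrap a list of strokes in a minimal standalone SVG document."""
    paths = "\n".join(
        '  <path d="{}" fill="none" stroke="black"/>'.format(stroke_to_path(s))
        for s in strokes
    )
    return ('<svg xmlns="http://www.w3.org/2000/svg" '
            'width="{}" height="{}">\n{}\n</svg>'.format(width, height, paths))
```

Because SVG is plain XML text, stroke updates of this form are easy to ship between clients over AJAX and append to the shared document tree.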

    Many-Task Computing and Blue Waters

    This report discusses many-task computing (MTC) generically and in the context of the proposed Blue Waters system, which is planned to be the largest NSF-funded supercomputer when it begins production use in 2012. The aim of this report is to inform the BW project about MTC, including understanding aspects of MTC applications that can be used to characterize the domain and understanding the implications of these aspects for middleware and policies. Many MTC applications do not neatly fit the stereotypes of high-performance computing (HPC) or high-throughput computing (HTC) applications. Like HTC applications, by definition MTC applications are structured as graphs of discrete tasks, with explicit input and output dependencies forming the graph edges. However, MTC applications have significant features that distinguish them from typical HTC applications. In particular, different engineering constraints for hardware and software must be met in order to support these applications. HTC applications have traditionally run on platforms such as grids and clusters, through either workflow systems or parallel programming systems. MTC applications, in contrast, will often demand a short time to solution, may be communication intensive or data intensive, and may comprise very short tasks. Therefore, hardware and software for MTC must be engineered to support the additional communication and I/O and must minimize task dispatch overheads. The hardware of large-scale HPC systems, with its high degree of parallelism and support for intensive communication, is well suited for MTC applications. However, HPC systems often lack a dynamic resource-provisioning feature, are not ideal for task communication via the file system, and have an I/O system that is not optimized for MTC-style applications. Hence, additional software support is likely to be required to gain full benefit from the HPC hardware.
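The structure the abstract attributes to MTC applications, graphs of discrete tasks with explicit input/output dependencies as edges, can be sketched as a minimal dependency-aware executor (Kahn's topological ordering). The names are illustrative and are not Blue Waters middleware:

```python
from collections import deque

def run_task_graph(tasks, deps):
    """tasks: name -> callable; deps: name -> list of prerequisite names.
    Runs each task once all of its inputs have completed."""
    indegree = {t: len(deps.get(t, [])) for t in tasks}
    dependents = {t: [] for t in tasks}
    for task, prereqs in deps.items():
        for p in prereqs:
            dependents[p].append(task)
    ready = deque(t for t, d in indegree.items() if d == 0)
    order = []
    while ready:
        task = ready.popleft()
        tasks[task]()              # dispatch the task
        order.append(task)
        for d in dependents[task]: # release tasks whose inputs are now done
            indegree[d] -= 1
            if indegree[d] == 0:
                ready.append(d)
    if len(order) != len(tasks):
        raise ValueError("cycle in task graph")
    return order
```

In a real MTC system every iteration of the `ready` loop would dispatch tasks in parallel across many nodes, which is exactly where the dispatch-overhead and I/O concerns discussed above arise.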