
    Operation Graph Oriented Correlation of ASIC Chip Internal Information for Hardware Debug

    This thesis presents a novel approach to operation-centric tracing for hardware debug, based on retrospective analysis of traces that are distributed across a computer system. These traces record entries about operations at runtime, and a software tool correlates the entries after a problem has occurred. The tool is based on a generic method that uses identifiers saved from operations. Because identifiers change along an operation's path through the system and different traces record different information, the entries are transformed to find matching entries in other traces. After correlation, the method reconstructs the operation paths with the help of an operation graph, which describes the subtasks of each type of operation and their sequence. With these paths the designer gets a better overview of the chip or system activity and can isolate the cause of a problem faster. The TRACE MATCHER implements the described method and is evaluated with an example bridge chip, assessing the benefit for hardware debug, the correctness of the reconstructed paths, the performance of the implementation, and the configuration effort. Finally, guidelines for trace and system design describe how matching can be improved by carefully designed operation identifiers.
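    As a minimal sketch of the identifier-based correlation idea (the trace names, entry format, and transform table below are illustrative assumptions, not details taken from the thesis), entries from different traces can be mapped into a common key space and grouped:

        # Hypothetical illustration of identifier-based trace correlation.
        # Each trace entry carries an operation identifier; a per-trace
        # transform maps it into a common key space so that entries from
        # different traces can be matched and an operation path rebuilt.
        from collections import defaultdict

        traces = {
            "host_trace":   [{"id": "req-42", "event": "issue"}],
            "bridge_trace": [{"id": "0x2A",   "event": "forward"}],
        }

        # Assumed per-trace transforms that normalize the identifier format.
        transforms = {
            "host_trace":   lambda e: e["id"].split("-")[1],
            "bridge_trace": lambda e: str(int(e["id"], 16)),
        }

        matched = defaultdict(list)
        for name, entries in traces.items():
            for entry in entries:
                key = transforms[name](entry)      # common key space
                matched[key].append((name, entry["event"]))

        # Entries sharing a key are candidates for the same operation path.
        print(dict(matched))  # {'42': [('host_trace', 'issue'), ('bridge_trace', 'forward')]}

    In the thesis, matching is followed by path reconstruction against the operation graph; this sketch covers only the correlation step.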

    High Availability and Scalability of Mainframe Environments using System z and z/OS as example

    Mainframe computers are the backbone of industrial and commercial computing, hosting the most relevant and critical data of businesses. One of the most important mainframe environments is IBM System z with the operating system z/OS. This book introduces the mainframe technology of System z and z/OS with respect to high availability and scalability, and highlights how these properties are realized at different levels of the hardware and software stack to satisfy the needs of large IT organizations.

    Water borne transport in very deep borehole disposal of high level nuclear waste

    Thesis (S.B.)--Massachusetts Institute of Technology, Dept. of Nuclear Science and Engineering, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 52). The purpose of this report is to examine the feasibility of the very deep borehole concept and to determine whether it is a reasonable method of storing high level nuclear waste for an extended period of time. The objective of this thesis is to determine the escape mechanisms of radionuclides and whether naturally occurring salinity gradients could counteract them. Because of the strong dependence on water density, the relationship between water density and salinity was measured; it agreed with literature values to within 1%. The resulting density varies linearly with molality, with a slope that depends on the number of ions of the dissolved salt (e.g. CaCl₂ dissociates into 3 ions and NaCl into 2). From the data, it was calculated that within a borehole with a host rock permeability of 10⁻⁵ Darcy it would take approximately 10⁵ years for the radionuclides to escape. As the rock permeability decreases, the escape time scale increases and the escape fraction decreases exponentially. Due to the conservative nature of the calculations, the actual escape timescale would be closer to 10⁶ years and dominated by I-129 in a reducing atmosphere. The expected borehole salinity values can offset the buoyancy effect due to a 50°C temperature increase. by Dion Tunick Cabeche. S.B.
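    One way to write down the linear density-salinity relationship described above (the symbols and the single fitted slope $k$ are an illustrative parameterization, not a form quoted from the thesis):

        \rho(b) \;\approx\; \rho_{w} + k \,\nu\, b

    where $\rho_{w}$ is the density of pure water, $b$ the molality of the dissolved salt, $\nu$ the number of ions it dissociates into ($\nu = 3$ for CaCl₂, $\nu = 2$ for NaCl), and $k$ an empirically fitted constant.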

    Hard and Soft Error Resilience for One-sided Dense Linear Algebra Algorithms

    Dense matrix factorizations, such as LU, Cholesky and QR, are widely used by scientific applications that require solving systems of linear equations, eigenvalue problems and linear least squares problems. Such computations are normally carried out on supercomputers, whose ever-growing scale induces a fast decline of the Mean Time To Failure (MTTF). This dissertation develops fault tolerance algorithms for one-sided dense matrix factorizations that handle both hard and soft errors. For hard errors, we propose methods based on diskless checkpointing and Algorithm Based Fault Tolerance (ABFT) to provide full matrix protection, including the left and right factors that arise in dense matrix factorizations. A horizontal parallel diskless checkpointing scheme is devised to maintain the checkpoint data with scalable performance and low space overhead, while the ABFT checksum that is generated before the factorization is continually updated by the factorization operations to protect the right factor. In addition, in the absence of a fault-tolerant MPI environment, we have also integrated the Checkpoint-on-Failure (CoF) mechanism into one-sided dense linear operations such as QR factorization to recover the running stack of the failed MPI process. Soft errors are more challenging because of silent data corruption, which can lead to a large region of erroneous data due to error propagation. Full matrix protection is developed in which the left factor is protected by column-wise local diskless checkpointing, and the right factor by a combination of a floating point weighted checksum scheme and a soft error modeling technique. To allow practical use on large-scale systems, we have also developed a complexity reduction scheme such that correct computing results can be recovered with low performance overhead. Experimental results on a large-scale cluster system and a multicore+GPGPU hybrid system confirm that our hard and soft error fault tolerance algorithms exhibit the expected error correcting capability, low space and performance overhead, and compatibility with double precision floating point operations.
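    A minimal sketch of the column-checksum idea behind ABFT protection of the right factor (the NumPy/SciPy code below is an illustrative single-node analogue, not the dissertation's distributed implementation):

        # Hypothetical ABFT-style column checksum for LU factorization.
        import numpy as np
        from scipy.linalg import lu

        rng = np.random.default_rng(0)
        A = rng.standard_normal((4, 4))

        # Append a checksum column A @ 1 before factorizing; the row
        # operations of Gaussian elimination preserve the invariant
        # that the checksum column equals the row sums of U.
        A_c = np.hstack([A, A.sum(axis=1, keepdims=True)])

        P, L, U_c = lu(A_c)                  # LU with partial pivoting
        U, u_chk = U_c[:, :-1], U_c[:, -1]

        # A mismatch here would indicate corruption in the right factor.
        assert np.allclose(u_chk, U.sum(axis=1))

    A full ABFT scheme additionally encodes recovery information (e.g. the weighted checksums mentioned above) so that a detected error can be located and corrected.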

    A SEASAT report. Volume 1: Program summary

    The program background and experiment objectives are summarized, and a description of the organization and interfaces of the project is provided. The mission plan and history are included, as well as user activities and a brief description of the data system. A financial and manpower summary and preliminary results of the mission are also included.