973 research outputs found

    Experimental analysis of computer system dependability

    This paper reviews an area which has evolved over the past 15 years: experimental analysis of computer system dependability. Methodologies and advances are discussed for three basic approaches used in the area: simulated fault injection, physical fault injection, and measurement-based analysis. The three approaches are suited, respectively, to dependability evaluation in the three phases of a system's life: the design phase, the prototype phase, and the operational phase. Before the discussion of these phases, several statistical techniques used in the area are introduced. For each phase, a classification of research methods or study topics is outlined, followed by a discussion of these methods or topics as well as representative studies. The statistical techniques introduced include the estimation of parameters and confidence intervals, probability distribution characterization, and several multivariate analysis methods. Importance sampling, a statistical technique used to accelerate Monte Carlo simulation, is also introduced. The discussion of simulated fault injection covers electrical-level, logic-level, and function-level fault injection methods as well as representative simulation environments such as FOCUS and DEPEND. The discussion of physical fault injection covers hardware, software, and radiation fault injection methods as well as several software and hybrid tools, including FIAT, FERRARI, HYBRID, and FINE. The discussion of measurement-based analysis covers measurement and data processing techniques, basic error characterization, dependency analysis, Markov reward modeling, software dependability, and fault diagnosis. The discussion involves several important issues studied in the area, including fault models, fast simulation techniques, workload/failure dependency, correlated failures, and software fault tolerance.
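
    The survey names importance sampling as the technique for accelerating Monte Carlo evaluation of rare failure events. As a purely illustrative companion, the Python sketch below estimates a small failure probability for an exponential lifetime model; the rates, mission time, and biasing distribution are assumptions made for this example, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative rare event (assumed model): a component with exponential
# lifetime at rate LAM fails before a short mission time T_MISSION.
# The exact probability is 1 - exp(-LAM * T_MISSION), small when
# T_MISSION << 1 / LAM.
LAM = 1e-4          # failure rate per hour (assumed)
T_MISSION = 10.0    # mission time in hours (assumed)
N = 100_000

# Plain Monte Carlo: very few samples ever see a failure, so the estimate is noisy.
lifetimes = rng.exponential(1.0 / LAM, size=N)
p_mc = np.mean(lifetimes < T_MISSION)

# Importance sampling: draw lifetimes from a biased, much higher failure rate,
# then correct each sample with the likelihood ratio f(x) / g(x).
LAM_BIAS = 1.0 / T_MISSION            # biased rate that makes failures common
x = rng.exponential(1.0 / LAM_BIAS, size=N)
ratio = (LAM * np.exp(-LAM * x)) / (LAM_BIAS * np.exp(-LAM_BIAS * x))
p_is = np.mean((x < T_MISSION) * ratio)

print(f"exact      : {1 - np.exp(-LAM * T_MISSION):.3e}")
print(f"plain MC   : {p_mc:.3e}")
print(f"importance : {p_is:.3e}")
```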

    High Availability and Scalability of Mainframe Environments using System z and z/OS as example

    Mainframe computers are the backbone of industrial and commercial computing, hosting the most relevant and critical data of businesses. One of the most important mainframe environments is IBM System z with the operating system z/OS. This book introduces the mainframe technology of System z and z/OS with respect to high availability and scalability. It highlights their presence at different levels of the hardware and software stack to satisfy the needs of large IT organizations.

    Computer Center Bulletin / 1991-04-02


    Software fault tolerance in computer operating systems

    This chapter provides data and analysis of the dependability and fault tolerance of three operating systems: the Tandem/GUARDIAN fault-tolerant system, the VAX/VMS distributed system, and the IBM/MVS system. Based on measurements from these systems, basic software error characteristics are investigated. Fault tolerance in operating systems resulting from the use of process pairs and recovery routines is evaluated. Two levels of models are developed to analyze error and recovery processes inside an operating system and interactions among multiple instances of an operating system running in a distributed environment. The measurements show that the use of process pairs in Tandem systems, which was originally intended for tolerating hardware faults, allows the system to tolerate about 70% of defects in system software that result in processor failures. The loose coupling between processors, which results in the backup execution (the processor state and the sequence of events occurring) differing from the original execution, is a major reason for the measured software fault tolerance. The IBM/MVS system's fault tolerance almost doubles when recovery routines are provided, in comparison to the case in which no recovery routines are available. However, even when recovery routines are provided, there is almost a 50% chance of system failure when critical system jobs are involved.
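
    The chapter reports measured coverages (about 70% for process pairs, roughly a doubling of fault tolerance with MVS recovery routines) rather than the model details. As an illustrative companion only, the sketch below builds a three-state continuous-time Markov model of a process pair with a coverage parameter and solves for steady-state availability; the state structure and rates are assumptions, not the chapter's actual models.

```python
import numpy as np

# Illustrative parameters (assumptions, not measurements from the chapter):
lam = 1e-3   # per-processor software failure rate (per hour)
mu  = 1.0    # recovery/repair rate (per hour)
c   = 0.7    # coverage: fraction of failures the backup of a process pair masks

# States: 0 = pair healthy, 1 = running on the backup only, 2 = system failure.
# Q is the CTMC generator matrix; each row sums to zero.
Q = np.array([
    [-2 * lam,  2 * lam * c,  2 * lam * (1 - c)],
    [ mu,      -(mu + lam),   lam              ],
    [ mu,       0.0,         -mu               ],
])

# Steady-state probabilities satisfy pi @ Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

availability = pi[0] + pi[1]
print(f"steady-state availability          : {availability:.6f}")
print(f"probability of the failure state   : {pi[2]:.2e}")
```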

    Computer Center Bulletin / February 26, 1988

    This publication is published as required and is written by members of the staff of the W. R. Church Computer Center.

    Computer Center Bulletin / July 5, 1990

    This publication is published as required and is written by members of the staff of the W. R. Church Computer Center.

    Database machines in support of very large databases

    Software database management systems were developed in response to the needs of early data processing applications. Database machine research developed as a result of certain performance deficiencies of these software systems. This thesis discusses the history of database machines designed to improve the performance of database processing and focuses primarily on the Teradata DBC/1012, the only successfully marketed database machine that supports very large databases today. Also reviewed is the response of IBM to the performance needs of its database customers; this response has been in terms of improvements in both software and hardware support for database processing. In conclusion, an analysis is made of the future of database machines, in particular the DBC/1012, in light of recent IBM enhancements and its immense customer base.

    Execution Batch Monitor for Processing Student Jobs

    This study deals with the investigation and implementation of a method of reducing operating system overhead for the short-running student jobs at Oklahoma State University. The method chosen is that of an execution batch monitor, which eliminates much of the job overhead in processing these student jobs. Sound operating system principles and techniques are studied and incorporated into the monitor, as it assumes some of the operating system functions for the jobs which it processes.
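
    The abstract describes the monitor's benefit only qualitatively. The toy Python model below, with entirely invented costs, shows the arithmetic the thesis relies on: a single monitor job absorbs the operating-system job-setup overhead once and dispatches many short student jobs itself.

```python
from collections import deque

# Hypothetical per-job costs in seconds; real overheads of the era would differ.
JOB_SETUP = 0.50      # scheduler/allocation overhead paid per normally submitted job
MONITOR_SETUP = 0.50  # the monitor pays this once for the whole batch
DISPATCH = 0.01       # cheap in-monitor dispatch per student job

def run_conventionally(jobs):
    """Each student job is submitted as a full job to the operating system."""
    return sum(JOB_SETUP + cpu for cpu in jobs)

def run_under_monitor(jobs):
    """One monitor job absorbs setup once and dispatches the queued jobs itself."""
    queue = deque(jobs)
    total = MONITOR_SETUP
    while queue:
        total += DISPATCH + queue.popleft()
    return total

student_jobs = [0.05] * 200   # 200 short-running student jobs, 0.05 s of CPU each
print(f"conventional : {run_conventionally(student_jobs):7.2f} s")
print(f"batch monitor: {run_under_monitor(student_jobs):7.2f} s")
```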

    Integrating legacy mainframe systems: architectural issues and solutions

    For more than 30 years, mainframe computers have been the backbone of computing systems throughout the world. Even today it is estimated that some 80% of the world's data is held on such machines. However, new business requirements and pressure from evolving technologies such as the Internet are pushing these existing systems to their limits, and they are reaching breaking point. The banking and financial sectors in particular have relied on mainframes the longest to do their business, and as a result they feel these pressures the most. In recent years there have been various solutions for enabling a re-engineering of these legacy systems. It quickly became clear that completely rewriting them was not possible, so various integration strategies emerged. Out of these, the CORBA standard by the Object Management Group emerged as the strongest, providing a standards-based solution that enabled mainframe applications to become peers in a distributed computing environment. However, the requirements did not stop there. The mainframe systems were reliable, secure, scalable and fast, so any integration strategy had to ensure that the new distributed systems did not lose any of these benefits. Various patterns, or general solutions to the problem of meeting these requirements, have arisen, and this research looks at applying some of these patterns to mainframe-based CORBA applications. The purpose of this research is to examine some of the issues involved in making mainframe-based legacy applications interoperate with newer object-oriented technologies.
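
    The thesis discusses integration patterns at the architectural level; the sketch below is a minimal, middleware-agnostic illustration of the wrapper (adapter) idea, in which an object-style facade hides a record-oriented legacy transaction. The transaction id, field layout, and gateway class are hypothetical stand-ins; in a CORBA-based integration the interface would be declared in IDL and the call would travel over IIOP to a server region on the mainframe.

```python
from dataclasses import dataclass

@dataclass
class Balance:
    account: str
    amount_cents: int

class LegacyTransactionGateway:
    """Stand-in for the transport layer (e.g. an ORB stub or a TCP bridge)."""
    def call(self, tran_id: str, payload: bytes) -> bytes:
        # Canned reply for illustration: 12-character account, 12-digit amount.
        return payload[:12] + b"000000012345"

class AccountService:
    """Object-style facade wrapping a record-oriented legacy interface."""
    def __init__(self, gateway: LegacyTransactionGateway):
        self._gateway = gateway

    def get_balance(self, account: str) -> Balance:
        request = account.ljust(12).encode("ascii")   # fixed-width request record
        reply = self._gateway.call("BALQ", request)   # hypothetical transaction id
        return Balance(account=reply[:12].decode().strip(),
                       amount_cents=int(reply[12:24].decode()))

service = AccountService(LegacyTransactionGateway())
print(service.get_balance("12345678"))
```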

    Evaluation of the results of orthodontic treatment by non-rigid image registration and deformation-based morphometry

    The goal of this research was to find out whether the non-rigid registration of dental casts can be used in the evaluation of orthodontic treatment, and to develop a program which would at least partially automate the evaluation of the images. A further aim was to experiment with the evaluation of three-dimensional models of the casts. The research was limited to malocclusions within the dental arch; the relationships between the dental arches were not considered. This thesis was done at the University of Vaasa, Department of Electrical Engineering and Energy Technology, as part of the HammasSkanneri research project, whose aim is to automate the digitization and archiving of dental casts. The research used two-dimensional images of dental casts taken from orthodontically treated patients before and after orthodontic treatment. Non-rigid registration was performed using a registration tool of the Fiji software. The accuracy of the registration was evaluated by measuring distances between manually inserted landmarks and by comparing the linear and angular parameters of the registered images with those of the original target images. The displacements of the teeth were approximated with the help of deformation-based morphometry. The accuracy of registration is within reasonable error limits if the image is taken straight from above the cast and the registration is performed with the help of landmarks inserted by a human. Estimation of the changes showed that the movement of the teeth can be coarsely measured using deformation-based morphometry, based on change estimates that resemble Jacobian estimates. A set of programs that partially automate the evaluation of the accuracy and of the changes was developed. Three-dimensional imaging of the casts was unsuccessful, and thus the development of a 3D evaluation system was left as a future research topic.
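
    The abstract mentions two concrete measures: landmark distances for registration accuracy and Jacobian-like change estimates for tooth movement. The sketch below illustrates both on synthetic data; the displacement field and landmark coordinates are invented, whereas in the thesis they would come from Fiji's non-rigid registration of the cast images.

```python
import numpy as np

# Illustrative 2-D displacement field on a regular pixel grid (assumed data).
H, W = 64, 64
y, x = np.mgrid[0:H, 0:W].astype(float)
ux = 0.02 * (x - W / 2)   # synthetic outward expansion in x
uy = 0.01 * (y - H / 2)   # milder expansion in y

# Deformation-based morphometry style change estimate: the Jacobian determinant
# of the mapping phi(x, y) = (x + ux, y + uy).  Values > 1 indicate local
# expansion of the dental arch image, values < 1 local contraction.
dux_dy, dux_dx = np.gradient(ux)
duy_dy, duy_dx = np.gradient(uy)
jac_det = (1 + dux_dx) * (1 + duy_dy) - dux_dy * duy_dx
print(f"mean local area change: {jac_det.mean():.3f}")

# Registration accuracy in the landmark sense: RMS distance between
# corresponding landmarks in the registered and target images
# (coordinates below are made up for illustration).
registered = np.array([[10.2, 31.0], [40.5, 12.3], [55.1, 48.9]])
target     = np.array([[10.0, 30.5], [41.0, 12.0], [55.0, 49.5]])
rms = np.sqrt(np.mean(np.sum((registered - target) ** 2, axis=1)))
print(f"landmark RMS error: {rms:.2f} px")
```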