
    Intermittent/transient fault phenomena in digital systems

    An overview of intermittent/transient (IT) fault studies is presented. An interval survivability evaluation of digital systems subject to IT faults is discussed, along with a method for detecting and diagnosing IT faults in digital systems.

    Multilevel distributed diagnosis and the design of a distributed network fault detection system based on the SNMP protocol.

    In this thesis, we propose a new distributed diagnosis algorithm using the multilevel paradigm. This algorithm is a generalization of both the ADSD and Hi-ADSD algorithms. We present all details of the design and implementation of this multilevel adaptive distributed diagnosis algorithm, called the ML-ADSD algorithm, together with extensive simulation results comparing the performance of the three algorithms.

    In 1967, Preparata, Metze and Chien proposed a model and a framework for diagnosing faulty processors in a multiprocessor system. To exploit the inherent parallelism available in a multiprocessor system and thereby improve fault tolerance, Kuhl and Reddy, in 1980, pioneered a new area of research known as distributed system-level diagnosis. Following this pioneering work, in 1991, Bianchini and Buskens proposed an adaptive distributed algorithm, called the ADSD algorithm, to diagnose fully connected networks; it has a diagnosis latency of O(N) testing rounds for a network with N nodes. With a view to improving on this latency, in 1998 Duarte and Nanya proposed a hierarchical distributed diagnosis algorithm for fully connected networks. This algorithm, called the Hi-ADSD algorithm, has a diagnosis latency of O(log² N) testing rounds and can be viewed as a generalization of the ADSD algorithm.

    In all cases, the time required by the ML-ADSD algorithm is better than or the same as that of the Hi-ADSD algorithm. The performance of the ML-ADSD algorithm can be improved by an appropriate choice of the number of clusters and the number of levels. The ML-ADSD algorithm is also scalable, in the sense that only minor modifications are required to adapt it to networks of varying sizes; this property is not shared by the Hi-ADSD algorithm. The primary application of our research is to develop and implement a prototype network fault detection/monitoring system by integrating the ML-ADSD algorithm into an SNMP-based (Simple Network Management Protocol) fault management system. We report the details of the design and implementation of such a distributed network fault detection system.
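    As a rough illustration of the testing mechanism this abstract describes, the following minimal Python sketch (hypothetical names and structure, not the thesis implementation) simulates one round of ADSD-style adaptive testing on a fully connected network: each fault-free node tests successive nodes in circular order until it finds a fault-free one, which is why full diagnosis needs at most O(N) testing rounds.

```python
# Minimal sketch of one ADSD-style testing round on a fully connected
# network of N nodes. Names and structure are illustrative only.

N = 8
FAULTY = {2, 5}  # ground truth, unknown to the algorithm itself

def test(tested):
    """A test by a fault-free node correctly classifies the tested node."""
    return tested in FAULTY  # True means the tested node is found faulty

def testing_round():
    """Each fault-free node tests successors in circular order until it
    finds the first fault-free node; that node becomes its test target."""
    targets = {}
    for node in range(N):
        if node in FAULTY:
            continue  # faulty nodes produce no reliable test results
        candidate = (node + 1) % N
        while test(candidate):
            candidate = (candidate + 1) % N
        targets[node] = candidate
    return targets

print(testing_round())
# Diagnostic information propagates backwards along these test edges,
# so at most N rounds are needed for every fault-free node to learn
# the state of the whole network: ADSD's O(N) diagnosis latency.
```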

    Design of a fault tolerant airborne digital computer. Volume 1: Architecture

    This volume is concerned with the architecture of a fault tolerant digital computer for an advanced commercial aircraft. All of the computations of the aircraft, including those presently carried out by analogue techniques, are to be carried out in this digital computer. Among the important qualities of the computer are the following: (1) the capacity is to be matched to the aircraft environment; (2) the reliability is to be selectively matched to the criticality and deadline requirements of each of the computations; (3) the system is to be readily expandable and contractible; and (4) the design is to be appropriate to post-1975 technology. Three candidate architectures are discussed and assessed in terms of the above qualities. Of the three candidates, a newly conceived architecture, Software Implemented Fault Tolerance (SIFT), provides the best match to these qualities. In addition, SIFT is particularly simple and believable. The other candidates, the Bus Checker System (BUCS), also newly conceived in this project, and the Hopkins multiprocessor, are potentially more efficient than SIFT in the use of redundancy, but are otherwise not as attractive.
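    SIFT's defining idea, replicating each computation on several processors and voting on the results in software rather than in dedicated checking hardware, can be sketched in a few lines. The sketch below is a loose Python illustration of that voting step (names hypothetical), not the actual SIFT design:

```python
from collections import Counter

def sift_vote(replicas, task_input):
    """Run the same task on several possibly faulty replicas and return
    the majority result, masking a minority of faults entirely in
    software. This voting step is the heart of the SIFT concept."""
    results = [replica(task_input) for replica in replicas]
    value, count = Counter(results).most_common(1)[0]
    if count <= len(results) // 2:
        raise RuntimeError("no majority: too many faulty replicas")
    return value

# One faulty replica out of three is outvoted.
ok = lambda x: x * x
bad = lambda x: x * x + 1  # models a processor computing incorrectly
print(sift_vote([ok, ok, bad], 7))  # prints 49
```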

    Data fusion for system modeling, performance assessment and improvement

    Due to rapid advances in sensing and computation technology, multiple types of sensors have been embedded in various applications, automatically collecting massive amounts of production information on-line. Although this data-rich environment provides great opportunities for more effective process control, it also raises new research challenges in data analysis and decision making due to complex data structures, such as heterogeneous data dependency and large-volume, high-dimensional characteristics. This thesis contributes to the area of System Informatics and Control (SIAC) by developing systematic data fusion methodologies for effective quality control and performance improvement in complex systems. These methodologies enable (1) better handling of the rich data environment communicated by complex engineering systems, (2) closer monitoring of the system status, and (3) more accurate forecasting of future trends and behaviors. The research bridges gaps in methodology among advanced statistics, engineering domain knowledge and operations research, and forms close links to application areas such as manufacturing, health care, energy and service systems.

    The thesis begins by investigating optimal sensor system design and multi-sensor data fusion analysis for process monitoring and diagnosis in different applications. In Chapter 2, we first study the couplings, or interactions, between the optimal design of a sensor system in a Bayesian network and the quality management of a manufacturing system, which can improve cost-effectiveness and production yield by considering sensor cost, process change detection speed, and fault diagnosis accuracy in an integrated manner. An algorithm named "Best Allocation Subsets by Intelligent Search" (BASIS), with an optimality proof, is developed to obtain the optimal sensor allocation design at minimum cost under different user-specified detection requirements. Chapter 3 extends this line of research by proposing a novel adaptive sensor allocation framework, which greatly improves the monitoring and diagnosis capabilities of the previous method. A max-min criterion is developed to manage sensor reallocation and process change detection in an integrated manner. The methodology is tested and validated on a hot forming process and a cap alignment process.

    In Chapter 4, we propose a Scalable-Robust-Efficient Adaptive (SERA) sensor allocation strategy for online high-dimensional process monitoring in a general network. A monitoring scheme using the sum of the top-r local detection statistics is developed, which is scalable, effective and robust in detecting a wide range of possible shifts in all directions. This research provides a generic guideline for practitioners on determining (1) the appropriate sensor layout; (2) the "ON" and "OFF" states of different sensors; and (3) which part of the acquired data should be transmitted to and analyzed at the fusion center when only limited resources are available.
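    The Chapter 4 scheme, monitoring the sum of the r largest local detection statistics, is easy to state concretely. A minimal sketch, assuming chi-square local statistics and a sparse mean shift (both purely illustrative; the control limit is omitted):

```python
import numpy as np

def top_r_statistic(local_stats, r):
    """Global monitoring statistic: the sum of the r largest local
    detection statistics. Only r values matter, which keeps the scheme
    scalable, and a shift in any small subset of sensors inflates it."""
    return np.sort(local_stats)[-r:].sum()

rng = np.random.default_rng(0)
stats = rng.chisquare(df=1, size=200)  # in-control local statistics
stats[[3, 57, 101]] += 9.0             # sparse mean shift in 3 sensors
print(top_r_statistic(stats, r=5))     # compared against a control limit
```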
    To improve the accuracy of remaining lifetime prediction, Chapter 5 proposes a data-level fusion methodology for degradation modeling and prognostics. When multiple sensors are available to measure the degradation of the same system, determining which sensors to use, and how to combine them for better data analysis, becomes a high-dimensional and challenging problem. To address this issue, we first define two essential properties that, if present in a degradation signal, enhance its effectiveness for prognostics. We then propose a generic data-level fusion algorithm that constructs a composite health index achieving these two properties. The methodology is tested on the degradation signals of an aircraft gas turbine engine and yields much better prognostic results than relying solely on the data from any individual sensor.

    In summary, this thesis draws attention to data fusion as a means of effectively employing the underlying data-gathering capabilities of a system for modeling, performance assessment and improvement. Fundamental data fusion methodologies are developed and applied to various applications, facilitating resource planning, real-time monitoring, diagnosis and prognostics.
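    Returning to Chapter 5's composite health index: the data-level fusion idea, combining raw multi-sensor signals into a single index with better degradation properties than any individual signal, can be sketched as below. The weights here come from a simple least-squares fit to a target trend, which is only one plausible stand-in for the thesis's fusion criterion; all names are hypothetical.

```python
import numpy as np

def composite_health_index(signals, target):
    """Fuse multi-sensor degradation signals (time x sensors) into one
    health index by weighting sensors so the fused signal tracks a
    desired degradation trend. Illustrative criterion: least squares."""
    weights, *_ = np.linalg.lstsq(signals, target, rcond=None)
    return signals @ weights

t = np.linspace(0.0, 1.0, 100)  # normalized operating time
rng = np.random.default_rng(1)
sensors = np.column_stack([t + 0.1 * rng.standard_normal(100),
                           t**2 + 0.1 * rng.standard_normal(100)])
index = composite_health_index(sensors, target=t)
print(index[:5])  # fused index approximates the monotone trend t
```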

    MRI software measurement of osteophyte volume in knee osteoarthritis: a longitudinal validation study

    Osteoarthritis (OA) currently affects 41 million Americans, and knee OA (KOA) alone carries the highest risk of mobility disability of any medical condition in people 65 years and older. There are no current treatments to reverse the degenerative changes of KOA, and research is aimed at finding biomarkers of KOA progression to aid the development of effective therapies. Osteophytes are a hallmark feature of KOA and may act as a biomarker of joint space loss and pain progression. MR imaging, an accurate and non-invasive method to monitor KOA disease status, may help clarify the role of osteophytes in KOA, especially using semi-automated quantitative software methods to calculate osteophyte volume accurately and efficiently in longitudinal studies. This study investigated the association of osteophyte volume change with joint space narrowing and pain progression in a randomized sample of 505 subjects from the FNIH OA Biomarker Consortium Project, a case-control study based on a larger longitudinal study of patients with KOA. We also aimed to further validate a software method for measuring osteophyte volume in MRI. We found a moderate and significant association between osteophyte volume change and joint space narrowing, but no significant association with pain progression. The software was further validated as a responsive and efficient method to measure KOA osteophyte volume change.

    Fault-tolerant software: dependability/performance trade-offs, concurrency and system support

    As the use of computer systems becomes more and more widespread in applications that demand high levels of dependability, the applications themselves are growing in complexity at a rapid rate, especially in areas that require concurrent and distributed computing. Such complex systems are very prone to faults and errors. No matter how rigorously fault avoidance and fault removal techniques are applied, software design faults often remain in systems when they are delivered to customers; indeed, residual software faults are becoming a significant underlying cause of system failures and lack of dependability. There is a tremendous need for systematic techniques for building dependable software, including fault tolerance techniques that ensure software-based systems operate dependably even when potential faults are present. However, although there has been a large amount of research in the area of fault-tolerant software, existing techniques are not yet sufficiently mature as a practical engineering discipline for realistic applications; in particular, they are often inadequate when applied to highly concurrent and distributed software. This thesis develops new techniques for building fault-tolerant software, addresses the problem of achieving high levels of dependability in concurrent and distributed object systems, and studies system-level support for implementing dependable software. Two schemes are developed: the t/(n-1)-VP approach is aimed at increasing software reliability and controlling additional complexity, while the SCOP approach presents an adaptive way of dynamically adjusting software reliability and efficiency. As a more general framework for constructing dependable concurrent and distributed software, the Coordinated Atomic (CA) action scheme is examined thoroughly: key properties of CA actions are formalized, a conceptual model and mechanisms for handling application-level exceptions are devised, and object-based diversity techniques are introduced to cope with potential software faults. These three schemes are evaluated analytically and validated by controlled experiments. System-level support is also addressed with a multi-level system architecture, and an architectural pattern for implementing fault-tolerant objects is documented in detail to capture existing solutions and our previous experience. An industrial safety-critical application, the Fault-Tolerant Production Cell, is used as a case study to examine most of the concepts and techniques developed in this research.
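    One concrete piece of the CA action model mentioned above is coordinated exception handling: when several participants of an action raise different exceptions concurrently, they must agree on a single covering exception before recovery begins. A minimal, hypothetical sketch of such exception resolution (not the thesis's formal model):

```python
def resolve(raised, parent, universal="ActionFailure"):
    """Resolve a set of concurrently raised exceptions inside a CA
    action to one covering exception. `parent` maps an exception name
    to its more general parent; unrelated exceptions fall back to the
    universal exception. Purely illustrative."""
    def ancestors(e):
        chain = [e]
        while e in parent:
            e = parent[e]
            chain.append(e)
        return chain

    common = set(ancestors(raised[0]))
    for e in raised[1:]:
        common &= set(ancestors(e))
    if not common:
        return universal
    # The deepest common ancestor is the most specific covering exception.
    return max(common, key=lambda e: len(ancestors(e)))

parent = {"SensorTimeout": "DeviceError", "ActuatorJam": "DeviceError"}
print(resolve(["SensorTimeout", "ActuatorJam"], parent))  # DeviceError
print(resolve(["SensorTimeout", "DiskFull"], parent))     # ActionFailure
```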

    Distributed state verification in the smart grid using physical attestation

    A cyber process in a distributed system can fabricate its internal state in its communications with its peers. These state fabrications can cause other processes in the distributed system to make incorrect control decisions. Cyber-physical systems have a unique advantage in the detection of falsified states because processes typically have observable effects on a shared physical infrastructure. This physical infrastructure acts as a high-integrity message channel that broadcasts changes in individual process states. The objective of this research is to demonstrate that there are cases where physical feedback from the shared infrastructure can be used to detect state fabrications. To that end, this work introduces a distributed security mechanism called physical attestation that detects state fabrications in the future smart grid. Graph theory is used to prove that physical attestation works in general smart grid topologies, and the theory is supported with experimental results obtained from a smart grid test bed.
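    The role of the shared infrastructure as a high-integrity channel can be illustrated with a toy consistency check: claimed power injections must balance, within tolerance, against what is physically measured on the shared feeder, so a large discrepancy exposes some fabricated state. A hypothetical sketch, not the paper's attestation protocol:

```python
def attest(claimed_injections, measured_net_flow, tol=0.05):
    """Flag state fabrication when claimed power injections are
    physically inconsistent with the measured net flow on the shared
    feeder. Toy model: lossless power balance within a tolerance."""
    imbalance = abs(sum(claimed_injections.values()) - measured_net_flow)
    return imbalance <= tol  # False => some claimed state is false

claims = {"der1": 1.20, "der2": -0.40, "der3": 0.55}  # kW, hypothetical
print(attest(claims, measured_net_flow=1.35))  # True: consistent
print(attest(claims, measured_net_flow=0.90))  # False: fabrication suspected
```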

    Taxonomic surrogacy in biodiversity assessments, and the meaning of Linnaean ranks

    The majority of biodiversity assessments use species as the base unit. Recently, a series of studies have suggested replacing numbers of species with numbers of higher-ranked taxa (genera, families, etc.), a method known as taxonomic surrogacy that has important potential to save time and resources in assessments of biological diversity. We examine the relationships between taxa and ranks, and suggest that species/higher-taxon exchanges are founded on misconceptions about the properties of Linnaean classification. Rank allocations in current classifications constitute a heterogeneous mixture of various historical and contemporary views. Even if all taxa were monophyletic, those referred to the same rank would simply denote separate clades without further equivalence. We conclude that they are no more comparable than any other non-nested taxa, such as, for example, the genus Rattus and the phylum Arthropoda, and that taxonomic surrogacy lacks justification. These problems are also illustrated with data on polychaetous annelid worms from a broad-scale study of benthic biodiversity and species distributions in the Irish Sea. A recent consensus phylogeny for polychaetes is used to provide three different family-level classifications of polychaetes. We use families as a surrogate for species, and present Shannon–Wiener diversity indices for the different sites under the three different classifications, showing how the diversity measures depend on subjective rank allocations.
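    The dependence of diversity measures on rank allocation is easy to demonstrate with the Shannon–Wiener index, H' = -Σ p_i ln p_i, computed over the same specimens grouped under two alternative family-level classifications. A small sketch with made-up counts and assignments:

```python
from collections import Counter
from math import log

def shannon_wiener(counts):
    """H' = -sum(p_i * ln p_i) over taxon abundances."""
    n = sum(counts)
    return -sum((c / n) * log(c / n) for c in counts if c)

# Same specimens, two alternative family-level classifications
# (made-up names and assignments for illustration).
species_counts = {"sp_a": 30, "sp_b": 12, "sp_c": 5, "sp_d": 3}
classification_1 = {"sp_a": "FamX", "sp_b": "FamX", "sp_c": "FamY", "sp_d": "FamY"}
classification_2 = {"sp_a": "FamX", "sp_b": "FamY", "sp_c": "FamY", "sp_d": "FamZ"}

for cls in (classification_1, classification_2):
    families = Counter()
    for sp, n in species_counts.items():
        families[cls[sp]] += n
    print(shannon_wiener(families.values()))  # differs between classifications
```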