
    Cooperative Monitoring to Diagnose Multiagent Plans

    Diagnosing the execution of a Multiagent Plan (MAP) means identifying and explaining action failures (i.e., actions that did not reach their expected effects). Current approaches to MAP diagnosis are substantially centralized and assume that action failures are independent of each other. In this paper, the diagnosis of MAPs executed in a dynamic and partially observable environment is addressed in a fully distributed and asynchronous way; in addition, action failures are no longer assumed to be independent of each other. The paper presents a novel methodology, named Cooperative Weak-Committed Monitoring (CWCM), enabling agents to cooperate while monitoring their own actions. Cooperation helps the agents cope with very scarcely observable environments: what an agent cannot observe directly can be acquired from other agents. CWCM exploits nondeterministic action models to carry out two main tasks: detecting action failures and building trajectory-sets (i.e., structures representing the knowledge an agent has about the environment in the recent past). Relying on trajectory-sets, each agent is able to explain its own action failures in terms of exogenous events that occurred during the execution of the actions themselves. To cope with dependent failures, CWCM is coupled with a diagnostic engine that distinguishes between primary and secondary action failures. An experimental analysis demonstrates that the CWCM methodology, together with the proposed diagnostic inferences, is effective in identifying and explaining action failures even in scenarios where system observability is significantly reduced.
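    As a rough illustration of the primary/secondary distinction described above (not taken from the paper; all names, types, and the dependency model are hypothetical), the sketch below marks an action as failed when none of the candidate final states in its trajectory-set entails its expected effects, and then labels each failure secondary if an upstream action it depends on has already been diagnosed as failed, and primary (attributed to exogenous events) otherwise:

```python
from dataclasses import dataclass, field

@dataclass
class ActionRecord:
    """Hypothetical record of one executed action (illustrative only)."""
    agent: str
    name: str
    expected_effects: set                           # fluents the action should achieve
    depends_on: list = field(default_factory=list)  # actions providing its preconditions

def action_failed(action, trajectory_set):
    """Fail when no candidate final state in the trajectory-set
    contains all of the action's expected effects."""
    return all(not action.expected_effects <= state for state in trajectory_set)

def classify_failures(actions, trajectory_sets, exogenous_events):
    """Split failures into primary (explained by exogenous events) and
    secondary (caused by a failed upstream action).
    `actions` is assumed to be in execution order."""
    diagnosis = {}
    for act in actions:
        if not action_failed(act, trajectory_sets[act.name]):
            continue
        failed_deps = [d for d in act.depends_on if d in diagnosis]
        if failed_deps:
            diagnosis[act.name] = ("secondary", failed_deps)
        else:
            diagnosis[act.name] = ("primary", exogenous_events.get(act.name, []))
    return diagnosis
```

    In CWCM itself the trajectory-sets are built cooperatively from nondeterministic action models and shared observations; here they are simply given as sets of candidate final states.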

    A Test Vector Minimization Algorithm Based On Delta Debugging For Post-Silicon Validation Of Pcie Rootport

    In silicon hardware design, such as the design of PCIe devices, design verification is an essential part of the design process, whereby the devices are subjected to a series of tests that verify their functionality. However, manual debugging is still widely used in post-silicon validation and is a major bottleneck in the validation process, because a large number of test vectors must be analyzed, which slows the process down. To solve this problem, a test vector minimizer algorithm is proposed to eliminate redundant test vectors that do not contribute to the reproduction of a test failure, hence improving debug throughput. The proposed methodology is inspired by the Delta Debugging algorithm, which has been used in automated software debugging but not in post-silicon hardware debugging. The minimizer operates on the principle of binary partitioning of the test vectors, iteratively testing each subset (or the complement of a subset) on a post-silicon System-Under-Test (SUT) to identify and eliminate redundant test vectors. Test results using test vector sets containing deliberately introduced erroneous test vectors show that the minimizer is able to isolate the erroneous vectors. In test cases containing up to 10,000 test vectors, the minimizer requires about 16 ns per test vector when only one erroneous test vector is present. In a test case with 1,000 vectors including erroneous vectors, the same minimizer requires about 140 μs per injected erroneous test vector. Thus, the minimizer's CPU consumption is significantly smaller than the typical running time of a test on the SUT. The factors that most significantly impact the performance of the algorithm are the number of erroneous test vectors and their distribution (spacing); the effects of the total number of test vectors and of the position of the erroneous vectors are relatively minor. The minimization algorithm is therefore most effective when only a few erroneous test vectors are present in a large test vector set.
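    A minimal sketch of the ddmin-style partition-and-test loop the abstract describes (the `fails` predicate stands in for replaying a candidate subset on the SUT and checking whether the failure reproduces; it and the function names are illustrative, not the thesis's implementation):

```python
def minimize(test_vectors, fails):
    """Delta-debugging-style test vector minimizer (sketch).
    `fails(subset)` must return True if the failure still reproduces
    when only `subset` is replayed on the SUT."""
    n = 2                               # current partition granularity
    vectors = list(test_vectors)
    while len(vectors) >= 2:
        chunk = max(1, len(vectors) // n)
        subsets = [vectors[i:i + chunk] for i in range(0, len(vectors), chunk)]
        reduced = False
        for i, subset in enumerate(subsets):
            if fails(subset):           # failure reproduces on the subset alone
                vectors, n, reduced = subset, 2, True
                break
            complement = [v for j, s in enumerate(subsets) if j != i for v in s]
            if len(subsets) > 2 and fails(complement):
                vectors, n, reduced = complement, max(n - 1, 2), True
                break
        if not reduced:
            if chunk == 1:              # granularity cannot be refined further
                break
            n = min(len(vectors), n * 2)
    return vectors

# Toy usage: vector 4242 is the lone "erroneous" vector; the minimizer
# isolates it without replaying the full 10,000-vector set each time.
minimal = minimize(range(10_000), lambda vs: 4242 in vs)
assert minimal == [4242]
```

    On real hardware the dominant cost is each `fails` call (a replay on the SUT), which is why reducing the number of vectors per replay, rather than the minimizer's own CPU time, drives the debug throughput gain.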

    A model-based reasoning architecture for system-level fault diagnosis

    This dissertation presents a model-based reasoning architecture with a twofold purpose: to detect and classify component faults from observable system behavior, and to generate fault propagation models so as to make a more accurate estimation of current operational risks. It incorporates a novel approach to system-level diagnostics by addressing the need to reason about low-level, inaccessible components from observable high-level system behavior. In the field of complex system maintenance it can be invaluable as an aid to human operators. The first step is the compilation of a database of functional descriptions and associated fault-specific features for each of the system components. The system is then analyzed to extract structural information, which, in addition to the functional database, is used to create the structural and functional models. A fault-symptom matrix is constructed from the functional model and the features database. The fault threshold levels for these symptoms are founded on nominal baseline data. Based on the fault-symptom matrix and these thresholds, a diagnostic decision tree is formulated in order to intelligently query the system's health. For each fault candidate, a fault propagation tree is generated from the structural model. Finally, the overall system health status report includes both the faulty components and the associated at-risk components, as predicted by the fault propagation model.
    Ph.D. Committee Chair: Vachtsevanos, George; Committee Member: Liang, Steven; Committee Member: Michaels, Thomas; Committee Member: Vela, Patricio; Committee Member: Wardi, Yora
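    To make the pipeline concrete, here is a minimal sketch (not the dissertation's implementation; the matrix entries, thresholds, and component graph are invented placeholders) of the three core steps: thresholding measurements against nominal baselines to get symptoms, matching symptoms against a fault-symptom matrix to rank fault candidates, and walking a structural model to list the at-risk downstream components:

```python
from collections import deque

# Hypothetical fault-symptom matrix: fault -> symptoms it is expected to raise.
FAULT_SYMPTOMS = {
    "pump_bearing_wear": {"vibration_high", "temp_high"},
    "valve_stuck":       {"flow_low"},
}

# Hypothetical structural model: component -> downstream components.
STRUCTURE = {"valve": ["pump"], "pump": ["heat_exchanger"], "heat_exchanger": []}

def observed_symptoms(measurements, thresholds):
    """Flag a symptom when its measurement exceeds the nominal baseline threshold."""
    return {s for s, value in measurements.items() if value > thresholds[s]}

def candidate_faults(symptoms):
    """Rank faults by the fraction of their expected symptoms actually observed."""
    scores = {f: len(expected & symptoms) / len(expected)
              for f, expected in FAULT_SYMPTOMS.items()}
    return sorted((f for f, s in scores.items() if s > 0),
                  key=scores.get, reverse=True)

def at_risk(component):
    """Breadth-first walk of the structural model: everything downstream
    of a faulty component is reported as at risk."""
    seen, queue = set(), deque([component])
    while queue:
        for nxt in STRUCTURE.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen
```

    In the architecture itself the candidate ranking is driven by a diagnostic decision tree rather than a flat score, and the propagation step produces a per-candidate fault propagation tree; the sketch only shows the shape of the data flow.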

    Formal verification: further complexity issues and applications

    Prof. Giacomo Cioffi (Università di Roma "La Sapienza"), Prof. Fabio Panzieri (Università di Bologna), Dr. Carla Limongelli (Università di Roma Tre)