
    Finding Isolated Cliques by Queries -- An Approach to Fault Diagnosis with Many Faults

    A well-studied problem in fault diagnosis is to identify the set of all good processors in a given set $\{p_1,p_2,\ldots,p_n\}$ of processors by asking some processors $p_i$ to test whether processor $p_j$ is good or faulty. Mathematically, the set $C$ of the indices of good processors forms an isolated clique in the graph with the edges $E = \{(i,j):$ if you ask $p_i$ to test $p_j$ then $p_i$ states that ``$p_j$ is good''$\}$; here $C$ is an isolated clique iff it holds for every $i \in C$ and $j \neq i$ that $(i,j) \in E$ iff $j \in C$. In the present work, the classical setting of fault diagnosis is modified by no longer requiring that $C$ contains at least $\frac{n+1}{2}$ of the $n$ nodes of the graph. Instead, one is given a lower bound $a$ on the size of $C$ and the number $n$ of nodes, and one has to find a list of up to $n/a$ candidates containing all isolated cliques of size $a$ or more, where the number of queries whether a given edge is in $E$ is as small as possible. It is shown that the number of queries necessary differs by at most $n$ between the cases of directed and undirected graphs. Furthermore, for directed graphs the lower bound $n^2/(2a-2)-3n$ and the upper bound $2n^2/a$ are established. For some constant values of $a$, better bounds are given. In the case of parallel queries, the number of rounds is at least $n/(a-1)-6$ and at most $O(\log(a)\,n/a)$.
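    The isolated-clique condition above translates directly into a membership check; a minimal sketch, assuming nodes are numbered 1..n and the directed edge set is given explicitly (the function name and representation are illustrative):

```python
def is_isolated_clique(C, E, n):
    """Check whether index set C is an isolated clique in a directed
    graph on nodes 1..n with edge set E (a set of (i, j) pairs):
    for every i in C and every j != i, (i, j) is in E iff j is in C."""
    for i in C:
        for j in range(1, n + 1):
            if j == i:
                continue
            if ((i, j) in E) != (j in C):
                return False
    return True

# Nodes 1 and 2 vouch for each other and have no edges leaving {1, 2};
# the edge (3, 1) from outside into C does not affect C's isolation.
E = {(1, 2), (2, 1), (3, 1)}
print(is_isolated_clique({1, 2}, E, 3))     # True
print(is_isolated_clique({1, 2, 3}, E, 3))  # False: (1, 3) missing
```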

    Dagstuhl Reports : Volume 1, Issue 2, February 2011

    Online Privacy: Towards Informational Self-Determination on the Internet (Dagstuhl Perspectives Workshop 11061): Simone Fischer-Hübner, Chris Hoofnagle, Kai Rannenberg, Michael Waidner, Ioannis Krontiris and Michael Marhöfer; Self-Repairing Programs (Dagstuhl Seminar 11062): Mauro Pezzé, Martin C. Rinard, Westley Weimer and Andreas Zeller; Theory and Applications of Graph Searching Problems (Dagstuhl Seminar 11071): Fedor V. Fomin, Pierre Fraigniaud, Stephan Kreutzer and Dimitrios M. Thilikos; Combinatorial and Algorithmic Aspects of Sequence Processing (Dagstuhl Seminar 11081): Maxime Crochemore, Lila Kari, Mehryar Mohri and Dirk Nowotka; Packing and Scheduling Algorithms for Information and Communication Services (Dagstuhl Seminar 11091): Klaus Jansen, Claire Mathieu, Hadas Shachnai and Neal E. Youn

    Protection Challenges of Distributed Energy Resources Integration In Power Systems

    For a century, electrical power systems have been the main source of energy for societies and industries. Most parts of these infrastructures were built long ago. Plenty of high-rating, high-voltage equipment designed and manufactured in the mid-20th century is currently operating in the United States' power network. These assets are capable of doing what they are doing now. However, an issue arises with the recent trend, i.e. DER integration, which causes fundamental changes in electrical power systems and violates traditional network design bases in various ways. Recently, there has been a steep rise in demand for Distributed Energy Resources (DERs) integration. There are various incentives driving demand for such integration and for the employment of distributed and renewable energy resources. However, it violates the most fundamental assumption in traditional power system designs: that power flows from the generation (upstream) toward the load locations (downstream). Currently operating power systems are designed based on this assumption, and consequently so are their equipment ratings, operational details, protection schemes, and protection settings. Violating these designs and operational settings reduces power reliability and increases outages, which is the opposite of the DER integration goals. DER integration and its consequences occur at both the transmission and distribution levels. The effects of DER integration on both of these networks are discussed in this dissertation. The transmission-level issues are explained briefly and with a more analytical approach, while the distribution network challenges are presented in detail using both field data and simulation results. It is worth mentioning that DER integration is aligned with the goal of moving toward a smart grid. This can be considered the most fundamental network reconfiguration that power systems have ever experienced, and it requires various preparations.
    Both long-term and short-term solutions are proposed for the challenges described, and corresponding results are provided to illustrate the effectiveness of the proposed solutions. The author believes that developing and considering short-term solutions can make the transition period toward reaching the smart grid possible. Meanwhile, long-term approaches should also be planned for the final smart grid development and operation details.

    Selection in the Presence of Memory Faults, with Applications to In-place Resilient Sorting

    The selection problem, where one wishes to locate the $k^{th}$ smallest element in an unsorted array of size $n$, is one of the basic problems studied in computer science. The main focus of this work is designing algorithms for solving the selection problem in the presence of memory faults. These can happen as the result of cosmic rays, alpha particles, or hardware failures. Specifically, the computational model assumed here is a faulty variant of the RAM model (abbreviated as FRAM), which was introduced by Finocchi and Italiano. In this model, the content of memory cells might get corrupted adversarially during the execution, and the algorithm is given an upper bound $\delta$ on the number of corruptions that may occur. The main contribution of this work is a deterministic resilient selection algorithm with optimal $O(n)$ worst-case running time. Interestingly, the running time does not depend on the number of faults, and the algorithm does not need to know $\delta$. The aforementioned resilient selection algorithm can be used to improve the complexity bounds for resilient $k$-d trees developed by Gieseke, Moruz and Vahrenhold. Specifically, the time complexity for constructing a $k$-d tree is improved from $O(n\log^2 n + \delta^2)$ to $O(n \log n)$. Besides the deterministic algorithm, a randomized resilient selection algorithm is developed, which is simpler than the deterministic one, and has $O(n + \alpha)$ expected time complexity and $O(1)$ space complexity (i.e., is in-place). This algorithm is used to develop the first resilient sorting algorithm that is in-place and achieves optimal $O(n\log n + \alpha\delta)$ expected running time.
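    For reference, the classical randomized selection algorithm that such resilient variants harden looks as follows; this is a textbook quickselect sketch, not the paper's algorithm, and unlike the paper's in-place variant it uses O(n) extra space:

```python
import random

def quickselect(a, k):
    """Return the k-th smallest element (1-indexed) of sequence a in
    expected O(n) time; classical, non-resilient quickselect."""
    a = list(a)
    while True:
        pivot = random.choice(a)
        less = [x for x in a if x < pivot]
        n_equal = sum(1 for x in a if x == pivot)
        if k <= len(less):
            a = less                          # answer lies left of pivot
        elif k <= len(less) + n_equal:
            return pivot                      # pivot is the k-th smallest
        else:
            k -= len(less) + n_equal          # answer lies right of pivot
            a = [x for x in a if x > pivot]

print(quickselect([7, 2, 9, 4, 1], 3))  # 4
```

    In the FRAM model, the pivot or partition counters could themselves be corrupted mid-run, which is exactly what the paper's resilient algorithms must guard against.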

    GA-Based fault diagnosis algorithms for distributed systems

    Distributed systems are becoming very popular day by day due to their applications in various fields such as electronic automotives, remote environment control like underwater sensor networks, and K-connected networks. Faults may affect the nodes of the system at any time, so diagnosing the faulty nodes in a distributed system is an utmost necessity to make the system more reliable and efficient. This thesis describes the different types of faults and the system and fault models that already exist in the literature. As evolutionary approaches give better outcomes than probabilistic approaches, we have developed Genetic-Algorithm-based fault diagnosis algorithms which provide better results than other fault diagnosis algorithms. The GA-based fault diagnosis algorithms work upon different types of faults, permanent as well as intermittent, in a K-connected system. Simulation results demonstrate that the proposed Genetic Algorithm Based Permanent Fault Diagnosis Algorithm (GAPFDA) and Genetic Algorithm Based Intermittent Fault Diagnosis Algorithm (GAIFDA) decrease the number of messages transferred and the time needed to diagnose the faulty nodes in a K-connected distributed system. The decreases in CPU time and number of steps are due to the application of supervised mutation in the fault diagnosis algorithms. The time complexity and message complexity of GAPFDA are analyzed as O(n*P*K*ng) and O(n*K) respectively. The time complexity and message complexity of GAIFDA are O(r*n*P*K*ng) and O(r*n*K) respectively, where ’n’ is the number of nodes, ’P’ is the population size, ’K’ is the connectivity of the network, ’ng’ is the number of generations (steps), and ’r’ is the number of rounds.
    Along with the design of a fault diagnosis algorithm of O(r*k) complexity for diagnosing transient-leading-to-permanent faults in the actuators of a k-fault-tolerant Fly-by-Wire (FBW) system, an efficient scheduling algorithm has been developed to schedule different tasks of an FBW system; here ’r’ denotes the number of rounds. The proposed algorithm for scheduling the task graphs of a multi-rate FBW system demonstrates that maximizing the microcontroller’s execution period reduces the number of microcontrollers needed for performing diagnosis
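    The abstract does not give the encoding, fitness function, or supervised-mutation operator, so the following is only a generic GA-for-diagnosis sketch under assumed conventions: a candidate fault set is a bitstring (1 = faulty), scored by how many pairwise test outcomes it explains when fault-free testers are assumed to report correctly, with ties broken toward fewer faults:

```python
import random

def fitness(cand, tests):
    """Count pairwise test outcomes explained by candidate fault set cand
    (cand[i] == 1 means node i is assumed faulty); tests maps a pair
    (i, j) to the outcome tester i reports about node j
    (0 = "j good", 1 = "j faulty")."""
    score = 0
    for (i, j), outcome in tests.items():
        if cand[i] == 1:             # a faulty tester may report anything
            score += 1
        elif outcome == cand[j]:     # a good tester must report the truth
            score += 1
    return score

def ga_diagnose(n, tests, pop_size=20, generations=50, pm=0.1):
    """Evolve candidate fault sets: keep the best half each generation and
    refill with one-point crossover plus bit-flip mutation."""
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        # rank by explained outcomes, breaking ties toward fewer faults
        pop.sort(key=lambda c: (-fitness(c, tests), sum(c)))
        survivors = pop[:pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = random.sample(survivors, 2)
            cut = random.randrange(1, n) if n > 1 else 0
            child = p1[:cut] + p2[cut:]
            children.append([b ^ (random.random() < pm) for b in child])
        pop = survivors + children
    return min(pop, key=lambda c: (-fitness(c, tests), sum(c)))

# Toy syndrome: good nodes 0 and 1 vouch for each other and accuse node 2.
tests = {(0, 1): 0, (1, 0): 0, (0, 2): 1, (1, 2): 1, (2, 0): 0}
print(ga_diagnose(3, tests))  # typically [0, 0, 1]: node 2 diagnosed faulty
```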

    Advanced flight control system study

    A fly by wire flight control system architecture designed for high reliability includes spare sensor and computer elements to permit safe dispatch with failed elements, thereby reducing unscheduled maintenance. A methodology capable of demonstrating that the architecture does achieve the predicted performance characteristics consists of a hierarchy of activities ranging from analytical calculations of system reliability and formal methods of software verification to iron bird testing followed by flight evaluation. Interfacing this architecture to the Lockheed S-3A aircraft for flight test is discussed. This testbed vehicle can be expanded to support flight experiments in advanced aerodynamics, electromechanical actuators, secondary power systems, flight management, new displays, and air traffic control concepts

    Improving Fault Localization for Simulink Models using Search-Based Testing and Prediction Models

    One promising way to improve the accuracy of fault localization based on statistical debugging is to increase diversity among test cases in the underlying test suite. In many practical situations, adding test cases is not a cost-free option because test oracles are developed manually or running test cases is expensive. Hence, we require test suites that are both diverse and small to improve debugging. In this paper, we focus on improving fault localization of Simulink models by generating test cases. We identify three test objectives that aim to increase test suite diversity. We use these objectives in a search-based algorithm to generate diversified but small test suites. To further minimize test suite sizes, we develop a prediction model to stop test generation when adding test cases is unlikely to improve fault localization. We evaluate our approach on three industrial subjects. Our results show that (1) the three selected test objectives are able to significantly improve the accuracy of fault localization for small test suite sizes, and (2) our prediction model is able to maintain almost the same fault localization accuracy while reducing the average number of newly generated test cases by more than half.
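    Statistical debugging ranks program elements by a suspiciousness score computed from pass/fail coverage; the abstract does not name the metric used, so the Ochiai formula below is purely illustrative, as are the block names:

```python
import math

def ochiai(failed_cov, passed_cov, total_failed):
    """Suspiciousness per element: ef / sqrt(total_failed * (ef + ep)),
    where ef (ep) is the number of failing (passing) tests covering it."""
    scores = {}
    for elem in set(failed_cov) | set(passed_cov):
        ef = failed_cov.get(elem, 0)
        ep = passed_cov.get(elem, 0)
        scores[elem] = 0.0 if ef == 0 else ef / math.sqrt(total_failed * (ef + ep))
    return scores

# Hypothetical coverage counts: block "b2" is covered by both failing runs
# and by no passing run, so it should rank as most suspicious.
failed = {"b1": 1, "b2": 2}   # counts over 2 failing tests
passed = {"b1": 3, "b3": 3}   # counts over 3 passing tests
s = ochiai(failed, passed, total_failed=2)
print(max(s, key=s.get))  # b2
```

    Diversifying the test suite changes these ef/ep counts, which is why the choice of generated test cases directly affects localization accuracy.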

    PARALLEL EXECUTION TRACING: AN ALTERNATIVE SOLUTION TO EXPLOIT UNDER-UTILIZED RESOURCES IN MULTI-CORE ARCHITECTURES FOR CONTROL-FLOW CHECKING

    In this paper, a software behavior-based technique is presented to detect control-flow errors in multi-core architectures. The analysis of a key point leads to the proposed technique: employing under-utilized CPU resources in multi-core processors to check the execution flow of the programs concurrently and in parallel with the main executions. To evaluate the proposed technique, a quad-core processor system was used as the simulation environment, and the behavior of the SPEC CPU2006 benchmarks was studied as the target for comparison with conventional techniques. The experimental results, with regard to both detection coverage and performance overhead, demonstrate that on average about 94% of control-flow errors can be detected by the proposed technique, more efficiently than by conventional techniques. This article has been retracted. Link to the retraction: http://casopisi.junis.ni.ac.rs/index.php/FUElectEnerg/article/view/337
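    The core idea of using a spare core for checking can be sketched as a checker thread that validates the stream of executed basic-block IDs against the static control-flow graph; the toy CFG and all names below are illustrative, not the paper's implementation:

```python
import queue
import threading

# Static control-flow graph: which blocks may legally follow which.
CFG = {"entry": {"loop"}, "loop": {"loop", "exit"}, "exit": set()}

def checker(trace_q, errors):
    """Consume executed block IDs and record illegal CFG transitions;
    in the paper's setting this would run on an under-utilized core."""
    prev = "entry"
    while True:
        block = trace_q.get()
        if block is None:                   # end-of-execution sentinel
            return
        if block not in CFG.get(prev, set()):
            errors.append((prev, block))    # control-flow error detected
        prev = block

trace_q = queue.Queue()
errors = []
t = threading.Thread(target=checker, args=(trace_q, errors))
t.start()
# The main execution emits its block trace; "exit" -> "entry" is illegal.
for block in ["loop", "loop", "exit", "entry"]:
    trace_q.put(block)
trace_q.put(None)
t.join()
print(errors)  # [('exit', 'entry')]
```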

    Effective Fault Localization of Automotive Simulink Models: Achieving the Trade-Off between Test Oracle Effort and Fault Localization Accuracy

    One promising way to improve the accuracy of fault localization based on statistical debugging is to increase diversity among test cases in the underlying test suite. In many practical situations, adding test cases is not a cost-free option because test oracles are developed manually or running test cases is expensive. Hence, we require test suites that are both diverse and small to improve debugging. In this paper, we focus on improving fault localization of Simulink models by generating test cases. We identify four test objectives that aim to increase test suite diversity. We use these objectives in a search-based algorithm to generate diversified but small test suites. To further minimize test suite sizes, we develop a prediction model to stop test generation when adding test cases is unlikely to improve fault localization. We evaluate our approach on three industrial subjects. Our results show that (1) expanding the test suites used for fault localization with any of our four test objectives, even when the expansion is small, can significantly improve the accuracy of fault localization, (2) varying the test objectives used to generate the initial test suites for fault localization does not have a significant impact on the fault localization results obtained from those test suites, and (3) we identify an optimal configuration for prediction models to help stop test generation when it is unlikely to be beneficial. We further show that our optimal prediction model is able to maintain almost the same fault localization accuracy while reducing the average number of newly generated test cases by more than half.