3,639 research outputs found
Implementing a flexible failure detector that expresses the confidence in the system
Traditional unreliable failure detectors are per-process oracles that provide a list of nodes suspected of having failed. Previously, we introduced the Impact failure detector, which outputs a trust level value: the degree of confidence in the system. An impact factor is assigned to each node, and the trust level is equal to the sum of the impact factors of the nodes not suspected to have failed. An input threshold parameter defines an impact factor limit value over which the confidence degree in the system is ensured. The impact factor indicates the relative importance of the process in the set S, while the threshold offers a degree of flexibility for failures and false suspicions. We propose in this article two different algorithms, based on query-response message rounds, that implement the Impact FD and whose designs were tailored to satisfy the Impact FD's flexibility. The first exploits the time-free message pattern approach, while the second considers a set of bounded timely responses. We also introduce the concept that a process can be PS-accessible (or ♦PS-accessible), which guarantees that the system S will always (or eventually always) be trusted by this process, as well as two properties, PR(IT) and PR(♦IT), that characterize the minimum stability condition of S that ensures confidence (or eventual confidence) in it. In both implementations, if the process that monitors S is PS-accessible or ♦PS-accessible, at every query round it only waits (or eventually only waits) for a set of responses that satisfies the threshold. A crucial facet of this set of processes is that it is not fixed, i.e., the set of processes can change at each round, in accordance with the flexibility of the Impact FD.
Impact FD: An Unreliable Failure Detector Based on Process Relevance and Confidence in the System
This paper presents a new unreliable failure detector, called the Impact failure detector (FD), that, unlike the majority of traditional FDs, outputs a trust level value which expresses the degree of confidence in the system. An impact factor is assigned to each process, and the trust level is equal to the sum of the impact factors of the processes not suspected of failure. Moreover, a threshold parameter defines a lower bound for the trust level, above which the confidence in the system is ensured. In particular, we define a flexibility property that denotes the capacity of the Impact FD to tolerate a certain margin of failures or false suspicions, i.e., its capacity to consider different sets of responses that lead the system to trusted states. The Impact FD is suitable for systems that present node redundancy, heterogeneity of nodes, and a clustering feature, and that allow a margin of failures which does not degrade the confidence in the system. The paper also includes a timer-based distributed algorithm which implements an Impact FD, as well as its proof of correctness, for systems whose links are lossy asynchronous or in which all (or some) links are eventually timely. Performance evaluation results, based on PlanetLab [1] traces, confirm the flexible applicability of our failure detector and show that, due to the accepted margin of failure, both failures and false suspicions are better tolerated than with traditional unreliable failure detectors.
Impact: an Unreliable Failure Detector Based on Processes' Relevance and the Confidence Degree in the System
This technical report presents a new unreliable failure detector, called the Impact failure detector (FD), that, unlike the majority of traditional FDs, outputs a trust level value which expresses the degree of confidence in the system. An impact factor is assigned to each node, and the trust level is equal to the sum of the impact factors of the nodes not suspected of failure. Moreover, a threshold parameter defines a lower bound for the trust level, above which the confidence in the system is ensured. In particular, we define a flexibility property that denotes the capacity of the Impact FD to tolerate a certain margin of failures or false suspicions, i.e., its capacity to consider different sets of responses that lead the system to trusted states. The Impact FD is suitable for systems that present node redundancy, heterogeneity of nodes, and a clustering feature, and that allow a margin of failures which does not degrade the confidence in the system. The technical report also includes a timer-based distributed algorithm which implements an Impact FD, as well as its proof of correctness, for systems whose links are lossy asynchronous or in which all (or some) links are eventually timely. Performance evaluation results based on real PlanetLab traces confirm the flexible applicability of our failure detector and show that, due to the accepted margin of failure, both failures and false suspicions are better tolerated than with traditional unreliable failure detectors. We also show the equivalence of some classes of Impact FD with the Sigma and Omega classes, which are fundamental classes for circumventing the impossibility of consensus in asynchronous message-passing distributed systems.
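The trust-level mechanism described in the abstracts above lends itself to a minimal sketch; the node names, impact factors, and threshold below are invented for illustration and are not taken from the papers:

```python
# Minimal sketch of the Impact FD output: each node carries an impact
# factor; the trust level is the sum of the impact factors of nodes not
# currently suspected, compared against a lower-bound threshold.

def trust_level(impact, suspected):
    """Sum of impact factors of nodes not suspected of having failed."""
    return sum(w for node, w in impact.items() if node not in suspected)

def system_trusted(impact, suspected, threshold):
    """Confidence in the system is ensured when trust level >= threshold."""
    return trust_level(impact, suspected) >= threshold

# Hypothetical example: redundant nodes a and b can cover for each other,
# so the threshold tolerates a margin of failures or false suspicions.
impact = {"a": 1.0, "b": 1.0, "c": 2.0}
print(trust_level(impact, {"b"}))          # 3.0
print(system_trusted(impact, {"b"}, 3.0))  # True: one suspicion tolerated
print(system_trusted(impact, {"c"}, 3.0))  # False: critical node suspected
```

Note how the set of responding nodes that satisfies the threshold need not be fixed: any subset whose impact factors sum to at least the threshold keeps the system trusted.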
Developing a Methodology to Detect Partial Failures for Dynamic Systems
The purpose of this research is to develop a decision support system that can assist in detecting partial failures in dynamic systems such as the Fire Control System Tracking Radar (TR) onboard naval ships. Partial failures do not necessarily shut down the system immediately but cause degradation of operational performance. Previous work has shown that experts in the fields of failure detection, test point insertion, and Built-In Test Equipment (BITE) can provide useful input in detecting partial failures. Partial failures affect operational system performance and support costs, which can be significant. Often, however, partial failure detection consists of the estimations and opinions of experts; this has not been addressed adequately in the literature. It is postulated that the approach developed in this research could be applied to monitor and maintain systems subject to partial failure. The development of such a testing aid is the thrust of this research effort. Markov chains, k-out-of-n:G system models, and critical path tracing techniques, among others, are employed. Survey questionnaires are used to validate the resulting test model, and previous test point insertion techniques are applied as part of system comparison and assessment.
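The k-out-of-n:G systems mentioned above have a standard reliability formula; here is a small sketch of it (our illustration, not code from the thesis), assuming n identical, independent components each working with probability p:

```python
from math import comb

def k_out_of_n_reliability(k, n, p):
    """Probability that at least k of n identical, independent components
    (each operational with probability p) work: the k-out-of-n:G system.
    R = sum_{i=k}^{n} C(n, i) * p^i * (1 - p)^(n - i)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical example: a 2-out-of-3 system with component reliability 0.9.
print(round(k_out_of_n_reliability(2, 3, 0.9), 3))  # 0.972
```

Such a model captures graceful degradation: the system tolerates up to n - k component failures before it fails outright, which matches the notion of a partial failure degrading performance before total loss.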
Quantifying the reliability of fault classifiers
International audienceFault diagnostics problems can be formulated as classification tasks. Due to limited data and to uncertainty, classification algorithms are not perfectly accurate in practical applications. Maintenance decisions based on erroneous fault classifications result in inefficient resource allocations and/or operational disturbances. Thus, knowing the accuracy of classifiers is important to give confidence in the maintenance decisions. The average accuracy of a classifier on a test set of data patterns is often used as a measure of confidence in the performance of a specific classifier. However, the performance of a classifier can vary in different regions of the input data space. Several techniques have been proposed to quantify the reliability of a classifier at the level of individual classifications. Many of the proposed techniques are only applicable to specific classifiers, such as ensemble techniques and support vector machines. In this paper, we propose a meta approach based on the typicalness framework (Kolmogorov's concept of randomness), which is independent of the applied classifier. We apply the approach to a case of fault diagnosis in railway turnout systems and compare the results obtained with both extreme learning machines and echo state networks
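The typicalness framework referenced above is commonly realised as a conformal-style p-value computed from nonconformity scores; the sketch below is our classifier-independent illustration of that idea (the calibration scores are invented), not the paper's exact procedure:

```python
def typicalness_p_value(calibration_scores, new_score):
    """Typicalness (conformal) p-value: the fraction of previously seen
    nonconformity scores at least as large as the new example's score.
    A small p-value flags an atypical, hence unreliable, classification."""
    n = len(calibration_scores)
    at_least = sum(1 for s in calibration_scores if s >= new_score)
    return (at_least + 1) / (n + 1)  # +1 counts the new example itself

# Hypothetical nonconformity scores from a held-out calibration set.
cal = [0.1, 0.2, 0.2, 0.3, 0.9]
print(typicalness_p_value(cal, 0.25))  # typical example -> larger p-value
print(typicalness_p_value(cal, 0.95))  # atypical example -> small p-value
```

Because only the nonconformity scores are needed, the wrapper works with any underlying classifier, which is the "meta" property the abstract emphasises.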
Facial Landmarks Detection and Expression Recognition in the Dark
Facial landmark detection has been widely adopted for body language analysis and facial identification tasks. A variety of facial landmark detectors have been proposed, based on approaches such as AAM, AdaBoost, LBF, and DPM. However, most detectors were trained and tested on high-resolution images in controlled environments. Recent work has focused on robust landmark detectors and achieved increasingly strong performance under different poses and lighting conditions, but implementing facial landmark detection in extremely dark images remains an open question. Our goal is to build a landmark-based application for facial expression analysis in extremely dark environments. To address this problem, we explored different dark-image enhancement methods to facilitate landmark detection, and we designed landmark correctness methods to evaluate landmark localization; this step guarantees the accuracy of expression recognition. We then analyzed feature extraction methods, such as HOG, polar coordinates, and landmark distances, together with normalization methods for facial expression recognition. Compared with existing facial expression recognition systems, ours is more robust in dark environments and performs very well in detecting happy and surprised expressions.
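One of the feature types mentioned above, landmark distances with normalization, can be sketched as follows; the landmark indices and points are invented for illustration, not taken from the paper:

```python
import math

def landmark_distance_features(landmarks, left_eye=0, right_eye=1):
    """Pairwise distances between facial landmarks, normalised by the
    inter-ocular distance so the features are scale-invariant."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    scale = dist(landmarks[left_eye], landmarks[right_eye])
    feats = []
    n = len(landmarks)
    for i in range(n):
        for j in range(i + 1, n):
            feats.append(dist(landmarks[i], landmarks[j]) / scale)
    return feats

# Toy points standing in for left eye, right eye, and mouth corner.
pts = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]
print([round(f, 2) for f in landmark_distance_features(pts)])  # [1.0, 0.9, 0.9]
```

Normalising by a stable reference distance is one simple way to make the features comparable across faces of different sizes; a real system would feed such vectors to the expression classifier.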
An Approach to Counting Vehicles from Pre-Recorded Video Using Computer Algorithms
One of the fundamental sources of data for traffic analysis is vehicle counts, which can be conducted either by the traditional manual method or by automated means. Different agencies have guidelines for manual counting, but these are typically prepared for particular conditions. For automated counting, different methods have been applied, and You Only Look Once (YOLO), a recently developed object detection model, presents new potential for automated vehicle counting. The first objective of this study was to formulate general guidelines for manual counting based on experience gained in the field. Another goal was to develop a computer program for vehicle counting from pre-recorded video applying the YOLO model. The documented general guidelines can be useful in meeting the required standard and minimizing the cost of a manual counting project. The accuracy of the automated counting program was found to be about 90 percent for total daily counts, although most of the error was consistent undercounting.
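A common way to turn per-frame detections into counts is virtual-line crossing; the sketch below assumes tracked detections (the track ids, counting line, and data stand in for real YOLO output and are not from the study):

```python
# Sketch of line-crossing vehicle counting from per-frame detections.
# detections_per_frame stands in for detector+tracker output: for each
# frame, a dict mapping a tracked vehicle id to its bounding-box centre
# y-coordinate. All values below are illustrative.

COUNT_LINE_Y = 200  # virtual counting line in image coordinates (assumed)

def count_crossings(detections_per_frame, line_y=COUNT_LINE_Y):
    """Count vehicles whose centre moves from above to below line_y."""
    last_y = {}
    counted = set()
    total = 0
    for frame in detections_per_frame:
        for vid, y in frame.items():
            prev = last_y.get(vid)
            if prev is not None and prev < line_y <= y and vid not in counted:
                counted.add(vid)  # count each track at most once
                total += 1
            last_y[vid] = y
    return total

# Two synthetic tracks: vehicle 1 crosses the line, vehicle 2 never does.
frames = [{1: 150, 2: 300}, {1: 190, 2: 290}, {1: 210, 2: 280}]
print(count_crossings(frames))  # 1
```

Counting each track id only once is what guards against double counting; conversely, missed detections break tracks and produce the consistent undercounting the abstract reports.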
AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments
This report considers the application of Artificial Intelligence (AI) techniques to
the problem of misuse detection and misuse localisation within telecommunications
environments. A broad survey of techniques is provided, that covers inter alia
rule based systems, model-based systems, case based reasoning, pattern matching,
clustering and feature extraction, artificial neural networks, genetic algorithms,
artificial immune systems, agent-based systems, data mining and a variety of hybrid
approaches. The report then considers the central issue of event correlation, that
is at the heart of many misuse detection and localisation systems. The notion of
being able to infer misuse by the correlation of individual temporally distributed
events within a multiple data stream environment is explored, and a range of techniques
is reviewed, covering model-based approaches, `programmed' AI and machine learning
paradigms. It is found that, in general, correlation is best achieved via rule-based approaches,
but that these suffer from a number of drawbacks, such as the difficulty of
developing and maintaining an appropriate knowledge base, and the lack of ability
to generalise from known misuses to new unseen misuses. Two distinct approaches
are evident. One attempts to encode knowledge of known misuses, typically within
rules, and use this to screen events. This approach cannot generally detect misuses
for which it has not been programmed, i.e. it is prone to issuing false negatives.
The other attempts to `learn' the features of event patterns that constitute normal
behaviour, and, by observing patterns that do not match expected behaviour, detect
when a misuse has occurred. This approach is prone to issuing false positives,
i.e. inferring misuse from innocent patterns of behaviour that the system was not
trained to recognise. Contemporary approaches are seen to favour hybridisation,
often combining detection or localisation mechanisms for both abnormal and normal
behaviour, the former to capture known cases of misuse, the latter to capture
unknown cases. In some systems, these mechanisms even work together to update
each other to increase detection rates and lower false positive rates. It is concluded
that hybridisation offers the most promising future direction, but that a rule or state
based component is likely to remain, being the most natural approach to the correlation
of complex events. The challenge, then, is to mitigate the weaknesses of
canonical programmed systems such that learning, generalisation and adaptation
are more readily facilitated.
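The hybrid idea summarised above, rules for known misuses plus an anomaly check for unknown ones, can be sketched in a toy form; all rules, event types, and the normal profile are invented for illustration:

```python
# Toy hybrid detector: a rule component flags known misuse signatures
# (prone to false negatives on unseen misuses), while an anomaly component
# flags events outside the learned normal profile (prone to false positives).

KNOWN_MISUSE_RULES = {"repeated_pin_failure", "call_forward_loop"}

def hybrid_detect(event, normal_profile):
    """Return (is_alert, reason) for a single event."""
    if event["type"] in KNOWN_MISUSE_RULES:
        return True, "rule"       # matches an encoded misuse signature
    if event["type"] not in normal_profile:
        return True, "anomaly"    # deviates from learned normal behaviour
    return False, "normal"

normal = {"local_call", "sms", "data_session"}
print(hybrid_detect({"type": "call_forward_loop"}, normal))   # rule alert
print(hybrid_detect({"type": "intl_premium_call"}, normal))   # anomaly alert
print(hybrid_detect({"type": "sms"}, normal))                 # no alert
```

In the systems the report describes, the two components can additionally update each other, e.g. confirmed anomaly alerts becoming new rules, which is the mutual-update mechanism mentioned above.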
The importance of learning processes in transitioning small-scale irrigation schemes
Many small-scale irrigation schemes are dysfunctional, and learning,
innovation and evaluation are required to facilitate sustainable
transitions. Using quantitative and qualitative data from five irrigation
schemes in sub-Saharan Africa, we analyze how learning and
change arose in response to: soil monitoring tools, which triggered
a deep learning cycle; and agricultural innovation platforms, which
helped develop a social learning system. Knowledge generation
and innovation were driven by the incentives of more profitable
farming. Learning and change spread to farmers without the tools,
and learning at different levels resulted in extension and governance
stakeholders facilitating profound institutional change.