Self-Modeling Based Diagnosis of Software-Defined Networks
Networks built using SDN (Software-Defined Networks) and NFV (Network
Functions Virtualization) approaches are expected to face several challenges
such as scalability, robustness and resiliency. In this paper, we propose a
self-modeling based diagnosis to enable resilient networks in the context of
SDN and NFV. We focus on solving two major problems. On the one hand, today we
lack a model or template that describes the managed elements in the context
of SDN and NFV. On the other hand, the highly dynamic networks enabled by
softwarisation require a diagnosis model to be generated at runtime, from
which root causes can be identified. In this paper, we propose finer-grained
templates that model not only network nodes but also their sub-components,
enabling a more detailed diagnosis suited to the SDN and NFV
context. In addition, we specify and validate a self-modeling based diagnosis
using Bayesian Networks. This approach differs from the state of the art in the
discovery of network and service dependencies at run-time and the building of
the diagnosis model of any SDN infrastructure using our templates.
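As a hedged illustration of the Bayesian-network diagnosis idea above (the network structure, node names, and all probabilities here are hypothetical, not taken from the paper), the root-cause belief for a single component given an observed symptom can be computed by exact Bayes' rule:

```python
# Toy two-node diagnosis network (hypothetical numbers): a hidden
# "switch_fault" cause with an observable "flow_loss" symptom.
P_FAULT = 0.01                  # prior: switch component is faulty
P_SYMPTOM_GIVEN_FAULT = 0.95    # P(flow_loss | fault)
P_SYMPTOM_GIVEN_OK = 0.05       # P(flow_loss | no fault), false-alarm rate

def posterior_fault(symptom_observed: bool) -> float:
    """Exact inference by Bayes' rule: P(fault | symptom evidence)."""
    if symptom_observed:
        num = P_SYMPTOM_GIVEN_FAULT * P_FAULT
        den = num + P_SYMPTOM_GIVEN_OK * (1 - P_FAULT)
    else:
        num = (1 - P_SYMPTOM_GIVEN_FAULT) * P_FAULT
        den = num + (1 - P_SYMPTOM_GIVEN_OK) * (1 - P_FAULT)
    return num / den

print(posterior_fault(True))   # belief in the fault rises when the symptom is seen
print(posterior_fault(False))  # and falls below the prior when it is not
```

In a runtime-generated model, one such conditional table would be instantiated per template sub-component, and inference run over the full network rather than a single cause.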
A probabilistic model for information and sensor validation
This paper develops a new theory and model for information and sensor validation. The model represents relationships between variables using Bayesian networks and utilizes probabilistic propagation to estimate the expected values of variables. If the estimated value of a variable differs from the actual value, an apparent fault is detected. The fault is only apparent, since the estimated value may itself be based on faulty data. The theory extends our understanding of when it is possible to isolate real faults from potential faults, and supports the development of an algorithm that is capable of isolating real faults without deferring the problem to expert-provided, domain-specific rules. To enable practical adoption for real-time processes, an anytime version of the algorithm is developed that, unlike most other algorithms, is capable of returning improving assessments of the validity of the sensors as it accumulates more evidence over time. The developed model is tested by applying it to the validation of temperature sensors during the start-up phase of a gas turbine, when conditions are not stable; a problem that is known to be challenging. The paper concludes with a discussion of the practical applicability and scalability of the model.
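A minimal sketch of the apparent-fault idea (the sensor names, values, and tolerance below are invented for illustration, and a median over redundant peers stands in for the paper's probabilistic propagation):

```python
from statistics import median

def apparent_faults(readings: dict[str, float], tol: float = 5.0) -> set[str]:
    """Flag sensors whose reading deviates from the value estimated
    from the remaining (assumed redundant) sensors. Faults are only
    *apparent*: the estimate itself may rest on faulty data."""
    flagged = set()
    for name, value in readings.items():
        others = [v for k, v in readings.items() if k != name]
        estimate = median(others)  # stand-in for probabilistic propagation
        if abs(value - estimate) > tol:
            flagged.add(name)
    return flagged

# Five redundant turbine temperature sensors; T4 has drifted.
print(apparent_faults({"T1": 500.0, "T2": 501.0, "T3": 499.0,
                       "T4": 540.0, "T5": 500.5}))  # → {'T4'}
```

Isolating the *real* fault when several sensors are flagged at once is exactly the harder problem the paper's algorithm addresses.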
Nonlinear observation in fuel cell systems: a comparison between disturbance estimation and High-Order Sliding-Mode techniques
This paper compares two Nonlinear Distributed Parameter Observers (NDPO) for the observation of a Proton Exchange Membrane Fuel Cell (PEMFC). Both NDPOs are based on the discretisation of distributed-parameter models, and they are used to estimate the state profile of gas concentrations in the anode and cathode gas channels of the PEMFC, giving detailed information about the internal conditions of the system. The reaction and water transport flow rates from the membrane to the channels are uncertainties of the observation problem, and they are estimated along the whole length of the PEMFC without the use of additional sensors. The first observation approach is a Nonlinear Disturbance Observer (NDOB) for the estimation of the disturbances in the NDPO. In the second approach, a novel implementation of a High-Order Sliding-Mode (HOSM) observer is developed to estimate the true value of the states as well as the reaction terms. The proposed observers are tested and compared through a simulation example at different operating points, and their performance and robustness are analysed over a given case study, the New European Driving Cycle.
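The disturbance-observer idea behind the first approach can be sketched on a toy scalar plant (this is not the PEMFC model; the system x' = -a*x + u + d and all gains below are illustrative assumptions):

```python
def estimate_disturbance(a=1.0, l=5.0, d_true=0.7, u=0.2, dt=1e-3, steps=20000):
    """Euler simulation of a basic disturbance observer for
    x' = -a*x + u + d with d an unknown constant. The observer
    d_hat = z + l*x, z' = -l*(-a*x + u + d_hat) drives the
    estimation error (d - d_hat) to zero at rate l."""
    x, z = 0.0, 0.0
    for _ in range(steps):
        d_hat = z + l * x
        x += dt * (-a * x + u + d_true)        # plant step
        z += dt * (-l * (-a * x + u + d_hat))  # observer internal state
    return z + l * x                           # final disturbance estimate

print(round(estimate_disturbance(), 3))  # converges to the true d = 0.7
```

In the paper's setting the same principle is applied per discretisation node along the channel, so the unmeasured reaction and water-transport flows are reconstructed over the whole PEMFC length.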
Rules and principles in cognitive diagnoses
Cognitive simulation is concerned with constructing process models of human cognitive behavior. Our work on the ACM system (Automated Cognitive Modeler) is an attempt to automate this process. The basic assumption is that all goal-oriented cognitive behavior involves search through some problem space. Within this framework, the task of cognitive diagnosis is to identify the problem space in which the subject is operating, identify solution paths used by the subject, and find conditions on the operators that explain those solution paths and that predict the subject's behavior on new problems. The work presented in this paper uses techniques from machine learning to automate the tasks of finding solution paths and operator conditions. We apply this method to the domain of multi-column subtraction and present results that demonstrate ACM's ability to model incorrect subtraction strategies. Finally, we discuss the difference between procedural bugs and misconceptions, proposing that errors due to misconceptions can be viewed as violations of principles for the task domain.
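One classic procedural bug that such diagnosis systems must recover is "smaller-from-larger": in every column the smaller digit is subtracted from the larger, with no borrowing. A hedged sketch (the helper below is my own illustration, not ACM's code):

```python
def smaller_from_larger(a: int, b: int) -> int:
    """Apply the smaller-from-larger subtraction bug (assumes a >= b >= 0)."""
    da = str(a)
    db = str(b).rjust(len(da), "0")  # align columns on the right
    return int("".join(str(abs(int(x) - int(y))) for x, y in zip(da, db)))

# The buggy operator reproduces a student's erroneous answers:
print(smaller_from_larger(52, 18))    # 46, where correct subtraction gives 34
print(smaller_from_larger(305, 127))  # 222, where correct subtraction gives 178
```

A diagnosis system searches the space of such operator variants for the one whose predictions match the subject's observed solution paths.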
AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments
This report considers the application of Artificial Intelligence (AI) techniques to
the problem of misuse detection and misuse localisation within telecommunications
environments. A broad survey of techniques is provided, that covers inter alia
rule based systems, model-based systems, case based reasoning, pattern matching,
clustering and feature extraction, artificial neural networks, genetic algorithms,
artificial immune systems, agent based systems, data mining and a variety of hybrid
approaches. The report then considers the central issue of event correlation, that
is at the heart of many misuse detection and localisation systems. The notion of
being able to infer misuse by the correlation of individual temporally distributed
events within a multiple data stream environment is explored, along with a range of
techniques covering model based approaches, `programmed' AI and machine learning
paradigms. It is found that, in general, correlation is best achieved via rule based
approaches, but that these suffer from a number of drawbacks, such as the difficulty of
developing and maintaining an appropriate knowledge base, and the lack of ability
to generalise from known misuses to new unseen misuses. Two distinct approaches
are evident. One attempts to encode knowledge of known misuses, typically within
rules, and use this to screen events. This approach cannot generally detect misuses
for which it has not been programmed, i.e. it is prone to issuing false negatives.
The other attempts to `learn' the features of event patterns that constitute normal
behaviour, and, by observing patterns that do not match expected behaviour, detect
when a misuse has occurred. This approach is prone to issuing false positives,
i.e. inferring misuse from innocent patterns of behaviour that the system was not
trained to recognise. Contemporary approaches are seen to favour hybridisation,
often combining detection or localisation mechanisms for both abnormal and normal
behaviour, the former to capture known cases of misuse, the latter to capture
unknown cases. In some systems, these mechanisms even work together to update
each other to increase detection rates and lower false positive rates. It is concluded
that hybridisation offers the most promising future direction, but that a rule or state
based component is likely to remain, being the most natural approach to the correlation
of complex events. The challenge, then, is to mitigate the weaknesses of
canonical programmed systems such that learning, generalisation and adaptation
are more readily facilitated.
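The "learn normal behaviour" approach described above can be sketched as a simple statistical profile over one event feature (the feature, data, and threshold below are hypothetical):

```python
from statistics import mean, stdev

def build_profile(normal_rates: list[float]) -> tuple[float, float]:
    """Learn the mean and standard deviation of a per-user event rate."""
    return mean(normal_rates), stdev(normal_rates)

def is_anomalous(rate: float, profile: tuple[float, float], k: float = 3.0) -> bool:
    """Flag rates more than k standard deviations from the learned mean.
    Innocent-but-unseen behaviour is also flagged: the false-positive
    weakness the report describes."""
    mu, sigma = profile
    return abs(rate - mu) > k * sigma

profile = build_profile([10.0, 12.0, 11.0, 9.0, 10.5, 11.5])  # calls/minute
print(is_anomalous(60.0, profile))  # True: burst far outside normal range
print(is_anomalous(10.8, profile))  # False: within learned behaviour
```

A hybrid system, as the report recommends, would pair such a profile with a rule base of known misuse signatures, each mechanism compensating for the other's blind spot.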