Exact Inference Techniques for the Analysis of Bayesian Attack Graphs
Attack graphs are a powerful tool for security risk assessment by analysing
network vulnerabilities and the paths attackers can use to compromise network
resources. The uncertainty about the attacker's behaviour makes Bayesian
networks suitable to model attack graphs to perform static and dynamic
analysis. Previous approaches have focused on the formalization of attack
graphs into a Bayesian model rather than proposing mechanisms for their
analysis. In this paper we propose to use efficient algorithms to make exact
inference in Bayesian attack graphs, enabling the static and dynamic network
risk assessments. To support the validity of our approach we have performed an
extensive experimental evaluation on synthetic Bayesian attack graphs with
different topologies, showing the computational advantages in terms of time and
memory use of the proposed techniques when compared to existing approaches. Comment: 14 pages, 15 figures
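The static and dynamic analyses described above can be illustrated on a toy chain-shaped attack graph. The graph, node names, and probabilities below are invented for the sketch (they are not taken from the paper), and brute-force enumeration stands in for the paper's efficient exact-inference algorithms:

```python
from itertools import product

# Illustrative 3-node Bayesian attack graph (a chain): entry -> exploit -> asset.
# All probabilities are made-up values for this sketch, not from the paper.
p_entry = 0.3                   # P(attacker reaches the entry point)
p_exploit_given_entry = 0.8     # P(vulnerability exploited | entry reached)
p_asset_given_exploit = 0.6     # P(asset compromised | vulnerability exploited)

def joint(entry, exploit, asset):
    """Joint probability of one full assignment under the chain factorisation."""
    p = p_entry if entry else 1 - p_entry
    pe = p_exploit_given_entry if entry else 0.0   # no entry => no exploit
    p *= pe if exploit else 1 - pe
    pa = p_asset_given_exploit if exploit else 0.0  # no exploit => asset safe
    p *= pa if asset else 1 - pa
    return p

def marginal_asset(evidence=None):
    """Exact P(asset = 1 | evidence) by enumerating all assignments."""
    evidence = evidence or {}
    num = den = 0.0
    for entry, exploit, asset in product([0, 1], repeat=3):
        state = {"entry": entry, "exploit": exploit, "asset": asset}
        if any(state[k] != v for k, v in evidence.items()):
            continue
        p = joint(entry, exploit, asset)
        den += p
        if asset:
            num += p
    return num / den

print(marginal_asset())                 # static analysis: prior risk of compromise
print(marginal_asset({"exploit": 1}))   # dynamic analysis: risk after observed exploit
```

For realistic graph sizes, enumeration is exponential in the number of nodes, which is exactly why the paper's structured exact-inference techniques (and their time/memory trade-offs) matter.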
Examination of Bayesian belief network for safety assessment of nuclear computer-based systems
We report here on a continuation of work on the Bayesian Belief Network (BBN) model described in [Fenton, Littlewood et al. 1998]. As explained in the previous deliverable, our model concerns one part of the safety assessment task for computer- and software-based nuclear systems. We have produced a first complete, functioning version of our BBN model by eliciting a large numerical node probability table (NPT) required for our ‘Design Process Performance’ variable. The requirement for such large numerical NPTs poses some difficult questions about how, in general, large NPTs should be elicited from domain experts. We report on the methods we have devised to support the expert in building and validating a BBN. On the one hand, we have proceeded by eliciting approximate descriptions of the expert’s probabilistic beliefs, in terms of properties like stochastic orderings among distributions; on the other hand, we have explored ways of presenting to the expert visual and algebraic descriptions of relations among variables in the BBN, to assist the expert in an ongoing assessment of the validity of the BBN.
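One standard way to tame the elicitation burden of a large NPT is to generate the whole table from a handful of elicited parameters, for instance with a leaky noisy-OR model. The sketch below uses made-up parameter values and is a generic illustration of parametric NPT construction, not necessarily the elicitation method used in the report:

```python
from itertools import product

# Sketch: building the NPT of a binary node with three binary parents from a
# few elicited parameters (leaky noisy-OR) instead of eliciting every row.
# The parameter values are illustrative assumptions.
leak = 0.05              # P(effect occurs when no cause is active)
link = [0.7, 0.5, 0.3]   # per-parent "causal strength" elicited from the expert

def noisy_or_row(parents):
    """P(node = True | parent configuration) under leaky noisy-OR."""
    p_false = 1 - leak
    for active, strength in zip(parents, link):
        if active:
            p_false *= 1 - strength  # each active cause independently "fires"
    return 1 - p_false

# Expand the 2^3-row NPT from the 4 elicited numbers above.
npt = {cfg: noisy_or_row(cfg) for cfg in product([0, 1], repeat=len(link))}
for cfg, p in sorted(npt.items()):
    print(cfg, round(p, 4))
```

The appeal is that the number of elicited quantities grows linearly in the number of parents while the table itself grows exponentially; the trade-off is that the expert must accept the independence-of-causes assumption baked into noisy-OR.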
Efficient Attack Graph Analysis through Approximate Inference
Attack graphs provide compact representations of the attack paths that an
attacker can follow to compromise network resources by analysing network
vulnerabilities and topology. These representations are a powerful tool for
security risk assessment. Bayesian inference on attack graphs enables the
estimation of the risk of compromise to the system's components given their
vulnerabilities and interconnections, and accounts for multi-step attacks
spreading through the system. Whilst static analysis considers the risk posture
at rest, dynamic analysis also accounts for evidence of compromise, e.g. from
SIEM software or forensic investigation. However, in this context, exact
Bayesian inference techniques do not scale well. In this paper we show how
Loopy Belief Propagation - an approximate inference technique - can be applied
to attack graphs, and that it scales linearly in the number of nodes for both
static and dynamic analysis, making such analyses viable for larger networks.
We experiment with different topologies and network clustering on synthetic
Bayesian attack graphs with thousands of nodes to show that the algorithm's
accuracy is acceptable and that it converges to a stable solution. We compare sequential
and parallel versions of Loopy Belief Propagation with exact inference
techniques for both static and dynamic analysis, showing the advantage of
approximate inference in scaling to larger attack graphs. Comment: 30 pages, 14 figures
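To make the message-passing idea concrete, here is a minimal sum-product loopy belief propagation loop on a three-node binary cycle with pairwise potentials. The topology and potentials are invented for the sketch and are far simpler than the paper's Bayesian attack graphs, but the update rule is the standard one:

```python
# Minimal sum-product loopy belief propagation on a 3-node binary cycle.
# Unary and pairwise potentials are illustrative assumptions.
nodes = [0, 1, 2]
edges = [(0, 1), (1, 2), (2, 0)]
unary = {0: [0.6, 0.4], 1: [0.5, 0.5], 2: [0.3, 0.7]}
pair = {e: [[1.2, 0.8], [0.8, 1.2]] for e in edges}  # mild agreement potential

def neighbours(i):
    return [j for (a, b) in edges
            for j in ((b,) if a == i else (a,) if b == i else ())]

def edge_pot(i, j, xi, xj):
    """Symmetric pairwise potential, whichever direction the edge is stored."""
    return pair[(i, j)][xi][xj] if (i, j) in pair else pair[(j, i)][xj][xi]

# msgs[(i, j)][x_j]: normalised message from node i to node j.
msgs = {(i, j): [1.0, 1.0] for (a, b) in edges for (i, j) in ((a, b), (b, a))}

for sweep in range(50):          # parallel ("flooding") message updates
    new = {}
    for (i, j) in msgs:
        out = []
        for xj in (0, 1):
            s = 0.0
            for xi in (0, 1):
                prod_in = unary[i][xi]
                for k in neighbours(i):
                    if k != j:   # product of incoming messages, excluding j
                        prod_in *= msgs[(k, i)][xi]
                s += prod_in * edge_pot(i, j, xi, xj)
            out.append(s)
        z = sum(out)
        new[(i, j)] = [v / z for v in out]
    msgs = new

def belief(i):
    """Approximate marginal: unary potential times all incoming messages."""
    b = [unary[i][x] for x in (0, 1)]
    for k in neighbours(i):
        b = [b[x] * msgs[(k, i)][x] for x in (0, 1)]
    z = sum(b)
    return [v / z for v in b]

print(belief(0), belief(1), belief(2))
```

Each sweep touches every directed edge once, which is the source of the linear-in-nodes scaling the abstract reports for sparse graphs; on loopy graphs the fixed point is an approximation to the true marginals, hence the paper's accuracy and convergence experiments.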
Data quality maintenance in Data Integration Systems
A Data Integration System (DIS) is an information system that integrates data from a set of heterogeneous and autonomous information sources and provides it to users. Quality in these systems consists of various factors measured on the data; among those usually considered are completeness, accuracy, accessibility, freshness and availability. In a DIS, quality factors are associated with the sources, with the extracted and transformed information, and with the information the DIS provides to the user. At the same time, users can pose quality requirements alongside their data requirements. DIS quality is considered better the closer it is to the users' quality requirements. DIS quality depends on the quality of the data sources, on the data transformations, and on the quality required by users; it therefore varies as a function of these three properties. The general goal of this thesis is to provide mechanisms for maintaining DIS quality at a level that satisfies the users' quality requirements, while minimizing the modifications to the system triggered by quality changes.
The proposal of this thesis allows constructing and maintaining a DIS that is tolerant to quality changes. That is, the DIS is constructed taking into account predictions of quality behavior, so that if changes occur as predicted the system is not affected by them. These predictions are provided by models of the quality behavior of DIS data, which must be kept up to date. With this strategy, the DIS is affected only when the quality behavior models change, instead of being affected by every quality variation in the system. The thesis takes a probabilistic approach, which allows modeling the behavior of the quality factors at the sources and at the DIS, lets users state flexible quality requirements (using probabilities), and provides tools, such as certainty and mathematical expectation, that help decide which quality changes are relevant to DIS quality. The probabilistic models are monitored in order to detect source quality changes, a strategy that detects changes in quality behavior rather than only isolated quality variations. We also propose to monitor other DIS properties that affect its quality and, for each change, to decide whether it affects the behavior of DIS quality, taking the DIS quality models into account. Finally, the probabilistic approach is also applied when determining the actions to take in order to improve DIS quality. To interpret the DIS situation we propose to use statistics, including in particular the history of the quality models.
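The monitoring idea can be sketched in a few lines: a flexible (probabilistic) user requirement on one quality factor, here freshness, is checked against a window of measurements, so the DIS reacts to a change in quality *behavior* rather than to each individual measurement. The threshold, probability, and data distributions below are invented for the sketch:

```python
import random

random.seed(7)

# Illustrative flexible requirement: "P(freshness <= 10 minutes) >= 0.9".
THRESHOLD, REQUIRED_PROB, WINDOW = 10.0, 0.9, 50

def satisfies(samples):
    """Empirical P(freshness <= THRESHOLD) over a window of measurements."""
    frac = sum(1 for s in samples if s <= THRESHOLD) / len(samples)
    return frac, frac >= REQUIRED_PROB

# Phase 1: the source behaves as predicted (freshness ~ uniform 0..8 minutes).
good = [random.uniform(0, 8) for _ in range(WINDOW)]
# Phase 2: behavior change, delays grow (freshness ~ uniform 5..25 minutes).
degraded = [random.uniform(5, 25) for _ in range(WINDOW)]

for label, window in [("stable", good), ("degraded", degraded)]:
    frac, ok = satisfies(window)
    print(label, round(frac, 2),
          "meets requirement" if ok else "behavior change: adapt DIS")
```

In the stable phase every measurement satisfies the threshold, so occasional slow responses elsewhere would not trigger system modifications; only the sustained shift in the second window, a change in the behavior model, would prompt the DIS to adapt.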