On the Computational Complexity of Vertex Integrity and Component Order Connectivity
The Weighted Vertex Integrity (wVI) problem takes as input an n-vertex
graph G, a weight function w : V(G) → N, and an integer p. The task is to
decide if there exists a set X ⊆ V(G) such that the weight of X plus the
weight of a heaviest component of G - X is at most p. Among other results,
we prove that:
(1) wVI is NP-complete on co-comparability graphs, even if each vertex has
weight 1;
(2) wVI can be solved in time;
(3) wVI admits a kernel with at most vertices.
Result (1) refutes a conjecture by Ray and Deogun and answers an open
question by Ray et al. It also complements a result by Kratsch et al., stating
that the unweighted version of the problem can be solved in polynomial time on
co-comparability graphs of bounded dimension, provided that an intersection
model of the input graph is given as part of the input.
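To make the wVI objective concrete, here is a minimal brute-force sketch (not the paper's algorithm) that, given adjacency lists, vertex weights, and a candidate set X, computes the weight of X plus the weight of a heaviest component of G - X; all names are illustrative.

```python
from collections import deque

def wvi_cost(adj, weight, X):
    """Weight of X plus the weight of a heaviest component of G - X.

    adj    -- dict mapping each vertex to a set of neighbours
    weight -- dict mapping each vertex to a positive integer weight
    X      -- set of vertices to delete
    """
    remaining = set(adj) - set(X)
    seen = set()
    heaviest = 0
    for start in remaining:
        if start in seen:
            continue
        # BFS over one component of G - X, summing its vertex weights.
        comp_weight = 0
        queue = deque([start])
        seen.add(start)
        while queue:
            v = queue.popleft()
            comp_weight += weight[v]
            for u in adj[v]:
                if u in remaining and u not in seen:
                    seen.add(u)
                    queue.append(u)
        heaviest = max(heaviest, comp_weight)
    return sum(weight[v] for v in X) + heaviest

# An instance (G, w, p) is a yes-instance iff some X achieves
# wvi_cost(adj, weight, X) <= p; trying all subsets X is exponential,
# which is why parameterized algorithms and kernels are of interest.
```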
An instance of the Weighted Component Order Connectivity (wCOC) problem
consists of an n-vertex graph G, a weight function w : V(G) → N, and two
integers k and ℓ, and the task is to decide if there exists a set X ⊆ V(G)
such that the weight of X is at most k and the weight of a heaviest
component of G - X is at most ℓ. In some sense, the wCOC problem
can be seen as a refined version of the wVI problem. We prove, among other
results, that:
(4) wCOC can be solved in time on interval graphs,
while the unweighted version can be solved in time on this graph
class;
(5) wCOC is W[1]-hard on split graphs when parameterized by k or by ℓ;
(6) wCOC can be solved in time;
(7) wCOC admits a kernel with at most vertices.
We also show that result (6) is essentially tight by proving that wCOC cannot
be solved in time, unless the ETH fails.
Comment: A preliminary version of this paper already appeared in the
conference proceedings of ISAAC 201
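The sense in which wCOC refines wVI can be spelled out with the same machinery: the single budget p of wVI is split into a budget k on the deleted set and a budget ℓ on the heaviest surviving component. A hedged sketch, reusing the hypothetical wvi_cost helper from above:

```python
def wcoc_feasible(adj, weight, X, k, l):
    """Check the wCOC conditions for a candidate set X:
    weight(X) <= k and every component of G - X has weight at most l."""
    x_weight = sum(weight[v] for v in X)
    if x_weight > k:
        return False
    # wvi_cost (sketched earlier) returns weight(X) plus the heaviest
    # component weight of G - X, so subtracting weight(X) isolates the latter.
    heaviest = wvi_cost(adj, weight, X) - x_weight
    return heaviest <= l

# Relation to wVI: G has weighted vertex integrity at most p exactly when
# wcoc_feasible(adj, weight, X, k, p - k) holds for some set X and some
# split of the single budget p into k and p - k.
```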
Architecture-based Qualitative Risk Analysis for Availability of IT Infrastructures
An IT risk assessment must deliver the best possible quality of results in a time-effective way. Organisations typically customise general-purpose standard risk assessment methods so that they satisfy their specific requirements. In this paper we present the QualTD Model and method, which is meant to be employed together with standard risk assessment methods for the qualitative assessment of availability risks of IT architectures, or parts of them. The QualTD Model is based on our previous quantitative model, but is geared towards industrial practice, since it does not require quantitative data, which is often too costly to acquire. We validate the model and method in a real-world case by performing a risk assessment on the authentication and authorisation system of a large multinational company and by evaluating the results with respect to the goals of the stakeholders of the system. We also review the most popular standard risk assessment methods and analyse which of them can actually be integrated with our QualTD Model.
An Automated Social Graph De-anonymization Technique
We present a generic and automated approach to re-identifying nodes in
anonymized social networks which enables novel anonymization techniques to be
quickly evaluated. It uses machine learning (decision forests) to match
pairs of nodes in disparate anonymized sub-graphs. The technique uncovers
artefacts and invariants of any black-box anonymization scheme from a small set
of examples. Despite a high degree of automation, classification succeeds with
significant true positive rates even when small false positive rates are
sought. Our evaluation uses publicly available real world datasets to study the
performance of our approach against real-world anonymization strategies, namely
the schemes used to protect datasets of The Data for Development (D4D)
Challenge. We show that the technique is effective even when only small numbers
of samples are used for training. Further, since it detects weaknesses in the
black-box anonymization scheme it can re-identify nodes in one social network
when trained on another.
Comment: 12 pages
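As an illustration of the pair-matching step described above (the paper's actual feature set and training pipeline are richer), one could train a decision-forest classifier on simple structural features of candidate node pairs drawn from the two anonymized sub-graphs; all names and features below are illustrative.

```python
import networkx as nx
from sklearn.ensemble import RandomForestClassifier

def pair_features(g1, g2, u, v):
    """Structural features comparing node u in graph g1 with node v in g2."""
    return [
        g1.degree(u), g2.degree(v),
        abs(g1.degree(u) - g2.degree(v)),
        nx.clustering(g1, u), nx.clustering(g2, v),
    ]

def train_matcher(g1, g2, matching_pairs, non_matching_pairs):
    """Fit a decision forest on labelled examples of matching and
    non-matching node pairs between two anonymized sub-graphs."""
    X = [pair_features(g1, g2, u, v)
         for u, v in matching_pairs + non_matching_pairs]
    y = [1] * len(matching_pairs) + [0] * len(non_matching_pairs)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, y)
    return clf

# At prediction time, score all candidate pairs and keep those above a
# probability threshold chosen to trade true positives against false
# positives, e.g. clf.predict_proba([pair_features(g1, g2, u, v)])[0][1].
```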
Characterization of complex networks: A survey of measurements
Each complex network (or class of networks) presents specific topological
features which characterize its connectivity and highly influence the dynamics
of processes executed on the network. The analysis, discrimination, and
synthesis of complex networks therefore rely on the use of measurements capable
of expressing the most relevant topological features. This article presents a
survey of such measurements. It includes general considerations about complex
network characterization, a brief review of the principal models, and the
presentation of the main existing measurements. Important related issues
covered in this work comprise the representation of the evolution of complex
networks in terms of trajectories in several measurement spaces, the analysis
of the correlations between some of the most traditional measurements,
perturbation analysis, as well as the use of multivariate statistics for
feature selection and network classification. Depending on the network and the
analysis task one has in mind, a specific set of features may be chosen. It is
hoped that the present survey will help the proper application and
interpretation of measurements.
Comment: A working manuscript with 78 pages, 32 figures. Suggestions of
measurements for inclusion are welcomed by the author
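For readers who want to compute a few of the classic measurements this kind of survey covers, the sketch below uses networkx to gather degree statistics, clustering, assortativity, and average shortest path length; the selection of measurements is illustrative, not the survey's full list.

```python
import networkx as nx

def basic_measurements(G):
    """A handful of classic topological measurements of a complex network."""
    degrees = [d for _, d in G.degree()]
    measures = {
        "n_nodes": G.number_of_nodes(),
        "n_edges": G.number_of_edges(),
        "mean_degree": sum(degrees) / len(degrees),
        "clustering": nx.average_clustering(G),
        "assortativity": nx.degree_assortativity_coefficient(G),
    }
    # Shortest-path statistics are only defined on connected graphs.
    if nx.is_connected(G):
        measures["avg_shortest_path"] = nx.average_shortest_path_length(G)
    return measures

# Example: contrast a random graph with a scale-free one.
print(basic_measurements(nx.erdos_renyi_graph(200, 0.05)))
print(basic_measurements(nx.barabasi_albert_graph(200, 3)))
```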
Learning-based Analysis on the Exploitability of Security Vulnerabilities
The purpose of this thesis is to develop a tool that uses machine learning techniques to make predictions about whether or not a given vulnerability will be exploited. Such a tool could help organizations such as electric utilities to prioritize their security patching operations. Three different models, based on a deep neural network, a random forest, and a support vector machine respectively, are designed and implemented. Training data for these models is compiled from a variety of sources, including the National Vulnerability Database published by NIST and the Exploit Database published by Offensive Security. Extensive experiments are conducted, including testing the accuracy of each model, dynamically training the models on a rolling window of training data, and filtering the training data by various features. Of the chosen models, the deep neural network and the support vector machine show the highest accuracy (approximately 94% and 93%, respectively), and could be developed by future researchers into an effective tool for vulnerability analysis.
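As a rough illustration of the kind of model comparison the thesis performs (its own feature engineering from NVD and Exploit-DB records is more involved), one might encode each CVE as a numeric feature vector, label it by whether a public exploit exists, and compare a random forest against an SVM; the feature choices and names below are hypothetical.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def evaluate_models(X, y):
    """Compare two classifiers on hypothetical CVE feature vectors X
    (e.g. CVSS base score, attack vector, complexity, days since
    publication) with labels y (1 = public exploit known, 0 = none)."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, random_state=0)
    models = {
        "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
        "svm": SVC(kernel="rbf", C=1.0),
    }
    scores = {}
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        scores[name] = accuracy_score(y_te, model.predict(X_te))
    return scores

# A rolling-window variant, as described above, would repeatedly refit the
# models on only the most recently published vulnerabilities.
```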