Determining Distributions of Security Means for WSNs based on the Model of a Neighbourhood Watch
Neighbourhood watch is a concept that allows a community to distribute a
complex security task among all of its members. Members of the community carry
out individual security tasks that contribute to the overall security of the
community. This reduces the workload of each individual while securing all
members and allowing them to carry out a multitude of security tasks. Wireless sensor
networks (WSNs) are composed of resource-constrained, independent, battery-driven
computers that act as nodes and communicate wirelessly. Security in WSNs is essential.
Without sufficient security, an attacker is able to eavesdrop on the
communication, tamper with monitoring results, or deny critical nodes their
service so as to cut off larger parts of the network. The resource-constrained
nature of sensor nodes prevents them from running full-fledged security
protocols. Instead, it is necessary to assess the most significant security
threats and implement specialised protocols. A neighbourhood-watch inspired
distributed security scheme for WSNs has been introduced by Langendörfer. Its
goal is to increase the variety of attacks a WSN can fend off. A framework of
such complexity has to be designed in multiple steps. Here, we introduce an
approach to determine distributions of security means on large-scale static
homogeneous WSNs. To this end, we model WSNs as undirected graphs in which two
nodes are connected iff they are within transmission range. The framework aims to
partition the graph among distinct security means, resulting in the targeted
distribution. The underlying problems turn out to be NP-hard, and we attempt to
solve them using linear programs (LPs). To evaluate the computability of the
LPs, we generate large numbers of random λ-precision unit disk graphs
(UDGs) as representations of WSNs. For this purpose, we introduce a novel
λ-precision UDG generator to model WSNs with a minimal distance
between nodes.
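The λ-precision sampling idea (a minimum pairwise distance between nodes, with edges between nodes in transmission range) can be sketched with simple rejection sampling. This is an illustrative sketch, not the authors' generator; the function name and parameters are assumptions for this example.

```python
import math
import random

def lambda_precision_udg(n, lam, radius, max_tries=10000, seed=0):
    """Sample n points in the unit square with pairwise distance >= lam
    (lambda-precision), then connect any two points within transmission
    range `radius` -- a unit disk graph as a simple WSN model."""
    rng = random.Random(seed)
    pts = []
    tries = 0
    while len(pts) < n and tries < max_tries:
        p = (rng.random(), rng.random())
        # Rejection step: keep the point only if it respects the
        # minimal distance lam to all previously placed points.
        if all(math.dist(p, q) >= lam for q in pts):
            pts.append(p)
        tries += 1
    if len(pts) < n:
        raise RuntimeError("could not place all nodes; lower lam or n")
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if math.dist(pts[i], pts[j]) <= radius]
    return pts, edges
```

Rejection sampling is only practical when the requested density is well below the packing limit for disks of radius λ/2; denser instances need a smarter generator.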
Quantum Algorithms for Graph Coloring and other Partitioning, Covering, and Packing Problems
Let U be a universe on n elements, let k be a positive integer, and let F be
a family of (implicitly defined) subsets of U. We consider the problems of
partitioning U into k sets from F, covering U with k sets from F, and packing k
non-intersecting sets from F into U. Classically, these problems can be solved
via inclusion-exclusion in O*(2^n) time [BjorklundHK09]. Quantumly, there are
faster algorithms for graph coloring with running time O(1.9140^n) [ShimizuM22]
and for Set Cover with a small number of sets with running time O(1.7274^n
|F|^O(1)) [AmbainisBIKPV19]. In this paper, we give a quantum speedup for Set
Partition, Set Cover, and Set Packing whenever there is a classical enumeration
algorithm that lends itself to a quadratic quantum speedup, which, for any
subinstance on a subset X of U, enumerates at least one member of a
k-partition, k-cover, or k-packing (if one exists) restricted to (or projected
onto, in the case of k-cover) the set X in O*(c^{|X|}) time with c<2.
Our bounded-error quantum algorithm runs in O*((2+c)^(n/2)) time for Set
Partition, Set Cover, and Set Packing. When c <= 1.147899, our algorithm is
slightly faster than O*((2+c)^(n/2)); when c approaches 1, it matches the
running time of [AmbainisBIKPV19] for Set Cover when |F| is subexponential in
n.
For Graph Coloring, we further improve the running time to O(1.7956^n) by
leveraging faster algorithms for coloring with a small number of colors to
better balance our divide-and-conquer steps. For Domatic Number, we obtain an
O((2-ε)^n) running time for some ε > 0.
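The classical O*(2^n) inclusion-exclusion baseline mentioned above [BjorklundHK09] can be made concrete for covering: the number of ordered k-tuples of sets from F whose union is U equals the sum over subsets X of U of (-1)^(n-|X|) a(X)^k, where a(X) counts the sets of F contained in X. A small sketch with bitmask sets (illustrative of the classical baseline only, not the quantum algorithm):

```python
def count_k_covers(n, family, k):
    """Inclusion-exclusion count of ordered k-tuples (S_1,...,S_k) from
    `family` with S_1 | ... | S_k = {0,...,n-1}, iterating over all 2^n
    subsets X of the universe. Sets are bitmasks over n elements."""
    total = 0
    for X in range(1 << n):
        # a(X): how many sets of the family fit inside X.
        a = sum(1 for S in family if S & ~X == 0)
        sign = -1 if (n - bin(X).count("1")) % 2 else 1
        total += sign * a ** k
    return total
```

A k-cover exists iff the returned count is positive; the same sieve specializes to partitions and packings with ranked variants of a(X).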
Meta-Kernelization with Structural Parameters
Meta-kernelization theorems are general results that provide polynomial
kernels for large classes of parameterized problems. The known
meta-kernelization theorems, in particular the results of Bodlaender et al.
(FOCS'09) and of Fomin et al. (FOCS'10), apply to optimization problems
parameterized by solution size. We present the first meta-kernelization
theorems that use structural parameters of the input rather than the solution
size. Let C be a graph class. We define the C-cover number of a graph to be
the smallest number of modules its vertex set can be partitioned into such
that each module induces a subgraph that belongs to the class C. We show that
each graph problem that can be expressed in Monadic Second Order (MSO) logic
has a polynomial kernel with a linear number of vertices when parameterized by
the C-cover number for any fixed class C of bounded rank-width (or
equivalently, of bounded clique-width, or bounded Boolean width). Many graph
problems such as Independent Dominating Set, c-Coloring, and c-Domatic Number
are covered by this meta-kernelization result. Our second result applies to MSO
expressible optimization problems, such as Minimum Vertex Cover, Minimum
Dominating Set, and Maximum Clique. We show that these problems admit a
polynomial annotated kernel with a linear number of vertices.
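To make the C-cover number concrete, here is a brute-force sketch for tiny graphs (illustrative only, not from the paper): a module is a vertex set whose members are indistinguishable from outside, and `is_clique` stands in for membership in a fixed class C of bounded clique-width (cliques have clique-width 2).

```python
from itertools import combinations

def is_module(adj, part, vertices):
    """part is a module: every outside vertex sees all of part or none."""
    ps = set(part)
    for v in vertices - ps:
        hits = sum(1 for u in ps if u in adj[v])
        if hits not in (0, len(ps)):
            return False
    return True

def is_clique(adj, part):
    return all(b in adj[a] for a, b in combinations(part, 2))

def set_partitions(elems):
    """Enumerate all partitions of a list of elements."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for smaller in set_partitions(rest):
        for i, block in enumerate(smaller):
            yield smaller[:i] + [block + [first]] + smaller[i + 1:]
        yield smaller + [[first]]

def cover_number(adj, inside=is_clique):
    """Smallest number of modules partitioning V(G) such that each module
    induces a graph in the class C (here: cliques). Exponential-time
    brute force, usable only on tiny graphs."""
    vertices = set(adj)
    best = len(vertices)
    for partition in set_partitions(sorted(adj)):
        if all(is_module(adj, p, vertices) and inside(adj, p) for p in partition):
            best = min(best, len(partition))
    return best
```

For a path on three vertices every non-singleton clique fails to be a module, so its clique-cover number is 3, while a triangle is covered by the single module V(G).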
Contracting edges to destroy a pattern: A complexity study
Given a graph G and an integer k, the objective of the Π-Contraction
problem is to check whether there exist at most k edges in G such that
contracting them in G results in a graph satisfying the property Π. We
investigate the problem where Π is `H-free' (without any induced copies of
H). It is trivial that H-free Contraction is polynomial-time solvable if H is a
complete graph on at most two vertices. We prove that, in all other cases, the
problem is NP-complete. We then investigate the fixed-parameter tractability of
these problems. We prove that whenever H is a tree, except for seven trees,
H-free Contraction is W[2]-hard. This result, along with the known results,
leaves behind three unknown cases among trees.
Comment: 30 pages, 10 figures; a short version is accepted to FCT 202
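The two basic ingredients of the problem, contracting an edge and testing H-freeness, can be sketched for the simplest non-trivial pattern H = P3 (an illustrative brute force, not the paper's algorithms; the graph representation is an assumption):

```python
from itertools import combinations

def contract(adj, u, v):
    """Contract edge uv in a simple graph given as {vertex: neighbour-set};
    the merged vertex keeps the label u, parallel edges and loops vanish."""
    new = {w: set(ns) for w, ns in adj.items() if w != v}
    for w in new:
        new[w].discard(v)
    new[u] |= {w for w in adj[v] if w != u}
    for w in adj[v]:
        if w != u:
            new[w].add(u)
    return new

def has_induced_p3(adj):
    """True iff the graph has an induced path on 3 vertices, i.e. it is
    not a disjoint union of cliques (exactly 2 of 3 possible edges)."""
    for a, b, c in combinations(adj, 3):
        edges = (b in adj[a]) + (c in adj[a]) + (c in adj[b])
        if edges == 2:
            return True
    return False
```

Contracting the single middle edge of a P3 already destroys the pattern, which is the kind of local effect the hardness constructions must control globally.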
Courcelle's Theorem: Overview and Applications
Courcelle's Theorem states that any graph property expressible in monadic second-order logic can be decided in O(f(k)·n) time for graphs of treewidth k. This paper gives a broad overview of how this theorem is proved and outlines tools available to help express graph properties in monadic second-order logic.
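As a standard illustration of the kind of property the theorem covers (a textbook example, not taken from this paper), 3-colourability is MSO-expressible by quantifying over three vertex sets that partition V and contain no monochromatic edge:

```latex
\exists R\,\exists G\,\exists B\;
\bigl[\forall v\,(v \in R \lor v \in G \lor v \in B)\bigr]
\;\land\;
\bigl[\forall u\,\forall v\,\bigl(E(u,v) \to
\lnot\bigl((u \in R \land v \in R) \lor (u \in G \land v \in G)
\lor (u \in B \land v \in B)\bigr)\bigr)\bigr]
```

By Courcelle's Theorem this single formula yields a linear-time algorithm on every graph class of bounded treewidth.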
Grouped Domination Parameterized by Vertex Cover, Twin Cover, and Beyond
A dominating set S of a graph G is called an r-grouped dominating set if S
can be partitioned into units S_1, ..., S_k such that the size of each
unit S_i is r and the subgraph of G induced by S_i is connected. The
concept of r-grouped dominating sets generalizes several well-studied
variants of dominating sets with requirements for connected component sizes,
such as ordinary dominating sets (r = 1), paired dominating sets (r = 2),
and connected dominating sets (r is arbitrary and k = 1). In this paper, we
investigate the computational complexity of r-Grouped Dominating Set, which
is the problem of deciding whether a given graph has an r-grouped dominating
set with at most k units. For general r, the problem is hard to solve in
various senses because the hardness of the connected dominating set is
inherited. We thus focus on the case in which r is a constant or a parameter,
but we see that the problem for every fixed r is still hard to solve. Given
this hardness, we consider the parameterized complexity with respect to
well-studied graph structural parameters. We first see that the problem is
fixed-parameter tractable for r and treewidth, because the condition of
r-grouped domination for a constant r can be represented in monadic
second-order logic (MSO2). This is good news, but the running time is not
practical. We then design an FPT algorithm for general r parameterized by
the twin cover number, which is a parameter between vertex cover number and
clique-width. For paired dominating sets and trio dominating sets, i.e.,
r ∈ {2, 3}, we can speed up the algorithm. We further discuss the relationship
between FPT results and graph parameters, which draws the parameterized
complexity landscape of r-Grouped Dominating Set.
Comment: 23 pages, 6 figures
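The definition above can be made concrete with a verifier (an illustrative sketch, not from the paper; graphs are dicts mapping vertices to neighbour sets):

```python
def is_grouped_dominating_set(adj, units, r):
    """Check that `units` (a list of disjoint vertex sets) forms an
    r-grouped dominating set of the graph `adj`: units have size r,
    each induces a connected subgraph, and their union dominates G."""
    chosen = set().union(*units) if units else set()
    if sum(len(u) for u in units) != len(chosen):
        return False  # units must be pairwise disjoint
    for unit in units:
        if len(unit) != r:
            return False
        # DFS restricted to the unit tests induced connectivity.
        unit = set(unit)
        start = next(iter(unit))
        seen, stack = {start}, [start]
        while stack:
            x = stack.pop()
            for y in adj[x] & unit:
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        if seen != unit:
            return False
    # Domination: every vertex is chosen or has a chosen neighbour.
    return all(v in chosen or adj[v] & chosen for v in adj)
```

On the path 0-1-2-3, the single unit {1, 2} is a paired (r = 2) dominating set, while {0, 1} fails domination and {0, 3} fails connectivity.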
Learning Combinatorial Node Labeling Algorithms
We present a graph neural network to learn graph coloring heuristics using
reinforcement learning. Our learned deterministic heuristics give better
solutions than classical degree-based greedy heuristics and only take seconds
to evaluate on graphs with tens of thousands of vertices. As our approach is
based on policy gradients, it also learns a probabilistic policy. These
probabilistic policies outperform all greedy coloring baselines and a machine
learning baseline. Our approach generalizes several previous machine-learning
frameworks, which were applied to problems like minimum vertex cover. We also
demonstrate that our approach outperforms two greedy heuristics on minimum
vertex cover.
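The degree-based greedy baseline such learned heuristics are compared against can be sketched as follows (a standard largest-degree-first greedy coloring, assumed for illustration rather than taken from the paper):

```python
def greedy_coloring(adj, order=None):
    """Largest-degree-first greedy coloring: visit vertices in order of
    decreasing degree and give each the smallest color not already used
    by a colored neighbour. Returns a dict vertex -> color index."""
    if order is None:
        order = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
    color = {}
    for v in order:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color
```

A learned policy effectively replaces the fixed degree ordering with one chosen adaptively per graph, which is where the reported gains over this baseline come from.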