Degree Associated Edge Reconstruction Number of Graphs with Regular Pruned Graph
An ecard of a graph is a subgraph formed by deleting a single edge. A da-ecard specifies the degree of the deleted edge along with the ecard. The degree associated edge reconstruction number of a graph is the minimum number of da-ecards that uniquely determines the graph. The adversary degree associated edge reconstruction number of a graph is the minimum number k such that every collection of k da-ecards of the graph uniquely determines it. The pruned graph of a graph that is not a tree is its maximal subgraph without end vertices. It is shown that the degree associated edge reconstruction number of complete multipartite graphs, and of some connected graphs with a regular pruned graph, takes one of two small values. We also determine both reconstruction numbers for corona products of standard graphs.
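The pruned graph defined in this abstract can be computed by repeatedly deleting end (degree-1) vertices until none remain. A minimal sketch, with an edge-set representation and names of my own choosing rather than anything from the paper:

```python
# Sketch: compute the pruned graph (maximal subgraph with no end vertices)
# by repeatedly deleting degree-1 vertices. Representation and names are
# illustrative, not from the paper.

def pruned_graph(edges):
    """Return the edge set left after iteratively removing end vertices."""
    edges = set(map(frozenset, edges))
    while True:
        deg = {}
        for e in edges:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        leaves = {v for v, d in deg.items() if d == 1}
        if not leaves:
            return edges
        edges = {e for e in edges if not (e & leaves)}

# A triangle with a pendant path attached: pruning strips the path
# and keeps only the cycle.
g = [(1, 2), (2, 3), (3, 1), (3, 4), (4, 5)]
core = pruned_graph(g)
```

On this example the pendant path 3-4-5 is peeled off in two rounds, leaving the triangle, which has no end vertices and is therefore its own pruned graph.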
A Generic Framework for Engineering Graph Canonization Algorithms
The state-of-the-art tools for practical graph canonization are all based on
the individualization-refinement paradigm, and their difference is primarily in
the choice of heuristics they include and in the actual tool implementation. It
is thus not possible to make a direct comparison of how individual algorithmic
ideas affect the performance on different graph classes.
We present an algorithmic software framework that facilitates implementation
of heuristics as independent extensions to a common core algorithm. It
therefore becomes easy to perform a detailed comparison of the performance and
behaviour of different algorithmic ideas. Implementations are provided of a
range of algorithms for tree traversal, target cell selection, and node
invariants, including choices from the literature and new variations. The
framework readily supports extraction and visualization of detailed data from
separate algorithm executions for subsequent analysis and development of new
heuristics.
Using collections of different graph classes we investigate the effect of
varying the selections of heuristics, often revealing exactly which individual
algorithmic choice is responsible for particularly good or bad performance. On
several benchmark collections, including a newly proposed class of difficult
instances, we additionally find that our implementation performs better than
the current state-of-the-art tools.
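The refinement half of the individualization-refinement paradigm mentioned above is classically instantiated by colour refinement (1-dimensional Weisfeiler-Leman), which repeatedly splits vertex colour classes by the multiset of neighbouring colours. A minimal sketch, with an adjacency-list input format and names that are illustrative rather than taken from the framework:

```python
# Sketch of the refinement step underlying individualization-refinement:
# classic colour refinement (1-dimensional Weisfeiler-Leman). The input
# format and function names are illustrative, not from the framework.

def colour_refinement(adj):
    """Iteratively split colour classes by multisets of neighbour colours."""
    colour = {v: 0 for v in adj}
    while True:
        # Signature: own colour plus sorted multiset of neighbour colours.
        sig = {v: (colour[v], tuple(sorted(colour[u] for u in adj[v])))
               for v in adj}
        # Re-index distinct signatures to consecutive integers.
        new = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        refined = {v: new[sig[v]] for v in adj}
        if refined == colour:      # partition is stable: stop
            return colour
        colour = refined

# Path on 4 vertices: endpoints and inner vertices separate into classes.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
classes = colour_refinement(path)
```

In a full canonization tool this stable partition is then broken by individualizing one vertex of a chosen target cell and refining again, which is exactly where the tree-traversal and target-cell-selection heuristics compared by the framework come into play.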
The planted k-factor problem
We consider the problem of recovering an unknown k-factor, hidden in a
weighted random graph. For k = 1 this is the planted matching problem, while
the case k = 2 is closely related to the planted travelling salesman problem.
The inference problem is solved by exploiting the information arising from the
use of two different distributions for the weights on the edges inside and
outside the planted sub-graph. We argue that, in the large size limit, a phase
transition can appear between a full and a partial recovery phase as a function
of the signal-to-noise ratio. We give a criterion for the location of the
transition.
Comment: 21 pages, 4 figures
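The two-distribution construction described in this abstract can be made concrete by generating a toy instance: weights on the hidden 1-factor (a perfect matching) are drawn from one distribution, all other edge weights from another. The specific distributions and parameters below are illustrative choices, not those analysed in the paper:

```python
# Toy instance of the planted matching (k = 1) setup: planted edges carry
# weights from one distribution, background edges from another. The
# exponential/Gaussian choice and the mean parameter are illustrative only.
import random

def planted_matching_instance(n, mu=2.0, seed=0):
    rng = random.Random(seed)
    planted = [(2 * i, 2 * i + 1) for i in range(n // 2)]  # hidden 1-factor
    weights = {}
    for i in range(n):
        for j in range(i + 1, n):
            weights[(i, j)] = rng.expovariate(1.0)   # background edges
    for e in planted:
        weights[e] = rng.gauss(mu, 1.0)              # planted, shifted mean
    return planted, weights

planted, w = planted_matching_instance(6)
```

The inference task is then to recover `planted` from `w` alone; intuitively, the further apart the two weight distributions (the higher the signal-to-noise ratio), the easier full recovery becomes, which is the regime change the phase-transition criterion locates.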
Applications of Coding Theory to Massive Multiple Access and Big Data Problems
The broad theme of this dissertation is the design of schemes that admit iterative
algorithms with low computational complexity for some new problems arising in
massive multiple access and big data. Although bipartite Tanner graphs and
low-complexity iterative algorithms such as peeling and message-passing decoders
are very popular in the channel coding literature, they are not as widely used in
these areas of study, and this dissertation serves as an important step toward
bridging that gap. The contributions of this dissertation fall into the following
three parts.
In the first part of this dissertation, a timely and interesting multiple access
problem for a massive number of uncoordinated devices is considered wherein the
base station is interested only in recovering the list of messages without regard to the
identity of the respective sources. A coding scheme with polynomial encoding and
decoding complexities is proposed for this problem, the two main features of which
are (i) design of a close-to-optimal coding scheme for the T-user Gaussian multiple
access channel and (ii) a successive interference cancellation decoder. The
proposed coding scheme not only improves on the performance of the previously
best known coding scheme by approximately 13 dB but is also only about 6 dB away
from the random Gaussian coding information rate.
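The successive interference cancellation idea in feature (ii) can be illustrated with a deliberately simplified toy: two users' symbols are superposed at different power levels, and the receiver decodes the stronger user first, subtracts its contribution, and then decodes the weaker one. The integer power levels and the noiseless channel are simplifying assumptions of this sketch, not the dissertation's actual channel model:

```python
# Toy successive interference cancellation (SIC) over a noiseless channel:
# y = 10*s + w superposes a "strong" symbol s and a "weak" symbol w,
# both in {0..9}. Decode strong first, cancel it, then decode weak.
# The scaling and alphabet are illustrative simplifications.

def sic_decode(y, strong_scale=10):
    """Recover (strong, weak) symbols from their superposition."""
    strong = y // strong_scale    # decode the high-power user first
    y -= strong * strong_scale    # cancel its interference from the signal
    weak = y                      # what remains is the low-power user
    return strong, weak

y = 10 * 7 + 3                    # users transmit 7 (strong) and 3 (weak)
decoded = sic_decode(y)
```

Real SIC decoders do the same cancel-and-continue loop with noisy estimates and error-correcting codes per layer, but the order-by-power structure is the same.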
In the second part, Construction-D lattices are constructed in which the
underlying linear codes are nested binary spatially coupled low-density
parity-check (SC-LDPC) codes with uniform left and right degrees. It is shown
that the proposed lattices achieve the Poltyrev limit under multistage belief
propagation decoding. Leveraging this result, lattice codes constructed from
these lattices are applied to the three-user symmetric interference channel. For
channel gains within 0.39 dB of the very strong interference regime, the proposed
lattice coding scheme with the iterative belief propagation decoder, for target
error rates of approximately 10^-5, is only 2.6 dB away from the Shannon limit.
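The one-level special case of Construction D is Construction A: a lattice point is any integer vector that reduces, mod 2, to a codeword of the underlying binary linear code. A membership-test sketch using a tiny repetition code in place of the nested SC-LDPC codes the dissertation actually uses:

```python
# Sketch of the one-level special case of Construction D (Construction A):
# the lattice is C + 2Z^n for a binary linear code C. The [3,1] repetition
# code here is an illustrative stand-in for the nested SC-LDPC codes.

CODE = {(0, 0, 0), (1, 1, 1)}  # binary [3,1] repetition code

def in_lattice(x):
    """Check membership of an integer vector in the lattice C + 2Z^3."""
    return tuple(v % 2 for v in x) in CODE

# (1, 3, -1) is congruent to the codeword (1, 1, 1) mod 2: a lattice point.
# (1, 0, 1) is not a codeword mod 2, so it lies outside the lattice.
```

Full Construction D stacks several nested codes across successive powers of 2, and the multistage belief propagation decoder mentioned above decodes one binary level at a time, subtracting each decoded level before moving to the next.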
The third part focuses on support recovery in compressed sensing and the
non-adaptive group testing (GT) problems. Prior to this work, sensing schemes
based on left-regular sparse bipartite graphs and iterative recovery algorithms
based on a peeling decoder were proposed for these problems. These schemes
require O(K log N) and Ω(K log K log N) measurements, respectively, to recover
the sparse signal with high probability (w.h.p.), where N and K denote the
dimension and sparsity of the signal, respectively (K ≪ N). Also, the number of
measurements required to recover at least a (1 - ε) fraction of defective items
w.h.p. (approximate GT) is shown to be c_ε K log(N/K). In this dissertation,
sensing schemes based on left-and-right-regular bipartite graphs are analyzed
instead of the left-regular bipartite graphs. It is shown that this design
strategy achieves superior and sharper results. For the support recovery
problem, the number of measurements is reduced to the optimal lower bound of
Ω(K log(N/K)). Similarly, for approximate GT, the proposed scheme requires only
c_ε K log(N/K) measurements. For probabilistic GT, the proposed scheme requires
O(K log K log(N/K)) measurements, which is only a log K factor away from the
best known lower bound of Ω(K log(N/K)). Beyond the asymptotic regime, the
proposed schemes also demonstrate significant improvement in the required number
of measurements for finite values of K and N.
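The peeling-style decoding on a bipartite pooling graph described in this part can be sketched for group testing: a negative test clears all of its items, and a positive test whose other items are all cleared pins down a defective, which may in turn resolve further tests. The small hand-made pooling design below is illustrative, not the left-and-right-regular construction analysed in the dissertation:

```python
# Sketch of a peeling-style decoder for non-adaptive group testing on a
# bipartite pooling graph. Each test reports the OR of its pooled items.
# The tiny pooling design is a hand-made example, not the dissertation's
# left-and-right-regular construction.

def peel(tests, outcomes, n_items):
    status = ["?"] * n_items
    for t, out in zip(tests, outcomes):
        if not out:                       # negative test clears its items
            for i in t:
                status[i] = "clean"
    changed = True
    while changed:                        # peel resolved positive tests
        changed = False
        for t, out in zip(tests, outcomes):
            if out:
                unknown = [i for i in t if status[i] == "?"]
                if len(unknown) == 1 and all(status[i] == "clean"
                                             for i in t if i != unknown[0]):
                    status[unknown[0]] = "defective"
                    changed = True
    return status

tests = [[0, 1], [1, 2], [2, 3]]
defective = {2}
outcomes = [any(i in defective for i in t) for t in tests]
result = peel(tests, outcomes, 4)
```

Here the negative first test clears items 0 and 1, the second test then identifies item 2 as defective, while item 3 remains undetermined because its only test also contains the defective item 2; a well-designed left-and-right-regular graph makes such unresolved items rare.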
Network reconstruction from infection cascades