A Unifying Theory for Graph Transformation
The field of graph transformation studies the rule-based transformation of graphs. An important branch is the algebraic graph transformation tradition, in which approaches are defined and studied using the language of category theory. Most algebraic graph transformation approaches (such as DPO, SPO, SqPO, and AGREE) are opinionated about the local contexts that are allowed around matches for rules, and about how replacement in context should work exactly. The approaches also differ considerably in their underlying formal theories and their general expressiveness (e.g., not all frameworks allow duplication). This dissertation proposes an expressive algebraic graph transformation approach, called PBPO+, which is an adaptation of PBPO by Corradini et al. The central contribution is a proof that PBPO+ subsumes (under mild restrictions) DPO, SqPO, AGREE, and PBPO in the important categorical setting of quasitoposes. This result allows for a more unified study of graph transformation metatheory, methods, and tools. A concrete example of this is found in the second major contribution of this dissertation: a graph transformation termination method for PBPO+, based on decreasing interpretations, and defined for general categories. By applying the proposed encodings into PBPO+, this method can also be applied to DPO, SqPO, AGREE, and PBPO.
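To make the rewriting step concrete, here is a toy sketch of a double-pushout (DPO) style rewrite on simple directed graphs. This is our own set-based encoding for illustration only, not the dissertation's categorical formalism: a graph is a pair (nodes, edges), and a rule (L, K, R) deletes the part of L outside the interface K and glues in the part of R outside K.

```python
# Illustrative DPO-style rewrite step on simple directed graphs.
# Graph = (nodes, edges); rule = (L, K, R), each itself (nodes, edges),
# with K the preserved interface shared by L and R.

def dpo_step(graph, rule, match):
    """Apply rule (L, K, R) at `match`, a dict from L-nodes to host nodes.

    Returns the rewritten graph, or None if the dangling condition fails
    (a node to be deleted still touches an edge outside the match image).
    """
    nodes, edges = graph
    (Ln, Le), (Kn, Ke), (Rn, Re) = rule
    deleted_nodes = {match[v] for v in Ln - Kn}
    deleted_edges = {(match[a], match[b]) for (a, b) in Le - Ke}
    kept_edges = edges - deleted_edges
    # Dangling condition: deleted nodes must not touch surviving edges.
    if any(a in deleted_nodes or b in deleted_nodes for (a, b) in kept_edges):
        return None
    # Preserved nodes keep their host image; R-only nodes get fresh names
    # (tagged tuples here, assuming no collision with existing host nodes).
    host = lambda v: match[v] if v in Kn else ("new", v)
    new_nodes = (nodes - deleted_nodes) | {host(v) for v in Rn - Kn}
    new_edges = kept_edges | {(host(a), host(b)) for (a, b) in Re - Ke}
    return new_nodes, new_edges
```

For example, a rule with L = ({1,2}, {(1,2)}), K = ({1,2}, {}) and R = ({1,2,3}, {(1,3),(3,2)}) replaces a matched edge by a path through a fresh node, while a node-deleting rule is rejected (returns None) whenever the matched node still has edges outside the match, which is precisely the DPO dangling condition.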
LIPIcs, Volume 251, ITCS 2023, Complete Volume
LIPIcs, Volume 251, ITCS 2023, Complete Volum
Space-Efficient Parameterized Algorithms on Graphs of Low Shrubdepth
Dynamic programming on various graph decompositions is one of the most
fundamental techniques used in parameterized complexity. Unfortunately, even if
we consider concepts as simple as path or tree decompositions, such dynamic
programming uses space that is exponential in the decomposition's width, and
there are good reasons to believe that this is necessary. However, it has been
shown that in graphs of low treedepth it is possible to design algorithms which
achieve polynomial space complexity without requiring worse time complexity
than their counterparts working on tree decompositions of bounded width. Here,
treedepth is a graph parameter that, intuitively speaking, takes into account
both the depth and the width of a tree decomposition of the graph, rather than
the width alone.
Motivated by the above, we consider graphs that admit clique expressions with
bounded depth and label count, or equivalently, graphs of low shrubdepth (sd).
Here, sd is a bounded-depth analogue of cliquewidth, in the same way as td is a
bounded-depth analogue of treewidth. We show that also in this setting,
bounding the depth of the decomposition is a deciding factor for improving the
space complexity. Precisely, we prove that on n-vertex graphs equipped with a
tree-model (a decomposition notion underlying sd) of depth d and using k
labels, we can solve
- Independent Set in time 2^O(dk) * n^O(1) using O(dk^2 log n) space;
- Max Cut in time n^O(dk) using O(dk log n) space; and
- Dominating Set in time 2^O(dk) * n^O(1) using n^O(1) space via
a randomized algorithm.
We also establish a lower bound, conditional on a certain assumption about
the complexity of Longest Common Subsequence, which shows that at least in the
case of Independent Set the exponent of the parametric factor in the time
complexity has to grow with dk if one wishes to keep the space complexity
polynomial.
Comment: Conference version to appear at the European Symposium on Algorithms
(ESA 2023).
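The space-saving idea the abstract alludes to is easiest to see in the simpler treedepth setting it builds on. The following sketch (our own illustration, not the paper's algorithm) computes a maximum independent set by branching along an elimination forest: instead of exponential-size dynamic-programming tables, it stores only the chosen vertices on the current root-to-leaf path, so the extra space is proportional to the depth of the decomposition.

```python
# Polynomial-space branching for Maximum Independent Set on an elimination
# forest (a treedepth decomposition). In such a forest every edge of the
# graph joins an ancestor-descendant pair, so checking a vertex against the
# chosen ancestors on the current path suffices for independence.

def mis(children, roots, adj, chosen=frozenset()):
    """children: child lists of an elimination forest of the graph;
    adj: adjacency sets; chosen: vertices picked on the current path.
    Returns the maximum independent set size of the forest below `roots`."""
    best = 0
    for v in roots:
        exclude = mis(children, children[v], adj, chosen)    # v not in S
        include = -1
        if not (adj[v] & chosen):                            # v compatible
            include = 1 + mis(children, children[v], adj, chosen | {v})
        best += max(exclude, include)
    return best
```

Trees of the forest are independent subproblems, so their optima are summed; the recursion depth, and hence the extra space, is bounded by the depth of the decomposition rather than by an exponential table size.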
On the Inherent Anonymity of Gossiping
Detecting the source of a gossip is a critical issue, related to identifying
patient zero in an epidemic, or the origin of a rumor in a social network.
Although it is widely acknowledged that random and local gossip communications
make source identification difficult, there exists no general quantification of
the level of anonymity provided to the source. This paper presents a principled
method based on differential privacy to analyze the inherent
source anonymity of gossiping for a large class of graphs. First, we quantify
the fundamental limit of source anonymity any gossip protocol can guarantee in
an arbitrary communication graph. In particular, our result indicates that when
the graph has poor connectivity, no gossip protocol can guarantee any
meaningful level of differential privacy. This prompted us to further analyze
graphs with controlled connectivity. We prove on these graphs that a large
class of gossip protocols, namely cobra walks, offers tangible differential
privacy guarantees to the source. In doing so, we introduce an original proof
technique based on the reduction of a gossip protocol to what we call a random
walk with probabilistic die out. This proof technique is of independent
interest to the gossip community and readily extends to other protocols
inherited from the security community, such as the Dandelion protocol.
Interestingly, our tight analysis precisely captures the trade-off between
dissemination time of a gossip protocol and its source anonymity.
Comment: Full version of the DISC 2023 paper.
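A cobra walk, the protocol family the abstract analyzes, is simple to simulate. In the sketch below (parameter values and structure are our illustration), every active vertex pushes the rumor to one random neighbor per round, or to two with probability branch_p, and copies landing on the same vertex coalesce; the returned round count is the dissemination time that the abstract trades off against anonymity.

```python
import random

# Toy simulation of a cobra walk (coalescing branching random walk) on an
# undirected graph given as adjacency sets. Assumes a connected graph with
# no isolated vertices, so dissemination terminates with probability 1.

def cobra_walk(adj, source, branch_p=0.5, rng=random):
    informed, active, rounds = {source}, {source}, 0
    while len(informed) < len(adj):
        nxt = set()
        for v in active:
            k = 2 if rng.random() < branch_p else 1   # branch or forward
            nxt.update(rng.sample(sorted(adj[v]), min(k, len(adj[v]))))
        informed |= nxt       # rumor spreads to the pushed-to vertices
        active = nxt          # copies on the same vertex coalesce into one
        rounds += 1
    return rounds, informed
```

Raising branch_p speeds up dissemination but concentrates early activity near the source, which is the qualitative trade-off between dissemination time and source anonymity that the paper makes precise.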
Consistency-Checking Problems: A Gateway to Parameterized Sample Complexity
Recently, Brand, Ganian and Simonov introduced a parameterized refinement of
the classical PAC-learning sample complexity framework. A crucial outcome of
their investigation is that for a very wide range of learning problems, there
is a direct and provable correspondence between fixed-parameter
PAC-learnability (in the sample complexity setting) and the fixed-parameter
tractability of a corresponding "consistency checking" search problem (in the
setting of computational complexity). The latter can be seen as generalizations
of classical search problems where instead of receiving a single instance, one
receives multiple yes- and no-examples and is tasked with finding a solution
which is consistent with the provided examples.
Apart from a few initial results, consistency checking problems are almost
entirely unexplored from a parameterized complexity perspective. In this
article, we provide an overview of these problems and their connection to
parameterized sample complexity, with the primary aim of facilitating further
research in this direction. Afterwards, we establish the fixed-parameter
(in)tractability of some of the arguably most natural consistency checking
problems on graphs, and show that their complexity-theoretic behavior is
surprisingly different from that of classical decision problems. Our new
results cover consistency checking variants of problems as diverse as (k-)Path,
Matching, 2-Coloring, Independent Set and Dominating Set, among others.
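To give one concrete reading of such a problem (the precise formalization below is our own simplified illustration, not necessarily the one used in the article): take Independent Set, let every example be an edge set over a shared vertex set, and ask for a size-k vertex set that is independent in every yes-example and not independent in any no-example.

```python
from itertools import combinations

# Brute-force consistency checking for a toy Independent Set variant:
# find S of size k that is an independent set in every yes-example and
# contains an edge in every no-example; None if no consistent S exists.

def consistent_independent_set(vertices, k, yes_graphs, no_graphs):
    def independent(S, edges):
        return not any(a in S and b in S for (a, b) in edges)
    for cand in combinations(vertices, k):
        S = set(cand)
        if all(independent(S, g) for g in yes_graphs) and \
           all(not independent(S, g) for g in no_graphs):
            return S
    return None   # no hypothesis of size k is consistent with the examples
```

The exhaustive search over all size-k subsets already hints at why the parameterized complexity of such problems can diverge from the single-instance decision problem: the examples constrain the solution jointly, rather than one instance at a time.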
Recurrent segmentation meets block models in temporal networks
A popular approach to model interactions is to represent them as a network
with nodes being the agents and the interactions being the edges. Interactions
are often timestamped, which leads to having timestamped edges. Many real-world
temporal networks have a recurrent or possibly cyclic behaviour. For example,
social network activity may be heightened during certain hours of day. In this
paper, our main interest is to model recurrent activity in such temporal
networks. As a starting point we use stochastic block model, a popular choice
for modelling static networks, where nodes are split into groups. We extend
this model to temporal networks by modelling the edges with a Poisson process.
We make the parameters of the process time-dependent by segmenting the
timeline into segments. To enforce the recurring activity we require that only
a limited number of distinct parameter sets may be used, that is, several, not
necessarily consecutive, segments must share their parameters. We prove that
searching for the optimal blocks and segmentation is an NP-hard problem.
Consequently, we split the problem into 3 subproblems where we optimize blocks,
model parameters, and segmentation in turn while keeping the remaining
structures fixed. We propose an iterative algorithm whose per-iteration
running time is polynomial in n and m, the number of nodes and edges in the
network. We demonstrate experimentally that the number of required iterations
is typically low, that the algorithm is able to discover the ground truth from
synthetic datasets, and that certain real-world networks exhibit recurrent
behaviour, as the likelihood does not deteriorate when the number of allowed
parameter sets is lowered.
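The likelihood being optimized can be sketched directly (function and variable names below are ours, for illustration): each segment of the timeline is mapped to one of a few shared parameter groups, and the number of edges observed in a segment is Poisson-distributed with mean rate times duration.

```python
import math

# Log-likelihood of timestamped-edge counts under a piecewise-constant
# Poisson process whose segments share rates from a small set of groups.
# Mapping non-consecutive segments to the same group encodes recurrence.

def segment_log_likelihood(counts, lengths, group_of, rates):
    """counts[i]: edges observed in segment i; lengths[i]: its duration;
    group_of[i]: shared parameter group of segment i; rates[g]: group rate."""
    ll = 0.0
    for i, c in enumerate(counts):
        mean = rates[group_of[i]] * lengths[i]            # expected count
        ll += c * math.log(mean) - mean - math.lgamma(c + 1)  # Poisson log-pmf
    return ll
```

The three subproblems of the alternating scheme then fall out naturally: block structure and segmentation change which counts are aggregated, while the rates maximize this expression for a fixed grouping.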
Mining Butterflies in Streaming Graphs
This thesis introduces two main-memory systems sGrapp and sGradd for performing the fundamental analytic tasks of biclique counting and concept drift detection over a streaming graph. A data-driven heuristic is used to architect the systems. To this end, initially, the growth patterns of bipartite streaming graphs are mined and the emergence principles of streaming motifs are discovered. Next, the discovered principles are (a) explained by a graph generator called sGrow; and (b) utilized to establish the requirements for efficient, effective, explainable, and interpretable management and processing of streams. sGrow is used to benchmark stream analytics, particularly in the case of concept drift detection.
sGrow displays robust realization of streaming growth patterns independent of initial conditions, scale and temporal characteristics, and model configurations. Extensive evaluations confirm the simultaneous effectiveness and efficiency of sGrapp and sGradd. sGrapp achieves a mean absolute percentage error of up to 0.05/0.14 for the cumulative butterfly count in streaming graphs with uniform/non-uniform temporal distribution, and a processing throughput of 1.5 million data records per second. The throughput and estimation error of sGrapp are 160x higher and 0.02x lower than those of baselines. sGradd demonstrates improving performance over time, achieves zero false detection rates when there is no drift and when a drift has already been detected, and detects sequential drifts within zero to a few seconds of their occurrence, regardless of drift intervals.
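For reference, the quantity sGrapp estimates has a simple exact (non-streaming) baseline: a butterfly is a complete 2x2 biclique, and two left-side vertices sharing c right-side neighbors contribute c-choose-2 butterflies. A minimal sketch:

```python
from collections import defaultdict
from itertools import combinations

# Exact butterfly (2x2 biclique) count of a bipartite graph given as a
# list of (left vertex, right vertex) edges. Quadratic in the left side;
# streaming systems like sGrapp approximate this under memory constraints.

def count_butterflies(edges):
    left = defaultdict(set)
    for u, v in edges:
        left[u].add(v)
    total = 0
    for u1, u2 in combinations(sorted(left), 2):
        c = len(left[u1] & left[u2])
        total += c * (c - 1) // 2      # each pair of shared neighbors
    return total                       # forms one butterfly
```

The pairwise intersection over left vertices is exactly what becomes infeasible at streaming rates, which motivates approximate, window-based counting.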
Graph-Theoretic Analysis Methods for Biological Information Networks
Kyoto University doctoral dissertation, Doctor of Informatics (Kou No. 24730; Jouhaku No. 818; Shinsei||Jou||138, University Library). Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University. Examining committee: Prof. Tatsuya Akutsu (chief examiner), Prof. Akihiro Yamamoto, Prof. Yasuo Okabe. Qualified under Article 4, Paragraph 1 of the Degree Regulations.
SNACKs: Leveraging Proofs of Sequential Work for Blockchain Light Clients
The success of blockchains has led to ever-growing ledgers that are stored by all participating full nodes. In contrast, light clients only store small amounts of blockchain-related data and rely on the mediation of full nodes when interacting with the ledger. A broader adoption of blockchains calls for protocols that make this interaction trustless.
We revisit the design of light-client blockchain protocols from the perspective of classical proof-system theory, and explain the role that proofs of sequential work (PoSWs) can play in it. To this end, we define a new primitive called succinct non-interactive argument of chain knowledge (SNACK), a non-interactive proof system that provides clear security guarantees to a verifier (a light client) even when interacting only with a single dishonest prover (a full node). We show how augmenting any blockchain with any graph-labeling PoSW (GL-PoSW) enables SNACK proofs for this blockchain. We also provide a unified and extended definition of GL-PoSWs covering all existing constructions, and describe two new variants. We then show how SNACKs can be used to construct light-client protocols, and highlight some deficiencies of existing designs, along with mitigations. Finally, we introduce incremental SNACKs which could provide a new approach to light mining.
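The graph-labeling idea behind GL-PoSWs can be sketched in a few lines (the DAG encoding and hash wiring below are our illustration, not a construction from the paper): each node's label hashes a statement chi, the node id, and all parent labels, so computing the final label forces work sequential in the DAG's depth, while a verifier can spot-check individual labels.

```python
import hashlib

# Toy graph labeling for a PoSW-style DAG. parents[v] lists the
# predecessors of node v; node ids are assumed to respect a topological
# order, so every parent label exists before it is consumed.

def label_dag(chi, parents):
    labels = {}
    for v in sorted(parents):
        h = hashlib.sha256()
        h.update(chi + str(v).encode())   # bind labels to the statement chi
        for p in parents[v]:
            h.update(labels[p])           # chain in all parent labels
        labels[v] = h.digest()
    return labels
```

Because each label depends on chi, a prover cannot reuse labels computed for a different statement, which is the binding property that lets a SNACK-style verifier trust spot-checked labels from a single untrusted full node.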