First-Order Model-Checking in Random Graphs and Complex Networks
Complex networks are everywhere. They appear for example in the form of
biological networks, social networks, or computer networks and have been
studied extensively. Efficient algorithms to solve problems on complex networks
play a central role in today's society. Algorithmic meta-theorems show that
many problems can be solved efficiently. Since logic is a powerful tool to
model problems, it has been used to obtain very general meta-theorems. In this
work, we consider all problems definable in first-order logic and analyze which
properties of complex networks allow them to be solved efficiently.
The mathematical tools used to describe complex networks are random graph models.
We define a property of random graph models called
α-power-law-boundedness. Roughly speaking, a random graph is
α-power-law-bounded if it does not admit strong clustering and its
degree sequence is bounded by a power-law distribution with exponent at least
α (i.e., the fraction of vertices with degree k is roughly
k^(-α)).
We solve the first-order model-checking problem (parameterized by the length
of the formula) in almost linear FPT time on random graph models satisfying
this property with α ≥ 3. This means in particular that one can solve
every problem expressible in first-order logic in almost linear expected time
on these random graph models. This includes, for example, preferential attachment
graphs, Chung-Lu graphs, configuration graphs, and sparse Erdős-Rényi
graphs. Our results match known hardness results and generalize previous
tractability results on this topic.
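To make the degree condition concrete, here is a small illustrative Python sketch (not the paper's algorithm): it samples a Chung-Lu random graph whose expected degree sequence follows a power law with exponent α = 3. The weight formula and all parameters are our own illustrative choices.

```python
import random

def chung_lu_power_law(n, alpha, seed=0):
    """Sample a Chung-Lu random graph whose expected degrees follow a
    power law with exponent alpha (illustrative sketch only)."""
    rng = random.Random(seed)
    # Weight w_i ~ (n/(i+1))^(1/(alpha-1)) yields a degree tail with
    # exponent alpha; edge {i,j} appears with prob. min(1, w_i*w_j/W).
    w = [(n / (i + 1)) ** (1.0 / (alpha - 1)) for i in range(n)]
    W = sum(w)
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < min(1.0, w[i] * w[j] / W):
                adj[i].add(j)
                adj[j].add(i)
    return adj

adj = chung_lu_power_law(1000, alpha=3.0)
deg = [len(a) for a in adj]
# With exponent 3 the tail is thin: only a small fraction of
# vertices can have large degree.
tail = sum(1 for d in deg if d >= 20) / len(deg)
```

With α = 3 the expected degree of vertex i is roughly sqrt(n/(i+1)), so only a handful of early vertices are hubs, which is the "bounded by a power law" behaviour the abstract refers to.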
Polyhedral Combinatorics, Complexity & Algorithms for k-Clubs in Graphs
A k-club is a distance-based graph-theoretic generalization of a clique, originally introduced to model cohesive subgroups in social network analysis. k-Clubs represent low-diameter clusters in graphs and are suitable for various graph-based data mining applications. Unlike cliques, the k-club model is nonhereditary, meaning that a subset of a k-club is not necessarily itself a k-club. This imposes significant challenges in developing theory and algorithms for optimization problems associated with k-clubs.
We settle an open problem by establishing the intractability of testing inclusion-wise maximality of k-clubs for fixed k >= 2. This result is in contrast to the polynomial-time verifiability of maximal cliques, and is a direct consequence of the nonhereditary nature of k-clubs. A class of graphs for which this problem is polynomial-time solvable is also identified. We propose a distance-coloring-based upper-bounding scheme and a bounded-enumeration-based lower-bounding routine, and employ them in a combinatorial branch-and-bound algorithm for finding a maximum k-club. Computational results on graphs with up to 200 vertices are also provided.
The 2-club polytope of a graph is studied and a new family of facet-inducing inequalities for this polytope is discovered. This family of facets strictly contains all known nontrivial facets of the 2-club polytope as special cases, and identifies previously unknown facets of this polytope. The separation problem for these newly discovered facets is proved to be NP-complete, and it is shown that the 2-club polytope of trees can be completely described by the collection of these facets along with the nonnegativity constraints.
We also study the maximum 2-club problem under uncertainty. Given a random graph subject to probabilistic edge failures, we are interested in finding a large "risk-averse" 2-club.
Here, risk-aversion is achieved by modeling the loss in the 2-club property due to edge failures as a random loss, which is a function of the decision variables and uncertain parameters. Conditional Value-at-Risk (CVaR) is used as a quantitative measure of risk that is constrained in the model. A Benders decomposition scheme is utilized to develop a new decomposition algorithm for solving the CVaR-constrained maximum 2-club problem. A preliminary experiment is also conducted to compare the computational performance of the developed algorithm with our extension of an existing algorithm from the literature.
Industrial Engineering & Management
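The nonhereditary behaviour described above is easy to reproduce. The sketch below (our own illustration, not the dissertation's branch-and-bound code) checks the k-club property by running BFS inside the induced subgraph, then exhibits a 2-club with a subset that is not a 2-club.

```python
from collections import deque

def is_k_club(edges, S, k):
    """Return True iff vertex set S is a k-club: the subgraph induced
    by S is connected and has diameter at most k."""
    S = set(S)
    adj = {v: set() for v in S}
    for u, v in edges:
        if u in S and v in S:
            adj[u].add(v)
            adj[v].add(u)
    for s in S:
        # BFS distances measured inside the induced subgraph only
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        if len(dist) < len(S) or max(dist.values()) > k:
            return False
    return True

# A star: center c with leaves x and y.
star_edges = [("c", "x"), ("c", "y")]
whole = is_k_club(star_edges, {"c", "x", "y"}, 2)  # x and y are at distance 2 via c
part = is_k_club(star_edges, {"x", "y"}, 2)        # induced subgraph is disconnected
```

Here `whole` is True but `part` is False: dropping the center vertex destroys the 2-club property, which is exactly why maximality testing cannot simply check subsets as it can for cliques.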
Proceedings of the 8th Cologne-Twente Workshop on Graphs and Combinatorial Optimization
The Cologne-Twente Workshop (CTW) on Graphs and Combinatorial Optimization started off as a series of workshops organized biennially by either Köln University or Twente University. As its importance grew over time, it re-centered its geographical focus by including northern Italy (CTW04 in Menaggio, on Lake Como, and CTW08 in Gargnano, on Lake Garda). This year, CTW (in its eighth edition) will be staged in France for the first time: more precisely in the heart of Paris, at the Conservatoire National des Arts et Métiers (CNAM), between 2nd and 4th June 2009, by a mixed organizing committee with members from LIX, Ecole Polytechnique and CEDRIC, CNAM.
Classical and quantum sublinear algorithms
This thesis investigates the capabilities of classical and quantum sublinear algorithms through the lens of complexity theory. The formal classification of problems between "tractable" (by constructing efficient algorithms that solve them) and "intractable" (by proving no efficient algorithm can) is among the most fruitful lines of work in theoretical computer science, which includes, amongst an abundance of fundamental results and open problems, the notorious P vs. NP question.
This particular incarnation of the decision-versus-verification question stems from a choice of computational model: polynomial-time Turing machines. It is far from the only model worthy of investigation, however; indeed, measuring time up to polynomial factors is often too "coarse" for practical applications. We focus on quantum computation, a more complete model of physically realisable computation where quantum mechanical phenomena (such as interference and entanglement) may be used as computational resources; and sublinear algorithms, a formalisation of ultra-fast computation where merely reading or storing the entire input is impractical, e.g., when processing massive datasets such as social networks or large databases.
We begin our investigation by studying structural properties of local algorithms, a large class of sublinear algorithms that includes property testers and is characterised by the inability to even see most of the input. We prove that, in this setting, queries (the main complexity measure) can be replaced with random samples. Applying this
transformation yields, among other results, the state-of-the-art query lower bound for relaxed local decoders.
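As a concrete example of a sublinear-query property tester (a classical textbook spot-checker for sortedness, not a construction from this thesis), the following Python sketch tests whether an array of distinct values is sorted using O((1/ε) log n) queries: it accepts every sorted array, and rejects arrays far from sorted with high probability.

```python
import random

def sorted_tester(a, eps, seed=0):
    """One-sided sublinear tester for sortedness of an array with
    distinct values: sample a random index i, binary-search for a[i],
    and reject if the search does not land back at position i."""
    rng = random.Random(seed)
    n = len(a)
    for _ in range(max(1, int(2 / eps))):
        i = rng.randrange(n)
        lo, hi = 0, n - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if a[mid] == a[i]:
                break
            if a[mid] < a[i]:
                lo = mid + 1
            else:
                hi = mid - 1
        else:
            return False  # value never found: comparisons inconsistent
        if mid != i:
            return False  # found a[i], but at the wrong position
    return True

ok = sorted_tester(list(range(100)), 0.1)        # sorted: always accepted
bad = sorted_tester(list(range(100))[::-1], 0.1)  # reversed: far from sorted
```

Each round touches only O(log n) positions, so the tester never reads most of the input, which is exactly the regime the paragraph above describes.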
Focusing our attention onto property testers, we begin to chart the complexity-theoretic landscape arising from the classical vs. quantum and decision vs. verification questions in testing. We show that quantum hardware and communication with a powerful but untrusted prover are "orthogonal" resources, so that one cannot be substituted for the other. This implies all of the possible separations among the
analogues of QMA, MA and BQP in the property-testing setting.
We conclude with a study of zero-knowledge for (classical) streaming algorithms, which receive one-pass access to the entirety of their input but only have sublinear space. Inspired by cryptographic tools, we construct commitment protocols that are unconditionally secure in the streaming model and can be leveraged to obtain zero-knowledge streaming interactive proofs; in particular, we show that zero-knowledge is achievable in this model.
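For a flavour of what sublinear space buys in the streaming model (an illustrative classic, the Misra-Gries summary, not one of the thesis's zero-knowledge protocols): one pass over the stream and O(k) counters suffice to find every element occurring more than n/k times.

```python
def misra_gries(stream, k):
    """Misra-Gries frequent-elements summary: one pass, at most k-1
    counters, guaranteed to retain every element whose frequency
    exceeds len(stream)/k."""
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k - 1:
            counters[x] = 1
        else:
            # No free counter: decrement all, dropping zeros.
            for y in list(counters):
                counters[y] -= 1
                if counters[y] == 0:
                    del counters[y]
    return counters

# 100 items; any element seen more than 100/4 = 25 times must survive.
stream = [1] * 60 + [2] * 30 + list(range(3, 13))
summary = misra_gries(stream, 4)
```

The summary uses space independent of the stream length, the hallmark of the sublinear-space regime the thesis works in.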
LIPIcs, Volume 261, ICALP 2023, Complete Volume