Wavelet analysis on symbolic sequences and two-fold de Bruijn sequences
The concept of symbolic sequences plays an important role in the study of complex
systems. In this work we are interested in the ultrametric structure of the set of
cyclic sequences that arises naturally in the theory of dynamical systems. Aiming at
the construction of analytic and numerical methods for the investigation of clusters,
we introduce an operator language on the space of symbolic sequences and propose an
approach based on wavelet analysis for studying the cluster hierarchy. The
analytic power of the approach is demonstrated by deriving a formula for
counting {\it two-fold de Bruijn sequences}, an extension of the notion of
de Bruijn sequences. Possible advantages of the developed description are also
discussed in the context of applied
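As a reference point, ordinary (one-fold) de Bruijn sequences can be generated with the classical FKM (Lyndon-word concatenation) algorithm; the two-fold variant counted in this work is not reproduced here. A minimal sketch:

```python
def de_bruijn(k, n):
    """Generate a de Bruijn sequence B(k, n): a cyclic sequence over an
    alphabet of size k in which every word of length n occurs exactly once.
    Classical FKM algorithm (concatenation of Lyndon words)."""
    a = [0] * k * n
    sequence = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                sequence.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return sequence

# B(2, 3) has length 2^3 = 8 and contains every binary triple cyclically.
print(de_bruijn(2, 3))  # → [0, 0, 0, 1, 0, 1, 1, 1]
```

Each length-n window of the cyclic output is distinct, which is the defining property the two-fold extension generalizes.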
Distributed Approximation of Maximum Independent Set and Maximum Matching
We present a simple distributed -approximation algorithm for maximum
weight independent set (MaxIS) in the model which completes
in rounds, where is the maximum
degree, is the number of rounds needed to compute a maximal
independent set (MIS) on , and is the maximum weight of a node. Whether
our algorithm is randomized or deterministic depends on the \texttt{MIS}
algorithm used as a black-box.
Plugging in the best known algorithm for MIS gives a randomized solution in
rounds, where is the number of nodes.
We also present a deterministic -round algorithm based
on coloring.
We then show how to use our MaxIS approximation algorithms to compute a
-approximation for maximum weight matching without incurring any additional
round penalty in the model. We use a known reduction for
simulating algorithms on the line graph while incurring congestion, but we show
our algorithm is part of a broad family of \emph{local aggregation algorithms}
for which we describe a mechanism that allows the simulation to run in the
model without an additional overhead.
Next, we show that for maximum weight matching, relaxing the approximation
factor to () allows us to devise a distributed algorithm
requiring rounds for any constant
. For the unweighted case, we can even obtain a
-approximation in this number of rounds. These algorithms are
the first to achieve the provably optimal round complexity with respect to
dependency on
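The distributed algorithms above are not reproduced here, but the weight-driven flavour of MaxIS heuristics can be illustrated with a minimal sequential sketch (the adjacency-dict representation and the names `adj` and `weight` are assumptions for illustration, not the paper's model):

```python
def greedy_weighted_is(adj, weight):
    """Sequential greedy heuristic for maximum-weight independent set:
    repeatedly take the heaviest remaining node and delete its neighbours.
    This is a baseline sketch, not the distributed algorithm from the paper."""
    alive = set(adj)
    chosen = []
    for v in sorted(adj, key=weight.get, reverse=True):
        if v in alive:
            chosen.append(v)       # v joins the independent set
            alive.discard(v)
            alive -= adj[v]        # neighbours of v can no longer join
    return chosen

adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}   # a 4-cycle
print(greedy_weighted_is(adj, {0: 4, 1: 1, 2: 3, 3: 1}))  # → [0, 2]
```

The output is always an independent set; the distributed versions in the paper parallelize this "locally heaviest wins" idea across rounds.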
On local search and LP and SDP relaxations for k-Set Packing
Set packing is a fundamental problem that generalises several well-known
combinatorial optimization problems and has many applications. It is
equivalent to hypergraph matching and is strongly related to the maximum
independent set problem. In this thesis we study the k-set packing problem:
given a universe U and a collection C of subsets of U, each of
cardinality k, one needs to find a maximum collection of mutually disjoint
subsets. Local search techniques have proved to be successful in the search for
approximation algorithms, both for the unweighted and the weighted version of
the problem where every subset in C is associated with a weight and the
objective is to maximise the sum of the weights. We survey these
approaches and give the background and intuition behind them. In particular,
we simplify the algebraic proof of the main lemma of the currently best
weighted approximation algorithm, due to Berman ([Ber00]), into a proof that
reveals more of the intuition behind the mathematics. The main result is
a new bound of k/3 + 1 + epsilon on the integrality gap for a polynomially
sized LP relaxation for k-set packing by Chan and Lau ([CL10]) and the natural
SDP relaxation [NOTE: see page iii]. We provide detailed proofs of lemmas
needed to prove this new bound and treat some background on related topics like
semidefinite programming and the Lovász theta function. Finally, we include an
extended discussion in which we suggest some possibilities for future research.
We discuss how the current results from the weighted approximation algorithms
and the LP and SDP relaxations might be improved, the strong relation between
set packing and the independent set problem and the difference between the
weighted and the unweighted version of the problem.
Comment: There is a mistake in the following line of Theorem 17: "As an
induced subgraph of H with more edges than vertices constitutes an improving
set". Therefore, the proofs of Theorem 17, and hence of Theorems 19, 23 and 24,
are invalid. It remains open whether these theorems are true.
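As a minimal illustration of the local-search paradigm the thesis surveys (a simplified sketch, not Berman's algorithm), the following improves a greedy unweighted packing by additions and by trading one chosen set for two disjoint new ones:

```python
from itertools import combinations

def local_search_packing(sets):
    """Unweighted set packing via local search: greedy start, then repeat
    (0,1)-additions and (1,2)-swaps until no improvement exists."""
    packing, covered = [], set()
    for s in sets:                       # greedy initial packing
        if covered.isdisjoint(s):
            packing.append(s)
            covered |= s

    def disjoint_from(s, others):
        return all(s.isdisjoint(t) for t in others)

    improved = True
    while improved:
        improved = False
        # (0,1)-improvement: add any unchosen set disjoint from the packing
        for s in sets:
            if s not in packing and disjoint_from(s, packing):
                packing.append(s)
                improved = True
        if improved:
            continue
        # (1,2)-improvement: trade one chosen set for two disjoint new ones
        for out in packing:
            rest = [t for t in packing if t is not out]
            pool = [s for s in sets
                    if s not in packing and disjoint_from(s, rest)]
            swap = next(((a, b) for a, b in combinations(pool, 2)
                         if a.isdisjoint(b)), None)
            if swap is not None:
                packing = rest + list(swap)
                improved = True
                break
    return packing
```

Larger swap sizes give better approximation guarantees at higher cost; the thesis analyses exactly this trade-off.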
Adversaries with Limited Information in the Friedkin--Johnsen Model
In recent years, online social networks have been the target of adversaries
who seek to introduce discord into societies, to undermine democracies and to
destabilize communities. Often the goal is not to favor a certain side of a
conflict but to increase disagreement and polarization. To get a mathematical
understanding of such attacks, researchers use opinion-formation models from
sociology, such as the Friedkin--Johnsen model, and formally study how much
discord the adversary can produce when altering the opinions for only a small
set of users. In this line of work, it is commonly assumed that the adversary
has full knowledge about the network topology and the opinions of all users.
However, the latter assumption is often unrealistic in practice, where user
opinions are not available or simply difficult to estimate accurately.
To address this concern, we raise the following question: Can an attacker sow
discord in a social network, even when only the network topology is known? We
answer this question affirmatively. We present approximation algorithms for
detecting a small set of users who are highly influential for the disagreement
and polarization in the network. We show that when the adversary radicalizes
these users and if the initial disagreement/polarization in the network is not
very high, then our method gives a constant-factor approximation on the setting
when the user opinions are known. To find the set of influential users, we
provide a novel approximation algorithm for a variant of MaxCut in graphs with
positive and negative edge weights. We experimentally evaluate our methods,
which have access only to the network topology, and we find that they have
similar performance as methods that have access to the network topology and all
user opinions. We further present an NP-hardness proof, which was an open
question by Chen and Racz [IEEE Trans. Netw. Sci. Eng., 2021].
Comment: To appear at KDD'2
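To make the setting concrete, here is a small sketch of the Friedkin--Johnsen model itself (not the paper's attack algorithm). With innate opinions s and graph Laplacian L, the equilibrium opinions are z = (I + L)^{-1} s, which a simple fixed-point iteration recovers; disagreement is then the sum of squared opinion differences across edges:

```python
def fj_equilibrium(adj, s, iters=500):
    """Friedkin--Johnsen equilibrium on an unweighted graph via fixed-point
    iteration: z_i = (s_i + sum of neighbours' z_j) / (1 + deg(i)).
    Converges to the closed-form solution z = (I + L)^{-1} s."""
    z = dict(s)
    for _ in range(iters):
        z = {i: (s[i] + sum(z[j] for j in adj[i])) / (1 + len(adj[i]))
             for i in adj}
    return z

def disagreement(adj, z):
    """Sum of (z_u - z_v)^2 over edges: one standard discord measure."""
    return sum((z[u] - z[v]) ** 2
               for u in adj for v in adj[u] if u < v)

# Path 0 - 1 - 2 with innate opinions 0, 0.5, 1:
path = {0: {1}, 1: {0, 2}, 2: {1}}
z = fj_equilibrium(path, {0: 0.0, 1: 0.5, 2: 1.0})
# equilibrium is z = (0.25, 0.5, 0.75); disagreement = 0.125
```

An adversary in this model perturbs some entries of s (radicalizes users) to maximize such discord measures at equilibrium.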
Subquadratic-time algorithm for the diameter and all eccentricities on median graphs
On sparse graphs, Roditty and Williams [2013] proved that no
-time algorithm achieves an approximation factor smaller
than for the diameter problem unless SETH fails. In this article,
we solve an open question formulated in the literature: can we use the
structural properties of median graphs to break this global quadratic barrier?
We propose the first combinatorial algorithm computing exactly all
eccentricities of a median graph in truly subquadratic time. Median graphs
constitute the most studied family of graphs in metric graph
theory, because their structure captures many other discrete and geometric
concepts, such as CAT(0) cube complexes. Our result generalizes a recent one,
stating that there is a linear-time algorithm for all eccentricities in median
graphs of bounded dimension , i.e. the dimension of the largest induced
hypercube. This prerequisite on is no longer necessary to determine all
eccentricities in subquadratic time. The execution time of our algorithm is
.
We also provide some satellite results related to this general result. In
particular, restricted to simplex graphs, this algorithm enumerates all
eccentricities in quasilinear running time. Moreover, an algorithm is
proposed to compute exactly all reach centralities in time
.
Comment: 43 pages, extended abstract in STACS 202
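For contrast with the subquadratic result, the textbook baseline runs one BFS per vertex, which is quadratic on sparse graphs. On the 3-dimensional hypercube Q3 (itself a median graph) every vertex has eccentricity 3. A sketch:

```python
from collections import deque

def all_eccentricities(adj):
    """Quadratic-time baseline: one BFS per vertex.  ecc(v) is the largest
    distance from v; the diameter is the maximum eccentricity.  The paper's
    contribution is beating this bound on median graphs."""
    def ecc(src):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        return max(dist.values())
    return {v: ecc(v) for v in adj}

# Q3: vertices are 3-bit strings, edges flip a single bit.
q3 = {v: {v ^ (1 << b) for b in range(3)} for v in range(8)}
print(all_eccentricities(q3))  # every eccentricity is 3, so the diameter is 3
```

The subquadratic algorithm exploits the median-graph structure to avoid running a full BFS from every vertex.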
Exact Solutions for Discrete Graphical Models: Multicuts and Reduction Techniques
In the past years, discrete graphical models have become a major conceptual tool for modelling the structure of problems in image processing; example applications are image segmentation, image labeling, stereo vision, and tracking.
It is therefore crucial to have techniques which are able to handle the occurring optimization problems and to deliver good solutions.
Because of the hardness of these inference problems, mainly fast heuristic methods have been used so far, which yield only approximate solutions.
In this thesis we present exact methods for obtaining optimal solutions for the energy minimization problem of discrete graphical models; image segmentation serves as the main application.
Since these problems are NP-hard in general, it is clear that in order to be able to handle problem sizes occurring in real-world applications one has to either (a) reduce the size of the problems or (b) restrict oneself to special problem classes.
Concerning (a), we develop a combination of existing and new preprocessing steps which transform models into equivalent yet less complex ones.
Concerning (b), we introduce the so-called multicut approach to image analysis: This is a generalization of the min s-t cut method which allows for solving models of a certain structure significantly faster than previously possible or even solving them to global optimality for the first time at all.
On the whole, we present methods which solve NP-hard problems to proven optimality and which in some cases are as fast as or even faster than approximate methods.
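As a toy illustration of what "exact" means here (brute force over a tiny model, not the thesis's multicut machinery), the following enumerates all labellings of a small pairwise graphical model and returns a provably optimal one; the Potts smoothness term and the example costs are illustrative assumptions:

```python
from itertools import product

def exact_map(unary, pairwise, edges, n_labels):
    """Exhaustive exact energy minimisation for a tiny pairwise model:
    E(x) = sum_i unary[i][x_i] + sum_{(i,j) in edges} pairwise[x_i][x_j].
    Feasible only at toy sizes, but the result is provably optimal."""
    best, best_e = None, float("inf")
    n = len(unary)
    for x in product(range(n_labels), repeat=n):
        e = sum(unary[i][x[i]] for i in range(n))
        e += sum(pairwise[x[i]][x[j]] for i, j in edges)
        if e < best_e:
            best, best_e = x, e
    return best, best_e

# 3-pixel chain, 2 labels, Potts smoothness (0 if equal, 0.5 otherwise):
unary = [[0.0, 1.0], [0.6, 0.4], [1.0, 0.0]]   # data costs per pixel/label
potts = [[0.0, 0.5], [0.5, 0.0]]
print(exact_map(unary, potts, [(0, 1), (1, 2)], 2))  # → ((0, 1, 1), 0.9)
```

The multicut and reduction techniques of the thesis obtain the same optimality guarantee on problem sizes where enumeration is hopeless.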