A shortcut to (sun)flowers: Kernels in logarithmic space or linear time
We investigate whether kernelization results can be obtained if we restrict
kernelization algorithms to run in logarithmic space. This restriction for
kernelization is motivated by the question of what results are attainable for
preprocessing via simple and/or local reduction rules. We find kernelizations
for d-Hitting Set(k), d-Set Packing(k), Edge Dominating Set(k) and a number of
hitting and packing problems in graphs, each running in logspace. Additionally,
we return to the question of linear-time kernelization. For d-Hitting Set(k) a
linear-time kernelization was given by van Bevern [Algorithmica (2014)]. We
give a simpler procedure and save a large constant factor in the size bound.
Furthermore, we show that we can obtain a linear-time kernel for d-Set
Packing(k) as well.
Comment: 18 pages
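The sunflower technique that underlies many such hitting-set kernels can be sketched concretely: a sunflower with k+1 petals forces its core into every hitting set of size at most k, so all the petal sets can be replaced by the core. Below is a minimal, hedged Python sketch of greedy sunflower extraction (illustrative only; this is not the paper's logspace or linear-time procedure, and all names are ours):

```python
from collections import Counter

def find_sunflower(family, h):
    """Try to find a sunflower with h petals in a family of sets.

    A sunflower is a collection of sets whose pairwise intersections
    all equal one common 'core'. Returns (core, petals) or None.
    Greedy sketch; the classical sunflower lemma guarantees success
    once the family is large enough relative to h and the set size d.
    """
    # Step 1: greedily collect pairwise disjoint sets (empty core).
    disjoint, used = [], set()
    for s in family:
        if used.isdisjoint(s):
            disjoint.append(s)
            used |= s
    if len(disjoint) >= h:
        return frozenset(), disjoint[:h]
    # Step 2: otherwise some element is frequent; recurse with it removed.
    counts = Counter(x for s in family for x in s)
    if not counts:
        return None
    x, _ = counts.most_common(1)[0]
    sub = [s - {x} for s in family if x in s]
    result = find_sunflower(sub, h)
    if result is None:
        return None
    core, petals = result
    return core | {x}, [p | {x} for p in petals]

# Kernelization use for d-Hitting Set(k): if a sunflower with k+1 petals
# exists, any hitting set of size <= k must hit the core, so the petal
# sets can all be replaced by the core.
family = [frozenset({1, 2}), frozenset({1, 3}), frozenset({1, 4})]
core, petals = find_sunflower(family, 3)
print(core)  # frozenset({1})
```

The reduction rule is sound because the petals minus the core are pairwise disjoint: a size-k set missing the core would have to hit k+1 disjoint petals.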
Tensor-Network Simulations of Noisy Quantum Computers
Quantum computers are a rapidly developing technology with the ultimate goal
of outperforming their classical counterparts in a wide range of computational
tasks. Several types of quantum computers already operate with more than a
hundred qubits. However, their performance is hampered by interactions with
their environments, which destroy the fragile quantum information and thereby
prevent a significant speed-up over classical devices. For these reasons, it is
now important to explore the execution of quantum algorithms on noisy quantum
processors to better understand the limitations and prospects of realizing
near-term quantum computations. To this end, we here simulate the execution of
three quantum algorithms on noisy quantum computers using matrix product states
as a special class of tensor networks. Matrix product states are characterized
by their maximum bond dimension, which limits the amount of entanglement they
can describe, and which thereby can mimic the generic loss of entanglement in a
quantum computer. We analyze the fidelity of the quantum Fourier transform,
Grover's algorithm, and the quantum counting algorithm as a function of the
bond dimension, and we map out the entanglement that is generated during the
execution of these algorithms. For all three algorithms, we find that they can
be executed with high fidelity even at a moderate loss of entanglement. We also
identify the dependence of the fidelity on the number of qubits, which is
specific to each algorithm. Our approach provides a general method for
simulating noisy quantum computers, and it can be applied to a wide range of
algorithms.
Comment: 15 pages, 12 figures
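The bond-dimension truncation at the heart of such simulations can be illustrated on a single two-qubit cut: reshape the state into a matrix, take its singular value decomposition, and keep only the chi largest Schmidt coefficients. A hedged NumPy sketch (ours, not the paper's full MPS machinery):

```python
import numpy as np

def truncate_bond(psi, chi):
    """Split a 2-qubit state across one bond and keep chi Schmidt values.

    Returns the (unnormalized) truncated state and the retained weight
    sum_i s_i^2 over the kept singular values; weight 1 means lossless.
    """
    m = psi.reshape(2, 2)  # cut between qubit 0 and qubit 1
    u, s, vh = np.linalg.svd(m, full_matrices=False)
    approx = (u[:, :chi] * s[:chi]) @ vh[:chi, :]
    return approx.reshape(4), float(np.sum(s[:chi] ** 2))

# Bell state (|00> + |11>)/sqrt(2): two equal Schmidt values 1/sqrt(2),
# so a maximally entangled cut loses half its weight at chi = 1.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
_, w1 = truncate_bond(bell, 1)   # chi = 1 discards half the weight
_, w2 = truncate_bond(bell, 2)   # chi = 2 is exact
print(round(w1, 3), round(w2, 3))  # 0.5 1.0
```

A product state, by contrast, has a single nonzero Schmidt value and is represented exactly at chi = 1, which is why limiting the bond dimension mimics a generic loss of entanglement.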
Towards Work-Efficient Parallel Parameterized Algorithms
Parallel parameterized complexity theory studies how fixed-parameter
tractable (fpt) problems can be solved in parallel. Previous theoretical work
focused on parallel algorithms that are very fast in principle, but did not
take into account that when we only have a small number of processors (between
2 and, say, 1024), it is more important that the parallel algorithms are
work-efficient. In the present paper we investigate how work-efficient fpt
algorithms can be designed. We review standard methods from fpt theory, like
kernelization, search trees, and interleaving, and prove trade-offs for them
between work efficiency and runtime improvements. This results in a toolbox for
developing work-efficient parallel fpt algorithms.
Comment: Prior full version of the paper that will appear in Proceedings of
the 13th International Conference and Workshops on Algorithms and Computation
(WALCOM 2019), February 27 - March 02, 2019, Guwahati, India. The final
authenticated version is available online at
https://doi.org/10.1007/978-3-030-10564-8_2
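As a baseline for the search-tree method mentioned above, here is a hedged sketch of the classical 2^k branching algorithm for Vertex Cover, whose sequential work is O(2^k · m); a work-efficient parallelization would hand the two branches near the root to different processors rather than spawning a task per tree node. Illustrative only, not the paper's construction:

```python
def has_vertex_cover(edges, k):
    """Bounded search tree: does the graph have a vertex cover of size <= k?

    Branch on an uncovered edge (u, v): either u or v is in the cover.
    The recursion depth is at most k, so the tree has at most 2^k leaves.
    """
    if not edges:
        return True
    if k == 0:
        return False
    u, v = edges[0]
    take_u = [e for e in edges if u not in e]
    take_v = [e for e in edges if v not in e]
    # A work-efficient parallel variant would evaluate these two calls
    # on different processors only in the first log(p) levels of the tree.
    return has_vertex_cover(take_u, k - 1) or has_vertex_cover(take_v, k - 1)

triangle = [(1, 2), (2, 3), (1, 3)]
print(has_vertex_cover(triangle, 1), has_vertex_cover(triangle, 2))  # False True
```

Interleaving, also reviewed in the paper, would additionally re-kernelize the instance between branching steps to keep each subproblem small.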
On the (non-)existence of polynomial kernels for Pl-free edge modification problems
Given a graph G = (V,E) and an integer k, an edge modification problem for a
graph property P consists in deciding whether there exists a set of edges F of
size at most k such that the graph H = (V, E △ F) satisfies the
property P, where △ denotes the symmetric difference. In the P edge-completion
problem, the set F of edges is constrained to be disjoint from E; in the P
edge-deletion problem, F is a subset of E; no constraint is imposed on F in the
P edge-editing problem. A number of
optimization problems can be expressed in terms of graph modification problems
which have been extensively studied in the context of parameterized complexity.
When parameterized by the size k of the edge set F, it has been proved that if
P is a hereditary property characterized by a finite set of forbidden induced
subgraphs, then the three P edge-modification problems are FPT. It was then
natural to ask whether these problems also admit a polynomial size kernel.
Using recent lower bound techniques, Kratsch and Wahlström answered this
question negatively. However, the problem remains open on many natural graph
classes characterized by forbidden induced subgraphs. Kratsch and Wahlström
asked whether the result holds when the forbidden subgraphs are paths or cycles
and pointed out that the problem is already open in the case of P4-free graphs
(i.e. cographs). This paper provides positive and negative results in that line
of research. We prove that parameterized cograph edge modification problems
have cubic vertex kernels whereas polynomial kernels are unlikely to exist for
the Pl-free and Cl-free edge-deletion problems for large enough l.
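For intuition about the graph class at the center of the open case above: a cograph is exactly a P4-free graph, i.e. no four vertices induce a path on four vertices. A brute-force check (fine for small graphs and illustrative only; real cograph recognition runs in linear time):

```python
from itertools import combinations, permutations

def has_induced_p4(adj):
    """adj: dict mapping each vertex to its set of neighbours.

    Returns True iff some 4 vertices induce a path a-b-c-d
    (edges ab, bc, cd present; ac, ad, bd absent).
    """
    for quad in combinations(adj, 4):
        for a, b, c, d in permutations(quad):
            if (b in adj[a] and c in adj[b] and d in adj[c]
                    and c not in adj[a] and d not in adj[a]
                    and d not in adj[b]):
                return True
    return False

p4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}        # the path P4 itself
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}  # the cycle C4, a cograph
print(has_induced_p4(p4), has_induced_p4(c4))  # True False
```

A cograph edge-modification instance thus asks for at most k edge changes that destroy every induced P4.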
COMPRESSIVE BEHAVIOR OF CONCRETE COLUMNS AXIALLY LOADED BEFORE CFRP-WRAPPING. REMARKS BY EXPERIMENTAL-NUMERICAL INVESTIGATION
Strengthening existing concrete columns with Fiber Reinforced Polymers (FRP) generally results in a satisfactory improvement of the structural member in terms of load and strain capacity. A reliable prediction of the capacity obtained by these reinforcement strategies requires proper knowledge of the load-strain response of the confined concrete elements. However, the available design methods and technical codes so far do not consider the effect of service loads possibly present at the moment the reinforcement is applied, and therefore the compressive behavior of concrete confined under preload is still unclear. In this paper, the effect of sustained loads on the compressive behavior of concrete columns CFRP-confined while preloaded is analyzed. Experimental tests were performed on circular concrete columns confined under low, medium and high preload levels before wrapping and subsequently loaded until failure, observing the differences with respect to the standard compressive stress-strain response of FRP-confined concrete. A finite element (FE) model is also developed using ABAQUS software to simulate the physical scheme of the experimental tests. The accuracy of the model is validated through comparison with the experimental results.
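For orientation on the capacity prediction discussed above, a widely used design-oriented model for FRP-confined circular concrete (Lam and Teng) estimates the confined strength from the lateral confining pressure; note that, as the abstract points out, models of this kind do not account for preload, and the equations below are quoted for context rather than taken from this paper:

\[
f'_{cc} = f'_{co} + 3.3\, f_l, \qquad f_l = \frac{2\, t\, E_f\, \varepsilon_{h,rup}}{D},
\]

where \(f'_{co}\) is the unconfined concrete strength, \(t\) and \(E_f\) are the FRP jacket thickness and elastic modulus, \(\varepsilon_{h,rup}\) is the hoop rupture strain, and \(D\) is the column diameter.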
Deeply virtual Compton scattering in next-to-leading order
We study the amplitude of deeply virtual Compton scattering in
next-to-leading order of perturbation theory including the two-loop evolution
effects for different sets of skewed parton distributions (SPDs). It turns out
that in the minimal subtraction scheme the relative radiative corrections are
of order 20-50%. We analyze the dependence of our predictions on the choice of
SPD, which will make it possible to discriminate between models of SPDs using
future high-precision experimental data, and we briefly discuss theoretical
uncertainties induced by the radiative corrections.
Comment: 10 pages, LaTeX, 3 figures
A New Lower Bound on the Maximum Number of Satisfied Clauses in Max-SAT and its Algorithmic Applications
A pair of unit clauses is called conflicting if it is of the form (x),
(-x). A CNF formula is unit-conflict free (UCF) if it contains no pair
of conflicting unit clauses. Lieberherr and Specker (J. ACM 28, 1981) showed
that for each UCF CNF formula with m clauses we can simultaneously satisfy at
least phi*m clauses, where phi = (sqrt(5)-1)/2. We improve the
Lieberherr-Specker bound by showing that for each UCF CNF formula F with m
clauses we can find, in polynomial time, a subformula F' with m' clauses
such that we can simultaneously satisfy at least phi*m + (1-phi)*m' + (2-3*phi)*n''/2
clauses (in F), where n'' is the number of variables in F which are not in
F'.
We consider two parameterized versions of MAX-SAT, where the parameter is the
number of satisfied clauses above the bounds m/2 and phi*m. The
former bound is tight for general formulas, and the latter is tight for UCF
formulas. Mahajan and Raman (J. Algorithms 31, 1999) showed that every instance
of the first parameterized problem can be transformed, in polynomial time, into
an equivalent one with at most 6k+3 variables and 10k clauses. We improve
this to 4k variables and (2*sqrt(5)+4)k clauses. Mahajan and Raman
conjectured that the second parameterized problem is fixed-parameter tractable
(FPT). We show that the problem is indeed FPT by describing a polynomial-time
algorithm that transforms any problem instance into an equivalent one with at
most (7+3*sqrt(5))k variables. Our results are obtained using our improvement
of the Lieberherr-Specker bound above.
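The Lieberherr-Specker guarantee can be checked by brute force on a tiny UCF formula: with phi = (sqrt(5)-1)/2 ~ 0.618, any UCF formula with m clauses has an assignment satisfying at least phi*m of them. A hedged sketch (the example formula and its encoding are ours):

```python
import itertools
import math

PHI = (math.sqrt(5) - 1) / 2  # ~ 0.618, the Lieberherr-Specker constant

def max_satisfiable(clauses, n):
    """Exhaustively find the maximum number of simultaneously satisfiable
    clauses; literal i means variable x_i, -i means its negation."""
    best = 0
    for bits in itertools.product((False, True), repeat=n):
        sat = sum(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses)
        best = max(best, sat)
    return best

# UCF formula: unit clauses (x1), (x2) and the clause (-x1 or -x2);
# there is no conflicting pair of unit clauses, m = 3, bound ~ 1.854.
clauses = [(1,), (2,), (-1, -2)]
opt = max_satisfiable(clauses, 2)
print(opt, opt >= PHI * len(clauses))  # 2 True
```

No assignment satisfies all three clauses here, which illustrates why the phi*m bound, rather than m itself, is the right target for UCF formulas.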
Vertex Cover Kernelization Revisited: Upper and Lower Bounds for a Refined Parameter
An important result in the study of polynomial-time preprocessing shows that
there is an algorithm which given an instance (G,k) of Vertex Cover outputs an
equivalent instance (G',k') in polynomial time with the guarantee that G' has
at most 2k' vertices (and thus O((k')^2) edges) with k' <= k. Using the
terminology of parameterized complexity we say that k-Vertex Cover has a kernel
with 2k vertices. There is complexity-theoretic evidence that both 2k vertices
and Theta(k^2) edges are optimal for the kernel size. In this paper we consider
the Vertex Cover problem with a different parameter, the size fvs(G) of a
minimum feedback vertex set for G. This refined parameter is structurally
smaller than the parameter k associated to the vertex covering number vc(G)
since fvs(G) <= vc(G) and the difference can be arbitrarily large. We give a
kernel for Vertex Cover with a number of vertices that is cubic in fvs(G): an
instance (G,X,k) of Vertex Cover, where X is a feedback vertex set for G, can
be transformed in polynomial time into an equivalent instance (G',X',k') such
that |V(G')| <= 2k and |V(G')| <= O(|X'|^3). A similar result holds when the
feedback vertex set X is not given along with the input. In sharp contrast we
show that the Weighted Vertex Cover problem does not have a polynomial kernel
when parameterized by the cardinality of a given vertex cover of the graph
unless NP is in coNP/poly and the polynomial hierarchy collapses to the third
level.Comment: Published in "Theory of Computing Systems" as an Open Access
publication
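For comparison with the 2k-vertex kernel discussed above, the simpler classical Buss kernelization for k-Vertex Cover is easy to state: any vertex of degree greater than k must be in every cover of size at most k, and after exhaustively applying this rule a yes-instance retains at most k^2 edges. A hedged sketch (this is the weaker Buss kernel, not the 2k-vertex crown/LP kernel the abstract refers to):

```python
from collections import Counter

def buss_kernel(edges, k):
    """Return a reduced (edges, k) pair, or None for a 'no'-instance.

    Rule: a vertex of degree > k must be in any vertex cover of size <= k,
    so take it into the cover and delete its edges. Afterwards every vertex
    covers at most k edges, so a yes-instance keeps at most k^2 edges.
    """
    while k >= 0:
        deg = Counter(v for e in edges for v in e)
        high = next((v for v, d in deg.items() if d > k), None)
        if high is None:
            break
        edges = [e for e in edges if high not in e]
        k -= 1
    if k < 0 or len(edges) > k * k:
        return None
    return edges, k

# A star K_{1,5} with k = 1: the centre has degree 5 > 1, so it is taken.
star = [(0, i) for i in range(1, 6)]
print(buss_kernel(star, 1))  # ([], 0)
```

Parameterizing by a feedback vertex set instead of k, as the paper does, requires different reduction rules, since fvs(G) can be far smaller than any vertex cover.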