On Vanishing of Kronecker Coefficients
It is shown that: (1) the problem of deciding positivity of Kronecker coefficients is NP-hard; (2) there exists a positive #P-formula for a subclass of Kronecker coefficients whose positivity is NP-hard to decide; (3) for any 0 < ε ≤ 1, there exists 0 < a < 1 such that, for all m, there exist partition triples (λ, μ, μ) in the Kronecker cone such that: (a) the Kronecker coefficient g(λ, μ, μ) is zero, (b) the height of μ is m, (c) the height of λ is at most m^ε, and (d) |λ| = |μ| ≤ poly(m). The last result takes a step towards proving the existence of occurrence-based representation-theoretic obstructions in the context of the GCT approach to the permanent vs. determinant problem. Its proof also illustrates the effectiveness of the explicit proof strategy of GCT.
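As a concrete illustration of what a Kronecker coefficient is, the sketch below evaluates the character-theoretic formula g(λ, μ, ν) = (1/|G|) Σ_C |C| χ_λ(C) χ_μ(C) χ_ν(C) by brute force for the tiny group S_3, whose character table is hardcoded. This is only a toy evaluation of the definition; it has nothing to do with the paper's hardness proofs, which concern the asymptotic complexity of deciding positivity.

```python
# Kronecker coefficients g(lam, mu, nu) via the character formula
# g = (1/|G|) * sum over conjugacy classes C of |C| * chi_lam(C) * chi_mu(C) * chi_nu(C).
# Hardcoded character table of S_3 (classes: identity, transpositions, 3-cycles).

CLASS_SIZES = [1, 3, 2]  # conjugacy class sizes of S_3; |S_3| = 6

# Rows indexed by the partition of 3 labelling the irreducible character.
CHAR_TABLE = {
    (3,):      [1,  1,  1],   # trivial representation
    (2, 1):    [2,  0, -1],   # standard representation
    (1, 1, 1): [1, -1,  1],   # sign representation
}

def kronecker_s3(lam, mu, nu):
    """Kronecker coefficient g(lam, mu, nu) for partitions of 3."""
    total = sum(
        size * CHAR_TABLE[lam][c] * CHAR_TABLE[mu][c] * CHAR_TABLE[nu][c]
        for c, size in enumerate(CLASS_SIZES)
    )
    assert total % 6 == 0  # the character sum is always divisible by |G|
    return total // 6

print(kronecker_s3((2, 1), (2, 1), (2, 1)))  # 1
```

For example, g((2,1), (2,1), (2,1)) = (1·8 + 3·0 + 2·(−1))/6 = 1: the standard representation of S_3 appears exactly once in the tensor square of itself.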
Intrusion Detection Systems for Community Wireless Mesh Networks
Wireless mesh networks are being increasingly used to provide affordable network connectivity to communities where wired deployment strategies are either not possible or are prohibitively expensive. Unfortunately, computer networks (including mesh networks) are frequently being exploited by increasingly profit-driven and insidious attackers, which can affect their utility for legitimate use. In response to this, a number of countermeasures have been developed, including intrusion detection systems that aim to detect anomalous behaviour caused by attacks. We present a set of socio-technical challenges associated with developing an intrusion detection system for a community wireless mesh network. The attack space on a mesh network is particularly large; we motivate the need for and describe the challenges of adopting an asset-driven approach to managing this space. Finally, we present an initial design of a modular architecture for intrusion detection, highlighting how it addresses the identified challenges.
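A modular detection architecture of the kind the abstract mentions can be sketched as a pipeline that fans each observed event out to independently pluggable detectors. The class and module names below are entirely hypothetical, invented for illustration; the paper's actual design is not reproduced here.

```python
# Minimal sketch of a modular intrusion-detection pipeline. Illustrative only:
# module names and interfaces are hypothetical, not the paper's design.
from dataclasses import dataclass
from typing import Iterable, List, Protocol

@dataclass
class Event:
    node: str            # mesh node that reported the event
    packets_per_s: float # observed traffic rate

class DetectionModule(Protocol):
    def inspect(self, event: Event) -> List[str]: ...

class RateAnomalyModule:
    """Flags nodes whose traffic rate exceeds a simple threshold."""
    def __init__(self, threshold: float):
        self.threshold = threshold
    def inspect(self, event: Event) -> List[str]:
        if event.packets_per_s > self.threshold:
            return [f"rate-anomaly:{event.node}"]
        return []

class Engine:
    """Fans each event out to every registered detection module."""
    def __init__(self, modules: Iterable[DetectionModule]):
        self.modules = list(modules)
    def run(self, events: Iterable[Event]) -> List[str]:
        alerts: List[str] = []
        for ev in events:
            for m in self.modules:
                alerts.extend(m.inspect(ev))
        return alerts

engine = Engine([RateAnomalyModule(threshold=1000.0)])
print(engine.run([Event("n1", 50.0), Event("n2", 5000.0)]))  # ['rate-anomaly:n2']
```

The point of the modular split is that new detectors (e.g. for routing-layer attacks) can be added without touching the engine, which is one way to cope with a large attack space.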
Improved Implementation of Point Location in General Two-Dimensional Subdivisions
We present a major revamp of the point-location data structure for general
two-dimensional subdivisions via randomized incremental construction,
implemented in CGAL, the Computational Geometry Algorithms Library. We can now
guarantee that the constructed directed acyclic graph G is of linear size and
provides logarithmic query time. Via the construction of the Voronoi diagram
for a given point set S of size n, this also enables nearest-neighbor queries
in guaranteed O(log n) time. Another major innovation is the support of general
unbounded subdivisions as well as subdivisions of two-dimensional parametric
surfaces such as spheres, tori, cylinders. The implementation is exact,
complete, and general, i.e., it can also handle non-linear subdivisions. Like
the previous version, the data structure supports modifications of the
subdivision, such as insertions and deletions of edges, after the initial
preprocessing. A major challenge is to retain the expected O(n log n)
preprocessing time while providing the above (deterministic) space and
query-time guarantees. We describe an efficient preprocessing algorithm, which
explicitly verifies the length L of the longest query path in O(n log n) time.
However, instead of using L, our implementation is based on the depth D of G.
Although we prove that the worst case ratio of D and L is Theta(n/log n), we
conjecture, based on our experimental results, that this solution achieves
expected O(n log n) preprocessing time.
Comment: 21 pages.
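The distinction the abstract draws between the depth D of the DAG G and the length L of the longest query path can be made concrete: D is simply the longest root-to-sink path in G, computable by memoised longest-path search, and every query path length L satisfies L ≤ D. The toy DAG below is hypothetical, not derived from the paper.

```python
# Depth D of a point-location DAG: the longest root-to-sink path. Any query
# path length L satisfies L <= D, which is why D is a usable (if sometimes
# much larger) proxy for L. Toy example only.
from functools import lru_cache

# Toy DAG as adjacency lists (node -> children); node 0 is the root.
DAG = {0: [1, 2], 1: [3], 2: [3], 3: []}

def depth(dag, root=0):
    @lru_cache(maxsize=None)
    def longest(v):
        children = dag[v]
        return 0 if not children else 1 + max(longest(c) for c in children)
    return longest(root)

print(depth(DAG))  # 2, e.g. the path 0 -> 1 -> 3
```

The abstract's Theta(n/log n) worst-case ratio says D can be far larger than L, which is why using D instead of L is a heuristic justified experimentally rather than by the worst-case analysis.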
Guarding art galleries by guarding witnesses
Let P be a simple polygon. We define a witness set W to be a set of points such that if any (prospective) guard set G guards W, then it is guaranteed that G guards P. We show that not all polygons admit a finite witness set. If a finite minimal witness set exists, then it cannot contain any witness in the interior of P; all witnesses must lie on the boundary of P, and there can be at most one witness in the interior of any edge. We give an algorithm to compute a minimal witness set for P in O(n^2 log n) time, if such a set exists, or to report the non-existence within the same time bounds. We also outline an algorithm that uses a witness set for P to test whether a (prospective) guard set sees all points in P.
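The primitive underlying witness-based guarding is the visibility test itself: a guard g sees a witness w exactly when the segment g–w stays inside the polygon. A minimal sketch of that test, assuming points are chosen strictly inside the polygon so that degenerate tangencies with vertices can be ignored:

```python
# Brute-force visibility check in a simple polygon: g sees w iff the segment
# g-w crosses no polygon edge properly. Simplified sketch that ignores
# degenerate tangencies (test points are strictly interior).

def orient(a, b, c):
    """Sign of the cross product (b - a) x (c - a)."""
    v = (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])
    return (v > 0) - (v < 0)

def properly_intersect(p, q, r, s):
    """True iff the open segments p-q and r-s cross at an interior point."""
    return (orient(p, q, r) * orient(p, q, s) < 0 and
            orient(r, s, p) * orient(r, s, q) < 0)

def sees(polygon, g, w):
    n = len(polygon)
    return not any(properly_intersect(g, w, polygon[i], polygon[(i+1) % n])
                   for i in range(n))

# L-shaped polygon, counter-clockwise.
P = [(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]
print(sees(P, (0.25, 0.25), (0.25, 1.75)))  # True
print(sees(P, (1.75, 0.75), (0.25, 1.75)))  # False: blocked at the reflex corner
```

Checking a candidate guard set against a witness set then reduces to running this test for every guard/witness pair, which is the shape of the verification algorithm the abstract outlines.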
On the computation of zone and double zone diagrams
Classical objects in computational geometry are defined by explicit
relations. Several years ago the pioneering works of T. Asano, J. Matousek and
T. Tokuyama introduced "implicit computational geometry", in which the
geometric objects are defined by implicit relations involving sets. An
important member in this family is called "a zone diagram". The implicit nature
of zone diagrams implies, as already observed in the original works, that their
computation is a challenging task. In a continuous setting this task has been
addressed (briefly) only by these authors in the Euclidean plane with point
sites. We discuss the possibility to compute zone diagrams in a wide class of
spaces and also shed new light on their computation in the original setting.
The class of spaces, which is introduced here, includes, in particular,
Euclidean spheres and finite dimensional strictly convex normed spaces. Sites
of a general form are allowed and it is shown that a generalization of the
iterative method suggested by Asano, Matousek and Tokuyama converges to a
double zone diagram, another implicit geometric object whose existence is known
in general. Occasionally a zone diagram can be obtained from this procedure.
The actual (approximate) computation of the iterations is based on a simple
algorithm which enables the approximate computation of Voronoi diagrams in a
general setting. Our analysis also yields a few byproducts of independent
interest, such as certain topological properties of Voronoi cells (e.g., that
in the considered setting their boundaries cannot be "fat").
Comment: Ref. [51] points to a freely available computer application which implements the algorithms; to appear in Discrete & Computational Geometry (available online).
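In one dimension the iterative scheme can be followed by hand. For two point sites a and b on the real line, each region is a half-line, and the dominance update dom(a, R_b) = {x : |x − a| ≤ dist(x, R_b)} moves only the boundary point. The sketch below is an illustration of this style of iteration in the simplest possible setting, not the general algorithm of the paper:

```python
# 1-D illustration of the zone-diagram iteration: two point sites a = 0 and
# b = 3. The region of each site is a half-line; x lies in the region of a
# iff |x - a| <= dist(x, region of b). Starting from the Voronoi split and
# iterating contracts the boundaries toward the zone diagram, here the
# half-lines (-inf, 1] and [2, inf) with a neutral zone (1, 2) between them.
a, b = 0.0, 3.0
t_a = t_b = (a + b) / 2          # initial boundaries: the Voronoi bisector

for _ in range(60):
    # dom(a, [t_b, inf)) = (-inf, (a + t_b)/2]; symmetrically for b.
    t_a, t_b = (a + t_b) / 2, (b + t_a) / 2

print(round(t_a, 6), round(t_b, 6))  # 1.0 2.0
```

At the fixed point, x = 1 satisfies |x − a| = 1 = dist(x, [2, ∞)), confirming the zone-diagram condition; the interval (1, 2) belongs to neither region, which is the "implicit" behaviour that distinguishes zone diagrams from Voronoi diagrams.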
Parallel Write-Efficient Algorithms and Data Structures for Computational Geometry
In this paper, we design parallel write-efficient geometric algorithms that
perform asymptotically fewer writes than standard algorithms for the same
problem. This is motivated by emerging non-volatile memory technologies with
read performance being close to that of random access memory but writes being
significantly more expensive in terms of energy and latency. We design
algorithms for planar Delaunay triangulation, k-d trees, and static and
dynamic augmented trees. Our algorithms are designed in the recently introduced
Asymmetric Nested-Parallel Model, which captures the parallel setting in which
there is a small symmetric memory where reads and writes are unit cost as well
as a large asymmetric memory where writes are ω times more expensive
than reads. In designing these algorithms, we introduce several techniques for
obtaining write-efficiency, including DAG tracing, prefix doubling,
reconstruction-based rebalancing, and α-labeling, which we believe will
be useful for designing other parallel write-efficient algorithms.
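The asymmetric cost model can be made concrete with a toy memory that charges 1 per read and ω per write; an algorithm is write-efficient when it trades extra reads for fewer writes. This is an illustrative caricature only; the model in the paper (small symmetric memory, nested parallelism) is richer.

```python
# Toy asymmetric memory: reads cost 1, writes cost omega. Illustrative only.

class AsymmetricMemory:
    def __init__(self, data, omega):
        self.data, self.omega, self.cost = list(data), omega, 0
    def read(self, i):
        self.cost += 1
        return self.data[i]
    def write(self, i, v):
        self.cost += self.omega
        self.data[i] = v

# Materialising all n prefix sums costs n expensive writes ...
def all_prefix_sums(mem, n, out_base):
    s = 0
    for i in range(n):
        s += mem.read(i)
        mem.write(out_base + i, s)

# ... whereas a single aggregate needs only one write.
def total_only(mem, n, out):
    s = 0
    for i in range(n):
        s += mem.read(i)
    mem.write(out, s)

m1 = AsymmetricMemory(range(8), omega=10); all_prefix_sums(m1, 4, 4)
m2 = AsymmetricMemory(range(8), omega=10); total_only(m2, 4, 5)
print(m1.cost, m2.cost)  # 44 14
```

With ω = 10 the write-heavy version costs 4 reads + 4·10 writes = 44 against 14 for the single-write version, which is the asymmetry the paper's algorithms are designed around.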
Hamiltonian Cycle Parameterized by Treedepth in Single Exponential Time and Polynomial Space
For many algorithmic problems on graphs of treewidth tw, a standard dynamic
programming approach gives an algorithm with time and space complexity
2^O(tw) · n^O(1). It turns out that when one
considers the more restrictive parameter treedepth, it is often the case that a
variation of this technique can be used to reduce the space complexity to
polynomial, while retaining time complexity of the form
2^O(d) · n^O(1), where d is the treedepth. This
transfer of methodology is, however, far from automatic. For instance, for
problems with connectivity constraints, standard dynamic programming techniques
give algorithms with time and space complexity 2^O(tw log tw) · n^O(1) on graphs of treewidth tw, but it is not clear how to
convert them into time-efficient polynomial space algorithms for graphs of low
treedepth.
Cygan et al. (FOCS'11) introduced the Cut&Count technique and showed that a
certain class of problems with connectivity constraints can be solved in time
and space complexity 2^O(tw) · n^O(1). Recently,
Hegerfeld and Kratsch (STACS'20) showed that, for some of those problems, the
Cut&Count technique can also be applied in the setting of treedepth, and it
gives algorithms with running time 2^O(d) · n^O(1)
and polynomial space usage. However, a number of important problems eluded such
a treatment, with the most prominent examples being Hamiltonian Cycle and
Longest Path.
In this paper we clarify the situation by showing that Hamiltonian Cycle,
Hamiltonian Path, Long Cycle, Long Path, and Min Cycle Cover all admit
2^O(d) · n^O(1)-time and polynomial space algorithms on graphs of
treedepth d. The algorithms are randomized Monte Carlo with only false
negatives.
Comment: Presented at WG2020. 20 pages, 2 figures.
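The treedepth-parameterized algorithms themselves are too involved for a short sketch. As a point of contrast, here is the classical Held–Karp bitmask dynamic program for Hamiltonian Cycle: its 2^n · n^O(1) time and, crucially, exponential space are exactly the kind of bounds the paper improves on for graphs of small treedepth. This is a well-known baseline, not the paper's method.

```python
# Classical Held-Karp bitmask DP for Hamiltonian Cycle: O(2^n * n^2) time and
# O(2^n * n) space. Baseline for contrast with the polynomial-space,
# treedepth-parameterized algorithms in the paper.

def hamiltonian_cycle(adj):
    """adj: 0/1 adjacency matrix of a simple graph; vertex 0 is the anchor."""
    n = len(adj)
    if n == 1:
        return True
    # reach[mask][v]: is there a path from 0 visiting exactly `mask`, ending at v?
    reach = [[False] * n for _ in range(1 << n)]
    reach[1][0] = True
    for mask in range(1 << n):
        if not (mask & 1):
            continue  # every partial path must contain the anchor vertex 0
        for v in range(n):
            if not reach[mask][v]:
                continue
            for w in range(n):
                if adj[v][w] and not (mask >> w) & 1:
                    reach[mask | (1 << w)][w] = True
    full = (1 << n) - 1
    # A Hamiltonian cycle is a full path whose endpoint is adjacent to 0.
    return any(reach[full][v] and adj[v][0] for v in range(1, n))

C4 = [[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]]      # 4-cycle: Hamiltonian
STAR = [[0,1,1,1],[1,0,0,0],[1,0,0,0],[1,0,0,0]]    # star K_{1,3}: not
print(hamiltonian_cycle(C4), hamiltonian_cycle(STAR))  # True False
```

The table `reach` is precisely the 2^n-sized state space that polynomial-space treedepth algorithms must avoid storing, which is why the transfer from treewidth-style DP is nontrivial for connectivity problems.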
On Σ∧Σ∧Σ Circuits: The Role of Middle Σ Fan-in, Homogeneity and Bottom Degree
We study polynomials computed by depth five Σ∧Σ∧Σ arithmetic circuits, where 'Σ' and '∧' represent gates that compute the sum and a power of their inputs, respectively. Such circuits compute polynomials of the form Σ_{i=1}^{t} Q_i^{α_i}, where Q_i = Σ_{j=1}^{r_i} ℓ_{ij}^{d_{ij}}, the ℓ_{ij} are linear forms, and r_i, α_i, t > 0. These circuits are a natural generalization of the well-known class of Σ∧Σ circuits and have received significant attention recently. We prove an exponential lower bound for the monomial x_1 · · · x_n against depth five Σ∧Σ^[≤n]∧^[≥21]Σ and Σ∧Σ^[≤2^(√n/1000)]∧^[≥√n]Σ arithmetic circuits where the bottom Σ gate is homogeneous. Our results show that the fan-in of the middle Σ gates, the degree of the bottom powering gates, and the homogeneity at the bottom Σ gates play a crucial role in the computational power of Σ∧Σ∧Σ circuits.
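The circuit class is easy to internalise by evaluating one such circuit at a point: a sum of powers of sums of powers of linear forms. The evaluator below just implements that definition for illustration; the paper proves lower bounds against this class and contains no such algorithm.

```python
# Evaluating a depth-five sum-power-sum-power-sum circuit at a point:
#   f(x) = sum_i ( sum_j l_ij(x) ** d_ij ) ** alpha_i,   l_ij linear forms.
# Illustrative only; the paper is about lower bounds for this class.

def eval_circuit(terms, x):
    """terms: list of (alpha_i, [(coeffs_ij, d_ij), ...]) describing the circuit."""
    total = 0
    for alpha, inner in terms:
        # Q_i = sum of powers of linear forms in the inputs x.
        q = sum(sum(c * xi for c, xi in zip(coeffs, x)) ** d
                for coeffs, d in inner)
        total += q ** alpha
    return total

# f = ((x1 + x2)**2 + (x1 - x2)**2) ** 1  =  2*x1**2 + 2*x2**2
circuit = [(1, [([1, 1], 2), ([1, -1], 2)])]
print(eval_circuit(circuit, [1, 2]))  # 10
```

Here t = 1, r_1 = 2, α_1 = 1 and both d_1j = 2; the lower bounds in the paper constrain how large the middle fan-in r_i and the bottom powering degrees d_ij must be for such circuits to compute x_1 · · · x_n efficiently.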
Utilitarian Mechanism Design for Multiobjective Optimization
In a classic optimization problem, the complete input data is assumed to be known to the algorithm. This assumption may not be true anymore in optimization problems motivated by the Internet, where part of the input data is private knowledge of independent selfish agents. The goal of algorithmic mechanism design is to provide (in polynomial time) a solution to the optimization problem and a set of incentives for the agents such that disclosing the input data is a dominant strategy for the agents. In the case of NP-hard problems, the solution computed should also be a good approximation of the optimum. In this paper we focus on mechanism design for multiobjective optimization problems. In this setting we are given a main objective function and a set of secondary objectives which are modeled via budget constraints. Multiobjective optimization is a natural setting for mechanism design, as many economic choices ask for a compromise between different, partially conflicting goals. The main contribution of this paper is showing that two of the main tools for the design of approximation algorithms for multiobjective optimization problems, namely, approximate Pareto sets and Lagrangian relaxation, can lead to truthful approximation schemes. By exploiting the method of approximate Pareto sets, we devise truthful deterministic and randomized multicriteria fully polynomial-time approximation schemes (FPTASs) for multiobjective optimization problems whose exact version admits a pseudopolynomial-time algorithm, as, for instance, the multibudgeted versions of minimum spanning tree, shortest path, maximum (perfect) matching, and matroid intersection. Our construction also applies to multidimensional knapsack and multiunit combinatorial auctions. Our FPTASs compute a (1 + ε)-approximate solution violating each budget constraint by a factor of (1 + ε).
When feasible solutions induce an independence system, i.e., when subsets of feasible solutions are feasible as well, we present a PTAS (not violating any constraint), which combines the approach above with a novel monotone way to guess the heaviest elements in the optimum solution. Finally, we present a universally truthful Las Vegas PTAS for minimum spanning tree with a single budget constraint, where one wants to compute a minimum cost spanning tree whose length is at most a given value. This result is based on the Lagrangian relaxation method, in combination with our monotone guessing step and with a random perturbation step (ensuring low expected running time). This result can be derandomized in the case of integral lengths. All the mentioned results match the best known approximation ratios, which are, however, obtained by nontruthful algorithms.
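The approximate-Pareto-set tool can be sketched in isolation: bucket the two objectives geometrically by powers of (1 + ε) and keep one representative per bucket, so every feasible trade-off point is (1 + ε)-dominated (per coordinate, up to the bucket width) by a kept point. The instance below is hypothetical; the paper's contribution is combining such sets with monotonicity to obtain truthfulness, which this sketch does not address.

```python
# Sketch of an epsilon-approximate Pareto set for a bi-objective minimisation
# problem: keep one representative per geometric (1+eps) bucket of the two
# objectives. Illustrative only; truthfulness is not modelled here.
import math
from itertools import combinations

def approx_pareto(points, eps):
    """points: list of (cost, length) with both coordinates >= 1; minimise both."""
    buckets = {}
    base = 1.0 + eps
    for c, l in points:
        key = (int(math.log(c, base)), int(math.log(l, base)))
        if key not in buckets or (c, l) < buckets[key]:
            buckets[key] = (c, l)     # keep the lexicographically smallest point
    return sorted(buckets.values())

# Hypothetical instance: each choice of 3 items out of 6 yields one
# (cost, length) trade-off point.
items = [(3, 9), (4, 7), (5, 5), (6, 4), (8, 2), (12, 1)]
pts = [(sum(c for c, _ in s), sum(l for _, l in s))
       for s in combinations(items, 3)]
pareto = approx_pareto(pts, eps=0.1)
print(len(pareto), "of", len(pts), "points kept")
```

Two points sharing a bucket differ by less than a factor (1 + ε) in each coordinate, so the kept set (1 + ε)-covers all of `pts` while typically being much smaller; this size/accuracy trade-off is what makes the multicriteria FPTAS constructions go through.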