Residue symbols and Jantzen-Seitz partitions
Jantzen-Seitz partitions are those p-regular partitions of n which label the
p-modular irreducible representations of the symmetric group S_n that remain
irreducible when restricted to S_{n-1}; they have recently also been found to
be important for certain exactly solvable models in statistical mechanics. In
this article we study their combinatorial properties via a detailed analysis
of their residue symbols; in particular, the p-cores of Jantzen-Seitz
partitions are determined.
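As a rough illustration of the combinatorics involved (a sketch assuming the standard abacus construction, not taken from the article), the p-core of a partition can be computed by forming its beta-numbers and sliding the beads up the p runners of an abacus:

    def p_core(partition, p):
        """p-core of a partition via the standard abacus construction
        (illustrative sketch, not the article's residue-symbol analysis)."""
        k = len(partition)
        # beta-numbers: a strictly decreasing sequence encoding the partition
        beta = [part + (k - 1 - i) for i, part in enumerate(partition)]
        # place beads on p runners (residue classes mod p) and push them up
        new_beta = []
        for r in range(p):
            count = sum(1 for b in beta if b % p == r)
            new_beta.extend(r + p * j for j in range(count))
        new_beta.sort(reverse=True)
        # convert the pushed-up bead positions back into a partition
        core = [b - (k - 1 - i) for i, b in enumerate(new_beta)]
        return [part for part in core if part > 0]

    # For example, p_core([4, 2, 1], 3) returns [1]: removing rim 3-hooks
    # from (4, 2, 1) leaves the single-box 3-core (1).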
Model Checking CTL is Almost Always Inherently Sequential
The model checking problem for CTL is known to be P-complete (Clarke,
Emerson, and Sistla (1986), see Schnoebelen (2002)). We consider fragments of
CTL obtained by restricting the use of temporal modalities or the use of
negations---restrictions already studied for LTL by Sistla and Clarke (1985)
and Markey (2004). For all these fragments, except for the trivial case without
any temporal operator, we systematically prove model checking to be either
inherently sequential (P-complete) or very efficiently parallelizable
(LOGCFL-complete). For most fragments, however, model checking for CTL is
already P-complete. Hence our results indicate that, in cases where the
combined complexity is of relevance, approaching CTL model checking by
parallelism cannot be expected to result in any significant speedup. We also
completely determine the complexity of the model checking problem for all
fragments of the extensions ECTL, CTL+, and ECTL+.
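As background to the P upper bound (an illustrative sketch, not part of the abstract), the classical labelling algorithm evaluates a CTL formula bottom-up over an explicit Kripke structure by fixed-point computations; the fragment below shows the steps for the EX and EU modalities, with the state set, successor map and satisfaction sets as assumed inputs:

    def sat_EX(states, succ, sat_phi):
        """States with at least one successor satisfying phi."""
        return {s for s in states if any(t in sat_phi for t in succ[s])}

    def sat_EU(states, succ, sat_phi, sat_psi):
        """Least fixed point for E[phi U psi]: start from the psi-states and
        repeatedly add phi-states with a successor already in the set."""
        sat = set(sat_psi)
        frontier = set(sat)
        while frontier:
            frontier = {s for s in states - sat
                        if s in sat_phi and any(t in sat for t in succ[s])}
            sat |= frontier
        return sat

Each such pass is polynomial in the structure and the formula, which gives the P upper bound; the hardness results above indicate that this inherently sequential style can rarely be avoided.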
Foundations for Uniform Interpolation and Forgetting in Expressive Description Logics
We study uniform interpolation and forgetting in the description logic ALC.
Our main results are model-theoretic characterizations of uniform interpolants
and their existence in terms of bisimulations, tight complexity bounds for
deciding the existence of uniform interpolants, an approach to computing
interpolants when they exist, and tight bounds on their size. We use a mix of
model-theoretic and automata-theoretic methods that, as a by-product, also
provides characterizations of and decision procedures for conservative
extensions.
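Purely for illustration (an assumed sketch, not the paper's method), the bisimulations referred to above are back-and-forth relations between interpretations; over finite models with a single role, the largest one can be computed by a naive greatest-fixed-point pruning:

    def largest_bisimulation(states1, label1, succ1, states2, label2, succ2):
        """Greatest fixed point: start from all pairs that agree on atomic
        labels, then prune pairs violating the forth or back condition."""
        rel = {(s, t) for s in states1 for t in states2
               if label1[s] == label2[t]}
        changed = True
        while changed:
            changed = False
            for (s, t) in list(rel):
                forth = all(any((s2, t2) in rel for t2 in succ2[t])
                            for s2 in succ1[s])
                back = all(any((s2, t2) in rel for s2 in succ1[s])
                           for t2 in succ2[t])
                if not (forth and back):
                    rel.discard((s, t))
                    changed = True
        return rel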
Robust Exponential Worst Cases for Divide-et-Impera Algorithms for Parity Games
The McNaughton-Zielonka divide et impera algorithm is the simplest and most
flexible approach available in the literature for determining the winner in a
parity game. Despite its exponential worst-case complexity and its negative
reputation as a poorly effective algorithm in practice, it has been shown to
rank among the best techniques for solving such games. It has also proved
resistant to lower-bound attacks, even more so than strategy improvement
approaches, and only recently has Friedmann provided a family of games on
which the algorithm requires exponential time. An easy
analysis of this family shows that a simple memoization technique can help the
algorithm solve the family in polynomial time. The same result can also be
achieved by exploiting an approach based on the dominion-decomposition
techniques proposed in the literature. These observations raise the question
whether a suitable combination of dynamic programming and game-decomposition
techniques can improve on the exponential worst case of the original algorithm.
In this paper we answer this question negatively, by providing a robustly
exponential worst case, showing that no intertwining of the above-mentioned
techniques can help mitigate the exponential nature of the divide et impera
approaches.
Comment: In Proceedings GandALF 2017, arXiv:1709.0176
Optimization with Sparsity-Inducing Penalties
Sparse estimation methods are aimed at using or obtaining parsimonious
representations of data or models. They were first dedicated to linear variable
selection but numerous extensions have now emerged such as structured sparsity
or kernel selection. It turns out that many of the related estimation problems
can be cast as convex optimization problems by regularizing the empirical risk
with appropriate non-smooth norms. The goal of this paper is to present from a
general perspective optimization tools and techniques dedicated to such
sparsity-inducing penalties. We cover proximal methods, block-coordinate
descent, reweighted-ℓ2 penalized techniques, working-set and homotopy
methods, as well as non-convex formulations and extensions, and provide an
extensive set of experiments to compare various algorithms from a computational
point of view.
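As a minimal illustration of the proximal machinery covered here (an assumed sketch using NumPy and a fixed step size, not code from the paper), the proximal operator of the ℓ1 norm is elementwise soft-thresholding, and plugging it into a gradient step gives the ISTA iteration for ℓ1-regularised least squares:

    import numpy as np

    def soft_threshold(v, t):
        """Proximal operator of t * ||.||_1: elementwise soft-thresholding."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def ista(X, y, lam, step, n_iter=200):
        """Proximal-gradient iterations for
        min_w 0.5 * ||y - X w||^2 + lam * ||w||_1."""
        w = np.zeros(X.shape[1])
        for _ in range(n_iter):
            grad = X.T @ (X @ w - y)        # gradient of the smooth part
            w = soft_threshold(w - step * grad, step * lam)  # proximal step
        return w

A step size below 1 / ||X||_2^2, the Lipschitz constant of the smooth part's gradient, is the usual safe choice.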