LIPIcs, Volume 251, ITCS 2023, Complete Volume
Generic multiplicative endomorphism of a field
We introduce the model companion of the theory of fields expanded by a unary
function symbol for a multiplicative map, which we call ACFH. Among other
results, we prove that this theory is NSOP and not simple, and that the kernel
of the map is a generic pseudo-finite abelian group. We also prove that if
forking satisfies existence, then ACFH has elimination of imaginaries.
MAXIMALITY OF LOGIC WITHOUT IDENTITY
Lindström’s theorem obviously fails as a characterization of first-order logic without identity (L⁻ωω). In this note, we provide a fix: we show that L⁻ωω is a maximal abstract logic satisfying a weak form of the isomorphism property (suitable for identity-free languages and studied in [11]), the Löwenheim–Skolem property, and compactness. Furthermore, we show that compactness can be replaced by recursive enumerability of validity under certain conditions. In the proofs, we use a form of strong upwards Löwenheim–Skolem theorem not available in the framework with identity.
On linear, fractional, and submodular optimization
In this thesis, we study four fundamental problems in the theory of optimization.
1. In fractional optimization, we are interested in minimizing a ratio of two functions over some domain. A well-known technique for solving this problem is the Newton–Dinkelbach method. We propose an accelerated version of this classical method and give a new analysis using the Bregman divergence. We show how it leads to improved or simplified results in three application areas.
2. The diameter of a polyhedron is the maximum length of a shortest path between any two vertices. The circuit diameter is a relaxation of this notion, whereby shortest paths are not restricted to edges of the polyhedron. For a polyhedron in standard equality form with constraint matrix A, we prove an upper bound on the circuit diameter that is quadratic in the rank of A and logarithmic in the circuit imbalance measure of A. We also give circuit augmentation algorithms for linear programming with similar iteration complexity.
3. The correlation gap of a set function is the ratio between its multilinear and concave extensions. We present improved lower bounds on the correlation gap of a matroid rank function, parametrized by the rank and girth of the matroid. We also prove that for a weighted matroid rank function, the worst correlation gap is achieved with uniform weights. Such improved lower bounds have direct applications in submodular maximization and mechanism design.
4. The last part of this thesis concerns parity games, a problem intimately related to linear programming. A parity game is an infinite-duration game between two players on a graph. The problem of deciding the winner lies in NP and co-NP, with no known polynomial algorithm to date. Many of the fastest (quasi-polynomial) algorithms have been unified via the concept of a universal tree. We propose a strategy iteration framework which can be applied on any universal tree.
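The classical Newton–Dinkelbach method mentioned in item 1 can be sketched in a few lines; the version below is a minimal illustration over a finite domain with g > 0 (the function names and the finite-domain restriction are assumptions for the sketch, not part of the thesis):

```python
def dinkelbach(f, g, domain, tol=1e-12):
    """Minimize the ratio f(x)/g(x) over a finite domain with g(x) > 0,
    using the classical Newton-Dinkelbach iteration."""
    points = list(domain)
    x = points[0]
    lam = f(x) / g(x)  # current guess for the optimal ratio
    while True:
        # Parametric subproblem: minimize f(y) - lam * g(y) over the domain.
        x = min(points, key=lambda y: f(y) - lam * g(y))
        if f(x) - lam * g(x) >= -tol:
            return lam, x   # subproblem value is ~0, so lam is optimal
        lam = f(x) / g(x)   # Newton update of the ratio guess
```

Each iteration solves a parametric subproblem at the current ratio guess and tightens the guess; the iteration stops when the subproblem value reaches zero.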
The Long Search for Collatz Counterexamples
Despite decades of effort, the Collatz conjecture remains neither proved, nor refuted by a counterexample, nor formally shown to be undecidable. This note introduces the Collatz problem and probes its logical depth with a test question: can the search space for counterexamples be iteratively reduced, and when would it help?
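For reference, the Collatz map itself is simple to state in code; the helper below is an illustrative sketch of a counterexample check, where the step limit is an arbitrary assumption standing in for an open-ended search:

```python
def collatz_steps(n, limit=10**6):
    """Count steps for n to reach 1 under the Collatz map
    n -> n/2 (even) or 3n+1 (odd); return None past the limit,
    which is how a counterexample search would flag a candidate."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
        if steps > limit:
            return None
    return steps
```

A counterexample search scans starting values and flags any that fail to reach 1; no such value is known.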
Efficient parameterized algorithms on structured graphs
In classical complexity theory, the worst-case running times of algorithms depend solely on the size of the input. In parameterized complexity the goal is to refine the analysis of the running time of an algorithm by additionally considering a parameter that measures some kind of structure in the input. A parameterized algorithm then utilizes the structure described by the parameter and achieves a running time that is faster than the best general (unparameterized) algorithm for instances of low parameter value.
In the first part of this thesis, we continue this line of research and investigate the influence of several parameters on the running times of well-known tractable problems.
Several presented algorithms are adaptive algorithms, meaning that they match the running time of a best unparameterized algorithm for worst-case parameter values. Thus, an adaptive parameterized algorithm is asymptotically never worse than the best unparameterized algorithm, while it outperforms the best general algorithm already for slightly non-trivial parameter values.
As illustrated in the first part of this thesis, for many problems there exist efficient parameterized algorithms regarding multiple parameters, each describing a different kind of structure.
In the second part of this thesis, we explore how to combine such homogeneous structures into more general, heterogeneous structures. Using algebraic expressions, we define new combined graph classes of heterogeneous structure in a clean and robust way, and we showcase this for the heterogeneous merge of the parameters tree-depth and modular-width by presenting parameterized algorithms on such heterogeneous graph classes whose running times match the homogeneous cases throughout.
On Notions of Provability
In this thesis, we study notions of provability, i.e. formulas B(x,y) such that a formula
ϕ is provable in T if, and only if, there is m ∈ N such that T ⊢ B(⌜ϕ⌝,m) (here m plays
the role of a parameter); the usual notion of provability, k-step provability (also known
as k-provability), and s-symbols provability are examples of notions of provability.
We develop general results concerning notions of provability, but we also study in
detail concrete notions. We present partial results concerning the decidability of k-provability
for Peano Arithmetic (PA), and we study important problems concerning
k-provability, such as Kreisel’s Conjecture and Montagna’s Problem:
(∀n ∈ N. T ⊢_{k steps} ϕ(n)) ⟹ T ⊢ ∀x.ϕ(x), [Kreisel’s Conjecture]
and
Does PA ⊢_{k steps} Pr_PA(⌜ϕ⌝) → ϕ imply PA ⊢_{k steps} ϕ? [Montagna’s Problem]
Incompleteness, Undefinability of Truth, and Recursion are different entities that
share important features; we study this in detail and we trace these entities to common
results.
We present numeral forms of completeness and consistency, numeral completeness
and numeral consistency, respectively; numeral completeness guarantees that, whenever
a Σ^b_1(S^1_2)-formula ϕ(x⃗) is such that Q⃗x⃗.ϕ(x⃗) is true (where Q⃗ is any array of
quantifiers), then this very fact can be proved inside S^1_2; more precisely,
S^1_2 ⊢ Q⃗x⃗.Pr_τ(⌜ϕ(ẋ⃗)⌝). We examine these two results from a mathematical point of
view by presenting the minimal conditions to state them and by finding consequences of
them, and from a philosophical point of view by relating them to Hilbert’s Program.
The derivability condition “provability implies provable provability” is one of the main
derivability conditions used to derive the Second Incompleteness Theorem and is known
to be very sensitive to the underlying theory one has at hand. We create a weak theory
G2 to study this condition; this is a theory for the complexity class FLINSPACE. We also
relate properties of G2 to equality between computational classes.
LIPIcs, Volume 261, ICALP 2023, Complete Volume
Regularization and Optimal Multiclass Learning
The quintessential learning algorithm of empirical risk minimization (ERM) is
known to fail in various settings for which uniform convergence does not
characterize learning. It is therefore unsurprising that the practice of
machine learning is rife with considerably richer algorithmic techniques for
successfully controlling model capacity. Nevertheless, no such technique or
principle has broken away from the pack to characterize optimal learning in
these more general settings.
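As a point of reference, ERM over a finite hypothesis class can be sketched in a few lines; the function names and the use of 0-1 loss below are illustrative assumptions, not the paper's construction:

```python
def erm(hypotheses, sample):
    """Empirical risk minimization: return a hypothesis with the
    lowest 0-1 error on the labeled sample."""
    def empirical_risk(h):
        return sum(h(x) != y for x, y in sample) / len(sample)
    return min(hypotheses, key=empirical_risk)
```

ERM picks any empirical-risk minimizer with no preference among ties; the regularized learners studied in the paper refine exactly this tie-breaking and capacity control.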
The purpose of this work is to characterize the role of regularization in
perhaps the simplest setting for which ERM fails: multiclass learning with
arbitrary label sets. Using one-inclusion graphs (OIGs), we exhibit optimal
learning algorithms that dovetail with tried-and-true algorithmic principles:
Occam's Razor as embodied by structural risk minimization (SRM), the principle
of maximum entropy, and Bayesian reasoning. Most notably, we introduce an
optimal learner which relaxes structural risk minimization on two dimensions:
it allows the regularization function to be "local" to datapoints, and uses an
unsupervised learning stage to learn this regularizer at the outset. We justify
these relaxations by showing that they are necessary: removing either dimension
fails to yield a near-optimal learner. We also extract from OIGs a
combinatorial sequence we term the Hall complexity, which is the first to
characterize a problem's transductive error rate exactly.
Lastly, we introduce a generalization of OIGs and the transductive learning
setting to the agnostic case, where we show that optimal orientations of
Hamming graphs -- judged using nodes' outdegrees minus a system of
node-dependent credits -- characterize optimal learners exactly. We demonstrate
that an agnostic version of the Hall complexity again characterizes error rates
exactly, and exhibit an optimal learner using maximum entropy programs.
The category of MSO transductions
MSO transductions are binary relations between structures which are defined
using monadic second-order logic. MSO transductions form a category, since they
are closed under composition. We show that many notions from language theory,
such as recognizability or tree decompositions, can be defined in an abstract
way that only refers to MSO transductions and their compositions.
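The closure under composition that makes MSO transductions a category mirrors ordinary relational composition; the sketch below illustrates the analogy on finite relations (this is an illustration of composition of binary relations, not the MSO construction itself):

```python
def compose(R, S):
    """Relational composition: (a, c) is in R;S iff there is some b
    with (a, b) in R and (b, c) in S."""
    return {(a, c) for (a, b) in R for (b2, c) in S if b == b2}
```

Composition defined this way is associative and has identity relations as units, which is exactly the categorical structure the abstract refers to.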