Controlling iterated jumps of solutions to combinatorial problems
Among the Ramsey-type hierarchies, namely, Ramsey's theorem, the free set,
the thin set and the rainbow Ramsey theorem, only Ramsey's theorem is known to
collapse in reverse mathematics. A promising approach to show the strictness of
the hierarchies would be to prove that every computable instance at level n has
a low_n solution. In particular, this requires effective control of iterations
of the Turing jump. In this paper, we design some variants of Mathias forcing
to construct solutions to cohesiveness, the Erdős–Moser theorem and stable
Ramsey's theorem for pairs, while controlling their iterated jumps. For this,
we define forcing relations which, unlike Mathias forcing, have the same
definitional complexity as the formulas they force. This analysis enables us to
answer two questions of Wei Wang, namely, whether cohesiveness and the
Erdős–Moser theorem admit preservation of the arithmetic hierarchy, and can be
seen as a step towards resolving the strictness of the Ramsey-type
hierarchies.

Comment: 32 pages
Open questions about Ramsey-type statements in reverse mathematics
Ramsey's theorem states that for any coloring of the n-element subsets of N
with finitely many colors, there is an infinite set H such that all n-element
subsets of H have the same color. The strength of consequences of Ramsey's
theorem has been extensively studied in reverse mathematics and under various
reducibilities, namely, computable reducibility and uniform reducibility. Our
understanding of the combinatorics of Ramsey's theorem and its consequences
has greatly improved over the past decades. In this paper, we state some
questions which arose naturally during this study. The inability to answer
these questions reveals gaps in our understanding of the combinatorics of
Ramsey's theorem.

Comment: 15 pages
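For reference, the statement paraphrased at the start of this abstract is usually denoted RT^n_k (Ramsey's theorem for n-element subsets and k colors); a formal rendering reads:

```latex
% Ramsey's theorem for n-element subsets and k colors
\[
\mathsf{RT}^n_k:\quad
\forall f\colon [\mathbb{N}]^n \to \{0,\dots,k-1\}\;
\exists H \subseteq \mathbb{N} \text{ infinite}
\text{ such that } f \text{ is constant on } [H]^n .
\]
```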
The weakness of the pigeonhole principle under hyperarithmetical reductions
The infinite pigeonhole principle for 2-partitions (RT^1_2) asserts the
existence, for every set, of an infinite subset of it or of its complement. In
this paper, we study the infinite pigeonhole principle from a
computability-theoretic viewpoint. We prove in particular that RT^1_2 admits
strong cone avoidance for arithmetical and hyperarithmetical reductions. We
also prove the existence, for every set, of an infinite low subset of it or of
its complement. This answers a question of Wang. For this, we design a new
notion of forcing which generalizes the first- and second-jump control of
Cholak, Jockusch and Slaman.

Comment: 29 pages
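A minimal formal rendering of the principle, in the standard RT^1_2 notation used above:

```latex
% Infinite pigeonhole principle for 2-partitions
\[
\mathsf{RT}^1_2:\quad
\forall A \subseteq \mathbb{N}\;
\exists H \subseteq \mathbb{N} \text{ infinite}\;
\bigl(H \subseteq A \ \lor\ H \subseteq \mathbb{N}\setminus A\bigr).
\]
```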
Combinatorial optimization and metaheuristics
Today, combinatorial optimization is one of the youngest and most active areas of discrete mathematics. It is a branch of optimization in applied mathematics and computer science, related to operational research, algorithm theory and computational complexity theory, and it sits at the intersection of several fields, including artificial intelligence, mathematics and software engineering. Its growing interest arises from the fact that a large number of scientific and industrial problems can be formulated as abstract combinatorial optimization problems, through graphs and/or (integer) linear programs. Some of these problems admit polynomial-time ("efficient") algorithms, while most of them are NP-hard, i.e., no polynomial-time algorithm is known for them (and none exists unless P = NP). In practice, this means that an exact solution cannot be guaranteed within reasonable time, and one has to settle for an approximate solution with known performance guarantees. Indeed, the goal of approximate methods is to find "quickly" (in reasonable run-times), with "high" probability, provably "good" solutions (with low error from the true optimum). In the last 20 years, a new class of algorithms, commonly called metaheuristics, has emerged, which combine heuristics in high-level frameworks aimed at efficiently and effectively exploring the search space. This report briefly outlines the components, concepts, advantages and disadvantages of different metaheuristic approaches from a conceptual point of view, in order to analyze their similarities and differences. The two significant forces of intensification and diversification, which mainly determine the behavior of a metaheuristic, are pointed out. The report concludes by exploring the importance of hybridization and integration methods.
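As a concrete illustration of the intensification/diversification trade-off mentioned in the abstract, here is a minimal simulated annealing sketch (one classical metaheuristic; the toy objective, neighborhood, and all parameter values are illustrative assumptions, not taken from the report):

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=10.0, cooling=0.995,
                        steps=5000, seed=0):
    """Generic simulated annealing. High temperature favors diversification
    (worse moves accepted often); as the temperature cools, the search
    intensifies around the incumbent solution."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = cost(y)
        # Always accept improving moves; accept worse moves with
        # probability exp(-delta/t), which shrinks as t cools.
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # gradual shift from diversification to intensification
    return best, fbest

# Toy instance: minimize a bumpy one-dimensional function over the integers.
def cost(x):
    return (x - 37) ** 2 + 10 * math.sin(x)

def neighbor(x, rng):
    return x + rng.choice([-3, -2, -1, 1, 2, 3])

best, fbest = simulated_annealing(cost, neighbor, x0=0)
```

The cooling schedule is the knob that balances the two forces: a slower schedule keeps diversification alive longer at the cost of more iterations.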
Simheuristic and learnheuristic algorithms for the temporary-facility location and queuing problem during population treatment or testing events
Epidemic outbreaks, such as the one caused by the coronavirus disease, have raised the need for more efficient healthcare logistics. One of the challenges that many governments have to face in such scenarios is the deployment of temporary medical facilities across a region with the purpose of providing medical services to their citizens. This work tackles this temporary-facility location and queuing problem with the goals of minimizing costs, the expected completion time, and population travel and waiting times. The completion time for a facility depends on the number of people assigned to it as well as on stochastic arrival times. This work proposes a learnheuristic algorithm to solve the facility location and population assignment problem. First, a machine learning model is trained using data from a queuing model (simulation module). The learnheuristic then constructs solutions, using the machine learning model to rapidly evaluate decisions in terms of facility completion and population waiting times. The efficiency and quality of the algorithm are demonstrated by comparison with exact and simulation-only (simheuristic) methodologies. A series of experiments is performed which explores the trade-offs between solution cost, completion time, and population travel and waiting times.

Peer reviewed. Postprint (author's final draft).
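A minimal sketch of the learnheuristic idea described above, under simplifying assumptions that are not from the paper: single-server facilities, a hand-rolled one-coefficient regression standing in for the machine learning model, and hypothetical names and parameter values throughout.

```python
import random
import statistics

def simulate_wait(arrival_rate, service_rate, n_customers, rng):
    """Single-server FIFO queue simulation: returns the mean waiting time."""
    t = 0.0
    server_free = 0.0
    waits = []
    for _ in range(n_customers):
        t += rng.expovariate(arrival_rate)   # next arrival
        start = max(t, server_free)
        waits.append(start - t)
        server_free = start + rng.expovariate(service_rate)
    return statistics.mean(waits)

# 1) Simulation module: build a training set mapping facility load -> mean wait.
rng = random.Random(1)
service_rate = 1.0
samples = []
for load in [0.1 * i for i in range(1, 10)]:        # arrival rates 0.1 .. 0.9
    waits = [simulate_wait(load, service_rate, 200, rng) for _ in range(20)]
    samples.append((load, statistics.mean(waits)))

# 2) "Learning" step: fit wait ~ a * load / (1 - load) (the shape of the
#    M/M/1 waiting-time formula) by least squares on the single coefficient a.
xs = [l / (1.0 - l) for l, _ in samples]
ys = [w for _, w in samples]
a = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def predict_wait(load):
    return a * load / (1.0 - load)

# 3) Learnheuristic assignment: send each population block to the facility
#    with the smallest predicted wait after the assignment, using the cheap
#    predictor instead of re-running the simulation at every decision.
def assign(blocks, n_facilities, block_rate=0.05):
    loads = [0.0] * n_facilities
    plan = []
    for _ in range(blocks):
        k = min(range(n_facilities),
                key=lambda i: predict_wait(loads[i] + block_rate))
        loads[k] += block_rate
        plan.append(k)
    return plan, loads

plan, loads = assign(blocks=30, n_facilities=3)
```

Because the predictor is monotone in load, the greedy step balances the load across facilities; the trained model replaces expensive simulation calls inside the constructive loop, which is the core of the learnheuristic design.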
Iterative forcing and hyperimmunity in reverse mathematics
The separation between two theorems in reverse mathematics is usually done by
constructing a Turing ideal satisfying a theorem P and avoiding the solutions
to a fixed instance of a theorem Q. Lerman, Solomon and Towsner introduced a
forcing technique for iterating a computable non-reducibility in order to
separate theorems over omega-models. In this paper, we present a modularized
version of their framework in terms of preservation of hyperimmunity and show
that it is powerful enough to obtain the same separation results as Wang did
with his notion of preservation of definitions.

Comment: 15 pages