Combinatorial optimization and metaheuristics
Today, combinatorial optimization is one of the youngest and most active areas of discrete mathematics. It is a branch of optimization in applied mathematics and computer science, related to operational research, algorithm theory and computational complexity theory, and it sits at the intersection of several fields, including artificial intelligence, mathematics and software engineering. The growing interest in it stems from the fact that a large number of scientific and industrial problems can be formulated as abstract combinatorial optimization problems, through graphs and/or (integer) linear programs. Some of these problems have polynomial-time (“efficient”) algorithms, while most of them are NP-hard, i.e. no polynomial-time algorithm for them is known. In practice, this means that one cannot guarantee that an exact solution to the problem will be found and has to settle for an approximate solution with known performance guarantees. Indeed, the goal of approximate methods is to find “quickly” (in reasonable run-times), with “high” probability, provably “good” solutions (with low error relative to the true optimum). In the last 20 years, a new kind of algorithm, commonly called metaheuristics, has emerged in this class; these basically try to combine heuristics in high-level frameworks aimed at efficiently and effectively exploring the search space. This report briefly outlines the components, concepts, advantages and disadvantages of different metaheuristic approaches from a conceptual point of view, in order to analyze their similarities and differences. The two very significant forces of intensification and diversification, which mainly determine the behavior of a metaheuristic, will be pointed out. The report concludes by exploring the importance of hybridization and integration methods.
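As a minimal, hypothetical illustration of how a metaheuristic balances these two forces, the Python sketch below implements simulated annealing on a toy one-dimensional problem: the temperature parameter governs diversification (accepting worse moves early on) versus intensification (near-greedy refinement later). The cost function and neighborhood move are arbitrary stand-ins, not taken from the report.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=10.0, cooling=0.999, steps=20000):
    """Minimal simulated annealing sketch (cost/neighbor are placeholders).

    High temperature favors diversification: worse moves are often accepted,
    so the search explores distant regions. As the temperature drops, the
    acceptance rule becomes nearly greedy, i.e. intensification.
    """
    x, fx, t = x0, cost(x0), t0
    best, fbest = x, fx
    for _ in range(steps):
        y = neighbor(x)
        fy = cost(y)
        # Metropolis rule: always accept improvements; accept uphill moves
        # with probability exp(-(fy - fx) / t).
        if fy <= fx or random.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # geometric cooling shifts the balance toward intensification
    return best, fbest

# Toy usage: a multimodal 1-D function with many local minima.
f = lambda x: x * x + 10 * math.sin(3 * x)
move = lambda x: x + random.uniform(-0.5, 0.5)
print(simulated_annealing(f, move, x0=8.0))
```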
New results on rewrite-based satisfiability procedures
Program analysis and verification require decision procedures to reason about theories of data structures. Many problems can be reduced to the satisfiability of sets of ground literals in a theory T. If a sound and complete inference system for first-order logic is guaranteed to terminate on T-satisfiability problems, then any theorem-proving strategy with that system and a fair search plan is a T-satisfiability procedure. We prove termination of a rewrite-based first-order engine on the theories of records, integer offsets, integer offsets modulo and lists. We give a modularity theorem stating sufficient conditions for termination on a combination of theories, given termination on each. The above theories, as well as others, satisfy these conditions. We introduce several sets of benchmarks on these theories and their combinations, including both parametric synthetic benchmarks to test scalability, and real-world problems to test performance on huge sets of literals. We compare the rewrite-based theorem prover E with the validity checkers CVC and CVC Lite. Contrary to the folklore that a general-purpose prover cannot compete with reasoners with built-in theories, the experiments are overall favorable to the theorem prover, showing that the rewriting approach is not only elegant and conceptually simple, but also has important practical implications.
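To make “satisfiability of sets of ground literals in a theory T” concrete, here is a minimal Python sketch for the simplest such setting: ground equalities and disequalities over uninterpreted functions, decided by a naive congruence closure. It illustrates the problem class only; it is not the paper's rewrite-based method, and the tuple encoding of terms is an assumption of this sketch.

```python
def euf_sat(equalities, disequalities):
    """Decide satisfiability of ground EUF literals via naive congruence closure.

    Terms are nested tuples: ('f', ('a',), ('b',)) is f(a, b); ('a',) is a
    constant. `equalities`/`disequalities` are lists of term pairs. This is
    a quadratic textbook version, illustrative only.
    """
    # Collect all subterms.
    terms = set()
    def collect(t):
        if t not in terms:
            terms.add(t)
            for arg in t[1:]:
                collect(arg)
    for s, t in equalities + disequalities:
        collect(s); collect(t)

    # Union-find over terms.
    parent = {t: t for t in terms}
    def find(t):
        while parent[t] != t:
            parent[t] = parent[parent[t]]  # path halving
            t = parent[t]
        return t
    def union(s, t):
        parent[find(s)] = find(t)

    for s, t in equalities:
        union(s, t)
    # Congruence: merge f(s1..sn) and f(t1..tn) whenever all si ~ ti.
    changed = True
    while changed:
        changed = False
        ts = list(terms)
        for i, s in enumerate(ts):
            for t in ts[i + 1:]:
                if (s[0] == t[0] and len(s) == len(t)
                        and find(s) != find(t)
                        and all(find(a) == find(b) for a, b in zip(s[1:], t[1:]))):
                    union(s, t)
                    changed = True
    # Satisfiable iff no disequality collapses into a single class.
    return all(find(s) != find(t) for s, t in disequalities)

# f(a) = b and a = b force f(a) ~ f(b) by congruence, so f(b) != b is unsat.
a, b = ('a',), ('b',)
print(euf_sat([(('f', a), b), (a, b)], [(('f', b), b)]))  # False
```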
On the Expressivity and Applicability of Model Representation Formalisms
A number of first-order calculi employ an explicit model representation formalism for automated reasoning and for detecting satisfiability. Many of these formalisms can represent infinite Herbrand models. The first-order fragment of monadic, shallow, linear, Horn (MSLH) clauses is such a formalism, used in the approximation refinement calculus. Our first result is a finite model property for MSLH clause sets. Therefore, MSLH clause sets cannot represent models of clause sets with inherently infinite models. Through a translation to tree automata, we further show that this limitation also applies to the linear fragment of implicit generalizations, which is the formalism used in the model-evolution calculus, to atoms with disequality constraints, the formalism used in the non-redundant clause learning calculus (NRCL), and to atoms with membership constraints, a formalism used for example in decision procedures for algebraic data types. Although these formalisms cannot represent models of clause sets with inherently infinite models, through an additional approximation step they can. This is our second main result. For clause sets including the definition of an equivalence relation, with the help of an additional, novel approximation called reflexive relation splitting, the approximation refinement calculus can automatically show satisfiability through the MSLH clause set formalism.
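Since the limitation results go through a translation to tree automata, the following toy sketch may help fix intuitions: a deterministic bottom-up tree automaton accepts a regular set of ground terms, which is the kind of finite-state model representation at issue. The signature and automaton below are hypothetical, not taken from the paper.

```python
# A bottom-up tree automaton over the signature {zero/0, s/1}, accepting
# exactly the even numerals s^(2k)(zero) -- a regular set of ground terms
# of the kind such formalisms can denote.
TRANSITIONS = {
    ('zero',): 'q_even',
    ('s', 'q_even'): 'q_odd',
    ('s', 'q_odd'): 'q_even',
}
ACCEPTING = {'q_even'}

def run(term):
    """Evaluate the automaton bottom-up on a nested-tuple ground term."""
    symbol, *args = term
    states = tuple(run(a) for a in args)
    return TRANSITIONS[(symbol, *states)]

def accepts(term):
    return run(term) in ACCEPTING

zero = ('zero',)
two = ('s', ('s', zero))
three = ('s', two)
print(accepts(two), accepts(three))  # True False
```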
Concepts and Their Dynamics: A Quantum-Theoretic Modeling of Human Thought
We analyze different aspects of our quantum modeling approach to human concepts, and more specifically focus on the quantum effects of contextuality, interference, entanglement and emergence, illustrating how each of them makes its appearance in specific situations of the dynamics of human concepts and their combinations. We point out the relation of our approach, which is based on an ontology of a concept as an entity in a state that changes under the influence of a context, to the main traditional concept theories, i.e. prototype theory, exemplar theory and theory theory. We ponder the question of why quantum theory performs so well in its modeling of human concepts, and shed light on this question by analyzing the role of complex amplitudes, showing how they allow one to describe interference in the statistics of measurement outcomes, while in the traditional theories the statistics of outcomes originates in classical probability weights, without the possibility of interference. The relevance of complex numbers, the appearance of entanglement, and the role of Fock space in explaining contextual emergence, all as unique features of the quantum modeling, are explicitly revealed in this paper by analyzing human concepts and their dynamics.
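The role of complex amplitudes admits a one-screen numerical illustration (the amplitude values below are arbitrary, not taken from the paper): in a classical mixture the probability weights simply add, whereas quantum amplitudes add before squaring, producing an interference term 2·Re(a·b̄).

```python
import cmath

# Two paths to the same outcome, with complex amplitudes a and b
# (arbitrary illustrative values).
a = 0.6 * cmath.exp(1j * 0.0)
b = 0.5 * cmath.exp(1j * 2.0)

p_classical = abs(a) ** 2 + abs(b) ** 2   # classical mixture: weights add
p_quantum = abs(a + b) ** 2               # amplitudes add, then square
interference = p_quantum - p_classical    # equals 2 * Re(a * conj(b))

print(p_classical, p_quantum, interference)
print(2 * (a * b.conjugate()).real)       # matches the interference term
```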
Combination of convex theories: Modularity, deduction completeness, and explanation
Decision procedures are key components of theorem provers and constraint satisfaction systems. Their modular combination is of prime interest for building efficient systems, but their effective use is often limited by poor interface capabilities, when such procedures only provide a simple “sat/unsat” answer. In this paper, we develop a framework to design cooperation schemas between such procedures while maintaining the modularity of their interfaces. First, we use the framework to specify and prove the correctness of the classic combination schemas by Nelson–Oppen and Shostak. Second, we introduce the concept of deduction-complete satisfiability procedures, show how to build them for large classes of theories, and provide a schema to modularly combine them. Third, we consider the problem of modularly constructing explanations for combinations by re-using available proof-producing procedures for the component theories.
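To give a feel for what a deduction-complete interface buys, here is a caricature in Python of Nelson–Oppen-style cooperation: each component procedure answers sat/unsat on its own literals and additionally reports entailed equalities between shared variables, which are broadcast to the others until a fixpoint or a contradiction is reached. The stub procedure and its API (assert_eq, check, implied_equalities) are assumptions of this sketch, not the paper's framework.

```python
def nelson_oppen(procs, shared_vars):
    """Caricature of equality propagation between convex-theory procedures.

    Each element of `procs` must expose:
      check()              -> bool: is its current literal set satisfiable?
      implied_equalities() -> set of entailed (x, y) variable pairs
      assert_eq(x, y)      -> add an equality to its literal set
    """
    known = set()
    while True:
        new = set()
        for p in procs:
            if not p.check():
                return "unsat"
            new |= {(x, y) for (x, y) in p.implied_equalities()
                    if x in shared_vars and y in shared_vars} - known
        if not new:
            return "sat"  # fixpoint: no procedure can deduce anything further
        known |= new
        for p in procs:   # broadcast deduced equalities to everyone
            for x, y in new:
                p.assert_eq(x, y)

class EqualityStub:
    """Toy deduction-complete procedure: a set of equalities, at most one
    disequality, reasoning by transitive closure (union-find)."""
    def __init__(self, eqs, diseq=None):
        self.eqs, self.diseq = set(eqs), diseq
    def assert_eq(self, x, y):
        self.eqs.add((x, y))
    def _find(self):
        cls = {}
        def find(v):
            while cls.get(v, v) != v:
                v = cls[v]
            return v
        for x, y in self.eqs:
            cls[find(x)] = find(y)
        return find
    def check(self):
        if self.diseq is None:
            return True
        find = self._find()
        return find(self.diseq[0]) != find(self.diseq[1])
    def implied_equalities(self):
        find = self._find()
        vs = {v for e in self.eqs for v in e}
        return {(x, y) for x in vs for y in vs if x < y and find(x) == find(y)}

# a = b in one procedure; b = c and a != c in the other: propagating the
# deduced equalities across the interface exposes the contradiction.
p1 = EqualityStub([("a", "b")])
p2 = EqualityStub([("b", "c")], diseq=("a", "c"))
print(nelson_oppen([p1, p2], shared_vars={"a", "b", "c"}))  # unsat
```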