Branch-and-Prune Search Strategies for Numerical Constraint Solving
When solving numerical constraints such as nonlinear equations and
inequalities, solvers often exploit pruning techniques, which remove redundant
value combinations from the domains of variables at pruning steps. To find the
complete solution set, most of these solvers alternate the pruning steps with
branching steps, which split each problem into subproblems. This forms the
so-called branch-and-prune framework, well known among the approaches for
solving numerical constraints. The basic branch-and-prune search strategy that
uses domain bisections in place of the branching steps is called the bisection
search. In general, the bisection search works well in case (i) the solutions
are isolated, but it can be improved further in case (ii) there are continuums
of solutions (this often occurs when inequalities are involved). In this paper,
we propose a new branch-and-prune search strategy, along with several variants,
which not only yield better branching decisions in the latter case but also
perform as well as the bisection search in the former case. These
new search algorithms enable us to employ various pruning techniques in the
construction of inner and outer approximations of the solution set. Our
experiments show that these algorithms speed up the solving process often by
one order of magnitude or more when solving problems with continuums of
solutions, while keeping the same performance as the bisection search when the
solutions are isolated.
Comment: 43 pages, 11 figures
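As a toy illustration of the branch-and-prune loop described above (not the paper's algorithm), the sketch below alternates an interval-arithmetic pruning test with a bisection branching step to enclose the root of f(x) = x^2 - 2 on [0, 2]; the function, domain, and tolerance are our own illustrative choices.

```python
def solve(lo, hi, eps=1e-6):
    # Interval extension of f(x) = x*x - 2 on [lo, hi]; f is increasing on
    # [0, 2], so the image interval is simply [lo*lo - 2, hi*hi - 2].
    flo, fhi = lo * lo - 2, hi * hi - 2
    if flo > 0 or fhi < 0:
        return []            # prune step: the box cannot contain a zero
    if hi - lo < eps:
        return [(lo, hi)]    # box is small enough: report it as a solution box
    mid = (lo + hi) / 2      # branch step: bisect the domain
    return solve(lo, mid, eps) + solve(mid, hi, eps)
```

Running `solve(0.0, 2.0)` returns a single box of width below 1e-6 enclosing the root at sqrt(2); every other box is discarded at a pruning step before it is ever bisected, which is the source of the framework's efficiency.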
An Ontological formalization of the planning task
In this paper we propose a generic task ontology which formalizes the space of planning problems. Although planning is one of the oldest research areas in Artificial Intelligence, and attempts have been made in the past at developing task ontologies for planning, these formalizations suffer from serious limitations: they do not exhibit the required level of formalization and precision, and they usually fail to include some of the key concepts required for specifying planning problems. In contrast with earlier proposals, our task ontology formalizes the nature of the planning task independently of any planning paradigm, specific domain, or application, and provides a fine-grained, precise and comprehensive characterization of the space of planning problems. Finally, in addition to producing a formal specification, we have also operationalized the ontology into a set of executable definitions, which provide a concrete reusable resource for knowledge acquisition and system development in planning applications.
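To make the notion of a "space of planning problems" concrete, here is a minimal, hypothetical STRIPS-style encoding of a planning task; the ontology in the paper is far richer, and all names below are illustrative only.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    preconditions: frozenset  # facts that must hold to apply the action
    add_effects: frozenset    # facts the action makes true
    del_effects: frozenset    # facts the action makes false

@dataclass(frozen=True)
class PlanningTask:
    initial_state: frozenset  # facts true in the initial state
    goal: frozenset           # facts required in any goal state
    actions: tuple            # available (here: ground) actions

def applicable(action, state):
    return action.preconditions <= state

def apply_action(action, state):
    # the standard STRIPS successor: delete, then add
    return (state - action.del_effects) | action.add_effects
```

For example, a single `move-a-b` action with precondition `at-a`, add effect `at-b`, and delete effect `at-a` already defines a one-step solvable task from `{at-a}` to the goal `{at-b}`.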
Automated Amortised Analysis
Steffen Jost presents a novel static program analysis that automatically infers formally guaranteed upper bounds on the use of compositional quantitative resources. The technique is based on manual amortised complexity analysis. Inference is achieved through a type system
annotated with linear constraints. Any solution to the collected constraints yields the coefficients of a formula that expresses an upper bound on the resource consumption of a program in terms of the sizes of its various inputs.
The main result is the formal soundness proof of the proposed analysis for a functional language. The strictly evaluated language features higher-order types, full mutual recursion, nested data types, suspension of evaluation, and aliased data. The presentation focuses on heap space bounds. Extensions allowing the inference of bounds on stack space usage and worst-case execution time
are demonstrated for several realistic program examples. These bounds were inferred by a generic implementation of the technique, which is highly efficient and solves even large examples within seconds.
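A drastically simplified sketch of the amortised idea (our own toy, not Jost's full type system): suppose each constructor allocation costs one heap cell, and the analysis assigns an unknown per-element potential `q`, constrained by every allocation site. Any solution of the collected constraints yields the coefficient of a linear bound `q * |input|`.

```python
from fractions import Fraction

def copy_list(xs, cost):
    # the analysed toy program: copying a list allocates one cell per element
    out = []
    for x in xs:
        cost[0] += 1          # one heap cell per Cons in the copied list
        out.append(x)
    return out

def inferred_coefficient():
    # constraints of the form q >= c, one per allocation site in copy_list;
    # here the single Cons in the loop body contributes q >= 1
    constraints = [Fraction(1)]
    return max(constraints)   # the least q satisfying all constraints
```

Running the program confirms that actual consumption (`cost`) never exceeds the inferred bound `inferred_coefficient() * len(xs)`; the real analysis derives such constraints systematically from the typing rules rather than by inspection.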
Polydispersity and optimal relaxation in the hard sphere fluid
We consider the mass heterogeneity in a gas of polydisperse hard particles as
a key to optimizing a dynamical property: the kinetic relaxation rate. Using
the framework of the Boltzmann equation, we study the long time approach of a
perturbed velocity distribution toward the equilibrium Maxwellian solution. We
work out the cases of discrete as well as continuous distributions of masses,
as found in dilute fluids of mesoscopic particles such as granular matter and
colloids. On the basis of analytical and numerical evidence, we formulate a
dynamical equipartition principle which implies that no continuous mass
dispersion minimizes the relaxation time: the global optimum is instead
characterized by a finite number of species. This optimal mixture is
found to depend on the dimension d of space, ranging from five species for d=1
to a single one for d>=4. The role of the collisional kernel is also discussed,
and extensions to dissipative systems are shown to be possible.
Comment: 20 pages, 8 figures, 3 tables
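The "approach to equilibrium" studied here can be caricatured by a generic two-species energy-exchange toy (our illustration, not the paper's Boltzmann-equation analysis): the temperature gap between the species decays exponentially while total kinetic energy is conserved.

```python
def relax_temperatures(T1, T2, rate, dt, steps):
    # Euler integration of dT1/dt = rate*(T2 - T1), dT2/dt = -rate*(T2 - T1):
    # the gap T2 - T1 shrinks by the factor (1 - 2*rate*dt) per step, while
    # the sum T1 + T2 (total energy at equal number densities) is conserved.
    for _ in range(steps):
        flow = rate * (T2 - T1) * dt   # energy flowing from species 2 to 1
        T1, T2 = T1 + flow, T2 - flow
    return T1, T2
```

In this caricature the relaxation rate is a single constant; the point of the paper is that for a real hard-sphere mixture the analogous rate depends on the mass composition, which can therefore be optimized.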
Constraint Satisfaction Problems over Numeric Domains
We present a survey of complexity results for constraint satisfaction problems (CSPs) over the integers, the rationals, the reals, and the complex numbers. Examples of such problems are feasibility of linear programs, integer linear programming, the max-atoms problem, Hilbert's tenth problem, and many more. Our particular focus is to identify those CSPs that can be solved in polynomial time, and to distinguish them from CSPs that are NP-hard. A very helpful tool for obtaining complexity classifications in this context is the concept of a polymorphism from universal algebra.
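For the first CSP mentioned, feasibility of linear programs over the rationals, a classical decision procedure is Fourier-Motzkin elimination (exponential in the worst case, unlike the polynomial-time methods the survey is concerned with, but easy to sketch with exact rational arithmetic):

```python
from fractions import Fraction

def eliminate(constraints, var):
    # constraints: list of (coeffs: dict var -> Fraction, bound: Fraction),
    # each meaning sum(coeffs[v] * x_v) <= bound.  Eliminating var pairs every
    # upper bound on var (positive coefficient) with every lower bound
    # (negative coefficient).
    pos, neg, rest = [], [], []
    for coeffs, b in constraints:
        a = coeffs.get(var, Fraction(0))
        (pos if a > 0 else neg if a < 0 else rest).append((coeffs, b, a))
    rest = [(c, b) for c, b, _ in rest]
    for cp, bp, ap in pos:
        for cn, bn, an in neg:
            new_c = {}
            for v in set(cp) | set(cn):
                if v == var:
                    continue
                c = cp.get(v, Fraction(0)) / ap + cn.get(v, Fraction(0)) / (-an)
                if c:
                    new_c[v] = c
            rest.append((new_c, bp / ap + bn / (-an)))
    return rest

def feasible(constraints, variables):
    cs = [({v: Fraction(c) for v, c in coeffs.items()}, Fraction(b))
          for coeffs, b in constraints]
    for var in variables:
        cs = eliminate(cs, var)
    # with all variables eliminated, each remaining constraint reads 0 <= b
    return all(b >= 0 for _, b in cs)
```

For instance, {x + y <= 4, x >= 1, y >= 1} is feasible, while {x <= 1, x >= 2} is not; the names and encoding here are our own.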
A Machine learning approach to POS tagging
We have applied inductive learning of statistical decision trees
and relaxation labelling to the Natural Language Processing (NLP)
task of morphosyntactic disambiguation (Part Of Speech Tagging).
The learning process is supervised and obtains a language
model oriented to resolving POS ambiguities. This model consists
of a set of statistical decision trees expressing the distribution of
tags and words in some relevant contexts.
The acquired language models are complete enough to be directly
used as sets of POS disambiguation rules, and include more complex
contextual information than simple collections of n-grams usually
used in statistical taggers.
We have implemented a simple and fast tagger that has been
tested and evaluated on the Wall Street Journal (WSJ) corpus with
remarkable accuracy.
However, better results can be obtained by translating the trees
into rules to feed a flexible relaxation labelling based tagger.
In this direction we describe a tagger which is able to use
information of any kind (n-grams, automatically acquired constraints,
linguistically motivated manually written constraints, etc.), and in
particular to incorporate the machine learned decision trees.
Simultaneously, we address the problem of tagging when only a
small amount of training material is available, which is crucial in any
process of constructing an annotated corpus from scratch. We show that
quite high accuracy can be achieved with our system in this situation.
Multi-Resolution Functional ANOVA for Large-Scale, Many-Input Computer Experiments
The Gaussian process is a standard tool for building emulators for both
deterministic and stochastic computer experiments. However, application of
Gaussian process models is greatly limited in practice, particularly for
large-scale and many-input computer experiments that have become typical. We
propose a multi-resolution functional ANOVA model as a computationally feasible
emulation alternative. More generally, this model can be used for large-scale
and many-input non-linear regression problems. An overlapping group lasso
approach is used for estimation, ensuring computational feasibility in a
large-scale and many-input setting. New results on consistency and inference
for the (potentially overlapping) group lasso in a high-dimensional setting are
developed and applied to the proposed multi-resolution functional ANOVA model.
Importantly, these results allow us to quantify the uncertainty in our
predictions. Numerical examples demonstrate that the proposed model enjoys
marked computational advantages. Its data handling capabilities, in terms of
both sample size and input dimension, meet or exceed those of the best
available emulation tools, while matching or exceeding their emulation accuracy.
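The computational workhorse behind a group lasso estimate is group soft-thresholding, the proximal operator of the penalty for disjoint groups; a minimal sketch (our illustration; overlapping groups, as treated in the paper, require a more involved algorithm):

```python
import math

def group_soft_threshold(w, groups, lam):
    # prox of lam * sum_g ||w_g||_2 for disjoint groups: shrink each group's
    # coefficient vector toward zero by lam in Euclidean norm, and zero out
    # any group whose norm is below lam (driving whole groups to exact zero).
    out = list(w)
    for g in groups:
        norm = math.sqrt(sum(w[i] ** 2 for i in g))
        scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
        for i in g:
            out[i] = w[i] * scale
    return out
```

The whole-group zeroing is what yields the functional-ANOVA sparsity: dropping a group removes an entire basis-function block (for example, all terms involving one input) from the emulator.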