Portfolio-based Planning: State of the Art, Common Practice and Open Challenges
In recent years the field of automated planning has advanced significantly, and several powerful domain-independent planners have been developed. However, none of these systems clearly outperforms all the others in every known benchmark domain. This observation motivated the idea of configuring and exploiting a portfolio of planners to perform better than any individual planner: some recent planning systems based on this idea have achieved strong results in experimental analyses and in the International Planning Competitions. These results suggest that future efforts of the Automated Planning community will converge on designing approaches for combining existing planning algorithms. This paper reviews existing techniques and provides an exhaustive guide to portfolio-based planning. In addition, it outlines open issues of existing approaches and highlights possible future evolutions of these techniques.
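The basic scheme behind many sequential portfolio planners can be sketched as follows; the planner interface (callables taking a problem and a timeout) is a hypothetical simplification, not the API of any system surveyed here.

```python
def run_portfolio(planners, problem, total_budget):
    """Sequential portfolio: give each component planner an equal
    time slice and return the first plan found."""
    time_slice = total_budget / len(planners)
    for planner in planners:
        plan = planner(problem, timeout=time_slice)
        if plan is not None:
            return plan
    return None  # no component planner solved the problem
```

Real portfolios refine this in the two directions the survey covers: how the component set is configured, and how the time budget is allocated among components.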
ASAP: An Automatic Algorithm Selection Approach for Planning
Despite the advances made in the last decade in automated planning, no planner outperforms all the others in every known benchmark domain. This observation motivates the idea of selecting different planning algorithms for different domains. Moreover, planners' performance is affected by the structure of the search space, which depends on the encoding of the considered domain. In many domains, the performance of a planner can be improved by exploiting additional knowledge, for instance in the form of macro-operators or entanglements.
In this paper we propose ASAP, an automatic Algorithm Selection Approach for Planning that: (i) for a given domain, initially learns additional knowledge, in the form of macro-operators and entanglements, which is used to create different encodings of the given planning domain and problems; (ii) explores the two-dimensional space of available algorithms, defined as encoding–planner pairs; and (iii) selects the most promising algorithm for optimising either the runtime or the quality of the solution plans.
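The selection step (ii)–(iii) amounts to a search over a grid of encoding–planner pairs. A minimal sketch, assuming a caller-supplied `evaluate` function (e.g. mean runtime over training problems; not part of the paper's stated interface):

```python
def select_algorithm(encodings, planners, evaluate):
    """Exhaustively explore the two-dimensional space of
    (encoding, planner) pairs and return the pair with the best
    (lowest) score under the given evaluation function."""
    best_pair, best_score = None, float("inf")
    for encoding in encodings:
        for planner in planners:
            score = evaluate(encoding, planner)
            if score < best_score:
                best_pair, best_score = (encoding, planner), score
    return best_pair
```

Optimising for runtime or for plan quality is then just a matter of which metric `evaluate` returns.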
Exploring the Synergy between two Modular Learning Techniques for Automated Planning
In the last decade, the emphasis on improving the operational performance of domain-independent automated planners has been on developing complex techniques that merge a range of different strategies. This quest for operational advantage, driven by the regular International Planning Competitions, has not made it easy to study, understand and predict what effect a given combination of techniques will have on a planner's behaviour in a particular application domain. In this paper, we consider two machine learning techniques for improving planner performance, and exploit a modular approach to their combination in order to facilitate the analysis of the impact of each individual component. We believe this can contribute to the development of more transparent planning engines, designed from modular, interchangeable, and well-founded components. Specifically, we combine two previously unrelated learning techniques, entanglements and relational decision trees, to guide a "vanilla" search algorithm. We report on a large experimental analysis which demonstrates the effectiveness of the approach in terms of performance improvements, resulting in a very competitive planning configuration despite the use of a more modular and transparent architecture. This gives insights into the strengths and weaknesses of the considered approaches, which will help their future exploitation.
Bayesian Robot Programming
We propose a new method to program robots based on Bayesian inference and learning. The capacities of this programming method are demonstrated through a succession of increasingly complex experiments. Starting from the learning of simple reactive behaviors, we present instances of behavior combination, sensor fusion, hierarchical behavior composition, situation recognition and temporal sequencing. This series of experiments comprises the steps in the incremental development of a complex robot program. The advantages and drawbacks of this approach are discussed along with these different experiments and summed up in the conclusion. These robotics programs may be seen as an illustration of probabilistic programming, applicable whenever one must deal with problems based on uncertain or incomplete knowledge. The scope of possible applications is obviously much broader than robotics.
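The sensor-fusion step mentioned above rests on a standard Bayesian update; a minimal sketch over a discrete state space, assuming the sensors are conditionally independent given the state (the dictionaries and state names are illustrative, not from the paper):

```python
def fuse_sensors(prior, likelihood_a, likelihood_b):
    """Fuse two sensor readings over a discrete state space:
    multiply the prior by each sensor's likelihood (assumed
    conditionally independent given the state) and renormalize
    to obtain the posterior over states."""
    unnorm = {s: prior[s] * likelihood_a[s] * likelihood_b[s] for s in prior}
    z = sum(unnorm.values())
    return {s: v / z for s, v in unnorm.items()}
```

Each additional sensor multiplies in one more likelihood factor, which is what makes this style of fusion compose incrementally.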
Exploiting Block Deordering for Improving Planners Efficiency
Capturing and exploiting structural knowledge of planning problems has been shown to be a successful strategy for making the planning process more efficient. Plans can be decomposed into their constituent coherent subplans, called blocks, that encapsulate some effects and preconditions, reducing interference and thus allowing more deordering of plans. Depending on their nature, blocks can be straightforwardly transformed into useful macro-operators ("macros" for short). Macros are a well-known and widely studied kind of structural knowledge because they can be easily encoded in the domain model and thus exploited by standard planning engines.
In this paper, we introduce a method, called BLOMA, that learns domain-specific macros from plans decomposed into "macro-blocks", which are extensions of blocks that utilise the structural knowledge they capture. In contrast to existing macro-learning techniques, macro-blocks are often able to capture high-level activities that form a basis for useful longer macros (i.e. those consisting of more original operators). Our method is evaluated on the IPC benchmarks with state-of-the-art planning engines, and shows considerable improvement in many cases.
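Turning a block into a macro amounts to composing its operators. A minimal sketch for two STRIPS operators using the standard composition rule (the dict-of-sets representation is an illustrative simplification, not BLOMA's internal format):

```python
def build_macro(op1, op2):
    """Compose two STRIPS operators (dicts with 'pre', 'add', 'del'
    sets) into one macro-operator: op2's preconditions not supplied
    by op1 become macro preconditions, and later effects override
    earlier ones."""
    return {
        "pre": op1["pre"] | (op2["pre"] - op1["add"]),
        "add": (op1["add"] - op2["del"]) | op2["add"],
        "del": (op1["del"] - op2["add"]) | op2["del"],
    }
```

Folding this over the operators of a macro-block yields the longer macros the paper targets, encoded like any ordinary operator in the domain model.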
A Planning-based Approach for Music Composition
Automatic music composition is a fascinating field within computational creativity. While different Artificial Intelligence techniques have been used for tackling this task, Planning – an approach for solving complex combinatorial problems which can count on a large number of high-performance systems and an expressive language for describing problems – has never been exploited. In this paper, we propose two different techniques that rely on automated planning for generating musical structures. The structures are then filled from the bottom up with "raw" musical material and turned into melodies. Music experts evaluated the creative output of the system, acknowledging an overall human-enjoyable trait of the melodies produced, which showed a solid hierarchical structure and a strong musical directionality. The techniques proposed not only have high relevance for the musical domain, but also suggest unexplored ways of using planning for dealing with non-deterministic creative domains.
Multi-objective optimisation of machine tool error mapping using automated planning
Error mapping of machine tools is a multi-measurement task that is planned based on expert knowledge. There are no intelligent tools aiding the production of optimal measurement plans. In previous work, a method of intelligently constructing measurement plans demonstrated that it is feasible to optimise the plans either to reduce machine tool downtime or the estimated uncertainty of measurement due to the plan schedule. However, production scheduling and a continuously changing environment can impose conflicting constraints on downtime and the uncertainty of measurement. In this paper, the use of the produced measurement model to minimise machine tool downtime, the uncertainty of measurement, and the arithmetic mean of both is investigated and discussed through the use of twelve different error mapping instances. The multi-objective search plans on average have a 3% reduction in the time metric when compared to the downtime of the uncertainty-optimised plan, and a 23% improvement in the estimated uncertainty of measurement metric when compared to the uncertainty of the temporally optimised plan. Further experiments on a High Performance Computing (HPC) architecture demonstrated that there is on average a 3% improvement in optimality when compared with the experiments performed on the PC architecture. This demonstrates that even though a 4% improvement is beneficial, in most applications a standard PC architecture will result in a valid error mapping plan.
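The "arithmetic mean of both" objective can be sketched as a simple scalarization. The reference values used for normalization below (e.g. the respective single-objective optima) are an assumption; the exact normalization is not stated in the abstract.

```python
def scalarize(downtime, uncertainty, downtime_ref, uncertainty_ref):
    """Arithmetic-mean scalarization of the two objectives, each
    normalized by a reference value so the metrics are on a
    comparable scale before averaging."""
    return 0.5 * (downtime / downtime_ref + uncertainty / uncertainty_ref)
```

Minimising this single score trades the two objectives off evenly; weighting the two terms differently would bias the search toward downtime or toward measurement uncertainty.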