ASAP: An Automatic Algorithm Selection Approach for Planning
Despite the advances made in the last decade in automated planning, no planner outperforms all the others in every known benchmark domain. This observation motivates the idea of selecting different planning algorithms for different domains. Moreover, planners' performances are affected by the structure of the search space, which depends on the encoding of the considered domain. In many domains, the performance of a planner can be improved by exploiting additional knowledge, for instance in the form of macro-operators or entanglements.
In this paper we propose ASAP, an automatic Algorithm Selection Approach for Planning that: (i) for a given domain, initially learns additional knowledge, in the form of macro-operators and entanglements, which is used for creating different encodings of the given planning domain and problems; (ii) explores the two-dimensional space of available algorithms, defined as encoding-planner pairs; and (iii) selects the most promising algorithm for optimising either the runtimes or the quality of the solution plans.
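The selection step (iii) can be illustrated as a lookup over measured performance data for each encoding-planner pair. The sketch below is a hypothetical illustration, not the ASAP system itself: the encoding names, planner names, runtimes, and timeout penalty are all invented for the example.

```python
# Hypothetical performance data: runtimes in seconds for each
# (encoding, planner) pair over a set of training problems.
# None means the pair failed to solve that problem.
runtimes = {
    ("original", "plannerA"): [12.0, 30.0, None],
    ("original", "plannerB"): [20.0, 25.0, 90.0],
    ("macros", "plannerA"): [8.0, 15.0, 60.0],
    ("macros", "plannerB"): [22.0, 40.0, None],
}

TIMEOUT = 300.0  # penalty assigned to unsolved problems (assumed)

def score(times):
    """Average runtime, counting each failure as the timeout (lower is better)."""
    return sum(t if t is not None else TIMEOUT for t in times) / len(times)

def select_algorithm(runtimes):
    """Pick the (encoding, planner) pair with the best aggregate score."""
    return min(runtimes, key=lambda pair: score(runtimes[pair]))

print(select_algorithm(runtimes))  # -> ('macros', 'plannerA') on this data
```

Optimising plan quality instead of runtime would only change the `score` function, not the selection loop.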
Using Plan Decomposition for Continuing Plan Optimisation and Macro Generation
This thesis addresses three problems in the field of classical AI planning: decomposing
a plan into meaningful subplans, continuing plan quality optimisation, and
macro generation for efficient planning. The importance and difficulty of each of
these problems is outlined below.
(1) Decomposing a plan into meaningful subplans can facilitate a number of post-plan-generation tasks, including plan quality optimisation and macro generation, the two key concerns of this thesis. However, conventional plan decomposition techniques are often unable to decompose plans because they consider dependencies among steps, rather than subplans.
(2) Finding high quality plans for large planning problems is hard. Planners that
guarantee optimal, or bounded-suboptimal, plan quality often cannot solve them. In
one experiment with the Genome Edit Distance domain optimal planners solved only
11.5% of problems. Anytime planners promise a way to successively produce better
plans over time. However, current anytime planners tend to reach a limit where they
stop finding any further improvement, and the plans produced are still very far from
the best possible. In the same experiment, the LAMA anytime planner solved all
problems but found plans whose average quality is 1.57 times worse than the best
known.
(3) Finding solutions quickly or even finding any solution for large problems
within some resource constraint is also difficult. The best-performing planner in
the 2014 international planning competition still failed to solve 29.3% of problems.
Re-engineering a domain model by capturing and exploiting structural knowledge
in the form of macros has been found very useful in speeding up planners. However,
existing planner-independent macro generation techniques often fail to capture
some promising macro candidates because the constituent actions are not found in
sequence in the totally ordered training plans.
This thesis contributes to plan decomposition by developing a new plan deordering
technique, named block deordering, that allows two subplans to be unordered
even when their constituent steps cannot. Based on the block-deordered
plan, this thesis further contributes to plan optimisation and macro generation, and
their implementations in two systems, named BDPO2 and BloMa. Key to BDPO2
is a decomposition into subproblems of improving parts of the current best plan,
rather than the plan as a whole. BDPO2 can be seen as an application of the large
neighbourhood search strategy to planning. We use several windowing strategies to
extract subplans from the block deordering of the current plan, and on-line learning
for applying the most promising subplanners to the most promising subplans.
We demonstrate empirically that even starting with the best plans found by other
means, BDPO2 is still able to continue improving plan quality, and often produces better plans than other anytime planners when all are given enough runtime. BloMa
uses an automatic planner-independent technique to extract and filter "self-contained" subplans as macros from the block-deordered training plans. These macros represent important longer activities and are useful for improving planners' coverage and efficiency compared to traditional macro generation approaches.
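The windowing idea behind BDPO2 (repeatedly extracting a subplan, trying to optimise it in isolation, and splicing a cheaper replacement back into the plan) can be sketched as a simple large-neighbourhood-search pass. Everything here is a hypothetical illustration: the action strings, the toy window optimiser, and the cost function are invented, and real subplanners would replace `optimise_window`.

```python
def improve(plan, cost, optimise_window, size=3):
    """One large-neighbourhood-search pass over a plan (a list of actions):
    slide a window over the plan and splice in any cheaper replacement
    subplan that the window optimiser finds."""
    best = list(plan)
    i = 0
    while i + size <= len(best):
        window = best[i:i + size]
        candidate = optimise_window(window)
        if candidate is not None and cost(candidate) < cost(window):
            best = best[:i] + candidate + best[i + size:]  # accept improvement
        else:
            i += 1  # no improvement here; move the window forward
    return best

def toy_optimiser(window):
    """Hypothetical window optimiser: drop a pair of mutually
    cancelling adjacent actions (e.g. 'pick x' then 'drop x')."""
    inverses = {"pick": "drop", "drop": "pick"}
    for j in range(len(window) - 1):
        a, b = window[j].split(), window[j + 1].split()
        if inverses.get(a[0]) == b[0] and a[1:] == b[1:]:
            return window[:j] + window[j + 2:]
    return None

plan = ["pick x", "drop x", "move a b", "move b c"]
print(improve(plan, len, toy_optimiser))  # -> ['move a b', 'move b c']
```

In the thesis the windows come from the block deordering of the current plan rather than from a fixed-size slide, and on-line learning chooses which subplanner to apply to which window.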
The Political Economy of Transnational Drug Trafficking: Criminal Rackets and State-Making in Modern Mexico
Far from embodying distinct social actors, the line separating the "police" from the "criminal" is historically fluid and at times very thin. Generated by the capitalisation of economic relations, waves of bandits and criminals have often been instrumental in advancing the interests of their enabling economic and political elites by forming the security apparatuses (reliant on preying, delinquency and extortion) supporting the elites' hegemony. Mexicans, at multiple stages in the country's national history, have become well acquainted with the blend of legality and illegality characterising the country's security sector. Building from historical sociology, comparative studies and critical approaches to policing, this thesis argues that criminal activities (in particular contraband and drug trafficking) were important political economies supporting the development of the state security apparatus under the PRI regime in Mexico (1940s to 1990s). The thesis documents the paradoxical but regular input of criminal markets into the political economies of pacification, policing and state repression, taking place at crucial junctures in the history of the single-party state, and assisting the production of its particular socioeconomic order. This "instrumentalisation" of transnational criminal markets connects with and replicates little-studied Cold War security dynamics whereby the reach of the U.S. security apparatus (global policing, paramilitarism, counterinsurgency, dirty wars, etc.) was expanded by tapping into criminal activity in host nations. Building from the Mexican experience, the thesis argues that state rackets in (transnational) crime generated political economies that, embedded in local processes, played a notable part in the making of capitalist modernity, liberal state-making and empire.
The thesis documents in particular the ancillary role of drug and contraband markets in the operation of the PRI's central security bodies, the Dirección Federal de Seguridad and the Policía Judicial Federal. Drawing from multi-archival research and unprecedented testimonies by former law enforcement agents, the thesis provides a new framework to grasp the important role of criminal-police entanglements in the making of Mexican modernity.
Conacyt, Cambridge Trust, CLAS, St. Catharine's College, Cambridge Political Economy Society
A Practitioner's Guide to Applied Sustainability: Initial Explorations
For decades, coal has been king in central Appalachia. The people of this region have devoted their lives to providing energy to the nation, fueling the first and second industrial revolutions and providing nearly 40 percent of the energy used in the United States today. Known as one of the unhealthiest communities in the nation, the city of Williamson, located in southern West Virginia, is working to encourage healthy living by diversifying its energy portfolio, providing new economic opportunities for businesses, creating a strong workforce with competitive skill sets, growing local food systems to encourage healthy living, and increasing the quality of life for this community. Operating under the banner of "Sustainable Williamson" and utilizing the emerging concept of applied sustainability, this community is developing a "praxis of theory" approach with a specific focus upon the socio-economic effects of ideology. This thesis explores the theoretical intersections between ideology and new materialism in order to provide existing and emerging practitioners of applied sustainability with an initial framework for developing successful projects in central Appalachia and beyond.
Design Ltd.: Renovated Myths for the Development of Socially Embedded Technologies
This paper argues that traditional and mainstream mythologies, which have
been continually told within the Information Technology domain among designers
and advocates of conceptual modelling since the 1960s in different fields of
computing sciences, could now be renovated or substituted in the mould of more
recent discourses about performativity, complexity and end-user creativity that
have been constructed across different fields in the meantime. In the paper,
it is submitted that these discourses could motivate IT professionals in
undertaking alternative approaches toward the co-construction of
socio-technical systems, i.e., social settings where humans cooperate to reach
common goals by means of mediating computational tools. The authors advocate
further discussion about and consolidation of some concepts in design research,
design practice and more generally Information Technology (IT) development,
like those of: task-artifact entanglement, universatility (sic) of End-User
Development (EUD) environments, bricolant/bricoleur end-user, logic of
bricolage, maieuta-designers (sic), and laissez-faire method to socio-technical
construction. Points backing these and similar concepts are made to promote
further discussion on the need to rethink the main assumptions underlying IT
design and development some fifty years after the coming of age of software and
modern IT in the organizational domain.
Comment: This is the peer-unreviewed version of a manuscript that is to appear in D.
Randall, K. Schmidt, & V. Wulf (Eds.), Designing Socially Embedded
Technologies: A European Challenge (2013, forthcoming) with the title
"Building Socially Embedded Technologies: Implications on Design" within an
EUSSET editorial initiative (www.eusset.eu/
Verification and Validation of Planning Domain Models
The verification and validation of planning domain models is one of the biggest challenges to deploying planning-based automated systems in the real world. The state-of-the-art verification methods for planning domain models are vulnerable to false positives, i.e. counterexamples that are unreachable by sound planners using the domain under verification during planning tasks. False positives mislead designers into believing correct models are faulty. Consequently, designers needlessly debug correct models to remove these false positives. This process might unnecessarily constrain planning domain models, which can eradicate valid and sometimes required behaviours. Moreover, catching and debugging errors without knowing they are false positives can give verification engineers a false sense of achievement, which might cause them to overlook valid errors. To address this shortfall, the first part of this thesis introduces goal-constrained planning domain model verification, a novel approach that constrains the verification of planning domain models with planning goals to reduce the number of unreachable planning counterexamples. This thesis formally proves the correctness of this method and demonstrates the application of this approach using the model checker Spin and the planner MIPS-XXL. Furthermore, it reports the empirical experiments that validate the feasibility and investigate the performance of the goal-constrained verification approach. The experiments show that not only is the goal-constrained verification method robust against false-positive errors, but it also outperforms under-constrained verification tasks in terms of time and memory in some cases. The second part of this thesis investigates the problem of validating the functional equivalence of planning domain models.
The need for techniques to validate the functional equivalence of planning domain models has been highlighted in previous research and has applications in model learning, development and extension. Despite the need for and importance of proving the functional equivalence of planning domain models, this problem has attracted limited research interest. This thesis builds on and extends previous research by proposing a novel approach to validating the functional equivalence of planning domain models. First, this approach employs a planner to remove redundant operators from the given domain models; then, it uses a Satisfiability Modulo Theories (SMT) solver to check whether a predicate mapping exists between the two domain models that makes them functionally equivalent. The soundness and completeness of this functional equivalence validation method are formally proven in this thesis. Furthermore, this thesis introduces D-VAL, the first automatic validation tool for planning domain models. D-VAL uses the FF planner and the Z3 SMT solver to prove the functional equivalence of planning domain models. Moreover, this thesis demonstrates the feasibility and evaluates the performance of D-VAL against thirteen planning domain models from the International Planning Competition (IPC). Empirical evaluation shows that D-VAL validates the functional equivalence of the most challenging task in less than 43 seconds. These experiments and their results provide a benchmark to evaluate the feasibility and performance of future related work.
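The predicate-mapping question at the heart of the equivalence check can be illustrated with a deliberately tiny brute-force search. This is not D-VAL's method: the thesis solves the problem symbolically with the Z3 SMT solver, whereas the sketch below simply tries every predicate bijection between two toy "domain models". The operator names, predicate symbols, and set-based representation are all invented for the example.

```python
from itertools import permutations

# Toy domain models: operator name -> (precondition predicates, effect
# predicates), ignoring arguments and negative effects. Hypothetical data.
domain_a = {
    "load": ({"at", "free"}, {"in"}),
    "unload": ({"in"}, {"at", "free"}),
}
domain_b = {
    "load": ({"located", "empty"}, {"inside"}),
    "unload": ({"inside"}, {"located", "empty"}),
}

def predicates(domain):
    """All predicate symbols mentioned in a domain model, sorted."""
    syms = set()
    for pre, eff in domain.values():
        syms |= pre | eff
    return sorted(syms)

def find_mapping(a, b):
    """Brute-force search for a predicate bijection that makes model a
    identical to model b (an SMT solver does this symbolically instead)."""
    pa, pb = predicates(a), predicates(b)
    if len(pa) != len(pb) or a.keys() != b.keys():
        return None
    for perm in permutations(pb):
        m = dict(zip(pa, perm))
        if all(({m[p] for p in pre}, {m[p] for p in eff}) == b[op]
               for op, (pre, eff) in a.items()):
            return m
    return None

print(find_mapping(domain_a, domain_b))
```

The factorial blow-up of this brute-force search on realistic domains is precisely why encoding the mapping search as an SMT problem, as described above, is attractive.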
Status reports of the fisheries and aquatic resources of Western Australia 2018/19. State of the fisheries
The Status Reports of the Fisheries and Aquatic Resources of Western Australia (SRFAR) provide an annual update on the state of the fish stocks and other aquatic resources of Western Australia (WA). These reports outline the most recent assessments of the cumulative risk status for each of the aquatic resources (assets) within WA's six Bioregions using an Ecosystem Based Fisheries Management (EBFM) approach.