Refining complexity analyses in planning by exploiting the exponential time hypothesis
The use of computational complexity in planning, and in AI in general, has always been a disputed topic. A major problem with ordinary worst-case analyses is that they do not provide any quantitative information: they tell us little about the running time of concrete algorithms, and little about the running time of optimal algorithms. We address such problems by presenting results based on the exponential time hypothesis (ETH), a widely accepted hypothesis concerning the time complexity of 3-SAT. Using this approach, we provide, for instance, almost matching upper and lower bounds on the time complexity of propositional planning. Funding Agencies: National Graduate School in Computer Science (CUGS), Sweden; Swedish Research Council (VR) [621-2014-4086]
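For reference, the exponential time hypothesis that this abstract builds on is standardly stated as follows (this is the usual formulation due to Impagliazzo and Paturi, not a quotation from the paper):

```latex
% Exponential Time Hypothesis (ETH): there is a constant
% \delta > 0 such that 3-SAT on n variables cannot be
% decided in time O(2^{\delta n}).
\exists\, \delta > 0 :\quad \text{3-SAT} \notin \mathrm{DTIME}\!\bigl(2^{\delta n}\bigr)
```

Lower bounds "under ETH" are conditional: they hold unless this hypothesis fails.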
Algorithms and Conditional Lower Bounds for Planning Problems
We consider planning problems for graphs, Markov decision processes (MDPs),
and games on graphs. While graphs represent the most basic planning model, MDPs
represent interaction with nature and games on graphs represent interaction
with an adversarial environment. We consider two planning problems where there
are k different target sets, and the problems are as follows: (a) the coverage
problem asks whether there is a plan for each individual target set, and (b)
the sequential target reachability problem asks whether the targets can be
reached in sequence. For the coverage problem, we present a linear-time
algorithm for graphs and quadratic conditional lower bound for MDPs and games
on graphs. For the sequential target problem, we present a linear-time
algorithm for graphs, a sub-quadratic algorithm for MDPs, and a quadratic
conditional lower bound for games on graphs. Our results with conditional lower
bounds establish (i) model-separation results showing that for the coverage
problem MDPs and games on graphs are harder than graphs and for the sequential
reachability problem games on graphs are harder than MDPs and graphs; (ii)
objective-separation results showing that for MDPs the coverage problem is
harder than the sequential target problem. Comment: Accepted at ICAPS'1
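For the graph case, the coverage problem reduces to reachability: a plan exists for a target set iff some vertex of that set is reachable from the start, so one breadth-first search answers all k queries in linear time. A minimal sketch (the adjacency-list representation and all names here are my own, for illustration only):

```python
from collections import deque

def covers_all_targets(adj, start, target_sets):
    """Coverage check on a graph: from `start`, is some vertex of
    every target set reachable?  `adj` maps a vertex to its list of
    successors.  A single BFS visits each edge once, so the whole
    check runs in time linear in the graph plus the target sets."""
    seen = {start}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        for w in adj.get(v, []):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    # a plan exists for target set T iff BFS reached some vertex of T
    return all(any(t in seen for t in T) for T in target_sets)

# tiny example: 0 -> 1 -> 2; targets {1} and {2} are both coverable
adj = {0: [1], 1: [2]}
print(covers_all_targets(adj, 0, [{1}, {2}]))  # True
print(covers_all_targets(adj, 0, [{1}, {3}]))  # False: 3 is unreachable
```

The conditional lower bounds in the paper say that no such linear-time shortcut is expected for MDPs or games on graphs.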
Accelerating Science: A Computing Research Agenda
The emergence of "big data" offers unprecedented opportunities for not only
accelerating scientific advances but also enabling new modes of discovery.
Scientific progress in many disciplines is increasingly enabled by our ability
to examine natural phenomena through the computational lens, i.e., using
algorithmic or information processing abstractions of the underlying processes;
and our ability to acquire, share, integrate and analyze disparate types of
data. However, there is a huge gap between our ability to acquire, store, and
process data and our ability to make effective use of the data to advance
discovery. Despite successful automation of routine aspects of data management
and analytics, most elements of the scientific process currently require
considerable human expertise and effort. Accelerating science to keep pace with
the rate of data acquisition and data processing calls for the development of
algorithmic or information processing abstractions, coupled with formal methods
and tools for modeling and simulation of natural processes as well as major
innovations in cognitive tools for scientists, i.e., computational tools that
leverage and extend the reach of human intellect, and partner with humans on a
broad range of tasks in scientific discovery (e.g., identifying, prioritizing,
and formulating questions; designing, prioritizing, and executing experiments
designed to answer a chosen question; drawing inferences and evaluating the
results; and formulating new questions, in a closed-loop fashion). This calls
for a concerted research agenda aimed at: development, analysis, integration,
sharing, and simulation of algorithmic or information processing abstractions
of natural processes, coupled with formal methods and tools for their analyses
and simulation; Innovations in cognitive tools that augment and extend human
intellect and partner with humans in all aspects of science. Comment: Computing Community Consortium (CCC) white paper, 17 pages
The Hypotheses Testing Method for Evaluation of Startup Projects
This paper suggests a new perspective on evaluating innovation projects and understanding the nature of startup risks. The author considers five principal hypotheses that underlie every innovative project, each comprising a set of assumptions, as a way to manage startup risks proactively. The suggested approach sheds light on a project's uncertainties and risks and on its embedded investment and managerial options, and enables a more comprehensive and accurate evaluation of innovation. The Hypotheses Testing Method makes it possible to estimate the risks and attractiveness of a startup project in a clear and fast manner. It replaces less transparent traditional techniques such as NPV and DCF, avoiding heavy cash flow modelling.
Formally Verified Compositional Algorithms for Factored Transition Systems
Artificial Intelligence (AI) planning and model checking are two
disciplines that have found wide practical application.
It is often the case that a problem in those two fields concerns
a transition system whose behaviour can be encoded in a digraph
that models the system's state space.
However, due to the very large size of state spaces of realistic
systems, they are compactly represented as propositionally
factored transition systems.
These representations have the advantage of being exponentially
smaller than the state space of the represented system.
Many problems in AI~planning and model checking involve questions
about state spaces, which correspond to graph theoretic questions
on digraphs modelling the state spaces.
However, existing techniques to answer those graph theoretic
questions effectively require, in the worst case, constructing
the digraph that models the state space, by expanding the
propositionally factored representation of the system.
This is not practical, if not impossible, in many cases because
of the state space size compared to the factored representation.
One common approach that is used to avoid constructing the state
space is the compositional approach, where only smaller
abstractions of the system at hand are processed and the given
problem (e.g. reachability) is solved for them.
Then, a solution for the problem on the concrete system is
derived from the solutions of the problem on the abstract
systems.
The motivation of this approach is that, in the worst case, one
need only construct the state spaces of the abstractions which
can be exponentially smaller than the state space of the concrete
system.
We study the application of the compositional approach to two
fundamental problems on transition systems: upper-bounding the
topological properties (e.g. the largest distance between any two
states, i.e. the diameter) of the state space, and computing reachability between states.
We provide new compositional algorithms to solve both problems by
exploiting different structures of the given system.
In addition to the use of an existing abstraction (usually
referred to as projection) based on removing state space
variables, we develop two new abstractions for use within our
compositional algorithms.
One of the new abstractions is also based on state variables,
while the other is based on assignments to state variables.
We theoretically and experimentally show that our new
compositional algorithms improve the state-of-the-art in solving
both problems, upper-bounding state space topological parameters
and reachability.
We designed the algorithms as well as formally verified them with
the aid of an interactive theorem prover.
This is the first application that we are aware of, for such a
theorem prover based methodology to the design of new algorithms
in either AI~planning or model checking.
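The projection abstraction mentioned above can be illustrated concretely: restricting a propositionally factored system to a subset of its state variables yields an abstract system whose behaviours over-approximate the concrete one, with an exponentially smaller state space. A minimal sketch (the action representation and all names here are my own assumptions, not the thesis's formalisation):

```python
def project(actions, kept_vars):
    """Project a factored transition system onto `kept_vars`.
    Each action is a pair (precondition, effect), both dicts
    mapping state variables to values.  Projection discards every
    condition and effect on removed variables; actions whose
    projected effect is empty become no-ops and are dropped.
    The result abstracts the concrete system: every concrete
    execution maps to an execution of the projection."""
    projected = []
    for pre, eff in actions:
        p = {v: val for v, val in pre.items() if v in kept_vars}
        e = {v: val for v, val in eff.items() if v in kept_vars}
        if e:
            projected.append((p, e))
    return projected

# example: variables x and y; projecting onto {x} keeps only the
# first action, since the second only affects the removed variable y
actions = [({"x": 0}, {"x": 1}), ({"x": 1, "y": 0}, {"y": 1})]
print(project(actions, {"x"}))  # [({'x': 0}, {'x': 1})]
```

A compositional algorithm then solves the problem (e.g. reachability, or bounding the diameter) on several such projections and combines the answers, never building the concrete state space.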
CBR and MBR techniques: review for an application in the emergencies domain
The purpose of this document is to provide an in-depth analysis of current reasoning engine practice and the integration strategies of Case Based Reasoning and Model Based Reasoning that will be used in the design and development of the RIMSAT system.
RIMSAT (Remote Intelligent Management Support and Training) is a European Commission funded project designed to:
a. Provide an innovative, 'intelligent', knowledge based solution aimed at improving the quality of critical decisions
b. Enhance the competencies and responsiveness of individuals and organisations involved in highly complex, safety critical incidents - irrespective of their location.
In other words, RIMSAT aims to design and implement a decision support system that applies Case Based Reasoning as well as Model Based Reasoning technology to the management of emergency situations.
This document is part of a deliverable for the RIMSAT project, and although it has been written in close contact with the requirements of the project, it provides an overview wide enough to serve as a state of the art in integration strategies between CBR and MBR technologies.