The FastMap Algorithm for Shortest Path Computations
We present a new preprocessing algorithm for embedding the nodes of a given
edge-weighted undirected graph into a Euclidean space. The Euclidean distance
between any two nodes in this space approximates the length of the shortest
path between them in the given graph. Later, at runtime, a shortest path
between any two nodes can be computed with A* search using the Euclidean
distances as the heuristic. Our preprocessing algorithm, called FastMap, is
inspired by the data mining algorithm of the same name and runs in near-linear
time. Hence, FastMap is orders of magnitude faster than competing approaches
that produce a Euclidean embedding using Semidefinite Programming. FastMap also
produces admissible and consistent heuristics and therefore guarantees the
generation of shortest paths. Moreover, FastMap applies to general undirected
graphs for which many traditional heuristics, such as the Manhattan Distance
heuristic, are not well defined. Empirically, we demonstrate that A* search
using the FastMap heuristic is competitive with A* search using other
state-of-the-art heuristics, such as the Differential heuristic.
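The pipeline the abstract describes (preprocess an embedding, then run A* with the embedding distance as the heuristic) can be sketched for a single coordinate. This is an illustrative reduction, not the paper's algorithm: real FastMap extracts several coordinates from residual distance functions, and the pivot selection below is the common double-sweep approximation. All function names are ours, not the paper's.

```python
import heapq

def dijkstra(graph, src):
    """Exact shortest-path distances from src; graph: {u: {v: weight, ...}}."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale queue entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def fastmap_coordinate(graph):
    """One FastMap-style coordinate: pick a far-apart pivot pair (a, b) by a
    double sweep, then give every node v the coordinate (d(a,v) - d(b,v)) / 2.
    |p_u - p_v| is then an admissible, consistent lower bound on d(u, v)."""
    a = next(iter(graph))
    da = dijkstra(graph, a)
    a = max(da, key=da.get)           # sweep 1: farthest node from a seed
    da = dijkstra(graph, a)
    b = max(da, key=da.get)           # sweep 2: farthest node from pivot a
    db = dijkstra(graph, b)
    return {v: (da[v] - db[v]) / 2.0 for v in graph}

def astar(graph, coord, start, goal):
    """A* with h(v) = |coord[v] - coord[goal]|; returns the shortest path length."""
    g = {start: 0.0}
    pq = [(abs(coord[start] - coord[goal]), start)]
    while pq:
        _, u = heapq.heappop(pq)
        if u == goal:
            return g[u]
        for v, w in graph[u].items():
            ng = g[u] + w
            if ng < g.get(v, float("inf")):
                g[v] = ng
                heapq.heappush(pq, (ng + abs(coord[v] - coord[goal]), v))
    return float("inf")
```

The single-coordinate heuristic is admissible because |(d(a,u) - d(b,u)) - (d(a,v) - d(b,v))| / 2 is bounded by d(u, v) via the triangle inequality, which is the same argument that underlies differential heuristics.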
Hierarchy construction schemes within the Scale set framework
Segmentation algorithms based on an energy minimisation framework often
depend on a scale parameter which balances a fit to the data and a regularising
term. Irregular pyramids are defined as a stack of successively reduced graphs.
Within this framework, the scale is often defined implicitly as the height in
the pyramid. However, each level of an irregular pyramid cannot usually be
readily associated with the global optimum of an energy or a global criterion
on the base-level graph. This drawback is addressed by the scale-set framework
designed by Guigues. The methods designed by this author allow one to build a
hierarchy and to design cuts within this hierarchy which globally minimise an
energy. This paper studies the influence of the construction scheme of the
initial hierarchy on the resulting optimal cuts. We propose one sequential and
one parallel method, each with two variations. Our sequential methods provide
partitions near the global optima, while our parallel methods require less
execution time than the sequential method of Guigues, even on sequential
machines.
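The core of a Guigues-style optimal cut can be illustrated with a small bottom-up dynamic program: for a scale parameter λ, each region of the hierarchy costs fit + λ·complexity, and the best cut of a subtree is either the root region itself or the union of the best cuts of its children. The tree encoding and field names below are assumptions for illustration, not taken from the paper.

```python
def optimal_cut(node, lam):
    """Return (energy, regions) for the minimum-energy cut of the subtree
    rooted at `node`, under E(R) = fit(R) + lam * complexity(R).
    `node` is a dict: {"fit": float, "complexity": float, "children": [...]}."""
    own = node["fit"] + lam * node["complexity"]
    if not node["children"]:
        return own, [node]
    child_energy, child_regions = 0.0, []
    for child in node["children"]:
        e, regions = optimal_cut(child, lam)
        child_energy += e
        child_regions += regions
    if own <= child_energy:
        return own, [node]          # merging into one region is cheaper
    return child_energy, child_regions  # keep the children's best cuts
```

Raising λ penalises complexity more, so the optimal cut moves up the hierarchy toward coarser partitions, which is the implicit scale behaviour the abstract refers to.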
Chunk hierarchies and retrieval structures: Comments on Saariluoma and Laine
The empirical results of Saariluoma and Laine (in press) are discussed and their computer simulations are compared with CHREST, a computational model of perception, memory and learning in chess. Mathematical functions such as power and logarithmic functions fit both Saariluoma and Laine's (in press) correlation heuristic and CHREST very well. However, these functions fit human data well only with game positions, not with random positions. As CHREST, which learns using spatial proximity, accounts for the human data as well as Saariluoma and Laine's (in press) correlation heuristic does, their conclusion that frequency-based heuristics match the data better than proximity-based heuristics is questioned. The idea of a flat chunk organisation and its relation to retrieval structures is discussed. In conclusion, emphasis is given to the need for detailed empirical data, including information about chunk structure and types of errors, for discriminating between the various learning algorithms.
Combining factual and heuristic knowledge in knowledge acquisition
A knowledge acquisition technique that combines heuristic and factual knowledge, represented as two hierarchies, is described. These ideas were applied to the construction of a knowledge acquisition interface to the Expert System Analyst (OPERA). The goal of OPERA is to improve the operations support of the computer network in the space shuttle launch processing system. The knowledge acquisition bottleneck lies in gathering knowledge from human experts and transferring it to OPERA. OPERA's knowledge acquisition problem is approached as a classification problem-solving task, combined with the use of factual knowledge about the domain. The interface was implemented on a Symbolics workstation, making heavy use of windows, pull-down menus, and other user-friendly devices.
Practical Reasoning for Very Expressive Description Logics
Description Logics (DLs) are a family of knowledge representation formalisms
mainly characterised by constructors to build complex concepts and roles from
atomic ones. Expressive role constructors are important in many applications,
but can be computationally problematical. We present an algorithm that decides
satisfiability of the DL ALC extended with transitive and inverse roles and
functional restrictions with respect to general concept inclusion axioms and
role hierarchies; early experiments indicate that this algorithm is well-suited
for implementation. Additionally, we show that ALC extended with just
transitive and inverse roles is still in PSPACE. We investigate the limits of
decidability for this family of DLs, showing that relaxing the constraints
placed on the kinds of roles used in number restrictions leads to the
undecidability of all inference problems. Finally, we describe a number of
optimisation techniques that are crucial in obtaining implementations of the
decision procedures, which, despite the worst-case complexity of the problem,
exhibit good performance with real-life problems.
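For flavour, a toy tableau decision procedure for concept satisfiability in plain ALC can be written in a few lines. This sketch is our own and deliberately omits everything the paper actually studies: no TBox, no transitive or inverse roles, no functional restrictions, and no optimisations. Concepts are assumed to be in negation normal form.

```python
# Concepts in negation normal form, as nested tuples (an assumed encoding):
#   ("atom", "A")       atomic concept A
#   ("neg", "A")        negated atomic concept
#   ("and", C, D), ("or", C, D)
#   ("some", "R", C)    existential restriction  (exists R.C)
#   ("all", "R", C)     value restriction        (forall R.C)

def satisfiable(concept):
    """Toy tableau satisfiability test for plain ALC, no TBox."""
    return _expand(frozenset([concept]))

def _expand(label):
    # Clash check: A and not-A in the same node label.
    for c in label:
        if c[0] == "atom" and ("neg", c[1]) in label:
            return False
    # Deterministic and-rule: add both conjuncts.
    for c in label:
        if c[0] == "and":
            new = label | {c[1], c[2]}
            if new != label:
                return _expand(new)
    # Non-deterministic or-rule: try each disjunct.
    for c in label:
        if c[0] == "or" and c[1] not in label and c[2] not in label:
            return _expand(label | {c[1]}) or _expand(label | {c[2]})
    # Exists-rule: one successor per existential, seeded with the
    # matching value restrictions.
    for c in label:
        if c[0] == "some":
            succ = frozenset({c[2]} | {d[2] for d in label
                                       if d[0] == "all" and d[1] == c[1]})
            if not _expand(succ):
                return False
    return True
```

Without a TBox every rule strictly decreases concept size, so this naive recursion terminates; the blocking machinery needed once transitive roles or general axioms enter is exactly where the paper's contribution begins.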