
    Alpha-beta pruning on evolving game trees

    Technical report: The alpha-beta strategy is a widely used method for economizing on the size of game trees. Heretofore, its application has been limited to depth-first tree growth in recursive search functions. However, many modern game players use retentive (i.e. coroutine-based) control to achieve greater attention mobility in the game tree, e.g. for heuristically guided "best-first" searching. This paper reformulates the alpha-beta strategy for this generalized control setting. Algorithms are provided (in complete PASCAL code) for the following operations on appropriate nodes arbitrarily selected from a game tree: terminal node expansion, resumption of heuristically suspended move generation, tree re-rooting (i.e. top-level move selection), and subtree redevelopment to satisfy a new search thoroughness condition, including restart of nodes that were cut off but may no longer be. Empirical results are presented indicating that, in addition to heuristic freedom, this method typically yields trees with fewer terminal nodes than in the recursive case, owing to best-first descendant ordering and the availability, on average, of greater tree context for node cutting.
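
    For context, the conventional recursive formulation that the paper generalizes looks roughly like the sketch below (Python rather than the paper's PASCAL; the GameNode interface with children(), is_terminal(), and value() is assumed for illustration, not taken from the paper):

        # Minimal depth-first alpha-beta sketch: the recursive baseline that the
        # paper generalizes to retentive, best-first tree growth. The GameNode
        # interface (children(), is_terminal(), value()) is hypothetical.
        import math

        def alpha_beta(node, depth, alpha=-math.inf, beta=math.inf, maximizing=True):
            """Return the minimax value of `node`, pruning lines that cannot matter."""
            if depth == 0 or node.is_terminal():
                return node.value()                  # static evaluation at the frontier
            if maximizing:
                best = -math.inf
                for child in node.children():
                    best = max(best, alpha_beta(child, depth - 1, alpha, beta, False))
                    alpha = max(alpha, best)
                    if alpha >= beta:                # cut-off: minimizer avoids this line
                        break
                return best
            best = math.inf
            for child in node.children():
                best = min(best, alpha_beta(child, depth - 1, alpha, beta, True))
                beta = min(beta, best)
                if alpha >= beta:                    # cut-off: maximizer avoids this line
                    break
            return best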

    The key node method: a highly-parallel alpha-beta algorithm

    Journal article: A new parallel formulation of the alpha-beta algorithm for minimax game tree searching is presented. Its chief characteristic is incremental information sharing among subsearch processes in the form of "provisional" node value communication. Such "eager" communication can offer the double benefit of faster search focusing and enhanced parallelism. This effect is particularly advantageous in the prevalent case when static value correlation exists among adjacent nodes. A message-passing formulation of this idea, termed the "Key Node Method", is outlined. Preliminary experimental results for this method are reported, supporting its validity and potential for increased speedup.
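
    The abstract does not give the message-passing details, but the core idea of eagerly sharing provisional values among sibling subsearches can be sketched as follows (Python threads rather than the paper's message passing; the root-splitting scheme and the sequential alpha_beta parameter, with the signature sketched above, are assumptions):

        # Sketch of "eager" provisional-value sharing among parallel sibling
        # subsearches; much simpler than the paper's Key Node Method.
        import math
        import threading
        from concurrent.futures import ThreadPoolExecutor

        class SharedBound:
            """Thread-safe provisional alpha bound shared by sibling searches."""
            def __init__(self):
                self._alpha = -math.inf
                self._lock = threading.Lock()

            def publish(self, value):
                with self._lock:
                    self._alpha = max(self._alpha, value)

            def current(self):
                with self._lock:
                    return self._alpha

        def search_subtree(child, depth, shared, alpha_beta):
            # A fuller implementation would re-read the shared bound during the
            # subsearch; here it is sampled once when the subsearch starts.
            value = alpha_beta(child, depth, shared.current(), math.inf, False)
            shared.publish(value)        # eager communication of the provisional value
            return value

        def parallel_root_search(root, depth, alpha_beta, workers=4):
            shared = SharedBound()
            with ThreadPoolExecutor(max_workers=workers) as pool:
                futures = [pool.submit(search_subtree, c, depth - 1, shared, alpha_beta)
                           for c in root.children()]
                return max(f.result() for f in futures)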

    Efficiency in nondeterministic control through non-forgetful backtracking

    Journal article: Nondeterministic (ND) control has long been used to express elegant solutions to complex search problems. Programs using ND control can be executed on conventional machines through a systematic examination of trial execution paths. Among the many approaches to the enumeration of these paths is backtracking, a depth-first search of the execution path tree. Despite its implementational advantages, backtracking in its purest form suffers from a "forgetfulness" of retracted execution subpaths. This can lead to exponential run time on problems, such as top-down parsing, in which the same subproblem can recur in slightly different global contexts. This paper presents an alternative form of ND control implementation that incorporates "non-forgetfulness" into backtracking. Recurrences of previously searched subgoals are detected and their net computational effects recreated on demand. Since each distinct goal is pursued at most once, search problems such as general top-down parsing run in polynomial time. Moreover, in contrast to an exhaustive, bottom-up approach, goals are pursued only if appropriate in some global context. A strategy for non-forgetful backtracking is outlined in terms of coroutines and ordinary backtracking; the description of an alternative implementation of this strategy using simple coroutines is referenced. Top-down parsing is used to illustrate the application of this technique, in both its linguistic appearance and its execution effect. Finally, some directions for further research into generalizations of these results are suggested.
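
    A rough illustration of the effect described, using memoized recursive-descent parsing as the example domain (the toy grammar, token handling, and use of lru_cache as the "non-forgetful" memo are illustrative choices, not the paper's coroutine-based mechanism):

        # "Non-forgetful" backtracking illustrated with memoized top-down parsing:
        # each distinct (goal, position) pair is pursued at most once and its net
        # effect (the set of reachable end positions) is replayed on recurrence.
        from functools import lru_cache

        GRAMMAR = {"S": [["a", "S", "b"], ["c"]]}    # hypothetical toy grammar

        def parse(tokens, start="S"):
            @lru_cache(maxsize=None)                 # the "non-forgetful" memo
            def derive(symbol, pos):
                """End positions reachable after deriving `symbol` from `pos`."""
                if symbol not in GRAMMAR:            # terminal symbol
                    ok = pos < len(tokens) and tokens[pos] == symbol
                    return frozenset({pos + 1}) if ok else frozenset()
                ends = set()
                for production in GRAMMAR[symbol]:
                    frontier = {pos}
                    for item in production:          # thread positions through the body
                        frontier = {e for p in frontier for e in derive(item, p)}
                    ends |= frontier
                return frozenset(ends)

            return len(tokens) in derive(start, 0)

        # parse(list("aacbb")) -> True; recurring subgoals cost nothing extra.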

    An abstract machine for parallel graph reduction

    Technical report: An abstract machine suitable for parallel graph reduction on a shared-memory multiprocessor is described. Parallel programming is plagued by subtle race conditions resulting in deadlock or fatal system errors, and due to the nondeterministic nature of program execution, the utilization of resources may vary from one run to another. The abstract machine has been designed for the efficient execution of normal-order functional languages. The proposed instructions for controlling parallel activity are sensitive to load conditions and to the current utilization of resources on the machine. The novel aspect of the architecture is the very simple set of instructions needed to control the complexities of parallel execution. This is an important step towards building a compiler for multiprocessor machines and towards furthering language research in this area. Sample test programs hand-coded in this instruction set show good performance on our 18-node BBN Butterfly as compared to a VAX 8600.
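
    The abstract does not list the instruction set, but a load-sensitive task-creation primitive of the general kind described might be sketched as follows (Python; the Machine class, the pending-task threshold, and the thunk interface are hypothetical, not the machine's actual instructions):

        # A load-sensitive "spark" primitive: hand a reducible expression to
        # another worker only when the machine is underloaded, otherwise reduce
        # it inline.
        import threading
        from concurrent.futures import ThreadPoolExecutor, Future

        class Machine:
            def __init__(self, workers=4, max_pending=8):
                self.pool = ThreadPoolExecutor(max_workers=workers)
                self.max_pending = max_pending
                self.pending = 0
                self.lock = threading.Lock()

            def spark(self, thunk):
                """Maybe evaluate `thunk` on another worker; always return a Future."""
                with self.lock:
                    go_parallel = self.pending < self.max_pending
                    if go_parallel:
                        self.pending += 1
                if go_parallel:
                    fut = self.pool.submit(thunk)
                    fut.add_done_callback(lambda _: self._finished())
                    return fut
                fut = Future()                       # loaded: evaluate inline instead
                fut.set_result(thunk())
                return fut

            def _finished(self):
                with self.lock:
                    self.pending -= 1

        # m = Machine(); m.spark(lambda: 2 + 2).result() == 4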

    Combinator evaluation of functional programs with logical variables

    Technical report: A technique is presented that brings logical variables into the scope of the well-known Turner method for evaluating normal-order functional programs by S, K, I combinator graph reduction. This extension is illustrated by SASL+LV, an extension of Turner's language SASL in which general expressions serve as formal parameters and parameter passage is done by unification. The conceptual and practical advantages of such an extension are discussed, as well as semantic pitfalls that arise from the attendant weakening of referential transparency. Only four new combinators (LV, BV, FN and UNIFY) are introduced. The resulting object code is fully upward compatible, in the sense that previously compiled SASL object code remains executable with unchanged semantics. However, "read-only" variable usage in SASL+LV programs requires a "multi-tasking" extension of the customary stack-based evaluation method. Mechanisms are presented for managing this multi-tasking on both single- and multi-processor systems. Finally, directions are examined for applying this technique to implementations involving larger-granularity combinators and to a fuller semantic treatment of logical variables (e.g. accommodation of failing unifications).
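
    As background, Turner-style reduction of the pure S, K, I combinators can be sketched as below (plain tree rewriting in Python, without the in-place sharing of true graph reduction and without the paper's LV, BV, FN, and UNIFY combinators):

        # Weak-head-normal-form reduction with the S, K, I rewrite rules.
        S, K, I = "S", "K", "I"

        def app(f, x):
            return (f, x)                            # application node

        def rebuild(head, rest):
            for a in rest:
                head = app(head, a)
            return head

        def reduce_whnf(node):
            """Reduce to weak head normal form with the S, K, I rules."""
            while True:
                args, head = [], node
                while isinstance(head, tuple):       # unwind the application spine
                    args.append(head[1])
                    head = head[0]
                args.reverse()
                if head == I and len(args) >= 1:     # I x     -> x
                    node = rebuild(args[0], args[1:])
                elif head == K and len(args) >= 2:   # K x y   -> x
                    node = rebuild(args[0], args[2:])
                elif head == S and len(args) >= 3:   # S f g x -> (f x)(g x)
                    f, g, x = args[:3]
                    node = rebuild(app(app(f, x), app(g, x)), args[3:])
                else:
                    return node                      # no rule applies

        # S K K acts as I: reduce_whnf(app(app(app(S, K), K), "x")) == "x"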

    Modules as values in a persistent object store

    Journal article: We report on an object manager (OM) providing persistent implementations for C++ classes. Our OM generalizes this problem to that of managing persistent modules, where the module concept is an abstract data type (ADT). This approach permits a powerful suite of module manipulation operations to be applied uniformly to modules of many provenances, including non-class-based entities such as conventional object files, application libraries, and shared system libraries. OMOS, a generalized linker and loader, plays a central role in our OM. Class implementations are represented by OMOS modules, which in turn are constructed from OMOS meta-objects encapsulating linkage blueprints. We cleanly solve the problems of (i) logically (but not physically) including executable object files in our OM, (ii) reconciling class inheritance history and linkage history, and (iii) supporting alternative implementations of a class, for client interoperability or version control.

    ETYMA: a framework for modular systems

    Journal article: Modularity, i.e. support for the flexible construction, adaptation, and combination of units of software, is an important goal in many systems. In most cases, however, systems achieve only a few aspects of modularity. The problem can be traced to the inflexibility of, or the limited view of modularity taken by, the underlying architecture of these systems. As a remedy, we show that the notions fundamental to object-oriented programming, i.e. classes and inheritance, can be formulated as a simple meta-level architecture that can be effectively reused in a wide variety of contexts. We have realized such an architecture as an O-O framework and constructed two significant and distinct completions of it. Systems based on this framework benefit not only from design and code reuse but also from the flexibility that the architecture offers. In addition, the architecture represents a unification of the fundamental ideas of several similar but subtly different module systems.

    Our LIPS are sealed: interfacing logic and functional programming systems

    Technical report: We report on a technique for interfacing an untyped logic language to a statically polymorphically typed functional language. Our key insight is that polymorphic types can be interpreted as "need to know" specifications on function arguments. This leads to a criterion for liberally yet safely invoking the functional language to reduce application terms as required during unification in the logic language. This method, called P-unification, enriches the capabilities of each language while retaining the integrity of their individual semantics and implementation technologies. An experimental test has been performed successfully whereby a Horn clause logic programming (HCLP) interpreter written in Common Lisp was interfaced to the Standard ML of New Jersey system. The latter implementation was employed (i) on untyped or dynamically typed data, even though it is statically typed; (ii) lazily, even though it is strict; and (iii) on alien HCLP terms such as unbound variables - all without the slightest modification.
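
    A highly simplified sketch of the lazy-forcing idea, reducing an embedded functional application only when unification must inspect its top-level structure, is given below (Python; the Var/Thunk/tuple term representation is illustrative and omits the paper's type-based criterion and an occurs check):

        # Lazy forcing during unification: a suspended functional application is
        # reduced only when both sides must expose top-level structure.
        class Var:
            """Logic variable; unbound while `ref` is None."""
            def __init__(self, name):
                self.name, self.ref = name, None

        class Thunk:
            """A suspended application belonging to the functional language."""
            def __init__(self, fn, *args):
                self.fn, self.args = fn, args
            def force(self):
                return self.fn(*self.args)

        def walk(t):
            while isinstance(t, Var) and t.ref is not None:
                t = t.ref
            return t

        def unify(a, b):
            a, b = walk(a), walk(b)
            if a is b:
                return True
            if isinstance(a, Var):
                a.ref = b                  # bind without forcing: no need to know yet
                return True
            if isinstance(b, Var):
                b.ref = a
                return True
            if isinstance(a, Thunk):       # structure is now demanded, so reduce
                a = a.force()
            if isinstance(b, Thunk):
                b = b.force()
            if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
                return all(unify(x, y) for x, y in zip(a, b))
            return a == b

        # unify(Var("X"), Thunk(lambda n: n * 2, 21)) binds X without calling the
        # function; unify(("f", Var("Y")), ("f", 3)) forces nothing and binds Y to 3.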

    The schema coercion problem

    Journal article: Over the past decade, the ability to incorporate data from a wide variety of sources has become increasingly important to database users. To meet this need, significant effort has been expended on automatic database schema manipulation. To date, however, this effort has focused on two aspects of the problem: schema integration and schema evolution. Schema integration results in a unified view of several databases, while schema evolution enhances an existing database design to represent additional information. This work defines and addresses a third problem, schema coercion, which defines a mapping from one database to another. This paper presents an overview of the problems associated with schema coercion and how they correspond to the problems encountered in schema integration and schema evolution. In addition, our approach to this problem is outlined. The feasibility of this approach is demonstrated by a tool that reduces the human interaction required at all steps in the integration process: the database schemata are automatically read and converted into corresponding ER representations; a correspondence-identification heuristic is then used to identify similar concepts and create mappings between them; finally, a program is generated to perform the data transfer. This tool has been used successfully to coerce the Haemophilus and Methanococcus genomes from the GenBank ASN.1 database to the Utah Center for Human Genome Research database. Our comprehensive approach to the schema coercion problem has proven extremely valuable in reducing the interaction required to define coercions, particularly when the heuristics are unsuccessful.
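
    One step the tool automates, correspondence identification, can be illustrated with a simple name-similarity heuristic (the toy schemas, the 0.6 cutoff, and the use of difflib are assumptions for illustration, not the tool's actual heuristic):

        # Propose attribute mappings between a source and a target schema by
        # name similarity.
        from difflib import SequenceMatcher

        source = {"Gene": ["gene_id", "locus_name", "dna_sequence"]}
        target = {"GeneRecord": ["id", "locus", "sequence"]}

        def similarity(a, b):
            return SequenceMatcher(None, a.lower(), b.lower()).ratio()

        def propose_mappings(src, dst, cutoff=0.6):
            """Map (src_entity, src_attr) to the best-scoring (dst_entity, dst_attr)."""
            mappings = {}
            for s_ent, s_attrs in src.items():
                for s_attr in s_attrs:
                    best, best_score = None, cutoff
                    for d_ent, d_attrs in dst.items():
                        for d_attr in d_attrs:
                            score = similarity(s_attr, d_attr)
                            if score > best_score:
                                best, best_score = (d_ent, d_attr), score
                    if best is not None:
                        mappings[(s_ent, s_attr)] = best
            return mappings

        # Attributes left unmatched (e.g. "gene_id" above) fall back to the user,
        # which is where reducing rather than eliminating interaction comes in.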

    Fast and accurate NN approach for multi-event annotation of time series

    Technical report: Similarity search in time-series subsequences is an important time-series data mining task. Searching time-series subsequences for matches to a set of shapes is an extension of this task and is equally important. In this work we propose a simple but efficient approach for finding matches for a group of shapes or events in a given time series using a nearest-neighbor approach. We provide several improvements of this approach, including one using the GNAT data structure. We also propose a technique for finding similar shapes of widely varying temporal width. Both of these techniques for primitive shape matching allow us to form an event representation of a time series more accurately and efficiently, leading in turn to finding complex events that are composites of primitive events. We demonstrate the robustness of our approaches in detecting complex shapes even in the presence of "don't care" symbols. We evaluate the success of our approach in detecting both primitive and complex shapes using a data set from the fluid dynamics domain. We also show a speedup of up to 5 times over a naïve nearest-neighbor approach.
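
    The underlying primitive, sliding-window nearest-neighbor matching of template shapes against a series, can be sketched as follows (Python with NumPy; the z-normalization, Euclidean distance, and fixed threshold are illustrative simplifications that omit the GNAT index and variable-width matching):

        # Tag every window whose normalized shape lies close to a template.
        import numpy as np

        def znorm(x):
            s = x.std()
            return (x - x.mean()) / s if s > 0 else x - x.mean()

        def annotate(series, templates, threshold=2.0):
            """Return (position, event_label, distance) tuples for close matches."""
            annotations = []
            for label, shape in templates.items():
                w = len(shape)
                t = znorm(np.asarray(shape, dtype=float))
                for i in range(len(series) - w + 1):
                    window = znorm(np.asarray(series[i:i + w], dtype=float))
                    dist = float(np.linalg.norm(window - t))   # Euclidean distance
                    if dist < threshold:
                        annotations.append((i, label, dist))
            return sorted(annotations)

        # annotate(signal, {"spike": [0, 1, 0, 0], "step": [0, 0, 1, 1]}) yields the
        # primitive-event annotations from which composite events can be assembled.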