    Avoiding Rotated Bitboards with Direct Lookup

    This paper describes an approach for obtaining direct access to the attacked squares of sliding pieces without resorting to rotated bitboards. The technique involves creating four hash tables using the built-in hash arrays of an interpreted, high-level language. The rank, file, and diagonal occupancy are first isolated by masking the desired portion of the board. The attacked squares are then retrieved directly from the hash tables. Maintaining incrementally updated rotated bitboards becomes unnecessary, as does all the updating, mapping, and shifting required to access the attacked squares. Finally, rotated-bitboard move generation speed is compared with that of the direct hash table lookup method.
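
    The mechanics lend themselves to a short illustration. Below is a minimal Python sketch of the rank-attack case only, assuming 64-bit integer bitboards with square 0 = a1; the table name RANK_ATTACKS and the helper functions are placeholders rather than the paper's own code, and the file and diagonal cases would use analogous tables built the same way.

        RANK_ATTACKS = {}  # (square, masked rank occupancy) -> attacked-squares bitboard

        def _first_rank_attacks(file_idx, occ8):
            """Rook attacks along an 8-square rank with 8-bit occupancy occ8."""
            attacks = 0
            for f in range(file_idx + 1, 8):        # slide east until blocked
                attacks |= 1 << f
                if occ8 & (1 << f):
                    break
            for f in range(file_idx - 1, -1, -1):   # slide west until blocked
                attacks |= 1 << f
                if occ8 & (1 << f):
                    break
            return attacks

        # Precompute one entry per (square, rank occupancy) pair: 64 * 256 keys.
        for sq in range(64):
            rank, file_idx = divmod(sq, 8)
            for occ8 in range(256):
                RANK_ATTACKS[(sq, occ8 << (8 * rank))] = _first_rank_attacks(file_idx, occ8) << (8 * rank)

        def rook_rank_attacks(square, occupancy):
            """Mask out the rook's rank, then read its attack set straight from the table."""
            rank = square // 8
            rank_occ = occupancy & (0xFF << (8 * rank))
            return RANK_ATTACKS[(square, rank_occ)]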

    Fast N-Gram Language Model Look-Ahead for Decoders With Static Pronunciation Prefix Trees

    Decoders that make use of token passing restrict their search space by various types of token pruning. Using the Language Model Look-Ahead (LMLA) technique, it is possible to increase the number of tokens that can be pruned without loss of decoding precision. Unfortunately, for token-passing decoders that use a single static pronunciation prefix tree, full n-gram LMLA considerably increases the number of language model probability calculations required. In this paper, a method for applying full n-gram LMLA in a decoder with a single static pronunciation tree is introduced. The experiments show that this method improves the speed of the decoder without an increase in search errors.
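
    As a point of reference, the following is a minimal Python sketch of the generic LMLA computation over a pronunciation prefix tree, assuming a simple node class and an lm_prob(history, word) callable; it illustrates what the look-ahead values are, not the paper's specific method for reducing how often they must be recomputed in a single static tree.

        class PrefixNode:
            def __init__(self):
                self.children = {}   # phone -> PrefixNode
                self.words = []      # words whose pronunciation ends at this node
                self.la = 0.0        # look-ahead score filled in below

        def lookahead(node, history, lm_prob):
            """Assign to every node the best LM probability of any word reachable
            below it for the given history; a token at a node can then be pruned
            against node.la before the word identity is known."""
            best = max((lm_prob(history, w) for w in node.words), default=0.0)
            for child in node.children.values():
                best = max(best, lookahead(child, history, lm_prob))
            node.la = best
            return best

    Because these values depend on the n-gram history, a naive decoder would rerun this pass for every new context, which is the cost the paper targets.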

    A multiscale approximation scheme for explicit model predictive control with stability, feasibility, and performance guarantees

    In this paper, an algorithm based on classical wavelet multiresolution analysis is introduced that returns a low-complexity explicit model predictive control law built on a hierarchy of second-order interpolating wavelets. It is proven that the resulting interpolation is everywhere feasible. Further, tests to confirm stability and to compute a bound on the performance loss are introduced. Since the controller approximation is built on a gridded hierarchy, evaluation of the control law in real-time systems is naturally fast and runs in bounded logarithmic time. A simple example is provided which both illustrates the approach and motivates further research in this direction.
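
    A one-dimensional Python sketch of the evaluation step is given below; it is only meant to show why a gridded hierarchy yields bounded logarithmic evaluation time. The GridNode class and the linear interpolation are illustrative assumptions, standing in for the paper's second-order interpolating wavelets over the full state box.

        class GridNode:
            def __init__(self, lo, hi, u_lo, u_hi, children=()):
                self.lo, self.hi = lo, hi          # state interval covered by this node
                self.u_lo, self.u_hi = u_lo, u_hi  # control values stored at the endpoints
                self.children = children           # empty tuple -> leaf of the hierarchy

        def evaluate(node, x):
            """Descend the gridded hierarchy to the leaf containing x, then interpolate.
            The number of steps is bounded by the depth of the hierarchy, hence the
            bounded logarithmic evaluation time."""
            while node.children:
                node = next(c for c in node.children if c.lo <= x <= c.hi)
            t = (x - node.lo) / (node.hi - node.lo)
            return (1 - t) * node.u_lo + t * node.u_hi  # linear stand-in for the second-order interpolant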

    A Compact Cache-Efficient Function Store with Constant Evaluation Time

    A new data structure to store a set of key-value mappings for finite static key sets is presented. The data structure, called the Cache-Efficient Function Store (CEFS), can be built in linear expected time and supports evaluation for a key in worst-case constant time. Furthermore, (i) the building process can be parallelized to achieve massive speed-ups over known methods; (ii) an evaluation incurs fewer than two cache misses in the average case for many applications, improving upon all known methods. The data structure is also compact, needing only O(n) bits of extra space. It is flexible in that many parameters can be configured to fit specific applications, and the time and space properties for different parameter choices can be predicted in advance with great precision using formulae developed in the thesis. It is also possible to automate the selection of parameters. Experiments have demonstrated the efficiency of the new data structure and confirmed the theoretical analysis.
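
    To make the flavour of such a structure concrete, here is a hedged Python sketch of a bucketed static function store: a first hash splits the keys into small buckets, each bucket stores one seed that makes a second hash collision-free within its slot range, and all values sit in a single flat array, so a query touches roughly one seed entry and one value slot. The class, hash choices, and bucket sizing below are illustrative assumptions, not the CEFS construction or encoding from the thesis.

        import hashlib

        def _h(key, seed):
            """Deterministic 64-bit hash of a bytes key under an integer seed."""
            d = hashlib.blake2b(key, digest_size=8, salt=seed.to_bytes(8, "little"))
            return int.from_bytes(d.digest(), "little")

        class StaticFunctionStore:
            """Static key -> value store: one seed per bucket plus a flat value array."""

            def __init__(self, mapping):
                keys = list(mapping)
                self.m = max(1, len(keys) // 4)      # number of buckets (~4 keys each)
                buckets = [[] for _ in range(self.m)]
                for k in keys:
                    buckets[_h(k, 0) % self.m].append(k)
                self.sizes = [len(b) for b in buckets]
                self.offsets = [0] * self.m          # start of each bucket's slot range
                for i in range(1, self.m):
                    self.offsets[i] = self.offsets[i - 1] + self.sizes[i - 1]
                self.seeds = [0] * self.m
                self.values = [None] * len(keys)
                for i, b in enumerate(buckets):
                    if not b:
                        continue
                    seed = 1   # try seeds until the second hash is injective in this bucket
                    while len({_h(k, seed) % len(b) for k in b}) != len(b):
                        seed += 1
                    self.seeds[i] = seed
                    for k in b:
                        self.values[self.offsets[i] + _h(k, seed) % len(b)] = mapping[k]

            def lookup(self, key):
                """Only defined for keys that were in the original mapping."""
                i = _h(key, 0) % self.m
                return self.values[self.offsets[i] + _h(key, self.seeds[i]) % self.sizes[i]]

    For example, StaticFunctionStore({b"alpha": 1, b"beta": 2}).lookup(b"beta") returns 2; a space-conscious implementation would additionally encode the seeds and offsets compactly to approach the O(n) bits of extra space mentioned above.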

    Area-efficient near-associative memories on FPGAs
