    On input read-modes of alternating Turing machines

    A number of input read-modes of Turing machines have appeared in the literature. To investigate the differences among these input read-modes, we study log-time alternating Turing machines with a constant number of alternations. For each fixed integer k ⩾ 1 and for each read-mode, a precise circuit characterization is established for log-time alternating Turing machines with k alternations, a nontrivial refinement of Ruzzo's circuit characterization of alternating Turing machines. These circuit characterizations indicate clearly the differences among the input read-modes. Complete languages in a strong sense for each level of the log-time hierarchy are presented, refining a result by Buss. An application of these results to computational optimization problems is described.
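
    For orientation, a standard fact that the abstract itself does not state: the log-time hierarchy as a whole is known to coincide with DLOGTIME-uniform AC^0, and it is precisely how the individual k-alternation levels line up with circuit depth that depends on the read-mode and uniformity details this paper refines. Writing Σ_k^{LOGTIME} for the class accepted by log-time alternating Turing machines with k alternations (this notation is ours, not the paper's), the coarse correspondence is

        \mathrm{LH} \;=\; \bigcup_{k \ge 1} \Sigma_k^{\mathrm{LOGTIME}} \;=\; \text{DLOGTIME-uniform } \mathrm{AC}^0 .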

    Algorithmic ramifications of prefetching in memory hierarchy

    External memory models, the most notable being the I-O model [3], capture the effects of the memory hierarchy and aid in algorithm design. More than a decade of architectural advancements have led to new features not captured in the I-O model, most notably the prefetching capability. We propose a relatively simple Prefetch model that incorporates data prefetching into the traditional I-O model and show how to design algorithms that can attain close to peak memory bandwidth. Unlike (the inverse of) memory latency, memory bandwidth is much closer to the processing speed, so intelligent use of prefetching can considerably mitigate the I-O bottleneck. For some fundamental problems, our algorithms attain running times approaching those of the idealized random access machine under reasonable assumptions. Our work also explains the significantly superior performance of I-O-efficient algorithms on systems that support prefetching compared to ones that do not.
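
    The abstract gives no code, but the basic idea the Prefetch model builds on, issuing memory requests early so that transfers overlap with computation, can be illustrated with ordinary software prefetching. The sketch below is not the paper's model or algorithm; prefetched_sum and PREFETCH_DISTANCE are illustrative names, and __builtin_prefetch is the GCC/Clang intrinsic.

        // Minimal sketch (assumptions noted above): a streaming scan that
        // prefetches data a fixed distance ahead of the current element so
        // that cache-line transfers overlap with the additions.
        #include <cstddef>
        #include <vector>

        double prefetched_sum(const std::vector<double>& a) {
            constexpr std::size_t PREFETCH_DISTANCE = 64;  // elements ahead; an assumed tuning value
            double sum = 0.0;
            for (std::size_t i = 0; i < a.size(); ++i) {
        #if defined(__GNUC__) || defined(__clang__)
                if (i + PREFETCH_DISTANCE < a.size())
                    __builtin_prefetch(&a[i + PREFETCH_DISTANCE], /*rw=*/0, /*locality=*/1);
        #endif
                sum += a[i];
            }
            return sum;
        }

    A simple sequential scan like this gains little on hardware with aggressive built-in prefetchers; explicit prefetching matters most for access patterns the hardware cannot predict, which is the regime such models are meant to capture.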

    Evaluating Multicore Algorithms on the Unified Memory Model

    One of the challenges to achieving good performance on multicore architectures is the effective utilization of the underlying memory hierarchy. While this is an issue for single-core architectures, it is a critical problem for multicore chips. In this paper, we formulate the unified multicore model (UMM) to help understand the fundamental limits on cache performance on these architectures. The UMM seamlessly handles different types of multicore processors with varying degrees of cache sharing at different levels. We demonstrate that our model can be used to study a variety of multicore architectures on a variety of applications. In particular, we use it to analyze an option pricing problem under the trinomial model and develop an algorithm for it that has near-optimal memory traffic between cache levels. We have implemented the algorithm on a system with two Quad-Core Intel Xeon 5310 1.6 GHz processors (8 cores). It achieves a peak performance of 19.5 GFLOPs, which is 38% of the theoretical peak of the multicore system. We demonstrate that our algorithm outperforms compiler-optimized and auto-parallelized code by a factor of up to 7.5.
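
    For readers unfamiliar with the trinomial model the abstract mentions, the sketch below prices a European call by backward induction on a standard Boyle-style trinomial tree. It is only a plain, unblocked baseline, not the paper's cache-efficient multicore algorithm, and the function name and parameterization are illustrative assumptions.

        #include <algorithm>
        #include <cmath>
        #include <vector>

        // European call on a trinomial tree with N time steps (Boyle-style
        // parameterization; d = 1/u, so the middle branch keeps the price).
        double trinomial_call(double S0, double K, double r, double sigma,
                              double T, int N) {
            const double dt = T / N;
            const double u  = std::exp(sigma * std::sqrt(2.0 * dt));   // up factor
            const double a  = std::exp( sigma * std::sqrt(dt / 2.0));
            const double b  = std::exp(-sigma * std::sqrt(dt / 2.0));
            const double g  = std::exp(r * dt / 2.0);
            const double pu = std::pow((g - b) / (a - b), 2.0);        // up probability
            const double pd = std::pow((a - g) / (a - b), 2.0);        // down probability
            const double pm = 1.0 - pu - pd;                           // middle probability
            const double disc = std::exp(-r * dt);

            // Terminal payoffs at the 2N+1 leaves: price at leaf j is S0 * u^(j-N).
            std::vector<double> v(2 * N + 1);
            for (int j = 0; j <= 2 * N; ++j)
                v[j] = std::max(S0 * std::pow(u, j - N) - K, 0.0);

            // Backward induction, one level at a time; node j at a level reads
            // nodes j, j+1, j+2 of the level above, so the update is in place.
            for (int step = N - 1; step >= 0; --step)
                for (int j = 0; j <= 2 * step; ++j)
                    v[j] = disc * (pd * v[j] + pm * v[j + 1] + pu * v[j + 2]);

            return v[0];
        }

    The level-by-level sweep streams over one array; reorganizing that memory traffic across cache levels and cores is exactly what a blocking scheme like the paper's is designed to do.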

    Approximately Optimum Search Trees in External Memory Models

    We examine optimal and near-optimal solutions to the classic binary search tree problem of Knuth. We are given a set of n keys (originally called words), B_1, B_2, ..., B_n, and 2n+1 frequencies: {p_1, p_2, ..., p_n} represent the probabilities of searching for each given key, and {q_0, q_1, ..., q_n} represent the probabilities of searching in the gaps between and outside of these keys. We have Σ_{i=0}^n q_i + Σ_{i=1}^n p_i = 1, and we assume without loss of generality that q_{i-1} + p_i + q_i ≠ 0 for every i ∈ {1,...,n}. The keys must make up the internal nodes of the tree while the gaps make up the leaves. Our goal is to construct a binary search tree such that the expected cost of a search is minimized. First, we re-examine an approximate solution of Guttler, Mehlhorn and Schneider which was shown to have a worst-case bound of c * H + 2, where c >= 1/(H(1/3,2/3)) ~ 1.08 and H = Σ_{i=1}^{n} p_i * log_2(1/p_i) + Σ_{j=0}^{n} q_j * log_2(1/q_j) is the entropy of the distribution. We give an improved worst-case bound of H + 4 for this heuristic. Next, we examine the optimum binary search tree problem under a model of external memory. We use the Hierarchical Memory Model of Aggarwal et al. The model has an unlimited number of registers, R_1, R_2, ..., each with its own location in memory (a positive integer). We have a set of memory sizes m_1, m_2, ..., m_l which are monotonically increasing. Each memory level has a finite size except m_l, which we assume has infinite size. Each memory level has an associated cost of access c_1, c_2, ..., c_l, and we assume that c_1 < c_2 < ... < c_l. We propose two approximate solutions which run in O(n) time, where n is the number of words in our data set. Using these methods, we improve upon a bound given in Thite's 2001 thesis under the related HMM_2 model in the approximate setting. We also examine the related problem of binary trees on multisets of probabilities, where keys are unordered and we do not differentiate between which probabilities must be leaves and which must be internal nodes. We provide a simple O(n log_2(n)) algorithm that is within an additive (n+1)(2n) of optimal on a multiset of n keys.
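
    As a point of reference for the problem statement above, the sketch below computes the exact optimum expected search cost with the textbook O(n^3) dynamic program (without Knuth's monotonicity speedup). It is only the classical baseline, not one of the approximation algorithms from this work, and its cost convention (one unit per node visited, counting leaves) may differ slightly from the thesis.

        #include <limits>
        #include <vector>

        // Exact optimum for Knuth's optimal BST problem. p[1..n] are the key
        // probabilities and q[0..n] the gap probabilities; p[0] is unused.
        double optimal_bst_cost(const std::vector<double>& p,
                                const std::vector<double>& q) {
            const int n = static_cast<int>(p.size()) - 1;
            std::vector<std::vector<double>> e(n + 2, std::vector<double>(n + 1, 0.0));
            std::vector<std::vector<double>> w(n + 2, std::vector<double>(n + 1, 0.0));

            for (int i = 1; i <= n + 1; ++i)
                e[i][i - 1] = w[i][i - 1] = q[i - 1];          // empty subtree: just its gap

            for (int len = 1; len <= n; ++len) {
                for (int i = 1; i + len - 1 <= n; ++i) {
                    const int j = i + len - 1;
                    w[i][j] = w[i][j - 1] + p[j] + q[j];       // weight of keys i..j and their gaps
                    e[i][j] = std::numeric_limits<double>::infinity();
                    for (int r = i; r <= j; ++r) {             // try each key B_r as the root
                        const double cost = e[i][r - 1] + e[r + 1][j] + w[i][j];
                        if (cost < e[i][j]) e[i][j] = cost;
                    }
                }
            }
            return e[1][n];                                    // expected cost of the best tree on B_1..B_n
        }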