
    BriskStream: Scaling Data Stream Processing on Shared-Memory Multicore Architectures

    We introduce BriskStream, an in-memory data stream processing system (DSPS) specifically designed for modern shared-memory multicore architectures. BriskStream's key contribution is an execution plan optimization paradigm, namely RLAS, which takes the relative location (i.e., NUMA distance) of each pair of producer-consumer operators into consideration. We propose a branch-and-bound-based approach with three heuristics to solve the resulting nontrivial optimization problem. Experimental evaluations demonstrate that BriskStream yields much higher throughput and better scalability than existing DSPSs on multicore architectures when processing different types of workloads.
    Comment: To appear in SIGMOD'1
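
    The abstract does not spell out RLAS's cost model, so the following is only a rough, hypothetical Python illustration: it scores an operator-to-socket placement by weighting each producer-consumer edge's traffic by the NUMA distance between the sockets it spans, and finds the cheapest placement by exhaustive search; a real optimizer, as described above, would prune that search with branch and bound plus heuristics. All names and numbers are assumptions, not BriskStream's API.

    from itertools import product

    # Hypothetical NUMA-aware placement cost, not BriskStream's actual RLAS model.
    # edges: {(producer, consumer): tuples_per_second}
    # numa_dist: socket-to-socket distance matrix (e.g. as reported by `numactl --hardware`)

    def placement_cost(placement, edges, numa_dist):
        """Sum of edge traffic weighted by the NUMA distance it must cross."""
        return sum(rate * numa_dist[placement[p]][placement[c]]
                   for (p, c), rate in edges.items())

    def best_placement(operators, n_sockets, edges, numa_dist):
        """Exhaustive search over socket assignments; a real optimizer would
        prune this space with branch and bound and heuristics."""
        best, best_cost = None, float("inf")
        for assign in product(range(n_sockets), repeat=len(operators)):
            placement = dict(zip(operators, assign))
            cost = placement_cost(placement, edges, numa_dist)
            if cost < best_cost:
                best, best_cost = placement, cost
        return best, best_cost

    if __name__ == "__main__":
        ops = ["source", "parse", "join", "sink"]
        edges = {("source", "parse"): 100.0, ("parse", "join"): 80.0, ("join", "sink"): 20.0}
        numa_dist = [[10, 21], [21, 10]]   # local vs. remote access cost on a 2-socket machine
        print(best_placement(ops, 2, edges, numa_dist))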

    Some results on tries with adaptive branching

    We study a modification of digital trees (or tries) with adaptive multi-digit branching. Such tries can dynamically adjust the degrees of their nodes by choosing the number of digits to be processed per lookup. While we do not specify any particular method for selecting the degrees of nodes, we assume that such selection can be accomplished by examining the number of strings remaining in each sub-tree and/or estimating parameters of the input distribution. We call this class of digital trees adaptive multi-digit tries (or AMD-tries) and provide a preliminary analysis of their expected behavior in a memoryless model. We establish the following results: (1) there exist AMD-tries attaining a constant expected time of a successful search; (2) there exist AMD-tries consuming a linear (in the number of strings inserted) amount of space; (3) both constant search time and linear space usage can be attained if the (memoryless) source is symmetric. We accompany our analysis with a brief survey of several known types of adaptive trie structures, and show how our analysis extends and/or complements previous results.
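
    The paper deliberately leaves the degree-selection policy open; purely as an illustration, the Python sketch below builds a trie over binary strings and lets each node consume k = max(1, floor(log2 n)) digits, where n is the number of strings remaining in its subtree. Both the policy and the representation are assumptions, not the paper's prescribed rule.

    import math

    # Illustrative AMD-trie over binary strings; the degree-selection policy
    # (k ~ log2 of the number of strings remaining) is an assumption.

    def build_amd_trie(strings, depth=0):
        if len(strings) <= 1 or depth >= max(len(s) for s in strings):
            return {"leaf": strings}                      # few or exhausted strings: stop splitting
        k = max(1, int(math.log2(len(strings))))          # digits consumed at this node
        children = {}
        for s in strings:
            chunk = s[depth:depth + k].ljust(k, "0")      # pad short strings for simplicity
            children.setdefault(chunk, []).append(s)
        return {"k": k,
                "children": {c: build_amd_trie(group, depth + k)
                             for c, group in children.items()}}

    def lookup(trie, key, depth=0):
        """Follow k-digit chunks until a leaf; True if key was inserted."""
        while "leaf" not in trie:
            k = trie["k"]
            chunk = key[depth:depth + k].ljust(k, "0")
            child = trie["children"].get(chunk)
            if child is None:
                return False
            trie, depth = child, depth + k
        return key in trie["leaf"]

    if __name__ == "__main__":
        keys = ["0101", "0110", "1100", "1111", "0011"]
        trie = build_amd_trie(keys)
        print(lookup(trie, "1100"), lookup(trie, "1010"))  # True False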

    On the Importance of Countergradients for the Development of Retinotopy: Insights from a Generalised Gierer Model

    During the development of the topographic map from vertebrate retina to superior colliculus (SC), EphA receptors are expressed in a gradient along the nasotemporal retinal axis. Their ligands, ephrin-As, are expressed in a gradient along the rostrocaudal axis of the SC. Countergradients of ephrin-As in the retina and EphAs in the SC are also expressed. Disruption of any of these gradients leads to mapping errors. Gierer's (1981) model, which uses well-matched pairs of gradients and countergradients to establish the mapping, can account for the formation of wild-type maps, but not the double maps found in EphA knock-in experiments. I show that these maps can be explained by models, such as Gierer's (1983), which have gradients and no countergradients, together with a powerful compensatory mechanism that helps to distribute connections evenly over the target region. However, this type of model cannot explain mapping errors found when the countergradients are partially knocked out. I examine the relative importance of countergradients as against compensatory mechanisms by generalising Gierer's (1983) model so that the strength of compensation is adjustable. Either matching gradients and countergradients alone or poorly matching gradients and countergradients together with a strong compensatory mechanism are sufficient to establish an ordered mapping. With a weaker compensatory mechanism, gradients without countergradients lead to a poorer map, but the addition of countergradients improves the mapping. This model produces the double maps in simulated EphA knock-in experiments and a map consistent with the Math5 knock-out phenotype. Simulations of a set of phenotypes from the literature substantiate the finding that countergradients and compensation can be traded off against each other to give similar maps. I conclude that a successful model of retinotopy should contain countergradients and some form of compensation mechanism, but not in the strong form put forward by Gierer.
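
    The abstract does not reproduce the model equations, so the Python sketch below is only a loose, assumed illustration of the trade-off it describes: a 1-D retina is mapped onto a 1-D colliculus by minimising, for each retinal cell, a Gierer-flavoured gradient/countergradient interaction term plus an adjustable compensation penalty on target positions that have already received connections. It is not the generalised Gierer (1983) model itself; all gradients and parameters are invented for the toy.

    import math

    # Toy 1-D illustration of the gradient/countergradient vs. compensation trade-off.
    N = 50
    a = 3.0 / N                                           # gradient steepness (assumed)

    EphA_ret    = [math.exp(a * i) for i in range(N)]     # retinal EphA gradient
    ephrinA_ret = [math.exp(-a * i) for i in range(N)]    # retinal countergradient
    ephrinA_sc  = [math.exp(a * j) for j in range(N)]     # collicular ephrin-A gradient
    EphA_sc     = [math.exp(-a * j) for j in range(N)]    # collicular countergradient

    def map_retina(gamma, use_counter=True):
        """gamma scales the compensation penalty on already-occupied target
        positions; use_counter toggles the countergradient term."""
        load = [0] * N
        mapping = []
        for i in range(N):
            def cost(j):
                c = EphA_ret[i] * EphA_sc[j]              # forward gradient vs. countergradient
                if use_counter:
                    c += ephrinA_ret[i] * ephrinA_sc[j]   # countergradient vs. forward gradient
                return c + gamma * load[j]
            j_best = min(range(N), key=cost)
            load[j_best] += 1
            mapping.append(j_best)
        return mapping

    if __name__ == "__main__":
        print(map_retina(gamma=0.0)[:10])                     # matched gradients and countergradients
        print(map_retina(gamma=0.5, use_counter=False)[:10])  # gradient only; compensation spreads connections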

    Fundamental Fields of Post-Schumpeterian Evolutionary Economics

    Although the branch of economics that deals with economic evolution has become established during the last couple of decades, its aims and potentials can most easily be understood against the background of the work of early pioneers. Joseph A. Schumpeter's contribution not only analysed capitalist economic evolution as a process of the innovative renewal of business routines; he also explored the idea that the development of economics requires coordinated efforts within the “fundamental fields” of theory, history, statistics, and economic sociology. The paper applies this idea in an analysis of the development of modern evolutionary economics. The focus is on the characteristics and interdependencies of evolutionary history, evolutionary theory, and evolutionary statistics.
    Keywords: evolutionary economics; fundamental fields; Joseph A. Schumpeter

    Benchmarking for Bayesian Reinforcement Learning

    In the Bayesian Reinforcement Learning (BRL) setting, agents aim to maximise the rewards collected while interacting with their environment, using some prior knowledge accessed beforehand. Many BRL algorithms have already been proposed, but although a few toy examples exist in the literature, there are still no extensive or rigorous benchmarks to compare them. This paper addresses that problem and provides a new BRL comparison methodology along with the corresponding open-source library. The methodology defines a comparison criterion that measures the performance of algorithms on large sets of Markov Decision Processes (MDPs) drawn from some probability distributions. To enable the comparison of non-anytime algorithms, the methodology also includes a detailed analysis of the computation time requirements of each algorithm. Our library is released with all source code and documentation: it includes three test problems, each with two different prior distributions, and seven state-of-the-art RL algorithms. Finally, we illustrate the library by comparing all the available algorithms and discussing the results.
    Comment: 37 page
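
    As a hypothetical sketch of the benchmarking protocol described above (not the released library's API), the Python snippet below samples small MDPs from a simple prior, runs each agent on every sampled MDP, and reports the mean discounted return together with the computation time spent per agent. The MDP generator, agents, and sizes are illustrative assumptions.

    import random, time

    def sample_mdp(rng, n_states=5, n_actions=2):
        """Draw transitions and rewards from a simple prior (random, then normalised)."""
        P = {(s, a): [rng.random() for _ in range(n_states)]
             for s in range(n_states) for a in range(n_actions)}
        for key in P:
            total = sum(P[key])
            P[key] = [p / total for p in P[key]]
        R = {(s, a): rng.random() for s in range(n_states) for a in range(n_actions)}
        return P, R, n_states, n_actions

    def run_episode(mdp, agent, rng, horizon=50, gamma=0.95):
        P, R, n_states, n_actions = mdp
        s, ret, discount = 0, 0.0, 1.0
        for _ in range(horizon):
            a = agent(s, n_actions, rng)
            ret += discount * R[(s, a)]
            discount *= gamma
            s = rng.choices(range(n_states), weights=P[(s, a)])[0]
        return ret

    def benchmark(agents, n_mdps=20, seed=0):
        """Score each agent on the same set of MDPs drawn from the prior."""
        rng = random.Random(seed)
        mdps = [sample_mdp(rng) for _ in range(n_mdps)]
        results = {}
        for name, agent in agents.items():
            start = time.perf_counter()
            returns = [run_episode(m, agent, rng) for m in mdps]
            results[name] = (sum(returns) / len(returns), time.perf_counter() - start)
        return results

    if __name__ == "__main__":
        agents = {"random":   lambda s, n_actions, rng: rng.randrange(n_actions),
                  "always-0": lambda s, n_actions, rng: 0}
        for name, (score, secs) in benchmark(agents).items():
            print(f"{name}: mean return {score:.2f} in {secs * 1000:.1f} ms")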