
    A key-based adaptive transactional memory executor

    Software transactional memory systems enable a programmer to easily write concurrent data structures such as lists, trees, hashtables, and graphs, where nonconflicting operations proceed in parallel. Many of these structures take the abstract form of a dictionary, in which each transaction is associated with a search key. By regrouping transactions based on their keys, one may improve locality and reduce conflicts among parallel transactions. In this paper, we present an executor that partitions transactions among available processors. Our key-based adaptive partitioning monitors incoming transactions, estimates the probability distribution of their keys, and adaptively determines the (usually nonuniform) partitions. By comparing the adaptive partitioning with uniform partitioning and round-robin keyless partitioning on a 16-processor SunFire 6800 machine, we demonstrate that key-based adaptive partitioning significantly improves the throughput of fine-grained parallel operations on concurrent data structures.
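
    As an illustration of the idea in this abstract (not the paper's actual implementation), the sketch below shows one way key-based adaptive partitioning could work: sample incoming transaction keys, estimate their distribution, and place partition boundaries at empirical quantiles so each processor receives a comparable share of a skewed workload. All class, method, and parameter names are illustrative assumptions.

        import bisect

        class KeyBasedPartitioner:
            """Sketch of key-based adaptive partitioning (illustrative only):
            sample transaction keys, then place partition boundaries at
            empirical quantiles of the observed key distribution."""

            def __init__(self, num_partitions, sample_size=10_000):
                self.num_partitions = num_partitions
                self.sample_size = sample_size
                self.samples = []
                self.boundaries = []    # upper bounds of partitions 0 .. n-2

            def observe(self, key):
                # Record a key; recompute boundaries once enough samples arrive.
                self.samples.append(key)
                if len(self.samples) >= self.sample_size:
                    self._recompute()

            def _recompute(self):
                # Quantile-based (usually non-uniform) boundaries balance load
                # even when the key distribution is heavily skewed.
                ordered = sorted(self.samples)
                step = len(ordered) // self.num_partitions
                self.boundaries = [ordered[(i + 1) * step - 1]
                                   for i in range(self.num_partitions - 1)]
                self.samples.clear()

            def partition_of(self, key):
                # Route a transaction to the processor owning its key range;
                # fall back to hashing until boundaries have been estimated.
                if not self.boundaries:
                    return hash(key) % self.num_partitions
                return bisect.bisect_left(self.boundaries, key)

    A dispatcher thread would call observe() and partition_of() on each incoming transaction and enqueue it for the corresponding processor.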

    On the Complexity of Mining Itemsets from the Crowd Using Taxonomies

    We study the problem of frequent itemset mining in domains where data is not recorded in a conventional database but only exists in human knowledge. We provide examples of such scenarios, and present a crowdsourcing model for them. The model uses the crowd as an oracle to find out whether an itemset is frequent or not, and relies on a known taxonomy of the item domain to guide the search for frequent itemsets. In the spirit of data mining with oracles, we analyze the complexity of this problem in terms of (i) crowd complexity, which measures the number of crowd questions required to identify the frequent itemsets; and (ii) computational complexity, which measures the computational effort required to choose the questions. We provide lower and upper complexity bounds in terms of the size and structure of the input taxonomy, as well as the size of a concise description of the output itemsets. We also provide constructive algorithms that achieve the upper bounds, and consider more efficient variants for practical situations. Comment: 18 pages, 2 figures. To be published in ICDT'13. Added missing acknowledgement.
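
    The search procedure can be sketched as follows (a simplified illustration under assumed interfaces, not the paper's algorithm): the crowd is modeled as a Boolean oracle over candidate itemsets, and the taxonomy is explored from general to specific items, pruning the specializations of any itemset the crowd reports as infrequent.

        from itertools import combinations

        def mine_frequent_itemsets(roots, children, is_frequent, max_size=2):
            """Simplified taxonomy-guided search with a crowd oracle.
            roots       -- most general items of the taxonomy
            children    -- dict: item -> list of direct specializations
            is_frequent -- oracle (e.g. a crowd question): frozenset -> bool
            """
            frontier = [frozenset(c)
                        for size in range(1, max_size + 1)
                        for c in combinations(roots, size)]
            asked, frequent = set(), set()
            while frontier:
                itemset = frontier.pop()
                if itemset in asked:
                    continue
                asked.add(itemset)                 # crowd complexity = |asked|
                if not is_frequent(itemset):
                    continue                       # specializations are pruned
                frequent.add(itemset)
                # Refine one item at a time toward more specific itemsets.
                for item in itemset:
                    for child in children.get(item, []):
                        frontier.append((itemset - {item}) | {child})
            return frequent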

    Design and FPGA Implementation of High Speed DWT-IDWT Architecture with Pipelined SPIHT Architecture for Image Compression

    Image compression demands high-speed architectures for the transformation and encoding process. Medical image compression demands lossless compression schemes and faster architectures. A trade-off between speed and area decides the complexity of image compression algorithms. In this work, a high-speed DWT architecture and a pipelined SPIHT architecture are designed, modeled, and implemented on an FPGA platform. DWT computation is performed using a matrix multiplication operation and is implemented on a Virtex-5 FPGA, consuming less than 1% of the hardware resources. The SPIHT algorithm is implemented using a pipelined architecture and hence improves throughput and latency. The SPIHT architecture operates at a frequency of 260 MHz and occupies less than 15% of the resources. The architecture designed is suitable for high-speed image compression applications.
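
    For readers unfamiliar with the matrix formulation, the sketch below (a software illustration, not the FPGA design, using the Haar wavelet purely as an example) shows how a one-level DWT and its inverse can be expressed as multiplications by an orthogonal analysis matrix, the kind of operation that maps naturally onto FPGA multiply-accumulate resources.

        import numpy as np

        def haar_analysis_matrix(n):
            # One-level Haar DWT of an even-length signal as an n x n matrix:
            # first n/2 rows produce approximation (low-pass) coefficients,
            # last n/2 rows produce detail (high-pass) coefficients.
            H = np.zeros((n, n))
            s = 1.0 / np.sqrt(2.0)
            for i in range(n // 2):
                H[i, 2 * i], H[i, 2 * i + 1] = s, s
                H[n // 2 + i, 2 * i], H[n // 2 + i, 2 * i + 1] = s, -s
            return H

        def dwt_rows(image):
            # Forward transform of each image row as a matrix multiplication.
            H = haar_analysis_matrix(image.shape[1])
            return image @ H.T

        def idwt_rows(coeffs):
            # H is orthogonal, so the inverse transform multiplies by H itself.
            H = haar_analysis_matrix(coeffs.shape[1])
            return coeffs @ H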

    Ramified rectilinear polygons: coordinatization by dendrons

    Simple rectilinear polygons (i.e. rectilinear polygons without holes or cutpoints) can be regarded as finite rectangular cell complexes coordinatized by two finite dendrons. The intrinsic $l_1$-metric is thus inherited from the product of the two finite dendrons via an isometric embedding. The rectangular cell complexes that share this same embedding property are called ramified rectilinear polygons. The links of vertices in these cell complexes may be arbitrary bipartite graphs, in contrast to simple rectilinear polygons where the links of points are either 4-cycles or paths of length at most 3. Ramified rectilinear polygons are particular instances of rectangular complexes obtained from cube-free median graphs, or equivalently simply connected rectangular complexes with triangle-free links. The underlying graphs of finite ramified rectilinear polygons can be recognized among graphs in linear time by a Lexicographic Breadth-First Search. Whereas the symmetry of a simple rectilinear polygon is very restricted (with automorphism group being a subgroup of the dihedral group $D_4$), ramified rectilinear polygons are universal: every finite group is the automorphism group of some ramified rectilinear polygon. Comment: 27 pages, 6 figures.
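
    In coordinates, the isometric embedding mentioned in the abstract means that distances decompose additively over the two dendrons; stated here for concreteness (this is the standard $l_1$ product metric, not a new formula of the paper):

        d\bigl((x_1, x_2),\,(y_1, y_2)\bigr) \;=\; d_{T_1}(x_1, y_1) + d_{T_2}(x_2, y_2),

    where $T_1$ and $T_2$ are the two coordinatizing dendrons and $d_{T_i}$ denotes the path metric on $T_i$.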

    Developing the big ideas of number

    The mathematical content knowledge (MCK) and pedagogical content knowledge (PCK) of primary and elementary teachers at all levels of experience is under scrutiny. This article suggests that content knowledge and the way in which it is linked to effective pedagogies would be greatly enhanced by viewing mathematical content from the perspective of the ‘big ideas’ of mathematics, especially of number. This would enable teachers to make use of the many connections and links within and between such ‘big ideas’ and to make them explicit to children. Many teachers view the content they have to teach in terms of what curriculum documents define as being applicable to the particular year level being taught. This article suggests that a broader view of content is needed, as well as a greater awareness of how concepts are built in preceding and succeeding year levels. A ‘big ideas’ focus would also better enable teachers to deal with the demands of what are perceived to be crowded mathematics curricula. The article investigates four ‘big ideas’ of number – trusting the count, place value, multiplicative thinking, and multiplicative partitioning – and examines the ‘micro-content’ that contributes to their development.

    Reconfigurable-Hardware Accelerated Stream Aggregation

    High throughput and low latency stream aggregation is essential for many applications that analyze massive volumes of data in real-time. Incoming data need to be stored in a single sliding-window before processing, in cases where incremental aggregations are wasteful or not possible at all. However, storing all incoming values in a single window puts tremendous pressure on the memory bandwidth and capacity. GPU and CPU memory management is inefficient for this task as it introduces unnecessary data movement that wastes bandwidth. FPGAs can make more efficient use of their memory, but existing approaches employ only on-chip memory and can therefore support only small problem sizes (i.e. small sliding windows and few keys) due to the limited capacity. This thesis addresses the above limitations of stream processing systems by proposing techniques for accelerating single sliding-window stream aggregation using FPGAs to achieve line-rate processing throughput and ultra-low latency. It does so first by building accelerators using FPGAs and second, by alleviating the memory pressure posed by single-window stream aggregation. The initial part of this thesis presents the accelerators for both windowing policies, namely tuple- and time-based, using Maxeler's DataFlow Engines (DFEs), which have a direct feed of incoming data from the network as well as direct access to off-chip DRAM. Compared to state-of-the-art stream processing software systems, the DFEs offer 1-2 orders of magnitude higher processing throughput and 4 orders of magnitude lower latency. The latter part of this thesis focuses on alleviating the memory pressure due to the various steps in single-window stream aggregation. Updating the window with new incoming values and reading it to feed the aggregation functions are the two primary steps in stream aggregation. The high on-chip SRAM bandwidth enables line-rate processing, but only for small problem sizes due to the limited capacity. The larger off-chip DRAM size supports larger problems, but falls short on performance due to lower bandwidth. In order to bridge this gap, this thesis introduces a specialized memory hierarchy for stream aggregation. It employs Multi-Level Queues (MLQs) spanning multiple memory levels with different characteristics to offer both high bandwidth and capacity. In doing so, larger stream aggregation problems can be supported at line-rate performance, outperforming existing competing solutions. Compared to designs with only on-chip memory, our approach supports 4 orders of magnitude larger problems. Compared to designs that use only DRAM, our design achieves up to 8x higher throughput. Finally, this thesis aims to alleviate the memory pressure due to the window-aggregation step. Although window updates can be supported efficiently using MLQs, frequent window aggregations remain a performance bottleneck. This thesis addresses this problem by introducing StreamZip, a dataflow stream aggregation engine that is able to compress the sliding windows. StreamZip deals with a number of data and control dependency challenges to integrate a compressor in the stream aggregation pipeline and alleviate the memory pressure posed by frequent aggregations. In doing so, StreamZip offers higher throughput as well as larger effective window capacity to support larger problems. StreamZip supports diverse compression algorithms, offering both lossless and lossy compression for fixed- as well as floating-point numbers. Compared to designs using MLQs, StreamZip's lossless and lossy designs achieve up to 7.5x and 22x higher throughput, while improving the effective memory capacity by up to 5x and 23x, respectively.
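
    As a point of reference for the terminology above, the sketch below gives a minimal software model of tuple-based single sliding-window aggregation (an illustration of the problem, not the thesis's FPGA design): every incoming tuple triggers a window update and a full window read for the aggregation, which is exactly the memory traffic that the proposed multi-level queues and StreamZip compression target. Class and parameter names are assumptions.

        from collections import defaultdict, deque

        class TupleWindowAggregator:
            """Tuple-based single sliding-window aggregation, per key."""

            def __init__(self, window_size, aggregate=sum):
                self.window_size = window_size
                self.aggregate = aggregate
                self.windows = defaultdict(deque)   # key -> sliding window

            def insert(self, key, value):
                window = self.windows[key]
                window.append(value)                # window update
                if len(window) > self.window_size:
                    window.popleft()                # evict the oldest tuple
                # Window aggregation: reads the whole window on every tuple,
                # which dominates memory bandwidth for large windows.
                return self.aggregate(window)

    For example, TupleWindowAggregator(window_size=1024).insert("sensor-7", 3.2) returns the current aggregate for that key after the update.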

    Scalable Data Analysis on MapReduce-based Systems

    Ph.D. (Doctor of Philosophy)