3 research outputs found

    High Performance Information Filtering on Many-core Processors

    The increasing amount of information accessible to a user digitally makes search difficult, time consuming and unsatisfactory. This has led to the development of active information filtering (recommendation) systems that learn a user's preferences and filter out the most relevant information using sophisticated machine learning techniques. To be scalable and effective, such systems are currently deployed on cloud infrastructures consisting of general-purpose computers. The emergence of many-core processors as compute nodes in cloud infrastructures necessitates a revisit of the computational model, run-time, memory hierarchy and I/O pipelines to fully exploit the concurrency available within these processors. This research proposes algorithms and architectures to enhance the performance of content-based (CB) and collaborative (CF) information filtering on many-core processors. To validate these methods, we use Nvidia's Tesla, Fermi and Kepler GPUs and Intel's experimental single-chip cloud computer (SCC) as the target platforms. We observe a ~290x speedup and up to 97% energy savings over conventional sequential approaches. Finally, we propose and validate a novel reconfigurable SoC architecture that combines the best features of GPUs and the SCC; it has been validated to show a ~98Kx speedup over the SCC and a ~15Kx speedup over the GPU.
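    To make the kind of fine-grained parallelism this abstract describes concrete, the sketch below parallelizes one building block of collaborative filtering, the user-user similarity computation, with one CUDA thread per user pair. It is a minimal illustrative example, not the dissertation's actual design: the kernel, the dense data layout and the toy ratings matrix are all assumptions made here.

```cuda
// Hypothetical sketch: pairwise user cosine similarity for CF on a GPU.
// One thread computes the similarity of one (user a, user b) pair.
#include <cstdio>
#include <cmath>
#include <cuda_runtime.h>

#define NUM_USERS 4
#define NUM_ITEMS 8

__global__ void userCosineSim(const float* ratings, float* sim,
                              int nUsers, int nItems)
{
    int a = blockIdx.y * blockDim.y + threadIdx.y;
    int b = blockIdx.x * blockDim.x + threadIdx.x;
    if (a >= nUsers || b >= nUsers) return;

    // Dot product and norms over the two users' rating vectors
    // (0 is treated as "unrated" and contributes nothing).
    float dot = 0.f, na = 0.f, nb = 0.f;
    for (int i = 0; i < nItems; ++i) {
        float ra = ratings[a * nItems + i];
        float rb = ratings[b * nItems + i];
        dot += ra * rb;
        na  += ra * ra;
        nb  += rb * rb;
    }
    sim[a * nUsers + b] =
        (na > 0.f && nb > 0.f) ? dot / (sqrtf(na) * sqrtf(nb)) : 0.f;
}

int main()
{
    // Toy users-by-items rating matrix, illustrative values only.
    float h_ratings[NUM_USERS * NUM_ITEMS] = {
        5,3,0,1, 4,0,0,2,
        4,0,0,1, 5,1,0,2,
        1,1,0,5, 0,5,4,0,
        0,0,5,4, 0,4,5,0,
    };
    float h_sim[NUM_USERS * NUM_USERS];

    float *d_ratings, *d_sim;
    cudaMalloc(&d_ratings, sizeof(h_ratings));
    cudaMalloc(&d_sim, sizeof(h_sim));
    cudaMemcpy(d_ratings, h_ratings, sizeof(h_ratings),
               cudaMemcpyHostToDevice);

    dim3 block(16, 16);
    dim3 grid((NUM_USERS + 15) / 16, (NUM_USERS + 15) / 16);
    userCosineSim<<<grid, block>>>(d_ratings, d_sim, NUM_USERS, NUM_ITEMS);
    cudaMemcpy(h_sim, d_sim, sizeof(h_sim), cudaMemcpyDeviceToHost);

    for (int a = 0; a < NUM_USERS; ++a) {
        for (int b = 0; b < NUM_USERS; ++b)
            printf("%5.2f ", h_sim[a * NUM_USERS + b]);
        printf("\n");
    }
    cudaFree(d_ratings);
    cudaFree(d_sim);
    return 0;
}
```

    Since every user pair is independent, the similarity matrix maps naturally onto thousands of GPU threads, which is the property the large speedups over sequential CPU code rest on; a real system would use a sparse ratings layout rather than the dense matrix assumed here.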

    Hardware Architecture for Semantic Comparison

    Semantic Routed Networks provide a superior infrastructure for complex search engines. In a Semantic Routed Network (SRN), the routers are the critical component, and semantic comparison is their key computation. As the amount of information available on the Internet grows, the speed and efficiency with which information can be retrieved for the user become important. Most current search engines scale to meet the growing demand by deploying large data centers of general-purpose computers that consume many megawatts of power. Reducing the power consumption of these data centers while providing better performance would significantly reduce operating costs. Performing operations in parallel is a key optimization step for better performance on general-purpose CPUs; current parallelization techniques rely on multi-core architectures with multiple thread-handling capabilities, but these coarse-grained approaches carry considerable resource-management overhead and provide only sub-linear speedup. This dissertation proposes techniques towards a highly parallel, power-efficient architecture that performs semantic comparison as its core activity. Hardware-centric parallel algorithms have been developed to populate the required data structures and then compute semantic similarity. The performance of the proposed design is further enhanced by a pipelined architecture. The proposed algorithms were also implemented on two contemporary platforms, Nvidia CUDA GPUs and an FPGA, for performance comparison, and a semantic benchmark was created to validate the designs. A dedicated semantic comparator is shown to deliver significantly better performance than the other platforms: the proposed hardware semantic comparison architecture achieves speedups of up to 10^5 while reducing power consumption by 80% compared to traditional computing platforms. Future research directions, including better power optimization, architecting the complete semantic router and using the semantic benchmark for SRN research, are also discussed.
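    To illustrate what "semantic comparison as the core activity" can look like in massively parallel form, the sketch below scores a query against many documents at once, one CUDA thread per document. The encoding (64-bit concept bitmasks) and the scoring function (Jaccard similarity) are assumptions chosen for brevity here, not the comparator the dissertation actually builds.

```cuda
// Hypothetical sketch: data-parallel semantic comparison.
// Each document's concepts are a 64-bit bitmask; one thread scores
// one document against the query via Jaccard similarity.
#include <cstdio>
#include <cstdint>
#include <cuda_runtime.h>

#define NUM_DOCS 6

__global__ void jaccardScore(uint64_t query, const uint64_t* docs,
                             float* score, int nDocs)
{
    int d = blockIdx.x * blockDim.x + threadIdx.x;
    if (d >= nDocs) return;
    int inter = __popcll(query & docs[d]);  // shared concepts
    int uni   = __popcll(query | docs[d]);  // all concepts in either
    score[d] = uni ? (float)inter / uni : 0.f;
}

int main()
{
    // Toy concept bitmasks, illustrative values only.
    uint64_t h_docs[NUM_DOCS] = {0x0F, 0xF0, 0x3C, 0xFF, 0x01, 0x0E};
    uint64_t query = 0x0F;  // query mentions concepts 0..3
    float h_score[NUM_DOCS];

    uint64_t* d_docs;
    float* d_score;
    cudaMalloc(&d_docs, sizeof(h_docs));
    cudaMalloc(&d_score, sizeof(h_score));
    cudaMemcpy(d_docs, h_docs, sizeof(h_docs), cudaMemcpyHostToDevice);

    jaccardScore<<<1, 64>>>(query, d_docs, d_score, NUM_DOCS);
    cudaMemcpy(h_score, d_score, sizeof(h_score), cudaMemcpyDeviceToHost);

    for (int d = 0; d < NUM_DOCS; ++d)
        printf("doc %d: %.2f\n", d, h_score[d]);
    cudaFree(d_docs);
    cudaFree(d_score);
    return 0;
}
```

    Because each comparison reduces to a few bitwise operations and population counts with no cross-thread dependencies, the same structure maps directly onto a pipelined hardware comparator, which is why a dedicated circuit can outrun general-purpose platforms by such large margins.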

    Parallel Processor Core for Semantic Search Engines

    No full text