121 research outputs found

    Multipass communication systems for tiled processor architectures

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. Includes bibliographical references (p. 191-202). Multipass communication systems use multiple sets of parallel baseband receiver functions to balance communication data rates against available computation capability. This is achieved by spatially pipelining baseband functions across parallel resources to perform multiple processing passes on the same set of received values, allowing the system to convey multiple data sequences simultaneously over a single wireless link. The use of multiple passes mitigates the effect of data rate on receiver processing bottlenecks, making general-purpose processing elements viable for high-data-rate communication functions. The flexibility of general-purpose processing, in turn, allows the receiver composition to trade off resource usage against the required processing rate. For instance, a communication system distributed across 2 passes uses 2x the overall area, but halves the data rate handled by each pass and therefore halves the overall required processing rate, and hence the clock speed. Lowering the clock speed can also be leveraged to reduce power through voltage scaling and/or the use of higher-Vt devices. The characteristics of general-purpose parallel processors for communications processing are explored, as is the applicability of specific parallel designs to communications processing. In particular, an in-depth look is taken at the Raw processor's tiled architecture as a general-purpose parallel processor particularly well suited to portable communications processing. An example multipass system based on the 802.11a baseband, implemented on the Raw processor along with the accompanying hardware, is presented both as a proof of concept and as a means to explore some of the advantages and trade-offs of such a system. A bit-error-rate study shows this multipass system to be within a small fraction of a dB of the performance of an equivalent-data-rate single-pass system, demonstrating the viability of the multipass algorithm. In addition, the capability of tiled processors to maximize processing capability at the system block level, as well as at the system architectural level, is shown. Parallel implementations of two processing-intensive functions, the FFT and the Viterbi decoder, are presented: a parallelized 16-tile assembly-language FFT shows a 1,000x improvement, and a parallelized 48-tile assembly-language Viterbi decoder shows a 10,000x improvement, over corresponding serial C implementations. By Nathan Robert Shnidman. Ph.D.
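
    The area-versus-clock-rate trade-off sketched in the abstract can be illustrated with a small, hypothetical calculation (the scaling constants below are assumptions for illustration, not figures from the thesis): splitting the receiver across N passes replicates the baseband pipeline N times, but each copy runs at 1/N of the aggregate rate, which can in turn be traded for lower voltage and power.

        # A minimal sketch, assuming dynamic power ~ C * V^2 * f with V scaled
        # linearly with f; none of these constants come from the thesis.
        def multipass_tradeoff(total_rate_mbps, base_area, passes):
            per_pass_rate = total_rate_mbps / passes      # each pass carries 1/N of the data
            area = base_area * passes                     # N copies of the baseband pipeline
            relative_clock = 1.0 / passes                 # required processing rate per pass
            power_per_pass = relative_clock ** 3          # f * V^2 with V ~ f (a simplification)
            return per_pass_rate, area, relative_clock, passes * power_per_pass

        for n in (1, 2, 4):
            print(n, multipass_tradeoff(54.0, 1.0, n))    # e.g. 802.11a's 54 Mb/s aggregate rate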

    The Effect of Instruction Padding on SFI Overhead

    Software-based fault isolation (SFI) is a technique to isolate a potentially faulty or malicious software module from the rest of a system using instruction-level rewriting. SFI implementations on CISC architectures, including Google Native Client, use instruction padding to enforce an address layout invariant and restrict control flow. However, this padding decreases code density and imposes runtime overhead. We analyze this overhead, and show that it can be reduced by allowing some execution of overlapping instructions, as long as those overlapping instructions are still safe according to the original per-instruction policy. We implemented this change for both the 32-bit and 64-bit x86 versions of Native Client, and analyzed why the performance benefit is higher on 32-bit. The optimization leads to a consistent decrease in the number of instructions executed and savings averaging 8.6% in execution time (over compatible benchmarks from SPECint2006) for x86-32. We describe how to modify the validation algorithm to check the more permissive policy, and extend a machine-checked Coq proof to confirm that the system's security is preserved. Comment: NDSS Workshop on Binary Analysis Research, February 201
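
    The relaxed policy can be sketched as a toy validator (an illustration of the idea, not Native Client's actual algorithm; decode_at, is_safe_instruction, and insn.direct_targets are assumed helpers): instead of requiring padding so that branch targets always coincide with the linearly decoded instruction stream, it checks every reachable decoding, including overlapping ones, against the per-instruction policy.

        def validate_region(code_bytes, decode_at, is_safe_instruction):
            # Walk every statically reachable offset; overlapping decodings are
            # allowed as long as each decoded instruction is individually safe.
            # (A real SFI validator would also seed the worklist with all legal
            # indirect-branch targets, e.g. bundle starts in Native Client.)
            reachable, worklist = set(), [0]
            while worklist:
                off = worklist.pop()
                if off in reachable or off >= len(code_bytes):
                    continue
                reachable.add(off)
                length, insn = decode_at(code_bytes, off)
                if not is_safe_instruction(insn):
                    return False                      # any unsafe decoding rejects the module
                worklist.append(off + length)         # fall-through successor
                worklist.extend(insn.direct_targets)  # statically known jump targets
            return True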

    Energy Efficient Load Latency Tolerance: Single-Thread Performance for the Multi-Core Era

    Around 2003, power constraints caused single-thread performance growth to slow dramatically, and the multi-core era was born with an emphasis on explicitly parallel software. Continuing to grow single-thread performance is still important in the multi-core context, but it must be done in an energy-efficient way. One significant impediment to performance growth in both out-of-order and in-order processors is the long latency of last-level cache misses. Prior work introduced the idea of load latency tolerance: the ability to dynamically remove miss-dependent instructions from critical execution structures, continue execution under the miss, and re-execute the miss-dependent instructions after the miss returns. However, previously proposed designs were unable to improve performance in an energy-efficient way: they introduced too many new large, complex structures and re-executed too many instructions. This dissertation describes a new load latency tolerant design that is energy-efficient and applicable to both in-order and out-of-order cores. Key novel features include the formulation of slice re-execution as an alternative use of multi-threading support, efficient schemes for register and memory state management, and new pruning mechanisms that drastically reduce load latency tolerance's dynamic execution overheads. Area analysis shows that energy-efficient load latency tolerance increases the footprint of an out-of-order core by only a few percent, while cycle-level simulation shows that it significantly improves the performance of memory-bound programs. Energy-efficient load latency tolerance is more energy-efficient than, and synergistic with, existing performance techniques such as dynamic voltage and frequency scaling (DVFS).
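
    The core mechanism can be sketched in a few lines (a simplification over assumed instruction objects with sources, dest, execute, and write_back; not the dissertation's actual microarchitecture): miss-dependent instructions are drained into a slice buffer so they stop occupying critical structures, independent instructions execute under the miss, and only the deferred slice is re-executed when the data returns.

        def execute_with_latency_tolerance(instructions, missing_load, miss_data):
            slice_buffer, completed = [], []
            poisoned = {missing_load.dest}               # registers tainted by the miss
            for insn in instructions:
                if insn.sources & poisoned:              # depends (transitively) on the miss
                    poisoned.add(insn.dest)
                    slice_buffer.append(insn)            # defer instead of blocking issue
                else:
                    completed.append(insn.execute())     # continue under the miss
            missing_load.write_back(miss_data)           # miss data returns
            for insn in slice_buffer:                    # re-execute only the deferred slice
                completed.append(insn.execute())
            return completed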

    Queueing in the mist: Buffering and scheduling with limited knowledge

    Scheduling and managing queues with bounded buffers are among the most fundamental problems in computer networking. Traditionally, it is often assumed that all the properties of each packet are known immediately upon arrival. However, as traffic becomes increasingly heterogeneous and complex, such assumptions are in many cases invalid. In particular, in various scenarios, information about a packet's characteristics becomes available only after the packet has undergone some initial processing. In this work, we study the problem of managing queues with limited knowledge. We start by showing lower bounds on the competitive ratio of any algorithm in such settings. Next, we use the insight obtained from these bounds to identify several algorithmic concepts appropriate for the problem, and use these guidelines to design a concrete algorithmic framework. We analyze the performance of our proposed algorithm, and further show how it can be implemented in various settings, which differ in the type and nature of the unknown information. We validate our results and algorithmic approach with a simulation study that provides additional insight into our design principles in the face of limited knowledge.
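
    As a toy illustration of the setting (not the paper's actual algorithm), suppose each packet's processing requirement is hidden until it has received one cycle of processing; a simple policy admits packets while the buffer has room, spends a cycle revealing a packet's requirement, and then favors packets already known to need the least remaining work.

        import heapq
        from itertools import count

        def serve(arrivals, buffer_size):
            # arrivals[t] is a list of packets arriving at step t; each packet is
            # represented by its (hidden) total processing requirement in cycles.
            buffer, served, tie = [], 0, count()
            for new_packets in arrivals:
                for work in new_packets:
                    if len(buffer) < buffer_size:                    # admission; otherwise drop
                        heapq.heappush(buffer, (float("inf"), next(tie), work))
                if buffer:
                    _, _, remaining = heapq.heappop(buffer)
                    remaining -= 1                                   # one cycle of processing/probing
                    if remaining == 0:
                        served += 1                                  # packet completed and transmitted
                    else:                                            # requirement now revealed
                        heapq.heappush(buffer, (remaining, next(tie), remaining))
            return served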

    DCT Implementation on GPU

    There has been great progress in the field of graphics processors. Since the speed of conventional CPUs is no longer rising, designers are turning to multi-core, parallel processors. Because of their strength in parallel processing, GPUs are becoming more and more attractive for many applications. With the increasing demand for utilizing GPUs, there is a great need to develop operating systems that handle the GPU to its full capacity. GPUs offer a very efficient environment for many image processing applications. This thesis explores the processing power of GPUs for digital image compression using the discrete cosine transform (DCT).
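
    For reference, the block-wise 2-D DCT at the heart of such a compressor can be written compactly with NumPy (a CPU sketch for illustration; on a GPU each 8x8 block would be handled by an independent group of threads, which is what makes the transform attractive for GPU offload):

        import numpy as np

        def dct_matrix(n=8):
            # Orthonormal DCT-II basis: C[k, x] = sqrt(2/n) * cos(pi*(2x+1)*k/(2n)) for k > 0.
            k = np.arange(n)
            c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
            c[0, :] = np.sqrt(1.0 / n)
            return c

        def blockwise_dct(image, n=8):
            # Apply an n x n 2-D DCT-II to each block of a grayscale image whose
            # dimensions are multiples of n; every block is independent of the others.
            c = dct_matrix(n)
            out = np.empty(image.shape, dtype=np.float64)
            for i in range(0, image.shape[0], n):
                for j in range(0, image.shape[1], n):
                    block = image[i:i + n, j:j + n].astype(np.float64)
                    out[i:i + n, j:j + n] = c @ block @ c.T      # separable 2-D transform
            return out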

    Technical Design Report EuroGammaS proposal for the ELI-NP Gamma beam System

    The machine described in this document is an advanced source of Gamma rays of up to 20 MeV, based on Compton back-scattering, i.e. the collision of an intense, high-power laser beam with a high-brightness electron beam of maximum kinetic energy of about 720 MeV. It is fully equipped with collimation and characterization systems in order to generate, shape, and fully measure the physical characteristics of the produced Gamma-ray beam. The quality, i.e. phase-space density, of the two colliding beams will be such that the emitted Gamma-ray beam is characterized by an energy tunability, spectral density, bandwidth, polarization, divergence, and brilliance compatible with the requested performance of the ELI-NP user facility, to be built in Romania as the nuclear-physics-oriented pillar of the European Extreme Light Infrastructure. This document illustrates the Technical Design finally produced by the EuroGammaS Collaboration after a thorough investigation of the machine's expected performance within the constraints imposed by the ELI-NP tender for the Gamma Beam System (ELI-NP-GBS), in terms of available budget, deadlines for machine completion and performance achievement, and compatibility with the layout and characteristics of the planned civil engineering.
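
    The quoted 20 MeV figure is consistent with a back-of-the-envelope inverse Compton estimate (the laser photon energy below is an assumed illustrative value, not a EuroGammaS design parameter):

        # Maximum back-scattered photon energy for head-on inverse Compton scattering:
        #   E_max ~ 4 * gamma^2 * E_laser / (1 + 4 * gamma * E_laser / (m_e * c^2))
        M_E_C2_EV = 511_000.0                      # electron rest energy in eV

        def compton_edge_ev(electron_kinetic_mev, laser_photon_ev):
            gamma = (electron_kinetic_mev * 1e6 + M_E_C2_EV) / M_E_C2_EV
            recoil = 4 * gamma * laser_photon_ev / M_E_C2_EV
            return 4 * gamma**2 * laser_photon_ev / (1 + recoil)

        # 720 MeV electrons against a ~2.4 eV (green) laser photon give roughly 19 MeV.
        print(compton_edge_ev(720.0, 2.41) / 1e6, "MeV")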

    Development of computational methods for the analysis of proteomics and next generation sequencing data
