
    Investigation of parallel programming on heterogeneous multiprocessors

    Multi-core processors have become commonplace in modern commodity computers. Computationally intensive applications, like video processing, that previously ran only on specialized hardware are now common on home computers. However, the demand for more computing power is ever-increasing, and with the introduction of high-definition video, even more performance is desired. As an alternative to having multiple identical processor cores, heterogeneous multiprocessors have cores with different capabilities. This allows tasks to be processed on simple cores with specialized functionality. This simplicity promotes low power consumption, small die area, and low cost. However, dealing with heterogeneous cores increases the complexity of writing programs for the architecture, both because the cores have different capabilities and because some heterogeneous architectures lack shared memory. Without shared memory, accessing main memory requires explicit transfers to local memory. In this thesis, we consider two architectures, the STI Cell/B.E. and the Intel IXP2400, and evaluate parallelization strategies and performance for real-world problems. Our tests show promising throughput for some applications, and we propose a scheme for offloading computationally intensive parts of an existing application.
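
    As a concrete illustration of the explicit-transfer style of programming described above, the following is a minimal, architecture-neutral C sketch, not code from the thesis: memcpy stands in for the DMA engine (on the Cell, an SPE would issue mfc_get/mfc_put against its local store), and all names and sizes are illustrative.

```c
/* Double-buffered explicit transfers: without shared memory, a worker core
 * must copy each chunk of main memory into its small local store before
 * processing it, and copy the results back afterwards. */
#include <stdio.h>
#include <string.h>

#define LOCAL_STORE_CHUNK 4096           /* elements that fit in local memory */

static float main_memory[1 << 16];       /* the "remote" main memory */
static float local_store[2][LOCAL_STORE_CHUNK];   /* double buffer */

static void process_chunk(float *buf, size_t n)
{
    for (size_t i = 0; i < n; i++)       /* stand-in for real computation */
        buf[i] = buf[i] * 2.0f + 1.0f;
}

int main(void)
{
    size_t total = sizeof main_memory / sizeof main_memory[0];
    int cur = 0;

    /* Prefetch the first chunk ("DMA in"). */
    memcpy(local_store[cur], main_memory, LOCAL_STORE_CHUNK * sizeof(float));

    for (size_t off = 0; off < total; off += LOCAL_STORE_CHUNK) {
        size_t next = off + LOCAL_STORE_CHUNK;
        /* Fetch the next chunk into the other buffer; on real hardware
         * this DMA would overlap with the computation below. */
        if (next < total)
            memcpy(local_store[1 - cur], main_memory + next,
                   LOCAL_STORE_CHUNK * sizeof(float));

        process_chunk(local_store[cur], LOCAL_STORE_CHUNK);

        /* Write results back ("DMA out"), then swap buffers. */
        memcpy(main_memory + off, local_store[cur],
               LOCAL_STORE_CHUNK * sizeof(float));
        cur = 1 - cur;
    }
    printf("processed %zu elements in %zu-element chunks\n",
           total, (size_t)LOCAL_STORE_CHUNK);
    return 0;
}
```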

    Cumulative index to NASA Tech Briefs, 1986-1990, volumes 10-14

    Tech Briefs are short announcements of new technology derived from the R&D activities of the National Aeronautics and Space Administration. These briefs emphasize information considered likely to be transferable across industrial, regional, or disciplinary lines and are issued to encourage commercial application. This cumulative index of Tech Briefs contains abstracts and four indexes (subject, personal author, originating center, and Tech Brief number) and covers the period 1986 to 1990. The abstract section is organized by the following subject categories: electronic components and circuits, electronic systems, physical sciences, materials, computer programs, life sciences, mechanics, machinery, fabrication technology, and mathematics and information sciences.

    Algorithm/Architecture Co-Exploration of Visual Computing: Overview and Future Perspectives

    Concurrently exploring both algorithmic and architectural optimizations is a new design paradigm. This survey paper addresses the latest research and future perspectives on the simultaneous development of video coding, processing, and computing algorithms with emerging platforms that have multiple cores and reconfigurable architectures. As the algorithms in forthcoming visual systems become increasingly complex, many applications must offer different profiles with different levels of performance. Hence, with the expectation that the visual experience will keep improving, it is critical that advanced platforms provide higher performance, better flexibility, and lower power consumption. To achieve these goals, algorithm and architecture co-design is essential for characterizing the algorithmic complexity that drives optimization of the target architecture. This paper argues that seamlessly weaving together the previously autonomous development of visual computing algorithms and of multicore or reconfigurable architectures will inevitably become the leading trend in the future of video technology.

    Distributed Computing with the Cell Broadband Engine

    The rapid improvement in the availability of commodity high-performance components has resulted in a proliferation of networked devices, making scalable computing clusters the standard platform for many high-performance and large-scale applications. However, parallelizing applications for such distributed environments is a challenging task, requiring explicit management of concurrency and data locality. While many frameworks and platforms exist to assist with this process, such as Google’s MapReduce, Microsoft’s Dryad and Azure, Yahoo’s Pig Latin programming language, and the Condor framework, they are usually targeted at off-line batch processing of large quantities of data rather than real-time offloading of compute-intensive tasks. Moreover, MapReduce, Dryad, and Pig Latin may not be suitable for all application domains, due to their inability to model branching and iterative algorithms. In this thesis, we present a design for a framework able to accelerate applications by offloading compute-intensive tasks to a heterogeneous distributed environment, and provide a prototype implementation for the Cell Broadband Engine. We evaluate the framework’s performance and scalability, and propose several future enhancements to further increase performance. Our results show that compute-intensive applications that allow for high numbers of concurrent jobs fit our framework well and exhibit good scalability.
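
    To make the offloading idea concrete, the following pthreads sketch in C uses worker threads to stand in for remote Cell nodes and a shared counter as the job queue; it illustrates only the generic job/worker structure and makes no claim to reflect the thesis's actual framework design or network transport.

```c
/* Generic job-offloading skeleton: compute-intensive jobs are pulled from a
 * shared queue by workers that stand in for remote nodes.
 * Build with: cc offload.c -lpthread */
#include <pthread.h>
#include <stdio.h>

#define NUM_JOBS    32
#define NUM_WORKERS 4

static int next_job = 0;                 /* index of next unclaimed job */
static double results[NUM_JOBS];
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static double compute(int job)           /* stand-in for a compute-heavy task */
{
    double acc = 0.0;
    for (int i = 1; i <= 1000000; i++)
        acc += (double)job / i;
    return acc;
}

static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        int job = next_job < NUM_JOBS ? next_job++ : -1;
        pthread_mutex_unlock(&lock);
        if (job < 0)                     /* queue drained: worker exits */
            return NULL;
        results[job] = compute(job);     /* "offloaded" execution */
    }
}

int main(void)
{
    pthread_t tid[NUM_WORKERS];
    for (int w = 0; w < NUM_WORKERS; w++)
        pthread_create(&tid[w], NULL, worker, NULL);
    for (int w = 0; w < NUM_WORKERS; w++)
        pthread_join(tid[w], NULL);
    printf("job 0 -> %f, job %d -> %f\n",
           results[0], NUM_JOBS - 1, results[NUM_JOBS - 1]);
    return 0;
}
```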

    Motion Detection in Low Resolution Grayscale Videos Using Fast Normalized Cross Correlation on GP-GPU

    Motion estimation (ME) has been widely used in many computer vision applications, such as object tracking, object detection, pattern recognition, and video compression. The most popular block-based similarity measures are the sum of absolute differences (SAD), the sum of squared differences (SSD), and the normalized cross correlation (NCC). The similarity measure obtained using NCC is more robust to illumination changes than SAD and SSD. However, NCC is computationally expensive, and applying it with a full (exhaustive) search further increases the required computation time. A more efficient way of calculating NCC is to pre-compute sum tables for the normalization, an approach referred to as fast NCC (FCC). In this paper we propose a real-time implementation of the full-search FCC algorithm for grayscale videos using NVIDIA’s Compute Unified Device Architecture (CUDA). We present fine-grained optimization techniques for fully exploiting the computational capacity of CUDA. Novel parallelization strategies for extracting data parallelism substantially reduce the computation time of exhaustive FCC. We show that by efficiently utilizing the global, shared, and texture memories available on CUDA, we obtain a speedup on the order of 10x over the sequential implementation of FCC.
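
    To make the sum-table idea concrete, here is a compact CPU-only C sketch of Lewis-style fast NCC: integral images of the frame and of its square make each search window's mean and variance O(1), so only the numerator needs a per-pixel loop. The paper's CUDA kernels are not reproduced, and all sizes and data below are illustrative.

```c
/* Fast NCC via sum tables: the per-window normalization terms come from
 * integral images; the template is made zero-mean so the numerator reduces
 * to a plain correlation of the frame with the template. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define W  64            /* frame width   */
#define H  64            /* frame height  */
#define TW 8             /* template width  */
#define TH 8             /* template height */

/* s[y][x] holds the sum of img (or img^2 if sq) over [0,x) x [0,y). */
static void integral(const float *img, double *s, int w, int h, int sq)
{
    for (int y = 0; y <= h; y++)
        for (int x = 0; x <= w; x++) {
            if (x == 0 || y == 0) { s[y * (w + 1) + x] = 0.0; continue; }
            double v = img[(y - 1) * w + (x - 1)];
            if (sq) v *= v;
            s[y * (w + 1) + x] = v
                + s[(y - 1) * (w + 1) + x] + s[y * (w + 1) + (x - 1)]
                - s[(y - 1) * (w + 1) + (x - 1)];
        }
}

/* O(1) sum over the rectangle [x,x+rw) x [y,y+rh). */
static double rect_sum(const double *s, int w, int x, int y, int rw, int rh)
{
    return s[(y + rh) * (w + 1) + (x + rw)] - s[y * (w + 1) + (x + rw)]
         - s[(y + rh) * (w + 1) + x]        + s[y * (w + 1) + x];
}

int main(void)
{
    static float img[W * H], tpl[TW * TH];
    static double s1[(W + 1) * (H + 1)], s2[(W + 1) * (H + 1)];

    for (int i = 0; i < W * H; i++) img[i] = (float)(rand() % 256);
    /* Cut the template out of the frame at (10,10) so the best match is known. */
    for (int i = 0; i < TW * TH; i++)
        tpl[i] = img[(i / TW + 10) * W + (i % TW) + 10];

    /* Zero-mean template: the frame-mean term of the numerator then cancels. */
    double tmean = 0.0, tnorm = 0.0;
    for (int i = 0; i < TW * TH; i++) tmean += tpl[i];
    tmean /= TW * TH;
    for (int i = 0; i < TW * TH; i++) { tpl[i] -= (float)tmean; tnorm += tpl[i] * tpl[i]; }

    integral(img, s1, W, H, 0);
    integral(img, s2, W, H, 1);

    double best = -2.0; int bx = 0, by = 0;
    for (int v = 0; v <= H - TH; v++)
        for (int u = 0; u <= W - TW; u++) {
            double num = 0.0;
            for (int y = 0; y < TH; y++)            /* numerator: direct loop */
                for (int x = 0; x < TW; x++)
                    num += img[(v + y) * W + (u + x)] * tpl[y * TW + x];
            double sf   = rect_sum(s1, W, u, v, TW, TH);  /* O(1) from table */
            double sff  = rect_sum(s2, W, u, v, TW, TH);
            double fvar = sff - sf * sf / (TW * TH);
            double ncc  = num / sqrt(fvar * tnorm + 1e-12);
            if (ncc > best) { best = ncc; bx = u; by = v; }
        }
    printf("best NCC %.3f at (%d,%d)\n", best, bx, by);  /* expect (10,10) */
    return 0;
}
```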

    System-on-chip Computing and Interconnection Architectures for Telecommunications and Signal Processing

    This dissertation proposes novel architectures and design techniques targeting SoC building blocks for telecommunications and signal processing applications. Hardware implementation of Low-Density Parity-Check (LDPC) decoders is approached at both the algorithmic and the architectural level. LDPC codes are a promising coding scheme for future communication standards due to their outstanding error correction performance. This work proposes a methodology for analyzing the effects of finite-precision arithmetic on error correction performance and hardware complexity, and the methodology is employed throughout the co-design of the decoder. First, a low-complexity check node based on the P-output decoding principle is designed and characterized on a CMOS standard-cell library. Results demonstrate an implementation loss below 0.2 dB down to a BER of 10^{-8} and complexity savings of up to 59% with respect to other recent works. High-throughput and low-latency issues are addressed with modified single-phase decoding schedules. A new "memory-aware" schedule is proposed that requires as little as 20% of the memory of traditional two-phase flooding decoding; additionally, throughput is doubled and logic complexity reduced by 12%. These advantages are traded off against error correction performance, making the solution attractive only for long codes, such as those adopted in the DVB-S2 standard. The "layered decoding" principle is extended to codes not specifically conceived for this technique. The proposed architectures exhibit complexity savings on the order of 40% in both area and power consumption, while the implementation loss is smaller than 0.05 dB.

    Most modern communication standards employ Orthogonal Frequency Division Multiplexing (OFDM) as part of their physical layer. The core of OFDM is the Fast Fourier Transform (FFT) and its inverse, which handle symbol (de)modulation. Requirements on throughput and energy efficiency call for hardware FFT implementations, while the ubiquity of the FFT suggests the design of parametric, re-configurable, and re-usable IP hardware macrocells. In this context, this thesis describes an FFT/IFFT core compiler particularly suited for the implementation of OFDM communication systems. The tool employs an accuracy-driven configuration engine which automatically profiles the internal arithmetic and generates a core with minimum operand bit-widths, and thus minimum circuit complexity. The engine performs a closed-loop optimization over three internal arithmetic models (fixed-point, block floating-point, and convergent block floating-point), using the numerical accuracy budget given by the user as a reference point. The flexibility and re-usability of the proposed macrocell are illustrated through several case studies encompassing all current state-of-the-art OFDM communication standards (WLAN, WMAN, xDSL, DVB-T/H, DAB, and UWB). Implementation results are presented for two deep sub-micron standard-cell libraries (65 and 90 nm) and for commercially available FPGA devices. Compared with other FFT core compilers, the proposed environment produces macrocells with lower circuit complexity at the same system-level performance (throughput, transform size, and numerical accuracy).

    The final part of this dissertation focuses on the Network-on-Chip (NoC) design paradigm, whose goal is to build scalable communication infrastructures connecting hundreds of cores. A low-complexity link architecture for mesochronous on-chip communication is discussed. The link enables looser skew constraints in clock tree synthesis, frequency speed-up, reduced power consumption, and faster back-end turnarounds. The proposed architecture reaches a maximum clock frequency of 1 GHz on a 65 nm low-leakage CMOS standard-cell library. In a complex test case with a full-blown NoC infrastructure, the link overhead is only 3% of chip area and 0.5% of leakage power consumption. Finally, a new methodology, named metacoding, is proposed. Metacoding generates correct-by-construction, technology-independent RTL codebases for NoC building blocks. The RTL coding phase is abstracted and modeled with an Object Oriented framework integrated within a commercial tool for IP packaging (Synopsys CoreTools suite). Compared with traditional coding styles based on pre-processor directives, metacoding produces 65% smaller codebases and reduces the number of configurations to verify by up to three orders of magnitude.
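
    For readers unfamiliar with LDPC decoding, the C sketch below shows a standard min-sum check-node update, the basic operation that low-complexity check-node designs and layered schedules such as those above refine; it is a generic textbook formulation, not the dissertation's P-output architecture.

```c
/* Min-sum check-node update: for each of the dc incoming variable-to-check
 * messages, the outgoing message is the product of the signs of all OTHER
 * inputs times the minimum magnitude of all OTHER inputs, which only
 * requires tracking the two smallest magnitudes. */
#include <math.h>
#include <stdio.h>

static void check_node_minsum(const float *in, float *out, int dc)
{
    float min1 = INFINITY, min2 = INFINITY;   /* smallest, second smallest */
    int sign = 1, argmin = -1;                /* product of all input signs */
    for (int i = 0; i < dc; i++) {
        float mag = fabsf(in[i]);
        if (in[i] < 0) sign = -sign;
        if (mag < min1) { min2 = min1; min1 = mag; argmin = i; }
        else if (mag < min2) min2 = mag;
    }
    for (int i = 0; i < dc; i++) {
        int s = (in[i] < 0) ? -sign : sign;   /* exclude own sign */
        out[i] = s * (i == argmin ? min2 : min1);  /* exclude own magnitude */
    }
}

int main(void)
{
    float in[4] = { 1.5f, -0.3f, 2.0f, -0.8f }, out[4];
    check_node_minsum(in, out, 4);
    for (int i = 0; i < 4; i++)
        printf("out[%d] = %+.2f\n", i, out[i]);
    return 0;
}
```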

    Performance Analysis of Parallel Execution of H.264 Encoder on the Cell Processor


    Aeronautical engineering: A continuing bibliography with indexes (supplement 303)

    This bibliography lists 211 reports, articles, and other documents introduced into the NASA scientific and technical information database. Subject coverage includes: design, construction, and testing of aircraft and aircraft engines; aircraft components, equipment, and systems; ground support systems; and theoretical and applied aspects of aerodynamics and general fluid dynamics.

    Efficient rendering of large 3-D and 4-D scalar fields

    Rendering volumetric data, a compute- and communication-intensive and highly parallel application, is representative of future workloads for desktop computers. Interactively rendering volumetric data has long been a challenging problem due to its high computational and communication requirements. With the consistent trend toward higher-resolution data, it has remained difficult despite the continuous increase in processing power, because of the widening performance gap between computation and communication. On the other hand, the multi-core trend in PC computational units, characterized by parallelism and heterogeneity, provides both opportunities and challenges. While the new on-chip parallel architectures offer the potential for extremely high performance, exploiting them requires extensive changes to previous algorithms. In this dissertation, we develop new methods and techniques to support interactive rendering of large volumetric data. In particular, we present a novel method for laying out data on disk to efficiently perform out-of-core axis-aligned slicing of large multidimensional scalar fields. We also present a new method for efficiently building an out-of-core indexing structure for n-dimensional volumetric data. We then describe a streaming model for efficiently implementing volume ray casting in a heterogeneous compute-resource environment, and describe implementations of the model on the SONY/TOSHIBA/IBM Cell Broadband Engine and on the NVIDIA CUDA architecture. Our results show that our out-of-core techniques significantly reduce communication bandwidth requirements, and that our streaming model effectively exploits the strengths of such heterogeneous parallel compute environments for volume rendering. In all cases, we achieve scalability and load balancing while hiding memory latency.
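
    The disk-layout idea can be illustrated with a generic bricked (blocked) volume layout in C: storing the volume as B×B×B bricks keeps every axis-aligned slice inside a thin band of bricks, so reading a slice touches only a fraction of the file. This is a common technique shown for illustration, not the dissertation's specific layout.

```c
/* Brick-major volume layout: bricks are laid out linearly in the file and
 * voxels inside a brick are contiguous, so an axis-aligned slice maps to a
 * narrow range of file offsets instead of strides across the whole volume. */
#include <stdio.h>
#include <stdint.h>

#define B  32                    /* brick edge length, in voxels        */
#define NX 256                   /* volume dimensions (multiples of B)  */
#define NY 256
#define NZ 256

/* File offset (in voxels) of voxel (x,y,z) under the brick-major layout. */
static uint64_t brick_offset(int x, int y, int z)
{
    uint64_t bx = x / B, by = y / B, bz = z / B;        /* brick coords  */
    uint64_t brick = (bz * (NY / B) + by) * (NX / B) + bx;
    uint64_t lx = x % B, ly = y % B, lz = z % B;        /* intra-brick   */
    return brick * (uint64_t)B * B * B + (lz * B + ly) * B + lx;
}

int main(void)
{
    /* Every voxel of slice z = 100 lies in bricks with bz = 100/B = 3,
     * i.e. a 1/(NZ/B) = 1/8 band of the file, rather than being strided
     * across the entire volume as in a plain x-fastest layout. */
    uint64_t lo = UINT64_MAX, hi = 0;
    for (int y = 0; y < NY; y++)
        for (int x = 0; x < NX; x++) {
            uint64_t off = brick_offset(x, y, 100);
            if (off < lo) lo = off;
            if (off > hi) hi = off;
        }
    printf("slice z=100 spans voxel offsets [%llu, %llu] of %llu total\n",
           (unsigned long long)lo, (unsigned long long)hi,
           (unsigned long long)NX * NY * NZ);
    return 0;
}
```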