1,342 research outputs found

    Improving Utility of GPU in Accelerating Industrial Applications with User-centred Automatic Code Translation

    SMEs (small and medium-sized enterprises), particularly those whose business is focused on developing innovative products, are often limited by a major bottleneck: the speed of computation in many of their applications. Recent developments in GPUs (graphics processing units) have markedly increased their versatility across many computational areas. However, due to a lack of specialist GPU programming skills, this growth in GPU power has not been fully exploited in general SME applications by inexperienced users. Moreover, existing automatic CPU-to-GPU code translators are designed mainly for research purposes, have poor user interfaces, and are hard to use. Little attention has been paid to the applicability, usability and learnability of these tools for ordinary users. In this paper, we present GPSME, an online automated CPU-to-GPU source translation system that lets inexperienced users exploit GPU capability to accelerate general SME applications. The system designs and implements a directive programming model with a new kernel generation scheme and memory management hierarchy to optimize performance. A web-service based interface allows inexperienced users to invoke the automatic source translator easily and flexibly. Our experiments with non-expert GPU users in 4 SMEs show that the GPSME system can accelerate real-world applications by at least 4x and offers better applicability, usability and learnability than existing automatic CPU-to-GPU source translators.
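    For context, the sketch below illustrates the kind of output an automatic CPU-to-GPU translator of this sort typically produces; it is a generic example, not GPSME's actual directive syntax or generated code (which the abstract does not specify). A user-annotated CPU loop is turned into a CUDA kernel plus a host wrapper handling memory transfers.

```cuda
#include <cuda_runtime.h>

// Original CPU loop a user might mark for translation
// (directive syntax here is hypothetical):
//   #pragma translate_to_gpu
//   for (int i = 0; i < n; ++i) y[i] = a * x[i] + y[i];

// Kernel a translator would typically generate:
// one thread per loop iteration, with a bounds guard.
__global__ void saxpy_kernel(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

// Generated host-side wrapper: allocate device memory, copy inputs,
// launch the kernel, and copy the result back.
void saxpy_gpu(int n, float a, const float *x, float *y)
{
    float *d_x = nullptr, *d_y = nullptr;
    cudaMalloc(&d_x, n * sizeof(float));
    cudaMalloc(&d_y, n * sizeof(float));
    cudaMemcpy(d_x, x, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_y, y, n * sizeof(float), cudaMemcpyHostToDevice);

    int block = 256;
    int grid = (n + block - 1) / block;
    saxpy_kernel<<<grid, block>>>(n, a, d_x, d_y);

    cudaMemcpy(y, d_y, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(d_x);
    cudaFree(d_y);
}
```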

    Enhanced applicability of loop transformations


    A flexible and versatile studio for synchronized multi-view video recording

    In recent years, the convergence of Computer Vision and Computer Graphics has put forth new research areas that work on scene reconstruction from, and analysis of, multi-view video footage. In free-viewpoint video, for example, new views of a scene are generated in real time from an arbitrary viewpoint using a set of real multi-view input video streams. The analysis of real-world scenes from multi-view video to extract motion information or reflection models is another field of research that greatly benefits from high-quality input data. Building a recording setup for multi-view video involves great effort on both the hardware and the software side. The amount of image data to be processed is huge, a decent lighting and camera setup is essential for a naturalistic scene appearance and robust background subtraction, and the computing infrastructure has to enable real-time processing of the recorded material. This paper describes a recording setup for multi-view video acquisition that enables the synchronized recording of dynamic scenes from multiple camera positions under controlled conditions. The requirements for the room and their implementation in the separate components of the studio are described in detail. The efficiency and flexibility of the studio are demonstrated on the basis of the results that we obtain with a real-time 3D scene reconstruction system, a system for non-intrusive optical motion capture and a model-based free-viewpoint video system for human actors.
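    The abstract emphasizes that controlled lighting is needed for robust background subtraction. As a minimal illustration only (the paper's actual segmentation method is not specified here), the sketch below shows the simplest form of per-pixel background subtraction: thresholding the difference between a live frame and a pre-captured background plate, written as a CUDA kernel since such studios process frames in real time.

```cuda
#include <cuda_runtime.h>
#include <cstdint>

// Minimal per-pixel background subtraction: a pixel is marked as
// foreground when its grayscale difference from a pre-captured
// background plate exceeds a threshold. Production systems typically
// use more robust statistical background models; this only shows the
// basic operation.
__global__ void background_subtract(const uint8_t *frame,
                                    const uint8_t *background,
                                    uint8_t *mask,
                                    int num_pixels,
                                    int threshold)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < num_pixels) {
        int diff = abs((int)frame[i] - (int)background[i]);
        mask[i] = (diff > threshold) ? 255 : 0;
    }
}
```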

    Constructive Synthesis of Memory-Intensive Accelerators for FPGA From Nested Loop Kernels


    Instruction-set architecture synthesis for VLIW processors
