20 research outputs found

    New benchmarking methodology and programming model for big data processing

    Big data processing is becoming a reality in numerous real-world applications. With the emergence of new data-intensive technologies and increasing amounts of data, new computing concepts are needed. The integration of big data producing technologies, such as wireless sensor networks, the Internet of Things, and cloud computing, into cyber-physical systems is reducing the time available to find appropriate solutions. This paper presents one possible solution for the coming exascale big data processing: a data flow computing concept. The performance of data flow systems that process big data should not be measured with the measures defined for the prevailing control flow systems. A new benchmarking methodology is proposed, which integrates the speed, area, and power needed to execute a task. Computer rankings would look different if the new benchmarking methodology were used; data flow systems would outperform control flow systems. This statement is backed by recent results from implementations of specialized algorithms and applications in data flow systems, which show considerable speedups, space savings, and power reductions compared with implementations of the same algorithms in control flow computers. In our view, the next step in the development of data flow computing should be a move from specialized to more general algorithms and applications.
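    The abstract does not spell out the exact form of the proposed metric. Purely as an illustrative sketch of the idea of ranking machines by speed, area, and power together rather than by raw execution time, the snippet below computes a hypothetical composite score; the class names, numbers, and multiplicative weighting are assumptions, not the paper's methodology.

```python
# Hypothetical composite benchmark score combining speed, area and power.
# This is NOT the paper's methodology; it only illustrates ranking machines
# by more than raw execution time (smaller score = better).
from dataclasses import dataclass

@dataclass
class TaskRun:
    exec_time_s: float   # wall-clock time to complete the benchmark task
    area_mm2: float      # silicon area occupied by the configuration used
    power_w: float       # average power drawn while executing the task

def composite_score(run: TaskRun) -> float:
    """Execution time weighted by the area and energy it cost."""
    energy_j = run.power_w * run.exec_time_s
    return run.exec_time_s * run.area_mm2 * energy_j

# Example: a configuration that is only moderately faster can still win
# decisively once area and energy are part of the score.
control_flow = TaskRun(exec_time_s=10.0, area_mm2=600.0, power_w=150.0)
data_flow    = TaskRun(exec_time_s=4.0,  area_mm2=450.0, power_w=40.0)
print(composite_score(control_flow) > composite_score(data_flow))  # True
```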

    The VINEYARD Approach: Versatile, Integrated, Accelerator-Based, Heterogeneous Data Centres.

    Emerging web applications like cloud computing, Big Data and social networks have created the need for powerful data centres hosting hundreds of thousands of servers. Currently, these data centres are based on general-purpose processors that provide high flexibility but lack the energy efficiency of customized accelerators. VINEYARD aims to develop an integrated platform for energy-efficient data centres based on new servers with novel, coarse-grain and fine-grain, programmable hardware accelerators. It will also build a high-level programming framework that allows end-users to seamlessly utilize these accelerators in heterogeneous computing systems by employing typical data-centre programming frameworks (e.g. MapReduce, Storm, Spark, etc.). This programming framework will further allow the hardware accelerators to be swapped in and out of the heterogeneous infrastructure so as to offer high flexibility and energy efficiency. VINEYARD will foster the expansion of the soft-IP core industry, currently limited to embedded systems, into the data-centre market. VINEYARD plans to demonstrate the advantages of its approach in three real use cases: (a) a bio-informatics application for high-accuracy brain modeling, (b) two critical financial applications, and (c) a big-data analysis application.

    EXPERIMENTAL COMPARISON OF MATRIX ALGORITHMS FOR DATAFLOW COMPUTER ARCHITECTURE

    In this paper we turn our attention to several algorithms for the dataflow computing paradigm, where dataflow computation is used to augment classical control-flow computation and thereby obtain an accelerated algorithm. Our main goal is to experimentally explore the various dataflow techniques and features that enable such acceleration. Our focus is on resolving one of the most important challenges when designing a dataflow algorithm: determining the best possible data choreography in a given context. To mitigate this challenge, we systematically enumerate and present the possible techniques for various data choreographies. In particular, we focus on algorithms that use matrices and vectors as the underlying data structure. We begin with simple algorithms such as matrix and vector multiplication and the evaluation of polynomials, and proceed to more advanced ones such as the simplex algorithm for solving linear programs. To evaluate the algorithms, we compare their running times as well as their dataflow resource consumption.
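    As a loose illustration of the data-choreography concern discussed above (not code from the paper), the sketch below contrasts two ways of organising a matrix-vector product: streaming one element at a time while re-fetching the vector, versus keeping the vector in fast memory and streaming whole rows against it. The function names and buffering scheme are assumptions.

```python
# Illustrative sketch (not from the paper): two data choreographies for y = A @ x.
import numpy as np

def mv_stream_elements(A: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Stream A element by element; every element of x is re-fetched."""
    n, m = A.shape
    y = np.zeros(n)
    for i in range(n):
        for j in range(m):          # one multiply-accumulate per "tick"
            y[i] += A[i, j] * x[j]
    return y

def mv_stream_rows(A: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Keep x in fast (on-chip) memory and stream one full row per "tick"."""
    x_onchip = x.copy()             # loaded once, reused for every row
    return np.array([row @ x_onchip for row in A])

A = np.random.rand(4, 3)
x = np.random.rand(3)
assert np.allclose(mv_stream_elements(A, x), mv_stream_rows(A, x))
```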

    Accelerating the execution of time consuming software applications by configuring special hardware during the program execution on multiprocessor computers

    In contrast to control-flow computer architectures, whose processors are capable of executing all instructions defined by the architecture while each processor executes only up to a few instructions at a time, hardware dataflow architectures are configured by spatially laying out components, each capable of executing only the single instruction it is intended for. Computation then amounts to data flowing through this hardware. The main characteristics of this architecture are higher data throughput and reduced power consumption. Some computationally demanding applications spend most of their execution time iterating over the same set of instructions. Although hardware dataflow architectures have existed for decades, technology limitations meant they only recently became usable on an equal footing with control-flow computers, so the problem of scheduling jobs between dataflow hardware and conventional processors is becoming increasingly important. If the loop iterations are mutually independent, or can be transformed into such a form, their execution is well suited to reconfigurable hardware and the dataflow paradigm. This thesis describes existing methods, and proposes new ones, for scheduling job execution on such architectures in order to improve performance, noting that only some applications are suitable for acceleration with reconfigurable hardware and the dataflow paradigm. Sharing the reconfigurable hardware among conventional processors in both time and space is proposed. Scheduling jobs on this architecture belongs to the class of NP problems and scheduling time is considered an overhead, so the algorithms use heuristics and search possible combinations of jobs only up to an appropriate depth. The results confirm that this architecture can reduce total execution time and reveal the conditions under which acceleration is possible.
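    The thesis abstract does not specify its heuristics. Purely as a hedged sketch of the kind of decision such a scheduler must make, the snippet below shows one simple greedy policy that assigns each job to the conventional processors or to the dataflow hardware based on estimated finish times, including a reconfiguration cost when the required hardware configuration is not already loaded. All names, estimates, and the policy itself are hypothetical, not the thesis's algorithms.

```python
# Hypothetical greedy scheduler sketch: assign each job to CPU or dataflow
# hardware (DFE) based on estimated finish times, charging a reconfiguration
# cost whenever the DFE must load a different hardware configuration.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    cpu_time: float      # estimated run time on conventional processors
    dfe_time: float      # estimated run time on the dataflow engine
    config: str          # hardware configuration the job needs

def schedule(jobs, reconfig_cost=5.0):
    cpu_free, dfe_free, loaded = 0.0, 0.0, None
    plan = []
    for job in jobs:
        dfe_penalty = 0.0 if loaded == job.config else reconfig_cost
        cpu_finish = cpu_free + job.cpu_time
        dfe_finish = dfe_free + dfe_penalty + job.dfe_time
        if dfe_finish < cpu_finish:          # pick the earlier finish time
            dfe_free, loaded = dfe_finish, job.config
            plan.append((job.name, "DFE"))
        else:
            cpu_free = cpu_finish
            plan.append((job.name, "CPU"))
    return plan, max(cpu_free, dfe_free)     # assignment and makespan

jobs = [Job("fft", 9.0, 2.0, "fft.bit"), Job("sort", 3.0, 4.0, "sort.bit"),
        Job("fft2", 9.0, 2.0, "fft.bit")]
print(schedule(jobs))
```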

    An empirical evaluation of High-Level Synthesis languages and tools for database acceleration

    High-Level Synthesis (HLS) languages and tools are emerging as the most promising technique for making FPGAs more accessible to software developers. Nevertheless, picking the most suitable HLS for a certain class of algorithms depends on requirements such as area and throughput, as well as on programmer experience. In this paper, we explore the different trade-offs present when using a representative set of HLS tools in the context of Database Management System (DBMS) acceleration. More specifically, we conduct an empirical analysis of four representative frameworks (Bluespec SystemVerilog, Altera OpenCL, LegUp and Chisel) that we use to accelerate commonly-used database algorithms such as sorting, the median operator, and hash joins. Through our implementation experience and empirical results for database acceleration, we conclude that the selection of the most suitable HLS depends on a set of orthogonal characteristics, which we highlight for each HLS framework.
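    To make one of the accelerated operators concrete, here is a minimal software sketch of a hash join, written in Python rather than in any of the HLS languages evaluated in the paper. It only shows the build/probe structure that such an accelerator would pipeline; the column names and example data are made up and the code is not taken from the paper.

```python
# Minimal build/probe hash join sketch (software model, not HLS code).
from collections import defaultdict

def hash_join(build_rows, probe_rows, build_key, probe_key):
    """Join two lists of dict rows on the given key columns."""
    # Build phase: hash the (usually smaller) build relation.
    table = defaultdict(list)
    for row in build_rows:
        table[row[build_key]].append(row)
    # Probe phase: stream the probe relation and emit matching joined rows.
    for row in probe_rows:
        for match in table.get(row[probe_key], ()):
            yield {**match, **row}

orders = [{"order_id": 1, "cust": 7}, {"order_id": 2, "cust": 9}]
customers = [{"cust": 7, "name": "Ada"}, {"cust": 9, "name": "Lin"}]
print(list(hash_join(customers, orders, "cust", "cust")))
```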

    HLS-ENABLED DYNAMIC STREAM PROCESSING


    Methodology for complex dataflow application development

    This thesis addresses problems inherent to the development of complex applications for reconfigurable systems. Many projects fail to complete or take much longer than originally estimated by relying on traditional iterative software development processes typically used with conventional computers. Even though designer productivity can be increased by abstract programming and execution models, e.g., dataflow, development methodologies considering the specific properties of reconfigurable systems do not exist. The first contribution of this thesis is a design methodology to facilitate systematic development of complex applications using reconfigurable hardware in the context of High-Performance Computing (HPC). The proposed methodology is built upon a careful analysis of the original application, a software model of the intended hardware system, an analytical prediction of performance and on-chip area usage, and an iterative architectural refinement to resolve identified bottlenecks before writing a single line of code targeting the reconfigurable hardware. It is successfully validated using two real applications and both achieve state-of-the-art performance. The second contribution extends this methodology to provide portability between devices in two steps. First, additional tool support for contemporary multi-die Field-Programmable Gate Arrays (FPGAs) is developed. An algorithm to automatically map logical memories to heterogeneous physical memories with special attention to die boundaries is proposed. As a result, only the proposed algorithm managed to successfully place and route all designs used in the evaluation while the second-best algorithm failed on one third of all large applications. Second, best practices for performance portability between different FPGA devices are collected and evaluated on a financial use case, showing efficient resource usage on five different platforms. The third contribution applies the extended methodology to a real, highly demanding emerging application from the radiotherapy domain. A Monte-Carlo based simulation of dose accumulation in human tissue is accelerated using the proposed methodology to meet the real-time requirements of adaptive radiotherapy.
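    The abstract describes, but does not reproduce, the memory-mapping algorithm. As a rough sketch of the problem it solves, and under assumed data structures rather than the thesis's actual method, a greedy first-fit assignment of logical memories to per-die physical memory budgets (so that no logical memory straddles a die boundary) might look like this.

```python
# Hypothetical sketch of the mapping problem: place logical memories into
# per-die physical memory budgets without splitting one memory across dies.
# This is NOT the thesis's algorithm, only a greedy first-fit illustration.

def map_memories(logical, dies):
    """logical: {name: size_kbits}, dies: {die_name: capacity_kbits}."""
    remaining = dict(dies)
    placement = {}
    # Place the largest memories first so they are less likely to fail late.
    for name, size in sorted(logical.items(), key=lambda kv: -kv[1]):
        die = next((d for d, cap in remaining.items() if cap >= size), None)
        if die is None:
            raise RuntimeError(f"{name} ({size} kbit) does not fit on any die")
        remaining[die] -= size
        placement[name] = die
    return placement

print(map_memories({"fifo_a": 512, "lut_rom": 128, "frame_buf": 2048},
                   {"die0": 2100, "die1": 1024}))
```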

    Fast sampling from Wiener posteriors for image data with dataflow engines

    We use Dataflow Engines (DFEs) to construct an efficient Wiener filter of noisy and incomplete image data, and to quickly draw probabilistic samples of the compatible true underlying images from the Wiener posterior. Dataflow computing is a powerful approach using reconfigurable hardware, which can be deeply pipelined and is intrinsically parallel. The unique Wiener-filtered image is the minimum-variance linear estimate of the true image (if the signal and noise covariances are known) and the most probable true image (if the signal and noise are Gaussian distributed). However, many images are compatible with the data with different probabilities, given by the analytic posterior probability distribution referred to as the Wiener posterior. The DFE code also draws large numbers of samples of true images from this posterior, which allows for further statistical analysis. Naive computation of the Wiener-filtered image is impractical for large datasets, as it scales as O(N^3), where N is the number of pixels. We use a messenger field algorithm, which is well suited to a DFE implementation, to draw samples from the Wiener posterior; that is, with the correct probability we draw samples of noiseless images that are compatible with the observed noisy image. The Wiener-filtered image can be obtained by a trivial modification of the algorithm. We demonstrate a lower bound on the speed-up, from drawing samples of an image, of 11.3 ± 0.8 with 8 DFEs in a 1U MPC-X box when compared with a 1U server presenting 32 CPU threads. We also discuss a potential application in astronomy, to provide better dark matter maps and improved determination of the parameters of the Universe.
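    The messenger-field iteration referred to above can be written compactly. The following is a hedged sketch of the standard messenger-field Wiener filter (in the style of Elsner & Wandelt), assuming finite, uncorrelated per-pixel noise and a signal covariance that is diagonal in Fourier space; it models the algorithm, not the paper's DFE implementation, and all argument names and the fixed iteration count are assumptions.

```python
# Sketch of a messenger-field Wiener filter: alternate between a data-space
# update of the messenger field t and a signal-space update where the signal
# covariance is diagonal (here, in Fourier space). Not the paper's DFE code.
import numpy as np

def wiener_messenger(d, N_diag, S_fourier, n_iter=100):
    """d: 2-D noisy image; N_diag: per-pixel noise variance (same shape);
    S_fourier: signal power per Fourier mode (ortho-normalised FFT convention)."""
    tau = N_diag.min()                  # messenger covariance T = tau * I
    Nbar = N_diag - tau                 # extra noise beyond the uniform part
    s = np.zeros_like(d, dtype=float)
    for _ in range(n_iter):             # fixed iteration count for simplicity
        # Data-space step: the messenger field mixes data and current signal.
        t = (tau * d + Nbar * s) / (Nbar + tau)
        # Signal-space step: apply Wiener-like weights where S is diagonal.
        t_f = np.fft.fft2(t, norm="ortho")
        s_f = S_fourier / (S_fourier + tau) * t_f
        s = np.fft.ifft2(s_f, norm="ortho").real
    return s
```

    Broadly speaking, drawing samples from the Wiener posterior, as the paper does, amounts to adding suitably scaled Gaussian random variates to these two update steps; that extension is omitted from the sketch.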

    Reconfigurable-Hardware Accelerated Stream Aggregation

    High-throughput and low-latency stream aggregation is essential for many applications that analyze massive volumes of data in real time. Incoming data need to be stored in a single sliding window before processing in cases where incremental aggregations are wasteful or not possible at all. However, storing all incoming values in a single window puts tremendous pressure on memory bandwidth and capacity. GPU and CPU memory management is inefficient for this task as it introduces unnecessary data movement that wastes bandwidth. FPGAs can make more efficient use of their memory, but existing approaches employ only on-chip memory and can therefore only support small problem sizes (i.e., small sliding windows and few keys) due to the limited capacity. This thesis addresses the above limitations of stream processing systems by proposing techniques for accelerating single sliding-window stream aggregation using FPGAs to achieve line-rate processing throughput and ultra-low latency. It does so first by building accelerators using FPGAs and second by alleviating the memory pressure posed by single-window stream aggregation. The initial part of this thesis presents the accelerators for both windowing policies, namely tuple- and time-based, using Maxeler's DataFlow Engines (DFEs), which have a direct feed of incoming data from the network as well as direct access to off-chip DRAM. Compared to a state-of-the-art stream processing software system, the DFEs offer 1-2 orders of magnitude higher processing throughput and 4 orders of magnitude lower latency. The latter part of this thesis focuses on alleviating the memory pressure due to the various steps in single-window stream aggregation. Updating the window with new incoming values and reading it to feed the aggregation functions are the two primary steps in stream aggregation. The high on-chip SRAM bandwidth enables line-rate processing, but only for small problem sizes due to the limited capacity. The larger off-chip DRAM size supports larger problems, but falls short on performance due to lower bandwidth. In order to bridge this gap, this thesis introduces a specialized memory hierarchy for stream aggregation. It employs Multi-Level Queues (MLQs) spanning multiple memory levels with different characteristics to offer both high bandwidth and capacity. In doing so, larger stream aggregation problems can be supported at line-rate performance, outperforming existing competing solutions. Compared to designs with only on-chip memory, our approach supports 4 orders of magnitude larger problems. Compared to designs that use only DRAM, our design achieves up to 8x higher throughput. Finally, this thesis aims to alleviate the memory pressure due to the window-aggregation step. Although window updates can be supported efficiently using MLQs, frequent window aggregations remain a performance bottleneck. This thesis addresses this problem by introducing StreamZip, a dataflow stream aggregation engine that is able to compress the sliding windows. StreamZip deals with a number of data and control dependency challenges to integrate a compressor in the stream aggregation pipeline and alleviate the memory pressure posed by frequent aggregations. In doing so, StreamZip offers higher throughput as well as larger effective window capacity to support larger problems. StreamZip supports diverse compression algorithms, offering both lossless and lossy compression for fixed- as well as floating-point numbers. Compared to designs using MLQs, StreamZip lossless and lossy designs achieve up to 7.5x and 22x higher throughput, while improving the effective memory capacity by up to 5x and 23x, respectively.
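    As a minimal software illustration of the tuple-based windowing policy mentioned above (not the DFE implementation), the sketch below keeps the last few values per key in a single window buffer and recomputes the aggregate over the whole window on every insertion, which is exactly the whole-window read pressure that the MLQ hierarchy and StreamZip are designed to relieve. The class name, window size, and aggregate choice are assumptions.

```python
# Tuple-based sliding-window aggregation sketch (per-key software model).
# Each new tuple evicts the oldest one once the window holds `size` values,
# and the aggregate is recomputed over the full window on every insertion
# (no incremental trick), mirroring the single-window setting described above.
from collections import defaultdict, deque

class TupleWindowAggregator:
    def __init__(self, size: int, aggregate=max):
        self.aggregate = aggregate
        self.windows = defaultdict(lambda: deque(maxlen=size))

    def insert(self, key, value):
        window = self.windows[key]
        window.append(value)            # oldest value drops out automatically
        return self.aggregate(window)   # full-window read on every insertion

agg = TupleWindowAggregator(size=3, aggregate=max)
for v in [5, 1, 9, 2, 4]:
    print(agg.insert("sensor-1", v))    # prints 5, 5, 9, 9, 9
```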