15 research outputs found

    New benchmarking methodology and programming model for big data processing

    Big data processing is becoming a reality in numerous real-world applications. With the emergence of new data-intensive technologies and increasing amounts of data, new computing concepts are needed. The integration of big-data-producing technologies, such as wireless sensor networks, the Internet of Things, and cloud computing, into cyber-physical systems is reducing the time available to find appropriate solutions. This paper presents one possible solution for the coming exascale big data processing: a data flow computing concept. The performance of data flow systems that process big data should not be measured with metrics defined for the prevailing control flow systems. A new benchmarking methodology is proposed, which integrates the speed, area, and power needed to execute a task. Computer rankings would look different if this methodology were used; data flow systems would outperform control flow systems. This claim is supported by recent results from implementations of specialized algorithms and applications on data flow systems, which show considerable speedup, space savings, and power reductions compared with implementations of the same algorithms on control flow computers. In our view, the next step in data flow computing development should be a move from specialized to more general algorithms and applications.
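
    The abstract above proposes folding speed, area, and power into a single benchmark score. The paper's actual formula is not given here, so the sketch below is only an assumed illustrative form (throughput penalized by silicon area and energy per task); all names and numbers are invented for illustration.

```python
# Hypothetical sketch of a combined benchmark score that folds together
# execution speed, chip area, and power, in the spirit of the methodology
# described above. The ratio used below is an assumption for illustration;
# the paper's actual definition is not reproduced here.

from dataclasses import dataclass

@dataclass
class Measurement:
    runtime_s: float   # wall-clock time to execute the task (seconds)
    area_mm2: float    # silicon area used by the implementation (mm^2)
    power_w: float     # average power draw during execution (watts)

def combined_score(m: Measurement) -> float:
    """Higher is better: throughput penalized by area and energy per task."""
    throughput = 1.0 / m.runtime_s          # tasks per second
    energy_j = m.power_w * m.runtime_s      # energy consumed per task (joules)
    return throughput / (m.area_mm2 * energy_j)

# Invented example: a dataflow accelerator vs. a control flow CPU on one task.
dataflow = Measurement(runtime_s=2.0, area_mm2=600.0, power_w=30.0)
cpu      = Measurement(runtime_s=40.0, area_mm2=450.0, power_w=120.0)
print(combined_score(dataflow) / combined_score(cpu))   # relative advantage
```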

    Data parallel algorithm in finding 2-D site percolation backbones

    Proceedings of: First International Workshop on Sustainable Ultrascale Computing Systems (NESUS 2014). Porto (Portugal), August 27-28, 2014. A data parallel solution approach formulated with cellular automata is proposed, with the potential to become part of future sustainable computers. It offers extreme parallelism based on data-flow principles. If a problem can be formulated with a local and iterative methodology, so that each cell's result depends only on neighbouring data items, cellular automata can be an efficient solution framework. We have demonstrated experimentally, on a graph-theoretical problem, that the proposed methodology has the potential to be two orders of magnitude faster than known solutions.
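
    As a concrete illustration of the local, iterative update pattern described above, the sketch below runs synchronous cellular-automaton steps in which each cell depends only on its four nearest neighbours. The "spread activity through occupied sites" rule and the periodic boundaries are stand-ins chosen for illustration; the paper's actual backbone-labelling rule is not reproduced.

```python
# Minimal sketch of a local, iterative cellular-automaton update on a 2-D
# site-percolation grid: every cell's next state depends only on its four
# nearest neighbours. The propagation rule below is a simple stand-in, not
# the backbone algorithm from the paper. Boundaries are periodic (np.roll)
# purely for brevity.

import numpy as np

def ca_step(active: np.ndarray, occupied: np.ndarray) -> np.ndarray:
    """One synchronous, data-parallel update over the whole grid."""
    nbr = (np.roll(active, 1, axis=0) | np.roll(active, -1, axis=0) |
           np.roll(active, 1, axis=1) | np.roll(active, -1, axis=1))
    return occupied & (active | nbr)

def iterate(active, occupied, max_steps=10_000):
    """Repeat the local rule until a fixed point is reached."""
    for _ in range(max_steps):
        nxt = ca_step(active, occupied)
        if np.array_equal(nxt, active):
            return nxt
        active = nxt
    return active

rng = np.random.default_rng(0)
occupied = rng.random((64, 64)) < 0.6            # site percolation at p = 0.6
seed = np.zeros_like(occupied)
seed[0, :] = occupied[0, :]                      # top row acts as the source
result = iterate(seed, occupied)
print(result.sum(), "sites reachable from the top row")
```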

    Scalable Distributed Computing Hierarchy: Cloud, Fog and Dew Computing

    The paper considers a conceptual approach for organizing the vertical hierarchical links between the scalable distributed computing paradigms: Cloud Computing, Fog Computing, and Dew Computing. Dew Computing is described and recognized as a new structural layer in the existing distributed computing hierarchy, positioned as the ground level beneath the Cloud and Fog computing paradigms. This vertical, complementary, hierarchical division from Cloud to Dew Computing satisfies both high-end and low-end computing demands in everyday life and work. These new computing paradigms lower cost and improve performance, particularly for concepts and applications such as the Internet of Things (IoT) and the Internet of Everything (IoE). In addition, the Dew Computing paradigm will require new programming models that efficiently reduce complexity and improve the productivity and usability of scalable distributed computing, following the principles of High-Productivity computing.
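
    As a toy illustration of the vertical Cloud/Fog/Dew division described above, the sketch below models tier selection as a simple placement decision. The tier names follow the abstract; the offline criterion and the latency threshold are assumptions made purely for illustration and are not taken from the paper.

```python
# Illustrative-only sketch of the Cloud / Fog / Dew hierarchy as a tiered
# task-placement decision. The placement heuristic (offline requirement and
# a latency budget) is an assumption for illustration, not the paper's model.

from enum import Enum

class Tier(Enum):
    DEW = "dew"      # on-device / local layer, usable without connectivity
    FOG = "fog"      # nearby edge node, low latency
    CLOUD = "cloud"  # remote data centre, highest capacity

def place_task(needs_offline: bool, latency_budget_ms: float) -> Tier:
    """Pick the lowest layer that satisfies the task's constraints."""
    if needs_offline:
        return Tier.DEW
    if latency_budget_ms < 50:
        return Tier.FOG
    return Tier.CLOUD

# An IoT reading that must work offline lands on the Dew layer; heavy
# analytics with a relaxed latency budget go to the Cloud.
print(place_task(needs_offline=True, latency_budget_ms=200))   # Tier.DEW
print(place_task(needs_offline=False, latency_budget_ms=20))   # Tier.FOG
print(place_task(needs_offline=False, latency_budget_ms=500))  # Tier.CLOUD
```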

    An AppGallery for dataflow computing


    ExaFlexHH: an exascale-ready, flexible multi-FPGA library for biologically plausible brain simulations

    Introduction: In-silico simulations are a powerful tool in modern neuroscience for enhancing our understanding of complex brain systems at various physiological levels. To model biologically realistic and detailed systems, an ideal simulation platform must possess: (1) high performance and performance scalability, (2) flexibility, and (3) ease of use for non-technical users. However, most existing platforms and libraries do not meet all three criteria, particularly for complex models such as the Hodgkin-Huxley (HH) model or for complex neuron-connectivity modeling such as gap junctions.
    Methods: This work introduces ExaFlexHH, an exascale-ready, flexible library for simulating HH models on multi-FPGA platforms. Utilizing FPGA-based Data-Flow Engines (DFEs) and the dataflow programming paradigm, ExaFlexHH addresses all three requirements. The library is also parameterizable and compliant with NeuroML, a prominent brain-description language in computational neuroscience. We demonstrate the performance scalability of the platform by implementing a highly demanding extended-Hodgkin-Huxley (eHH) model of the Inferior Olive using ExaFlexHH.
    Results: Model simulation results show linear scalability for unconnected networks and near-linear scalability for networks with complex synaptic plasticity, with a 1.99× performance increase using two FPGAs compared to a single-FPGA simulation, and 7.96× when using eight FPGAs in a scalable ring topology. Notably, our results also reveal consistent performance efficiency in GFLOPS per watt, further facilitating exascale-ready computing speeds and pushing the boundaries of future brain-simulation platforms.
    Discussion: The ExaFlexHH library shows superior resource efficiency, quantified in FLOPS per hardware resources, benchmarked against other competitive FPGA-based brain simulation implementations.
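
    For readers unfamiliar with the model class ExaFlexHH targets, the sketch below integrates the classic single-compartment Hodgkin-Huxley equations in plain Python. It is a reference sketch only, assuming the standard squid-axon parameters: it is not the library's API, and the extended-HH and gap-junction terms used in the Inferior Olive model are omitted.

```python
# Reference sketch of the classic single-compartment Hodgkin-Huxley model
# that libraries such as ExaFlexHH accelerate. Plain Python, forward Euler,
# standard textbook parameters; not the library's API, and the eHH /
# gap-junction extensions are omitted.

import numpy as np

C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3   # uF/cm^2, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.387         # reversal potentials (mV)

def rates(V):
    """Standard HH gating-variable rate functions (V in mV, rates in 1/ms)."""
    a_m = 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    b_m = 4.0 * np.exp(-(V + 65.0) / 18.0)
    a_h = 0.07 * np.exp(-(V + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    a_n = 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    b_n = 0.125 * np.exp(-(V + 65.0) / 80.0)
    return a_m, b_m, a_h, b_h, a_n, b_n

def simulate(T=50.0, dt=0.01, I_ext=10.0):
    """Forward-Euler integration; returns the membrane-potential trace."""
    V, m, h, n = -65.0, 0.05, 0.6, 0.32
    trace = []
    for _ in range(int(T / dt)):
        a_m, b_m, a_h, b_h, a_n, b_n = rates(V)
        I_ion = (g_Na * m**3 * h * (V - E_Na) +
                 g_K * n**4 * (V - E_K) +
                 g_L * (V - E_L))
        V += dt * (I_ext - I_ion) / C_m
        m += dt * (a_m * (1 - m) - b_m * m)
        h += dt * (a_h * (1 - h) - b_h * h)
        n += dt * (a_n * (1 - n) - b_n * n)
        trace.append(V)
    return np.array(trace)

trace = simulate()
print(trace.max(), trace.min())   # action potentials peak around +40 mV
```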

    Particle-in-cell simulations of highly collisional plasmas on the GPU in 1 and 2 dimensions

    During the 20th century, few branches of science proved themselves more industrially applicable than plasma science and processing. Across a vast range of discharge types and regimes, and through industries spanning semiconductor manufacture, surface sterilisation, food packaging and medicinal treatment, industry continues to find new uses for this physical phenomenon well into the 21st century. To better cater to this diverse range of industries, a more detailed and accurate understanding is needed of the plasma chemistry and kinetics that drive the plasma processes central to manufacturing. Extensive efforts have been made to characterise plasma discharges numerically and mathematically, leading to the development of a number of different approaches. In our work we concentrate on the Particle-In-Cell (PIC) - Monte Carlo Collision (MCC) approach to plasma modelling. This method has long been considered computationally prohibitive because of its long run times and high resource expense. However, with modern advances in computing, particularly relatively cheap accelerator devices such as GPUs and co-processors, we have developed a massively parallel simulation in 1 and 2 dimensions to take advantage of this large increase in computing power. Furthermore, we have made changes to the traditional PIC-MCC implementation to provide a more generalised simulation, with greater scalability and a smooth transition between low- and high- (atmospheric) pressure discharge regimes. We also present preliminary physical and computational benchmarks for our PIC-MCC implementation, providing a strong case for the validation of our results.
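
    To make the PIC-MCC cycle concrete, the sketch below shows a 1-D leapfrog particle push followed by a null-collision Monte Carlo test. The field solve, charge deposition, and real cross-section data are omitted, and the constant collision frequency is an assumed placeholder; this is an illustrative sketch, not the authors' GPU implementation.

```python
# Minimal 1-D sketch of two stages of a PIC-MCC cycle: the leapfrog particle
# push and a null-collision Monte Carlo test. The field solve, charge
# deposition and real cross sections are omitted; nu_max and the constant
# collision frequency are placeholder assumptions for illustration.

import numpy as np

rng = np.random.default_rng(1)
q_over_m = -1.758820e11        # electron charge-to-mass ratio (C/kg)
dt = 1e-11                     # time step (s)
nu_max = 1e8                   # assumed maximum collision frequency (1/s)

def push(x, v, E_at_x):
    """Leapfrog: accelerate by the local field, then advance the position."""
    v = v + q_over_m * E_at_x * dt
    x = x + v * dt
    return x, v

def mcc(v, nu_of_v):
    """Null-collision method: pick candidates with the maximum frequency,
    then keep only the fraction nu(v)/nu_max as real collisions."""
    p_coll = 1.0 - np.exp(-nu_max * dt)
    candidates = rng.random(v.size) < p_coll
    real = candidates & (rng.random(v.size) < nu_of_v(v) / nu_max)
    # Crude elastic scatter in 1-D: randomise the sign of the velocity.
    return np.where(real, v * rng.choice([-1.0, 1.0], size=v.size), v)

# Toy usage: 10^5 electrons in a uniform field with a constant collision rate.
x = rng.random(100_000) * 0.01                    # positions in a 1 cm gap (m)
v = rng.normal(0.0, 4e5, size=100_000)            # thermal velocities (m/s)
x, v = push(x, v, E_at_x=np.full_like(x, 1e4))    # 10 kV/m uniform field
v = mcc(v, nu_of_v=lambda vv: np.full_like(vv, 5e7))
```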