948 research outputs found

    Dataflow Analysis for Multiprocessor Systems with Non-Starvation-Free Schedulers

    Dataflow analysis techniques are suitable for the temporal analysis of real-time stream processing applications. However, the applicability of these models is currently limited to systems with starvation-free schedulers, such as Time-Division Multiplexing (TDM) schedulers. Removing this limitation would broaden the application domain of dataflow analysis techniques significantly. In this paper we present a temporal analysis technique for Homogeneous Synchronous Dataflow (HSDF) graphs that is also applicable to systems with non-starvation-free schedulers. Unlike existing dataflow analysis techniques, the proposed technique makes use of an enabling-jitter characterization and an iterative fixed-point computation. The presented approach is applicable to arbitrary (cyclic) graph topologies. Buffer capacity constraints are taken into account during the analysis, and sufficient buffer capacities can be determined afterwards. The approach presented in this paper is the first that considers non-starvation-free schedulers in combination with arbitrary HSDF graphs. The proposed dataflow analysis technique is implemented in a tool, which is used to evaluate the technique on examples that illustrate some important differences with other temporal analysis methods. The case study discusses how the presented method can be used to resolve an inaccuracy in the temporal analysis results of a real-time stream processing system consisting of an FM receiver and a DAB receiver application that share a Digital Signal Processor (DSP).
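
    A rough sense of how an enabling-jitter characterization interacts with an iterative fixed-point computation can be given with a small sketch. The sketch below is not the paper's algorithm: it assumes a static-priority (hence non-starvation-free) scheduler on a single shared processor, a simple chain-shaped HSDF graph, a classic response-time formula with release jitter, and invented task parameters; it merely iterates jitter -> response time -> jitter until nothing changes.

        from math import ceil

        # Hypothetical task set: each task is one HSDF actor mapped to the same
        # processor; "pred" is its predecessor in the chain. All numbers are invented.
        TASKS = [
            {"name": "src", "wcet": 2, "bcet": 1, "prio": 0, "pred": None},
            {"name": "fir", "wcet": 3, "bcet": 2, "prio": 1, "pred": "src"},
            {"name": "dec", "wcet": 4, "bcet": 2, "prio": 2, "pred": "fir"},
        ]
        PERIOD = 20  # common activation period of the graph's source

        def wcrt(task, jitter):
            """Static-priority worst-case response time with release jitter."""
            r = task["wcet"]
            while True:
                interference = sum(
                    ceil((r + jitter[hp["name"]]) / PERIOD) * hp["wcet"]
                    for hp in TASKS if hp["prio"] < task["prio"]
                )
                if task["wcet"] + interference == r:
                    return r
                r = task["wcet"] + interference

        def enabling_jitter(resp, jitter):
            """A task inherits the finish-time spread of its predecessor."""
            new = {}
            for t in TASKS:
                if t["pred"] is None:
                    new[t["name"]] = 0
                else:
                    p = next(x for x in TASKS if x["name"] == t["pred"])
                    new[t["name"]] = jitter[p["name"]] + resp[p["name"]] - p["bcet"]
            return new

        def analyse():
            jitter = {t["name"]: 0 for t in TASKS}      # start optimistically
            while True:
                resp = {t["name"]: wcrt(t, jitter) for t in TASKS}
                nxt = enabling_jitter(resp, jitter)
                if nxt == jitter:                       # fixed point reached
                    return resp, jitter
                jitter = nxt

        print(analyse())  # converges after a few iterations for this example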

    Static Analysis and Transformation of Dataflow Multimedia Applications

    An approach for merging statically schedulable subregions in dataflow models is presented. The approach combines abstract interpretation, loop analysis, and static scheduling of cyclo-static dataflow networks. The approach has been implemented in a Java-based tool that performs automatic classification of dataflow actors, generation of static schedules using constraint programming, and automatic merging of the fine-grained actors in the subnetwork into a single, larger-grained actor. The approach is applied to an MPEG-4 SP video decoder implemented in the dataflow actors language CAL.
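
    The static-scheduling and merging steps can be illustrated with a much simpler sketch than the tool described above: plain SDF rates instead of cyclo-static ones, the balance equations solved directly instead of by constraint programming, and a single flat firing sequence standing in for the merged, larger-grained actor. The network, rates, and actor names are invented.

        from fractions import Fraction
        from math import lcm

        # Hypothetical SDF subnetwork: (producer, production rate, consumer, consumption rate).
        EDGES = [("src", 2, "filt", 3), ("filt", 1, "snk", 2)]
        ACTORS = ["src", "filt", "snk"]

        def repetitions():
            """Solve the balance equations q[p] * prod == q[c] * cons edge by edge."""
            q = {ACTORS[0]: Fraction(1)}
            changed = True
            while changed:
                changed = False
                for p, pr, c, cr in EDGES:
                    if p in q and c not in q:
                        q[c] = q[p] * pr / cr
                        changed = True
                    elif c in q and p not in q:
                        q[p] = q[c] * cr / pr
                        changed = True
            scale = lcm(*(f.denominator for f in q.values()))
            return {a: int(q[a] * scale) for a in ACTORS}

        def static_schedule(reps):
            """Simulate token counts to get one firing order (assumes a consistent, deadlock-free graph)."""
            tokens = {(p, c): 0 for p, _, c, _ in EDGES}
            remaining = dict(reps)
            schedule = []
            while any(remaining.values()):
                for a in ACTORS:
                    ready = remaining[a] > 0 and all(
                        tokens[(p, c)] >= cr for p, _, c, cr in EDGES if c == a
                    )
                    if ready:
                        for p, pr, c, cr in EDGES:
                            if c == a:
                                tokens[(p, c)] -= cr
                            if p == a:
                                tokens[(p, c)] += pr
                        remaining[a] -= 1
                        schedule.append(a)
            return schedule

        reps = repetitions()
        print(reps, static_schedule(reps))  # {'src': 3, 'filt': 2, 'snk': 1} and one valid firing order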

    Dynamic Control Flow in Large-Scale Machine Learning

    Many recent machine learning models rely on fine-grained dynamic control flow for training and inference. In particular, models based on recurrent neural networks and on reinforcement learning depend on recurrence relations, data-dependent conditional execution, and other features that call for dynamic control flow. These applications benefit from the ability to make rapid control-flow decisions across a set of computing devices in a distributed system. For performance, scalability, and expressiveness, a machine learning system must support dynamic control flow in distributed and heterogeneous environments. This paper presents a programming model for distributed machine learning that supports dynamic control flow. We describe the design of the programming model, and its implementation in TensorFlow, a distributed machine learning system. Our approach extends the use of dataflow graphs to represent machine learning models, offering several distinctive features. First, the branches of conditionals and bodies of loops can be partitioned across many machines to run on a set of heterogeneous devices, including CPUs, GPUs, and custom ASICs. Second, programs written in our model support automatic differentiation and distributed gradient computations, which are necessary for training machine learning models that use control flow. Third, our choice of non-strict semantics enables multiple loop iterations to execute in parallel across machines, and to overlap compute and I/O operations. We have done our work in the context of TensorFlow, and it has been used extensively in research and production. We evaluate it using several real-world applications, and demonstrate its performance and scalability. (Appeared in EuroSys 2018; 14 pages, 16 figures.)
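
    The constructs described here are part of TensorFlow's public API (tf.cond and tf.while_loop), so a minimal data-dependent loop can be sketched directly; the threshold, the loop body, and the printed values below are invented for illustration.

        import tensorflow as tf

        @tf.function  # traced into a dataflow graph, so the loop becomes a graph construct
        def accumulate_until(threshold):
            i = tf.constant(0)
            total = tf.constant(0.0)

            def cond(i, total):
                # Data-dependent termination: the iteration count is only known at runtime.
                return total < threshold

            def body(i, total):
                # The runtime may place parts of the body on different devices; with
                # non-strict semantics, independent work across iterations can overlap.
                return i + 1, total + tf.cast(i, tf.float32)

            return tf.while_loop(cond, body, (i, total))

        steps, total = accumulate_until(tf.constant(10.0))
        print(int(steps), float(total))  # 5 10.0 for this invented threshold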

    Worst-case temporal analysis of real-time dynamic streaming applications

    Transformations of High-Level Synthesis Codes for High-Performance Computing

    Specialized hardware architectures promise a major step in performance and energy efficiency over the traditional load/store devices currently employed in large-scale computing systems. The adoption of high-level synthesis (HLS) from languages such as C/C++ and OpenCL has greatly increased programmer productivity when designing for such platforms. While this has enabled a wider audience to target specialized hardware, the optimization principles known from traditional software design are no longer sufficient to implement high-performance codes. Fast and efficient codes for reconfigurable platforms are thus still challenging to design. To alleviate this, we present a set of optimizing transformations for HLS, targeting scalable and efficient architectures for high-performance computing (HPC) applications. Our work provides a toolbox for developers, where we systematically identify classes of transformations, the characteristics of their effect on the HLS code and the resulting hardware (e.g., increased data reuse or resource consumption), and the objectives that each transformation can target (e.g., resolving interface contention or increasing parallelism). We show how these can be used to efficiently exploit pipelining, on-chip distributed fast memory, and on-chip streaming dataflow, allowing for massively parallel architectures. To quantify the effect of our transformations, we use them to optimize a set of throughput-oriented FPGA kernels, demonstrating that our enhancements are sufficient to scale up parallelism within the hardware constraints. With the transformations covered, we hope to establish a common framework for performance engineers, compiler developers, and hardware developers to tap into the performance potential offered by specialized hardware architectures using HLS.
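
    One of the transformation classes mentioned above, increasing on-chip data reuse by buffering a tile of the input in fast memory, can be sketched at the algorithmic level. The sketch is in Python rather than an HLS C/C++ dialect, and the matrix and tile sizes are invented, so it only illustrates the access-pattern change such a transformation performs, not the resulting hardware.

        # Tiled matrix multiply: each T x T tile of A is "loaded" into a local
        # buffer (on an FPGA this would map to on-chip BRAM) and then reused for
        # all N columns of B, instead of being re-read from slow external memory.
        N, T = 8, 4  # problem size and tile size (invented; N % T == 0)
        A = [[float(i * N + j) for j in range(N)] for i in range(N)]
        B = [[float(i == j) for j in range(N)] for i in range(N)]  # identity, for an easy check

        def matmul_tiled(A, B):
            C = [[0.0] * N for _ in range(N)]
            for ii in range(0, N, T):
                for kk in range(0, N, T):
                    a_tile = [[A[ii + i][kk + k] for k in range(T)] for i in range(T)]
                    for i in range(T):
                        for k in range(T):
                            a = a_tile[i][k]
                            for j in range(N):  # innermost loop is the one HLS would pipeline
                                C[ii + i][j] += a * B[kk + k][j]
            return C

        assert matmul_tiled(A, B) == A  # multiplying by the identity returns A unchanged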