4 research outputs found

    Buffer allocation for dynamic real-time streaming applications running on a multi-processor without back-pressure

    Buffer allocation for real-time streaming applications, modeled as dataflow graphs, minimizes the total memory consumption while reserving sufficient space for each data production, so that no live data is overwritten and the real-time constraints are guaranteed to be satisfied. We focus on the problem of buffer allocation for systems without back-pressure. Since systems without back-pressure lack blocking behavior on the producer side, buffer allocation requires both best- and worst-case timing analysis. Moreover, the dynamic (data-dependent) behavior of these applications makes buffer allocation challenging from the best- and worst-case timing analysis perspective. We argue that static dataflow cannot conveniently express the dynamic behavior of these applications, leading to overallocation of memory resources. Mode-controlled Dataflow (MCDF) is a restricted form of dynamic dataflow that allows mode switching at runtime while remaining amenable to static analysis of real-time constraints. In this paper, we address the problem of buffer allocation for MCDF graphs scheduled on systems without back-pressure. We consider practically relevant applications that can be modeled in MCDF using a recurrent-choice mode sequence that consists of mode sequences of equal length, which keeps the analysis tractable. Our contribution is a buffer allocation algorithm that achieves up to a 36% reduction in total memory consumption compared to the current state of the art for LTE and LTE Advanced receiver use cases.
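
    As a rough illustration of why both timing directions matter without back-pressure: the producer is never blocked, so a buffer of B slots is safe only if the earliest (best-case) production of token i + B can never happen before the latest (worst-case) moment at which token i is released by the consumer. The following Python sketch computes the smallest such B from assumed per-token time bounds; it is only an illustration, not the allocation algorithm of the paper.

    def min_buffer_size(bc_prod, wc_release):
        # bc_prod[i]:    best-case (earliest) time at which token i is produced
        # wc_release[i]: worst-case (latest) time at which token i's slot is freed
        # Smallest B such that token i + B can never overwrite token i,
        # even though the producer never blocks (no back-pressure).
        n = len(bc_prod)
        for B in range(1, n):
            if all(bc_prod[i + B] >= wc_release[i] for i in range(n - B)):
                return B
        return n  # fall back: reserve a slot for every token in the analysed window

    # Example with made-up bounds: a production every 2 time units at the earliest,
    # and each slot freed at most 5 time units after the corresponding production.
    bc_prod = [2 * i for i in range(8)]
    wc_release = [2 * i + 5 for i in range(8)]
    print(min_buffer_size(bc_prod, wc_release))  # -> 3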

    Response modeling: model refinements for timing analysis of runtime scheduling in real-time streaming systems


    Systematic Design Space Exploration of Dynamic Dataflow Programs for Multi-core Platforms

    The limitations of clock frequency and power dissipation of deep sub-micron CMOS technology have led to the development of massively parallel computing platforms. These platforms consist of dozens or hundreds of processing units and offer a high degree of parallelism. Taking advantage of that parallelism and transforming it into high program performance requires the use of appropriate parallel programming models and paradigms. A common current practice is to develop parallel applications using methods that evolve directly from sequential programming models, but these lack the abstractions to properly express the concurrency of the processes. An alternative approach is to implement dataflow applications, in which the algorithms are described in terms of streams and operators, so their parallelism is directly exposed. Since the algorithms are described in an abstract way, they can be easily ported to different types of platforms. Several dataflow models of computation (MoCs) have been formalized so far. They differ in their expressiveness (ability to handle dynamic behavior) and in the complexity of their analysis. Most research efforts have focused on the simpler static dataflow MoCs, where many analyses are possible at compile time and several optimization problems are greatly simplified. At the same time, for the most expressive and most difficult to analyze class, dynamic dataflow (DDF), there is still a dearth of tools supporting a systematic and automated analysis that minimizes the programming effort of the designer. The objective of this Thesis is to provide a complete framework to analyze, evaluate, and refactor DDF applications expressed in the RVC-CAL language. The methodology relies on a systematic design space exploration (DSE) that examines different design alternatives in order to optimize the chosen objective function while satisfying the constraints. The research contributions start from a rigorous formulation of the DSE problem. This provides the basis for a complete and novel analysis methodology enabling systematic performance improvements of DDF applications. The stages of the methodology include exploration heuristics, performance estimation, and identification of refactoring directions, all implemented as software tools. The contributions are substantiated by several experiments performed with complex dynamic applications on different types of physical platforms.
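
    As a minimal sketch of the exploration stage described above (the actor names, workloads, and throughput estimate are assumed for illustration; this is not the Thesis' tool flow), a design space exploration loop enumerates candidate mappings of dataflow actors onto cores, discards infeasible ones, and keeps the candidate with the best estimated performance:

    import itertools

    def explore(design_points, estimate, feasible):
        # Generic DSE loop: return the feasible design point with the best estimate.
        best, best_score = None, float("-inf")
        for point in design_points:
            if not feasible(point):
                continue
            score = estimate(point)
            if score > best_score:
                best, best_score = point, score
        return best

    # Example: map three actors onto two cores; the workloads are made up.
    workloads = {"parse": 4, "decode": 7, "filter": 3}

    def estimate(mapping):
        load = [0, 0]
        for core, work in zip(mapping, workloads.values()):
            load[core] += work
        return 1.0 / max(load)  # crude throughput estimate: inverse of the busiest core

    design_points = itertools.product(range(2), repeat=len(workloads))
    print(explore(design_points, estimate, lambda m: True))  # -> (0, 1, 0)

    In a real framework, the exhaustive enumeration would be replaced by the exploration heuristics and the analytic estimate by profiling-based performance estimation, as outlined in the abstract.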