
    Reduction techniques for synchronous dataflow graphs.

    The Synchronous Dataflow (SDF) model of computation is popular for modelling the timing behaviour of real-time embedded hardware and software systems and applications. It is an essential ingredient of several automated design-flows and design-space exploration tools. The model can be analysed for throughput and latency properties. Although the SDF model is fairly simple, the analysis algorithms are often of high complexity and the models that need to be analysed may be fairly large. This paper introduces two graph transformations for reducing large SDF graphs into simpler, smaller ones that can be analysed more efficiently and that give a conservative and often tight estimate of the timing of the original model, and hence of the hard real-time system. We can make SDF-based methods more efficient and prove that analyses that were previously done manually, in an ad hoc fashion, can be done automatically and with guaranteed correctness. Additionally, we introduce a novel conversion from SDF to Homogeneous SDF, a step applied in many analysis methods for SDF, which yields up to a 250× reduction in the number of actors, thus mitigating the size explosion observed in the traditional conversion.
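The size explosion the abstract refers to stems from the SDF-to-HSDF conversion replicating each actor as many times as its entry in the graph's repetition vector. As a minimal illustration of that underlying mechanism (not of the paper's reduction transformations, which are not reproduced here; the function name and edge encoding are illustrative), the repetition vector is the smallest positive integer solution of the balance equations:

```python
from fractions import Fraction
from math import lcm

def repetition_vector(actors, edges):
    """Solve the SDF balance equations q[src]*prod == q[dst]*cons.

    actors: list of actor names (graph assumed connected);
    edges: list of (src, prod, dst, cons) with token production and
    consumption rates. Returns the smallest positive integer solution,
    or raises ValueError if the graph is inconsistent.
    """
    rate = {actors[0]: Fraction(1)}
    changed = True
    while changed:
        changed = False
        for src, prod, dst, cons in edges:
            if src in rate and dst not in rate:
                rate[dst] = rate[src] * prod / cons
                changed = True
            elif dst in rate and src not in rate:
                rate[src] = rate[dst] * cons / prod
                changed = True
            elif src in rate and dst in rate:
                if rate[src] * prod != rate[dst] * cons:
                    raise ValueError("inconsistent SDF graph")
    # Scale the rational solution to the smallest integer vector.
    scale = lcm(*(r.denominator for r in rate.values()))
    return {a: int(r * scale) for a, r in rate.items()}

# Three-actor chain: A produces 2 tokens, B consumes 3; B produces 1, C consumes 2.
q = repetition_vector(["A", "B", "C"],
                      [("A", 2, "B", 3), ("B", 1, "C", 2)])
# The traditional HSDF conversion replicates each actor q[a] times:
hsdf_actors = sum(q.values())
```

For this chain q = {A: 3, B: 2, C: 1}, so the HSDF graph has 6 actors; for graphs with large rate mismatches the blow-up is what the improved conversion mitigates.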

    An improved on-the-fly tableau construction for a real-time temporal logic

    Temporal logic is popular for specifying correctness properties of reactive systems. Real-time temporal logics add the ability to express quantitative timing aspects. Tableau constructions are algorithms that translate a temporal logic formula into a finite-state automaton that accepts precisely the models of the formula. On-the-fly versions of tableau constructions enable their practical application to model checking. In a previous paper we presented an on-the-fly tableau construction for a fragment of Metric Interval Temporal Logic in dense time. The interpretation of the logic was constrained to a special kind of timed state sequences, with intervals that are always left-closed and right-open. In this paper we present a non-trivial extension of this tableau construction, for the same logic fragment, that lifts this constraint.

    Compositionality in scenario-aware dataflow: a rendezvous perspective

    Finite-state machine-based scenario-aware dataflow (FSM-SADF) is a dynamic dataflow model of computation that combines streaming data and finite-state control. For the most part, it preserves the determinism of its underlying synchronous dataflow (SDF) concurrency model, and only when necessary introduces non-deterministic variation in terms of scenarios that are represented by SDF graphs. This puts FSM-SADF in a sweet spot in the trade-off space between expressiveness and analyzability. However, FSM-SADF supports no notion of compositionality, which hampers its usability in the modeling and consequent analysis of large systems. In this work we propose a compositional semantics for FSM-SADF that overcomes this problem. We base the semantics of the composition on standard composition of processes with rendezvous communication in the style of CCS or CSP at the control level, and on the parallel, serial and feedback composition of SDF graphs at the dataflow level. We evaluate the approach on a case study from the multimedia domain.

    On the discrete Gabor transform and the discrete Zak transform

    Gabor's expansion of a discrete-time signal into a set of shifted and modulated versions of an elementary signal (or synthesis window), and the inverse operation -- the Gabor transform -- with which Gabor's expansion coefficients can be determined, are introduced. It is shown how, in the case of a finite-support analysis window and with the help of an overlap-add technique, the discrete Gabor transform can be used to determine Gabor's expansion coefficients for a signal whose support is not finite. The discrete Zak transform is introduced, and it is shown how this transform, together with the discrete Fourier transform, can be used to represent the discrete Gabor transform and the discrete Gabor expansion in sum-of-products forms. It is shown how the sum-of-products form of the Gabor transform enables us to determine Gabor's expansion coefficients in a different way, to which fast algorithms can be applied. Using the sum-of-products forms, a relationship between the analysis window and the synthesis window is derived. It is shown how this relationship enables us to determine the optimum synthesis window, in the sense that it has minimum L2 norm, and it is shown that this optimum synthesis window best resembles the analysis window.
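The DFT structure that makes fast algorithms applicable is visible in the standard definition of the discrete Zak transform: for a signal of length L = M·N it is an M-point DFT over the shift index. A minimal pure-Python sketch of that definition (the function name and argument layout are illustrative, not from the paper):

```python
import cmath

def discrete_zak(x, N):
    """Discrete Zak transform of a length-L signal, L = M*N:

        Z[n][k] = sum_{m=0}^{M-1} x[n + m*N] * exp(-2j*pi*m*k / M),

    i.e. an M-point DFT over the shift index m for each residue n,
    which is why FFTs can be used to compute it efficiently.
    """
    L = len(x)
    assert L % N == 0, "signal length must be a multiple of N"
    M = L // N
    return [[sum(x[n + m * N] * cmath.exp(-2j * cmath.pi * m * k / M)
                 for m in range(M))
             for k in range(M)]
            for n in range(N)]

# A unit impulse at 0: its Zak transform is flat in k on residue n = 0
# and zero on the other residue.
Z = discrete_zak([1, 0, 0, 0], N=2)
```

Replacing the inner sum by an FFT over m turns the O(L·M) loop into the fast form the sum-of-products representation exploits.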

    Towards component-based (max,+) algebraic throughput analysis of hierarchical synchronous dataflow models

    Synchronous (or static) dataflow (SDF) is deemed the most stable and mature model for representing streaming systems. It is useful not only to reason about the functional behavior and correctness of such systems, but also about non-functional aspects, in particular timing and performance constraints. When talking about performance, throughput is a key metric. Within the SDF domain, hierarchical SDF models are of special interest as they enable compositional modeling, which is a necessity in the design of large systems. Techniques exist to analyze the throughput of synchronous dataflow models. If the model is hierarchical, it first needs to be flattened before these techniques can be applied (for exact analysis at least). Furthermore, all of these techniques are adversely affected by the increase in the graph's repetition vector entries. In this paper, for a loosely defined class of hierarchical synchronous dataflow models, we argue that these issues can be mitigated by taking advantage of the hierarchical structure rather than by flattening the graph. We propose a hierarchical extension to an existing technique that is based on the (max,+) algebraic semantics of SDF.
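In the (max,+) semantics underlying this line of work, one graph iteration maps a vector of token timestamps x to M ⊗ x, where (M ⊗ x)_i = max_j (M_ij + x_j), and the throughput is the reciprocal of M's maximum cycle mean. A generic sketch of that base technique (not the paper's hierarchical extension; the matrix and function names are illustrative):

```python
def maxplus_apply(M, x):
    """(max,+) matrix-vector product: y[i] = max_j (M[i][j] + x[j])."""
    return [max(mij + xj for mij, xj in zip(row, x)) for row in M]

def cycle_mean(M, iters=200):
    """Estimate the maximum cycle mean of M by iteration.

    x_k = M^k (x) x_0 grows linearly in k with slope equal to the
    maximum cycle mean, so max(x_k)/k converges to it; the graph's
    throughput is its reciprocal.
    """
    x = [0.0] * len(M)
    for _ in range(iters):
        x = maxplus_apply(M, x)
    return max(x) / iters

# Two-actor example: self-delays of 1 time unit, cross-delays 3 and 2.
M = [[1.0, 3.0], [2.0, 1.0]]
mcm = cycle_mean(M)        # dominant cycle: (3 + 2) over 2 steps = 2.5
throughput = 1.0 / mcm     # iterations per time unit
```

Flattening a hierarchical graph multiplies the dimension of M by the repetition vector entries, which is exactly the growth the component-based approach avoids.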

    Requirements on the execution of Kahn process networks

    Kahn process networks (KPNs) are a programming paradigm suitable for streaming-based multimedia and signal-processing applications. We discuss the execution of KPNs and the criteria for correct scheduling of their realisations. In [12], Parks shows how process networks can be scheduled in bounded memory; the proposed method is used in many implementations of KPNs. However, it does not result in the correct behaviour for all KPNs. We investigate the requirements for a scheduler to guarantee both correct and bounded execution of KPNs, and present an improved scheduling strategy that satisfies them.

    It's a matter of time: modeling and analysis of time-dependent systems using scenario-aware dataflow

    Finite-state machine-based scenario-aware dataflow (FSM-SADF) is a dynamic, non-deterministic dataflow model of computation that combines streaming data and finite-state control. However, FSM-SADF in its current state cannot be used in applications involving the modeling and analysis of systems whose behavior depends on explicit values of event timestamps. In this work we propose a compositional semantics for FSM-SADF that enables it to be used in the modeling and analysis of such systems. We base the semantics of the composition on standard composition of processes with conditional rendezvous communication at the control level and compositions of SDF graphs at the dataflow level. We evaluate the approach on a case study from the multimedia domain, in the context of first-come, first-served schedulers.

    The earlier the better: a theory of timed actor interfaces

    Programming embedded and cyber-physical systems requires attention not only to functional behavior and correctness, but also to non-functional aspects, specifically timing and performance constraints. A structured, compositional, model-based approach based on stepwise refinement and abstraction techniques can support the development process, increase its quality and reduce development time through automation of synthesis, analysis or verification. For this purpose, we introduce in this paper a general theory of timed actor interfaces. Our theory supports a notion of refinement based on the principle of worst-case design that permeates the world of performance-critical systems. This is in contrast with classical behavioral and functional refinements based on restricting or enlarging sets of behaviors. An important feature of our refinement is that it allows time-deterministic abstractions to be made of time-non-deterministic systems, improving efficiency and reducing the complexity of formal analysis. We also show how our theory relates to, and can be used to reconcile, a number of existing time and performance models, and how their established theories can be exploited to represent and analyze interface specifications and refinement steps.
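The "earlier the better" refinement principle can be illustrated as a toy: model a timed actor as a function from input event timestamps to output event timestamps, and let one actor replace another in a worst-case analysis only if its outputs are never later. This is a sampled sketch under that reading of the principle (the actors, trace encoding and helper name are hypothetical, not the paper's formal interfaces):

```python
def refines(impl, spec, sample_traces):
    """Sampled check of 'earlier is better' dominance.

    impl may conservatively replace spec in a worst-case analysis if,
    for every input trace, each of impl's output timestamps is no later
    than spec's. The actual theory quantifies over all traces; this
    only tests the given samples.
    """
    return all(ti <= ts
               for trace in sample_traces
               for ti, ts in zip(impl(trace), spec(trace)))

# Hypothetical actors: a 2-unit-delay implementation and its
# conservative 5-unit worst-case (time-deterministic) abstraction.
fast = lambda ts: [t + 2 for t in ts]
slow = lambda ts: [t + 5 for t in ts]

ok = refines(fast, slow, [[0, 1, 4], [10, 12]])  # fast is never later
```

The asymmetry is the point: the 5-unit abstraction is safe to analyse in place of any implementation that refines it, while the converse check fails.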

    Symbolic analysis of dataflow applications mapped onto shared heterogeneous resources

    Embedded streaming applications require design-time temporal analysis to verify real-time constraints such as throughput and latency. In this paper, we introduce a new analytical technique to compute temporal bounds of streaming applications mapped onto a shared multiprocessor platform. We use an expressively rich application model that supports adaptive applications whose graph structure, execution times and data rates may change dynamically. The analysis technique combines symbolic simulation in (max,+) algebra with worst-case resource availability curves. It further enables a tighter performance guarantee by improving the worst-case response times (WCRTs) of service requests that arrive in the same busy time. Evaluation on real-life application graphs shows that the technique is tens of times faster than the state of the art and enables tighter throughput guarantees, up to a factor of 4, compared to typical worst-case analysis.
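The busy-time improvement mentioned above can be illustrated with a standard latency-rate availability curve β(d) = max(0, ρ·(d − θ)): charging the latency θ once per busy period, rather than once per request, tightens the WCRTs of requests that share a busy time. This is a generic sketch of that idea, not the paper's algorithm; the function name, request encoding and busy-period heuristic are illustrative:

```python
def finish_times(requests, rho, theta):
    """Conservative completion times for (arrival, work) requests on a
    resource guaranteed at least rho*(d - theta) service in any window
    of length d (a latency-rate availability curve).

    Requests arriving within the same busy time share one service
    window: demand accumulates from the busy period's start, so the
    latency theta is charged once per busy period instead of once per
    request, which tightens the per-request bounds.
    """
    done, busy_start, cum = [], 0.0, 0.0
    for i, (arr, work) in enumerate(sorted(requests)):
        if i == 0 or arr > done[-1]:   # resource idle: new busy period
            busy_start, cum = arr, 0.0
        cum += work
        # earliest d with rho*(d - theta) >= cum, from busy_start
        done.append(busy_start + theta + cum / rho)
    return done

# Two requests in one busy period (rho = 1, theta = 1): the second is
# bounded by 4.0, whereas re-charging theta per request would give 5.0.
ft = finish_times([(0.0, 2.0), (1.0, 1.0)], rho=1.0, theta=1.0)
```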

    End-to-end latency analysis of dataflow scenarios mapped onto shared heterogeneous resources

    The design of embedded wireless and multimedia applications requires temporal analysis to verify that real-time constraints such as throughput and latency are met. This paper presents a design-time analytical approach to derive a conservative upper bound on the maximum end-to-end latency of a streaming application. Existing analytical approaches often assume static application models, which cannot cope with the data-dependent execution of dynamic streaming applications. Consequently, they give overly pessimistic upper bounds. In this work, we use an expressively richer dataflow model of computation as the application model. The model supports adaptive applications that change their graph structure, execution times and data rates depending on their mode of operation, or scenario. We first formalize the latency analysis problem in the presence of dynamically switching scenarios. We characterize each scenario with a compact matrix in (max,+) algebra using a symbolic execution of one graph iteration. The resulting matrices are then composed to derive a bound on the end-to-end latency under a periodic source. Aperiodic sources such as sporadic streams can be analyzed by reduction to a periodic reference. We demonstrate the applicability of the technique with dataflow models from the wireless application domain. Moreover, the method is illustrated with a trade-off analysis of resource reservation under a throughput constraint. The evaluation shows that the approach has a low run-time, which enables it to be effectively integrated in multiprocessor design-flows for streaming applications.