8 research outputs found

    Modelling and analysis of multi-scale streaming applications


    Compositional dataflow modelling for cyclo-static applications

    Modular design is a common practice when designing complex applications for embedded systems. Another important practice in the embedded systems domain is the use of abstract models to realize predictable behaviour. Modular model-based design makes it possible to construct a modular model of a complex system via model composition. The model of computation considered in this paper is scenario-aware dataflow, a dataflow model that allows for dynamic behaviour. We model applications whose behaviour changes according to a periodic pattern. Composing models with periodic patterns yields a model whose periodic pattern repeats with the common hyper-period of the constituent patterns. We propose an efficient algorithmic method to compose cyclo-static scenario-aware dataflow models by generating composite patterns in a concise representation. We show that our approach can automatically generate concise models of several real-life image processing applications.
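
    A minimal sketch of the composition step may help. The Python snippet below (hypothetical names, not the paper's algorithm) shows the naive baseline: expanding each periodic scenario pattern to the common hyper-period, the least common multiple of the pattern lengths, and pairing the scenarios position by position. The paper's contribution is precisely a concise representation that avoids this full expansion.

        from math import lcm

        def compose_cyclo_static(patterns):
            # Compose periodic scenario patterns into one composite pattern.
            # Each pattern is a list of scenario names; the composite repeats
            # with the hyper-period (LCM) of the individual pattern lengths.
            hyper = lcm(*(len(p) for p in patterns))
            # Tile every pattern up to the hyper-period and pair the scenarios
            # that are active at the same position of the composite pattern.
            return [tuple(p[i % len(p)] for p in patterns) for i in range(hyper)]

        # Example: patterns of length 2 and 3 compose into one of length lcm(2, 3) = 6.
        print(compose_cyclo_static([["a", "b"], ["x", "y", "z"]]))
        # [('a', 'x'), ('b', 'y'), ('a', 'z'), ('b', 'x'), ('a', 'y'), ('b', 'z')]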

    Firmness analysis of real-time applications under static-priority preemptive scheduling

    (m, k)-firm real-time tasks must meet the deadlines of at least m jobs out of any k consecutive jobs to satisfy the firmness requirement. Scheduling an (m, k)-firm task requires firmness analysis, whose results are used to provide system-level guarantees on the satisfaction of firmness conditions. We address the firmness analysis of an (m, k)-firm task that is to be added to a set of asynchronous tasks scheduled under a Static-Priority Preemptive (SPP) policy. One of the main causes of deadline misses in periodic tasks running under an SPP policy is interference from higher-priority tasks. Since the synchrony between the newly added task and the higher-priority tasks is unknown, the interference from the higher-priority tasks is also unknown. We propose an analytic Firmness Analysis (FAn) method that obtains a synchrony maximizing the guaranteed minimum number of deadline-hit jobs in any k consecutive jobs of the task. The scalability of FAn is compared with that of existing work (a brute-force search approach) and with a timed-automata model of the problem analysed using the reachability check of the Uppaal model checker. Our method substantially reduces the complexity of the analysis.
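
    The firmness condition itself is easy to state in code. The following sketch (illustrative only, not the FAn method) checks whether a sequence of job outcomes satisfies (m, k)-firmness, i.e. whether every window of k consecutive jobs contains at least m deadline hits:

        def is_mk_firm(hits, m, k):
            # hits is a boolean list, one entry per consecutive job
            # (True = deadline met). The (m, k)-firmness condition holds
            # iff every window of k consecutive jobs has at least m hits.
            if len(hits) < k:
                return sum(hits) >= m  # not enough jobs for a full window
            return all(sum(hits[i:i + k]) >= m for i in range(len(hits) - k + 1))

        # Example with m = 2, k = 3: every 3 consecutive jobs need at least 2 hits.
        print(is_mk_firm([True, True, False, True, True, False, True], 2, 3))  # True
        print(is_mk_firm([True, False, False, True], 2, 3))                    # False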

    Scalable analysis for multi-scale dataflow models

    Multi-scale dataflow models have actors acting at multiple granularity levels, e.g., a dataflow model of a video-processing application with operations at the frame, line, and pixel level. State-of-the-art timing analysis methods for both static and dynamic dataflow types aggregate the behaviours across all granularity levels into one, often large, iteration, which is repeated without exploiting the structure within it. This poses scalability issues for dataflow analysis, because the behaviour of the large iteration is analysed by some form of simulation that involves a large number of actor firings. We take a fresh perspective on what happens inside the large iteration. We take advantage of the fact that the iteration is a sequence of smaller behaviours, each captured in a scenario, that are typically repeated many times. We use the (max, +) linear model of dataflow to represent each scenario with a matrix. This allows a compositional worst-case throughput analysis of the repeated scenarios by raising the matrices to the power of the number of repetitions, which scales logarithmically with the number of repetitions, whereas the existing throughput analysis scales linearly. We moreover provide the first exact worst-case latency analysis for scenario-aware dataflow. This compositional latency analysis also scales logarithmically when applied to multi-scale dataflow models. We apply our new throughput and latency analyses to several realistic applications. The results confirm that our approach provides a fast and accurate analysis.
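
    The logarithmic scaling comes from exponentiation by repeated squaring in the (max, +) semiring. A minimal Python sketch, assuming the scenario matrices have already been extracted from the dataflow model (function names are hypothetical):

        import numpy as np

        NEG_INF = float("-inf")

        def maxplus_mul(A, B):
            # (max, +) matrix product: C[i][j] = max_k (A[i][k] + B[k][j]).
            n, p = A.shape[0], B.shape[1]
            C = np.full((n, p), NEG_INF)
            for i in range(n):
                for j in range(p):
                    C[i, j] = np.max(A[i, :] + B[:, j])
            return C

        def maxplus_power(M, e):
            # Raise M to the e-th (max, +) power by repeated squaring,
            # using O(log e) products instead of e - 1 products.
            result = np.full(M.shape, NEG_INF)   # (max, +) identity:
            np.fill_diagonal(result, 0.0)        # 0 on the diagonal, -inf elsewhere
            while e > 0:
                if e & 1:
                    result = maxplus_mul(result, M)
                M = maxplus_mul(M, M)
                e >>= 1
            return result

        # Example: a 2x2 scenario matrix raised to 1000 repetitions.
        M = np.array([[2.0, NEG_INF], [3.0, 1.0]])
        print(maxplus_power(M, 1000))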

    Modeling and analysis of FPGA accelerators for real-time streaming video processing in the healthcare domain

    Complex real-time video-processing applications with strict throughput constraints are common in the healthcare domain. The video-processing chain is implemented as Field-Programmable Gate Array (FPGA) accelerators (processing blocks) communicating through a number of First-In First-Out (FIFO) buffers. The FIFO buffers are built from Block RAM (BRAM), which is limited in availability. A key design question is therefore how to size the FIFO buffers with respect to the throughput constraint. In this paper, we use model-driven analysis and detailed hardware-level simulation to address the question of buffer dimensioning in an efficient way. Using a Cyclo-Static Dataflow (CSDF) model and an optimization method, we identify and optimize the FIFO buffers. The results are confirmed using a detailed hardware-level simulation and validated by comparison with VHDL simulations. The technique is illustrated on a use case from Philips Healthcare Image Guided Therapy (IGT) on the imaging pipeline of an interventional X-ray (iXR) system.
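
    The effect of FIFO sizing on throughput can be illustrated with a toy stand-in for the CSDF model: a single-rate producer/consumer pipeline where the producer blocks when the FIFO is full. The paper analyses the real multi-rate chain; all names and numbers below are hypothetical.

        def pipeline_throughput(t_prod, t_cons, buf_size, n_tokens=10_000):
            # Simulate producer -> FIFO -> consumer with blocking writes and
            # estimate the steady-state throughput. The producer may write
            # token k only once the consumer has freed a slot, i.e. after it
            # has consumed token k - buf_size.
            fin_prod = [0.0] * n_tokens
            fin_cons = [0.0] * n_tokens
            for k in range(n_tokens):
                prev_prod = fin_prod[k - 1] if k > 0 else 0.0
                slot_free = fin_cons[k - buf_size] if k >= buf_size else 0.0
                fin_prod[k] = max(prev_prod, slot_free) + t_prod
                prev_cons = fin_cons[k - 1] if k > 0 else 0.0
                fin_cons[k] = max(prev_cons, fin_prod[k]) + t_cons
            return n_tokens / fin_cons[-1]

        # Smallest FIFO meeting a (hypothetical) throughput constraint:
        for b in range(1, 5):
            print(b, round(pipeline_throughput(1.0, 2.0, b), 3))
        # buffer 1 yields ~0.333; buffer 2 already reaches the bottleneck rate ~0.5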

    Monotonic optimization of dataflow buffer sizes

    Many high data-rate video-processing applications are subject to a trade-off between throughput and the sizes of the buffers in the system (the storage distribution). These applications have strict throughput requirements, as throughput directly relates to functional correctness. Furthermore, the size of the storage distribution relates to resource usage, which should be minimized in many practical cases. The computation kernels of high data-rate video-processing applications can often be specified by cyclo-static dataflow graphs. We therefore study the problem of minimizing the total (weighted) size of the storage distribution under a throughput constraint for cyclo-static dataflow graphs. By combining ideas from the area of monotonic optimization with the causal dependency analysis of a state-of-the-art storage optimization approach, we create an algorithm that scales better than the state-of-the-art approach. Our algorithm can provide a solution and a bound on the suboptimality of this solution at any time, and it iteratively improves these until the optimal solution is found. We evaluate our algorithm on several models from the literature and on models of a high data-rate video-processing application from the healthcare domain. Our experiments show performance improvements of up to several orders of magnitude.
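
    The monotonicity idea can be sketched in a few lines. This is not the paper's algorithm (which combines monotonic optimization with causal dependency analysis); it only illustrates how monotone feasibility enables an optimal search: since enlarging any buffer never lowers throughput, exploring storage distributions in order of nondecreasing total weighted size means the first feasible one found is optimal, and the cost of the cheapest unexplored distribution bounds the suboptimality at any time.

        import heapq

        def min_buffer_distribution(n_buffers, weights, is_feasible, max_size=64):
            # is_feasible(sizes) is an oracle (e.g. a dataflow throughput
            # analysis) reporting whether the throughput constraint is met.
            start = (1,) * n_buffers
            heap = [(sum(w * s for w, s in zip(weights, start)), start)]
            seen = {start}
            while heap:
                cost, sizes = heapq.heappop(heap)
                if is_feasible(sizes):
                    return cost, sizes  # optimal: nothing cheaper remains
                for i in range(n_buffers):
                    if sizes[i] < max_size:
                        nxt = sizes[:i] + (sizes[i] + 1,) + sizes[i + 1:]
                        if nxt not in seen:
                            seen.add(nxt)
                            heapq.heappush(heap, (cost + weights[i], nxt))
            return None

        # Toy oracle: the constraint holds once both buffers are large enough.
        print(min_buffer_distribution(2, (1, 3), lambda s: s[0] >= 3 and s[1] >= 2))
        # -> (9, (3, 2))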

    xCPS: a tool to explore cyber physical systems

    Cyber-Physical Systems (CPS) play an important role in the modern high-tech industry. Designing such systems is an especially challenging task due to their multi-disciplinary nature and the range of abstraction levels involved. To facilitate hands-on experience with such systems, we have developed a cyber-physical platform that aids in both research and education on CPS. This paper describes this platform, which contains all typical CPS components. The platform is used in various research and education projects for bachelor's, master's, and PhD students. We discuss the platform and illustrate its use with a number of projects and the educational opportunities they provide.