    A model-based approach for the specification and refinement of streaming applications

    Embedded systems can be found in a wide range of applications, and depending on the application they must meet a wide range of constraints. Designing and programming embedded systems is therefore a challenging task, and model-based design flows can be a solution. This thesis proposes novel approaches for the specification and refinement of streaming applications, focusing on dataflow models. As a key result, the proposed dataflow model provides a seamless model-based design flow from the system level down to the instruction/logic level for a wide range of streaming applications.

    Modeling and Mapping of Optimized Schedules for Embedded Signal Processing Systems

    The demand for Digital Signal Processing (DSP) in embedded systems has been increasing rapidly due to the proliferation of multimedia- and communication-intensive devices such as pervasive tablets and smart phones. Efficient implementation of embedded DSP systems requires integration of diverse hardware and software components, as well as dynamic workload distribution across heterogeneous computational resources. The former implies increased complexity of application modeling and analysis, but also brings enhanced potential for achieving improved energy consumption, cost or performance. The latter results from the increased use of dynamic behavior in embedded DSP applications. Furthermore, parallel programming is highly relevant in many embedded DSP areas due to the development and use of Multiprocessor System-On-Chip (MPSoC) technology. The need for efficient cooperation among different devices supporting diverse parallel embedded computations motivates high-level modeling that expresses dynamic signal processing behaviors and supports efficient task scheduling and hardware mapping. Starting with dynamic modeling, this thesis develops a systematic design methodology that supports functional simulation and hardware mapping of dynamic reconfiguration based on Parameterized Synchronous Dataflow (PSDF) graphs. By building on the DIF (Dataflow Interchange Format), which is a design language and associated software package for developing and experimenting with dataflow-based design techniques for signal processing systems, we have developed a novel tool for functional simulation of PSDF specifications. This simulation tool allows designers to model applications in PSDF and simulate their functionality, including use of the dynamic parameter reconfiguration capabilities offered by PSDF. With the help of this simulation tool, our design methodology helps to map PSDF specifications into efficient implementations on field programmable gate arrays (FPGAs). Furthermore, valid schedules can be derived from the PSDF models at runtime to adapt hardware configurations based on changing data characteristics or operational requirements. Under certain conditions, efficient quasi-static schedules can be applied to reduce overhead and enhance predictability in the scheduling process. Motivated by the fact that scheduling is critical to performance and to efficient use of dynamic reconfiguration, we have focused on a methodology for schedule design, which complements the emphasis on automated schedule construction in the existing literature on dataflow-based design and implementation. In particular, we have proposed a dataflow-based schedule design framework called the dataflow schedule graph (DSG), which provides a graphical framework for schedule construction based on dataflow semantics, and can also be used as an intermediate representation target for automated schedule generation. Our approach to applying the DSG in this thesis emphasizes schedule construction as a design process rather than an outcome of the synthesis process. Our approach employs dataflow graphs for representing both application models and schedules that are derived from them. By providing a dataflow-integrated framework for unambiguously representing, analyzing, manipulating, and interchanging schedules, the DSG facilitates effective codesign of dataflow-based application models and schedules for execution of these models. 
As multicore processors are deployed in an increasing variety of embedded image processing systems, effective utilization of resources such as multiprocessor system-on-chip (MPSoC) devices, and effective handling of implementation concerns such as memory management and I/O, become critical to developing efficient embedded implementations. However, the diversity and complexity of applications and architectures in embedded image processing systems make the mapping of applications onto MPSoCs difficult. We help to address this challenge through a structured design methodology that is built upon the DSG modeling framework. We refer to this methodology as the DEIPS methodology (DSG-based design and implementation of Embedded Image Processing Systems). The DEIPS methodology provides a unified framework for joint consideration of DSG structures and the application graphs from which they are derived, which allows designers to integrate considerations of parallelization and resource constraints together with the application modeling process. We demonstrate the DEIPS methodology through case studies on practical embedded image processing systems.
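
    To make the scheduling ideas above concrete, the following is a minimal, self-contained C++ sketch of static dataflow scheduling: it hard-codes a toy three-actor synchronous dataflow graph, states its repetitions vector, and builds one schedule iteration by firing actors whose input buffers hold enough tokens. The actor names, token rates, and single-processor firing order are invented for illustration; this is not the DIF, PSDF, or DSG tooling described in the abstract.

        // Minimal sketch of static scheduling for a synchronous dataflow (SDF) graph.
        // Everything here (actor names, token rates, firing order) is invented for
        // illustration; this is not the DIF, PSDF, or DSG tooling itself.
        #include <iostream>
        #include <map>
        #include <string>
        #include <vector>

        struct Edge {
            std::string src, dst;   // producing and consuming actors
            int prod, cons;         // tokens produced / consumed per firing
            int tokens;             // current buffer occupancy
        };

        int main() {
            // Toy pipeline A -> B -> C with mismatched rates.
            std::vector<Edge> edges = {{"A", "B", 2, 3, 0}, {"B", "C", 1, 2, 0}};
            // Repetitions vector solving the balance equations
            // 2*q[A] = 3*q[B] and 1*q[B] = 2*q[C], i.e. q = (A:3, B:2, C:1).
            std::map<std::string, int> remaining = {{"A", 3}, {"B", 2}, {"C", 1}};
            std::vector<std::string> actors = {"A", "B", "C"};
            std::vector<std::string> schedule;

            // Build one iteration of a periodic schedule: repeatedly fire any actor
            // that still has firings left and whose input buffers hold enough tokens.
            bool progress = true;
            while (progress) {
                progress = false;
                for (const std::string& a : actors) {
                    if (remaining[a] == 0) continue;
                    bool canFire = true;
                    for (const Edge& e : edges)
                        if (e.dst == a && e.tokens < e.cons) canFire = false;
                    if (!canFire) continue;
                    for (Edge& e : edges) {
                        if (e.dst == a) e.tokens -= e.cons;
                        if (e.src == a) e.tokens += e.prod;
                    }
                    schedule.push_back(a);
                    --remaining[a];
                    progress = true;
                }
            }
            for (const std::string& a : schedule) std::cout << a << ' ';
            std::cout << '\n';   // one valid single-processor iteration, e.g. A A B A B C
            return 0;
        }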

    A Holistic Approach to Functional Safety for Networked Cyber-Physical Systems

    Functional safety is a significant concern in today's networked cyber-physical systems such as connected machines, autonomous vehicles, and intelligent environments. Simulation is a well-known methodology for the assessment of functional safety. Simulation models of networked cyber-physical systems are very heterogeneous, spanning digital hardware, analog hardware, and network domains. Current functional safety assessment focuses mainly on digital hardware failures, while little attention is devoted to analog hardware and none at all to the interconnecting network. This work argues that in networked cyber-physical systems, dependability must be verified not only for the nodes in isolation but also by taking into account their interaction through the communication channel. For this reason, this work proposes a holistic methodology for simulation-based safety assessment in which safety mechanisms are tested in a simulation environment reproducing the high-level behavior of digital hardware, analog hardware, and network communication. The methodology relies on three main automatic processes: 1) abstraction of analog models to transform them into system-level descriptions, 2) synthesis of network infrastructures to combine multiple cyber-physical systems, and 3) multi-domain fault injection in the digital, analog, and network domains. Ultimately, the flow produces a homogeneous, optimized description written in C++ for fast and reliable simulation, which can serve many applications. The focus of this thesis is performing extensive fault simulation and evaluating different functional safety metrics, e.g., fault and diagnostic coverage of all the safety mechanisms.
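
    As a rough illustration of the multi-domain fault-injection idea, the sketch below runs a tiny C++ fault-injection campaign over an invented node model: one analog drift fault, one digital bit flip, and one lost network message, each checked against a simple plausibility-based safety mechanism. All names, fault values, and the toy coverage metric are assumptions made for this example; the methodology described above generates such C++ simulation models automatically rather than by hand.

        // Illustrative sketch of a tiny multi-domain fault-injection campaign.
        // The node model, fault list and safety mechanism are invented for the example.
        #include <cmath>
        #include <cstdint>
        #include <iostream>
        #include <string>
        #include <vector>

        // Golden (fault-free) behaviour of a trivial "node": scale a sensor sample.
        double node(double sensor, uint8_t gain) { return sensor * gain; }

        struct Fault {
            std::string domain;   // "analog", "digital" or "network"
            double analogOffset;  // additive drift on the sensor value
            uint8_t bitMask;      // bit-flip mask applied to the digital gain
            bool dropMessage;     // network fault: output never reaches the checker
        };

        int main() {
            const double sensor = 1.0;
            const uint8_t gain = 4;
            const double golden = node(sensor, gain);

            std::vector<Fault> faults = {
                {"analog", 0.5, 0x00, false},   // sensor drift
                {"digital", 0.0, 0x01, false},  // bit flip in the gain register
                {"network", 0.0, 0x00, true},   // lost message
            };

            int detected = 0;
            for (const Fault& f : faults) {
                double out = node(sensor + f.analogOffset,
                                  static_cast<uint8_t>(gain ^ f.bitMask));
                // Safety mechanism: a plausibility check plus a delivery timeout.
                bool alarm = f.dropMessage || std::fabs(out - golden) > 0.1;
                std::cout << f.domain << (alarm ? ": detected\n" : ": undetected\n");
                if (alarm) detected += 1;
            }
            // Fault coverage of the (toy) safety mechanism over this fault list.
            std::cout << "coverage = " << detected << "/" << faults.size() << "\n";
            return 0;
        }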

    Methoden und Beschreibungssprachen zur Modellierung und Verifikation von Schaltungen und Systemen: MBMV 2015 - Tagungsband, Chemnitz, 03. - 04. März 2015

    The workshop Methoden und Beschreibungssprachen zur Modellierung und Verifikation von Schaltungen und Systemen (MBMV 2015, Methods and Description Languages for the Modelling and Verification of Circuits and Systems) is now taking place for the 18th time. This year it is hosted by the Chair of Circuit and System Design at Technische Universität Chemnitz and the Steinbeis-Forschungszentrum Systementwurf und Test. The workshop aims to discuss the latest trends, results, and open problems in the field of modelling and verification methods as well as description languages for digital, analog, and mixed-signal circuits, and is thus intended as a forum for the exchange of ideas. It also offers a platform for exchange between research and industry and for maintaining existing contacts and establishing new ones. It allows young researchers to present their ideas and approaches to a broad audience from academia and industry and to discuss them in depth during the event. Its long history has made it a fixture in many event calendars. Traditionally, the meetings of the ITG expert groups are co-located with the workshop. This year, two projects funded by the Federal Ministry of Education and Research within the InnoProfile-Transfer initiative are using the workshop to present their research results to a broad audience in two dedicated tracks. Representatives of the projects Generische Plattform für Systemzuverlässigkeit und Verifikation (GPZV) and GINKO - Generische Infrastruktur zur nahtlosen energetischen Kopplung von Elektrofahrzeugen present parts of their current work. This enriches the workshop with additional topics and provides a valuable complement to the authors' contributions. [... from the preface]

    Nouvelles approches pour la conception d'outils CAO pour le domaine des systèmes embarqués

    Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal.

    A Rapid Prototyping Tool for Embedded, Real-Time Hierarchical Control Systems

    Laboratory Virtual Instrumentation and Engineering Workbench (LabVIEW) is a graphical programming tool based on the dataflow language G. Recently, runtime support for a hard real-time environment has become available for LabVIEW, which makes it an option for embedded systems prototyping. Due to its characteristics, the environment presents itself as an ideal tool for both the design and implementation of embedded software. In this paper, we study the design and implementation of embedded software by using G as the specification language and the LabVIEW RT real-time platform. One of the main advantages of this approach is that the environment lends itself to a very smooth transition from design to implementation, allowing for powerful cosimulation strategies (e.g., hardware in the loop, runtime modeling). We characterize the semantics and formal model of computation of G. We compare it to other models of computation and develop design rules and algorithms to propose sound embedded designs in the language. We investigate the specification and mapping of hierarchical control systems in LabVIEW and G. Finally, we describe the development of a state-of-the-art embedded motion control system using LabVIEW as the specification, simulation, and implementation tool, following the proposed design principles. The solution is state-of-the-art in terms of flexibility and control performance.
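
    The kind of hierarchical control structure mentioned above can be pictured as an outer loop feeding setpoints to a faster inner loop. The C++ sketch below, with invented gains, loop rates, and a first-order plant, is only a schematic stand-in for the G/LabVIEW motion-control designs discussed in the paper.

        // Minimal sketch of a two-level (hierarchical) control loop: an outer position
        // controller at 100 Hz feeds velocity setpoints to an inner velocity controller
        // at 1 kHz. Gains, rates and the plant model are invented for illustration.
        #include <iostream>

        int main() {
            const double dtInner = 0.001;          // 1 kHz inner (velocity) loop
            const int innerPerOuter = 10;          // outer (position) loop at 100 Hz
            const double kpPos = 5.0, kpVel = 2.0; // simple proportional gains

            double position = 0.0, velocity = 0.0;
            const double positionTarget = 1.0;
            double velocityCommand = 0.0;

            for (int outer = 0; outer < 200; ++outer) {
                // Outer loop: turn position error into a velocity setpoint.
                velocityCommand = kpPos * (positionTarget - position);
                for (int inner = 0; inner < innerPerOuter; ++inner) {
                    // Inner loop: track the velocity setpoint with a first-order plant.
                    double accel = kpVel * (velocityCommand - velocity);
                    velocity += accel * dtInner;
                    position += velocity * dtInner;
                }
            }
            std::cout << "final position = " << position << "\n"; // approaches the target of 1.0
            return 0;
        }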

    Systematic Design Space Exploration of Dynamic Dataflow Programs for Multi-core Platforms

    The limitations of clock frequency and power dissipation of deep sub-micron CMOS technology have led to the development of massively parallel computing platforms. They consist of dozens or hundreds of processing units and offer a high degree of parallelism. Taking advantage of that parallelism and transforming it into high program performance requires the use of appropriate parallel programming models and paradigms. Currently, a common practice is to develop parallel applications using methods evolving directly from sequential programming models. However, these lack the abstractions to properly express the concurrency of the processes. An alternative approach is to implement dataflow applications, where algorithms are described in terms of streams and operators, so that their parallelism is directly exposed. Since algorithms are described in an abstract way, they can be easily ported to different types of platforms. Several dataflow models of computation (MoCs) have been formalized so far. They differ in terms of their expressiveness (ability to handle dynamic behavior) and complexity of analysis. So far, most research efforts have focused on the simpler cases of static dataflow MoCs, where many analyses are possible at compile time and several optimization problems are greatly simplified. At the same time, for the most expressive and most difficult to analyze class, dynamic dataflow (DDF), there is still a dearth of tools supporting a systematic and automated analysis that minimizes the programming effort of the designer. The objective of this thesis is to provide a complete framework to analyze, evaluate, and refactor DDF applications expressed in the RVC-CAL language. The methodology relies on a systematic design space exploration (DSE) examining different design alternatives in order to optimize the chosen objective function while satisfying the constraints. The research contributions start from a rigorous DSE problem formulation. This provides a basis for the definition of a complete and novel analysis methodology enabling systematic performance improvements of DDF applications. The stages of the methodology include exploration heuristics, performance estimation, and identification of refactoring directions. All of the stages are implemented as appropriate software tools. The contributions are substantiated by several experiments performed with complex dynamic applications on different types of physical platforms.
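
    For intuition about what a design space exploration over mappings looks like, here is a deliberately small C++ sketch that enumerates every assignment of four invented actors to two cores and keeps the mapping with the lowest estimated makespan. The cost model (pure load balancing, no communication delay) and the exhaustive enumeration are simplifications for illustration; the framework described above relies on exploration heuristics, performance estimation, and RVC-CAL analysis instead.

        // Toy exhaustive design-space exploration: assign each actor of a small dataflow
        // program to one of two cores and keep the mapping with the smallest estimated
        // makespan. Actor names and costs are invented for the example.
        #include <algorithm>
        #include <array>
        #include <iostream>
        #include <string>

        int main() {
            const std::array<std::string, 4> actors = {"parse", "decode", "filter", "sink"};
            const std::array<int, 4> cost = {4, 9, 6, 2};   // estimated execution times

            int bestMakespan = 1 << 30;
            int bestMapping = 0;
            // Each bit of 'm' chooses the core for one actor: 2^4 = 16 design points.
            for (int m = 0; m < (1 << actors.size()); ++m) {
                std::array<int, 2> load = {0, 0};
                for (size_t i = 0; i < actors.size(); ++i)
                    load[(m >> i) & 1] += cost[i];
                int makespan = std::max(load[0], load[1]);
                if (makespan < bestMakespan) { bestMakespan = makespan; bestMapping = m; }
            }

            std::cout << "best makespan = " << bestMakespan << "\n";
            for (size_t i = 0; i < actors.size(); ++i)
                std::cout << actors[i] << " -> core " << ((bestMapping >> i) & 1) << "\n";
            return 0;
        }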

    Software tools for the rapid development of signal processing and communications systems on configurable platforms

    Programmers and engineers in the domains of high performance computing (HPC) and electronic system design have a shared goal: to define a structure for coordination and communication between nodes in a highly parallel network of processing tasks. Practitioners in both of these fields have recently encountered additional constraints that motivate the use of multiple types of processing device in a hybrid or heterogeneous platform, but constructing a working "program" to be executed on such an architecture is very time-consuming with current domain-specific design methodologies. In the field of HPC, research has proposed solutions involving the use of alternative computational devices such as FPGAs (field-programmable gate arrays), since these devices can exhibit much greater performance per unit of power consumption. The appeal of integrating these devices into traditional microprocessor-based systems is diminished, however, by the greater difficulty in constructing a system for the resulting hybrid platform. In the field of electronic system design, a similar integration problem exists. Many of the highly parallel FPGA-based systems that Xilinx and its customers produce for applications such as telecommunications and video processing require the additional use of one or more microprocessors, but coordinating the interactions between existing FPGA cores and software running on the microprocessors is difficult. The aim of my project is to improve the design flow for hybrid systems by proposing, firstly, an abstract representation of these systems and their components which captures in metadata their different models of computation and communication; secondly, novel design checking, exploration and optimisation techniques based around this metadata; and finally, a novel design methodology in which component and system metadata is used to generate software simulation models. The effectiveness of this approach will be evaluated through the implementation of two physical-layer telecommunications system models that meet the requirements of the 3GPP "LTE" standard, which is commercially relevant to Xilinx and many other organisations.
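
    One way to picture the proposed metadata-driven design checking is a small data structure that records each component's model of computation and the interface style of its ports, plus a check that flags incompatible connections. The C++ sketch below uses invented enum values, port names, and components; it is not the actual metadata schema of the project described above.

        // Illustrative sketch of component metadata for a hybrid CPU/FPGA design flow:
        // each component records its model of computation and per-port interface type,
        // and a connection checker flags mismatches before simulation. All fields and
        // values are invented placeholders.
        #include <iostream>
        #include <string>
        #include <vector>

        enum class Moc { Dataflow, SoftwareThread, StateMachine };
        enum class Interface { Streaming, MemoryMapped };

        struct Port { std::string name; Interface kind; };

        struct Component {
            std::string name;
            Moc moc;
            std::vector<Port> ports;
        };

        // A connection is only accepted when both ends use the same interface style;
        // otherwise the tool would have to insert an adapter (not modelled here).
        bool compatible(const Component& a, const std::string& pa,
                        const Component& b, const std::string& pb) {
            auto find = [](const Component& c, const std::string& p) -> const Port* {
                for (const Port& port : c.ports)
                    if (port.name == p) return &port;
                return nullptr;
            };
            const Port* x = find(a, pa);
            const Port* y = find(b, pb);
            return x && y && x->kind == y->kind;
        }

        int main() {
            Component fft{"fft_core", Moc::Dataflow, {{"out", Interface::Streaming}}};
            Component cpuTask{"scheduler", Moc::SoftwareThread, {{"in", Interface::MemoryMapped}}};
            std::cout << (compatible(fft, "out", cpuTask, "in")
                              ? "connection ok\n"
                              : "mismatch: adapter or redesign needed\n");
            return 0;
        }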

    Improving Model-Based Software Synthesis: A Focus on Mathematical Structures

    Computer hardware keeps increasing in complexity, and software design needs to keep up with this. The right models and abstractions empower developers to leverage the novelties of modern hardware. This thesis deals primarily with Models of Computation, as a basis for software design, in a family of methods called software synthesis. We focus on Kahn Process Networks and dataflow applications as abstractions, both for programming and for deriving an efficient execution on heterogeneous multicores. The latter we accomplish by exploring the design space of possible mappings of computation and data to hardware resources. Mapping algorithms are not at the center of this thesis, however. Instead, we examine the mathematical structure of the mapping space, leveraging its inherent symmetries or geometric properties to improve mapping methods in general. This thesis thoroughly explores the process of model-based design, aiming to go beyond the more established software synthesis on dataflow applications. We start with the problem of assessing these methods through benchmarking, and go on to formally examine the general goals of benchmarks. In this context, we also consider the role modern machine learning methods play in benchmarking. We explore different established semantics, stretching the limits of Kahn Process Networks. We also discuss novel models, like Reactors, which are designed to be a deterministic, adaptive model with time as a first-class citizen. By investigating abstractions and transformations in the Ohua language for implicit dataflow programming, we also focus on programmability. The focus of the thesis is on the models and methods, but we evaluate them in diverse use cases, generally centered around Cyber-Physical Systems. These include the 5G telecommunication standard, automotive and signal processing domains. We even go beyond embedded systems and discuss use cases in GPU programming and microservice-based architectures.
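
    The remark about exploiting symmetries of the mapping space can be illustrated with a small counting experiment: when the target cores are identical, two mappings that differ only by a permutation of core labels are equivalent. The C++ sketch below, with invented problem sizes, canonicalises each mapping by relabelling cores in order of first use and compares the raw and symmetry-reduced design-space sizes; it is a toy example, not the thesis's mapping machinery.

        // Small sketch of the symmetry argument for mapping spaces: with homogeneous,
        // interchangeable cores, mappings related by a core permutation are equivalent.
        // Canonicalising each mapping shows how much smaller the effective space is.
        #include <iostream>
        #include <map>
        #include <set>
        #include <vector>

        int main() {
            const int processes = 4;
            const int cores = 3;      // assumed homogeneous, hence interchangeable

            std::set<std::vector<int>> canonical;   // one representative per orbit
            int total = 0;
            std::vector<int> mapping(processes, 0);
            while (true) {
                ++total;
                // Relabel cores in order of first appearance: (2,2,0,1) -> (0,0,1,2).
                std::map<int, int> rename;
                std::vector<int> canon;
                for (int core : mapping) {
                    if (!rename.count(core)) {
                        int next = static_cast<int>(rename.size());
                        rename[core] = next;
                    }
                    canon.push_back(rename[core]);
                }
                canonical.insert(canon);
                // Advance to the next mapping (odometer-style increment).
                int i = 0;
                while (i < processes && ++mapping[i] == cores) mapping[i++] = 0;
                if (i == processes) break;
            }
            std::cout << total << " raw mappings, "
                      << canonical.size() << " up to core permutation\n";
            return 0;
        }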