508 research outputs found
Visual Programming: Concepts and Implementations
The computing environment has changed dramatically since the advent of the computer. Enhanced computer graphics and sheer processing power have ushered in a new age of computing. User interfaces have advanced from simple line entry to powerful graphical interfaces. With these advances, computer languages are no longer forced to be sequential and text-based. A new programming paradigm has evolved to harness the power of today's computing environment - visual programming. Visual programming provides the user with visible models that reflect physical objects. By connecting these visible models to each other, an executable program is created. By removing the inherent abstractions of textual languages, visual programming could lead computing into a new era.
Configurable computer systems can support dataflow computing
This work presents a practical implementation of a uni-processor system design. This design, named D2-CPU, satisfies the pure data-driven paradigm, which is a radical alternative to the conventional von Neumann paradigm and exploits instruction-level parallelism to its full extent. The D2-CPU uses the natural flow of the program, dataflow, by minimizing redundant instructions like fetch, store, and write-back. This leads to a design with better performance, lower power consumption, and efficient use of on-chip resources. This performance is the result of a simple, pipelined, superscalar architecture with a very wide data bus and completely out-of-order execution of instructions. The result is a program-counter-less, distributed-control system design realized with intelligent memories. Upon the availability of data, instructions advance through the memory hierarchy and ultimately to the execution units by themselves, instead of having the CPU fetch the required instructions from memory as in control-flow processors. This application (data) oriented execution process contrasts with the application-ignorant CPUs in conventional machines. The D2-CPU addresses current architectural challenges and puts into practice a pure data-driven microprocessor. This work employs an FPGA implementation of the D2-CPU to prove the practicability of the data-driven computing paradigm using configurable logic. A comparative analysis at the end confirms its superiority in performance, resource utilization, and ease of programming over conventional CPUs.
The DS-Pnet modeling formalism for cyber-physical system development
This work presents the DS-Pnet modeling formalism (Dataflow, Signals and Petri nets), designed for the development of cyber-physical systems, combining the characteristics of Petri nets and dataflows to support the modeling of mixed systems containing both reactive parts and data processing operations. Inheriting the features of the parent IOPT Petri net class, including an external interface composed of input and output signals and events, the addition of dataflow operations brings enhanced modeling capabilities to specify mathematical data transformations and graphically express the dependencies between signals. Data-centric systems that do not require reactive controllers are designed using pure dataflow models.
Component-based model composition enables reusing existing components, creating libraries of previously tested components, and hierarchically decomposing complex systems into smaller sub-systems.
A precise execution semantics was defined, considering the relationship between dataflow and Petri net nodes, providing an abstraction to define the interface between reactive controllers and input and output signals, including analog sensors and actuators.
The new formalism is supported by the IOPT-Flow Web-based tool framework, offering tools to design and edit models, simulate model execution in the Web browser, plus model-checking and automatic software/hardware code generation tools to implement controllers running on embedded devices (C, VHDL, and JavaScript).
A new communication protocol was created to permit the automatic implementation of distributed cyber-physical systems composed of networks of remote components communicating over the Internet. The editor tool connects directly to remote embedded devices running DS-Pnet models and may import remote components into new models, helping to simplify the creation of distributed cyber-physical applications, where the communication between distributed components is specified just by drawing arcs.
Several application examples were designed to validate the proposed formalism and the associated framework, ranging from hardware solutions and industrial applications to distributed software applications.
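The Petri-net-plus-dataflow combination can be illustrated with a small sketch: a Petri net models the reactive controller, while a dataflow operation transforms a signal value, gated by the net's state. This is only a hedged illustration of the concept; the class, the net structure, and the transformation are invented for the example and are not part of IOPT-Flow.

```python
# Illustrative sketch of a DS-Pnet-style mixed model (not IOPT-Flow code):
# a Petri net handles the reactive part, a dataflow operation the data part.

class PetriNet:
    def __init__(self, places, transitions):
        self.marking = dict(places)          # place -> token count
        self.transitions = transitions       # name -> (input places, output places)

    def enabled(self, t):
        ins, _ = self.transitions[t]
        return all(self.marking[p] > 0 for p in ins)

    def fire(self, t):
        ins, outs = self.transitions[t]
        for p in ins:
            self.marking[p] -= 1
        for p in outs:
            self.marking[p] += 1

# Reactive part: a controller that moves between "idle" and "active".
net = PetriNet({"idle": 1, "active": 0},
               {"start": (["idle"], ["active"]),
                "stop":  (["active"], ["idle"])})

# Dataflow part: a mathematical transformation of an input signal
# (e.g. scaling an analog sensor reading; coefficients are made up).
def dataflow_op(sensor_value):
    return sensor_value * 0.5 + 10

if net.enabled("start"):
    net.fire("start")

# The dataflow output depends on the reactive state, expressing the
# dependency between signals graphically drawn as arcs in the formalism.
output_signal = dataflow_op(42.0) if net.marking["active"] else None
print(output_signal)  # 31.0
```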
Emulation of the dataflow computing paradigm using field programmable gate arrays (FPGAs)
Building a perfect dataflow computer has been an endeavor of many computer engineers. Ideally, it is a perfect parallel machine with zero overheads, but implementing one has been anything but perfect. While the sequential nature of control-flow machines makes them relatively easy to implement, dataflow machines have to address a number of issues that are easily solved in the realm of the control-flow paradigm. Past implementations of dataflow computers have addressed these issues, such as conditional and reentrant program structures, along with the flow of data, at the processor level, i.e., each processor in the design would handle these issues. The design presented in this thesis solves these issues at the memory level (by using intelligent memory), separating the processor from dataflow tasks. Specifically, a two-level memory design, along with a pool of processors, was prototyped on a group of Altera FPGAs.
The first level of memory is an intelligent memory called the Dataflow Memory (DFM), which carries out dataflow tasks. The second level of memory, called the Instruction Queue (IQ), is a buffer that queues instructions ready for execution, sent by the DFM. The second-level memory has a multiple-bank architecture that allows multiple processors from the processor pool to simultaneously execute instructions retrieved from the banks. After executing an instruction, each processor sends the result back to the Dataflow Memory, where the result fires new instructions, which are sent to the IQ.
This thesis shows that implementing dataflow computers at the intelligent-memory level is a viable alternative to implementing them at the processor level.
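The DFM/IQ scheme described above can be sketched in software. The toy simulation below is an assumption-laden illustration (the instruction encoding, operand slots, and single IQ bank are invented for brevity), not the Altera FPGA design itself: instructions wait in the DFM until all operands arrive, ready instructions are queued in the IQ, and results fed back into the DFM fire further instructions.

```python
# Toy software model of the two-level intelligent-memory scheme
# (names DFM/IQ follow the text; all details are illustrative).
from collections import deque
import operator

OPS = {"+": operator.add, "*": operator.mul}

# Dataflow Memory: each instruction waits for its operands.
# "dest" lists (instruction, operand slot) pairs consuming its result.
DFM = {
    "i1": {"op": "+", "args": [None, None], "dest": [("i3", 0)]},
    "i2": {"op": "*", "args": [None, None], "dest": [("i3", 1)]},
    "i3": {"op": "+", "args": [None, None], "dest": []},
}
IQ = deque()        # Instruction Queue: instructions ready for execution
results = {}

def deliver(name, slot, value):
    """Store an operand in the DFM; enqueue the instruction once complete."""
    inst = DFM[name]
    inst["args"][slot] = value
    if all(a is not None for a in inst["args"]):
        IQ.append(name)

# Initial inputs enter the DFM like any other result token.
deliver("i1", 0, 2); deliver("i1", 1, 3)    # i1 = 2 + 3
deliver("i2", 0, 4); deliver("i2", 1, 5)    # i2 = 4 * 5

# "Processor pool": drain the IQ, feeding results back into the DFM,
# where they fire dependent instructions.
while IQ:
    name = IQ.popleft()
    inst = DFM[name]
    results[name] = OPS[inst["op"]](*inst["args"])
    for dest, slot in inst["dest"]:
        deliver(dest, slot, results[name])

print(results)   # {'i1': 5, 'i2': 20, 'i3': 25}
```

Note that no program counter appears anywhere: execution order is determined entirely by operand availability, which is the property the thesis moves out of the processors and into the memory.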
Development and Specification of Virtual Environments
This thesis concerns the issues involved in the development of virtual environments (VEs). VEs are more than virtual reality. We identify four of their main characteristics: graphical interaction, multimodality, interface agents, and multi-user support. These characteristics are illustrated with an overview of different classes of VE-like applications and a number of state-of-the-art VEs. To further define the topic of research, we propose a general framework for VE systems development, in which we identify five major classes of development tools: methodology, guidelines, design specification, analysis, and development environments. For each, we give an overview of existing best practices.
Model-Based Code Generation Framework for Parallel and Distributed Embedded Systems
Thesis (Ph.D.)--Seoul National University, Department of Computer Science and Engineering, 2020.
While various software development methodologies have been proposed to increase the design productivity and maintainability of software, they usually focus on the development of application software running on a single processing element, without concern about the non-functional requirements of an embedded system such as latency and resource requirements.
In this thesis, we present a model-based software development method for parallel and distributed embedded systems. An application is specified as a set of tasks that follow a set of given rules for communication and synchronization in a hierarchical fashion, independently of the hardware platform. Having such rules enables us to perform static analysis to detect certain software errors at compile time, reducing the verification effort. A platform-specific program is synthesized automatically once the mapping of tasks onto processing elements is determined.
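One kind of compile-time check that fixed communication and synchronization rules make possible can be sketched as follows. The task names and the specific check (detecting a cyclic wait among tasks via Kahn's algorithm, which would deadlock a blocking communication model) are illustrative assumptions, not necessarily the analyses performed by the thesis tool.

```python
# Illustrative static check on a task graph: report a cyclic dependency
# before execution, since a cycle of blocking waits cannot be scheduled.

def has_cycle(tasks, edges):
    """Return True if the task graph contains a cycle (Kahn's algorithm)."""
    indeg = {t: 0 for t in tasks}
    for _, dst in edges:
        indeg[dst] += 1
    ready = [t for t in tasks if indeg[t] == 0]
    visited = 0
    while ready:
        t = ready.pop()
        visited += 1
        for src, dst in edges:
            if src == t:
                indeg[dst] -= 1
                if indeg[dst] == 0:
                    ready.append(dst)
    # If some tasks were never freed of predecessors, a cycle exists.
    return visited != len(tasks)

tasks = ["sense", "filter", "fuse", "act"]          # hypothetical tasks
ok_edges  = [("sense", "filter"), ("filter", "fuse"), ("fuse", "act")]
bad_edges = ok_edges + [("act", "filter")]          # unbuffered feedback

print(has_cycle(tasks, ok_edges))   # False: schedulable
print(has_cycle(tasks, bad_edges))  # True: flagged at compile time
```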
A program synthesizer is also proposed to generate code that satisfies the platform requirements of parallel and distributed embedded systems. Since multiple models that express dynamic behaviors can be depicted hierarchically, the synthesizer manages multiple task graphs with different hierarchies to run tasks in parallel. The synthesizer also shows how to manage code for heterogeneous platforms and generate various communication methods. The viability of the proposed software development method is verified with a real-life surveillance application that runs on six processing elements with three remote communication methods, and a remote deep-learning example is conducted that uses heterogeneous multiprocessing components in a distributed system. Measured and estimated development costs also show that supporting a new platform or network requires only a small effort.
Since tolerance to unexpected errors is a required feature of many embedded systems, we also support automatic fault-tolerant code generation. Fault tolerance is applied by modifying the task graph based on the selected fault-tolerance configurations, so this non-functional requirement can be easily adopted by an application developer. To compare the effort of supporting fault tolerance, a manual implementation of fault tolerance is also performed. The fault-tolerance method is then tested with a fault injection tool that emulates fault scenarios and injects faults randomly.
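As a hedged illustration of modifying a task graph for fault tolerance, the sketch below triplicates a task and inserts a majority voter (classic triple modular redundancy, one common fault-tolerance configuration). The graph encoding and the function names are invented for the example and do not reflect the HOPES implementation.

```python
# Illustrative task-graph transformation: replace one task with three
# replicas feeding a majority voter (TMR). Graph: node -> successor list.
from collections import Counter

def triplicate(graph, task):
    """Return a new graph where `task` is replaced by 3 replicas + voter."""
    new_graph = {}
    replicas = [f"{task}_r{i}" for i in range(3)]
    voter = f"{task}_voter"
    for node, succs in graph.items():
        if node == task:
            for r in replicas:
                new_graph[r] = [voter]          # replicas feed the voter
            new_graph[voter] = list(succs)      # voter takes over outputs
        else:
            # redirect edges that pointed at `task` to every replica
            new_graph[node] = [s for succ in succs
                               for s in (replicas if succ == task else [succ])]
    return new_graph

def vote(outputs):
    """Majority vote over replica outputs; masks one faulty replica."""
    value, count = Counter(outputs).most_common(1)[0]
    return value if count >= 2 else None

graph = {"src": ["compute"], "compute": ["sink"], "sink": []}
ft_graph = triplicate(graph, "compute")
print(ft_graph["src"])          # ['compute_r0', 'compute_r1', 'compute_r2']
print(vote([42, 42, 7]))        # 42: one faulty replica is masked
```

Because the transformation happens on the model rather than in hand-written code, the application developer only selects a configuration; the replicated tasks and voter are synthesized like any other tasks.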
Our fault injection tool, which was used to test our fault-tolerance method, is another contribution of this thesis. Emulating fault scenarios by intentionally injecting faults is a common way to test and verify the robustness of a system. To emulate faults on an embedded system, we present a run-time fault injection framework that can inject a fault into both the kernel and the application layer of Linux-based systems. For injecting faults at the kernel layer, two complementary fault injection techniques are used: one is based on the Kernel GNU Debugger, and the other uses a hardware breakpoint supported by the ARM architecture. For application-level fault injection, the GDB-based fault injection method is used to inject a fault into a remote application. The viability of the proposed fault injection tool is demonstrated by real-life experiments on an ODROID-XU4 system.
Chapter 1 Introduction
1.1 Motivation
1.2 Contribution
1.3 Dissertation Organization
Chapter 2 Background
2.1 HOPES: Hope of Parallel Embedded Software
2.1.1 Software Development Procedure
2.1.2 Components of HOPES
2.2 Universal Execution Model
2.2.1 Task Graph Specification
2.2.2 Dataflow Specification of an Application
2.2.3 Task Code Specification and Generic APIs
2.2.4 Meta-data Specification
Chapter 3 Program Synthesis for Parallel and Distributed Embedded Systems
3.1 Motivational Example
3.2 Program Synthesis Overview
3.3 Program Synthesis from Hierarchically-mixed Models
3.4 Platform Code Synthesis
3.5 Communication Code Synthesis
3.6 Experiments
3.6.1 Development Cost of Supporting New Platforms and Networks
3.6.2 Program Synthesis for the Surveillance System Example
3.6.3 Remote GPU-accelerated Deep Learning Example
3.7 Document Generation
3.8 Related Works
Chapter 4 Model Transformation for Fault-tolerant Code Synthesis
4.1 Fault-tolerant Code Synthesis Techniques
4.2 Applying Fault Tolerance Techniques in HOPES
4.3 Experiments
4.3.1 Development Cost of Applying Fault Tolerance
4.3.2 Fault Tolerance Experiments
4.4 Random Fault Injection Experiments
4.5 Related Works
Chapter 5 Fault Injection Framework for Linux-based Embedded Systems
5.1 Background
5.1.1 Fault Injection Techniques
5.1.2 Kernel GNU Debugger
5.1.3 ARM Hardware Breakpoint
5.2 Fault Injection Framework
5.2.1 Overview
5.2.2 Architecture
5.2.3 Fault Injection Techniques
5.2.4 Implementation
5.3 Experiments
5.3.1 Experiment Setup
5.3.2 Performance Comparison of Two Fault Injection Methods
5.3.3 Bit-flip Fault Experiments
5.3.4 eMMC Controller Fault Experiments
Chapter 6 Conclusion
Bibliography
Abstract (in Korean)
A Visual Programming Language for Data Flow Systems
The concept of visual programming languages is described and some necessary terms are defined. The value of visual languages is presented and a number of different visual languages are described. Various issues, such as user interface design, are discussed. As an example of a visual programming language, a graphical data flow programming environment is developed for the Macintosh workstation; it functions as a preprocessor to a data flow simulator developed at RIT. Examples are presented demonstrating the use of the language environment. Issues related to the development of the programming environment are described, and conclusions regarding the development of visual programming languages in general are presented.
Fine-Grain Parallelism
Computer hardware is at the beginning of the multi-core revolution. While hardware at the commodity level is capable of running concurrent software, most software does not take advantage of this fact because parallel software development is difficult. This project addressed potential remedies to these difficulties by investigating graphical programming and fine-grain parallelism. A prototype system taking advantage of both of these concepts was implemented and evaluated in terms of real-world applications.
An Introduction to Transient Engine Applications Using the Numerical Propulsion System Simulation (NPSS) and MATLAB
This document outlines methodologies designed to improve the interface between the Numerical Propulsion System Simulation (NPSS) framework and various control and dynamic analyses developed in the Matlab and Simulink environment. Although NPSS is most commonly used for steady-state modeling, this paper is intended to supplement the relatively sparse documentation on its transient analysis functionality. Matlab has become an extremely popular engineering environment, and better methodologies are necessary to develop tools that leverage the benefits of these disparate frameworks. Transient analysis is not a new feature of NPSS, but transient considerations are becoming more pertinent as multidisciplinary trade-offs begin to play a larger role in advanced engine designs. This paper also covers the budding convergence between NPSS and Matlab-based modeling toolsets. The following sections explore various design patterns for rapidly developing transient models. Each approach starts with a base model built with NPSS and assumes the reader already has a basic understanding of how to construct a steady-state model. The second half of the paper focuses on the further enhancements required to interface NPSS with Matlab code. The first method is the simplest and most straightforward but performance-constrained, while the last is the most abstract. These methods are not mutually exclusive, and the specific implementation details can vary greatly at the designer's discretion. Basic recommendations are provided to organize model logic in a format most easily amenable to integration with existing Matlab control toolsets.
Semantics-driven dataflow diagram processing.
The dataflow diagram is a commonly used tool of structured analysis and design techniques, both in the specification and design of a software system and in the analysis of an existing system. While automatically generating dataflow diagrams saves system designers from tedious drawing and helps them develop a new system, simulating dataflow diagrams provides system analysts with a dynamic graph and helps them understand an existing system. CASE tools for dataflow diagrams play an important role in software engineering, and the methodologies applied in such tools are a dominant issue extensively evaluated by tool designers. Executable specifications with dataflow diagrams offer an opportunity to execute graphic dataflow diagrams so that systems analysts can simulate the behavior of a system. In this thesis, a syntax representation of dataflow diagrams was developed, and a formal specification for dataflow diagrams was established. A parser in the developed CASE tool translates the syntax representation of DFDs into their semantic representation. An interpreter then analyzes the DFD semantic notations and builds the set of services of the system represented by the DFDs. This CASE tool can be used to simulate system behavior, check the equivalence of two systems, and detect deadlock. Based on these features, the tool can be used in every phase of the entire software life cycle. Paper copy at Leddy Library: Theses & Major Papers - Basement, West Bldg. / Call Number: Thesis1998 .Z46. Source: Masters Abstracts International, Volume: 39-02, page: 0535. Adviser: Indra A. Tjandra. Thesis (M.Sc.)--University of Windsor (Canada), 1998.
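The kind of DFD simulation and deadlock detection described above can be sketched minimally: a process fires when every one of its input flows carries data, and a process left waiting on a partially filled input set signals a potential deadlock. The process names and the firing-rule details below are assumptions made for illustration, not the thesis's formal semantics.

```python
# Toy DFD interpreter (illustrative only): fire processes while possible,
# then report processes stuck with some, but not all, inputs available.

def simulate(processes, flows):
    """processes: name -> (input flows, output flows); flows: flow -> count."""
    fired = True
    while fired:
        fired = False
        for name, (ins, outs) in processes.items():
            if ins and all(flows[f] > 0 for f in ins):
                for f in ins:
                    flows[f] -= 1       # consume one datum per input flow
                for f in outs:
                    flows[f] += 1       # produce one datum per output flow
                fired = True
    # Deadlock symptom: a process holding data on some inputs but
    # blocked forever waiting on the rest.
    return [n for n, (ins, _) in processes.items()
            if any(flows[f] > 0 for f in ins)
            and not all(flows[f] > 0 for f in ins)]

# "validate" consumes an order; "ship" needs a validated order AND stock
# information, but nothing ever produces stock -> "ship" deadlocks.
processes = {"validate": (["order"], ["ok"]),
             "ship":     (["ok", "stock"], ["done"])}
flows = {"order": 1, "ok": 0, "stock": 0, "done": 0}
print(simulate(processes, flows))   # ['ship']
```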