60 research outputs found

    Efficient Software Implementation of Stream Programs

    The way we use computers and mobile phones today requires processing large amounts of streaming data. Examples include digital signal processing for wireless transmission, audio and video coding for recording and watching videos, and noise reduction for phone calls. These tasks can be performed by stream programs: computer programs that process streams of data. Stream programs can be composed of other stream programs. Components of a composition are connected in a network, i.e. the output streams of one component are sent as input streams to other components. The components that perform the actual computation are called kernels. They can be described in different styles and programming languages. There are also formal models for describing the kernels and the networks; one such model is the actor machine. This dissertation evaluates the actor machine and how it facilitates creating efficient software implementations of stream programs. The evaluation is divided into four aspects: (1) analyzability of its structure, (2) generality in what languages and styles it can express, (3) efficient implementation of kernels, and (4) efficient implementation of networks. This dissertation demonstrates all four aspects through the implementation and evaluation of a stream program compiler based on actor machines.
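    The composition described above — kernels connected so that one component's output stream feeds another's input — can be sketched, purely illustratively, with Python generators. The kernel names (`scale`, `clip`) and the network are our own invention, not taken from the dissertation:

```python
# Sketch: kernels as Python generators, composed into a network by
# connecting the output stream of one kernel to the input of the next.

def scale(stream, factor):
    """Kernel: multiply each sample by a constant (e.g. gain control)."""
    for x in stream:
        yield x * factor

def clip(stream, lo, hi):
    """Kernel: bound each sample to [lo, hi] (e.g. simple limiting)."""
    for x in stream:
        yield max(lo, min(hi, x))

def network(stream):
    """Composition: the output of `scale` is sent as input to `clip`."""
    return clip(scale(stream, 2), -10, 10)

print(list(network([1, 4, 6])))  # [2, 8, 10]
```

    Because generators are lazy, samples flow through the network one at a time, which mirrors how stream programs process unbounded input.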

    Architectural Data Flow Analysis for Detecting Violations of Confidentiality Requirements

    Software vendors must consider confidentiality especially while creating software architectures, because decisions made there are hard to change later. Our approach represents and analyzes data flows in software architectures. Systems specify data flows, and confidentiality requirements specify limitations on those data flows. Software architects use detected violations of these limitations to improve the system. We demonstrate how to integrate our approach into existing development processes.

    Architectural Data Flow Analysis for Detecting Violations of Confidentiality Requirements

    This thesis presents an approach for systematically considering confidentiality requirements in software architectures by representing and analyzing data flows. The strengthening of data protection regulations, such as the European General Data Protection Regulation (GDPR), and the public reaction to data scandals, such as the Cambridge Analytica scandal, have shown that preserving confidentiality is essential for organizations. To preserve confidentiality, it must be considered throughout the entire software development process. Early development phases deserve particular attention here, because a considerable share of later problems can be traced back to mistakes made in these early phases. In addition, the effort required to remove defects from the software architecture grows disproportionately in later development phases. To detect violations of confidentiality requirements, early development phases frequently rely on data-oriented documentation of software systems, because investigating such a violation often requires following data flows. Data flow diagrams (DFDs) are commonly used to examine security in general and confidentiality in particular. However, plain DFDs are not sufficient to formalize and automate analyses built on them. Instead, DFDs or other architecture description languages (ADLs) must be extended to represent the information required to examine confidentiality. Such extensions often support confidentiality requirements for exactly one confidentiality mechanism, such as access control. These single-purpose extensions do not support combining mechanisms, which limits their expressiveness.
    If software architects want to switch the confidentiality mechanism in use, they must also switch the ADL, which entails considerable effort to remodel the software architecture. Moreover, many analysis approaches offer no integration into existing ADLs and development processes, which significantly hinders their systematic adoption. Existing data-oriented approaches either rely heavily on manual activities and high expertise, or do not support representing access control, information flow control, and encryption simultaneously in the same architecture specification artifact. Because these are the most widespread confidentiality mechanisms, software architects are likely to be interested in using all of them. The manual activities mentioned include identifying violations through inspections and tracing data through the system; both require considerable experience in the field of confidentiality. This thesis addresses these problems with four contributions: First, we present an extension of the DFD syntax that expresses the information required to examine access control, information flow control, and encryption through properties and behavior descriptions within the same architecture specification artifact. Second, we present a semantics for this extended DFD syntax that formalizes the behavior of DFDs via label propagation and thereby enables automated data tracing. Third, we present analysis definitions that, based on the DFD syntax and semantics, identify violations of confidentiality requirements.
    The supported confidentiality requirements cover the most important variants of access control, information flow control, and encryption. Fourth, we present a guideline for integrating the framework for data-oriented analyses, which consists of the previous three contributions, into existing ADLs and their associated development processes. We validated the expressiveness, result quality, and modeling effort of our contributions in case studies on seventeen case study systems. The systems largely stem from related work and cover five kinds of access control requirements, four kinds of information flow requirements, two kinds of encryption, and requirements combining both confidentiality mechanisms. We validated the expressiveness of the DFD syntax, as well as of the ADLs created via the integration guideline, and could represent all but one case study system. We could also represent the confidentiality requirements of sixteen case study systems with our analysis definitions. Both the DFD-based and the ADL-based analyses produced the expected results, indicating high result quality. We validated the modeling effort in the extended ADLs both for adding and for switching a confidentiality mechanism in an existing software architecture. In both validations we showed that the ADL integrations save modeling effort by allowing considerable parts of existing software architectures to be reused. Software architects benefit from our contributions through increased flexibility in selecting confidentiality mechanisms and in switching between them.
    The early identification of confidentiality violations further reduces the effort required to fix the underlying problems.
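    The label propagation that the abstract describes can be illustrated with a deliberately tiny sketch. The node names, labels, and requirement below are invented for this example; the thesis defines a full syntax and semantics for such analyses:

```python
# Illustrative sketch: propagate confidentiality labels along the flows of a
# tiny data flow diagram until a fixed point, then check a requirement.

# DFD as adjacency list: data flows from a source node to target nodes.
flows = {"user_db": ["analytics"], "analytics": ["public_dashboard"]}

# Initial confidentiality labels on data leaving each node.
labels = {"user_db": {"personal"}, "analytics": set(), "public_dashboard": set()}

# Propagate labels along flows until no label set changes any more.
changed = True
while changed:
    changed = False
    for src, targets in flows.items():
        for tgt in targets:
            if not labels[src] <= labels[tgt]:
                labels[tgt] |= labels[src]
                changed = True

# Requirement: personal data must not reach the public dashboard.
violation = "personal" in labels["public_dashboard"]
print(violation)  # True: the requirement is violated
```

    Following the propagated labels backwards is what makes the data traceable: the violation at `public_dashboard` is explained by the flow from `user_db` through `analytics`.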

    Dataflow computers: a tutorial and survey

    Journal article. The demand for very high-performance computers has encouraged some researchers in the computer science field to consider alternatives to the conventional notions of program and computer organization. The dataflow computer is one attempt to form a new collection of consistent systems ideas, both to improve computer performance and to alleviate the software design problems induced by the construction of highly concurrent programs.

    Dataflow development of medium-grained parallel software

    PhD thesis. In the 1980s, multiple-processor computers (multiprocessors) based on conventional processing elements emerged as a popular solution to the continuing demand for ever-greater computing power. These machines offer a general-purpose parallel processing platform on which the size of program units which can be efficiently executed in parallel — the "grain size" — is smaller than that offered by distributed computing environments, though greater than that of some more specialised architectures. However, programming to exploit this medium-grained parallelism remains difficult. Concurrent execution is inherently complex, yet there is a lack of programming tools to support parallel programming activities such as program design, implementation, debugging, performance tuning and so on. In helping to manage complexity in sequential programming, visual tools have often been used to great effect, which suggests one approach towards the goal of making parallel programming less difficult. This thesis examines the possibilities which the dataflow paradigm has to offer as the basis for a set of visual parallel programming tools, and presents a dataflow notation designed as a framework for medium-grained parallel programming. The implementation of this notation as a programming language is discussed, and its suitability for the medium-grained level is examined. Funded by the Science and Engineering Research Council of Great Britain and the EC ERASMUS scheme.

    An Overview of Language Support for Modular Event-driven Programming

    Nowadays, event processing is becoming the backbone of many applications. Therefore, it is necessary to provide suitable abstractions to properly modularize the concerns that appear in event-driven applications. We identify four categories of languages that support event-driven programming, and examine their shortcomings in achieving modularity in the implementation of applications. We propose gummy modules and their implementation in the GummyJ language as a solution. Gummy modules have well-defined event-based interfaces, and can have a primitive or a composite structure. Composite gummy modules are a means to group a set of correlated event processing concerns and restrict the visibility of events among them. We provide an example usage of gummy modules, and discuss their event processing semantics.
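    The core idea — composite modules with event-based interfaces that restrict which events are visible outside the group — can be sketched roughly as follows. This is not GummyJ syntax; every name here (`Module`, `Composite`, the `"done"` event) is invented for illustration:

```python
# Sketch of event-visibility restriction: a composite forwards only its
# exported events to outside observers; everything else stays internal.

class Module:
    """A primitive module with an event-based interface."""
    def __init__(self):
        self.handlers = {}

    def on(self, event, handler):
        self.handlers.setdefault(event, []).append(handler)

    def emit(self, event, payload):
        for handler in self.handlers.get(event, []):
            handler(payload)

class Composite(Module):
    """Groups inner modules; only events named in `exports` leave the group."""
    def __init__(self, inner, exports):
        super().__init__()
        self.inner = inner
        self.exports = set(exports)

    def route(self, event, payload):
        if event in self.exports:
            self.emit(event, payload)        # visible to outside observers
        else:
            for module in self.inner:        # restricted to the group
                module.emit(event, payload)

seen = []
comp = Composite([Module()], exports={"done"})
comp.on("done", seen.append)
comp.route("done", 42)       # exported event reaches outside observers
comp.route("internal", 1)    # non-exported event stays within the composite
print(seen)  # [42]
```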

    Implicit Incremental Model Analyses and Transformations

    When models of a system change, analyses based on them have to be reevaluated in order for the results to stay meaningful. In many cases, the time needed to obtain updated analysis results is critical. This thesis proposes multiple, combinable approaches and a new formalism based on category theory for implicitly incremental model analyses and transformations. The advantages of the implementation are validated using seven case studies, partially drawn from the Transformation Tool Contest (TTC).
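    The notion of an *implicitly* incremental analysis — one that records its own dependencies and recomputes only when a model element it actually read changes — can be sketched minimally. This is our own illustration, not the thesis's category-theoretic formalism:

```python
# Sketch: an analysis records which model elements it reads; a later change
# triggers recomputation only if it touches one of those elements.

class Model:
    def __init__(self, values):
        self.values = dict(values)
        self.listeners = []

    def get(self, key, reads):
        reads.add(key)             # dependency recorded implicitly on read
        return self.values[key]

    def set(self, key, value):
        self.values[key] = value
        for listener in self.listeners:
            listener(key)

class IncrementalAnalysis:
    def __init__(self, model, fn):
        self.model, self.fn = model, fn
        self.reads, self.cache, self.dirty = set(), None, True
        model.listeners.append(self.on_change)

    def on_change(self, key):
        if key in self.reads:      # only relevant changes invalidate the cache
            self.dirty = True

    def result(self):
        if self.dirty:
            self.reads = set()
            self.cache = self.fn(self.model, self.reads)
            self.dirty = False
        return self.cache

model = Model({"a": 1, "b": 2, "c": 3})
analysis = IncrementalAnalysis(model, lambda m, r: m.get("a", r) + m.get("b", r))
print(analysis.result())   # 3
model.set("c", 99)         # irrelevant change: cached result stays valid
model.set("a", 10)         # relevant change: marks the result dirty
print(analysis.result())   # 12
```

    The analysis author writes only the ordinary function; incrementality falls out of the recorded reads, which is what "implicit" refers to.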

    The instruction systolic array (ISA) and simulation of parallel algorithms

    Systolic arrays have proved to be well suited to Very Large Scale Integration (VLSI) technology since they: consist of a regular network of simple processing cells, use only local communication between the processing cells, and exploit a maximal degree of parallelism. However, systolic arrays have one main disadvantage compared with other parallel computer architectures: they are special-purpose architectures capable of executing only one algorithm; e.g., a systolic array designed for sorting cannot be used to perform matrix multiplication. Several approaches have been made to make systolic arrays more flexible, so that different problems can be handled on a single systolic array. In this thesis an alternative concept to a VLSI architecture, the Soft-Systolic Simulation System (SSSS), is introduced and developed as a working model of a virtual machine with the power to simulate hard systolic arrays and more general forms of concurrency such as the SIMD and MIMD models of computation. The virtual machine includes a processing element consisting of a soft-systolic processor implemented in the virtual machine language. The processing element considered here is a very general element which allows the choice of a wide range of arithmetic and logical operators and allows the simulation of a wide class of algorithms; in principle, extra processing cells can be added to form a library that can be tailored to individual needs. The virtual machine chosen for this implementation is the Instruction Systolic Array (ISA). The ISA has a number of interesting features: it has been used to simulate all SIMD algorithms and many MIMD algorithms by a simple program transformation technique, and it can also simulate the so-called wavefront processor algorithms, as well as many hard systolic algorithms.
    The ISA removes the need for the broadcasting of data, which is a feature of SIMD algorithms (limiting the size of the machine and its cycle time), and also presents a fairly simple communication structure for MIMD algorithms. The model of systolic computation developed from the VLSI approach to systolic arrays is such that the processing surface is fixed, as are the processing elements or cells, by virtue of their being embedded in the processing surface. The VLSI approach therefore freezes instructions and hardware relative to the movement of data, while the virtual machine and soft-systolic programming retain the VLSI constructions for array design features such as regularity, simplicity and local communication, allowing the movement of instructions with respect to data. Data can be frozen into the structure with instructions moving systolically. Alternatively, both the data and instructions can move systolically around the virtual processors (which are deemed fixed relative to the underlying architecture). The ISA is implemented in OCCAM programs whose execution and output implicitly confirm the correctness of the design. The soft-systolic preparation comprises the usual operating system facilities for the creation and modification of files during the development of new programs and ISA processor elements. We allow any concurrent high-level language to be used to model the soft-systolic program. Consequently, the Replicating Instruction Systolic Array Language (RISAL) was devised to provide a very primitive program environment for the ISA, but one adequate for testing. RISAL accepts instructions in an assembler-like form, but is fairly permissive about the format of statements, subject of course to syntax. The RISAL compiler is adapted to transform the soft-systolic program description (RISAL) into a form suitable for the virtual machine (simulating the algorithm) to run.
    Finally we conclude that the principles mentioned here can form the basis for a soft-systolic simulator using an orthogonally connected mesh of processors. The wide range of algorithms which the ISA can simulate makes it suitable for a virtual simulating grid.
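    The idea of data frozen into the processing surface while instructions move systolically across it can be shown with a toy one-dimensional example. This is entirely our own construction and far simpler than the ISA itself:

```python
# Toy sketch: a row of cells holds frozen data; instructions flow left to
# right, one cell per time step, so instruction k reaches cell i at t = i + k.

cells = [1, 2, 3, 4]                 # data frozen into the processing surface
program = [("add", 10), ("mul", 2)]  # instruction stream moving systolically

ops = {"add": lambda x, a: x + a, "mul": lambda x, a: x * a}

steps = len(program) + len(cells) - 1  # time for all instructions to pass
for t in range(steps):
    for i in range(len(cells)):
        k = t - i                      # which instruction reaches cell i now
        if 0 <= k < len(program):
            op, arg = program[k]
            cells[i] = ops[op](cells[i], arg)

print(cells)  # [22, 24, 26, 28]
```

    Each cell sees the instructions in program order but staggered one step behind its left neighbour, so every value ends up as (v + 10) * 2 without any broadcasting of instructions.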