    A SURVEY ON TRADITIONAL PLATFORMS AND NEW TRENDS IN PARALLEL COMPUTATION

    Information processing is in continuous progress, and High Performance Computing is now a major trend. Parallel Computing is often used synonymously with High Performance Computing and is the main field of information processing in our age. Both hardware systems and software platforms are developing rapidly to support simple, rational, and easy parallel data processing and programming. This paper gives an overview of issues and improvements in parallel processing and discusses the qualities and advantages of different platforms for parallelism. It covers different architectures and technologies for parallelism and compares results of parallel processing. The main aim is to introduce the Dataflow model together with some concrete examples of Dataflow arrangements, and to contrast the traditional control-flow parallel platform with the dataflow approach.

    Cognitive Sensor Platform

    This paper describes a platform used to build embedded sensor systems for low-energy implantable applications. One of the key characteristics of the platform is the ability to reason about the environment and dynamically modify the operational parameters of the system. Additionally, the platform provides the ability to compose application-specific sensor systems using a novel computational element that directly supports a synchronous-dataflow (SDF) programming paradigm. Cognition in the context of a sensor platform is defined as the “process of knowing, including aspects of awareness, perception, reasoning, and judgment”. DOI: http://dx.doi.org/10.11591/ijece.v4i4.568
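
    The abstract does not include code, but a minimal sketch may help picture the synchronous-dataflow (SDF) paradigm the platform's computational element supports: each actor consumes and produces a fixed number of tokens per firing, so the firing schedule is known before the system runs. The actors, rates, and pipeline below are purely illustrative assumptions, not components of the actual platform.

```python
# Minimal synchronous-dataflow (SDF) sketch: actors with fixed token rates
# connected by FIFO queues. Actor names and rates are illustrative only.
from collections import deque

class Actor:
    def __init__(self, name, consume, produce, fn):
        self.name = name          # actor label
        self.consume = consume    # tokens consumed per firing
        self.produce = produce    # tokens produced per firing
        self.fn = fn              # maps the consumed tokens to the produced tokens

def run(source_tokens, actors):
    """Fire each actor whenever its input FIFO holds enough tokens; with
    fixed rates the same firing sequence could be computed statically."""
    fifos = [deque(source_tokens)] + [deque() for _ in actors]
    progress = True
    while progress:
        progress = False
        for i, actor in enumerate(actors):
            if len(fifos[i]) >= actor.consume:
                ins = [fifos[i].popleft() for _ in range(actor.consume)]
                fifos[i + 1].extend(actor.fn(ins))
                progress = True
    return list(fifos[-1])

# Hypothetical sensor pipeline: average pairs of samples, then threshold.
average = Actor("avg", consume=2, produce=1, fn=lambda xs: [sum(xs) / 2])
detect = Actor("thresh", consume=1, produce=1, fn=lambda xs: [xs[0] > 0.5])
print(run([0.2, 0.9, 0.7, 0.4], [average, detect]))   # [True, True]
```

    Because the token rates are fixed, a real SDF scheduler can also bound buffer sizes ahead of time, which is part of what makes the paradigm attractive for low-energy embedded systems.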

    The Dataflow Computational Model And Its Evolution

    The dataflow computational model is an alternative to the von Neumann model. Its most significant aspects are that it is based on asynchronous instruction scheduling and exposes massive parallelism. This thesis is a review of the dataflow computational model, as well as of some hybrid models which lie between the pure dataflow and the von Neumann model. Additionally, there are references to dataflow principles that have been or are being adopted by conventional machines, programming languages, and distributed computing systems.
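
    As a rough illustration of the asynchronous scheduling described above: in a pure dataflow model an instruction fires as soon as all of its operand tokens have arrived, rather than when a program counter reaches it, so independent instructions are free to run in parallel. The toy interpreter below is a sketch of that firing rule only; the node names and the expression evaluated are assumptions for illustration, not material from the thesis.

```python
# Toy illustration of the pure-dataflow firing rule: a node fires once all of
# its operand tokens are present; nodes waiting in `ready` are mutually
# independent and could execute in parallel. The graph computes (a + b) * (a - b).
import operator

class Node:
    def __init__(self, op, arity, successors):
        self.op = op
        self.arity = arity
        self.slots = {}                 # operand position -> token value
        self.successors = successors    # list of (node, operand position)

    def receive(self, pos, value, ready):
        self.slots[pos] = value
        if len(self.slots) == self.arity:    # firing rule: all operands present
            ready.append(self)

def evaluate(a, b):
    ready, result = [], {}
    sink = Node(lambda x: result.setdefault("out", x), 1, [])
    mul = Node(operator.mul, 2, [(sink, 0)])
    add = Node(operator.add, 2, [(mul, 0)])
    sub = Node(operator.sub, 2, [(mul, 1)])
    # Fan the two input tokens out to both the add and sub nodes.
    for node, pos, value in [(add, 0, a), (add, 1, b), (sub, 0, a), (sub, 1, b)]:
        node.receive(pos, value, ready)
    while ready:                              # data-driven, no program counter
        node = ready.pop()
        out = node.op(*(node.slots[i] for i in range(node.arity)))
        for succ, pos in node.successors:
            succ.receive(pos, out, ready)
    return result["out"]

print(evaluate(5, 3))   # (5 + 3) * (5 - 3) = 16
```

    In a von Neumann machine the add and the subtract would be ordered by the instruction stream; here nothing orders them except data availability, which is where the massive parallelism comes from.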

    Developments in Dataflow Programming

    Dataflow has historically been motivated by parallelism, by programmability, or by some combination of the two. This work, rather than being directed primarily at parallelism or programmability, aims to maximise the overall utility of the system to the programmer. It should result in a system in which it is easy to create well-constructed, flexible programs that comply with the principles of software engineering and architecture, while also being capable of performing practical real-life tasks and being as widely applicable as can be achieved. With those aims in mind, this project has four goals:
    * to argue for a unified global dataflow coordination system, extensible so that it can accommodate components of any form that may exist now or in the future;
    * to establish a link between the design of such a system and the principles of software engineering and architecture;
    * to design a dataflow coordination system based on those principles, aiming where possible to embed them in the design so that they become easy or unthinking for programmers to apply; and
    * to implement and test components of the proposed system, using it to build a set of three sample algorithms.
    Taking the best ideas proposed in dataflow programming in the past (those that most effectively embed the principles of software engineering) and extending them with new proposals where necessary, a collection of interactions and functionalities is proposed, including a novel way of using partial evaluation of functions and data dimensionality to represent iteration in an acyclic graph (a sketch of this general idea follows below). The proposed design was implemented as far as necessary to construct three test algorithms: calculating a factorial, generating terms of the Fibonacci sequence, and performing a merge sort. The implementation was successful in representing iteration in acyclic dataflow, and the test algorithms produced correct results, limited only by the numerical representation capabilities of the underlying language. Testing and working with the implemented system revealed how important it is to usability that the system be visual, interactive and, in a distributed environment, always available. Proposed further work falls into three categories: writing a full specification (in particular, defining the interfaces by which components will interact); developing new features to extend the functionality; and further developing the test implementation. The conclusion summarises the vision of a unified global dataflow coordination system and makes an appeal for cooperation on its development as an open, non-profit dataflow system run for the good of its community, rather than allowing a proliferation of competing systems run for commercial gain.
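
    The thesis's actual mechanism for representing iteration in an acyclic graph is not reproduced in this abstract, but the general idea of combining partial evaluation with a known iteration count can be pictured as unrolling a loop into a straight chain of nodes, each a step function partially applied to its loop index. The factorial sketch below is only an assumed illustration of that idea, not the proposed design.

```python
# Illustrative only: turning iteration into an acyclic chain of nodes by
# partially applying a step function at each index, so the graph needs no
# back edge (cycle) to express the repeated computation.
from functools import partial, reduce

def step(i, acc):
    """One iteration of factorial: multiply the accumulator by the index i."""
    return acc * i

def unrolled_factorial(n):
    # Partial evaluation yields a linear chain of single-input nodes:
    # 1 -> step(1, .) -> step(2, .) -> ... -> step(n, .)
    chain = [partial(step, i) for i in range(1, n + 1)]
    return reduce(lambda acc, node: node(acc), chain, 1)

print(unrolled_factorial(5))   # 120
```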