5 research outputs found

    An Inherently Parallel Large Grained Data Flow Environment

    A parallel programming environment based on data flow is described. Programming in the environment is done with an interactive graphic editor that facilitates the construction of a program graph consisting of modules, ports, paths and triggers. Parallelism is inherent, since data presence allows many modules to execute concurrently. The graph is executed directly, without transformation to traditional representations. The environment supports programming at a very high level, as opposed to parallelism at the individual instruction level.
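    A minimal sketch of the firing rule described above, using a toy Module/port representation of my own rather than the environment's actual editor or runtime: a module executes as soon as data is present on all of its input ports, so independent modules can run concurrently.

```python
# Sketch of large-grained data-flow execution: a module fires when every
# input port holds a token. Names (Module, try_fire, ...) are illustrative.
import threading
from concurrent.futures import ThreadPoolExecutor
from queue import Queue


class Module:
    """A graph node that fires as soon as every input port holds a token."""

    def __init__(self, name, fn, n_inputs):
        self.name = name
        self.fn = fn                                       # work done when the module fires
        self.inputs = [Queue() for _ in range(n_inputs)]   # one queue per input port
        self.successors = []                               # (module, port) pairs fed by our output
        self._lock = threading.Lock()

    def try_fire(self, pool):
        # Data presence on every port is the only trigger; there is no central scheduler.
        with self._lock:
            if not all(not q.empty() for q in self.inputs):
                return
            args = [q.get() for q in self.inputs]
        pool.submit(self._run, pool, args)

    def _run(self, pool, args):
        result = self.fn(*args)
        for succ, port in self.successors:
            succ.inputs[port].put(result)                  # a path carries the token onward
            succ.try_fire(pool)


if __name__ == "__main__":
    done = Queue()
    double = Module("double", lambda x: 2 * x, 1)
    square = Module("square", lambda x: x * x, 1)
    add = Module("add", lambda a, b: a + b, 2)
    sink = Module("sink", done.put, 1)
    double.successors = [(add, 0)]
    square.successors = [(add, 1)]
    add.successors = [(sink, 0)]

    pool = ThreadPoolExecutor(max_workers=4)
    double.inputs[0].put(3)
    square.inputs[0].put(4)
    double.try_fire(pool)                                  # both source modules can run concurrently
    square.try_fire(pool)
    print("result:", done.get())                           # 2*3 + 4*4 = 22
    pool.shutdown()
```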

    Aspects of parallel processing and control engineering

    The concept of parallel processing is not a new one, but its application to control engineering tasks is a relatively recent development, made possible by contemporary hardware and software innovation. It has long been accepted that, if properly orchestrated, several processors/CPUs can be combined to form a powerful processing entity. What prevented this from being implemented in commercial systems was that the microprocessor was adequate for most tasks, so the expense of a multi-processor system was not justified. With the advent of high-demand systems, such as highly fault-tolerant flight controllers and fast robotic controllers, parallel processing became a viable option. Nonetheless, the software interfacing of control laws onto parallel systems has remained something of an impasse. There are at present no software compilers which allow a programmer to specify a control law in pure mathematical terminology and then decompose it into a flow diagram of concurrent processes that may be implemented on, say, a target Transputer system. There are several parallel programming languages with which a programmer can generate parallel processes but, generally, in order to realise a control algorithm in parallel the programmer must have intimate knowledge of the algorithm. Efficiency therefore depends on the programmer's ability to recognise inherent parallelism. Some attempts are being made to create intelligent partitioning and scheduling compilers, but these usually impose significant extra overheads on the multiprocessor system. In the absence of an automated technique, control algorithms must be decomposed by inspection. The research presented in this thesis is founded upon the application of both parallel and pipelining techniques to particular control strategies. Parallelism is tackled objectively: by creating a tailored terminology it is defined mathematically, and related concepts, such as bounded parallelism and algorithm speedup, are consequently also quantified numerically. A pipelined explicit Self Tuning Regulator (STR) controller is developed and tested on systems of different order. Under the governance of the parallelism terminology, the effectiveness of the parallel STR is evaluated and numerically quantified in terms of relevant performance indices. A parallel simulator is presented for the Puma 560 robotic manipulator. By exploiting parallelism and pipelinability in the robot model, a significant increase in execution speed is achieved over the sequential model. The use of Transputers is examined and graphical results are obtained for several performance indices, including speedup, processor efficiency and bounded parallelism. By the same analytical technique, a parallel computed-torque feedforward controller incorporating proportional-derivative feedback control for the Puma 560 manipulator is developed and appraised. The performance of a Transputer system in hosting the controller is graphically analysed and, as in the case of the parallel simulator, the more important performance indices are examined under both optimal conditions and conditions of varying hardware constraints.
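    The performance indices named above (speedup, processor efficiency) can be illustrated with the textbook definitions; the sketch below assumes the standard formulas S_p = T_1 / T_p and E_p = S_p / p rather than the thesis's own tailored terminology, and the timings are invented purely for illustration.

```python
# Standard parallel performance indices (textbook definitions, not the
# thesis's tailored terminology). Timings below are hypothetical.

def speedup(t_serial: float, t_parallel: float) -> float:
    """Speedup of a parallel run over the sequential baseline: S_p = T1 / Tp."""
    return t_serial / t_parallel


def efficiency(t_serial: float, t_parallel: float, processors: int) -> float:
    """Fraction of ideal linear speedup achieved per processor: E_p = S_p / p."""
    return speedup(t_serial, t_parallel) / processors


if __name__ == "__main__":
    # Hypothetical timings for a controller hosted on 1 vs. 4 Transputers.
    t1, t4, p = 8.0e-3, 2.5e-3, 4
    print(f"speedup    = {speedup(t1, t4):.2f}")        # 3.20
    print(f"efficiency = {efficiency(t1, t4, p):.2f}")  # 0.80
```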

    Context flow architecture


    Procesamiento paralelo: balance de carga dinámico en algoritmo de sorting (Parallel processing: dynamic load balancing in a sorting algorithm)

    Some sorting techniques attempt to balance the load by taking an initial sample of the data to be sorted and distributing the data according to pivots. Others redistribute partially ordered lists so that each processor stores an approximately equal number of keys and all processors take part in the merge process during execution. This thesis presents a new method that balances the load dynamically, based on a different approach: the work is distributed using an estimator that predicts the pending workload. The proposed method is a variant of parallel sorting by merging, that is, a comparison-based technique. The blocks are sorted with sentinel Bubble Sort, in which case the work to be done, in terms of comparisons and exchanges, depends on the degree of disorder in the data. The evolution of the amount of work in each iteration of the algorithm was studied for different types of input sequence (n items with values from 1 to n without repetition, and random data with a normal distribution), and the work was observed to decrease in each iteration. This was used to obtain an estimate of the expected remaining work from a given iteration onwards, and that estimate serves to correct the load distribution. With this idea, the method …
    Reviewed at: http://sedici.unlp.edu.ar/handle/10915/9500 (Facultad de Ciencias Exactas)
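    A minimal sketch of the idea described in the abstract, under simplifying assumptions: blocks are sorted in parallel with sentinel bubble sort, the comparisons and exchanges per block are recorded as a workload measure, and the sorted blocks are merged. The names and the work measure are illustrative; the thesis's actual estimator and redistribution scheme are not reproduced here.

```python
# Sketch: parallel sorting by merging with sentinel bubble sort on blocks.
# The per-block work count stands in for the thesis's workload estimator;
# redistribution itself is not implemented here.
import heapq
import random
from concurrent.futures import ProcessPoolExecutor


def bubble_sort_with_work(block):
    """Bubble sort with an early-exit sentinel; returns (sorted block, work done)."""
    a = list(block)
    work = 0
    n = len(a)
    for i in range(n - 1):
        swapped = False                  # sentinel: stop when a pass does no swaps
        for j in range(n - 1 - i):
            work += 1                    # one comparison
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                work += 1                # one exchange
                swapped = True
        if not swapped:
            break
    return a, work


def parallel_sort(data, workers=4, block_size=256):
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(bubble_sort_with_work, blocks))
    sorted_blocks = [b for b, _ in results]
    work_per_block = [w for _, w in results]
    # In the thesis the measured work is used to predict the remaining load
    # and correct the distribution; here it is only reported.
    print("work per block:", work_per_block)
    return list(heapq.merge(*sorted_blocks))


if __name__ == "__main__":
    data = random.sample(range(1, 2001), 2000)   # values 1..n without repetition
    assert parallel_sort(data) == sorted(data)
```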