10 research outputs found
An Inherently Parallel Large Grained Data Flow Environment
A parallel programming environment based on data flow is described. Programming in the environment involves the use of an interactive graphic editor that facilitates the construction of a program graph consisting of modules, ports, paths and triggers. Parallelism is inherent, since the presence of data allows many modules to execute concurrently. The graph is executed directly, without transformation to traditional representations. The environment supports programming at a very high level, as opposed to parallelism at the individual instruction level
The Task distribution preprocessor (TDP)
This document describes a software processor used to generate portable and efficient multiprocess programs that run on UNIX operating systems. The processor is designed to be a convenient method for converting single-process C programs into distributed multiprocess C programs. Another goal is to have the processor used as a back end or platform for multitasking languages and code generators. Efficiency is targeted toward multiprocessor systems that provide facilities for sharing physical memory between processes running on separate CPUs. Portability is achieved through the use of the highly portable UNIX operating system and its companion C language. The C language is used as both the input and output language for the processor. Use of C as the object language gives portability to the generated multitask program. Using C as the input language simplifies the interface with multitasking languages and code generators, and minimizes the changes necessary when converting C language single-task programs to multitask programs. The initial implementation of the Task Distribution Preprocessor will generate C code that can be compiled and run on any UNIX system that provides message-passing facilities
Programming Languages for Distributed Computing Systems
When distributed systems first appeared, they were programmed in traditional sequential languages, usually with the addition of a few library procedures for sending and receiving messages. As distributed applications became more commonplace and more sophisticated, this ad hoc approach became less satisfactory. Researchers all over the world began designing new programming languages specifically for implementing distributed applications. These languages and their history, their underlying principles, their design, and their use are the subject of this paper. We begin by giving our view of what a distributed system is, illustrating with examples to avoid confusion on this important and controversial point. We then describe the three main characteristics that distinguish distributed programming languages from traditional sequential languages, namely, how they deal with parallelism, communication, and partial failures. Finally, we discuss 15 representative distributed languages to give the flavor of each. These examples include languages based on message passing, rendezvous, remote procedure call, objects, and atomic transactions, as well as functional languages, logic languages, and distributed data structure languages. The paper concludes with a comprehensive bibliography listing over 200 papers on nearly 100 distributed programming languages
Grasp--a language to facilitate the synthesis of parallel programs
In the context of this thesis, the name Grasp subsumes three distinct but highly interrelated projects. First of all, Grasp is a programming language that allows the user to define properties of graph-theoretic objects by using high-level nonprocedural descriptions called specifications. Second, Grasp is a translator that converts specifications to standard sequential C functions. Finally, Grasp is a model of computation that has been left largely uninvestigated despite possessing several advantageous properties. Each of these aspects of Grasp is described in a contextually clean and detailed manner, but in the end the theoretical aspects of Grasp are espoused over the formal and practical aspects
Dataflow development of medium-grained parallel software
PhD Thesis

In the 1980s, multiple-processor computers (multiprocessors) based on conventional processing elements emerged as a popular solution to the continuing demand for ever-greater computing power. These machines offer a general-purpose parallel processing platform on which the size of program units which can be efficiently executed in parallel - the "grain size" - is smaller than that offered by distributed computing environments, though greater than that of some more specialised architectures. However, programming to exploit this medium-grained parallelism remains difficult. Concurrent execution is inherently complex, yet there is a lack of programming tools to support parallel programming activities such as program design, implementation, debugging, performance tuning and so on.

In helping to manage complexity in sequential programming, visual tools have often been used to great effect, which suggests one approach towards the goal of making parallel programming less difficult.

This thesis examines the possibilities which the dataflow paradigm has to offer as the basis for a set of visual parallel programming tools, and presents a dataflow notation designed as a framework for medium-grained parallel programming. The implementation of this notation as a programming language is discussed, and its suitability for the medium-grained level is examined

Science and Engineering Research Council of Great Britain; EC ERASMUS scheme
Linda Talk: distributed support for object-oriented concurrent programming
Master's dissertation - Universidade Federal de Santa Catarina, Centro Tecnológico

Complex problems are generally decomposed into smaller subproblems that are more easily handled. The same holds for computing systems, which enjoy a rich range of decomposition approaches (functional, procedural, etc.). Among these, object-oriented decomposition has gained more and more ground, given its richness and power in modelling and implementing computing systems. The possibility of programming multiprocessor systems and systems on computer networks, on the other hand, has favoured the parallel/concurrent/distributed programming lines. However, while classical object orientation promotes a natural modelling of entities in the problem domain, it fails when trying to express concurrent/parallel activities. Systems that support the notion of parallel processes, such as Occam, Conic, Ada, etc., fill this gap; yet the power to model and abstract entities is quite limited in this kind of approach, generally leading to systems that are difficult to adapt, maintain and reuse. Models supporting object-oriented parallel programming, such as Emerald, ConcurrentSmalltalk, Act-1, ABCL/1, etc., arose as attempts to unify objects in the classical object-oriented sense with the notion of parallel, communicating processes. However, in this approach, as in classical object-oriented programming and in some concurrent/parallel/distributed programming models, the metaphor for interaction between objects/processes is the same: message passing. Message passing as found in concurrent object-oriented systems exhibits several weaknesses with respect to the implementation, maintenance and reusability of distributed systems.

Our proposal seeks to incorporate into a classical object-oriented language - Smalltalk - a model that supports parallel/distributed programming with a greater degree of flexibility: Linda's Tuple Space model. Through a small set of primitives, one obtains a simple model for process creation and coordination that is orthogonal to the language into which it is embedded (Smalltalk, in this case). Through extensive use of the model, we believe it is possible to build truly distributed, object-oriented systems with a greater degree of flexibility in their implementation, reusability and maintenance
Ada's concurrency model: its implementation and the implications for the interface with the environment
The major obstacle to the widespread utilization of parallel machines is the lack of programming tools allowing the development of software portable between machines with different performance.

This dissertation analyzes whether languages with explicit parallelism fulfil this requirement. The approach of using programs with more parallelism than available on the machine (parallel slackness) is presented. This technique can solve the efficiency problems appearing in the execution of programs with explicit parallelism on machines with too coarse a granularity. Therefore, with this approach programs can be ported efficiently to different machines.

A new abstract model of parallelism allowing the generic study of the implementation of languages with explicit parallelism is developed. In this model, a parallel system is described by a hierarchy of parallel virtual machines.

This generic analysis is applied to the Ada language. Ada-specific features whose implementation is problematic are identified and analyzed. The change proposals to the Ada language in the frame of the Ada 9X revision process are also analyzed.

The specific problems of implementing the language on top of an operating system are studied within the scope of the parallelism model. With this kind of implementation, program interactions with external environments can lead to problems, such as the blocking of the corresponding operating system process, which degrade the program's execution performance. A practical example of this kind of problem, the access to the graphics standard GKS (Graphical Kernel System) from Ada programs, is analyzed in depth and the implemented solution is described