41 research outputs found

    Compiling machine-independent parallel programs

    Applications of high-performance computing

    Given the advance of multiprocessor architectures and the current importance of High-Performance Computing (HPC), the need arose to provide students and faculty researchers in the province of Jujuy and the northwest region with a reference site whose services could be used by applications requiring large processing capacity. Thus, in 2008, the research project “APLICACIONES DEL CÓMPUTO DE ALTAS PRESTACIONES” (Applications of High-Performance Computing) was launched by young faculty researchers of the Facultad de Ingeniería of the UNJu, with the aims of deepening work in the area of high-performance computing, incorporating HPC into the curricula of the computer science degree programs, and collaborating with other research groups that require HPC. Track: Distributed and Parallel Processing. Red de Universidades con Carreras en Informática (RedUNCI).

    Relating data-parallelism and (and-) parallelism in logic programs

    Much work has been done in the areas of and-parallelism and data parallelism in logic programs. Such work has proceeded, to a certain extent, in an independent fashion. Both types of parallelism offer advantages and disadvantages. Traditional (and-) parallel models offer generality, being able to exploit parallelism in a large class of programs (including that exploited by data-parallelism techniques). Data-parallelism techniques, on the other hand, offer increased performance for a restricted class of programs. The thesis of this paper is that these two forms of parallelism are not fundamentally different, and that relating them opens the possibility of obtaining the advantages of both within the same system. Some relevant issues are discussed and solutions proposed. The discussion is illustrated through visualizations of actual parallel executions implementing the ideas proposed.
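    To make the contrast concrete, the following is a minimal, hypothetical sketch in C with OpenMP (not the logic-programming systems the paper studies): the same runtime expresses data parallelism as one operation mapped over an array, and (and-)parallelism as independent "goals" of a conjunction run as concurrent tasks. goal_a and goal_b are invented stand-ins.

        /* Hypothetical C/OpenMP analogue of the two forms of parallelism. */
        #include <omp.h>
        #include <stdio.h>

        static long goal_a(void) { long s = 0; for (int i = 0; i < 1000; i++) s += i; return s; }
        static long goal_b(void) { long p = 1; for (int i = 1; i < 15; i++) p *= i; return p; }

        int main(void) {
            int xs[8] = {1, 2, 3, 4, 5, 6, 7, 8};

            /* Data parallelism: identical work applied to every element. */
            #pragma omp parallel for
            for (int i = 0; i < 8; i++)
                xs[i] *= 2;

            /* (And-)parallelism analogue: independent conjuncts as tasks. */
            long a = 0, b = 0;
            #pragma omp parallel
            #pragma omp single
            {
                #pragma omp task shared(a)
                a = goal_a();
                #pragma omp task shared(b)
                b = goal_b();
                #pragma omp taskwait
            }
            printf("a=%ld b=%ld xs[0]=%d\n", a, b, xs[0]);
            return 0;
        }

    In logic-programming terms, the parallel loop plays the role of a data-parallel builtin mapped over a list, while the two tasks correspond to running the goals of a conjunction in and-parallel; both are served by one runtime, as the paper's thesis suggests.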

    Flexible language constructs for large parallel programs

    The goal of the research described is to develop flexible language constructs for writing large data-parallel numerical programs for distributed-memory (MIMD) multiprocessors. Previously, several models have been developed to support synchronization and communication. Models for global synchronization include SIMD (Single Instruction Multiple Data), SPMD (Single Program Multiple Data), and sequential programs annotated with data-distribution statements. The two primary models for communication are implicit communication based on shared memory and explicit communication based on messages. None of these models by itself seems sufficient to permit the natural and efficient expression of the variety of algorithms that occur in large scientific computations. An overview is given of a new language that combines many of these programming models in a clean manner. This is done in a modular fashion, such that different models can be combined to support large programs; within a module, the selection of a model depends on the algorithm and its efficiency requirements. An overview of the language and a discussion of some of the critical implementation details are given.
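    As a hedged illustration of combining these models (this sketch assumes MPI and OpenMP, not the new language the abstract outlines), the hybrid SPMD program below uses explicit message-based communication between processes and implicit shared-memory communication among the threads within each process.

        /* SPMD sketch: compile with mpicc -fopenmp, run with mpirun -np 4. */
        #include <mpi.h>
        #include <omp.h>
        #include <stdio.h>

        int main(int argc, char **argv) {
            MPI_Init(&argc, &argv);
            int rank;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            /* Implicit communication: threads share local_sum via memory. */
            double local_sum = 0.0;
            #pragma omp parallel for reduction(+:local_sum)
            for (int i = 0; i < 1000; i++)
                local_sum += (double)(rank * 1000 + i);

            /* Explicit communication: partial sums exchanged by message. */
            double total = 0.0;
            MPI_Reduce(&local_sum, &total, 1, MPI_DOUBLE, MPI_SUM, 0,
                       MPI_COMM_WORLD);
            if (rank == 0)
                printf("total = %f\n", total);

            MPI_Finalize();
            return 0;
        }

    Each module of a larger program could, in the same spirit, pick whichever communication model suits its algorithm and efficiency requirements.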

    An Evaluation of Adaptive Execution of OpenMP Task Parallel Programs

    We present a system that allows task-parallel OpenMP programs to execute on a network of workstations (NOW) with a variable number of nodes. Such adaptivity, generally called adaptive parallelism, is important in a multi-user NOW environment, enabling the system to expand the computation onto idle nodes or withdraw from otherwise occupied nodes. We focus on task-parallel applications in this paper, but the system also lets data-parallel applications run adaptively. When an adaptation is requested, we let all processes complete their current tasks; then the system executes an extra OpenMP join-fork sequence not present in the application code. Here, the system can change the number of nodes without involving the application, as processes do not hold compute-relevant private state. We show that the cost of an adaptation is low, and we explain why it is lower for task-parallel applications than for data-parallel applications.
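    The adaptation point described above can be sketched in plain OpenMP; this is a hypothetical shared-memory analogue (threads rather than NOW nodes), with desired_workers() standing in for whatever policy detects idle or reclaimed nodes. Between the join of one parallel region and the fork of the next, workers hold no compute-relevant private state, so their number can change safely.

        #include <omp.h>
        #include <stdio.h>

        /* Invented policy function: pretend one extra node becomes idle
         * at each step and is absorbed into the computation. */
        static int desired_workers(int step) { return 2 + step; }

        int main(void) {
            omp_set_dynamic(0);             /* honor requested team sizes */
            for (int step = 0; step < 3; step++) {
                /* Join point: all workers finished their current tasks,
                 * so the team size can be adapted outside the app code. */
                omp_set_num_threads(desired_workers(step));

                #pragma omp parallel        /* fork with the adapted size */
                {
                    #pragma omp single
                    printf("step %d: %d workers\n",
                           step, omp_get_num_threads());
                }                           /* implicit join */
            }
            return 0;
        }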