18 research outputs found

    GRID superscalar: a programming model for the Grid

    In recent years the Grid has emerged as a new platform for distributed computing. Grid technology makes it possible to join resources from different administrative domains and to form a virtual supercomputer with all of them. Many research groups have dedicated their efforts to developing a set of basic services that offer a Grid middleware: a layer that enables the use of the Grid. However, using these services is not an easy task for many end users, even more so if their expertise is not related to computer science. This has a negative influence on the adoption of Grid technology by the scientific community, which sees it as a powerful technology that is very difficult to exploit. To ease the use of the Grid, an extra layer is needed that hides all of its complexity and allows users to program or port their applications in an easy way.

    There have been many proposals of programming tools for the Grid. In this thesis we give an overview of some of them, and we can see that there exist both Grid-aware and Grid-unaware environments (programmed with or without specifying details of the Grid, respectively). Besides, very few existing tools can exploit the implicit parallelism of the application; in the majority of them, the user must define the parallelism explicitly. Another important feature we consider is whether they are based on widely used programming languages (such as C++ or Java), so that adoption is easier for end users.

    In this thesis, our main objective has been to create a programming model for the Grid based on sequential programming and well-known imperative programming languages, able to exploit the implicit parallelism of applications and to speed them up by using Grid resources concurrently. Moreover, because the Grid has a distributed, heterogeneous and dynamic nature, and because the number of resources that form a Grid can be very large, the probability that an error arises during an application's execution is high. Thus, another of our objectives has been to deal automatically with any type of error which may arise during the execution of the application (whether application related or Grid related).

    GRID superscalar (GRIDSs), the main contribution of this thesis, is a programming model that achieves these objectives by providing a very small and simple interface and a runtime that is able to execute the provided code in parallel using the Grid. Our programming interface allows a user to program a Grid-unaware application with well-known and popular imperative languages (such as C/C++, Java, Perl or Shell script) in a sequential fashion, therefore taking an important step to help end users adopt Grid technology.

    We have applied our knowledge of computer architecture and microprocessor design to the GRIDSs runtime. As in a superscalar processor, the GRIDSs runtime is able to perform a data dependence analysis between the tasks that form an application, and to apply renaming techniques in order to increase its parallelism. GRIDSs automatically generates, from the user's main code, a graph describing the data dependencies in the application. We also present real use cases of the programming model in the fields of computational chemistry and bioinformatics, which demonstrate that our objectives have been achieved.

    Finally, we have studied the application of several fault detection and treatment techniques: checkpointing, task retry and task replication. Our proposal is to provide an environment able to deal with all types of failures, transparently for the user whenever possible. The main advantage of implementing these mechanisms at the programming model level is that application-level knowledge can be exploited to dynamically create a fault tolerance strategy for each application, while avoiding the introduction of overhead in error-free environments.
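
    The dependence analysis and renaming described above can be illustrated with a minimal sketch (in Python; it is not the actual GRIDSs runtime, and the task and file names are hypothetical): tasks declare the files they read and write, the runtime links each reader to the last writer of a file, and every write creates a fresh version so that write-after-write and write-after-read collisions do not serialize the graph.

```python
# Toy sketch of superscalar-style dependence analysis between file-based tasks.
# NOT the GRIDSs runtime; task and file names are hypothetical.
from collections import defaultdict

class Task:
    def __init__(self, name, reads, writes):
        self.name, self.reads, self.writes = name, list(reads), list(writes)
        self.deps = set()       # tasks this task must wait for (true dependences)
        self.renamed = []       # physical file versions produced by this task

def build_dag(tasks):
    """Detect read-after-write dependences and rename written files so that
    write-after-write / write-after-read dependences do not serialize the graph."""
    last_writer = {}                # logical file -> task that last produced it
    version = defaultdict(int)      # logical file -> current version number
    for t in tasks:
        for f in t.reads:           # true dependence on the latest producer
            if f in last_writer:
                t.deps.add(last_writer[f])
        for f in t.writes:          # renaming: every write gets a fresh version
            version[f] += 1
            t.renamed.append(f"{f}.v{version[f]}")
            last_writer[f] = t
    return tasks

# Example: t2 and t3 only depend on t1; t3's rewrite of "a.dat" is renamed,
# so it introduces no false dependence with t2.
dag = build_dag([
    Task("t1", reads=[],        writes=["a.dat"]),
    Task("t2", reads=["a.dat"], writes=["b.dat"]),
    Task("t3", reads=["a.dat"], writes=["a.dat"]),
])
for t in dag:
    print(t.name, "waits for", sorted(d.name for d in t.deps), "produces", t.renamed)
```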

    The OTree: multidimensional indexing with efficient data sampling for HPC

    Spatial big data is considered an essential trend in future scientific and business applications. Indeed, research instruments, medical devices, and social networks generate hundreds of petabytes of spatial data per year. However, many authors have pointed out that the lack of specialized frameworks for multidimensional Big Data is limiting possible applications and precluding many scientific breakthroughs. Paramount in achieving High-Performance Data Analytics is to optimize and reduce the I/O operations required to analyze large data sets. To do so, we need to organize and index the data according to its multidimensional attributes. At the same time, to enable fast and interactive exploratory analysis, it is vital to generate approximate representations of large datasets efficiently. In this paper, we propose the Outlook Tree (or OTree), a novel Multidimensional Indexing with efficient data Sampling (MIS) algorithm. The OTree enables exploratory analysis of large multidimensional datasets with arbitrary precision, a vital missing feature in current distributed data management solutions. Our algorithm reduces the indexing overhead and achieves high performance even for write-intensive HPC applications. Indeed, we use the OTree to store the scientific results of a study on the efficiency of drug inhalers. Then we compare the OTree implementation on Apache Cassandra, named Qbeast, with PostgreSQL and plain storage. Lastly, we demonstrate that our proposal delivers better performance and scalability. Peer Reviewed. Postprint (author's final draft).
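
    As an illustration of the general idea of indexing with embedded sampling (this is not the OTree algorithm itself; the bucket layout, sample size and query interface are assumptions), the sketch below keeps a bounded reservoir sample per grid bucket so that a range query can return an approximate answer, at a precision controlled by the sample size, without reading all the data.

```python
# Illustrative only: NOT the OTree. A grid index whose buckets keep bounded
# reservoir samples, enabling approximate multidimensional range queries.
import random

class SampledIndex:
    def __init__(self, cell=1.0, sample_size=32):
        self.cell, self.k = cell, sample_size
        self.buckets = {}                    # grid cell -> [count, reservoir]

    def _key(self, p):
        return tuple(int(c // self.cell) for c in p)

    def insert(self, p):
        bucket = self.buckets.setdefault(self._key(p), [0, []])
        bucket[0] += 1
        if len(bucket[1]) < self.k:          # classic reservoir sampling per bucket
            bucket[1].append(p)
        else:
            j = random.randrange(bucket[0])
            if j < self.k:
                bucket[1][j] = p

    def approx_range(self, lo, hi):
        """Return sampled points from buckets overlapping the box [lo, hi],
        plus an estimate of the exact number of points in those buckets."""
        pts, est = [], 0
        for key, (count, sample) in self.buckets.items():
            cell_lo = [k * self.cell for k in key]
            cell_hi = [(k + 1) * self.cell for k in key]
            if all(cl < h and ch > l for cl, ch, l, h in zip(cell_lo, cell_hi, lo, hi)):
                pts.extend(sample)
                est += count
        return pts, est

idx = SampledIndex(cell=0.25, sample_size=8)
for _ in range(10_000):
    idx.insert((random.random(), random.random()))
sample, estimate = idx.approx_range((0.0, 0.0), (0.5, 0.5))
print(len(sample), "sampled points,", estimate, "points estimated in range")
```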

    Variable batched DGEMM

    Many scientific applications need to solve a large number of small independent problems. These individual problems do not provide enough parallelism on their own, so they must be computed as a batch. Today, vendors such as Intel and NVIDIA are developing their own suites of batch routines. Although most works focus on computing batches of fixed size, in real applications we cannot assume a uniform size for the whole set of problems. We explore and analyze different strategies based on the parallel for, task and taskloop OpenMP pragmas. Although these strategies are straightforward from a programmer's point of view, they have a different impact on performance. We also analyze a new prototype provided by Intel (MKL) that deals with batch operations (cblas_dgemm_batch). We propose a new approach called grouping. It groups a set of problems until a limit in terms of memory occupancy or number of operations is filled. In this way, groups composed of different numbers of problems are distributed over the cores, achieving a more balanced distribution in terms of computational cost. This strategy can be up to 6× faster than the Intel (MKL) batch routine.

    This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 720270 (HBP SGA1), from the Spanish Ministry of Economy and Competitiveness under the project Computacion de Altas Prestaciones VII (TIN2015-65316-P) and from the Departament d'Innovacio, Universitats i Empresa de la Generalitat de Catalunya under project MPEXPAR: Models de Programacio i Entorns d'Execucio Paral·lels (2014-SGR-1051). Peer Reviewed. Postprint (author's final draft).
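
    The grouping strategy can be sketched as follows (a hedged illustration only: the flop budget, the thread pool and the numpy-based execution are assumptions, and the paper's implementation relies on MKL/OpenMP): problems are greedily packed into groups of roughly equal computational cost, and the groups, rather than the individual small GEMMs, become the unit of parallel work.

```python
# Sketch of the "grouping" idea for variable-size batched GEMM.
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def flops(m, n, k):
    return 2 * m * n * k                 # cost of one C = A @ B

def make_groups(problems, budget):
    """Greedily pack (A, B) pairs into groups until the flop budget is filled."""
    groups, current, cost = [], [], 0
    for A, B in problems:
        current.append((A, B))
        cost += flops(A.shape[0], B.shape[1], A.shape[1])
        if cost >= budget:
            groups.append(current)
            current, cost = [], 0
    if current:
        groups.append(current)
    return groups

def run_group(group):
    return [A @ B for A, B in group]     # stand-in for a batched dgemm call

# Variable-size batch: matrix dimensions differ from problem to problem.
rng = np.random.default_rng(0)
problems = [(rng.standard_normal((m, 16)), rng.standard_normal((16, m)))
            for m in rng.integers(4, 64, size=1000)]

groups = make_groups(problems, budget=2_000_000)
with ThreadPoolExecutor() as pool:       # one group per worker => balanced load
    results = [c for chunk in pool.map(run_group, groups) for c in chunk]
print(len(groups), "groups,", len(results), "results")
```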

    COMP Superscalar, an interoperable programming framework

    COMPSs is a programming framework that aims to facilitate the parallelization of existing applications written in Java, C/C++ and Python scripts. For that purpose, it offers a simple programming model based on sequential development, in which the user is mainly responsible for identifying the functions to be executed as asynchronous parallel tasks and for marking them with annotations or standard Python decorators. A runtime system is in charge of exploiting the inherent concurrency of the code, automatically detecting and enforcing the data dependencies between tasks and spawning the tasks on the available resources, which can be nodes in a cluster, clouds or grids. In cloud environments, COMPSs provides scalability and elasticity features that allow the dynamic provision of resources.

    This work has been supported by the following institutions: the Spanish Government with grant SEV-2011-00067 of the Severo Ochoa Program and contract Computacion de Altas Prestaciones VI (TIN2012-34557); the SGR programme (2014-SGR-1051) of the Catalan Government; the Human Brain Project, funded by the European Commission under contract 604102; the ASCETiC project, funded by the European Commission under contract 610874; the EUBrazilCloudConnect project, funded by the European Commission under contract 614048; and the Intel-BSC Exascale Lab collaboration. Peer Reviewed. Postprint (published version).
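
    A minimal example in the style of PyCOMPSs is sketched below; the decorator and synchronization call follow the publicly documented API, but the snippet is indicative rather than a verified, version-specific example. Plain function calls spawn asynchronous tasks, and the runtime infers the dependency graph from the data flowing between them.

```python
# Sketch of a COMPSs (PyCOMPSs) style application; treat the exact module
# paths and decorator arguments as indicative, not version-specific.
from pycompss.api.task import task          # marks a function as a remote task
from pycompss.api.api import compss_wait_on # synchronizes on a future result

@task(returns=1)
def count_words(block):
    return len(block.split())

@task(returns=1)
def add(x, y):
    return x + y

def main():
    blocks = ["the quick brown fox", "jumps over", "the lazy dog"]
    # Each call spawns an asynchronous task; the runtime builds the dependency
    # graph (the adds depend on the counts) and schedules tasks on resources.
    partial = [count_words(b) for b in blocks]
    total = partial[0]
    for p in partial[1:]:
        total = add(total, p)
    total = compss_wait_on(total)            # fetch the final value
    print("total words:", total)

if __name__ == "__main__":
    main()
```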

    Automatic, efficient and scalable provenance registration for FAIR HPC workflows

    Provenance registration is becoming more and more important as we increase the size and number of experiments performed using computers. In particular, when provenance is recorded in HPC environments, it must be efficient and scalable. In this paper, we propose a provenance registration method for scientific workflows that is efficient enough to run on supercomputers (and thus can also run in environments with more relaxed restrictions, such as distributed ones). It must also be scalable in order to deal with the large workflows that are typical in HPC. We also target transparency for the user, shielding them from having to specify how provenance must be recorded. We implement our design using the COMPSs programming model as the Workflow Management System (WfMS) and use RO-Crate as a well-established specification to record and publish provenance. Experiments are provided that demonstrate the run-time efficiency and scalability of our solution.

    This work has been supported by the Spanish Government (PID2019-107255GB-C21), by the Generalitat de Catalunya (contract 2017-SGR-01414) and by the EU's Horizon research and innovation programme under grant agreement No 101058129 (DT-GEO). It has also been carried out in the CECH project, co-funded at 50% by the European Regional Development Fund under the framework of the ERDF Operative Programme for Catalunya 2014-2020, with a grant of 1.527.637,88 €. LRN, JMF and SCG are partly supported by INB Grant (PT17/0009/0001 - ISCIII-SGEFI / ERDF), and their work received funding from the EU's Horizon 2020 research and innovation programme under grant agreements EOSC-Life No 824087 and EJP RD No 825575. Peer Reviewed. Postprint (author's final draft).
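
    As an illustration of the kind of record produced when provenance is published as an RO-Crate (the entity layout follows the general RO-Crate 1.1 structure, but the file names and workflow entities are hypothetical and the actual COMPSs recorder output differs in detail), a minimal metadata file could be emitted as follows.

```python
# Minimal RO-Crate style metadata for a workflow run (illustrative only).
import json

crate = {
    "@context": "https://w3id.org/ro/crate/1.1/context",
    "@graph": [
        {   # descriptor entity pointing at the root dataset
            "@id": "ro-crate-metadata.json",
            "@type": "CreativeWork",
            "conformsTo": {"@id": "https://w3id.org/ro/crate/1.1"},
            "about": {"@id": "./"},
        },
        {   # root dataset: the packaged workflow run
            "@id": "./",
            "@type": "Dataset",
            "name": "Example workflow run",
            "hasPart": [{"@id": "workflow.py"}, {"@id": "result.out"}],
        },
        {"@id": "workflow.py", "@type": ["File", "SoftwareSourceCode"],
         "name": "Workflow source"},
        {"@id": "result.out", "@type": "File", "name": "Workflow output"},
    ],
}

with open("ro-crate-metadata.json", "w") as fh:
    json.dump(crate, fh, indent=2)
```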

    EMOTIVE: the BSC’s engine for cloud solutions

    Cloud computing is strongly based on virtualization, allowing applications to be multiplexed onto a physical resource while remaining isolated from other applications sharing that physical resource. This technology simplifies the management of e-Infrastructures, but also requires additional effort if users are to benefit from it. Cloud computing must hide its underlying complexity from users: the key is to provide users with a simple but functional interface for accessing IT resources "as a service", while allowing providers to build cost-effective self-managed systems that transparently manage these resources. System developers should also be supported with simple tools that allow them to exploit the facilities of cloud infrastructures. Peer Reviewed. Postprint (published version).

    Energy efficiency embedded service lifecycle: Towards an energy efficient cloud computing architecture

    The paper argues for the need to provide novel methods and tools that support software developers aiming to optimise energy efficiency and minimise the carbon footprint resulting from designing, developing, deploying and running software in Clouds, while maintaining other quality aspects of the software at adequate and agreed levels. A cloud architecture to support energy efficiency at service construction, deployment and operation is discussed, as well as its implementation and evaluation plans. Postprint (published version).

    Graph-based task replication for workflow applications

    The Grid is a heterogeneous and dynamic environment which enables distributed computation. This makes it a technology prone to failures. Related work uses replication to overcome failures in sets of independent tasks and in workflow applications, but it does not consider possible resource limitations when scheduling the replicas. In this paper, we focus on the use of task replication techniques for workflow applications, trying to achieve not only tolerance to the failures that may occur during an execution, but also a speed-up of the computation, without requiring the user to implement an application-level checkpoint, which may be a difficult task depending on the application. Moreover, we also study what to do when there are not enough resources to replicate all running tasks. We establish different replication priorities depending on the graph of the workflow application, giving higher priority to tasks with a higher output degree. We have implemented the proposed policy in the GRID superscalar system, and we have run fastDNAml as an experiment to show that our objectives are met. Finally, we have identified and studied a problem which may arise from the use of replication in workflow applications: the replication wait time. Peer Reviewed.
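
    The prioritization idea can be sketched as follows (a hedged illustration: the graph representation and the notion of spare replication slots are assumptions, not the GRID superscalar implementation): running tasks are ranked by their out-degree in the workflow graph, and only as many as the spare resources allow are replicated.

```python
# Sketch of out-degree based replica prioritization under a resource budget.
def pick_tasks_to_replicate(running, edges, spare_slots):
    """running: ids of currently executing tasks.
    edges: (src, dst) dependency pairs of the workflow DAG.
    spare_slots: number of idle resources available for replicas."""
    out_degree = {t: 0 for t in running}
    for src, _dst in edges:
        if src in out_degree:
            out_degree[src] += 1
    # Tasks whose output feeds many successors are replicated first, since a
    # failure there would delay a larger part of the workflow.
    ranked = sorted(running, key=lambda t: out_degree[t], reverse=True)
    return ranked[:spare_slots]

edges = [("t1", "t2"), ("t1", "t3"), ("t1", "t4"), ("t2", "t5")]
print(pick_tasks_to_replicate(["t1", "t2"], edges, spare_slots=1))  # ['t1']
```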
