183 research outputs found

    Heterogeneous hierarchical workflow composition

    Workflow systems promise scientists an automated end-to-end path from hypothesis to discovery. However, expecting any single workflow system to deliver such a wide range of capabilities is impractical. A more practical solution is to compose the end-to-end workflow from more than one system. With this goal in mind, the integration of task-based and in situ workflows is explored, where the result is a hierarchical heterogeneous workflow composed of subworkflows, with different levels of the hierarchy using different programming, execution, and data models. Materials science use cases demonstrate the advantages of such heterogeneous hierarchical workflow composition. This work is a collaboration between Argonne National Laboratory and the Barcelona Supercomputing Center within the Joint Laboratory for Extreme-Scale Computing. This research is supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, under contract number DE-AC02-06CH11357, program manager Laura Biven; by the Spanish Government (SEV2015-0493); by the Spanish Ministry of Science and Innovation (contract TIN2015-65316-P); and by the Generalitat de Catalunya (contract 2014-SGR-1051).
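    The abstract describes the composition only at the level of programming, execution, and data models; as a rough, hypothetical sketch of the idea (not the authors' implementation), the example below runs a small task-based outer workflow whose tasks each drive an in situ subworkflow, with the simulated producer and the analysis consumer exchanging data in memory rather than through files. All function names and parameters are invented.

```python
# Illustrative sketch only: a task-based outer workflow whose tasks each run an
# "in situ" subworkflow (simulation and analysis coupled through memory).
# All names and parameters are hypothetical.
import queue
import threading
from concurrent.futures import ProcessPoolExecutor

def in_situ_subworkflow(params):
    """One task of the outer workflow: a simulation coupled in situ to an analysis."""
    channel = queue.Queue(maxsize=4)               # in-memory coupling, no files on disk

    def simulate():
        for step in range(params["steps"]):
            frame = [params["scale"] * step] * 8   # stand-in for one simulation frame
            channel.put(frame)
        channel.put(None)                          # end-of-stream marker

    producer = threading.Thread(target=simulate)
    producer.start()
    means = []
    while (frame := channel.get()) is not None:    # the analysis consumes frames
        means.append(sum(frame) / len(frame))      # as they are produced
    producer.join()
    return max(means)

if __name__ == "__main__":
    # Outer, task-based level of the hierarchy: independent subworkflows run as
    # parallel tasks, each internally using a different (in situ) execution model.
    cases = [{"steps": 10, "scale": s} for s in (0.5, 1.0, 2.0)]
    with ProcessPoolExecutor() as pool:
        print(list(pool.map(in_situ_subworkflow, cases)))
```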

    Semantic resource management and interoperability between distributed computing platforms

    Distributed Computing is the paradigm in which application execution is distributed across different computers connected by a communication network. Distributed Computing platforms have evolved very fast during the last decades: starting from Clusters, where a set of computers work together in a single location; evolving to Grids, where computing resources are shared by different entities, creating a global computing infrastructure available to different user communities; and finally becoming what is currently known as the Cloud, where computing and data resources are provided on demand, in a very dynamic fashion, following the Utility Computing model in which users pay only for what they consume. Different types of companies and institutions are exploring the potential benefits of moving their IT services and applications to Cloud infrastructures in order to decouple the management of computing resources from their core business processes and become more productive. Nevertheless, migrating software to Clouds is not an easy task, since it requires deep knowledge of the technology, of how to decompose the application, and of the capabilities offered by providers and how to use them. Besides this complex deployment process, the current cloud marketplace has several providers offering resources with different capabilities, prices, and quality, and each provider uses its own properties and APIs for describing and accessing its resources. Therefore, when customers want to execute an application on a provider's resources, they must understand the different providers' descriptions, compare them, and select the most suitable resources for their interests. Once the provider and resources have been selected, developers have to interoperate with the different providers' interfaces to perform the application execution steps. To do all the mentioned steps, application developers have to deal with the design and implementation of complex integration procedures.
    This thesis presents several contributions to overcome the aforementioned problems by providing a platform that facilitates and automates the integration of applications in different providers' infrastructures, lowering the barrier to adopting new distributed computing infrastructures such as Clouds. The achievement of this objective has been split into several parts. In the first part, we study how semantic web technologies can help to describe applications and to automatically infer a model for deploying them on a distributed platform. Once the application deployment model has been inferred, the second step is finding the resources to deploy and execute the different application components; here, we study how semantic web technologies can be applied to the resource allocation problem. Once the different components have been allocated to the providers' resources, the application components must be deployed and executed on those resources by invoking a workflow of provider API calls. However, every provider defines its own management interfaces, so the workflow that performs the same actions differs depending on the selected provider. In this thesis, we propose a framework that uses AI planning to automatically infer the workflow of provider interface calls required to perform any resource management task. In the last part of the thesis, we study how to introduce the benefits of software agents for coordinating application management in distributed platforms: we propose a multi-agent system that coordinates the different steps of the application deployment in a distributed way and monitors the correct execution of the application on the computing resources. The different contributions have been validated with a prototype implementation and a set of use cases.
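    As a purely illustrative sketch of the capability-based matching step described above (not the thesis's ontology, reasoner, or implementation, which rely on semantic web technologies), the example below encodes component requirements and provider offers as plain dictionaries and selects, for each component, the cheapest offer that satisfies its requirements. Every component, property, provider, and price is invented.

```python
# Illustrative sketch only: capability-based matching of application components
# to provider resources, in the spirit of semantic resource allocation.
# All component, property, and provider names are hypothetical.

COMPONENTS = [
    {"name": "web-frontend", "requires": {"cores": 2, "memory_gb": 4,  "gpu": False}},
    {"name": "analytics",    "requires": {"cores": 8, "memory_gb": 32, "gpu": True}},
]

PROVIDER_OFFERS = [
    {"provider": "cloudA", "flavor": "small",  "cores": 2,  "memory_gb": 4,  "gpu": False, "price": 0.05},
    {"provider": "cloudA", "flavor": "gpu-xl", "cores": 16, "memory_gb": 64, "gpu": True,  "price": 1.20},
    {"provider": "cloudB", "flavor": "medium", "cores": 8,  "memory_gb": 32, "gpu": True,  "price": 0.90},
]

def satisfies(offer, req):
    """An offer satisfies a component if every requested capability is met."""
    return (offer["cores"] >= req["cores"]
            and offer["memory_gb"] >= req["memory_gb"]
            and (offer["gpu"] or not req["gpu"]))

def allocate(components, offers):
    """Pick, for each component, the cheapest offer that satisfies it."""
    plan = {}
    for comp in components:
        candidates = [o for o in offers if satisfies(o, comp["requires"])]
        if not candidates:
            raise RuntimeError(f"no offer satisfies {comp['name']}")
        best = min(candidates, key=lambda o: o["price"])
        plan[comp["name"]] = (best["provider"], best["flavor"])
    return plan

if __name__ == "__main__":
    print(allocate(COMPONENTS, PROVIDER_OFFERS))
```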

    Ser y misión del artista


    An amplifier-less acquisition chain for power measurements in series resonant inverters

    Successive approximation register (SAR) analog-to-digital converter (ADC) manufacturers recommend the use of a driver amplifier to achieve the best performance. When a driver amplifier is not used, the conversion speed is severely penalized by the need to meet the settling-time constraint. This paper proposes a simple digital correction method to improve performance (conversion speed and/or accuracy) when the acquisition chain lacks a driver amplifier. It is intended to reduce the cost, size, and power consumption of the conditioning circuit while maintaining acceptable performance. The method is applied to the measurement of the output power delivered by a series resonant inverter for domestic induction heating.
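    The abstract does not describe the correction itself, so the sketch below is only a hypothetical illustration of one common approach to the underlying problem: model the unsettled sample as a first-order exponential settling toward the true value and invert that model digitally. The time constant, acquisition time, and all other values are assumptions, not parameters from the paper.

```python
# Hypothetical sketch: digitally correcting ADC readings taken before the input
# has fully settled, assuming first-order (RC) exponential settling with a known
# time constant. Not the method of the cited paper.
import math

def correct_unsettled_sample(raw_value, previous_value, t_acq, tau):
    """
    Invert v(t) = v_true + (v_prev - v_true) * exp(-t/tau) to estimate v_true
    from a reading taken after an acquisition time t_acq shorter than the time
    needed for full settling.
    """
    alpha = math.exp(-t_acq / tau)          # residual fraction of the old value
    return (raw_value - alpha * previous_value) / (1.0 - alpha)

if __name__ == "__main__":
    # Example: the input really steps from 0 V to 1 V, but the ADC samples after
    # only one time constant, so the raw reading is about 0.632 V.
    tau, t_acq = 1e-6, 1e-6
    raw = 1.0 + (0.0 - 1.0) * math.exp(-t_acq / tau)   # simulated unsettled reading
    print(round(correct_unsettled_sample(raw, previous_value=0.0,
                                         t_acq=t_acq, tau=tau), 6))  # ~1.0
```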

    Review of national and international initiatives on books and book publishers assessment

    This article presents various systems for assessing academic books and/or book publishers in several European countries and two in Latin America. It is structured according to the methodologies used in each system: expert opinion, reviews, holdings in academic libraries, specialization, original selection procedures, citations, and systems integrating different variables. The objective is to offer a panoramic view that evaluators, authors, librarians, and editors can use in decision making. Also included are conclusions about the various assessment systems, their potential, and the optimum conditions for their use in practice.

    Scholarly publishers’ indicators: Prestige, specialization, and review systems of scholarly book publishers

    This work presents the updated version of the public information system SPI (Scholarly Publishers’ Indicators), developed by ÍLIA (research group on scholarly books), which belongs to the Spanish National Research Council. SPI contains three types of indicators about book publishers: prestige according to expert opinion; thematic specialization according to the Dilve (information on Spanish books for sale) classification; and manuscript selection procedures according to each publisher’s answers to a survey. SPI Expanded is also described: an information system that reports each scholarly publisher’s indexation in four international information systems. The methodological specifications for the design of SPI Expanded in each of these dimensions are presented. Finally, its functionalities and its current use as a reference in the assessment of scholarly publishers’ output are detailed.

    Storage-heterogeneity aware task-based programming models to optimize I/O intensive applications

    Task-based programming models have enabled the optimized execution of the computation workloads of applications. These programming models can take advantage of large-scale distributed infrastructures by allowing the parallel and distributed execution of applications in high-level work components called tasks. Nevertheless, in the era of Big Data and Exascale, the amount of data produced by modern scientific applications has already surpassed terabytes and is rapidly increasing, so I/O performance has become the bottleneck to overcome in order to achieve further overall performance improvements. New storage technologies offer higher bandwidth and faster solutions than traditional Parallel File Systems (PFS). Such storage devices are deployed in modern infrastructures to boost I/O performance by offering a fast layer that absorbs the generated data. Therefore, any programming model targeting higher performance needs to manage this heterogeneity and take advantage of it to improve the I/O performance of applications. Towards this goal, we propose in this paper a set of programming model capabilities that we refer to as Storage-Heterogeneity Awareness. These capabilities include: (i) abstracting the heterogeneity of storage systems, and (ii) optimizing I/O performance by supporting dedicated I/O schedulers and an automatic data-flushing technique. The evaluation section of this paper presents the performance results of different applications on the MareNostrum CTE-Power heterogeneous storage cluster. Our experiments demonstrate that a storage-heterogeneity aware programming model can achieve an I/O performance speedup of up to almost 5x and a total-time improvement of 48% compared to the reference PFS-based usage of the execution infrastructure. This work is partially supported by the European Union through the Horizon 2020 research and innovation programme under contract 721865 (EXPERTISE project), by the Spanish Government (PID2019-107255GB), and by the Generalitat de Catalunya (contract 2014-SGR-1051).
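    The abstract names the capabilities but not their API, so the example below is only a hypothetical sketch of the two ideas it mentions: a storage-tier abstraction that places task outputs on a fast node-local device when one exists, and an asynchronous flush of those outputs back to the PFS. All paths, class names, and policies are invented.

```python
# Hypothetical sketch of "storage-heterogeneity awareness": task outputs go to a
# fast node-local tier when available and are flushed to the PFS asynchronously.
# Paths, names, and policies are invented for illustration.
import os
import shutil
import threading
from pathlib import Path

class TieredStorage:
    def __init__(self, fast_dir="/nvme/scratch", pfs_dir="/gpfs/project"):
        self.fast = Path(fast_dir) if Path(fast_dir).exists() else None
        self.pfs = Path(pfs_dir)
        self.pfs.mkdir(parents=True, exist_ok=True)
        self._flushers = []

    def path_for(self, name):
        """Abstract the tier away from the task: it only asks for a path."""
        base = self.fast if self.fast is not None else self.pfs
        return base / name

    def flush_async(self, name):
        """Move a fast-tier file to the PFS in the background."""
        if self.fast is None:
            return                                    # already on the PFS
        worker = threading.Thread(target=shutil.move,
                                  args=(str(self.fast / name), str(self.pfs / name)))
        worker.start()
        self._flushers.append(worker)

    def wait(self):
        for worker in self._flushers:
            worker.join()

def io_intensive_task(storage, task_id):
    out = storage.path_for(f"part-{task_id}.dat")
    out.write_bytes(os.urandom(1 << 20))              # stand-in for task output
    storage.flush_async(out.name)                     # overlap the flush with compute

if __name__ == "__main__":
    Path("/tmp/fast").mkdir(exist_ok=True)            # pretend node-local NVMe mount
    storage = TieredStorage(fast_dir="/tmp/fast", pfs_dir="/tmp/pfs")
    for i in range(4):
        io_intensive_task(storage, i)
    storage.wait()
```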