9 research outputs found

    A workflow runtime environment for manycore parallel architectures

    We introduce a new Manycore Workflow Runtime Environment (MWRE) to efficiently enact traditional scientific workflows on modern manycore computing architectures. MWRE is compiler-based and translates workflows specified in the XML-based Interoperable Workflow Intermediate Representation (IWIR) into an equivalent C++ program. This program efficiently enacts the workflow as a stand-alone executable by means of a new callback mechanism that resolves dependencies, transfers data, and handles composite activities. Furthermore, a core feature of MWRE is explicit support for full-ahead scheduling and enactment. Experimental results on a number of real-world workflows demonstrate that MWRE clearly outperforms existing Java-based workflow engines designed for distributed (Grid or Cloud) computing infrastructures in terms of enactment time, is generally better than an existing script-based engine for manycore architectures (Swift), and sometimes even comes close to an artificial baseline implementation of the workflows in the standard OpenMP language for shared-memory systems. Experimental results also show that full-ahead scheduling with MWRE using a state-of-the-art heuristic can improve workflow performance by up to 40%.
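The callback mechanism described in the abstract can be sketched in C++ as follows. This is a hypothetical illustration, not MWRE's actual generated code: the names `Activity`, `CallbackEngine`, `add`, `depend`, and `run` are invented for this sketch. The idea it shows is that each activity keeps a counter of unresolved predecessors; when an activity completes, a callback decrements the counters of its successors and enqueues any that become ready.

```cpp
#include <functional>
#include <queue>
#include <string>
#include <utility>
#include <vector>

// Hypothetical sketch of callback-driven workflow enactment.
struct Activity {
    std::string name;
    int unresolved = 0;              // predecessors not yet finished
    std::vector<int> successors;     // indices of dependent activities
    std::function<void()> work;      // the activity's payload
};

class CallbackEngine {
public:
    int add(std::string name, std::function<void()> work) {
        acts_.push_back({std::move(name), 0, {}, std::move(work)});
        return static_cast<int>(acts_.size()) - 1;
    }
    void depend(int from, int to) {  // 'to' waits for 'from'
        acts_[from].successors.push_back(to);
        ++acts_[to].unresolved;
    }
    std::vector<std::string> run() { // enact in dependency order
        std::vector<std::string> trace;
        std::queue<int> ready;
        for (int i = 0; i < static_cast<int>(acts_.size()); ++i)
            if (acts_[i].unresolved == 0) ready.push(i);
        while (!ready.empty()) {
            int id = ready.front();
            ready.pop();
            acts_[id].work();
            trace.push_back(acts_[id].name);
            // Callback: resolve dependencies of successor activities.
            for (int s : acts_[id].successors)
                if (--acts_[s].unresolved == 0) ready.push(s);
        }
        return trace;
    }
private:
    std::vector<Activity> acts_;
};
```

For a diamond-shaped workflow A before {B, C} before D, the engine enacts A first and D last, with B and C in between, without any central scheduler polling for ready work.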

    Bringing scientific workflows to Amazon SWF

    In response to the ever-increasing resource needs of scientific applications, Cloud computing emerged as an alternative on-demand and cost-effective resource provisioning approach. In this context, Cloud providers have recognised the importance of workflow applications to science and provide their own native solutions, such as the Amazon Simple Workflow Service (SWF). Nevertheless, an important downside of SWF is its incompatibility with existing workflow systems and its lack of means for reusing scientific legacy code. Similarly, existing workflow middleware and applications require non-trivial extensions to take advantage of Cloud resources. We present in this paper a software engineering solution that allows the scientific workflow community to access the Amazon Cloud through one single front-end converter, and propose a legacy wrapper service for executing legacy code using SWF. Empirical results using a real-world scientific workflow demonstrate that our automatically generated SWF application performs almost as fast as a native manually-optimised version, and outperforms other workflow middleware systems using the Amazon Cloud.
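The legacy-wrapper idea can be illustrated with a minimal sketch. This is not the paper's actual wrapper service (which targets Amazon SWF); `LegacyWrapper` and `invoke` are invented names, and the sketch shows only the core pattern of exposing an unmodified command-line tool as a callable activity that reports success or failure.

```cpp
#include <cstdlib>
#include <string>

// Illustrative sketch only: wrapping an unmodified legacy executable so a
// workflow engine can call it like any other activity. A production wrapper
// would additionally stage input files to the worker and collect outputs;
// here we only build the command line and run it.
struct LegacyWrapper {
    std::string binary;  // path to the legacy executable
    bool invoke(const std::string& args) const {
        int status = std::system((binary + " " + args).c_str());
        return status == 0;  // 0 conventionally signals success
    }
};
```

The design point is that the legacy code itself stays untouched: only this thin adapter needs to speak the workflow system's activity protocol.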

    Fine-Grain Interoperability of Scientific Workflows in Distributed Computing Infrastructures

    Today there exists a wide variety of scientific workflow management systems, each designed to fulfill the needs of a certain scientific community. Unfortunately, once a workflow application has been designed in one particular system, it becomes very hard to share it with users working with different systems. Portability of workflows and interoperability between current systems barely exist. In this work, we present the fine-grained interoperability solution proposed in the SHIWA European project that brings together four representative European workflow systems: ASKALON, MOTEUR, WS-PGRADE, and Triana. The proposed interoperability is realised at two levels of abstraction: abstract and concrete. At the abstract level, we propose a generic Interoperable Workflow Intermediate Representation (IWIR) that can be used as a common bridge for translating workflows between different languages, independent of the underlying distributed computing infrastructure. At the concrete level, we propose a bundling technique that aggregates the abstract IWIR representation and concrete task representations to enable workflow instantiation, execution and scheduling. We illustrate case studies using two real-world workflow applications designed in a native environment and then translated and executed by a foreign workflow system in a foreign distributed computing infrastructure. © 2013 Springer Science+Business Media Dordrecht

    Scientific workflows on many-cores

    No full text
    Modern shared-memory heterogeneous many-core computers have become increasingly complex to program. Traditional programming paradigms for shared-memory computers have their roots in symmetric multi-processing and, therefore, struggle to fully exploit the performance of today's heterogeneous systems. The scientific workflow paradigm, originating from distributed computing infrastructures like Grids and Clouds, today represents a promising alternative for the development and execution of scientific applications on shared-memory heterogeneous many-core computers. However, distributed computing architectures are inherently different from shared-memory many-core systems, making it necessary to research and design a new kind of workflow system able to compete with existing shared-memory programming paradigms. The Many-core Workflow Runtime Environment (MWRE) presented in this thesis is a scientific workflow system specifically designed to efficiently enact traditional scientific workflows on modern shared-memory heterogeneous many-core computing architectures. MWRE is compiler-based and translates workflows specified in the XML-based Interoperable Workflow Intermediate Representation (IWIR) into an equivalent C++ program. This program efficiently enacts the workflow as a stand-alone executable by means of a new callback mechanism that resolves dependencies, transfers data, and handles composite activities. Furthermore, a core feature of MWRE is explicit support for full-ahead scheduling and enactment. Experimental results show that MWRE performs better than traditional scientific workflow systems on many-core computers and is able to compete with traditional shared-memory programming paradigms like OpenMP. Moreover, in this thesis we also demonstrate that full-ahead scheduling using state-of-the-art heuristics and compiler transformations can significantly improve the performance of scientific workflow applications on many-core computers.
    by Matthias Janetschek. University of Innsbruck, Dissertation, 2018.

    Perspectives on Socially Intelligent Conversational Agents

    No full text
    Digital assistants continue to proliferate. Marked by an uptake of ever more human-like conversational abilities, these technologies are moving increasingly away from their role as voice-operated task enablers and are becoming companion-like artifacts whose interaction style is rooted in anthropomorphic behavior. One of the required characteristics in this shift from a utilitarian tool to an emotional character is the adoption of social intelligence. Although past research has recognized this need, more multi-disciplinary investigation should be devoted to the exploration of relevant traits and their potential embedding in future agent technology. Aiming to lay a foundation for further developments, we report on the results of a Delphi study highlighting the opinions of 21 multi-disciplinary domain experts. Results exhibit 14 distinctive characteristics of social intelligence, grouped into different levels of consensus, maturity, and abstraction, which may be considered a relevant basis for defining and subsequently developing socially intelligent conversational agents.

    Programming models and runtimes

    No full text
    Several million execution flows will be executed in ultrascale computing systems (UCS), and the task for the programmer of understanding their coherency, and for the runtime of coordinating them, is unfathomable. Moreover, given the large scale of UCS and its impact on reliability, the current static point of view is no longer sufficient. A runtime cannot afford to restart an application because of the failure of a single node, as, statistically, several nodes will fail every day. Classical management of these failures by programmers using checkpoint/restart is also too limited, due to its overhead at such a scale. The article explores programming models and runtimes required to facilitate the task of scaling and extracting performance on continuously evolving platforms, while providing resilience and fault-tolerance mechanisms to tackle the increasing probability of failures throughout the whole software stack.
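To see why checkpoint/restart becomes too limited at scale, a classic first-order estimate is instructive. Young's approximation (a standard result in the fault-tolerance literature, not taken from this article) puts the checkpoint interval that minimises wasted time at roughly sqrt(2 x checkpoint cost x MTBF); with N independently failing nodes, the system-level MTBF shrinks to the node MTBF divided by N, so checkpoints must become ever more frequent and their overhead grows with machine size.

```cpp
#include <cmath>

// Back-of-the-envelope sketch using Young's classic approximation.
// checkpoint_cost_s: time to write one checkpoint, in seconds.
// node_mtbf_s:      mean time between failures of a single node, in seconds.
// num_nodes:        nodes failing independently; system MTBF = node MTBF / N.
// Returns the checkpoint interval (seconds) that minimises expected waste.
double optimal_checkpoint_interval(double checkpoint_cost_s,
                                   double node_mtbf_s,
                                   double num_nodes) {
    double system_mtbf = node_mtbf_s / num_nodes;
    return std::sqrt(2.0 * checkpoint_cost_s * system_mtbf);
}
```

Because the interval scales with 1/sqrt(N), growing a machine from a thousand to a hundred thousand nodes forces checkpoints ten times as often, which is the scaling pressure the abstract points to.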

    Effect of Hospital and Surgeon Case Volume on Perioperative Quality of Care and Short-term Outcomes After Radical Cystectomy for Muscle-invasive Bladder Cancer: Results From a European Tertiary Care Center Cohort

    No full text
    This prospective multicenter study analyzed the effect of hospital and surgeon case volume on perioperative quality of care and short-term complications and mortality in 479 patients undergoing radical cystectomy for bladder cancer. We found that, at European tertiary care centers, hospital volume may be at least as important a factor in postoperative complications as surgeon case volume itself.
    Background: Case volume has been suggested to affect surgical outcomes across a range of procedures. We aimed to delineate the relationship between case volume and surgical outcomes and quality of care criteria of radical cystectomy (RC) in a prospectively collected multicenter cohort.
    Patients and Methods: This was a retrospective analysis of a prospectively collected European cohort of patients with bladder cancer treated with RC in 2011. We relied on 479 and 459 eligible patients with available information on hospital case volume and surgeon case volume, respectively. Hospital case volume was divided into tertiles, and surgeon volume was dichotomized according to the median annual number of surgeries performed. Binomial generalized estimating equations controlling for potential known confounders and inter-hospital clustering assessed the independent association of case volume with short-term complications and mortality, as well as the fulfillment of quality of care criteria.
    Results: The high-volume threshold for hospitals was 45 RCs annually and, for high-volume surgeons, more than 15 cases annually. In adjusted analyses, high hospital volume remained an independent predictor of fewer 30-day (odds ratio, 0.34; P = .002) and 60- to 90-day (odds ratio, 0.41; P = .03) major complications, but not of fulfilling quality of care criteria or of mortality. No difference between surgeon volume groups was noted for complications, quality of care criteria, or mortality after adjustments.
    Conclusion: The coordination of care at high-volume hospitals may be as important a factor in postoperative outcomes as surgeon case volume in RC. This points to organizational elements in high-volume hospitals that enable them to react more appropriately to adverse events after surgery. (C) 2017 Elsevier Inc. All rights reserved.

    Prospective Observational Study of Pazopanib in Patients with Advanced Renal Cell Carcinoma (PRINCIPAL Study)

    No full text