
    A Taxonomy of Workflow Management Systems for Grid Computing

    With the advent of Grid and application technologies, scientists and engineers are building more and more complex applications to manage and process large data sets and execute scientific experiments on distributed resources. Such application scenarios require means for composing and executing complex workflows. Therefore, many efforts have been made towards the development of workflow management systems for Grid computing. In this paper, we propose a taxonomy that characterizes and classifies various approaches for building and executing workflows on Grids. We also survey several representative Grid workflow systems developed by various projects worldwide to demonstrate the comprehensiveness of the taxonomy. The taxonomy not only highlights the design and engineering similarities and differences of state-of-the-art Grid workflow systems, but also identifies areas that need further research.
    Comment: 29 pages, 15 figures
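
    As a hedged illustration (not from the paper): the core task that every surveyed workflow management system automates is executing a workflow expressed as a directed acyclic graph of dependent tasks. The minimal Python sketch below shows that idea, with hypothetical task names and a stand-in run function in place of real job dispatch.

```python
# Minimal sketch of DAG-based workflow execution (illustrative only; the task
# names and the run() stand-in are hypothetical, not from any surveyed system).
from graphlib import TopologicalSorter

def run(task):
    print(f"executing {task}")

# Map each task to the set of tasks it depends on.
workflow = {
    "stage_data": set(),
    "simulate":   {"stage_data"},
    "analyze":    {"simulate"},
    "archive":    {"analyze", "simulate"},
}

# static_order() yields tasks in a dependency-respecting order; a real Grid
# workflow engine would instead dispatch ready tasks to distributed resources.
for task in TopologicalSorter(workflow).static_order():
    run(task)
```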

    The future of computing beyond Moore's Law

    Moore's Law is a techno-economic model that has enabled the information technology industry to double the performance and functionality of digital electronics roughly every two years within a fixed cost, power and area. Advances in silicon lithography have enabled this exponential miniaturization of electronics, but, as transistors reach atomic scale and fabrication costs continue to rise, the classical technological driver that has underpinned Moore's Law for 50 years is failing and is anticipated to flatten by 2025. This article provides an updated view of what a post-exascale system will look like and the challenges ahead, based on our most recent understanding of technology roadmaps. It also discusses the tapering of historical improvements and how it affects the options available for continuing the scaling of successors to the first exascale machine. Lastly, this article covers the many different opportunities and strategies available to continue computing performance improvements in the absence of historical technology drivers. This article is part of a discussion meeting issue 'Numerical algorithms for high-performance computational science'.

    Virtual Communication Stack: Towards Building Integrated Simulator of Mobile Ad Hoc Network-based Infrastructure for Disaster Response Scenarios

    Responding to disastrous events is a challenging problem because communication infrastructures may be damaged; after a natural disaster, for instance, infrastructures might be entirely destroyed. Different network paradigms have been proposed in the literature to deploy ad hoc networks and cope with the lack of communication. However, all these solutions focus only on the performance of the network itself, without taking into account the specificities and heterogeneity of the components that use it. This stems from the difficulty of integrating models with different levels of abstraction. Consequently, verification and validation of ad hoc protocols cannot guarantee that the different systems will work as expected in operational conditions. The DEVS theory, however, provides mechanisms for integrating models of different natures. This paper proposes an integrated simulation architecture based on DEVS which improves the accuracy of ad hoc infrastructure simulators in the case of disaster response scenarios.
    Comment: Preprint. Unpublished
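
    To make the integration mechanism concrete (a sketch under assumptions, not the paper's architecture): DEVS lets heterogeneous models compose because every atomic model, whatever its level of abstraction, exposes the same small interface: a time-advance function, internal and external transition functions, and an output function. The hypothetical Relay model below forwards a received message after a fixed delay.

```python
# Sketch of the atomic-DEVS interface (illustrative; class and model names
# are hypothetical, not taken from the paper's simulator).
class AtomicDEVS:
    def time_advance(self):               # ta(s): time until next internal event
        raise NotImplementedError
    def internal_transition(self):        # delta_int(s)
        raise NotImplementedError
    def external_transition(self, e, x):  # delta_ext(s, e, x): input x after elapsed e
        raise NotImplementedError
    def output(self):                     # lambda(s): emitted just before delta_int
        raise NotImplementedError

class Relay(AtomicDEVS):
    """Hypothetical node that forwards a received message after a fixed delay."""
    def __init__(self, delay):
        self.delay, self.pending = delay, None
    def time_advance(self):
        return self.delay if self.pending is not None else float("inf")
    def internal_transition(self):
        self.pending = None               # message sent; go passive
    def external_transition(self, e, x):
        self.pending = x                  # schedule forwarding of x
    def output(self):
        return self.pending
```

    A DEVS coordinator drives any such model the same way, which is what allows network, hardware, and behavioral models of different natures to be coupled in one simulation.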

    BrainFrame: A node-level heterogeneous accelerator platform for neuron simulations

    Objective: The advent of High-Performance Computing (HPC) in recent years has led to its increasing use in brain study through computational models. The scale and complexity of such models are constantly increasing, leading to challenging computational requirements. Even though modern HPC platforms can often deal with such challenges, the vast diversity of the modeling field does not permit a single acceleration (or homogeneous) platform to effectively address the complete array of modeling requirements. Approach: In this paper we propose and build BrainFrame, a heterogeneous acceleration platform incorporating three distinct acceleration technologies: a Dataflow Engine, a Xeon Phi and a GP-GPU. The PyNN framework is also integrated into the platform. As a challenging proof of concept, we analyze the performance of BrainFrame on different instances of a state-of-the-art neuron model, modeling the Inferior-Olivary Nucleus using a biophysically-meaningful, extended Hodgkin-Huxley representation. The model instances take into account not only the neuronal-network dimensions but also different network-connectivity circumstances that can drastically change application workload characteristics. Main results: Combining the three HPC technologies demonstrated that BrainFrame is better able to cope with the modeling diversity encountered. Our performance analysis clearly shows that the model instance directly affects performance and that all three technologies are required to cover all the model use cases.
    Comment: 16 pages, 18 figures, 5 tables
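
    For a feel of the per-neuron arithmetic that dominates such workloads, here is a sketch of one forward-Euler step of the classical Hodgkin-Huxley equations with the standard squid-axon parameters. This is the base formalism only; the paper's extended inferior-olive model adds further compartments, currents, and gap-junction coupling on top of this.

```python
# One forward-Euler step of the classical Hodgkin-Huxley model (sketch only;
# standard textbook parameters, not the paper's extended inferior-olive model).
import math

gNa, gK, gL = 120.0, 36.0, 0.3       # maximal conductances, mS/cm^2
ENa, EK, EL = 50.0, -77.0, -54.387   # reversal potentials, mV
C = 1.0                              # membrane capacitance, uF/cm^2

def hh_step(V, m, h, n, I_ext, dt=0.01):
    # Gating-variable rate functions (note: they have removable singularities
    # at V = -40 and V = -55 mV; production code guards those points).
    a_m = 0.1 * (V + 40) / (1 - math.exp(-(V + 40) / 10))
    b_m = 4.0 * math.exp(-(V + 65) / 18)
    a_h = 0.07 * math.exp(-(V + 65) / 20)
    b_h = 1.0 / (1 + math.exp(-(V + 35) / 10))
    a_n = 0.01 * (V + 55) / (1 - math.exp(-(V + 55) / 10))
    b_n = 0.125 * math.exp(-(V + 65) / 80)
    # Ionic currents and state update.
    I_ion = gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK) + gL * (V - EL)
    V += dt * (I_ext - I_ion) / C
    m += dt * (a_m * (1 - m) - b_m * m)
    h += dt * (a_h * (1 - h) - b_h * h)
    n += dt * (a_n * (1 - n) - b_n * n)
    return V, m, h, n

V, m, h, n = -65.0, 0.05, 0.6, 0.317     # approximate resting state
for _ in range(1000):                    # 10 ms of activity at dt = 0.01 ms
    V, m, h, n = hh_step(V, m, h, n, I_ext=10.0)
```

    Repeating these few dozen transcendental operations per neuron per timestep, across thousands of coupled cells, is the arithmetic pattern the three accelerator technologies are each suited to in different connectivity regimes.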

    SystemC Model Generation for Realistic Simulation of Networked Embedded Systems

    Verification and design-space exploration of today's embedded systems require the simulation of heterogeneous aspects of the system, i.e., software, hardware, and communications. This work shows the use of SystemC to simulate a model-driven specification of the behavior of a networked embedded system together with a complete network scenario consisting of the radio channel, the IEEE 802.15.4 protocol for wireless personal area networks, and concurrent traffic sharing the medium. The paper describes the main issues addressed in generating SystemC modules from Matlab/Stateflow descriptions and in integrating them into a complete network scenario. Simulation results on a healthcare wireless sensor network show the validity of the approach.
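
    To illustrate the pattern such generated modules follow (sketched here in Python rather than SystemC/C++, with hypothetical names not taken from the paper's generator): code generation from a Stateflow chart yields an event-driven process that advances a state machine one transition per triggering event, while the surrounding network scenario supplies those events.

```python
# Sketch of a Stateflow-style state machine behind an event-driven process
# (illustrative only; a real generated module would be a SystemC SC_MODULE).
class SensorNode:
    """Hypothetical two-state chart: IDLE -> TX on 'sample', TX -> IDLE on 'ack'."""
    def __init__(self):
        self.state = "IDLE"

    def on_event(self, event):            # body of the generated process
        if self.state == "IDLE" and event == "sample":
            self.state = "TX"             # start transmitting a measurement
        elif self.state == "TX" and event == "ack":
            self.state = "IDLE"           # medium access completed

node = SensorNode()
for ev in ["sample", "ack", "sample"]:    # events delivered by the simulator
    node.on_event(ev)
    print(ev, "->", node.state)
```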