4,872 research outputs found

    Modelling and simulation framework for reactive transport of organic contaminants in bed-sediments using a pure Java object-oriented paradigm

    Get PDF
    Numerical modelling and simulation of organic contaminant reactive transport in the environment is increasingly relied upon for a wide range of tasks associated with risk-based decision-making, such as prediction of contaminant profiles, optimisation of remediation methods, and monitoring of changes resulting from an implemented remediation scheme. The lack of integration of multiple mechanistic models into a single modelling framework, however, has prevented the field of reactive transport modelling in bed-sediments from developing a cohesive understanding of contaminant fate and behaviour in the aquatic sediment environment. This paper will investigate the problems involved in the model integration process, discuss modelling and software development approaches, and present preliminary results from the use of CORETRANS, a predictive modelling framework that simulates 1-dimensional organic contaminant reaction and transport in bed-sediments.
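
    The transport process the abstract refers to is essentially a one-dimensional advection-diffusion-reaction balance through the sediment column. CORETRANS itself is not reproduced here; the following is a minimal sketch (written in Python rather than the framework's Java) assuming a single dissolved species with a constant diffusion coefficient D, burial velocity v, and first-order decay rate k, all of which are hypothetical placeholder values.

        # Minimal illustrative sketch (not CORETRANS) of 1-D reactive transport of a
        # dissolved organic contaminant in bed-sediment pore water: explicit finite
        # differences for dc/dt = D d2c/dz2 - v dc/dz - k c with a fixed concentration
        # at the sediment-water interface and zero gradient at depth.
        import numpy as np

        def simulate_profile(n_cells=100, depth=0.5, t_end=3.0e7, dt=1.0e3,
                             D=1.0e-9, v=1.0e-8, k=1.0e-8, c_surface=1.0):
            """All parameter values are hypothetical placeholders (SI units)."""
            dz = depth / n_cells
            c = np.zeros(n_cells)                            # initial profile: clean sediment
            for _ in range(int(t_end / dt)):
                c_up = np.concatenate(([c_surface], c[:-1]))  # overlying-water boundary
                c_dn = np.concatenate((c[1:], [c[-1]]))       # zero-gradient at depth
                diffusion = D * (c_up - 2.0 * c + c_dn) / dz**2
                advection = -v * (c - c_up) / dz              # upwind difference
                reaction = -k * c
                c = c + dt * (diffusion + advection + reaction)
            return c

        profile = simulate_profile()
        print(profile[:5])   # concentrations in the top few sediment cells

    Real bed-sediment models couple further processes such as sorption, porosity gradients, bioturbation, and multiple interacting species, which is precisely the integration burden the paper discusses.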

    On Modelling and Analysis of Dynamic Reconfiguration of Dependable Real-Time Systems

    Full text link
    This paper motivates the need for a formalism for the modelling and analysis of dynamic reconfiguration of dependable real-time systems. We present requirements that the formalism must meet, and use these to evaluate well-established formalisms and two process algebras that we have been developing, namely, Webpi and CCSdp. A simple case study is developed to illustrate the modelling power of these two formalisms. The paper shows how Webpi and CCSdp represent a significant step forward in modelling adaptive and dependable real-time systems.
    Comment: Presented and published at DEPEND 201

    Many-Task Computing and Blue Waters

    Full text link
    This report discusses many-task computing (MTC) generically and in the context of the proposed Blue Waters system, which is planned to be the largest NSF-funded supercomputer when it begins production use in 2012. The aim of this report is to inform the BW project about MTC, including understanding aspects of MTC applications that can be used to characterize the domain and understanding the implications of these aspects for middleware and policies. Many MTC applications do not neatly fit the stereotypes of high-performance computing (HPC) or high-throughput computing (HTC) applications. Like HTC applications, MTC applications are by definition structured as graphs of discrete tasks, with explicit input and output dependencies forming the graph edges. However, MTC applications have significant features that distinguish them from typical HTC applications. In particular, different engineering constraints for hardware and software must be met in order to support these applications. HTC applications have traditionally run on platforms such as grids and clusters, through either workflow systems or parallel programming systems. MTC applications, in contrast, will often demand a short time to solution, may be communication intensive or data intensive, and may comprise very short tasks. Therefore, hardware and software for MTC must be engineered to support the additional communication and I/O and must minimize task dispatch overheads. The hardware of large-scale HPC systems, with its high degree of parallelism and support for intensive communication, is well suited for MTC applications. However, HPC systems often lack a dynamic resource-provisioning feature, are not ideal for task communication via the file system, and have an I/O system that is not optimized for MTC-style applications. Hence, additional software support is likely to be required to gain full benefit from the HPC hardware.
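
    To make the task-graph notion concrete, the sketch below (a hypothetical illustration, not Blue Waters middleware) represents an MTC-style application as discrete tasks whose declared input and output files define the dependency edges, and derives a dispatch order in which every task runs only after its inputs have been produced; the task names and commands are invented.

        # Illustrative sketch of an MTC-style task graph: explicit input/output files
        # form the edges, and tasks are dispatched in topological (dependency) order.
        from collections import defaultdict, deque

        class Task:
            def __init__(self, name, inputs, outputs, command):
                self.name, self.inputs, self.outputs, self.command = name, inputs, outputs, command

        def dispatch_order(tasks):
            """Order tasks so each one runs only after its input files exist."""
            producer = {f: t.name for t in tasks for f in t.outputs}
            deps = {t.name: {producer[f] for f in t.inputs if f in producer} for t in tasks}
            indeg = {name: len(d) for name, d in deps.items()}
            children = defaultdict(set)
            for name, d in deps.items():
                for parent in d:
                    children[parent].add(name)
            ready = deque(name for name, d in indeg.items() if d == 0)
            order = []
            while ready:                      # Kahn's algorithm over the file-dependency DAG
                name = ready.popleft()
                order.append(name)
                for child in children[name]:
                    indeg[child] -= 1
                    if indeg[child] == 0:
                        ready.append(child)
            return order

        tasks = [
            Task("split",  [],                 ["a.in", "b.in"], "split input"),
            Task("sim_a",  ["a.in"],           ["a.out"],        "run simulation A"),
            Task("sim_b",  ["b.in"],           ["b.out"],        "run simulation B"),
            Task("reduce", ["a.out", "b.out"], ["final"],        "combine results"),
        ]
        print(dispatch_order(tasks))   # e.g. ['split', 'sim_a', 'sim_b', 'reduce']

    The report's point is that when such graphs contain very many short tasks, the dispatch step and the file-based communication shown here become the bottleneck, which is why MTC needs dedicated middleware support on HPC hardware.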

    Enhancing Energy Production with Exascale HPC Methods

    Get PDF
    High Performance Computing (HPC) resources have become the key enabler for achieving more ambitious challenges in many disciplines. In this next step, an explosion in available parallelism and the use of special-purpose processors are crucial. With such a goal, the HPC4E project applies new exascale HPC techniques to energy industry simulations, customizing them if necessary, and going beyond the state of the art in the required HPC exascale simulations for different energy sources. In this paper, a general overview of these methods is presented as well as some specific preliminary results.
    The research leading to these results has received funding from the European Union's Horizon 2020 Programme (2014-2020) under the HPC4E Project (www.hpc4e.eu), grant agreement n° 689772, the Spanish Ministry of Economy and Competitiveness under the CODEC2 project (TIN2015-63562-R), and from the Brazilian Ministry of Science, Technology and Innovation through Rede Nacional de Pesquisa (RNP). Computer time on the Endeavour cluster is provided by the Intel Corporation, which enabled us to obtain the presented experimental results in uncertainty quantification in seismic imaging.

    Fast semidefinite programming with feedforward neural networks

    Full text link
    Semidefinite programs are an important class of optimization tasks, often used in time-sensitive applications. Though they are solvable in polynomial time, in practice they can be too slow to be used in online, i.e. real-time, applications. Here we propose to solve feasibility semidefinite programs using artificial neural networks. Given the optimization constraints as an input, a neural network outputs values for the optimization parameters such that the constraints are satisfied, both for the primal and the dual formulations of the task. We train the network without having to exactly solve the semidefinite program even once, thus avoiding the possibly time-consuming task of having to generate many training samples with conventional solvers. The neural network method is only inconclusive if both the primal and dual models fail to provide feasible solutions. Otherwise we always obtain a certificate, which guarantees that false positives are excluded. We examine the performance of the method on a hierarchy of quantum information tasks, the Navascués-Pironio-Acín hierarchy applied to the Bell scenario. We demonstrate that the trained neural network gives decent accuracy, while showing an orders-of-magnitude increase in speed compared to a traditional solver.
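
    As a rough illustration of the general idea (this is not the paper's architecture; the layer sizes, problem dimensions, and training loop below are invented for the sketch), a network can map the constraint data of a feasibility SDP to a Cholesky factor, so the returned matrix is positive semidefinite by construction, and be trained purely on the residual of the linear constraints, so no SDP solver is ever called during training.

        # Toy sketch: find X >= 0 with tr(A_k X) = b_k. The network outputs the entries
        # of a lower-triangular factor L, so X = L L^T is PSD by construction; the loss
        # is only the squared violation of the equality constraints.
        import torch

        n, m = 4, 3                                   # matrix size and number of constraints
        tril = torch.tril_indices(n, n)

        class FeasibilityNet(torch.nn.Module):
            def __init__(self, hidden=128):
                super().__init__()
                in_dim = m * n * n + m                # flattened (A_1..A_m, b)
                out_dim = tril.shape[1]               # lower-triangular entries of L
                self.net = torch.nn.Sequential(
                    torch.nn.Linear(in_dim, hidden), torch.nn.ReLU(),
                    torch.nn.Linear(hidden, out_dim))

            def forward(self, A, b):
                z = torch.cat([A.flatten(1), b], dim=1)
                L = torch.zeros(A.shape[0], n, n)
                L[:, tril[0], tril[1]] = self.net(z)
                return L @ L.transpose(1, 2)          # PSD by construction

        def violation(X, A, b):
            lhs = torch.einsum('bkij,bji->bk', A, X)  # tr(A_k X) for each constraint
            return ((lhs - b) ** 2).sum(dim=1)

        model = FeasibilityNet()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for step in range(200):                       # random instances, feasible by construction
            X_true = torch.randn(32, n, n); X_true = X_true @ X_true.transpose(1, 2)
            A = torch.randn(32, m, n, n); A = 0.5 * (A + A.transpose(2, 3))
            b = torch.einsum('bkij,bji->bk', A, X_true)
            loss = violation(model(A, b), A, b).mean()
            opt.zero_grad(); loss.backward(); opt.step()

    One simple acceptance rule for such a sketch would be to report feasibility only when the residual of the returned matrix falls below a tolerance and to declare the result inconclusive otherwise, in the spirit of the primal/dual certificate scheme the abstract describes.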