
    Engineering simulations for cancer systems biology

    Computer simulation can be used to inform in vivo and in vitro experimentation, enabling rapid, low-cost hypothesis generation and directing experimental design in order to test those hypotheses. In this way, in silico models become a scientific instrument for investigation, and so should be developed to high standards, be carefully calibrated and have their findings presented in such a way that they may be reproduced. Here, we outline a framework that supports developing simulations as scientific instruments, and we select cancer systems biology as an exemplar domain, with a particular focus on cellular signalling models. We consider the challenges of lack of data, incomplete knowledge and modelling in the context of a rapidly changing knowledge base. Our framework comprises a process to clearly separate scientific and engineering concerns in model and simulation development, and an argumentation approach to documenting models that provides a rigorous way of recording assumptions and knowledge gaps. We propose interactive, dynamic visualisation tools to enable the biological community to interact with cellular signalling models directly for experimental design. There is a mismatch in scale between these cellular models and the tissue structures that are affected by tumours, and bridging this gap requires substantial computational resources. We present concurrent programming as a technology to link scales without losing important details through model simplification. We discuss the value of combining this technology, interactive visualisation, argumentation and model separation to support the development of multi-scale models that represent biologically plausible cells arranged in biologically plausible structures, modelling cell behaviour, interactions and response to therapeutic interventions.
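
    As an illustration of the concurrent-programming approach mentioned above, the sketch below (a minimal Python example, not taken from the paper) treats cells as independently running workers that exchange signalling events through a shared queue; the cell count, signal names and behaviour are hypothetical.

        # Minimal sketch: cells as concurrent workers exchanging signalling events.
        # Signal names and behaviour are illustrative assumptions, not the paper's model.
        import queue
        import random
        import threading

        signals = queue.Queue()

        def cell(cell_id, steps=5):
            """Each cell runs independently and emits signalling events."""
            for _ in range(steps):
                signals.put((cell_id, random.choice(["proliferate", "apoptose", "quiesce"])))

        def tissue(n_cells=8):
            """Tissue-scale view: run all cells concurrently, then collect their events."""
            workers = [threading.Thread(target=cell, args=(i,)) for i in range(n_cells)]
            for w in workers:
                w.start()
            for w in workers:
                w.join()
            return [signals.get() for _ in range(signals.qsize())]

        if __name__ == "__main__":
            print(tissue()[:5])

    A process-oriented design with per-cell channels would be closer to the CSP style that concurrent multi-scale frameworks often draw on; the shared queue here is only a convenient stand-in.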

    Simulating whole supercomputer applications

    Architecture simulation tools are extremely useful not only to predict the performance of future system designs, but also to analyze and improve the performance of software running on well-known architectures. However, since power and complexity issues stopped the progress of single-thread performance, simulation speed no longer scales with technology: systems get larger and faster, but simulators do not get any faster. Detailed simulation of full-scale applications running on large clusters with hundreds or thousands of processors is not feasible. In this paper we present a methodology that allows detailed simulation of large-scale MPI applications running on systems with thousands of processors at low resource cost. Our methodology allows detailed processor simulation, from the memory and cache hierarchy down to the functional units and the pipeline structure. This feature enables software performance analysis beyond what performance counters would allow. In addition, it enables performance prediction targeting non-existent architectures and systems, that is, systems for which no performance data can be used as a reference. For example, detailed analysis of the weather forecasting application WRF reveals that it is highly optimized for cache locality and is strongly compute bound, with faster functional units having the greatest impact on its performance. Also, analysis of next-generation CMP clusters shows that performance may start to decline beyond 8 processors per chip due to shared resource contention, regardless of the benefits of through-memory communication.
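
    The kind of trace-driven replay described above can be illustrated with a small, hypothetical sketch (not the paper's simulator): compute bursts recorded in an MPI trace are rescaled by a target-CPU factor, and messages are charged with a simple latency/bandwidth model. The trace format, CPU factor and network parameters are assumptions for illustration.

        # Illustrative trace-driven replay: rescale compute bursts by a hypothetical CPU
        # speed-up and charge messages with a latency/bandwidth model.
        def replay(trace, cpu_speedup=1.5, latency=2e-6, bandwidth=5e9):
            """trace: list of ('compute', seconds) or ('send', bytes) events for one rank."""
            t = 0.0
            for kind, value in trace:
                if kind == "compute":
                    t += value / cpu_speedup          # faster functional units shrink compute time
                elif kind == "send":
                    t += latency + value / bandwidth  # simple point-to-point communication cost
            return t

        example_trace = [("compute", 0.010), ("send", 1_000_000), ("compute", 0.004)]
        print(f"predicted time: {replay(example_trace):.6f} s")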

    A grid-enabled problem solving environment for parallel computational engineering design

    This paper describes the development and application of a piece of engineering software that provides a problem-solving environment (PSE) capable of launching, and interfacing with, computational jobs executing on remote resources on a computational grid. In particular, it is demonstrated how a complex, serial, engineering optimisation code may be efficiently parallelised, grid-enabled and embedded within a PSE. The environment is highly flexible, allowing remote users from different sites to collaborate, and permitting computational tasks to be executed in parallel across multiple grid resources, each of which may be a parallel architecture. A full working prototype has been built and successfully applied to a computationally demanding engineering optimisation problem. This particular problem stems from elastohydrodynamic lubrication and involves optimising the computational model for a lubricant based on the match between simulation results and experimentally observed data.
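
    To make the launch-and-monitor role of such a PSE concrete, the following hypothetical sketch submits a job to a remote grid resource over SSH and polls until it completes; the host name, job script and scheduler commands (qsub/qstat) are illustrative assumptions, not the system described in the paper.

        # Hypothetical sketch of a PSE-style job launcher: submit a batch job to a remote
        # resource over SSH and poll the scheduler until the job leaves the queue.
        import subprocess
        import time

        HOST = "grid.example.org"  # placeholder remote resource

        def submit(job_script="optimise.pbs"):
            out = subprocess.run(["ssh", HOST, "qsub", job_script],
                                 capture_output=True, text=True, check=True)
            return out.stdout.strip()  # job identifier returned by the scheduler

        def wait_for(job_id, poll_seconds=30):
            while True:
                status = subprocess.run(["ssh", HOST, "qstat", job_id],
                                        capture_output=True, text=True)
                if status.returncode != 0:  # job no longer queued or running
                    return
                time.sleep(poll_seconds)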

    Prediction of the impact of network switch utilization on application performance via active measurement

    Although one of the key characteristics of High Performance Computing (HPC) infrastructures is their fast interconnecting networks, the increasingly large computational capacity of HPC nodes and the subsequent growth of data exchanges between them constitute a potential performance bottleneck. To achieve high performance in parallel executions despite network limitations, application developers require tools to measure their codes’ network utilization and to correlate the network’s communication capacity with the performance of their applications. This paper presents a new methodology to measure and understand network behavior. The approach is based on two different techniques that inject extra network communication. The first technique aims to measure the fraction of the network that is utilized by a software component (an application or an individual task) to determine the existence and severity of network contention. The second injects large amounts of network traffic to study how applications behave on less capable or fully utilized networks. The measurements obtained by these techniques are combined to predict the performance slowdown suffered by a particular software component when it shares the network with others. Predictions are obtained by considering several training sets that use raw data from the two measurement techniques. The sensitivity to the training set size is evaluated by considering 12 different scenarios. Our results find the optimum training set size to be around 200 training points. When optimal data sets are used, the proposed methodology provides predictions with an average error of 9.6% across 36 scenarios. With the support of the Secretary for Universities and Research of the Ministry of Economy and Knowledge of the Government of Catalonia and the Cofund programme of the Marie Curie Actions of the 7th R&D Framework Programme of the European Union (Expedient 2013BP_B00243). The research leading to these results has received funding from the European Research Council under the European Union’s 7th FP (FP/2007-2013) / ERC GA n. 321253. Work partially supported by the Spanish Ministry of Science and Innovation (TIN2012-34557).
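
    The prediction step described above amounts to fitting a model from the two kinds of measurements to the observed slowdown. The sketch below shows one plausible realisation with an ordinary least-squares fit over a synthetic training set of about 200 points; the features and data are illustrative assumptions, not the paper's training sets.

        # Illustrative slowdown predictor: least-squares fit from a task's network
        # utilisation and the injected background traffic to its slowdown.
        # Synthetic data stand in for real training points.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 200                                # roughly the optimum training set size reported
        util = rng.uniform(0.1, 0.9, n)        # fraction of the network used by the task
        background = rng.uniform(0.0, 0.8, n)  # injected competing traffic
        slowdown = 1.0 + 1.8 * util * background + rng.normal(0, 0.03, n)  # synthetic truth

        X = np.column_stack([np.ones(n), util, background, util * background])
        coeffs, *_ = np.linalg.lstsq(X, slowdown, rcond=None)

        def predict(u, b):
            return coeffs @ np.array([1.0, u, b, u * b])

        print(f"predicted slowdown at util=0.6, background=0.5: {predict(0.6, 0.5):.2f}x")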

    Development of CFD Thermal Hydraulics and Neutron Kinetics Coupling Methodologies for the Prediction of Local Safety Parameters for Light Water Reactors

    This dissertation contributes to the development of high-fidelity coupled neutron kinetic and thermal hydraulic simulation tools with high resolution of the spatial discretization of the involved domains for the analysis of Light Water Reactor transient scenarios.

    Spectral-Element and Adjoint Methods in Seismology

    We provide an introduction to the use of the spectral-element method (SEM) in seismology. Following a brief review of the basic equations that govern seismic wave propagation, we discuss in some detail how these equations may be solved numerically based upon the SEM to address the forward problem in seismology. Examples of synthetic seismograms calculated based upon the SEM are compared to data recorded by the Global Seismographic Network. Finally, we discuss the challenge of using the remaining differences between the data and the synthetic seismograms to constrain better Earth models and source descriptions. This leads naturally to adjoint methods, which provide a practical approach to this formidable computational challenge and enable seismologists to tackle the inverse problem.
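
    For reference, the governing equation alluded to above is the elastic wave equation, whose weak form is the starting point for a spectral-element discretisation. The statement below is the standard textbook form rather than a transcription from this particular article.

        % Strong form: momentum equation for the displacement field s in a medium of
        % density rho, with stress T related to the displacement gradient by the
        % elastic tensor c.
        \rho\,\partial_t^2\mathbf{s} = \nabla\cdot\mathbf{T} + \mathbf{f},
        \qquad \mathbf{T} = \mathbf{c} : \nabla\mathbf{s}.

        % Weak form: dot with a test vector w and integrate by parts over the model
        % volume; the surface term vanishes on a traction-free boundary.
        \int_\Omega \rho\,\mathbf{w}\cdot\partial_t^2\mathbf{s}\,\mathrm{d}^3\mathbf{x}
          = -\int_\Omega \nabla\mathbf{w}:\mathbf{T}\,\mathrm{d}^3\mathbf{x}
            + \int_\Omega \mathbf{w}\cdot\mathbf{f}\,\mathrm{d}^3\mathbf{x}.

    In the adjoint approach mentioned at the end of the abstract, the same wave equation is solved with time-reversed data residuals acting as sources, and the interaction of this adjoint field with the forward field yields sensitivity kernels for the model parameters.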

    Continuous measurements of greenhouse gases and atmospheric oxygen at the Namib Desert atmospheric observatory

    A new coastal background site has been established for observations of greenhouse gases (GHGs) in the central Namib Desert at Gobabeb, Namibia. The location of the site was chosen to provide observations for a data-poor region in the global sampling network for GHGs. Semi-automated continuous measurements of carbon dioxide, methane, nitrous oxide, carbon monoxide, atmospheric oxygen, and basic meteorology are made at a height of 21 m a.g.l., 50 km from the coast at the northern border of the Namib Sand Sea. Atmospheric oxygen is measured with a differential fuel cell analyzer (DFCA). Carbon dioxide and methane are measured with an early-model cavity ring-down spectrometer (CRDS); nitrous oxide and carbon monoxide are measured with an off-axis integrated cavity output spectrometer (OA-ICOS). Instrument-specific water corrections are employed for both the CRDS and OA-ICOS instruments in lieu of drying. The performance and measurement uncertainties are discussed in detail. As the station is located in a remote desert environment, there are some particular challenges, namely fine dust, high diurnal temperature variability, and minimal infrastructure. The gas handling system and calibration scheme were tailored to best fit the conditions of the site. The CRDS and DFCA provide data of acceptable quality when base requirements for operation are met, specifically adequate temperature control in the laboratory and a regular supply of electricity. In the case of the OA-ICOS instrument, performance is significantly improved through the implementation of a drift correction based on frequent measurements of a reference cylinder.
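
    As a concrete illustration of the reference-cylinder drift correction mentioned for the OA-ICOS instrument, the sketch below interpolates the offset between measured and assigned reference values in time and subtracts it from the sample record; the numbers and the linear interpolation are assumptions, not the station's actual calibration scheme.

        # Illustrative drift correction: interpolate the instrument offset, measured on a
        # reference cylinder of known mole fraction, and remove it from the sample record.
        # All values and the linear interpolation are assumptions for illustration only.
        import numpy as np

        REF_ASSIGNED = 330.0  # hypothetical assigned N2O mole fraction, ppb

        ref_times = np.array([0.0, 6.0, 12.0, 18.0, 24.0])            # hours
        ref_measured = np.array([330.4, 330.6, 330.9, 331.1, 331.4])  # slow upward drift

        sample_times = np.array([1.0, 5.0, 9.0, 15.0, 23.0])
        sample_measured = np.array([331.0, 330.8, 331.5, 331.7, 332.0])

        drift = np.interp(sample_times, ref_times, ref_measured - REF_ASSIGNED)
        print(np.round(sample_measured - drift, 2))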