1,128 research outputs found

    Mathematical modelling of curtain coating

    We present a simple mathematical model for the fluid flow in the curtain coating process, exploiting the small aspect ratio, and examine the model in the large-Reynolds-number limit of industrial interest. We show that the fluid is in free fall except for a region close to the substrate, but find that the model cannot describe the turning of the curtain onto the substrate. We find that the inclusion of a viscous bending moment close to the substrate allows the curtain to “turn the corner”.
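    The abstract does not reproduce the governing equations, so the relations below are only a generic sketch of the free-fall behaviour it refers to: an inviscid curtain leaving a slot at speed u_0 with thickness h_0 accelerates under gravity, and mass conservation thins the sheet accordingly. The symbols are illustrative and are not taken from the paper.

        u(z) = \sqrt{u_0^2 + 2 g z}, \qquad
        h(z) = \frac{u_0 h_0}{u(z)} = \frac{u_0 h_0}{\sqrt{u_0^2 + 2 g z}}

    Here z is the distance fallen. Viscosity, surface tension and the viscous bending moment discussed in the abstract matter only in thin regions, notably where the curtain turns onto the moving substrate.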

    Predictive analysis of a hydrodynamics application on large-scale CMP clusters

    We present the development of a predictive performance model for the high-performance computing code Hydra, a hydrodynamics benchmark developed and maintained by the United Kingdom Atomic Weapons Establishment (AWE). The model describes the parallel computation of Hydra and makes it possible to predict its runtime and scaling performance on varying large-scale chip multiprocessor (CMP) clusters. A key feature of the model is its granularity: it separates the contributing costs, including computation, point-to-point communications, collectives, message buffering and message synchronisation. The predictions are validated on two contrasting large-scale HPC systems, an AMD Opteron/InfiniBand cluster and an IBM BlueGene/P, both of which are located at the Lawrence Livermore National Laboratory (LLNL) in the US. We validate the model on up to 2,048 cores, where it achieves greater than 85% accuracy in weak-scaling studies. We also demonstrate use of the model in exposing the increasing cost of collectives for this application and the influence of node density on network accesses, thereby highlighting the impact of machine choice when running this hydrodynamics application at scale.
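    The abstract describes an analytic model that splits predicted runtime into computation, point-to-point, collective, buffering and synchronisation terms. The Python sketch below is a hypothetical illustration of that style of model, not the published Hydra model; every parameter name, cost term and constant is an assumption.

        """Minimal sketch of an analytic performance model of the kind described
        in the abstract. Illustrative only; not the published Hydra model."""

        import math


        def predicted_runtime(cells_per_core, timesteps, cores,
                              t_cell=2.0e-7,        # seconds of compute per cell update (assumed)
                              latency=1.5e-6,       # point-to-point latency in seconds (assumed)
                              bandwidth=1.0e9,      # bytes per second per link (assumed)
                              halo_bytes=48,        # bytes exchanged per boundary cell (assumed)
                              msgs_per_step=6,      # neighbour exchanges per timestep (assumed)
                              collective_bytes=8):  # payload of the per-step reduction (assumed)
            """Predict runtime as compute + point-to-point + collective costs."""
            # Computation: cost per cell times the local cell count.
            t_compute = cells_per_core * t_cell

            # Point-to-point halo exchange: latency plus bandwidth term per message,
            # with message size proportional to the surface of a cubic subdomain.
            boundary_cells = cells_per_core ** (2.0 / 3.0)
            msg_bytes = boundary_cells * halo_bytes
            t_p2p = msgs_per_step * (latency + msg_bytes / bandwidth)

            # Collective (e.g. an allreduce) modelled as a log2(P) tree of small messages.
            t_coll = math.ceil(math.log2(max(cores, 2))) * (latency + collective_bytes / bandwidth)

            return timesteps * (t_compute + t_p2p + t_coll)


        if __name__ == "__main__":
            # Weak-scaling style prediction: fixed work per core, growing core count.
            for p in (256, 512, 1024, 2048):
                print(p, round(predicted_runtime(64 ** 3, 100, p), 3), "s")

    Run as a script, the sketch prints weak-scaling predictions from 256 up to 2,048 cores, mirroring the kind of scaling study the abstract reports.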

    Optimisation of patch distribution strategies for AMR applications

    As core counts increase in the world's most powerful supercomputers, applications are becoming limited not only by computational power, but also by data availability. In the race to exascale, efficient and effective communication policies are key to achieving optimal application performance. Applications using adaptive mesh refinement (AMR) trade off communication against computational load balancing, enabling computation to be focused on specific areas of interest. This class of application is particularly susceptible to the communication performance of the underlying architecture and is inherently difficult to scale efficiently. In this paper we present a study of the effect of patch distribution strategies on the scalability of an AMR code. We demonstrate the significance of patch placement on communication overheads and, by balancing the computation and communication costs of patches, we develop a scheme to optimise the performance of a specific, industry-strength benchmark application.
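    The abstract argues for distribution strategies that weigh a patch's computational load against the communication its placement induces. The greedy assignment below is a hypothetical sketch of that trade-off; the patch representation, cost weights and locality penalty are assumptions, not the paper's actual scheme.

        """Sketch of a greedy patch-distribution heuristic balancing computation
        against communication. Purely illustrative; not the paper's strategy."""

        from dataclasses import dataclass


        @dataclass
        class Patch:
            cells: int        # proxy for the patch's computational work
            parent_rank: int  # rank owning the coarser parent patch


        def distribute(patches, ranks, comm_penalty=0.25):
            """Assign each patch to the rank minimising (load + communication) cost.

            comm_penalty is an assumed weight for placing a patch away from its
            parent's rank, which would force extra inter-rank communication.
            """
            load = [0.0] * ranks
            assignment = []
            # Place the largest patches first so smaller ones can fill the gaps.
            for i, patch in sorted(enumerate(patches), key=lambda p: -p[1].cells):
                def cost(r):
                    remote = 0.0 if r == patch.parent_rank else comm_penalty * patch.cells
                    return load[r] + patch.cells + remote
                best = min(range(ranks), key=cost)
                load[best] += patch.cells
                assignment.append((i, best))
            return sorted(assignment)


        if __name__ == "__main__":
            demo = [Patch(4096, 0), Patch(1024, 1), Patch(512, 1), Patch(2048, 2)]
            print(distribute(demo, ranks=4))  # list of (patch index, assigned rank)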

    Generalized Qualification and Qualification Levels for Spectral Regularization Methods

    The concept of qualification for spectral regularization methods (SRMs) for inverse ill-posed problems is strongly associated with the optimal order of convergence of the regularization error. In this article, the definition of qualification is extended and three different levels are introduced: weak, strong and optimal. It is shown that the weak qualification extends the definition introduced by Mathé and Pereverzev in 2003, mainly in the sense that the functions associated with orders of convergence and source sets need not be the same. It is shown that certain methods possessing infinite classical qualification, e.g. truncated singular value decomposition (TSVD), Landweber's method and Showalter's method, also have generalized qualification leading to an optimal order of convergence of the regularization error. Sufficient conditions for an SRM to have weak qualification are provided, and necessary and sufficient conditions for a given order of convergence to be strong or optimal qualification are found. Examples of all three qualification levels are provided, and the relationships between them, as well as with the classical concept of qualification and the qualification introduced by Mathé and Pereverzev, are shown. In particular, spectral regularization methods having extended qualification at each of the three levels and having zero or infinite classical qualification are presented. Finally, several implications of this theory in the context of orders of convergence, converse results and maximal source sets for inverse ill-posed problems are shown. (Comment: 20 pages, 1 figure)
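    For context, the two notions being generalised can be written as follows; the notation is the usual one from the regularization literature and is not necessarily the article's. Writing r_\alpha(\lambda) = 1 - \lambda g_\alpha(\lambda) for the residual function of a spectral method with filter g_\alpha, classical qualification \mu_0 and the Mathé and Pereverzev (2003) qualification given by an index function \rho read:

        % Classical qualification \mu_0: the residual decays like \alpha^\mu for every 0 < \mu \le \mu_0
        \sup_{0 < \lambda \le \lambda_0} \lambda^{\mu}\,|r_\alpha(\lambda)| \;\le\; c_\mu\,\alpha^{\mu},
            \qquad 0 < \mu \le \mu_0,
        % Generalised qualification: a single index function \rho replaces the family of powers
        \sup_{0 < \lambda \le \lambda_0} \rho(\lambda)\,|r_\alpha(\lambda)| \;\le\; c\,\rho(\alpha),
            \qquad 0 < \alpha \le \lambda_0.

    For example, Tikhonov regularization, with g_\alpha(\lambda) = 1/(\lambda + \alpha) and r_\alpha(\lambda) = \alpha/(\lambda + \alpha), has classical qualification \mu_0 = 1, whereas TSVD and Landweber iteration, mentioned above, have infinite classical qualification.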

    Performance optimisation of inertial confinement fusion codes using mini-applications

    Despite the recent successes of nuclear energy researchers, the scientific community remains some distance from being able to create controlled, self-sustaining fusion reactions. Inertial Confinement Fusion (ICF) techniques represent one possible option for overcoming this barrier, with scientific simulation playing a leading role in guiding and supporting their development. The simulation of such techniques allows for safe and efficient investigation of laser design and pulse shaping, as well as providing insight into the reaction as a whole. The research presented here focuses on the simulation code EPOCH, a fully relativistic particle-in-cell plasma physics code concerned with faithfully recreating laser-plasma interactions at scale. A significant challenge in developing large codes like EPOCH is maintaining effective scientific delivery on successive generations of high-performance computing architecture. To support this process, we adopt the use of mini-applications -- small code proxies that encapsulate important computational properties of their larger parent counterparts. Through the development of a mini-application for EPOCH (called miniEPOCH), we investigate a variety of the performance features exhibited in EPOCH, expose opportunities for optimisation and increased scientific capability, and offer our conclusions to guide future changes to similar ICF codes.
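    To make the particle-in-cell idea concrete, the toy Python step below deposits charge to a 1D grid, solves a periodic field equation, and pushes particles. It is purely illustrative and is not derived from EPOCH or miniEPOCH (which treat the full relativistic electromagnetic problem); the normalised units, grid parameters and electrostatic field solve are all assumptions.

        """Toy 1D electrostatic particle-in-cell step, for illustration only.
        Not derived from EPOCH or miniEPOCH; normalised units throughout."""

        import numpy as np


        def pic_step(x, v, grid_n, box_len, dt, charge=-1.0, mass=1.0):
            """One PIC cycle: deposit charge, solve for E, gather, push particles."""
            dx = box_len / grid_n

            # 1. Charge deposition with linear (cloud-in-cell) weighting.
            cell = np.floor(x / dx).astype(int) % grid_n
            frac = x / dx - np.floor(x / dx)
            rho = np.zeros(grid_n)
            np.add.at(rho, cell, charge * (1.0 - frac))
            np.add.at(rho, (cell + 1) % grid_n, charge * frac)
            rho /= dx

            # 2. Gauss's law in Fourier space (periodic): i k E_k = rho_k  =>  E_k = -i rho_k / k.
            k = 2.0 * np.pi * np.fft.fftfreq(grid_n, d=dx)
            rho_k = np.fft.fft(rho - rho.mean())   # subtract the neutralising background
            k[0] = 1.0                             # avoid division by zero at k = 0
            e_k = -1j * rho_k / k
            e_k[0] = 0.0
            e_grid = np.real(np.fft.ifft(e_k))

            # 3. Gather the field back to particle positions (same linear weights).
            e_part = e_grid[cell] * (1.0 - frac) + e_grid[(cell + 1) % grid_n] * frac

            # 4. Kick-drift update (leapfrog when v is staggered by half a step).
            v = v + (charge / mass) * e_part * dt
            x = (x + v * dt) % box_len
            return x, v


        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            x = rng.uniform(0.0, 1.0, 10_000)
            v = rng.normal(0.0, 0.05, 10_000)
            for _ in range(10):
                x, v = pic_step(x, v, grid_n=64, box_len=1.0, dt=0.1)
            print("mean speed:", np.abs(v).mean())

    A mini-application in the sense described above would isolate kernels like the deposition, field solve and push, so their performance can be studied without the full production code.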

    A randomised feasibility study of serial magnetic resonance imaging to reduce treatment times in Charcot neuroarthropathy in people with diabetes (CADOM): A protocol

    Background: Charcot neuroarthropathy is a complication of peripheral neuropathy associated with diabetes which most frequently affects the lower limb. It can cause fractures and dislocations within the foot, which may progress to deformity and ulceration. Recommended treatment is immobilisation and offloading with a below-knee non-removable cast or boot. Duration of treatment varies from six months to more than one year. Small observational studies suggest that repeated assessment with Magnetic Resonance Imaging (MRI) improves decision making about when to stop treatment, but this has not been tested in clinical trials. This study aims to explore the feasibility of using serial MRI without contrast in the monitoring of Charcot neuroarthropathy to reduce the duration of immobilisation of the foot. A nested qualitative study aims to explore participants' lived experience of Charcot neuroarthropathy and of taking part in the feasibility study.
    Methods: We will undertake a two-arm, open study and randomise 60 people with a suspected or confirmed diagnosis of Charcot neuroarthropathy from five NHS secondary care multidisciplinary Diabetic Foot Clinics across England. Participants will be randomised 1:1 to receive MRI at baseline and at remission, up to 12 months, with repeated foot temperature measurements and x-rays (standard care plus), or standard care plus with additional three-monthly MRI until remission, up to 12 months (intervention). Time to confirmed remission of Charcot neuroarthropathy with off-loading treatment (days) and its variance will be used to inform the sample size of a full-scale trial. We will look for opportunities to improve the protocols for monitoring techniques and the clinical, patient-centred and health economic measures used in a future study. For the nested qualitative study, we will invite a purposive sample of 10-14 people able to offer maximally varying experiences from the feasibility study to take part in semi-structured interviews, to be analysed using thematic analysis.
    Discussion: The study will inform the decision whether to proceed to a full-scale trial. It will also allow a deeper understanding of the lived experience of Charcot neuroarthropathy and of the factors that contribute to engagement in management, and will contribute to the development of more effective patient-centred strategies.
    Trial registration: ISRCTN74101606. Registered on 6 November 2017, http://www.isrctn.com/ISRCTN74101606?q=CADom&filters=&sort=&offset=1&totalResults=1&page=1&pageSize=10&searchType=basic-searc
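    The protocol's key quantitative output is the mean and variance of time to remission, which would feed a sample-size calculation for the full trial. The calculation is not specified in the abstract; as a hedged illustration only, the standard two-sample comparison of means would require, per arm,

        n \;=\; \frac{2\,\left(z_{1-\alpha/2} + z_{1-\beta}\right)^{2}\,\sigma^{2}}{\Delta^{2}},

    where \sigma^2 is the variance of time to remission estimated in the feasibility study, \Delta the clinically important difference in days, and z the standard normal quantiles. For instance, \alpha = 0.05, 90% power and \Delta = \sigma give n \approx 2(1.96 + 1.28)^2 \approx 21 participants per arm.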