
    The effect of internal pipe wall roughness on the accuracy of clamp-on ultrasonic flow meters

    Clamp-on transit-time ultrasonic flowmeters (UFMs) suffer from poor accuracy compared with spool-piece UFMs due to uncertainties that result from the in-field installation process. One important source of uncertainty is internal pipe wall roughness, which affects the flow profile and also causes significant scattering of ultrasound. This paper focuses purely on a parametric study to quantify the uncertainties induced by the scattering of ultrasound from internal pipe wall roughness, and it shows that these effects are large even before the associated flow disturbances are taken into account. The flowmeter signals for a reference clamp-on flowmeter setup were simulated using 2-D finite element analysis, with simplifying assumptions (to represent the effect of flow) that were deemed appropriate. The validity of the simulations was indirectly verified by experiments with different separation distances between the ultrasonic probes; the errors predicted by the simulations and the experimentally observed errors were in good agreement. The simulation method was then applied to pipe walls with rough internal surfaces. For ultrasonic waves at 1 MHz, it was found that, compared with smooth pipes, pipes with only a moderately rough internal surface (0.2-mm rms roughness and 5-mm correlation length) can exhibit systematic errors of 2% in the flow velocity measurement. This demonstrates that internal pipe surface roughness is a very important factor limiting the accuracy of clamp-on UFMs.
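
    For context, the transit-time principle on which such meters operate can be written out. This is the standard textbook relation rather than an equation from the paper, and the symbols (beam path length L in the fluid, beam angle θ, sound speed c, flow speed v) are generic.

```latex
% Transit-time flow measurement (standard relation, for context only)
t_{\mathrm{down}} = \frac{L}{c + v\cos\theta}, \qquad
t_{\mathrm{up}}   = \frac{L}{c - v\cos\theta}
% eliminating the sound speed c gives the flow speed from the two
% measured transit times, so any bias in the times biases v directly:
v = \frac{L}{2\cos\theta} \cdot
    \frac{t_{\mathrm{up}} - t_{\mathrm{down}}}{t_{\mathrm{up}}\, t_{\mathrm{down}}}
```

    Since v is proportional to the small difference between the two transit times, even modest roughness-induced distortion of the received signals can translate into percent-level velocity errors.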

    Inner product computation for sparse iterative solvers on distributed supercomputer

    Recent years have shown that Krylov iterative methods, without re-design, are not well suited to distributed supercomputers because of their intensive global communications. It is well accepted that re-engineering Krylov methods for a prescribed computer architecture is necessary and important for achieving higher performance and scalability. This paper focuses on simple and practical ways to re-organize Krylov methods and improve their performance on current heterogeneous distributed supercomputers. In contrast with most current software development for Krylov methods, which usually focuses on efficient matrix-vector multiplications, the paper focuses on how inner products are computed on supercomputers and explains why inner product computation on current heterogeneous distributed supercomputers is crucial for scalable Krylov methods. Communication complexity analysis shows how inner product computation can become the performance bottleneck of (inner) product-type iterative solvers on distributed supercomputers due to global communications. Principles for reducing such global communications are discussed. The importance of minimizing communications is demonstrated by experiments using up to 900 processors, carried out on a Dawning 5000A, one of the fastest and earliest heterogeneous supercomputers in the world. Both the analysis and the experiments indicate that inner product computation is very likely to be the most challenging kernel for inner-product-based iterative solvers at exascale.
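
    A concrete instance of the kind of re-organization discussed here, offered as a generic illustration rather than the paper's specific scheme, is to batch several inner products into a single global reduction, so that one synchronization replaces two. A minimal C/MPI sketch:

```c
/* Sketch: batching inner products to cut global synchronizations.
 * Generic illustration, not code from the paper: the two dot
 * products needed in one Krylov iteration are combined into ONE
 * MPI_Allreduce instead of two, halving the global-sync count. */
#include <mpi.h>
#include <stdio.h>

#define N_LOCAL 4                 /* local slice of each distributed vector */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double r[N_LOCAL], z[N_LOCAL], p[N_LOCAL], q[N_LOCAL];
    for (int i = 0; i < N_LOCAL; ++i) {      /* dummy local data */
        r[i] = z[i] = 1.0;
        p[i] = q[i] = 0.5 + rank;
    }

    double dots[2] = {0.0, 0.0};             /* local partial sums */
    for (int i = 0; i < N_LOCAL; ++i) {
        dots[0] += r[i] * z[i];              /* (r, z) */
        dots[1] += p[i] * q[i];              /* (p, q) */
    }

    /* one global reduction for both inner products */
    MPI_Allreduce(MPI_IN_PLACE, dots, 2, MPI_DOUBLE, MPI_SUM,
                  MPI_COMM_WORLD);

    if (rank == 0)
        printf("(r,z) = %g, (p,q) = %g\n", dots[0], dots[1]);
    MPI_Finalize();
    return 0;
}
```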

    Minimizing synchronizations in sparse iterative solvers for distributed supercomputers

    Eliminating synchronizations is one of the important techniques for minimizing communications in modern high performance computing. This paper discusses principles for reducing the communications due to global synchronizations in sparse iterative solvers on distributed supercomputers. We demonstrate how to minimize global synchronizations by rescheduling a typical Krylov subspace method. The benefit of minimizing synchronizations is shown by theoretical analysis and verified by numerical experiments using up to 900 processors. The experiments also show that the communication complexity of some structured sparse matrix-vector multiplications and of the global communications on the underlying supercomputer scale as O(P^(1/2.5)) and O(P^(4/5)) respectively, where P is the number of processors; the experiments were carried out on a Dawning 5000A.
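
    Another standard rescheduling device, sketched here generically since the abstract does not spell out the paper's exact scheme, is to start the global reduction as a nonblocking collective (MPI-3) and overlap it with local work:

```c
/* Sketch: hiding a global synchronization behind local work using a
 * nonblocking collective (MPI-3). Generic illustration only. */
#include <mpi.h>
#include <stdio.h>

#define N_LOCAL 1024

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    static double p[N_LOCAL], q[N_LOCAL];
    for (int i = 0; i < N_LOCAL; ++i) { p[i] = 1.0; q[i] = 2.0; }

    double local = 0.0, global = 0.0;
    for (int i = 0; i < N_LOCAL; ++i)
        local += p[i] * q[i];

    MPI_Request req;                         /* start, but do not block */
    MPI_Iallreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                   MPI_COMM_WORLD, &req);

    /* ... local work that does not need `global` would go here,
     * e.g. the local part of the next matrix-vector product ... */

    MPI_Wait(&req, MPI_STATUS_IGNORE);       /* synchronize only when needed */
    if (rank == 0)
        printf("global dot = %g\n", global);
    MPI_Finalize();
    return 0;
}
```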

    A Comment on "Memory Effects in an Interacting Magnetic Nanoparticle System"

    Recently, Sun et al. reported that striking memory effects had been clearly observed in their new experiments on an interacting nanoparticle system [1]. They claimed that the phenomena evidenced the existence of a spin-glass-like phase and supported the hierarchical model. There is no doubt that a particle system may display spin-glass-like behaviors [2]. In our opinion, however, the experiments in Ref. [1] cannot evidence the existence of a spin-glass-like phase at all. We will demonstrate below that all the phenomena in Ref. [1] can be observed in a non-interacting particle system with a size distribution. Numerical simulations of our experiments also display the same features.
    Comment: A comment on Phys. Rev. Lett. 91, 167206
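
    The mechanism invoked here rests on standard superparamagnetic relaxation: in a non-interacting system, a spread of particle sizes alone produces a broad spread of relaxation times. For reference, the textbook Néel-Arrhenius relation (not quoted from the comment itself):

```latex
% Neel-Arrhenius relaxation time of a non-interacting single-domain
% particle: V = volume, K = anisotropy constant, k_B T = thermal energy
\tau = \tau_0 \exp\!\left(\frac{K V}{k_B T}\right)
% a distribution of volumes V yields a broad distribution of blocking
% temperatures, which can mimic glass-like aging and memory effects
```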

    Fast Monte Carlo Simulation for Patient-specific CT/CBCT Imaging Dose Calculation

    Recently, the X-ray imaging dose from computed tomography (CT) or cone-beam CT (CBCT) scans has become a serious concern. Patient-specific imaging dose calculation has been proposed for the purpose of dose management. While Monte Carlo (MC) dose calculation can be quite accurate for this purpose, it suffers from low computational efficiency. In response to this problem, we have developed a MC dose calculation package, gCTD, on GPU architecture under the NVIDIA CUDA platform for fast and accurate estimation of the x-ray imaging dose received by a patient during a CT or CBCT scan. Techniques have been developed specifically for the GPU architecture to achieve high computational efficiency. Dose calculations using CBCT scanning geometry in a homogeneous water phantom and a heterogeneous Zubal head phantom show good agreement between gCTD and EGSnrc, indicating the accuracy of our code. In terms of efficiency, gCTD attains a speed-up of ~400 times in the homogeneous water phantom and ~76.6 times in the Zubal phantom compared to EGSnrc. As for absolute computation time, imaging dose calculation for the Zubal phantom can be accomplished in ~17 sec with an average relative standard deviation of 0.4%. Though our gCTD code has been developed and tested in the context of CBCT scans, with simple modification of the geometry it can be used for assessing imaging dose in CT scans as well.
    Comment: 18 pages, 7 figures, and 1 table
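
    To make the structure of such a calculation concrete, below is a deliberately simplified serial skeleton of a photon-transport history, the unit of work that a GPU code such as gCTD executes in parallel, one history per thread. The constants and the absorption-only physics are placeholders, not values or methods from gCTD.

```c
/* Sketch: core of a photon-transport Monte Carlo loop (serial,
 * absorption-only). A GPU code runs many such histories in
 * parallel; all constants here are illustrative placeholders. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(void) {
    const double mu = 0.02;        /* attenuation coefficient [1/mm], placeholder */
    const double slab = 300.0;     /* phantom thickness [mm], placeholder */
    const int n_hist = 1000000;    /* number of photon histories */
    double tally = 0.0;            /* crude deposited-energy tally */

    srand(12345);
    for (int h = 0; h < n_hist; ++h) {
        double xi = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
        double x = -log(xi) / mu;        /* sample free path length */
        if (x <= slab)
            tally += 1.0;  /* photoabsorption inside phantom: deposit unit energy */
    }
    printf("mean deposited energy per history: %g\n", tally / n_hist);
    return 0;
}
```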

    Risk, cohabitation and marriage

    This paper introduces imperfect information, learning, and risk aversion in a two-sided matching model. The model provides a theoretical framework for the commonly occurring phenomenon of cohabitation followed by marriage, and is consistent with empirical findings on these institutions. The paper has three major results. First, individuals set higher standards for marriage than for cohabitation. When the true worth of a cohabiting partner is revealed, some cohabiting unions are converted into marriage while others are not. Second, individuals cohabit within classes. Third, the premium that compensates individuals for the higher risk involved in marriage over a cohabiting partnership is derived. This premium can be decomposed into two parts. The first part is a function of the individual's level of risk aversion, while the second part is a function of the difference in risk between marriage and cohabitation.
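
    The decomposition described in the last result is reminiscent of the Arrow-Pratt small-risk approximation, in which a premium factors into a risk-aversion term and a risk term. The following is offered purely as an illustration of that general shape, not as the paper's actual derivation:

```latex
% Arrow-Pratt coefficient of absolute risk aversion for utility u, wealth w
A(w) = -\frac{u''(w)}{u'(w)}
% in the small-risk approximation, a premium for bearing the extra
% variance of marriage over cohabitation factors into a risk-aversion
% part and a difference-in-risk part (illustrative only):
\pi \approx \tfrac{1}{2}\, A(w)\,
    \bigl(\sigma^2_{\text{marriage}} - \sigma^2_{\text{cohabitation}}\bigr)
```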

    BioNessie - a grid enabled biochemical networks simulation environment

    The simulation of biochemical networks provides insight into the underlying biochemical processes and pathways used by cells and organisms. BioNessie is a biochemical network simulator developed at the University of Glasgow. This paper describes the simulator and focuses in particular on how it has been extended, through Grid technologies, to benefit from a wide variety of high-performance compute resources across the UK in order to support larger-scale simulations.
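
    For readers unfamiliar with what simulating a biochemical network involves numerically, the sketch below integrates a toy reversible reaction A ⇌ B by explicit Euler under mass-action kinetics. The species, rate constants, and method are invented for illustration and are not taken from BioNessie.

```c
/* Sketch: mass-action ODE integration for a toy network A <-> B.
 * Illustrative only; not BioNessie code. */
#include <stdio.h>

int main(void) {
    double A = 1.0, B = 0.0;              /* initial concentrations */
    const double kf = 0.3, kb = 0.1;      /* forward/backward rate constants */
    const double dt = 0.01;               /* Euler time step */

    for (int step = 0; step <= 1000; ++step) {
        if (step % 200 == 0)
            printf("t=%5.2f  A=%.4f  B=%.4f\n", step * dt, A, B);
        double flux = kf * A - kb * B;    /* net mass-action flux A -> B */
        A -= flux * dt;
        B += flux * dt;
    }
    return 0;
}
```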