223 research outputs found

    Dispersion and deposition of heavy particles in turbulent flows

    PhD Thesis
    For nearly 40 years, engineers, researchers and scientists from the nuclear industry across the world have been trying to understand the deposition, bounce and re-suspension behaviour of heavy, radioactive particles suspended as a dilute secondary phase in the cooling circuits of primary reactor systems. The aim is to understand the mechanisms of transport and deposition of such particles through large, complex-geometry systems, so that the risk of dispersal of radioactive particles, both within closed containments and into the atmosphere in an accident scenario, may be assessed and confirmed to be acceptably small. The first part of the present work addresses the challenge of robustly and efficiently predicting the behaviour of rigid, spherical particles (referred to as heavy particles) within turbulent boundary layers, the underlying physics of which is the controlling factor in particle deposition in smooth pipes and ducts. In the second part of the work we study the deposition and bounce of heavy particles suspended in turbulent flow across heat exchanger tube banks, using Large Eddy Simulation (LES). It was originally proposed to extend the boundary layer work to this application, but it was quickly identified that the deposition mechanisms here are governed by the high core-flow turbulence rather than by boundary layer phenomena, so that LES provides the only realistic modelling approach. In both cases the dispersed heavy particles are described in a Lagrangian framework solved in an independently developed large-scale parallel code, whilst the fluid phase is described in an Eulerian framework, based either on correlations from published Direct Numerical Simulation (DNS) for the boundary layer models, or on Computational Fluid Dynamics (CFD) simulations for both the boundary layer and tube-bank models, making use of the unstructured-grid Navier-Stokes solver ANSYS FLUENT.
Underpinning this work, we implement a complete stochastic Lagrangian particle tracking module, based on a robust and efficient particle localization algorithm which can determine and update the cell containing each particle as the particles move through an unstructured finite volume grid overlying the flow domain. The module correctly handles the interactions of particles with complex boundaries, and uses a novel numerical scheme for interpolating the carrier-phase velocity field seen by the particles from the cell-centred values obtained from the CFD computation. It implements a Gear three-level implicit scheme to compute the particle velocity, which is more robust, accurate and efficient than conventional explicit and implicit schemes. The module has been fully parallelized using MPI (Message Passing Interface) on a Linux cluster of 20 single-CPU nodes, and has been successfully integrated with both the steady and unsteady ANSYS FLUENT solvers, completely replacing the built-in Lagrangian particle tracking model provided by ANSYS FLUENT. The algorithm and numerical schemes have been validated against analytical solutions of particle transport in a two-dimensional straining shear flow, among other cases. For turbulent boundary layer flows, a simpler but more promising stochastic quadrant model, inspired by the discrete random walk model of Kallio and Reeks and the quadrant analysis of Wu and Willmarth, is developed in order to account for the effects of near-wall large-scale coherent structures, e.g. sweeps and ejections, on particle transport. The input parameters for the stochastic quadrant model are educed from the corresponding statistics obtained from a Large Eddy Simulation of a fully developed channel flow.
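    A Gear three-level (second-order backward-difference) implicit update of the kind mentioned above can be sketched for a particle governed by linear Stokes drag. The drag law, time step and relaxation time below are illustrative assumptions, not values or code from the thesis:

```python
import numpy as np

def gear_bdf2_velocity(v_n, v_nm1, u_fluid, dt, tau_p):
    """One three-level (BDF2 / Gear) implicit step for particle velocity under
    linear Stokes drag, dv/dt = (u_fluid - v) / tau_p:
        (3 v_{n+1} - 4 v_n + v_{n-1}) / (2 dt) = (u_fluid - v_{n+1}) / tau_p.
    Solving for v_{n+1} gives the explicit update below."""
    c = 2.0 * dt / tau_p
    return (4.0 * v_n - v_nm1 + c * u_fluid) / (3.0 + c)

# Relax a particle, started from rest, toward a constant fluid velocity.
# (The very first step of a real run would use a one-step scheme; starting
# both stored levels at rest is consistent here.)
tau_p, dt = 1e-3, 1e-4
u = np.array([1.0, 0.0, 0.0])
v_prev = np.zeros(3)
v_curr = np.zeros(3)
for _ in range(200):
    v_prev, v_curr = v_curr, gear_bdf2_velocity(v_curr, v_prev, u, dt, tau_p)
# after 20 relaxation times the particle velocity is essentially the fluid velocity
```

    The scheme is implicit in the drag term (the new velocity appears on both sides before rearrangement), which is what gives it better stability than explicit schemes for small `tau_p`.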
The model is applied to the prediction of deposition of heavy particles in a turbulent boundary layer, both using a flow model based on the correlations of Kallio and Reeks, and also using a Reynolds-Averaged Navier-Stokes (RANS) flow solution obtained with ANSYS FLUENT, the latter flow model having the potential to be extended to complex duct geometries. These solutions are compared to those obtained by solving the alternative Langevin equation of Dehbi, or continuous random walk model, which satisfies the well-mixed condition and describes the fluid velocity fluctuations seen by heavy particles. Prior to the current work, no systematic investigation had been carried out of the potential errors in predicted particle deposition in turbulent boundary layers due to the modified hydrodynamic forces experienced by particles very close to the wall, possibly because of the complexity of the correlations involved. The effect is explored with the present stochastic quadrant model, using the recently published composite correlations of Zeng and Balachandar for the drag coefficient CD and lift coefficient CL of near-wall particles. This work provides an important first confirmation that, for practical cases, these hydrodynamic effects can reasonably be neglected for particle deposition in turbulent boundary layers. The boundary layer methods developed in the first part of this thesis are applicable to the prediction of heavy particle deposition in fairly complex duct geometries, but are shown to be inappropriate for flow over tube banks, where the boundary layers are no longer the rate-limiting feature. Consequently, the parallel Lagrangian stochastic particle tracking model is extended to study the particle impaction efficiency on tube banks in a turbulent flow in the framework of LES. The flow field, obtained from LES with the dynamic Smagorinsky sub-grid-scale model within ANSYS FLUENT, is fully validated against existing experimental data.
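    As a rough illustration of the continuous-random-walk idea, the sketch below advances a one-dimensional Langevin model for the fluid velocity fluctuation seen by a particle. This is the generic homogeneous-turbulence form, not Dehbi's wall-resolved model (which adds drift corrections for inhomogeneous turbulence); the timescale and variance are invented parameters:

```python
import numpy as np

def crw_step(u, dt, T_L, sigma, rng):
    """One Euler-Maruyama step of a 1-D Langevin (continuous random walk)
    model for the fluid velocity fluctuation seen by a particle:
        du = -(u / T_L) dt + sqrt(2 sigma^2 / T_L) dW.
    T_L is the Lagrangian integral timescale, sigma^2 the velocity variance.
    Dehbi's wall-resolved model adds inhomogeneity drift terms, omitted here."""
    return u - (u / T_L) * dt + np.sqrt(2.0 * sigma**2 * dt / T_L) * rng.standard_normal()

rng = np.random.default_rng(0)
T_L, sigma, dt = 1.0, 0.5, 0.01
u = 0.0
samples = []
for _ in range(200_000):
    u = crw_step(u, dt, T_L, sigma, rng)
    samples.append(u)
# the sample variance settles near the prescribed sigma^2 (well-mixed behaviour
# in this homogeneous setting)
```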
As far as the dispersed particle phase is concerned, the energy losses when particles impact on, and generally but not always rebound from, cylinders within the tube bank are taken into account using an empirical critical-impact velocity model. The particle impaction efficiency is measured for particles at three Stokes numbers, and the results are compared with existing experimental data.
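    A minimal sketch of how a critical-impact-velocity bounce test might look is given below. The threshold and restitution coefficient are placeholder values, and the thesis's empirical correlation is not reproduced here:

```python
def particle_wall_outcome(v_normal_impact, v_critical, restitution=0.8):
    """Illustrative capture/rebound rule: a particle whose wall-normal impact
    speed is below the critical capture velocity sticks (deposits); otherwise
    it rebounds with its normal velocity reversed and reduced in magnitude,
    representing the energy lost in the impact. All values are placeholders."""
    if v_normal_impact <= v_critical:
        return 0.0                             # captured: deposits on the tube
    return -restitution * v_normal_impact      # rebounds with energy loss
```

    In an impaction-efficiency calculation, the fraction of injected particles that end with outcome "captured" on a cylinder, relative to those geometrically sweeping its frontal area, gives the efficiency for that Stokes number.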

    Fluid flow in a porous medium with transverse permeability discontinuity

Magnetic Resonance Imaging (MRI) velocimetry methods were used to study fully developed, axially symmetric fluid flow in a model porous medium of cylindrical symmetry with a transverse permeability discontinuity. Spatial mapping of the fluid flow resulted in radial velocity profiles. The high spatial resolution of these profiles allowed estimation of the velocity slip at the boundary of the permeability-discontinuity zone in a sample. The profiles were compared to theoretical velocity fields for fully developed, axially symmetric flow in a cylinder derived from the Joseph and Beavers and the Brinkman models. Velocity fields were also computed using pore-scale lattice Boltzmann modelling (LBM), where the assumption about the boundary could be omitted. Both approaches gave good agreement between theory and experiment, though the LBM velocity fields followed the experiment more closely. This work shows great promise for MRI velocimetry methods in addressing the boundary behavior of fluids in opaque, heterogeneous porous media.

    Temporal Consistency Learning of inter-frames for Video Super-Resolution

Video super-resolution (VSR) is a task that aims to reconstruct high-resolution (HR) frames from a low-resolution (LR) reference frame and multiple neighboring frames. The key operation is to exploit the relatively misaligned frames for the reconstruction of the current frame while preserving the consistency of the results. Existing methods generally explore information propagation and frame alignment to improve the performance of VSR. However, few studies focus on the temporal consistency of inter-frames. In this paper, we propose a Temporal Consistency learning Network (TCNet) for VSR in an end-to-end manner, to enhance the consistency of the reconstructed videos. A spatio-temporal stability module is designed to learn the self-alignment from inter-frames. In particular, correlative matching is employed to exploit the spatial dependency from each frame to maintain structural stability. Moreover, a self-attention mechanism is utilized to learn the temporal correspondence, implementing an adaptive warping operation for temporal consistency among multiple frames. In addition, a hybrid recurrent architecture is designed to leverage both short-term and long-term information. We further present a progressive fusion module to perform a multistage fusion of spatio-temporal features; the final reconstructed frames are refined by these fused features. Objective and subjective results of various experiments demonstrate that TCNet outperforms several state-of-the-art methods on different benchmark datasets.
    Comment: Accepted by IEEE Trans. Circuits Syst. Video Technol.

    Removing Batch Effects in Analysis of Expression Microarray Data: An Evaluation of Six Batch Adjustment Methods

The expression microarray is a frequently used approach to study gene expression on a genome-wide scale. However, the data produced by the thousands of microarray studies published annually are confounded by “batch effects,” the systematic error introduced when samples are processed in multiple batches. Although batch effects can be reduced by careful experimental design, they cannot be eliminated unless the whole study is done in a single batch. A number of programs are now available to adjust microarray data for batch effects prior to analysis. We systematically evaluated six of these programs using multiple measures of precision, accuracy and overall performance. ComBat, an Empirical Bayes method, outperformed the other five programs by most metrics. We also showed that it is essential to standardize expression data at the probe level when testing for correlation of expression profiles, due to a sizeable probe effect in microarray data that can inflate the correlation among replicates and unrelated samples.
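    To make the two ideas concrete, the sketch below pairs a naive mean-centering batch adjustment (a deliberately simpler stand-in for ComBat, which additionally shrinks per-batch location and scale estimates with empirical Bayes) with probe-level standardization; the toy expression matrix is invented:

```python
import numpy as np

def mean_center_batches(expr, batches):
    """Naive batch adjustment: shift each batch's per-probe mean onto the
    grand per-probe mean. Rows are probes, columns are samples. ComBat goes
    further, using empirical Bayes shrinkage of per-batch location/scale."""
    expr = np.asarray(expr, dtype=float)
    adjusted = expr.copy()
    grand = expr.mean(axis=1, keepdims=True)
    for b in np.unique(batches):
        cols = np.asarray(batches) == b
        adjusted[:, cols] += grand - expr[:, cols].mean(axis=1, keepdims=True)
    return adjusted

def standardize_probes(expr):
    """Probe-level standardization (z-score each probe/row) before computing
    sample correlations, so a strong probe effect does not inflate them."""
    expr = np.asarray(expr, dtype=float)
    return (expr - expr.mean(axis=1, keepdims=True)) / expr.std(axis=1, keepdims=True)

# two probes, four samples, two batches with an obvious additive batch offset
expr = np.array([[1.0, 2.0, 5.0, 6.0],
                 [0.0, 1.0, 4.0, 5.0]])
batches = [0, 0, 1, 1]
adjusted = mean_center_batches(expr, batches)
# after adjustment, each probe has the same mean in both batches
```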

    β-Cell Specific Overexpression of GPR39 Protects against Streptozotocin-Induced Hyperglycemia

Mice deficient in the zinc sensor GPR39, which has been demonstrated to protect cells against endoplasmic stress and cell death in vitro, display moderate glucose intolerance and impaired glucose-induced insulin secretion. Here, we use the Tet-On system under the control of the proinsulin promoter to selectively overexpress GPR39 in the β cells of a double-transgenic mouse strain, and challenge the mice with multiple low doses of streptozotocin, which in wild-type littermates leads to a gradual increase in nonfasting glucose levels and to glucose intolerance observed during both food intake and OGTT. Although overexpression of the constitutively active GPR39 receptor in animals not treated with streptozotocin appeared by itself to impair glucose tolerance slightly and to decrease the β-cell mass, it nevertheless totally protected against the gradual hyperglycemia in the streptozotocin-treated animals. It is concluded that GPR39 functions in a β-cell-protective manner; it is suggested that it is involved in some of the beneficial, β-cell-protective effects observed for Zn²⁺, and that GPR39 may be a target for antidiabetic drug intervention.

    Mining disease genes using integrated protein–protein interaction and gene–gene co-regulation information

In humans, despite the rapid increase in disease-associated gene discovery, a large proportion of disease-associated genes remain unknown. Many network-based approaches have been used to prioritize disease genes, drawing on networks such as the protein–protein interaction (PPI), KEGG, and gene co-expression networks. Expression quantitative trait loci (eQTLs) have been successfully applied to determine genes associated with several diseases. In this study, we constructed an eQTL-based gene–gene co-regulation network (GGCRN) and used it to mine for disease genes. We adopted the random walk with restart (RWR) algorithm to mine for genes associated with Alzheimer disease. Compared to the Human Protein Reference Database (HPRD) PPI network alone, the integrated HPRD PPI and GGCRN networks provided faster convergence and revealed new disease-related genes. Therefore, using the RWR algorithm on integrated PPI and GGCRN networks is an effective method for disease-associated gene mining.
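    A minimal sketch of the random walk with restart iteration on a toy gene network follows; the adjacency matrix, seed gene and restart probability are illustrative, not taken from the study:

```python
import numpy as np

def random_walk_with_restart(A, seeds, restart=0.7, tol=1e-10, max_iter=1000):
    """RWR on an undirected network given by adjacency matrix A:
        p <- (1 - r) * W p + r * p0,
    where W is the column-normalized transition matrix and p0 is uniform over
    the seed (known disease) genes. Iterate to convergence; candidate genes
    are then ranked by their steady-state probability."""
    A = np.asarray(A, dtype=float)
    W = A / A.sum(axis=0, keepdims=True)        # column-stochastic transitions
    p0 = np.zeros(A.shape[0])
    p0[seeds] = 1.0 / len(seeds)
    p = p0.copy()
    for _ in range(max_iter):
        p_next = (1.0 - restart) * W @ p + restart * p0
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next
    return p

# toy 4-gene chain network 0-1-2-3, seeded at gene 0
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
p = random_walk_with_restart(A, seeds=[0])
# genes closer to the seed receive higher steady-state probability
```

    Integrating networks, as in the study, amounts to building `A` (or `W`) from the combined PPI and GGCRN edges before running the same iteration.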