1,199 research outputs found

    Modeling and simulation of the boiling crisis in pressurized water reactors (PWRs) at the CFD scale

    In a Pressurized Water Reactor (PWR), the heat released by the nuclear fuel is transferred to the water of the primary circuit, which is pressurized at 150 bar to prevent it from boiling. In accident conditions, however, the water can enter a nucleate boiling regime that may intensify until the boiling crisis is reached. This quasi-instantaneous transition from nucleate boiling to film boiling leads to the formation of a stable vapor layer on the fuel rods, accompanied by a sharp rise in their wall temperature that creates a risk of cladding failure. Predicting the critical heat flux (the heat flux at which the boiling crisis occurs) is therefore a major safety issue; it is currently done with configuration-specific experimental correlations that do not include a detailed representation of the physics of boiling. This thesis addresses the modeling of boiling physics at the local, so-called "CFD" (Computational Fluid Dynamics) scale, at which boiling flows can be simulated with a spatial discretization on the order of one millimeter. The in-house code NEPTUNE_CFD, which provides an Eulerian description of multiphase flows with phase change, is the reference tool at EDF R&D for investigating these questions at local scales. First, simulations of convective boiling flows in a vertical tube are performed with NEPTUNE_CFD. Comparisons with the DEBORA experiment (boiling flow of refrigerant R12 in PWR similitude on several dimensionless numbers) allowed an assessment of the code under conditions similar to the industrial case. The results are globally in agreement with the experiment but show notable discrepancies in bubble diameter and wall temperature. The latter is computed through the NEPTUNE_CFD wall boiling model known as "Heat Flux Partitioning", in which the applied heat flux is split among several heat transfer mechanisms (convection, evaporation, transient conduction, etc.). The core of the thesis work then consisted in building a new Heat Flux Partitioning model, with the objective of capturing the phenomenology of boiling more finely, in particular by accounting for bubble sliding. The dynamics of bubbles at the wall was modeled with a mechanistic approach describing the forces applied to a bubble. The formulations of some forces (added mass, drag, etc.) were re-evaluated and yield satisfactory predictions of detachment diameters and sliding velocities at both low and high pressure. The Heat Flux Partitioning model was completed by an evaluation of the many required closure laws (waiting time, nucleation site density, etc.) against experimental measurements from the literature. The new model was then validated against wall temperature measurements and implemented in NEPTUNE_CFD. The prediction of the critical heat flux is the longer-term perspective of these developments: recent experimental observations describe the boiling crisis in terms of physical parameters included in the Heat Flux Partitioning model. A criterion based on the fraction of the wall surface covered by bubbles was tested with the previous NEPTUNE_CFD model and appears to show qualitatively consistent behavior. Finally, a tube configuration with mixing vanes similar to those found in PWR cores is considered. The NEPTUNE_CFD simulations show significant discrepancies with the experiment in the prediction of the void fraction at the center of the tube. Single-phase simulations reveal an overestimation of the liquid swirl, which may explain the excessive vapor accumulation in the boiling case.
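
    As a rough illustration of the heat flux partitioning idea mentioned above, the sketch below splits the wall heat flux into convective, quenching, and evaporative components in the classical RPI style. All closures and constants (bubble influence factor, quenching form, etc.) are simplified textbook placeholders; this is not the NEPTUNE_CFD implementation nor the new model developed in the thesis.

```python
import math

def wall_heat_flux_partition(t_wall, t_liq, rho_l, cp_l, k_l,
                             rho_v, h_fg, h_conv,
                             n_sites, d_dep, f_dep, t_wait):
    """Return (q_conv, q_quench, q_evap) in W/m^2 for one wall cell.

    Placeholder closure inputs:
    - n_sites : active nucleation site density [1/m^2]
    - d_dep   : bubble departure diameter [m]
    - f_dep   : bubble departure frequency [1/s]
    - t_wait  : waiting time between successive bubbles [s]
    """
    # Wall-area fraction influenced by bubbles (influence factor 4, capped at 1).
    a_bub = min(1.0, n_sites * 4.0 * math.pi * d_dep ** 2 / 4.0)
    a_liq = 1.0 - a_bub

    # 1) Single-phase convection over the liquid-covered fraction.
    q_conv = a_liq * h_conv * (t_wall - t_liq)

    # 2) Quenching: transient conduction into liquid rewetting the
    #    nucleation site after bubble departure.
    q_quench = (a_bub * 2.0 * f_dep
                * math.sqrt(t_wait * k_l * rho_l * cp_l / math.pi)
                * (t_wall - t_liq))

    # 3) Evaporation: latent heat carried away by departing bubbles.
    q_evap = n_sites * f_dep * (math.pi / 6.0) * d_dep ** 3 * rho_v * h_fg

    return q_conv, q_quench, q_evap
```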

    Crack Detection with Lamb Wave Wavenumber Analysis

    In this work, we present our study of Lamb wave crack detection using wavenumber analysis. The aim is to demonstrate the application of wavenumber analysis to 3D Lamb wave data to enable damage detection. The 3D wavefields (including the vx, vy, and vz components) in the time-space domain contain a wealth of information about the waves propagating in a damaged plate. For crack detection, three wavenumber analysis techniques are used: (i) the two-dimensional Fourier transform (2D-FT), which transforms the time-space wavefield into a frequency-wavenumber representation at the cost of losing spatial information; (ii) the short-space 2D-FT, which obtains frequency-wavenumber spectra at various spatial locations, resulting in a space-frequency-wavenumber representation; and (iii) local wavenumber analysis, which provides the distribution of effective wavenumbers at different locations. All of these concepts are demonstrated through a numerical simulation of an aluminum plate with a crack. The 3D elastodynamic finite integration technique (EFIT) was used to obtain the 3D wavefields, of which the vz (out-of-plane) component is compared with experimental measurements obtained from a scanning laser Doppler vibrometer (SLDV) for verification. The experimental and simulated results are found to be in close agreement. The application of wavenumber analysis to the 3D EFIT simulation data shows the effectiveness of the analysis for crack detection. Keywords: Lamb wave, crack detection, wavenumber analysis, EFIT modeling
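
    As a rough illustration of technique (i), the NumPy sketch below converts a time-space wavefield slice into its frequency-wavenumber representation; it is a generic illustration, not the authors' processing code, and both function names are hypothetical. The short-space 2D-FT of technique (ii) follows by applying the same transform to spatially windowed segments, as in the second helper.

```python
import numpy as np

def frequency_wavenumber(wavefield, dt, dx):
    """2D Fourier transform of a time-space wavefield v(t, x).

    wavefield : 2D array of shape (n_t, n_x), e.g. the out-of-plane velocity vz
    dt, dx    : time step [s] and spatial step [m] of the scan grid
    Returns (frequencies [Hz], wavenumbers [cycles/m], spectrum magnitude).
    """
    n_t, n_x = wavefield.shape
    # Forward FFT in time and space; shift zero frequency/wavenumber to the center.
    spectrum = np.fft.fftshift(np.fft.fft2(wavefield))
    freqs = np.fft.fftshift(np.fft.fftfreq(n_t, d=dt))
    wavenumbers = np.fft.fftshift(np.fft.fftfreq(n_x, d=dx))
    return freqs, wavenumbers, np.abs(spectrum)

def short_space_spectrum(wavefield, dt, dx, x_index, half_width):
    """Technique (ii): transform only a spatial window around x_index,
    retaining a notion of where along the scan line the spectrum applies."""
    window = wavefield[:, max(0, x_index - half_width): x_index + half_width]
    return frequency_wavenumber(window, dt, dx)
```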

    Setting the bar for set-valued attributes


    Some approaches to representing sound with colour and shape

    In recent times, much of the practice of musical notation and representation has begun a gradual migration away from the monochrome standard that has existed since the emergence of printed Western music in the 16th century, towards the full colour palette afforded by modern printers and computer screens. This move has expanded the possibilities available for the representation of information in the musical score. Such an expansion is arguably necessitated by the growth of new musical techniques favouring musical phenomena that were previously poorly captured by traditional Western musical notation. As a time-critical form of visualisation, the musical score is under a strong imperative to employ symbols that signify sonic events and the method of their execution with maximal efficiency. One important goal of such efficiency is “semantic soundness”: the degree to which graphical representations make inherent sense to the reader. This paper explores the implications of recent research into cross-modal colour-to-sound and shape-to-sound mappings for the application of colour and shape in musical scores. The paper also revisits Simon Emmerson’s Super-Score concept as a means to accommodate multiple synchronised forms of sonic representation (the spectrogram and spectral descriptors, for example) together with alternative notational approaches (gestural, action-based and graphical, for example) in a single digital document.

    Cellulose Nanocrystal Liquid Crystal Phases: Progress and Challenges in Characterization Using Rheology Coupled to Optics, Scattering, and Spectroscopy

    Cellulose nanocrystals (CNCs) self-assemble, and can be flow-assembled, into liquid crystalline order in aqueous suspension. The order ranges from the nano- to the macroscale, with contributions from individual crystals, their micron-scale clusters, and macroscopic assemblies. The resulting hierarchies are optically active materials that exhibit iridescence, reflectance, and light transmission. Although these assemblies hold promise for future renewable materials, the details of the structures on the different hierarchical levels, spanning from the nano- to the macroscale, are still not fully unraveled. Rheological characterization is essential for investigating flow properties; however, bulk measurements make it difficult to capture the various length scales during assembly of the suspensions, for example in simple shear flow. Rheometry is therefore combined with other characterization methods to allow direct analysis of structure development on the individual hierarchical levels. While optical techniques, scattering, and spectroscopy are often used to complement rheological observations, coupling them in situ to allow simultaneous observation is paramount to fully understanding the details of CNC assembly from liquid to solid. This Review provides an overview of achievements in such coupled analytics, as well as our current opinion on opportunities to unravel the structural distinctiveness of cellulose nanomaterials.

    Cross Dynamic Range And Cross Resolution Objective Image Quality Assessment With Applications

    In recent years, image and video signals have become an indispensable part of human life, and there has been an increasing demand for high-quality image and video products and services. To monitor, maintain, and enhance image and video quality, objective image and video quality assessment tools play crucial roles in a wide range of applications throughout the field of image and video processing, including acquisition, communication, interpolation, retrieval, and display. A number of objective image and video quality measures have been introduced in recent decades, such as mean squared error (MSE), peak signal-to-noise ratio (PSNR), and the structural similarity index (SSIM). However, they are not applicable when the dynamic range or spatial resolution of the images being compared differs from that of the corresponding reference images. In this thesis, we aim to tackle these two main problems in the field of image quality assessment. Tone-mapping operators (TMOs), which convert high dynamic range (HDR) images to low dynamic range (LDR) images, provide practically useful tools for the visualization of HDR images on standard LDR displays. Most TMOs have been designed in the absence of a well-established and subject-validated image quality assessment (IQA) model, without which fair comparisons and further improvement are difficult. We propose an objective quality assessment algorithm for tone-mapped images using HDR images as references by combining 1) a multi-scale signal fidelity measure based on a modified structural similarity (SSIM) index, and 2) a naturalness measure based on intensity statistics of natural images. To evaluate the proposed Tone-Mapped image Quality Index (TMQI), its performance in several applications and optimization problems is examined. Specifically, the main component of TMQI, known as structural fidelity, is modified and adopted to enhance the visualization of HDR medical images on standard displays. Moreover, a substantially different approach to designing TMOs is presented: instead of using a pre-defined systematic computational structure (such as an image transformation or contrast/edge enhancement) for tone mapping, we navigate the space of all LDR images, searching for the image that maximizes structural fidelity or TMQI. An increasing number of image interpolation and image super-resolution (SR) algorithms have recently been proposed to create images with higher spatial resolution from low-resolution (LR) images. However, the evaluation of such SR and interpolation algorithms is cumbersome: most existing image quality measures are not applicable because the LR and resulting high-resolution (HR) images have different spatial resolutions. We make one of the first attempts to develop objective quality assessment methods for comparing LR and HR images. Our method adopts a framework based on natural scene statistics (NSS), in which image quality degradation is gauged by the deviation of an image's statistical features from NSS models trained on high-quality natural images. In particular, we extract frequency energy falloff, dominant orientation, and spatial continuity statistics from natural images and build statistical models to describe them. These models are then used to measure the statistical naturalness of interpolated images. We carried out subjective tests to validate our approach, which demonstrates promising results. The performance of the proposed measure is further evaluated when applied to parameter tuning in image interpolation algorithms.
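
    The combination of the two TMQI components described above can be sketched as follows. The weighting form Q = a·S^α + (1 − a)·N^β and the default constants are the values commonly quoted for TMQI; the structural fidelity S and naturalness N are assumed to be precomputed scores in [0, 1]. This is an illustrative sketch, not the thesis implementation.

```python
def tone_mapped_quality_index(structural_fidelity, naturalness,
                              a=0.8012, alpha=0.3046, beta=0.7088):
    """Combine a multi-scale structural fidelity score S and a statistical
    naturalness score N (both in [0, 1]) into a single quality index.

    Follows the TMQI form Q = a * S**alpha + (1 - a) * N**beta; the default
    constants are the commonly quoted TMQI values and serve as placeholders.
    """
    s = max(0.0, min(1.0, structural_fidelity))
    n = max(0.0, min(1.0, naturalness))
    return a * s ** alpha + (1.0 - a) * n ** beta

# Example: high structural fidelity with moderate naturalness
# still yields a reasonably high overall score.
print(tone_mapped_quality_index(0.95, 0.60))
```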