33 research outputs found

    Assessment of Models of Chemically Reacting Granular Flows

    A report presents an assessment of a general mathematical model of dense, chemically reacting granular flows like those in fluidized beds used to pyrolyze biomass. The model incorporates submodels that have been described in several NASA Tech Briefs articles, including "Generalized Mathematical Model of Pyrolysis of Biomass" (NPO-20068), NASA Tech Briefs, Vol. 22, No. 2 (February 1998), page 60; "Model of Pyrolysis of Biomass in a Fluidized-Bed Reactor" (NPO-20708), NASA Tech Briefs, Vol. 25, No. 6 (June 2001), page 59; and "Model of Fluidized Bed Containing Reacting Solids and Gases" (NPO-30163), which appears elsewhere in this issue. The model was used to perform computational simulations of a test case of pyrolysis in a reactor containing sand and biomass (i.e., plant material) particles through which a flow of hot nitrogen passes. The boundary conditions and other parameters of the test case were selected to enable assessment of the validity of some assumptions incorporated into submodels of granular stresses, granular thermal conductivity, and heating of particles. The results of the simulation are interpreted as affirming the assumptions in some respects and indicating the need for refinements of the assumptions and the affected submodels in other respects.

    Numerical Study of Pyrolysis of Biomass in Fluidized Beds

    A report presents a numerical-simulation study of pyrolysis of biomass in fluidized-bed reactors, performed by use of the mathematical model described in "Model of Fluidized Bed Containing Reacting Solids and Gases" (NPO-30163), which appears elsewhere in this issue of NASA Tech Briefs. The purpose of the study was to investigate the effect of various operating conditions on the efficiency of production of condensable tar from biomass. The numerical results indicate that for a fixed particle size, the fluidizing-gas temperature is the foremost parameter that affects the tar yield. For the range of fluidizing-gas temperatures investigated, and under the assumption that the pyrolysis rate exceeds the feed rate, the optimum steady-state tar collection was found to occur at 750 K. In cases in which the assumption was not valid, the optimum temperature for tar collection was found to be only slightly higher. Scaling up the reactor was found to exert a small negative effect on tar collection at the optimal operating temperature. It was also found that slightly better scaling is obtained by use of shallower fluidized beds with greater fluidization velocities.
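    The existence of an optimum tar-collection temperature can be illustrated with a toy competing-kinetics model: primary pyrolysis produces tar, while secondary cracking destroys it at higher temperatures. The Python sketch below uses entirely hypothetical Arrhenius parameters and residence time (it is not the model from the report) and only shows why an interior optimum arises.

```python
import numpy as np

R = 8.314  # J/(mol K), universal gas constant

def arrhenius(A, E, T):
    """Arrhenius rate constant for pre-factor A and activation energy E."""
    return A * np.exp(-E / (R * T))

def tar_yield(T, tau=1.0):
    """Fraction of biomass converted times the tar surviving cracking.

    Rate parameters are hypothetical round numbers, chosen only to
    place the optimum in a plausible temperature range.
    """
    k1 = arrhenius(1e8, 100e3, T)    # primary pyrolysis (hypothetical)
    k2 = arrhenius(1e10, 150e3, T)   # secondary tar cracking (hypothetical)
    return (1.0 - np.exp(-k1 * tau)) * np.exp(-k2 * tau)

# Scan temperatures: yield rises with conversion, then falls as
# cracking takes over, giving an interior optimum.
T = np.linspace(500.0, 1000.0, 501)
T_opt = T[np.argmax(tar_yield(T))]
```

    Because the cracking reaction has the higher activation energy, it wins at high temperature, which is the qualitative mechanism behind a finite optimum such as the 750 K reported above.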

    Sensitivity Analysis of Coupled Criticality Calculations

    Perturbation-theory-based sensitivity analysis is a vital part of today's nuclear reactor design. This paper presents an extension of standard techniques to examine coupled criticality problems with mutual feedback between neutronics and an augmenting system (for example, thermal-hydraulics). The proposed procedure uses a neutronic and an augmenting adjoint function to efficiently calculate the first-order change in responses of interest due to variations of the parameters describing the coupled problem. The effect of the perturbations is considered in two different ways in our study: either a change is allowed in the power level while maintaining criticality (power perturbation), or a change is allowed in the eigenvalue while the power is constrained (eigenvalue perturbation). The calculated response can be the change in the power level, the reactivity worth of the perturbation, or the change in any functional of the flux, the augmenting dependent variables, and the input parameters. To obtain power- and criticality-constrained sensitivities, power- and k-reset procedures can be applied, yielding identical results. Both the theoretical background and an application to a one-dimensional slab problem are presented, along with an iterative procedure to compute the necessary adjoint functions using the neutronics and augmenting codes separately, thus eliminating the need to develop new programs to solve the coupled adjoint problem.
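    In the simplest uncoupled case, the first-order machinery described above reduces to standard adjoint-based eigenvalue perturbation theory: for A x = λx and the adjoint (left) eigenvector y, a perturbation dA changes the eigenvalue by dλ ≈ yᵀ dA x / (yᵀ x). The Python sketch below demonstrates this identity on a small generic matrix (not an actual neutronics operator) and checks it against direct recomputation.

```python
import numpy as np

def eigen_sensitivity(A, dA):
    """First-order change of the dominant eigenvalue of A under dA.

    Uses the adjoint identity d_lam ~= y^T dA x / (y^T x), where x and
    y are the right and left eigenvectors of the unperturbed matrix --
    no re-solve of the perturbed problem is needed.
    """
    lam, V = np.linalg.eig(A)
    i = np.argmax(lam.real)                 # fundamental (largest) mode
    x = V[:, i]
    mu, W = np.linalg.eig(A.T)              # left eigenvectors of A
    j = np.argmin(np.abs(mu - lam[i]))      # match the same eigenvalue
    y = W[:, j]
    return (y @ dA @ x) / (y @ x)

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
dA = 1e-6 * np.array([[1.0, 0.0],
                      [0.0, 2.0]])          # small parameter perturbation

pred = eigen_sensitivity(A, dA).real
exact = (np.linalg.eigvals(A + dA).max() - np.linalg.eigvals(A).max()).real
```

    For a perturbation of size 1e-6 the first-order prediction agrees with the direct difference to second-order accuracy, which is the efficiency argument behind the adjoint procedure in the paper.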

    A probabilistic deep learning model of inter-fraction anatomical variations in radiotherapy

    In radiotherapy, the internal movement of organs between treatment sessions causes errors in the final radiation dose delivery. Motion models can be used to simulate motion patterns and assess anatomical robustness before delivery. Traditionally, such models are based on principal component analysis (PCA) and are either patient-specific (requiring several scans per patient) or population-based, applying the same deformations to all patients. We present a hybrid approach which, based on population data, predicts patient-specific inter-fraction variations for an individual patient. We propose a deep-learning probabilistic framework that generates deformation vector fields (DVFs) warping a patient's planning computed tomography (CT) into possible patient-specific anatomies. This daily anatomy model (DAM) uses a few random variables capturing groups of correlated movements. Given a new planning CT, DAM estimates the joint distribution over the variables, with each sample from the distribution corresponding to a different deformation. We train our model using a dataset of 312 CT pairs from 38 prostate cancer patients. For 2 additional patients (22 CTs), we compute the contour overlap between real and generated images, and compare the sampled and ground-truth distributions of volume and center-of-mass changes. With a Dice score of 0.86 and a distance between prostate contours of 1.09 mm, DAM matches and improves upon PCA-based models. The distribution overlap further indicates that DAM's sampled movements match the range and frequency of clinically observed daily changes on repeat CTs. Conditioned only on a planning CT and contours of a new patient, without any pre-processing, DAM can accurately predict CTs seen during subsequent treatment sessions, which can be used for anatomically robust treatment planning and robustness evaluation against inter-fraction anatomical changes.
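    For context, the PCA-based baseline that DAM is compared against can be sketched in a few lines of Python: observed deformation vector fields are reduced to a small number of principal motion modes, and new plausible anatomies are sampled by drawing mode weights from a normal distribution. All shapes and data below are synthetic placeholders, not the model or data from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for observed DVFs: 12 fractions, each flattened
# to 300 displacement values (real DVFs are 3D vector fields per voxel).
n_obs, n_vox = 12, 300
dvfs = rng.normal(size=(n_obs, n_vox))

# PCA via SVD of the mean-centred observations.
mean = dvfs.mean(axis=0)
U, S, Vt = np.linalg.svd(dvfs - mean, full_matrices=False)
k = 3                                # keep 3 dominant motion modes
modes = Vt[:k]                       # principal deformation modes
sigma = S[:k] / np.sqrt(n_obs - 1)   # per-mode standard deviations

def sample_dvf():
    """Draw one plausible DVF as the mean plus random mode weights."""
    w = rng.normal(size=k) * sigma
    return mean + w @ modes

new_dvf = sample_dvf()
```

    A patient-specific PCA model fits these modes from repeat scans of one patient; the hybrid approach in the abstract instead learns to predict such patient-specific variability from population data.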

    Robustness analysis of CTV and OAR dose in clinical PBS-PT of neuro-oncological tumors:prescription-dose calibration and inter-patient variation with the Dutch proton robustness evaluation protocol

    Objective: The Dutch proton robustness evaluation protocol prescribes the dose of the clinical target volume (CTV) to the voxel-wise minimum (VWmin) dose of 28 scenarios. This results in a consistent but conservative near-minimum CTV dose (D98%,CTV). In this study, we analyzed (i) the correlation between VWmin/voxel-wise maximum (VWmax) metrics and the actually delivered dose to the CTV and organs at risk (OARs) under the impact of treatment errors, and (ii) the performance of the protocol before and after its calibration with adequate prescription-dose levels. Approach: Twenty-one neuro-oncological patients were included. Polynomial chaos expansion was applied to perform a probabilistic robustness evaluation using 100,000 complete fractionated treatments per patient. Patient-specific scenario distributions of clinically relevant dosimetric parameters for the CTV and OARs were determined and compared to clinical VWmin and VWmax dose metrics for different scenario subsets used in the robustness evaluation protocol. Main results: The inclusion of more geometrical scenarios leads to a significant increase in the conservatism of the protocol in terms of clinical VWmin and VWmax values for the CTV and OARs. The protocol could be calibrated using VWmin dose evaluation levels of 93.0%-92.3%, depending on the scenario subset selected. Despite this calibration of the protocol, robustness recipes for proton therapy showed remaining differences and an increased sensitivity to geometrical random errors compared to photon-based margin recipes. Significance: The Dutch proton robustness evaluation protocol, combined with the photon-based margin recipe, could be calibrated with a VWmin evaluation dose level of 92.5%. However, it shows limitations in predicting robustness in dose, especially for the near-maximum dose metrics to OARs. Consistent robustness recipes could improve proton treatment planning by calibrating out residual differences from photon-based assumptions.
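    The voxel-wise evaluation itself is simple to express: take the per-voxel minimum (or maximum) dose across scenarios, then report the near-minimum dose D98%, the dose received by at least 98% of the volume. The Python sketch below uses synthetic dose values (in units of the prescription dose) and assumes the 28 scenarios of the protocol; it is an illustration, not the clinical evaluation software.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic per-scenario CTV doses: 28 error scenarios, 1000 voxels,
# normalised so 1.0 is the prescription dose.
n_scenarios, n_voxels = 28, 1000
dose = 1.0 + 0.02 * rng.normal(size=(n_scenarios, n_voxels))

vw_min = dose.min(axis=0)   # voxel-wise minimum over the 28 scenarios
vw_max = dose.max(axis=0)   # voxel-wise maximum (used for OAR checks)

def d98(dose_per_voxel):
    """Near-minimum dose: 98% of voxels receive at least this dose."""
    return np.percentile(dose_per_voxel, 2)

d98_vwmin = d98(vw_min)     # the VWmin D98% metric the protocol checks
```

    Calibrating the protocol then amounts to choosing the evaluation level (e.g. 92.5% of prescription) that this VWmin D98% must exceed.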

    A hybrid multi-particle approach to range assessment-based treatment verification in particle therapy

    Particle therapy (PT) used for cancer treatment can spare healthy tissue and reduce treatment toxicity. However, full exploitation of the dosimetric advantages of PT is not yet possible due to range uncertainties, warranting the development of range-monitoring techniques. This study proposes a novel range-monitoring technique introducing the as yet unexplored concept of simultaneous detection and imaging of fast neutrons and prompt gamma rays produced in beam-tissue interactions. A quasi-monolithic organic detector array is proposed, and its feasibility for detecting range shifts in the context of proton therapy is explored through Monte Carlo simulations of realistic patient models and detector resolution effects. The results indicate that range shifts of 1 mm can be detected at relatively low proton intensities (22.30(13) × 10^7 protons/spot) when spatial information obtained through imaging of both particle species is used simultaneously. This study lays the foundation for multi-particle detection and imaging systems in the context of range verification in PT.
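    A common way to quantify a range shift, consistent with the goal above, is to track the displacement of the distal 50% falloff position between a reference and a measured depth profile. The Python sketch below uses synthetic sigmoid profiles (not simulation output from the study) and assumes the falloff lies inside the scanned depth range.

```python
import numpy as np

def falloff_position(depth, signal):
    """Depth at which the signal drops to 50% of its maximum.

    Linearly interpolates between the last sample above and the first
    sample below the half-maximum level.
    """
    half = 0.5 * signal.max()
    i = np.argmax(signal < half)           # first sample below 50%
    d0, d1 = depth[i - 1], depth[i]
    s0, s1 = signal[i - 1], signal[i]
    return d0 + (s0 - half) / (s0 - s1) * (d1 - d0)

depth = np.linspace(0.0, 200.0, 2001)      # mm, 0.1 mm sampling
ref = 1.0 / (1.0 + np.exp((depth - 150.0) / 2.0))   # reference falloff
meas = 1.0 / (1.0 + np.exp((depth - 151.0) / 2.0))  # 1 mm overshoot

shift = falloff_position(depth, meas) - falloff_position(depth, ref)
```

    With real detector data the profiles are noisy and the achievable precision depends on counting statistics, which is why the proton intensity per spot matters in the results above.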

    1D discontinuous Galerkin method code for solving the Stefan problem with the linearized enthalpy approach

    This repository features a Fortran program (Stevan_DG.f90) for solving the 1D Stefan problem with the linearised enthalpy approach and a discontinuous Galerkin framework, published under the GNU General Public License. The repository includes all necessary modules (gauss quad.f90 for performing Gaussian-quadrature numerical integration and lapack_ops.f90 for performing operations on banded matrices), the Makefile, and the input files (to be modified according to the problem specifications). Please cite the following paper when using this code: Kaaks, B.J., Rohde, M., Kloosterman, J.L., Lathouwers, D., 2023. An energy-conservative DG-FEM approach for solid-liquid phase change. Numerical Heat Transfer, Part B: Fundamentals. https://doi.org/10.1080/10407790.2023.2211231.
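    For readers unfamiliar with the enthalpy approach, the idea can be illustrated with a much-simplified explicit finite-difference scheme in Python (not the DG-FEM formulation of the Fortran code above, and with unit material properties): enthalpy H evolves by heat conduction, and temperature T is recovered from H through the latent-heat plateau at the melting point.

```python
import numpy as np

# Unit material properties (hypothetical, for illustration only).
nx, L = 100, 1.0                 # grid cells, domain length
k, rho, c = 1.0, 1.0, 1.0        # conductivity, density, heat capacity
latent, T_melt = 1.0, 0.0        # latent heat, melting temperature
dx = L / nx
dt = 0.2 * dx**2                 # within the explicit stability limit

def temperature(H):
    """Invert the enthalpy-temperature relation (unit properties):
    solid for H < 0, melting plateau for 0 <= H <= latent, liquid above."""
    T = np.where(H < 0.0, H / c, 0.0)
    return np.where(H > latent, (H - latent) / c, T)

H = np.full(nx, -1.0 * c)        # initially solid at T = -1
for _ in range(2000):
    T = temperature(H)
    # Ghost cells: hot wall (T = 1) at x = 0, zero gradient at x = L.
    T_ext = np.concatenate(([1.0], T, [T[-1]]))
    H += dt * k / dx**2 * np.diff(T_ext, 2)   # rho dH/dt = k d2T/dx2

# Position of the first cell that is not yet fully melted.
melt_front = np.argmax(temperature(H) <= T_melt) * dx
```

    Because temperature stays flat while enthalpy crosses the latent-heat interval, the melt front is captured without explicitly tracking the interface, which is the main attraction of the enthalpy formulation.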

    Numerical dataset belonging to: 'A numerical benchmark for modelling phase change in molten salt reactors'

    This folder contains the numerical data belonging to the paper 'A numerical benchmark for modelling phase change in molten salt reactors' (Mateusz Pater, Bouke Kaaks, Bent Lauritzen, Danny Lathouwers), published in Annals of Nuclear Energy in 2023 (https://doi.org/10.1016/j.anucene.2023.110093). Please cite this paper when publishing work referring to or including (part of) this data set. The data set includes all raw simulation data, including the extracted melt fronts and the temperature and velocity profiles used for generating the figures in the paper. The data generated with OpenFOAM and Star-CCM+ are in .csv format, which can be opened with Excel, a text editor, or another suitable data reader. The data generated by DGFlows are in .msh format for the raw simulation output and .pos format for the extracted profiles, both of which may be opened with GMSH (the open-source mesh generator and post-processing tool for FEM simulations) or a text editor.

    A numerical benchmark for modelling phase change in molten salt reactors

    The design of a molten salt reactor is largely based on CFD simulations. Phase change plays an important role in the safety of the reactor, but numerical modelling of phase change is particularly challenging. Knowledge of the margin of error of CFD simulations involving phase change is therefore very important, yet relevant experimental validation data are lacking. For this reason, a numerical benchmark designed after the freeze valve is proposed. The benchmark consists of five stages, with more complexity added at each step. The stepwise addition of complexity allows potential sources of discrepancy to be pinpointed. Results were obtained with three different codes: STAR-CCM+, OpenFOAM, and DGFlows. The results were found to be largely consistent between the codes; however, the addition of conjugate heat transfer introduced some discrepancies. These results indicate that careful consideration is needed when coupling conjugate heat transfer solvers with solid-liquid phase change models.