
    Predicting Students' Interests in Energy-Related Careers: The Importance of High School Exposure

    The purpose of my project is to ask the following question: is the number of individuals who choose to address energy (supply or demand) or environmental degradation in their careers statistically associated with the amount of energy-related instruction they were exposed to in high school courses? In my project I connect energy (supply or demand) and environmental degradation with educational reform in an effort to minimize social hysteria, increase the number of young adults entering the energy field, and help the United States rise in the international rankings. An article written by Jon Palfreman supports my argument about this social hysteria, arguing that opposition to nuclear energy is based on “irrational fear fed by Hollywood-style fiction, the Green lobbies and the media”. Palfreman’s publication “A Tale of Two Fears” will assist me in showing that there is a nuclear hysteria. Furthermore, as a Nuclear Regulatory Commission Scholar I can honestly say that the technology and intelligence are present in the energy field; what it needs is popularity. I will be using data collected from a survey titled SaGE (Sustainability and Gender in Engineering). Working in RStudio, with the guidance of my mentor Geoff Potvin, I will look for statistically significant trends using binary logistic regression models that support or refute my hypothesis. I expect that the first problem will be a lack of immediate significance between exposure to energy discussions in high school and young adults choosing to address energy (supply/demand) in their careers; at that point I will control for certain factors (e.g., GPA) in the models. Lastly, I will use binary logistic regression in RStudio to test the significance of energy-related discussions, which will provide evidence for education reform to increase the occurrence of energy-related discussions in high school curricula.
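    The modeling step described above is a standard binary logistic regression with a control variable. Purely as an illustration, written in Python rather than the R/RStudio workflow the project describes, and with synthetic data and hypothetical variable names standing in for the actual SaGE survey items, such a model might look like:

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Synthetic stand-in for a SaGE-style survey extract; variable names are illustrative only.
        rng = np.random.default_rng(0)
        n = 500
        df = pd.DataFrame({
            "hs_energy_lectures": rng.integers(0, 6, size=n),  # count of energy lectures in high school
            "gpa": rng.normal(3.2, 0.4, size=n),                # control variable
        })
        # Outcome: 1 if the respondent intends to address energy supply/demand in their career, else 0.
        df["energy_career"] = rng.integers(0, 2, size=n)

        model = smf.logit("energy_career ~ hs_energy_lectures + gpa", data=df).fit()
        print(model.summary())
        print(np.exp(model.params))  # coefficients expressed as odds ratios

    A coefficient on hs_energy_lectures whose odds ratio remains significantly greater than 1 after controlling for GPA would be the kind of evidence the project is looking for.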

    Predicting Critical Heat Flux With Multiphase CFD: 4 Years in the Making

    Advancements in experimental techniques have brought new insights into microscale boiling phenomena and provide the basis for a new physical interpretation of flow boiling heat transfer. A new modeling framework in Computational Fluid Dynamics has been assembled at MIT that aims at introducing all the necessary mechanisms, explicitly tracking: (1) the size and dynamics of the bubbles on the surface; (2) the amount of microlayer and dry area under each bubble; (3) the amount of surface area influenced by sliding bubbles; (4) the quenching of the boiling surface following a bubble departure; and (5) the statistical bubble interaction on the surface. A preliminary assessment of the new framework is used to further extend the portability of the model through an improved formulation of the force balance models for bubble departure and lift-off. Starting from this improved representation at the wall, the work concentrates on the bubble dynamics and dry spot quantification on the heated surface, which govern the Critical Heat Flux (CHF) limit. A new proposition is brought forward, in which Critical Heat Flux is a natural limiting condition of the heat flux partitioning on the boiling surface. This first-principles-based CHF is qualitatively demonstrated and has the potential to deliver a radically new simulation technique to support the design of advanced heat transfer systems. United States. Department of Energy. Consortium for Advanced Simulation of Light Water Reactors
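    For context on what the heat flux partitioning refers to, the classic RPI-style decomposition (given here as general background, not necessarily the exact form used in the MIT framework) splits the wall heat flux among single-phase convection, quenching and evaporation,

        q''_wall = q''_convection + q''_quenching + q''_evaporation

    so reading CHF as a natural limiting condition of the partitioning means that, as the dry area under and between bubbles grows, these terms can no longer remove the imposed heat flux and the wall temperature escalates.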

    STRUCT: A Second-Generation URANS Approach for Effective Design of Advanced Systems

    This work presents the recently developed STRUCT hybrid turbulence model and assesses its potential to address the poor grid consistency and limited engineering applicability typical of hybrid models. Renouncing the ability to consistently bridge RANS, LES and DNS based on the computational grid size, we aim to address engineering design needs with a different mindset. We opt to leverage the robustness and computational efficiency of URANS in all nearly homogeneous flow regions, while extending it to locally resolve complex flow structures where the concept of Reynolds averaging is poorly applicable. The proposed approach is best characterized as a second-generation URANS closure, which triggers controlled resolution of turbulence inside selected flow regions. The resolution is controlled by a single-point parameter representing the turbulent timescale separation, which quantitatively identifies topological flow structures of interest. The STRUCT approach demonstrates LES-like capabilities on much coarser grids, and consistently increases the accuracy of the predictions over the baseline URANS as the grid is refined. The encouraging results show the potential to support effective design applications through resolution of complex flow structures while controlling the computational cost. The ultimate objective is to continue improving the robustness and computational efficiency while further assessing the accuracy and range of applicability.
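    The abstract does not spell out the closure equations; as a hedged sketch of the general idea only, not the published STRUCT formulation, the timescale-separation parameter can be read as a local ratio of the modeled turbulence timescale to the timescale of the resolved flow structures,

        r = (k / Δ) / τ_local

    where k/Δ is the modeled (averaged) turbulent timescale and τ_local is a single-point measure of the local resolved timescale (for example from the velocity-gradient magnitude, an assumption here); where r exceeds a threshold the model activates and locally reduces the modeled eddy viscosity so that the structures are resolved, while reverting to standard URANS elsewhere.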

    Multiphase turbulence mechanisms identification from consistent analysis of direct numerical simulation data

    Direct Numerical Simulation (DNS) serves as an irreplaceable tool to probe the complexities of multiphase flow and identify turbulent mechanisms that elude conventional experimental measurement techniques. The insights unlocked via its careful analysis can be used to guide the formulation and development of turbulence models used in multiphase computational fluid dynamics simulations of nuclear reactor applications. Here, we perform statistical analyses of DNS bubbly flow data generated by Bolotnov (Reτ = 400) and Lu–Tryggvason (Reτ = 150), examining single-point statistics of mean and turbulent liquid properties, turbulent kinetic energy budgets, and two-point correlations in space and time. Deformability of the bubble interface is shown to have a dramatic impact on the liquid turbulent stresses and energy budgets. A reduction in temporal and spatial correlations for the streamwise turbulent stress (uu) is also observed at wall-normal distances of y+ = 15, y/ÎŽ = 0.5, and y/ÎŽ = 1.0. These observations motivate the need for adaptation of length and time scales in bubble-induced turbulence models and serve as guidelines for future analyses of DNS bubbly flow data. Keywords: Budget Equations, Bubble-Induced Turbulence, DNS, M&C2017, Multiphase CFD. United States. Department of Energy. Naval Reactors Division (Rickover Fellowship Program in Nuclear Engineering)
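    Of the quantities listed above, the two-point correlation in space is straightforward to reproduce from DNS fields; the sketch below (hypothetical array layout, assuming a periodic homogeneous streamwise direction as in channel-flow DNS, with synthetic data standing in for the actual fields) computes the normalized streamwise autocorrelation of the fluctuating velocity at a fixed wall-normal location:

        import numpy as np

        def streamwise_two_point_correlation(u_prime):
            """Normalized two-point autocorrelation along axis 0 (the homogeneous,
            periodic streamwise direction) of a fluctuating-velocity plane u'(x, z)."""
            u = u_prime - u_prime.mean()
            var = (u ** 2).mean()
            n = u.shape[0]
            return np.array([(u * np.roll(u, -r, axis=0)).mean() / var
                             for r in range(n // 2)])

        # Example with a synthetic plane standing in for DNS data extracted at y+ = 15.
        rng = np.random.default_rng(1)
        u_plane = rng.standard_normal((256, 128))
        R_uu = streamwise_two_point_correlation(u_plane)
        print(R_uu[:5])  # equals 1.0 at zero separation and decays with increasing separation

    The separation over which this curve decays indicates the streamwise length scale; the reduction in correlations reported above corresponds to a faster decay, i.e., shorter effective length and time scales in the bubble-induced turbulence.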

    CFD ANALYSES OF THE TN-24P PWR SPENT FUEL STORAGE CASK

    Dry storage casks are used to store spent nuclear fuel after removal from the reactor spent fuel pool. Even prior to the Fukushima earthquake of March 2011, dry storage of spent fuel was receiving increased attention as many reactor spent fuel pools approach their capacity. Many different cask designs are in use; one representative design is the TN-24P spent fuel cask, a nonventilated steel cask with a shielded exterior shell and lid. The cask is typically filled with an inert gas such as helium, argon or nitrogen. In this paper, Computational Fluid Dynamics (CFD) calculation results for the thermal performance of the TN-24P cask using the commercial CFD software STAR-CCM+ are presented. Initial calculations employ a common approach of treating the fuel assemblies as conducting porous media with calibrated volume-averaged properties, and comparison to existing measured temperature data shows good agreement. One of the fuel assemblies is then replaced with a more accurate representation that includes the full geometric detail of the fuel rods, guide tubes, spacer grids and end fittings (flow nozzles); the results are consistent with the initial analysis, but without the assumptions inherent in the porous media approach. This hybrid modeling approach also permits the direct determination of important results, such as the precise location of peak fuel cladding temperatures (PCTs), which is not possible using the more traditional porous media approach.
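    For readers unfamiliar with the porous media treatment mentioned above (this is the common CFD form, not the paper's specific calibration): the assembly region is assigned volume-averaged effective properties, and the flow resistance is typically imposed through a Darcy-Forchheimer style momentum sink,

        S_i = -(mu / alpha) v_i - (1/2) C_2 rho |v| v_i

    where the permeability alpha and the inertial coefficient C_2, together with an effective thermal conductivity for the homogenized fuel region, are calibrated so that the lumped model reproduces measured assembly temperature data.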

    Acrylamide and glycidamide hemoglobin adducts and epithelial ovarian cancer: a nested case-control study in nonsmoking postmenopausal women from the EPIC cohort

    Background: Acrylamide was classified as 'probably carcinogenic to humans (Group 2A)' by the International Agency for Research on Cancer. Epithelial ovarian cancer (EOC) is the fourth leading cause of cancer mortality in women. Five epidemiological studies have evaluated the association between EOC risk and dietary acrylamide intake assessed using food frequency questionnaires, and one nested case-control study evaluated hemoglobin adducts of acrylamide (HbAA) and its metabolite glycidamide (HbGA) in relation to EOC risk; the results of these studies were inconsistent. Methods: A nested case-control study in nonsmoking postmenopausal women (334 cases, 417 controls) was conducted within the European Prospective Investigation into Cancer and Nutrition (EPIC) cohort. Unconditional logistic regression models were used to estimate odds ratios (OR) and 95% confidence intervals (CI) for the associations of HbAA, HbGA, HbAA+HbGA, and HbGA/HbAA with EOC and invasive serous EOC risk. Results: No overall associations were observed between biomarkers of acrylamide exposure analyzed in quintiles and EOC risk; however, positive associations were observed for some middle quintiles of HbGA and HbAA+HbGA. Elevated but not statistically significant ORs for serous EOC were observed for HbGA and HbAA+HbGA (OR Q5 vs. Q1, 1.91; 95% CI, 0.96-3.81 and OR Q5 vs. Q1, 1.90; 95% CI, 0.94-3.83, respectively); however, no linear dose-response trends were observed. Conclusion: This EPIC nested case-control study failed to observe a clear association between biomarkers of acrylamide exposure and the risk of EOC or invasive serous EOC. Impact: It is unlikely that dietary acrylamide exposure increases ovarian cancer risk; however, additional studies with larger sample sizes should be performed to exclude any possible association with EOC risk.
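    As a brief note on how the reported quintile estimates arise from the logistic models: with exposure coded by quintile indicators, the odds ratio for the top versus bottom quintile is OR = exp(b_Q5) and its 95% confidence interval is exp(b_Q5 ± 1.96·SE(b_Q5)); intervals that span 1, as for the serous EOC estimates above (0.96-3.81 and 0.94-3.83), are the sense in which those elevated ORs are not statistically significant.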

    Circulating Osteopontin and Prediction of Hepatocellular Carcinoma Development in a Large European Population.

    We previously identified osteopontin (OPN) as a promising marker for the early detection of hepatocellular carcinoma (HCC). In this study, we investigated the association between prediagnostic circulating OPN levels and HCC incidence in a large population-based cohort. A nested case-control study was conducted within the European Prospective Investigation into Cancer and Nutrition (EPIC) cohort. During a mean follow-up of 4.8 years, 100 HCC cases were identified. Each case was matched to two controls and OPN levels were measured in baseline plasma samples. Viral hepatitis, liver function, and α-fetoprotein (AFP) tests were also conducted. Conditional logistic regression models were used to calculate multivariable odds ratios (OR) and 95% confidence intervals (95% CI) for OPN levels in relation to HCC. Receiver operating characteristic (ROC) curves were constructed to determine the discriminatory accuracy of OPN alone or in combination with other liver biomarkers in the prediction of HCC. OPN levels were positively associated with HCC risk (per 10% increment, multivariable OR = 1.30; 95% CI, 1.14-1.48). The association was stronger among cases diagnosed within 2 years of follow-up. Adding liver function tests to OPN improved the discriminatory performance for subjects who developed HCC (AUC = 0.86). For cases diagnosed within 2 years, the combination of OPN and AFP was best able to predict HCC risk (AUC = 0.88). The best predictive model for HCC in this low-risk population is OPN in combination with liver function tests. Within 2 years of diagnosis, the combination of OPN and AFP best predicted HCC development, suggesting that measuring OPN and AFP could identify high-risk groups independently of a liver disease diagnosis. Cancer Prev Res; 9(9); 758-65. ©2016 AACR. This work was supported by NIH R01 CA120719 to LB and by the French National Cancer Institute (Institut National du Cancer; INCA) grant number 2009-139 to MJ. The coordination of EPIC is financially supported by the European Commission (DG-SANCO) and the International Agency for Research on Cancer. The national cohorts are supported by the Danish Cancer Society (Denmark); Ligue Contre le Cancer; Institut Gustave Roussy; Mutuelle GĂ©nĂ©rale de l’Education Nationale; and Institut National de la SantĂ© et de la Recherche MĂ©dicale (INSERM) (France); Deutsche Krebshilfe, Deutsches Krebsforschungszentrum (DKFZ); and Federal Ministry of Education and Research (Germany); Hellenic Health Foundation (Greece); Italian Association for Research on Cancer (AIRC); National Research Council; and AIRE-ONLUS Ragusa, AVIS Ragusa, Sicilian Government (Italy); Dutch Ministry of Public Health, Welfare and Sports (VWS); Netherlands Cancer Registry (NKR); LK Research Funds; Dutch Prevention Funds; Dutch ZON (Zorg Onderzoek Nederland); World Cancer Research Fund (WCRF); and Statistics Netherlands (the Netherlands); European Research Council (ERC) (grant number ERC-2009-AdG 232997) and Nordforsk; and Nordic Center of Excellence Programme on Food, Nutrition and Health (Norway); Health Research Fund (FIS); Regional Governments of AndalucĂ­a, Asturias, Basque Country, Murcia (No. 6236) and Navarra; and ISCIII RETIC (RD06/0020) (Spain); Swedish Cancer Society; Swedish Scientific Council; and Regional Government of SkĂ„ne and VĂ€sterbotten (Sweden); Cancer Research UK; Medical Research Council; Stroke Association; British Heart Foundation; Department of Health; Food Standards Agency; and Wellcome Trust (UK). Reagents for the hepatitis infection determinations were kindly provided by Abbott Diagnostics Division, Lyon, France. This is the author accepted manuscript. The final version is available from the American Association for Cancer Research via http://dx.doi.org/10.1158/1940-6207.CAPR-15-043
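    The discrimination analysis described above combines biomarkers in a risk model and summarizes performance with the area under the ROC curve (AUC). A minimal sketch of that workflow, using synthetic stand-in data, ordinary rather than conditional logistic regression, and illustrative variable names, is:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        n = 300
        # Synthetic stand-ins for baseline biomarker levels (illustrative only, not EPIC data).
        opn = rng.lognormal(mean=3.0, sigma=0.4, size=n)
        afp = rng.lognormal(mean=1.0, sigma=0.6, size=n)
        y = rng.integers(0, 2, size=n)  # 1 = developed HCC, 0 = matched control

        X = np.column_stack([np.log(opn), np.log(afp)])  # liver function scores could be added here
        clf = LogisticRegression().fit(X, y)
        risk = clf.predict_proba(X)[:, 1]
        print("In-sample AUC:", round(roc_auc_score(y, risk), 3))

    In the study itself the models are conditional on the matched case-control sets and AUCs are reported for the combinations of OPN, AFP and liver function tests described above; this sketch only shows the mechanics of fitting a combined score and computing its AUC.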

    Assessing the Applicability of the Structure-Based Turbulence Resolution Approach to Nuclear Safety-Related Issues

    The accuracy of computational fluid dynamics (CFD) predictions plays a fundamental role in supporting the operation of the current nuclear reactor fleet, and even more importantly the licensing of advanced high-efficiency reactor concepts, where local temperature oscillations driven by thermal striping, cycling and stratification can limit the structural performance of vessels and components. The complexity of the geometrical configurations, coupled to the long operational transients, inhibits the adoption of large eddy simulation (LES) methods, mandating the acceptance of the more efficient Reynolds-averaged Navier-Stokes (RANS)-based models, even though they are unable to provide a complete physical description of the flow in regions dominated by complex unsteady coherent structures. A new strategy has been proposed and demonstrated at Massachusetts Institute of Technology (MIT) toward the enhancement of unsteady Reynolds-averaged Navier-Stokes (URANS) predictions, using local resolution of coherent turbulence, to provide higher fidelity modeling in support of safety-related issues. In this paper, a comprehensive assessment of the recently proposed Structure-based (STRUCT-Δ) turbulence model is presented, starting from fundamental validation of the model capabilities and later focusing on a representative safety-relevant application, i.e., thermal mixing in a T-junction. Solutions of STRUCT-Δ, the widely used Realizable k−Δ model (RKE) and Large Eddy Simulation with Wall-Adapting Local Eddy-viscosity subgrid scale closure (LES-WALE) are compared against the experimental data. Both the velocity and temperature fields predicted by the STRUCT-Δ model are in close agreement with the high-fidelity data from the experiments and reference LES solutions, across all validation cases. The approach demonstrates the potential to address the accuracy requirements for application to nuclear safety-related issues, by resolving the turbulent flow structures, while the computational efficiency provides the ability to perform consistent uncertainty quantification.