
    Verification and Validation: High Charge and Energy (HZE) Transport Codes and Future Development

In the present paper, we give the formalism for further developing a fully three-dimensional HZETRN code using marching procedures, and we also discuss the development of a new Green's function code. The final Green's function code is capable of validation not only in the space environment but also in ground-based laboratories with directed beams of ions of specific energy, characterized with detailed diagnostic particle spectrometer devices. Special emphasis is given to verification of the computational procedures and validation of the resultant computational model using laboratory and spaceflight measurements. Due to historical requirements, two parallel development paths for computational model implementation, using marching procedures and Green's function techniques, are followed. A new version of the HZETRN code capable of simulating HZE ions with either laboratory or space boundary conditions is under development. Validation of computational models at this time is particularly important for President Bush's initiative to develop infrastructure for human exploration, with a first target demonstration of the Crew Exploration Vehicle (CEV) in low Earth orbit in 2008.
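For readers unfamiliar with marching procedures, the sketch below shows the basic idea under strong simplifying assumptions: a one-dimensional straight-ahead approximation in which a flux is attenuated by a constant macroscopic cross section while a source term feeds in secondaries. The function names and parameter values are illustrative only and are not taken from HZETRN.

    import numpy as np

    def march_flux(phi0, sigma, source, dx, n_steps):
        """Forward-march a 1D flux through a slab:
        d(phi)/dx = -sigma * phi + source(x)."""
        phi = np.empty(n_steps + 1)
        phi[0] = phi0
        for j in range(n_steps):
            # explicit Euler step: attenuation plus downstream production
            phi[j + 1] = phi[j] + dx * (-sigma * phi[j] + source(j * dx))
        return phi

    # example: exponential attenuation with a small constant secondary source
    flux = march_flux(phi0=1.0, sigma=0.5, source=lambda x: 0.05, dx=0.01, n_steps=1000)

The essential property of such schemes is that the solution at each depth step depends only on upstream values, so the flux can be propagated through a shield in a single sweep.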

    User Perspective and Analysis of the Continuous-Energy Sensitivity Methods in SCALE 6.2 using TSUNAMI-3D

The Tools for Sensitivity and UNcertainty Analysis Methodology Implementation (TSUNAMI) suite within the SCALE code system makes use of eigenvalue sensitivity coefficients to enable several capabilities, such as quantifying the data-induced uncertainty in calculated eigenvalues and assessing the similarity between different critical systems. The TSUNAMI-3D code is one tool within the TSUNAMI suite used to calculate eigenvalue sensitivity coefficients in three-dimensional models. The SCALE 6.1 code system includes only the multigroup (MG) mode for three-dimensional sensitivity analyses; however, the upcoming release of SCALE 6.2 will feature the first implementation of continuous-energy (CE) sensitivity methods in SCALE. For MG calculations, TSUNAMI-3D provides resonance self-shielding of cross-section data, calculation of the implicit effects of resonance self-shielding calculations, calculation of forward and adjoint Monte Carlo neutron transport solutions, and calculation of sensitivity coefficients. In CE-TSUNAMI, the sensitivity coefficients are computed in a single forward Monte Carlo neutron transport calculation. The two different approaches for calculating eigenvalue sensitivity coefficients in CE-TSUNAMI are the Iterated Fission Probability (IFP) and the Contributon-Linked eigenvalue sensitivity/Uncertainty estimation via Tracklength importance CHaracterization (CLUTCH) methods. Compared with IFP, CLUTCH has a significantly lower memory footprint, is faster, and has been implemented with parallel capability; however, CLUTCH requires additional input parameters, which demand additional user expertise. This work summarizes the results of TSUNAMI-3D calculations using both MG and CE CLUTCH methods for various systems in the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE) using the SCALE code package developed at Oak Ridge National Laboratory. The critical benchmark experiments cover both the KENO V.a and KENO-VI codes using ENDF/B-VII.0 data for the different evaluations. The broad range of system types expands the experience base with the CE-TSUNAMI CLUTCH method by identifying best practices for using the code, and provides generic user guidance for utilizing this new capability. Additionally, the study aims to demonstrate the accuracy and usefulness of the CE-TSUNAMI CLUTCH method, especially for systems for which MG methods perform poorly.
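As background, an eigenvalue sensitivity coefficient relates a relative change in a cross section to the relative change in k-eff it produces. The sketch below estimates such a coefficient by brute-force direct perturbation, i.e. two extra eigenvalue calculations with the cross section shifted up and down; this is the simple (and expensive) alternative that adjoint-based estimators like IFP and CLUTCH are designed to avoid. All numbers are purely illustrative.

    def sensitivity_coefficient(k_eff, k_plus, k_minus, rel_perturbation):
        """Central-difference estimate of S = (sigma/k) * (dk/dsigma)
        from two eigenvalue runs with the cross section perturbed by
        +/- rel_perturbation (e.g. 0.01 for a 1% perturbation)."""
        return ((k_plus - k_minus) / k_eff) / (2.0 * rel_perturbation)

    # example: a 1% cross-section perturbation moving k-eff by +/- 30 pcm
    S = sensitivity_coefficient(k_eff=1.00000, k_plus=1.00030,
                                k_minus=0.99970, rel_perturbation=0.01)
    print(S)  # 0.03

Direct perturbation requires two full transport solutions per nuclide-reaction pair, which is why single-run methods such as CLUTCH are attractive for models with many sensitivity parameters.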

    JIDOKA. Integration of Human and AI within Industry 4.0 Cyber Physical Manufacturing Systems

This book is about JIDOKA, a Japanese management technique coined by Toyota that consists of imbuing machines with human intelligence. The purpose of this compilation of research articles is to show industrial leaders innovative cases of digitization of value creation processes that have allowed them to improve their performance in a sustainable way. This book shows several applications of JIDOKA in the quest towards an integration of human and AI within Industry 4.0 Cyber Physical Manufacturing Systems. From the use of artificial intelligence to advanced mathematical models or quantum computing, all paths are valid to advance in the process of human–machine integration.

    Optimization of Recombinant Adeno-Associated Virus (AAV) Vector Production in Saccharomyces cerevisiae

Recombinant adeno-associated viral vectors (rAAV) are emerging drugs for gene therapy applications. Their non-pathogenic status, low inflammatory potential, availability of viral serotypes with different tissue tropisms, and prospective long-lasting gene expression are important attributes that make rAAVs safe and efficient therapeutic options. One of the main limitations for bringing rAAV gene therapy to the market is the difficulty in supplying enough rAAV vector product. The high vector doses suggested by early clinical data imply the need to scale up production to high volumes in order to satisfy patient demand. Current production platforms such as HEK293 or Sf9 cells are very efficient, but to date, scalability issues limit their use to preclinical and phase I/II production campaigns. Our team recently developed a novel rAAV-producing yeast strain, which recapitulated key molecular processes for vector particle formation (Barajas et al., 2017). The use of a microbial system for vector production would represent an affordable and highly scalable platform for industrial production. Preliminary data showed low vector yields, possibly associated with a very low DNA encapsidation rate. The present thesis aims at identifying the molecular and bioprocessing factors that could be impacting vector yield in the novel rAAV-producing yeast system. In one approach, we performed a proteomic profiling of the yeast host response to rAAV protein expression. By using mass spectrometry and bioinformatics tools, we were able to identify trends in protein expression associated with vector formation. Gene ontology enrichment and network interaction analyses highlighted five specific cellular events: protein folding activity linked to the unfolded protein response, proteasomal degradation activity, oxidation-reduction processes linked to oxidative stress, protein biosynthesis, and carbon metabolism. We speculated that some of these processes might be directly or indirectly linked to vector production constraints. A protein overexpression strategy was tested by transforming yeast with 2-micron plasmids carrying expression cassettes for 19 host cell proteins identified in the profiling. Increased vector yield was obtained in yeast strains overexpressing proteins SSA2, SSE1, SSE2, CCP1, GTT1, and GAL4. In a second approach, we used the yeast system as a means to screen the effect of host protein expression modulation on rAAV DNA replication and vector yield, using the yTHC library strains (R1158-derived) and a set of two plasmids that provide all rAAV genetic elements. More than 850 strains, each one with a single host gene under a TET-repressible promoter, were screened in duplicate. From preliminary screenings, we identified 22 gene candidates that improved rAAV DNA replication (rAAV-GFP/18S rDNA ratio) and vector yield (benzonase-resistant rAAV DNA vector genome titer) by as much as 6-fold and 15-fold relative to control, respectively. The candidate proteins participate in various biological processes such as DNA replication, ribosome biogenesis, and RNA and protein processing. The top five candidates (PRE4, HEM4, TOP2, GPN3, SDO1) were further screened by generating overexpression mutants in another yeast strain (YPH500). Subsequent clone evaluation was performed to confirm the rAAV-promoting activity of selected candidates under plate-based and bioreactor-controlled fermentation conditions. Our results highlighted HEM4 and TOP2 proteins as enhancers of rAAV2 vector yield in the yeast model.
A final approach focused on bioprocessing studies intended to develop a fed-batch fermentation process for rAAV2 vector production. Preliminary characterization studies performed in shake flasks provided useful data regarding rAAV DNA replication and vector formation in yeast over time, as well as optimal pH and temperature values for fermentation. Results suggested extending the original process to four days of galactose induction, with operating values for starting pH and temperature of 4.8 and 30°C, respectively. An additional media optimization study was performed to identify critical media components for optimal vector yield. A 3-fold increase was obtained after supplementing the galactose induction media with lysine, pyridoxine, myo-inositol, ferric chloride, and cysteine. We were able to translate a shake flask-based batch process with medium replacement to a bioreactor-controlled fed-batch process. Low and moderate cell density culture strategies were performed, controlling pH, DO%, and temperature. Additional studies were done to optimize growth rate, glucose and galactose feed, and induction strategy. However, final yields at moderate cell densities were comparable to those obtained at low cell densities, suggesting the presence of unknown factors that might be impacting per-cell productivity. These three independent approaches provided important information regarding molecular and process strategies for optimizing rAAV vector yield. Follow-up studies need to be done to consolidate yeast strain development and fermentation development efforts into a robust yeast platform potentially useful for industrial vector production.
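A minimal sketch of the kind of duplicate-screen analysis described above: average replicate titers per strain and rank candidates by fold change relative to the control. The data values and strain handling here are illustrative placeholders, not the thesis's actual screening pipeline.

    import statistics

    def rank_candidates(screen, control_key="control", top_n=5):
        """Average duplicate measurements per strain and rank by
        fold change relative to the control strain.
        `screen` maps strain name -> list of replicate titers."""
        control = statistics.mean(screen[control_key])
        folds = {name: statistics.mean(vals) / control
                 for name, vals in screen.items() if name != control_key}
        return sorted(folds.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

    # illustrative values only (vector genome titers measured in duplicate)
    screen = {"control": [1.0e8, 1.2e8], "HEM4": [1.4e9, 1.6e9], "TOP2": [9.0e8, 1.1e9]}
    print(rank_candidates(screen, top_n=2))  # [('HEM4', ~13.6), ('TOP2', ~9.1)]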

    Pin-Wise Loading Optimization and Lattice-to-Core Coupling for Isotopic Management in Light Water Reactors

A generalized software capability has been developed for the pin-wise loading optimization of light water reactor (LWR) fuel lattices, with the enhanced flexibility of control variables that characterize heterogeneous or blended target pins loaded with non-standard compositions, such as minor actinides (MAs). Furthermore, this study has developed the software coupling to evaluate the performance of optimized lattices outside their reflective boundary conditions and within the realistic three-dimensional core-wide environment of an LWR. The illustration of the methodologies and software tools developed helps provide a deeper understanding of the behavior of optimized lattices within a full-core environment. The practical applications include the evaluation of the recycling (destruction) of "undesirable" minor actinides from spent nuclear fuel, such as Am-241, in a thermal reactor environment, as well as the timely study of planting Np-237 (blended NpO2 + UO2) targets in the guide tubes of typical commercial pressurized water reactor (PWR) bundles for the production of Pu-238, a highly "desirable" radioisotope used as a heat source in radioisotope thermoelectric generators (RTGs). Both of these applications creatively stretch the potential utility of existing commercial nuclear reactors into areas historically reserved for research or hypothetical next-generation facilities. In an optimization sense, the control variables include the loadings and placements of materials: U-235, burnable absorbers, and MAs (Am-241 or Np-237), while the objective functions are either the destruction (minimization) of Am-241 or the production (maximization) of Pu-238. The constraints include the standard reactivity and thermal operational margins of a commercial nuclear reactor. Aspects of the optimization, lattice-to-core coupling, and tools herein developed were tested in a concurrent study (Galloway, 2010), in which heterogeneous lattices developed by this study were coupled to three-dimensional boiling water reactor (BWR) core simulations and showed incineration rates of Am-241 targets of around 90%. This study focused primarily upon PWR demonstrations, whereby a benchmarked reference equilibrium core was used as a test bed for MA-spiked lattices and was shown to satisfy standard PWR reactivity and thermal operational margins while exhibiting consistently high destruction rates of Am-241 and Np-to-Pu conversion rates of approximately 30% for the production of Pu-238.
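The abstract does not name the search algorithm, so the skeleton below only illustrates the general shape of a pin-wise loading search: a simulated-annealing loop that swaps two pin positions and accepts or rejects the move against an objective. The `objective` callable stands in for a lattice physics evaluation (e.g. Pu-238 production minus penalty terms for violated reactivity or thermal margins); all names are hypothetical.

    import math, random

    def anneal_loading(lattice, objective, n_iter=10000, t0=1.0, alpha=0.9995):
        """Simulated-annealing search over pin-wise loadings: swap two pin
        positions, keep the move if it improves the objective, or keep it
        with a Boltzmann probability if it does not."""
        current = objective(lattice)
        best, best_lattice = current, lattice[:]
        t = t0
        for _ in range(n_iter):
            i, j = random.sample(range(len(lattice)), 2)
            lattice[i], lattice[j] = lattice[j], lattice[i]      # trial swap
            trial = objective(lattice)
            if trial >= current or random.random() < math.exp((trial - current) / t):
                current = trial
                if current > best:
                    best, best_lattice = current, lattice[:]
            else:
                lattice[i], lattice[j] = lattice[j], lattice[i]  # undo swap
            t *= alpha                                           # cool down
        return best_lattice, best

Constraints such as reactivity and thermal margins are typically folded into the objective as penalty terms, which is why the evaluation function, not the search loop, carries most of the physics.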

    Potential-based Formulations of the Navier-Stokes Equations and their Application

Based on a Clebsch-like velocity representation and a combination of classical variational principles for the special cases of ideal and Stokes flow, a novel discontinuous Lagrangian is constructed; it bypasses the known problems associated with non-physical solutions and recovers the classical Navier-Stokes equations, together with the balance of inner energy, in the limit when an emerging characteristic frequency parameter tends to infinity. Additionally, a generalized Clebsch transformation for viscous flow is established for the first time. Next, an exact first integral of the unsteady, three-dimensional, incompressible Navier-Stokes equations is derived, following which gauge freedoms are explored, leading to favourable reductions in the complexity of the equation set and the number of unknowns, enabling a self-adjoint variational principle for steady viscous flow to be constructed. Concurrently, appropriate commonly occurring physical and auxiliary boundary conditions are prescribed, including the establishment of a first integral for the dynamic boundary condition at a free surface. Starting from this new formulation, three classical flow problems are considered, the results obtained being in total agreement with solutions in the open literature. A new least-squares finite element method based on the first integral of the steady, two-dimensional, incompressible Navier-Stokes equations is developed, with optimal convergence rates established theoretically. The method is analysed comprehensively, thoroughly validated, and shown to be competitive when compared to a corresponding, standard, primitive-variable finite element formulation. Implementation details are provided, and the well-known problem of mass conservation is addressed and resolved via selective weighting. The attractive positive definiteness of the resulting linear systems enables the employment of a customized scalable algebraic multigrid method for efficient error reduction. The solution of several engineering-related problems from the fields of lubrication and film flow demonstrates the flexibility and efficiency of the proposed method, including the case of unsteady flow, while revealing new physical insights of interest in their own right.
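For orientation, the classical (inviscid) Clebsch representation that this work generalizes writes the velocity in terms of three scalar potentials; the generalized viscous transformation and the first integral established in the thesis add further terms not reproduced here. A minimal statement of the classical form:

    \mathbf{u} = \nabla\phi + \alpha\,\nabla\beta ,
    \qquad
    \boldsymbol{\omega} = \nabla \times \mathbf{u} = \nabla\alpha \times \nabla\beta ,

so that the potential \phi carries the irrotational part of the motion while the pair (\alpha, \beta) encodes the vorticity.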

    Small Electric Vehicles

This edited open access book gives a comprehensive overview of small and lightweight electric three- and four-wheel vehicles with an international scope. It covers the present status of small electric vehicle (SEV) technologies, the market situation and the main factors hindering market success, as well as options for attaining a higher market share, including new mobility concepts. Increased usage of SEVs can have various impacts, which the book examines with regard to sustainable transport, congestion, the electric grid, and transport-related potentials. To underline the effects these vehicles can have in urban or rural areas, several case studies are presented covering the outcomes of pilot projects and studies in Europe. A study of operation and usage in the Global South extends the scope to a global scale. Furthermore, several concept studies and vehicle concepts on the market give a more detailed overview and show deployment in different applications.

    Development and validation of a neural network for adaptive gait cycle detection from kinematic data

(1) Background: Instrumented gait analysis is a tool for quantification of the different aspects of the locomotor system. Gait analysis technology has substantially evolved over the last decade, and most modern systems provide real-time capability. The ability to calculate joint angles with low delays paves the way for new applications involving real-time movement feedback, such as the control of functional electrical stimulation in the rehabilitation of individuals with gait disorders. For any kind of therapeutic application, the timely determination of different gait phases such as stance or swing is crucial. Gait phases are usually estimated based on heuristics of joint angles or time points of certain gait events. Such heuristic approaches often do not work properly in people with gait disorders due to the greater variability of their pathological gait pattern. To improve the current state-of-the-art, this thesis aims to introduce a data-driven approach for real-time determination of gait phases from kinematic variables based on long short-term memory recurrent neural networks (LSTM RNNs). (2) Methods: In this thesis, 56 measurements with gait data of 11 healthy subjects, 13 individuals with incomplete spinal cord injury, and 10 stroke survivors, with walking speeds ranging from 0.2 m/s up to 1 m/s, were used to train the networks. Each measurement contained kinematic data from the corresponding subject walking on a treadmill for 90 seconds. Kinematic data were obtained by measuring the positions of reflective markers on body landmarks (Helen Hayes marker set) at a sample rate of 60 Hz. For constructing a ground truth, gait data were annotated manually by three raters. Two approaches, direct regression of gait phases and estimation via detection of the gait events Initial Contact and Final Contact, were implemented for evaluation of the performance of LSTM RNNs. For comparison of performance, the frequently cited coordinate- and velocity-based event detection approaches of Zeni et al. were used. All aspects of this thesis were implemented within MATLAB Version 9.6 using the Deep Learning Toolbox. (3) Results: The mean time difference between events annotated by the three raters was −0.07 ± 20.17 ms. Correlation coefficients of inter-rater and intra-rater reliability yielded mainly excellent or perfect results. For detection of gait events, the LSTM RNN algorithm covered 97.05% of all events within a scope of 50 ms. The overall mean time difference between detected events and ground truth was −11.62 ± 7.01 ms. Temporal differences and deviations were consistently small over different walking speeds and gait pathologies. The mean time difference to the ground truth was 13.61 ± 17.88 ms for the coordinate-based approach of Zeni et al. and 17.18 ± 15.67 ms for the velocity-based approach. For estimation of gait phases, the gait phase was determined as a percentage. The mean squared error relative to the ground truth was 0.95 ± 0.55% for the proposed algorithm using event detection and 1.50 ± 0.55% for regression. For the approaches of Zeni et al., the mean squared error was 2.04 ± 1.23% for the coordinate-based approach and 2.24 ± 1.34% for the velocity-based approach. Regarding mean absolute error to the ground truth, the proposed algorithm achieved a mean absolute error of 1.95 ± 1.10% using event detection and of 7.25 ± 1.45% using regression. The mean absolute error for the coordinate-based approach of Zeni et al. was 4.08 ± 2.51%, and 4.50 ± 2.73% for the velocity-based approach.
(4) Conclusion: The newly introduced LSTM RNN algorithm offers a high recognition rate of gait events with a small delay. It outperforms several state-of-the-art gait event detection methods while offering the possibility of real-time processing and high generalization across trained gait patterns. Additionally, the proposed algorithm is easy to integrate into existing applications and contains parameters that self-adapt to individuals' gait behavior to further improve performance. With respect to gait phase estimation, the performance of the proposed algorithm using event detection is in line with current wearable state-of-the-art methods. Compared with conventional methods, the performance of direct regression of gait phases is only moderate. Given the results, LSTM RNNs demonstrate feasibility for event detection and are applicable to many clinical and research applications. They may not be suitable for the estimation of gait phases via regression. It can be assumed that, with a more optimal configuration of the networks, much higher performance could be achieved.
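The thesis implements its networks in MATLAB's Deep Learning Toolbox; as a language-neutral illustration, the PyTorch sketch below shows the general shape of an LSTM that emits per-time-step logits for the two gait events (Initial Contact, Final Contact). The feature count, hidden size, and training details are assumptions made for the sake of a runnable example, not the thesis's actual configuration.

    import torch
    import torch.nn as nn

    class GaitEventLSTM(nn.Module):
        """Per-time-step detector for gait events (Initial Contact,
        Final Contact) from marker-based kinematic features."""
        def __init__(self, n_features=30, hidden_size=64, n_events=2):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
            self.head = nn.Linear(hidden_size, n_events)

        def forward(self, x):              # x: (batch, time, n_features)
            h, _ = self.lstm(x)            # hidden state at every time step
            return self.head(h)            # logits per step for IC / FC

    model = GaitEventLSTM()
    loss_fn = nn.BCEWithLogitsLoss()       # events as per-step binary labels
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    x = torch.randn(8, 5400, 30)           # 8 sequences: 90 s at 60 Hz
    y = torch.zeros(8, 5400, 2)            # sparse 0/1 event annotations
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

Because the recurrent state is updated one sample at a time, the same network can be run in streaming mode at inference, which is what makes such models attractive for the real-time feedback applications named in the background section.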

    Optimization of the value chain of the existing free potentials of wood resources for power generation in Baden-Württemberg

The energy mix of Baden-Württemberg – one of the most wooded regions of Germany – could be diversified through the optimal valorisation of the existing free potentials of wood resources. Circa 17 PJ of forest residues and landscape wood raw material grow annually across the territory of this federal state. For this reason, an optimisation of the corresponding value chain for power purposes is accomplished in order to identify the most cost-efficient utilisation pathways. Firstly, each unexploited potential of wood resources, for up to ten different types of wood chips, is estimated at district level. Next, the stages of felling, extraction, debranching, moving, and chipping of wood resources are modelled into four specific logistic chains on the basis of the size of forest ownership, the steepness of slope, and the variety of tree. Moreover, specific unit costs based on different cost allocation procedures are assigned to the ten identified types of chipped wood resources. Besides the modelling of the transport sector, an array of all feasible technologies for the conversion of wood resources into bio-based power is compared in terms of cost. A singular conclusion is drawn according to which, for each particular capacity under the same operating conditions, gasification is more cost-efficient than combustion – except for co-firing. Hence, fluidised bed gasification coupled to a gas engine or a combined cycle, as well as the direct co-firing of wood resources at a 10% co-fire rate, are preselected for the intended analysis on account of their higher cost-effectiveness. Lastly, a new MILP model called BioESyMO (Bioenergy System Model for Operation Optimisation) is created for the optimisation of the value chain of wood resources. This optimising tool includes a unique mathematical constraint aimed at assuring the profitability of investments within each utilisation pathway. A scenario-based analysis is first developed for remunerations modelled with a high enough value above the breakeven point. Thereby, a combined heat and power cogeneration process consisting of a fluidised bed gasifier coupled to a gas engine of 20 MWe renders electricity production costs of 10.1-13.8 €cent/kWhe for an annual amount of 7,500 full load hours. The co-firing option for the existing coal-fired power plants, with bio-based capacities up to 84.3 MWe, generates lower electricity production costs of 6.6-11.7 €cent/kWhe when the facilities are operated for 3,000 full load hours per year. If a fluidised bed gasifier is connected to a combined cycle of 210/340 MWe (7,500 full load hours per year), this technology turns out to be the most cost-efficient, with electricity production costs in the order of 5.6-7.1 €cent/kWhe. These cost ranges can be reduced by progressively decreasing remunerations below each resulting breakeven point. As for the option of co-firing, cheaper bioenergy configurations arise on the basis of cheaper wood resources, enabling production costs as low as 5.6 €cent/kWhe for 4,000 hours per year at full load.
Leveraging such cost reductions, the introduction of appropriate energy policy instruments for the promotion of carbon-neutral baseload power generation is strongly recommended in view of the restrictions induced by Germany's nuclear and coal phase-outs. Although the quality of the results of this study is mainly conditioned by uncertainty and by the high level of spatial aggregation of the spatial unit, the implemented methodology as well as the performed optimisation analysis represents an interesting breakthrough that may contribute to the initiated energy transition in Baden-Württemberg and the whole of Germany.
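Since the study's comparisons hinge on electricity production costs per kWh, a compact illustration of the underlying arithmetic may help: annualize capital expenditure with a capital recovery factor, add fixed operating and fuel costs, and divide by annual generation. All input values below are illustrative placeholders, not figures from the study.

    def lcoe_eurcent_per_kwh(capex_eur, rate, lifetime_yr,
                             fixed_om_eur_yr, fuel_eur_per_mwh,
                             capacity_mwe, full_load_hours):
        """Levelized electricity production cost in EUR-cent/kWh:
        annualized capital cost (capital recovery factor) plus fixed
        O&M and fuel, divided by annual generation."""
        crf = rate * (1 + rate) ** lifetime_yr / ((1 + rate) ** lifetime_yr - 1)
        energy_mwh = capacity_mwe * full_load_hours
        annual_cost = capex_eur * crf + fixed_om_eur_yr + fuel_eur_per_mwh * energy_mwh
        return 100 * annual_cost / (energy_mwh * 1000)  # EUR -> cent, MWh -> kWh

    # illustrative inputs only: a 20 MWe gasifier + gas engine at 7,500 h/yr
    print(lcoe_eurcent_per_kwh(capex_eur=60e6, rate=0.07, lifetime_yr=20,
                               fixed_om_eur_yr=2e6, fuel_eur_per_mwh=25,
                               capacity_mwe=20, full_load_hours=7500))

The structure of the formula makes the study's findings intuitive: co-firing wins at low full load hours because its incremental capital cost is small, while high-efficiency combined cycles win at high full load hours because capital is spread over more generated kWh.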