    Present status of the NET IBK computer code package for in-core fuel management and related core parameter calculations

    This paper presents and discusses the current status of the NET IBK (Nuclear Engineering Department of the Boris Kidric Institute of Nuclear Sciences) computer code package for nuclear analysis of power reactors and in-core fuel management. The standard scheme for reactor fuel burnup analysis comprises the WIMS code and several 2-D (RZ or XY) and 3-D (XYZ) codes for overall reactor core calculations and criticality search. These codes are coupled and modified to compute the neutron flux, power density distribution, and burnup, taking into account spatial variations of temperature and xenon poisoning, as well as the reactivity changes due to xenon transients during start-up and shut-down. At present, the codes for overall reactor calculations are based on finite-difference solution of the group diffusion equations. Efforts are being made to improve the calculation of reactor cell and fuel assembly parameters and to develop advanced methods for solving the diffusion equations. In addition, an optimization model based on coarse zonal discretization of the reactor core is being developed for optimal fuel loading pattern search. The NET IBK code package has been used extensively to study advanced fuel utilization schemes in different types of power reactors, as well as to solve in-core fuel management problems of the institute's own research reactors. Particular attention has been paid to experimental verification of the calculational procedures. A number of results of interest are presented and discussed. 15 refs., 6 figs., 7 tabs. INIS record: http://inis.iaea.org/search/search.aspx?orig_q=RN:22023261
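    The finite-difference treatment of the group diffusion equations mentioned in this abstract can be pictured with a minimal sketch (an illustration of the general technique, not the NET IBK codes themselves). The Python script below solves the one-group, one-dimensional diffusion eigenvalue problem -D*phi'' + Sigma_a*phi = (1/k)*nu*Sigma_f*phi on a uniform mesh with zero-flux boundaries and finds k-effective by power iteration; all cross-section values are invented for illustration.

    import numpy as np

    # Illustrative one-group constants (invented, not actual reactor data)
    D = 1.0             # diffusion coefficient [cm]
    sigma_a = 0.07      # macroscopic absorption cross section [1/cm]
    nu_sigma_f = 0.08   # nu times fission cross section [1/cm]
    width = 100.0       # slab width [cm]
    N = 200             # number of interior mesh points
    h = width / (N + 1)

    # Finite-difference loss operator: -D*phi'' + sigma_a*phi, zero-flux boundaries
    main = (2.0 * D / h**2 + sigma_a) * np.ones(N)
    off = (-D / h**2) * np.ones(N - 1)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

    # Power iteration: solve A*phi_new = F*phi/k, update k from the fission-source ratio
    phi = np.ones(N)
    k = 1.0
    for _ in range(500):
        src = nu_sigma_f * phi
        phi_new = np.linalg.solve(A, src / k)
        k_new = k * (nu_sigma_f * phi_new).sum() / src.sum()
        phi_new /= phi_new.max()            # normalize the flux shape
        converged = abs(k_new - k) < 1e-8
        k, phi = k_new, phi_new
        if converged:
            break

    print(f"k-effective ~ {k:.5f}")

    For a bare slab this converges to the analytic one-group result k = nu*Sigma_f / (Sigma_a + D*B^2) with buckling B = pi/width; production codes differ mainly in using multiple energy groups, 2-D/3-D meshes, and iterative rather than direct solvers.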

    Using simple artificial intelligence methods for predicting amyloidogenesis in antibodies

    Background: All polypeptide backbones have the potential to form amyloid fibrils, which are associated with a number of degenerative disorders. However, the likelihood that amyloidosis will actually occur under physiological conditions depends largely on the amino acid composition of a protein. We explore the use of a naive Bayesian classifier and a weighted decision tree for predicting the amyloidogenicity of immunoglobulin sequences.
    Results: The average accuracy, based on leave-one-out (LOO) cross validation, of a Bayesian classifier generated from 143 amyloidogenic sequences is 60.84%. This is consistent with the average accuracy of 61.15% for a holdout test set comprising 103 amyloidogenic and 28 non-amyloidogenic sequences. The LOO cross validation accuracy increases to 81.08% when the training set is augmented with the holdout test set. In comparison, the average classification accuracy for the holdout test set obtained using a decision tree is 78.64%. Non-amyloidogenic sequences are predicted with average LOO cross validation accuracies between 74.05% and 77.24% using the Bayesian classifier, depending on the training set size. The accuracy for the holdout test set was 89%. For the decision tree, the non-amyloidogenic prediction accuracy is 75.00%.
    Conclusions: This exploratory study indicates that both classification methods may be promising in providing straightforward predictions on the amyloidogenicity of a sequence. Nevertheless, the number of available sequences that satisfy the premises of this study is limited and consequently smaller than the ideal training set size. Increasing the size of the training set clearly increases the accuracy, and expanding the training set to include not only more derivatives but also more alignments would make the method more sound. The accuracy of the classifiers may also be improved when additional factors, such as structural and physico-chemical data, are considered. The development of this type of classifier has significant applications in evaluating engineered antibodies, and it may be adapted for evaluating engineered proteins in general.
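    As a minimal sketch of the Bayesian-classifier setup described above: amino-acid composition serves as the feature vector and accuracy is estimated by leave-one-out cross validation. The sequences, labels, and the Gaussian naive Bayes variant below are placeholders standing in for the study's actual dataset and classifier.

    import numpy as np
    from sklearn.naive_bayes import GaussianNB
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

    def composition(seq):
        # Fraction of each of the 20 standard amino acids in the sequence
        seq = seq.upper()
        return np.array([seq.count(a) / len(seq) for a in AMINO_ACIDS])

    # Invented placeholder sequences and labels (1 = amyloidogenic, 0 = not);
    # the study used 143 amyloidogenic training sequences plus a holdout set.
    sequences = [
        "DIQMTQSPSSLSASVGDRVT",
        "EIVLTQSPGTLSLSPGERAT",
        "QSVLTQPPSVSGAPGQRVTI",
        "SYELTQPPSVSVSPGQTASI",
    ]
    labels = np.array([1, 1, 0, 0])

    X = np.array([composition(s) for s in sequences])
    scores = cross_val_score(GaussianNB(), X, labels, cv=LeaveOneOut())
    print(f"LOO accuracy: {scores.mean():.2%}")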

    Adaptive estimation of the prompt-neutron decay constant using autoregressive moving average modeling

    An autoregressive moving average model of neutron fluctuations with large measurement noise is developed from the Langevin stochastic equations with the noise-equivalent source in the form of a vector Wiener process. The neutron field/detector interaction is explicitly treated, and delayed neutrons are included. The Kalman filter with nonzero covariance between input and output noise is applied in the derivations to reduce the state-space equations to the input-output form. Theoretical developments are verified using time series data from the prompt-neutron decay constant measurements at the zero-power reactor RB in Vinca. Model parameters are estimated by an off-line maximum-likelihood algorithm and by an adaptive pole estimation algorithm based on the recursive prediction error method with implemented regularization and stability control. The results show that subcriticality can be estimated from real data with high measurement noise using a shorter statistical sample than in standard methods based on the power spectral density or the Feynman variance-to-mean ratio.
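    A minimal sketch of the off-line maximum-likelihood step, under simplifying assumptions: an AR(1) fluctuation signal buried in white measurement noise is exactly an ARMA(1,1) process, and the prompt-neutron decay constant follows from the estimated discrete-time pole via alpha = -ln(pole)/dt. The statsmodels ARIMA estimator stands in for the paper's algorithms, and the simulated data are invented.

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(0)
    dt = 1e-3           # sampling interval [s]
    alpha_true = 50.0   # prompt-neutron decay constant [1/s]
    a1 = np.exp(-alpha_true * dt)   # discrete-time pole of the decay mode

    # Simulate AR(1) neutron fluctuations plus large white measurement noise
    n = 20000
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = a1 * x[i - 1] + rng.normal()
    y = x + rng.normal(scale=5.0, size=n)

    # Fit ARMA(1,1) by maximum likelihood and recover alpha from the AR pole
    result = ARIMA(y, order=(1, 0, 1)).fit()
    alpha_hat = -np.log(result.arparams[0]) / dt
    print(f"estimated alpha ~ {alpha_hat:.1f} 1/s (true value {alpha_true})")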

    PIPA: A High-Throughput Pipeline for Protein Function Annotation

    We developed the Pipeline for Protein Annotation (PIPA), a genome-wide protein function annotation pipeline that runs in a high-performance computing environment. PIPA integrates different tools and employs the Gene Ontology (GO) to provide consistent annotation and resolve prediction conflicts. PIPA has three modules that allow for easy development of specialized databases and integration of various bioinformatics tools. The first module, the pipeline execution module, consists of programs that give the user access to and control of the pipeline's parallel execution of multiple jobs, each searching a particular database for a chunk of the input data. The execution module wraps the second module, the core pipeline module. The integrated resources, the program for terminology conversion to GO, and the consensus annotation program constitute the main components of the core module. The third module is the preprocessing module. This last module contains the program for customized generation of protein function databases and the GO-mapping generation program, which creates GO mappings for the terminology conversion program. The current implementation of PIPA annotates protein functions by combining into common GO terms the results of an in-house-developed database for enzyme catalytic function prediction (CatFam) and the results of multiple integrated resources, such as the 11 member databases of InterPro and the Conserved Domains Database. A Web-page-based graphical user interface is developed based on the User Interface Toolkit. The pipeline is deployed on two Linux clusters, JVN at the Army Research Laboratory Major Shared Resource Center and JAWS at the Maui High Performance Computing Center. Currently, scientists at the Naval Medical Research Center are using PIPA to predict protein functions for newly sequenced bacterial pathogens and their near-neighbor strains. Validation tests show that, on average, the CatFam database yields predictions of enzyme catalytic functions with accuracy greater than 95%. Test results of the consensus GO annotation show an improvement in performance of up to 8% when compared with annotations in which consensus is not used.
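    The consensus step can be pictured with a minimal sketch: each integrated tool's hit is first converted to GO terms, and a term is kept only when enough tools vote for it. The mapping table, tool names, and vote threshold below are invented for illustration and are not PIPA's actual data structures.

    from collections import Counter

    # Hypothetical identifier-to-GO mapping (stands in for the
    # terminology-conversion tables built by the preprocessing module)
    TO_GO = {
        "PF00069": "GO:0004672",   # protein kinase activity
        "cd00180": "GO:0004672",
        "PF00067": "GO:0004497",   # monooxygenase activity
    }

    def consensus_annotation(predictions, min_votes=2):
        # predictions: dict of tool name -> tool-specific identifier;
        # keep GO terms that at least min_votes tools agree on
        votes = Counter(TO_GO[p] for p in predictions.values() if p in TO_GO)
        return sorted(term for term, n in votes.items() if n >= min_votes)

    hits = {"InterPro": "PF00069", "CDD": "cd00180", "CatFam": "PF00067"}
    print(consensus_annotation(hits))   # -> ['GO:0004672']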
