
    Protection of stainless-steels against corrosion in sulphidizing environments by Ce oxide coatings: X-ray absorption and thermogravimetric studies

    This paper reports a study of ceramic coatings containing cerium oxide, prepared by the sol-gel method and used to protect Incoloy 800H against sulphidation. When the coating is sintered in air at 850 °C, good protection is obtained. An X-ray absorption spectroscopic study of the coatings showed that the best protective coating contains all cerium as Ce(IV) after pretreatment; after sulphidation, the cerium was reduced to Ce(III). Possible mechanisms to explain the protective properties are discussed.

    Determination of the relative concentrations of rare earth ions by x-ray absorption spectroscopy: Application to terbium mixed oxides

    A method, based on X-ray absorption spectroscopy (XAS) in the range 0.8–1.5 keV, for determining the relative amounts of rare earth ions in different valencies is explained and tested for the case of terbium mixed oxides. The results agree with those obtained by existing analytical methods. The XAS method is advantageous in that it can be applied where conventional methods break down.
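    The abstract does not spell out the quantification procedure, so the following is only an assumed, generic illustration: one common way to estimate the relative amounts of two valence states from an absorption spectrum is a least-squares linear combination of reference spectra. The reference names and spectra below are hypothetical.

```python
# Hypothetical sketch: estimate the fractions of two valence states from an
# absorption spectrum by least-squares linear combination of reference spectra
# (e.g. Tb(III) and Tb(IV) standards). Not the authors' specific procedure.
import numpy as np

def valence_fractions(sample, ref_a, ref_b):
    """Fractions (summing to 1) minimizing ||sample - (x_a*ref_a + x_b*ref_b)||^2
    on a common energy grid."""
    A = np.column_stack([ref_a, ref_b])
    coeffs, *_ = np.linalg.lstsq(A, sample, rcond=None)
    coeffs = np.clip(coeffs, 0.0, None)        # enforce non-negativity
    return coeffs / coeffs.sum()               # normalize to fractions

# Toy spectra: the "sample" is a 70/30 mixture of the two references plus noise.
E = np.linspace(0.8, 1.5, 300)                 # energy grid in keV
ref3 = np.exp(-((E - 1.00) / 0.02) ** 2)       # mock Tb(III) absorption line
ref4 = np.exp(-((E - 1.05) / 0.02) ** 2)       # mock Tb(IV) absorption line
sample = 0.7 * ref3 + 0.3 * ref4 + np.random.default_rng(0).normal(0, 0.01, E.size)
print(valence_fractions(sample, ref3, ref4))   # approximately [0.7, 0.3]
```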

    Fitting in a complex chi^2 landscape using an optimized hypersurface sampling

    Fitting a data set with a parametrized model can be seen geometrically as finding the global minimum of the chi^2 hypersurface, which depends on a set of parameters {P_i}. This is usually done using the Levenberg-Marquardt algorithm. The main drawback of this algorithm is that, despite its fast convergence, it can get stuck if the parameters are not initialized close to the final solution. We propose a modification of the Metropolis algorithm that introduces a parameter step tuning to optimize the sampling of parameter space. The ability of the parameter tuning algorithm, together with simulated annealing, to find the global minimum of the chi^2 hypersurface, jumping across chi^2({P_i}) barriers when necessary, is demonstrated with synthetic functions and with real data.
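    As a rough illustration of the strategy described above, the following sketch implements a generic adaptive-step Metropolis search with simulated annealing to minimize chi^2. The step-tuning rule, cooling schedule, and function names are assumptions for illustration, not the authors' exact scheme.

```python
# Minimal sketch: adaptive-step Metropolis sampling with simulated annealing
# to minimize chi^2(P) for a parametrized model. Generic choices throughout.
import numpy as np

def chi2(params, x, y, sigma, model):
    """Sum of squared, sigma-weighted residuals."""
    return np.sum(((y - model(x, params)) / sigma) ** 2)

def anneal_fit(model, x, y, sigma, p0, steps=20000, T0=10.0, Tmin=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    p = np.asarray(p0, dtype=float)
    step = 0.1 * np.maximum(np.abs(p), 1.0)    # per-parameter step sizes
    c2 = chi2(p, x, y, sigma, model)
    best_p, best_c2 = p.copy(), c2
    for i in range(steps):
        T = T0 * (Tmin / T0) ** (i / steps)    # geometric cooling schedule
        k = rng.integers(len(p))               # perturb one parameter at a time
        trial = p.copy()
        trial[k] += rng.normal(0.0, step[k])
        c2_trial = chi2(trial, x, y, sigma, model)
        # Metropolis acceptance: always accept downhill, sometimes uphill
        if c2_trial < c2 or rng.random() < np.exp(-(c2_trial - c2) / T):
            p, c2 = trial, c2_trial
            step[k] *= 1.1                     # accepted: widen this parameter's step
        else:
            step[k] *= 0.9                     # rejected: narrow it
        if c2 < best_c2:
            best_p, best_c2 = p.copy(), c2
    return best_p, best_c2

# Example: recover the parameters of a noisy Gaussian peak.
if __name__ == "__main__":
    gauss = lambda x, p: p[0] * np.exp(-((x - p[1]) ** 2) / (2 * p[2] ** 2))
    x = np.linspace(-5, 5, 200)
    y = gauss(x, [3.0, 1.0, 0.8]) + np.random.default_rng(1).normal(0, 0.1, x.size)
    p_fit, c2_fit = anneal_fit(gauss, x, y, 0.1, p0=[1.0, -2.0, 2.0])
    print(p_fit, c2_fit)
```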

    Application of mathematical and machine learning techniques to analyse eye tracking data enabling better understanding of children's visual cognitive behaviours

    In this research, we aimed to investigate the visual-cognitive behaviours of a sample of 106 children in Year 3 (8.8 ± 0.3 years) while completing a mathematics bar-graph task. Eye movements were recorded while the children completed the task, and the patterns of eye movements were explored using machine-learning approaches. Two different machine-learning techniques (Bayesian and K-Means) were used to obtain separate model sequences, or average scanpaths, for the children who responded either correctly or incorrectly to the graph task. Application of these approaches indicated distinct differences in the resulting scanpaths: children who responded correctly accessed information that was mostly categorised as critical, whereas children responding incorrectly did not. There was also evidence that the children who were correct accessed the graph information in a different, more logical order than the children who were incorrect. The visual behaviours aligned with different aspects of graph comprehension, such as initial understanding and orienting to the graph, and later interpretation and use of relevant information on the graph. The findings are discussed in terms of the implications for early mathematics teaching and learning, particularly for the development of graph comprehension, as well as the application of machine-learning techniques to investigations of other visual-cognitive behaviours.
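    As a hedged sketch of one ingredient of such an analysis, the snippet below clusters gaze patterns with K-Means after reducing each scanpath to the proportion of fixations falling in each area of interest (AOI). The AOI names, feature choice, and toy data are illustrative assumptions, not the study's actual pipeline.

```python
# Illustrative K-Means clustering of scanpaths summarised as AOI proportions.
import numpy as np
from sklearn.cluster import KMeans

AOIS = ["question", "graph_axis", "graph_bars", "legend"]  # assumed AOIs

def aoi_proportions(fixation_aois):
    """Fraction of a child's fixations landing in each AOI."""
    counts = np.array([fixation_aois.count(a) for a in AOIS], dtype=float)
    return counts / counts.sum()

# Toy scanpaths: sequences of AOI labels, one list per child.
scanpaths = [
    ["question", "graph_bars", "graph_bars", "graph_axis"],
    ["legend", "legend", "question", "legend"],
    ["question", "graph_axis", "graph_bars", "graph_bars"],
    ["legend", "question", "legend", "legend"],
]
X = np.vstack([aoi_proportions(s) for s in scanpaths])

# Two clusters, loosely corresponding to different viewing strategies.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)
```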

    Optimal Uncertainty Quantification

    We propose a rigorous framework for Uncertainty Quantification (UQ) in which the UQ objectives and the assumptions/information set are brought to the forefront. This framework, which we call \emph{Optimal Uncertainty Quantification} (OUQ), is based on the observation that, given a set of assumptions and information about the problem, there exist optimal bounds on uncertainties: these are obtained as values of well-defined optimization problems corresponding to extremizing probabilities of failure, or of deviations, subject to the constraints imposed by the scenarios compatible with the assumptions and information. In particular, this framework does not implicitly impose inappropriate assumptions, nor does it repudiate relevant information. Although OUQ optimization problems are extremely large, we show that under general conditions they have finite-dimensional reductions. As an application, we develop \emph{Optimal Concentration Inequalities} (OCI) of Hoeffding and McDiarmid type. Surprisingly, these results show that uncertainties in input parameters, which propagate to output uncertainties in the classical sensitivity analysis paradigm, may fail to do so if the transfer functions (or probability distributions) are imperfectly known. We show how, for hierarchical structures, this phenomenon may lead to the non-propagation of uncertainties or information across scales. In addition, a general algorithmic framework is developed for OUQ and is tested on the Caltech surrogate model for hypervelocity impact and on the seismic safety assessment of truss structures, suggesting the feasibility of the framework for important complex systems. The introduction of this paper provides both an overview of the paper and a self-contained mini-tutorial about basic concepts and issues of UQ. Comment: 90 pages. Accepted for publication in SIAM Review (Expository Research Papers); see SIAM Review for higher-quality figures.
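    For context, the sketch below works through the classical McDiarmid inequality that the abstract's Optimal Concentration Inequalities refine: given the mean performance margin M (mean output minus the failure threshold) and the componentwise bounded-differences "subdiameters" D_i, it bounds the probability of failure by exp(-2 M^2 / sum_i D_i^2). The numbers used are illustrative, not taken from the paper's case studies.

```python
# Classical McDiarmid upper bound on the probability of failure,
# P[f(X) <= threshold] <= exp(-2 M^2 / sum_i D_i^2) for M = E[f(X)] - threshold >= 0.
import math

def mcdiarmid_failure_bound(mean_margin, subdiameters):
    """McDiarmid bound given the mean margin and bounded-differences constants."""
    if mean_margin <= 0.0:
        return 1.0  # no margin: the bound is vacuous
    return math.exp(-2.0 * mean_margin ** 2 / sum(d ** 2 for d in subdiameters))

# Example: the mean performance exceeds the threshold by 1.5 units, and the
# three input parameters can each shift the output by at most 1.0, 0.5, 2.0.
print(mcdiarmid_failure_bound(1.5, [1.0, 0.5, 2.0]))  # about 0.42
```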