
    The Optimal Uncertainty Algorithm in the Mystic Framework

    We have recently proposed a rigorous framework for Uncertainty Quantification (UQ) in which the UQ objectives and the assumptions/information set are brought to the forefront, providing a basis for the communication and comparison of UQ results. In particular, this framework does not implicitly impose inappropriate assumptions, nor does it repudiate relevant information. This framework, which we call Optimal Uncertainty Quantification (OUQ), is based on the observation that, given a set of assumptions and information, there exist bounds on uncertainties obtained as values of optimization problems, and that these bounds are optimal. It provides a uniform environment for the optimal solution of the problems of validation, certification, experimental design, reduced-order modeling, prediction, and extrapolation, all under aleatoric and epistemic uncertainties. OUQ optimization problems are extremely large, and even though under general conditions they have finite-dimensional reductions, they must often be solved numerically. This general algorithmic framework for OUQ has been implemented in the mystic optimization framework. We describe this implementation and demonstrate its use in the context of the Caltech surrogate model for hypervelocity impact.
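
    To make the shape of such an OUQ problem concrete, the sketch below (a toy illustration, not the mystic implementation and not the Caltech surrogate) bounds a failure probability over all probability measures on [0, 1] with a prescribed mean, using the finite-dimensional reduction to two-point measures and a derivative-free scipy optimizer; the threshold, mean, and penalty weight are hypothetical.

        import numpy as np
        from scipy.optimize import differential_evolution

        # Hypothetical OUQ toy problem: X takes values in [0, 1], the only information is
        # the mean E[X] = 0.5, and "failure" means X >= 0.8.  By the OUQ reduction theorems,
        # an extremizing measure needs at most two support points, so we optimize over
        # (x1, x2, w) parameterizing P = w*delta_{x1} + (1 - w)*delta_{x2}.
        MEAN, THRESHOLD, PENALTY = 0.5, 0.8, 1e3

        def neg_failure_probability(p):
            x1, x2, w = p
            prob = w * (x1 >= THRESHOLD) + (1.0 - w) * (x2 >= THRESHOLD)
            mean_violation = abs(w * x1 + (1.0 - w) * x2 - MEAN)
            return -prob + PENALTY * mean_violation   # penalize violating the moment constraint

        result = differential_evolution(neg_failure_probability, bounds=[(0, 1)] * 3, seed=0, tol=1e-10)
        print("upper bound on P[X >= 0.8]:", -result.fun)   # analytic optimum: 0.5 / 0.8 = 0.625

    The penalty handling of the moment constraint may leave the numerical answer slightly off the analytic optimum; in the actual OUQ workflow the constraints are handled by the solver framework itself.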

    Optimal Uncertainty Quantification

    We propose a rigorous framework for Uncertainty Quantification (UQ) in which the UQ objectives and the assumptions/information set are brought to the forefront. This framework, which we call Optimal Uncertainty Quantification (OUQ), is based on the observation that, given a set of assumptions and information about the problem, there exist optimal bounds on uncertainties: these are obtained as extreme values of well-defined optimization problems corresponding to extremizing probabilities of failure, or of deviations, subject to the constraints imposed by the scenarios compatible with the assumptions and information. In particular, this framework does not implicitly impose inappropriate assumptions, nor does it repudiate relevant information. Although OUQ optimization problems are extremely large, we show that under general conditions, they have finite-dimensional reductions. As an application, we develop Optimal Concentration Inequalities (OCI) of Hoeffding and McDiarmid type. Surprisingly, contrary to the classical sensitivity analysis paradigm, these results show that uncertainties in input parameters do not necessarily propagate to output uncertainties. In addition, a general algorithmic framework is developed for OUQ and is tested on the Caltech surrogate model for hypervelocity impact, suggesting the feasibility of the framework for important complex systems.
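
    For reference, the classical (non-optimal) McDiarmid inequality that the OCI results sharpen can be evaluated directly from the componentwise subdiameters; a minimal sketch with hypothetical subdiameters follows. The optimal OUQ bounds from the paper are generally tighter and are not reproduced here.

        import numpy as np

        def mcdiarmid_bound(subdiameters, t):
            """Classical McDiarmid bound P[f(X) - E f(X) >= t] <= exp(-2 t^2 / sum(D_i^2)),
            where D_i is the subdiameter of f in coordinate i.  OUQ's Optimal Concentration
            Inequalities optimize over all measures and functions consistent with the same
            subdiameters; this function is only the classical baseline."""
            D = np.asarray(subdiameters, dtype=float)
            return np.exp(-2.0 * t**2 / np.sum(D**2))

        # Hypothetical subdiameters for a three-parameter response and a performance margin t.
        print(mcdiarmid_bound([1.0, 0.5, 0.1], t=1.0))   # ~0.204

    Note how the bound is dominated by the largest subdiameter: when one input's influence dwarfs the others, the remaining input uncertainties contribute almost nothing, consistent with the paper's observation that input uncertainties do not necessarily propagate to the output.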

    Higher-order triplet interaction in energy-level modeling of excited-state absorption for an expanded porphyrin cadmium complex

    Recent measurements of transmission versus fluence for a methanol-solvated asymmetric pentaazadentate porphyrin-like (APPC) cadmium complex, [(C6H4-APPC)Cd]Cl, showed the limitations of current energy-level models in predicting the transmission behavior of organic reverse saturable absorbers at fluences greater than 1 J/cm². A new model has been developed that incorporates higher-order triplet processes and accurately fits both nanosecond and picosecond transmission-versus-fluence data. This model has provided the first known determination of a higher triplet excited-state absorption cross section and lifetime for an APPC complex and also described a previously unreported feature in the transmission-versus-fluence data. The intersystem crossing rate and the previously neglected higher triplet excited-state absorption cross section are shown to govern the excited-state population dynamics of methanol-solvated [(C6H4-APPC)Cd]Cl most strongly at more practical device energies.
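
    The kind of energy-level model described above can be written as coupled population rate equations; the sketch below is a schematic four-level (S0, S1, T1, Tn) reverse-saturable-absorber model with intersystem crossing and higher-triplet excited-state absorption, using placeholder cross sections, rates, and pump flux rather than the fitted values reported for [(C6H4-APPC)Cd]Cl.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Schematic four-level model: ground-state absorption (S0 -> S1), intersystem
        # crossing (S1 -> T1), and higher-triplet excited-state absorption (T1 -> Tn).
        # All parameter values are hypothetical placeholders, not the paper's fitted values.
        sigma_S0, sigma_T1 = 3e-18, 1e-17            # absorption cross sections (cm^2)
        k_isc = 1e9                                  # intersystem crossing rate S1 -> T1 (1/s)
        tau_S1, tau_T1, tau_Tn = 1e-9, 1e-6, 1e-12   # level lifetimes (s)
        photon_flux = 1e25                           # constant pump flux, photons / (cm^2 s)

        def rates(t, n):
            S0, S1, T1, Tn = n
            dS0 = -sigma_S0 * photon_flux * S0 + S1 / tau_S1 + T1 / tau_T1
            dS1 = sigma_S0 * photon_flux * S0 - S1 / tau_S1 - k_isc * S1
            dT1 = k_isc * S1 - sigma_T1 * photon_flux * T1 + Tn / tau_Tn - T1 / tau_T1
            dTn = sigma_T1 * photon_flux * T1 - Tn / tau_Tn
            return [dS0, dS1, dT1, dTn]

        # Stiff system (fast Tn relaxation, fast ISC), so use LSODA; populations are fractions.
        sol = solve_ivp(rates, (0.0, 5e-9), [1.0, 0.0, 0.0, 0.0], method="LSODA")
        print("fractional populations (S0, S1, T1, Tn) at 5 ns:", sol.y[:, -1])

    In such a model the intersystem crossing rate and the T1 -> Tn cross section set how quickly population accumulates in the absorbing triplet manifold, which is the mechanism the abstract identifies as governing the dynamics at device-relevant fluences.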

    Optimal uncertainty quantification for legacy data observations of Lipschitz functions

    We consider the problem of providing optimal uncertainty quantification (UQ), and hence rigorous certification, for partially-observed functions. We present a UQ framework within which the observations may be small or large in number, and need not carry information about the probability distribution of the system in operation. The UQ objectives are posed as optimization problems, the solutions of which are optimal bounds on the quantities of interest; we consider two typical settings, namely parameter sensitivities (McDiarmid diameters) and output deviation (or failure) probabilities. The solutions of these optimization problems depend non-trivially (even non-monotonically and discontinuously) upon the specified legacy data. Furthermore, the extreme values are often determined by only a few members of the data set; in our principal physically-motivated example, the bounds are determined by just 2 out of 32 data points, and the remainder carry no information and could be neglected without changing the final answer. We propose an analogue of the simplex algorithm from linear programming that uses these observations to offer efficient and rigorous UQ for high-dimensional systems with high-cardinality legacy data. These findings suggest natural methods for selecting optimal (maximally informative) next experiments.
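
    A minimal illustration of how legacy observations of a Lipschitz function constrain its values, and of why only a few data points can be active in the resulting bounds, is the pair of pointwise envelopes sketched below; the data, Lipschitz constant, and query points are hypothetical.

        import numpy as np

        # For an L-Lipschitz function f known only through legacy observations (x_i, y_i),
        # every admissible f lies between the envelopes below.  At any query point the
        # extreme values are typically set by one or two observations, mirroring the
        # paper's finding that most of the legacy data can be inactive.
        def lipschitz_envelopes(x_query, x_data, y_data, L):
            d = np.abs(x_query[:, None] - x_data[None, :])    # pairwise distances
            upper = np.min(y_data[None, :] + L * d, axis=1)   # tightest admissible upper bound
            lower = np.max(y_data[None, :] - L * d, axis=1)   # tightest admissible lower bound
            return lower, upper

        x_data = np.array([0.0, 0.3, 0.7, 1.0])
        y_data = np.array([0.1, 0.4, 0.2, 0.5])
        lower, upper = lipschitz_envelopes(np.linspace(0, 1, 5), x_data, y_data, L=2.0)
        print(np.c_[lower, upper])   # optimal pointwise bounds on f at the query points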

    Machine Learning Changes the Rules for Flux Limiters

    Learning to integrate non-linear equations from highly resolved direct numerical simulations (DNSs) has seen recent interest for reducing the computational load of fluid simulations. Here, we focus on determining a flux limiter for shock-capturing methods. Focusing on flux limiters provides a specific plug-and-play component for existing numerical methods. Since their introduction, an array of flux limiters has been designed. Using the coarse-grained Burgers' equation, we show that flux limiters may be rank-ordered in terms of their log-error relative to high-resolution data. We then develop theory to find an optimal flux limiter and present flux limiters that outperform others tested for integrating Burgers' equation on lattices with 2×, 3×, 4×, and 8× coarse-grainings. We train a continuous piecewise linear limiter by minimizing the mean-squared misfit to 6-grid-point segments of high-resolution data, averaged over all segments. While flux limiters are generally designed to have an output of ϕ(r) = 1 at a flux ratio of r = 1, our limiters are not bound by this rule, and yet produce a smaller error than standard limiters. We find that our machine-learned limiters have distinctive features that may provide new rules of thumb for the development of improved limiters. Additionally, we use our theory to learn flux limiters that outperform standard limiters across a range of values (as opposed to at a specific fixed value) of coarse-graining, number of discretized bins, and diffusion parameter. This demonstrates the ability to produce flux limiters that should be more broadly useful than standard limiters for general applications.
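
    For context, the sketch below evaluates two standard limiters alongside a continuous piecewise-linear limiter of the kind trained in the paper; the knot positions and values are illustrative placeholders rather than the learned coefficients, and the learned-style limiter is deliberately not constrained to ϕ(1) = 1.

        import numpy as np

        # Classical flux limiters phi(r), where r is the ratio of consecutive solution gradients.
        def minmod(r):
            return np.maximum(0.0, np.minimum(1.0, r))

        def van_leer(r):
            return (r + np.abs(r)) / (1.0 + np.abs(r))

        # A continuous piecewise-linear limiter: its value is interpolated between breakpoints.
        # The knots and values below are placeholders, not the paper's learned coefficients.
        knots  = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
        values = np.array([0.0, 0.6, 1.1, 1.4, 1.6])

        def learned_style_limiter(r):
            return np.interp(np.maximum(r, 0.0), knots, values)

        for r in (0.5, 1.0, 2.0):
            print(r, minmod(r), van_leer(r), learned_style_limiter(r))

    Any of these functions can be dropped into an existing MUSCL-type flux reconstruction, which is what makes the limiter a convenient plug-and-play target for learning.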

    Raman spectrometry study of phonon anharmonicity of hafnia at elevated temperatures

    Raman spectra of monoclinic hafnium oxide (HfO_2) were measured at temperatures up to 1100 K. Raman peak shifts and broadenings are reported. Phonon dynamics calculations were performed with the shell model to obtain the total and partial phonon density of states, and to identify the individual motions of Hf and O atoms in the Raman modes. Correlating these motions to the thermal peak shifts and broadenings, it was found that modes involving changes in oxygen-oxygen bond length were the most anharmonic. The hafnium-dominated modes were more quasiharmonic and showed less broadening with temperature. Comparatively, the oxygen-dominated modes were more influenced by the cubic term in the interatomic potential than the hafnium-dominated modes. An approximately quadratic correlation was found between phonon-line broadening and softening.
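
    As a worked illustration of the reported broadening-softening relationship, the sketch below fits a quadratic law, broadening ≈ a·(softening)², through the origin by least squares; the per-mode shift and width values are hypothetical placeholders, not the measured HfO_2 data.

        import numpy as np

        # Hypothetical per-mode Raman peak softenings and broadenings (cm^-1) between room
        # temperature and 1100 K; the measured HfO_2 values are in the paper, not reproduced here.
        softening  = np.array([2.0, 4.0, 6.0, 9.0, 12.0])
        broadening = np.array([1.1, 3.8, 8.5, 19.0, 33.0])

        # Least-squares fit of broadening ~ a * softening^2, constrained through the origin.
        a = np.sum(broadening * softening**2) / np.sum(softening**4)
        print("quadratic coefficient a:", a)
        print("predicted broadening:", a * softening**2)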

    Robust design under uncertainty in quantum error mitigation

    Error mitigation techniques are crucial to achieving near-term quantum advantage. Classical post-processing of quantum computation outcomes is a popular approach for error mitigation, which includes methods such as Zero Noise Extrapolation, Virtual Distillation, and learning-based error mitigation. However, these techniques have limitations due to the propagation of uncertainty resulting from the finite number of quantum measurement shots. To overcome this limitation, we propose general and unbiased methods for quantifying the uncertainty and error of error-mitigated observables by sampling error mitigation outcomes. These methods are applicable to any post-processing-based error mitigation approach. In addition, we present a systematic approach for optimizing the performance and robustness of these error mitigation methods under uncertainty, building on our proposed uncertainty quantification methods. To illustrate the effectiveness of our methods, we apply them to Clifford Data Regression in the ground state of the XY model simulated using IBM's Toronto noise model.
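
    As a simplified illustration of sampling-based uncertainty quantification for post-processing error mitigation, the sketch below bootstraps a Zero Noise Extrapolation estimate from synthetic shot data; it uses linear extrapolation and made-up expectation values rather than the paper's Clifford Data Regression setup on the XY model with IBM's Toronto noise model.

        import numpy as np

        # Zero Noise Extrapolation (linear fit in the noise-scale factor, extrapolated to 0),
        # with the spread of the mitigated value estimated by bootstrap resampling of the shots.
        rng = np.random.default_rng(0)
        scale_factors = np.array([1.0, 2.0, 3.0])
        shots_per_scale = 1000
        noisy_expectations = np.array([0.80, 0.65, 0.50])   # hypothetical <O> at each noise scale

        # Simulate +/-1 measurement outcomes consistent with those expectation values.
        samples = [rng.choice([1.0, -1.0], size=shots_per_scale, p=[(1 + e) / 2, (1 - e) / 2])
                   for e in noisy_expectations]

        def zne_estimate(sample_sets):
            means = np.array([s.mean() for s in sample_sets])
            slope, intercept = np.polyfit(scale_factors, means, 1)
            return intercept                                 # extrapolated zero-noise value

        boot = [zne_estimate([rng.choice(s, size=s.size, replace=True) for s in samples])
                for _ in range(200)]
        print("mitigated value:", zne_estimate(samples), "+/-", np.std(boot))

    The bootstrap spread is what a shot-limited experiment would need to report alongside the mitigated expectation value, and it is the quantity a robust-design procedure would seek to keep small.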

    AtomSim: web-deployed atomistic dynamics simulator

    AtomSim, a collection of interfaces for computational crystallography simulations, has been developed. It uses forcefield-based dynamics through physics engines such as the General Utility Lattice Program (GULP), and can be integrated into larger computational frameworks such as the Virtual Neutron Facility for processing its dynamics into scattering functions, dynamical functions, etc. It is also available as a Google App Engine-hosted, web-deployed interface. Examples of a quartz molecular dynamics run and a hafnium dioxide phonon calculation are presented.
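
    AtomSim's own interfaces are not reproduced here, but the sketch below shows the general pattern of driving a forcefield engine such as GULP from Python by piping an input deck through the executable; the file names are placeholders and the gulp binary is assumed to be on the PATH.

        import subprocess
        from pathlib import Path

        # Minimal sketch of a Python layer wrapping a forcefield engine; illustrative only,
        # not AtomSim's actual API.  'quartz_md.gin' stands in for a user-supplied GULP input
        # deck (structure, potentials, and MD keywords).
        def run_gulp(input_deck: Path, output_file: Path) -> None:
            with input_deck.open() as fin, output_file.open("w") as fout:
                subprocess.run(["gulp"], stdin=fin, stdout=fout, check=True)  # GULP reads stdin, writes stdout

        run_gulp(Path("quartz_md.gin"), Path("quartz_md.gout"))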