
    Standard model explanation of a CDF dijet excess in Wjj

    We demonstrate that the recent observation of a peak in the dijet invariant mass of the Wjj signal by the CDF Collaboration can be explained as the same upward fluctuation observed by CDF in single-top-quark production. In general, both t-channel and s-channel single-top-quark production produce kinematically induced peaks in the dijet spectrum. Since CDF used a Monte Carlo simulation, rather than data, to subtract the single-top backgrounds, a peak in the dijet spectrum is expected. The D0 Collaboration has a small upward fluctuation in their published t-channel data; hence we predict they would see at most a small peak in the dijet invariant mass spectrum of Wjj if they follow the same procedure as CDF.
    Comment: 3 pages, 2 figures, RevTeX; minor clarifications; to appear in Phys. Rev.

    Angular correlations in single-top-quark and Wjj production at next-to-leading order

    I demonstrate that the correlated angular distributions of final-state particles in both single-top-quark production and the dominant Wjj backgrounds can be reliably predicted. Using these fully correlated angular distributions, I propose a set of cuts that improves the single-top-quark discovery significance by 25%, and the signal-to-background ratio by a factor of 3, with very little theoretical uncertainty. Up to a subtlety in t-channel single-top-quark production, leading-order matrix elements are shown to be sufficient to reproduce the next-to-leading-order correlated distributions.
    Comment: 22 pages, 23 figures, RevTeX4; fixed typos; to appear in Phys. Rev.

    Strong convergence rates of probabilistic integrators for ordinary differential equations

    Probabilistic integration of a continuous dynamical system is a way of systematically introducing model error, at scales no larger than the errors introduced by standard numerical discretisation, in order to enable thorough exploration of possible responses of the system to inputs. It is thus a potentially useful approach in a number of applications such as forward uncertainty quantification, inverse problems, and data assimilation. We extend the convergence analysis of probabilistic integrators for deterministic ordinary differential equations, as proposed by Conrad et al. (Stat. Comput., 2017), to establish mean-square convergence in the uniform norm on discrete- or continuous-time solutions under relaxed regularity assumptions on the driving vector fields and their induced flows. Specifically, we show that randomised high-order integrators for globally Lipschitz flows and randomised Euler integrators for dissipative vector fields with polynomially bounded local Lipschitz constants all have the same mean-square convergence rate as their deterministic counterparts, provided that the variance of the integration noise is not of higher order than the corresponding deterministic integrator. These and similar results are proven for probabilistic integrators where the random perturbations may be state-dependent, non-Gaussian, or non-centred random variables.
    Comment: 25 pages
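    The randomised Euler scheme discussed in the abstract can be sketched in a few lines. This is a generic illustration, not the authors' code; the perturbation scale `h**1.5` (standard deviation half an order higher than the method's order, so the noise variance is not of higher order than the deterministic error) and the Gaussian, state-independent noise are assumptions consistent with the rate-preservation condition stated above.

    ```python
    import numpy as np

    def randomized_euler(f, y0, t_end, h, noise_scale=1.0, rng=None):
        """One sample path of a randomised Euler integrator.

        After each deterministic Euler step, a centred Gaussian
        perturbation with standard deviation noise_scale * h**1.5 is
        added, so the injected noise does not dominate the method's
        deterministic discretisation error and the mean-square
        convergence rate of Euler's method is retained.
        """
        rng = np.random.default_rng(0) if rng is None else rng
        t, y = 0.0, float(y0)
        while t < t_end - 1e-12:
            y = y + h * f(t, y) + noise_scale * h**1.5 * rng.standard_normal()
            t += h
        return y

    # Linear test problem y' = -y, y(0) = 1, with exact solution exp(-t).
    approx = randomized_euler(lambda t, y: -y, 1.0, 1.0, 0.01)
    error = abs(approx - np.exp(-1.0))
    ```

    For this step size the deterministic Euler error is of order 1e-3 and the accumulated noise has standard deviation of order 1e-2, so a single realisation stays close to the exact solution.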

    The Optimal Uncertainty Algorithm in the Mystic Framework

    We have recently proposed a rigorous framework for Uncertainty Quantification (UQ) in which the UQ objectives and the assumptions/information set are brought to the forefront, providing a framework for the communication and comparison of UQ results. In particular, this framework does not implicitly impose inappropriate assumptions, nor does it repudiate relevant information. This framework, which we call Optimal Uncertainty Quantification (OUQ), is based on the observation that, given a set of assumptions and information, there exist bounds on uncertainties obtained as values of optimization problems, and that these bounds are optimal. It provides a uniform environment for the optimal solution of the problems of validation, certification, experimental design, reduced-order modeling, prediction, and extrapolation, all under aleatoric and epistemic uncertainties. OUQ optimization problems are extremely large, and even though under general conditions they have finite-dimensional reductions, they must often be solved numerically. This general algorithmic framework for OUQ has been implemented in the mystic optimization framework. We describe this implementation, and demonstrate its use in the context of the Caltech surrogate model for hypervelocity impact.

    Optimal Uncertainty Quantification

    We propose a rigorous framework for Uncertainty Quantification (UQ) in which the UQ objectives and the assumptions/information set are brought to the forefront. This framework, which we call Optimal Uncertainty Quantification (OUQ), is based on the observation that, given a set of assumptions and information about the problem, there exist optimal bounds on uncertainties: these are obtained as extreme values of well-defined optimization problems corresponding to extremizing probabilities of failure, or of deviations, subject to the constraints imposed by the scenarios compatible with the assumptions and information. In particular, this framework does not implicitly impose inappropriate assumptions, nor does it repudiate relevant information. Although OUQ optimization problems are extremely large, we show that under general conditions they have finite-dimensional reductions. As an application, we develop Optimal Concentration Inequalities (OCI) of Hoeffding and McDiarmid type. Surprisingly, contrary to the classical sensitivity analysis paradigm, these results show that uncertainties in input parameters do not necessarily propagate to output uncertainties. In addition, a general algorithmic framework is developed for OUQ and is tested on the Caltech surrogate model for hypervelocity impact, suggesting the feasibility of the framework for important complex systems.
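    For context, the classical McDiarmid inequality that the OCI results sharpen is simple to evaluate. The sketch below is the textbook bound, not the paper's optimal bound (which is the value of the optimization problem described above):

    ```python
    import math

    def mcdiarmid_tail_bound(oscillations, t):
        """Classical McDiarmid bound:
        P(f(X) - E[f(X)] >= t) <= exp(-2 t**2 / sum_i c_i**2),
        where c_i bounds the change in f when only the i-th
        independent input coordinate is varied.
        """
        d2 = sum(c * c for c in oscillations)
        return math.exp(-2.0 * t * t / d2)

    # With a single input of oscillation 1, the bound at t = 1 is exp(-2).
    bound = mcdiarmid_tail_bound([1.0], 1.0)
    ```

    The OUQ point is that this bound treats only the oscillations c_i as information; adding further assumptions shrinks the feasible set of scenarios and can only tighten the optimal value.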

    Curve fits of predicted inviscid stagnation-point radiative heating rates, cooling factors, and shock standoff distances for hyperbolic earth entry

    Curve-fit formulas are presented for the stagnation-point radiative heating rate, cooling factor, and shock standoff distance for inviscid flow over blunt bodies at conditions corresponding to high-speed earth entry. The data which were curve fitted were calculated by using a technique which utilizes a one-strip integral method and a detailed nongray radiation model to generate a radiatively coupled flow-field solution for air in chemical and local thermodynamic equilibrium. The range of free-stream parameters considered was altitudes from about 55 to 70 km and velocities from about 11 to 16 km/sec. Spherical bodies with nose radii from 30 to 450 cm and elliptical bodies with major-to-minor axis ratios of 2, 4, and 6 were treated. Power-law formulas are proposed and a least-squares logarithmic fit is used to evaluate the constants. It is shown that the data can be described in this manner with an average deviation of about 3 percent (or less) and a maximum deviation of about 10 percent (or less). The curve-fit formulas provide an effective and economic means for making preliminary design studies for situations involving high-speed earth entry.
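    A least-squares logarithmic fit of a power-law formula of the kind described above can be sketched as follows. This is a generic illustration with made-up data, not the report's actual fits or coefficients:

    ```python
    import numpy as np

    def fit_power_law(x, y):
        """Fit y = a * x**b by linear least squares in log-log space,
        since log y = log a + b * log x is linear in log x.
        """
        slope, intercept = np.polyfit(np.log(x), np.log(y), 1)
        return np.exp(intercept), slope

    # Recover a known power law from synthetic, noise-free data.
    x = np.array([1.0, 2.0, 4.0, 8.0])
    y = 3.0 * x**1.5
    a, b = fit_power_law(x, y)
    ```

    Fitting in log space minimizes relative rather than absolute error, which is consistent with quoting the fit quality as a percent deviation.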

    Optimal uncertainty quantification for legacy data observations of Lipschitz functions

    We consider the problem of providing optimal uncertainty quantification (UQ), and hence rigorous certification, for partially observed functions. We present a UQ framework within which the observations may be small or large in number, and need not carry information about the probability distribution of the system in operation. The UQ objectives are posed as optimization problems, the solutions of which are optimal bounds on the quantities of interest; we consider two typical settings, namely parameter sensitivities (McDiarmid diameters) and output deviation (or failure) probabilities. The solutions of these optimization problems depend non-trivially (even non-monotonically and discontinuously) upon the specified legacy data. Furthermore, the extreme values are often determined by only a few members of the data set; in our principal physically motivated example, the bounds are determined by just 2 out of 32 data points, and the remainder carry no information and could be neglected without changing the final answer. We propose an analogue of the simplex algorithm from linear programming that uses these observations to offer efficient and rigorous UQ for high-dimensional systems with high-cardinality legacy data. These findings suggest natural methods for selecting optimal (maximally informative) next experiments.
    Comment: 38 pages
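    The observation that only a few data points determine the extreme values can be illustrated with the pointwise envelope of a Lipschitz function consistent with legacy observations. This is a generic one-dimensional sketch, not the paper's simplex-type algorithm:

    ```python
    def lipschitz_envelope(data, L, x):
        """Tightest pointwise bounds at x on an L-Lipschitz function f
        known only through observations data = [(x_i, f(x_i))].
        Each bound is attained by a single 'active' data point; the
        remaining observations are redundant at that x.
        """
        upper = min(yi + L * abs(x - xi) for xi, yi in data)
        lower = max(yi - L * abs(x - xi) for xi, yi in data)
        return lower, upper

    # Midway between two nearby observations of a 1-Lipschitz function,
    # the envelope is pinned entirely by those two points; the distant
    # third observation carries no information at x = 0.5.
    lo, hi = lipschitz_envelope([(0.0, 0.0), (1.0, 1.0), (10.0, 1.0)], 1.0, 0.5)
    ```

    Here the lower and upper bounds coincide at 0.5, determined by just two of the three data points, mirroring the 2-out-of-32 behaviour reported above.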

    Structural templating as a route to improved photovoltaic performance in copper phthalocyanine/fullerene (C60) heterojunctions

    We have developed a method to improve the short-circuit current density in copper phthalocyanine (CuPc)/fullerene (C60) organic solar cells by ~60% by modifying the CuPc crystal orientation through use of a molecular interlayer to maximize charge transport in the direction between the two electrodes. Powder x-ray diffraction and electronic absorption spectroscopy show that a thin 3,4,9,10-perylenetetracarboxylic dianhydride interlayer deposited before CuPc growth templates the CuPc film structure, forcing the molecules to lie flat with respect to the substrate surface, although the intrastack orientation is unaffected. This modified stacking configuration facilitates charge transport and improves charge collection.