
    Next-to-leading order QCD corrections to high-p_T pion production in longitudinally polarized pp collisions

    We present a calculation of single-inclusive large-p_T pion production in longitudinally polarized pp collisions at next-to-leading order in QCD. We choose an approach in which fully analytical expressions for the underlying partonic hard-scattering cross sections are obtained. We simultaneously rederive the corresponding corrections to unpolarized scattering and confirm the results existing in the literature. Our results allow us to calculate the double-spin asymmetry A_LL^pi for this process at next-to-leading order, which will soon be used at BNL-RHIC to measure the polarization of gluons in the nucleon.
    Comment: 23 pages, LaTeX, 6 figures as eps files
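
    For orientation, the double-spin asymmetry mentioned above is conventionally defined as the ratio of the polarized to the unpolarized cross section (a standard definition, not restated in the abstract):

        A_LL^pi = d\Delta\sigma / d\sigma = (d\sigma_{++} - d\sigma_{+-}) / (d\sigma_{++} + d\sigma_{+-}),

    where ++ and +- denote equal and opposite helicities of the colliding protons. At next-to-leading order, numerator and denominator are built from the polarized and unpolarized partonic cross sections convoluted with the corresponding parton distributions and pion fragmentation functions.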

    High-Dimensional Density Ratio Estimation with Extensions to Approximate Likelihood Computation

    The ratio between two probability density functions is an important component of various tasks, including selection bias correction, novelty detection and classification. Recently, several estimators of this ratio have been proposed. Most of these methods fail if the sample space is high-dimensional, and hence require a dimension reduction step, the result of which can be a significant loss of information. Here we propose a simple-to-implement, fully nonparametric density ratio estimator that expands the ratio in terms of the eigenfunctions of a kernel-based operator; these functions reflect the underlying geometry of the data (e.g., submanifold structure), often leading to better estimates without an explicit dimension reduction step. We show how our general framework can be extended to address another important problem, the estimation of a likelihood function in situations where that function cannot be well approximated by an analytical form. One is often faced with this situation when performing statistical inference with data from the sciences, due to the complexity of the data and of the processes that generated those data. We emphasize applications where using existing likelihood-free methods of inference would be challenging due to the high dimensionality of the sample space, but where our spectral series method yields a reasonable estimate of the likelihood function. We provide theoretical guarantees and illustrate the effectiveness of our proposed method with numerical experiments.
    Comment: With supplementary material
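
    The spectral series construction described above can be illustrated compactly: eigenfunctions of a kernel operator are estimated from the denominator sample (Gram-matrix eigendecomposition plus a Nystrom extension), and the expansion coefficients of the ratio are simply sample means of those eigenfunctions over the numerator sample. The Python sketch below is only a minimal illustration of that idea under assumed choices (Gaussian kernel, fixed bandwidth, fixed truncation level, synthetic data); it is not the authors' implementation and omits tuning and the likelihood-estimation extension.

        import numpy as np

        def spectral_density_ratio(x_num, x_den, bandwidth=1.0, n_terms=10):
            """Minimal sketch of a spectral-series estimate of beta(x) = f(x)/g(x),
            given samples x_num ~ f and x_den ~ g (arrays of shape (n, d))."""
            n = len(x_den)

            def kernel(a, b):
                # Gaussian kernel matrix between the rows of a and the rows of b
                d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
                return np.exp(-d2 / (2.0 * bandwidth ** 2))

            K = kernel(x_den, x_den)                     # n x n Gram matrix on the g-sample
            eigval, eigvec = np.linalg.eigh(K)
            keep = np.argsort(eigval)[::-1][:n_terms]    # leading eigenpairs
            eigval, eigvec = eigval[keep], eigvec[:, keep]

            def eigenfunctions(x):
                # Nystrom extension: psi_j(x) ~ (sqrt(n) / lambda_j) * sum_i k(x, x_i) v_{ij}
                return kernel(x, x_den) @ eigvec * (np.sqrt(n) / eigval)

            coeffs = eigenfunctions(x_num).mean(axis=0)  # beta_j ~ E_f[ psi_j(X) ]

            def ratio(x):
                return eigenfunctions(x) @ coeffs        # truncated series for f(x)/g(x)

            return ratio

        # Toy usage: ratio of two Gaussian densities in 5 dimensions
        rng = np.random.default_rng(0)
        f_sample = rng.normal(0.3, 1.0, size=(500, 5))
        g_sample = rng.normal(0.0, 1.0, size=(500, 5))
        beta_hat = spectral_density_ratio(f_sample, g_sample, bandwidth=2.0)
        print(beta_hat(np.zeros((1, 5))))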

    Prototype selection for parameter estimation in complex models

    Parameter estimation in astrophysics often requires the use of complex physical models. In this paper we study the problem of estimating the parameters that describe star formation history (SFH) in galaxies. Here, high-dimensional spectral data from galaxies are appropriately modeled as linear combinations of physical components, called simple stellar populations (SSPs), plus some nonlinear distortions. Theoretical data for each SSP are produced for a fixed parameter vector via computer modeling. Though the parameters that define each SSP are continuous, optimizing the signal model over a large set of SSPs on a fine parameter grid is computationally infeasible and inefficient. The goal of this study is to estimate the set of parameters that describes the SFH of each galaxy. These target parameters, such as the average ages and chemical compositions of the galaxy's stellar populations, are derived from the SSP parameters and the component weights in the signal model. Here, we introduce a principled approach to choosing a small basis of SSP prototypes for SFH parameter estimation. The basic idea is to quantize the vector space and effective support of the model components. In addition to greater computational efficiency, we achieve better estimates of the SFH target parameters. In simulations, our proposed quantization method obtains a substantial improvement in estimating the target parameters over the common method of employing a parameter grid. Sparse coding techniques are not appropriate for this problem without proper constraints, while constrained sparse coding methods perform poorly for parameter estimation because their objective is signal reconstruction, not estimation of the target parameters.
    Comment: Published at http://dx.doi.org/10.1214/11-AOAS500 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)
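
    The quantization idea above can be sketched in a few lines: cluster the large library of model components, keep one representative (prototype) per cluster, fit each observed spectrum as a nonnegative combination of the prototypes, and read off target parameters from the fitted weights. The Python sketch below uses synthetic data and hypothetical names (ssp_library, ssp_ages, and so on); it illustrates the general idea, not the estimator studied in the paper, and it omits the nonlinear distortions mentioned in the abstract.

        import numpy as np
        from scipy.optimize import nnls
        from sklearn.cluster import KMeans

        # Synthetic stand-in for an SSP template library: each row is one SSP
        # spectrum on a fine parameter grid, with a (log) age attached to it.
        rng = np.random.default_rng(1)
        n_templates, n_wavelengths = 1000, 300
        ssp_library = np.abs(rng.normal(size=(n_templates, n_wavelengths)))
        ssp_ages = rng.uniform(6.0, 10.0, size=n_templates)   # log10(age / yr)

        # 1. Quantize the template space: cluster the SSP spectra and keep, for
        #    each cluster, the library spectrum closest to its centroid.
        n_prototypes = 30
        km = KMeans(n_clusters=n_prototypes, n_init=10, random_state=0).fit(ssp_library)
        proto_idx = np.array([
            np.argmin(((ssp_library - c) ** 2).sum(axis=1)) for c in km.cluster_centers_
        ])
        prototypes = ssp_library[proto_idx]          # (n_prototypes, n_wavelengths)

        # 2. Fit a galaxy spectrum as a nonnegative combination of prototypes and
        #    derive a target parameter (here: a light-weighted mean log age).
        galaxy = 0.7 * ssp_library[10] + 0.3 * ssp_library[500]
        weights, _ = nnls(prototypes.T, galaxy)
        mean_log_age = np.average(ssp_ages[proto_idx], weights=weights)
        print(mean_log_age)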

    Electrostatic Field Classifier for Deficient Data

    This paper investigates the suitability of recently developed models based on physical field phenomena for classification problems with incomplete datasets. An original approach to exploiting incomplete training data with missing features and labels, involving extensive use of an electrostatic charge analogy, has been proposed. Classification of incomplete patterns has been investigated using a local dimensionality reduction technique, which aims at exploiting all available information rather than trying to estimate the missing values. The performance of all proposed methods has been tested on a number of benchmark datasets for a wide range of missing data scenarios and compared to the performance of some standard techniques. Several modifications of the original electrostatic field classifier, aimed at improving speed and robustness in higher-dimensional spaces, are also discussed.
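
    A toy version of the electrostatic charge analogy, and of classifying an incomplete pattern using only the subspace of its observed features, might look as follows in Python. This is a simplified sketch under assumed choices (unit charges, an inverse-square attraction, NaN-coded missing values); it is not the classifier developed in the paper and ignores its treatment of incomplete training data, missing labels, and the speed and robustness modifications.

        import numpy as np

        def electrostatic_classify(x, train_X, train_y, exponent=2.0):
            """Toy sketch: each training point acts as a unit charge of its class,
            and the query point is assigned to the class exerting the strongest
            aggregate pull. Missing features in the query (NaNs) are dropped, so
            distances are computed in the locally observed subspace only."""
            observed = ~np.isnan(x)                     # use only observed dimensions
            diffs = train_X[:, observed] - x[observed]
            dist = np.sqrt((diffs ** 2).sum(axis=1)) + 1e-12
            strength = 1.0 / dist ** exponent           # Coulomb-like attraction
            classes = np.unique(train_y)
            pulls = [strength[train_y == c].sum() for c in classes]
            return classes[int(np.argmax(pulls))]

        # Toy usage with a missing feature in the query point
        rng = np.random.default_rng(2)
        X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(3, 1, (50, 3))])
        y = np.array([0] * 50 + [1] * 50)
        query = np.array([2.8, np.nan, 3.1])
        print(electrostatic_classify(query, X, y))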

    High harmonic generation from Bloch electrons in solids

    We study the generation of high harmonic radiation by Bloch electrons in a model transparent solid driven by a strong mid-infrared laser field. We solve the single-electron time-dependent Schrödinger equation (TDSE) using a velocity-gauge method [New J. Phys. 15, 013006 (2013)] that is numerically stable as the laser intensity and the number of energy bands are increased. The resulting harmonic spectrum exhibits a primary plateau due to the coupling of the valence band to the first conduction band, with a cutoff energy that scales linearly with field strength and laser wavelength. We also find a weaker second plateau due to coupling to higher-lying conduction bands, with a cutoff that is also approximately linear in the field strength. To facilitate the analysis of the time-frequency characteristics of the emitted harmonics, we also solve the TDSE in a time-dependent basis set, the Houston states [Phys. Rev. B 33, 5494 (1986)], which allows us to separate inter-band and intra-band contributions to the time-dependent current. We find that the inter-band and intra-band contributions display very different time-frequency characteristics. We show that solutions in these two bases are equivalent under a unitary transformation but that, unlike the velocity-gauge method, the Houston-state treatment is numerically unstable when more than a few low-lying energy bands are used.
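
    For orientation, the velocity-gauge propagation referred to above amounts, in a generic band-basis form (atomic units; a textbook-style statement, not a transcription of the cited papers' working equations), to expanding the wavefunction in field-free Bloch states of fixed crystal momentum k and solving

        i d/dt c_n(k,t) = sum_m [ E_n(k) delta_nm + A(t) p_nm(k) ] c_m(k,t),

    up to a purely time-dependent A(t)^2/2 phase, where E_n(k) are the band energies, p_nm(k) the momentum matrix elements, and A(t) the vector potential. The crystal momentum k is conserved in this gauge, and the harmonic spectrum follows from the time-dependent current built from the coefficients c_n(k,t).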

    Hard diffractive electroproduction of two pions

    We have calculated the leading-order amplitude for hard diffractive electroproduction of two pions in lepton-nucleon scattering. At the leading-twist level a pion pair can be produced only in an isospin-one or isospin-zero state. We have shown that isoscalar states are produced dominantly for x_{Bj} > 0.3 and with an invariant mass of the two pions close to threshold (S-wave) and in the f_2 resonance region (D-wave). These isoscalar pion pairs are dominantly produced by two collinear gluons. By comparing the production of charged and neutral pion pairs as a function of x_{Bj} and m_pipi, one can obtain information about the gluonic component of the two-pion distribution amplitudes.
    Comment: Estimates of angular distributions are added
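
    The sensitivity of the charged-to-neutral comparison can be made explicit: in the usual isospin decomposition of the two-pion distribution amplitudes (a standard relation, not restated in the abstract), Phi^{pi+ pi-} = Phi_{I=0} + Phi_{I=1} while Phi^{pi0 pi0} = Phi_{I=0}, since two neutral pions cannot form an isovector state. The pi0 pi0 channel therefore isolates the isoscalar part, which the abstract identifies as dominantly gluon-induced.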