
    How good are your fits? Unbinned multivariate goodness-of-fit tests in high energy physics

    Multivariate analyses play an important role in high energy physics. Such analyses often involve performing an unbinned maximum likelihood fit of a probability density function (p.d.f.) to the data. This paper explores a variety of unbinned methods for determining the goodness of fit of the p.d.f. to the data. The application and performance of each method is discussed in the context of a real-life high energy physics analysis (a Dalitz-plot analysis). Several of the methods presented in this paper can also be used for the non-parametric determination of whether two samples originate from the same parent p.d.f. This can be used, e.g., to determine the quality of a detector Monte Carlo simulation without the need for a parametric expression of the efficiency.
    Comment: 32 pages, 12 figures
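    The two-sample application mentioned at the end of the abstract is easy to make concrete. Below is a minimal Python sketch of one standard unbinned two-sample procedure: a permutation test built on the point-to-point dissimilarity (energy) statistic. It illustrates the general idea rather than the specific statistics studied in the paper, and the function names are placeholders.

    import numpy as np

    def energy_statistic(x, y):
        """Point-to-point dissimilarity (energy) statistic between samples
        x of shape (n, d) and y of shape (m, d); larger values indicate
        less similar parent distributions."""
        def mean_pairwise(a, b):
            # Mean Euclidean distance over all pairs (a_i, b_j).
            return np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1).mean()
        return 2.0 * mean_pairwise(x, y) - mean_pairwise(x, x) - mean_pairwise(y, y)

    def permutation_pvalue(x, y, n_perm=500, seed=None):
        """p-value from randomly permuting the sample labels: the fraction of
        permutations whose statistic is at least as extreme as the observed one."""
        rng = np.random.default_rng(seed)
        t_obs = energy_statistic(x, y)
        pooled = np.vstack([x, y])
        n = len(x)
        exceed = 0
        for _ in range(n_perm):
            perm = rng.permutation(len(pooled))
            if energy_statistic(pooled[perm[:n]], pooled[perm[n:]]) >= t_obs:
                exceed += 1
        return (exceed + 1) / (n_perm + 1)

    # Toy usage: two 2-D Gaussian samples with slightly different means.
    rng = np.random.default_rng(1)
    x = rng.normal(0.0, 1.0, size=(200, 2))
    y = rng.normal(0.2, 1.0, size=(200, 2))
    print(permutation_pvalue(x, y, n_perm=200, seed=1))

    Because the null distribution is obtained by permuting labels, the test needs no binning and no parametric model of either sample, which is exactly what the Monte Carlo validation use case above requires.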

    QED and Electroweak Corrections to Deep Inelastic Scattering

    We describe the state of the art in the field of radiative corrections for deep inelastic scattering. Different methods of calculating radiative corrections are reviewed. Some new results for QED radiative corrections to polarized deep inelastic scattering at HERA are presented. A comparison of results obtained with the codes POLRAD and HECTOR is given for the kinematic regime of the HERMES experiment. Recent results on radiative corrections to deep inelastic scattering with tagged photons are briefly discussed.
    Comment: 22 pages, LaTeX, including 6 eps-figures; to appear in the Proceedings of the 3rd International Symposium on Radiative Corrections, Cracow, August 1-5, 1996, Acta Phys. Polonica
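    For orientation (these are standard definitions, not results from the talk): with incoming and outgoing lepton momenta k and k', nucleon momentum p, and momentum transfer q = k - k', the variables in which such corrections are quoted are

    \[
      Q^2 = -q^2, \qquad
      x = \frac{Q^2}{2\,p \cdot q}, \qquad
      y = \frac{p \cdot q}{p \cdot k},
    \]

    and a radiative correction is conventionally reported as the relative shift of the Born cross section,

    \[
      \delta(x, y) = \frac{\mathrm{d}^2\sigma_{\mathrm{obs}} / \mathrm{d}x\,\mathrm{d}y}
                          {\mathrm{d}^2\sigma_{\mathrm{Born}} / \mathrm{d}x\,\mathrm{d}y} - 1,
    \]

    whose dominant leptonic part is enhanced by the large logarithm \ln(Q^2 / m_\ell^2).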

    Non-convex Optimization for Machine Learning

    A vast majority of machine learning algorithms train their models and perform inference by solving optimization problems. In order to capture the learning and prediction problems accurately, structural constraints such as sparsity or low rank are frequently imposed, or else the objective itself is designed to be a non-convex function. This is especially true of algorithms that operate in high-dimensional spaces or that train non-linear models such as tensor models and deep networks. The freedom to express the learning problem as a non-convex optimization problem gives immense modeling power to the algorithm designer, but such problems are often NP-hard to solve. A popular workaround has been to relax non-convex problems to convex ones and use traditional methods to solve the (convex) relaxed optimization problems. However, this approach may be lossy and nevertheless presents significant challenges for large-scale optimization. On the other hand, direct approaches to non-convex optimization have met with resounding success in several domains and remain the methods of choice for practitioners, as they frequently outperform relaxation-based techniques; popular heuristics include projected gradient descent and alternating minimization. However, these heuristics are often poorly understood in terms of their convergence and other properties. This monograph presents a selection of recent advances that bridge a long-standing gap in our understanding of them. It leads the reader through several widely used non-convex optimization techniques, as well as applications thereof. The goal of this monograph is both to introduce the rich literature in this area and to equip the reader with the tools and techniques needed to analyze these simple procedures for non-convex problems.
    Comment: The official publication is available from now publishers via http://dx.doi.org/10.1561/220000005
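    To make the heuristics named above concrete, here is a minimal, self-contained Python sketch of projected gradient descent for a sparsity-constrained least-squares problem (iterative hard thresholding). The example problem and all names are illustrative assumptions, not code from the monograph.

    import numpy as np

    def project_sparse(v, k):
        """Exact Euclidean projection onto the non-convex set {x : ||x||_0 <= k}:
        keep the k largest-magnitude entries and zero out the rest."""
        out = np.zeros_like(v)
        keep = np.argsort(np.abs(v))[-k:]
        out[keep] = v[keep]
        return out

    def projected_gradient_descent(A, b, k, n_iter=200):
        """Minimize 0.5 * ||Ax - b||^2 subject to ||x||_0 <= k."""
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - b)             # gradient of the smooth loss
            x = project_sparse(x - step * grad, k)
        return x

    # Toy usage: recover a 5-sparse signal from noiseless random measurements.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 300)) / np.sqrt(100)
    x_true = np.zeros(300)
    x_true[rng.choice(300, size=5, replace=False)] = rng.standard_normal(5)
    x_hat = projected_gradient_descent(A, A @ x_true, k=5)
    print(np.linalg.norm(x_hat - x_true))        # small when recovery succeeds

    The projection step is where the non-convexity lives: unlike a convex relaxation via the l1 norm, the iteration works directly with the sparsity constraint, and characterizing when such simple updates provably converge is the kind of question the monograph addresses.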