
    Classification without labels: Learning from mixed samples in high energy physics

    Modern machine learning techniques can be used to construct powerful models for difficult collider physics problems. In many applications, however, these models are trained on imperfect simulations due to a lack of truth-level information in the data, which risks the model learning artifacts of the simulation. In this paper, we introduce the paradigm of classification without labels (CWoLa), in which a classifier is trained to distinguish statistical mixtures of classes, which are common in collider physics. Crucially, neither individual labels nor class proportions are required, yet we prove that the optimal classifier in the CWoLa paradigm is also the optimal classifier in the traditional fully-supervised case where all label information is available. After demonstrating the power of this method in an analytical toy example, we consider a realistic benchmark for collider physics: distinguishing quark- versus gluon-initiated jets using mixed quark/gluon training samples. More generally, CWoLa can be applied to any classification problem where labels or class proportions are unknown or simulations are unreliable, but statistical mixtures of the classes are available. Comment: 18 pages, 5 figures; v2: intro extended and references added; v3: additional discussion to match JHEP version.
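
    The heart of the optimality claim fits in a few lines. The sketch below uses our own notation (p_S, p_B for the signal and background densities, f_1 > f_2 for the two mixtures' signal fractions):

```latex
% Mixtures M_1 and M_2 with signal fractions f_1 > f_2:
%   p_{M_i}(x) = f_i p_S(x) + (1 - f_i) p_B(x).
\[
L_{M_1/M_2}(x)
  = \frac{f_1\, p_S(x) + (1-f_1)\, p_B(x)}{f_2\, p_S(x) + (1-f_2)\, p_B(x)}
  = \frac{f_1\, L_{S/B}(x) + (1-f_1)}{f_2\, L_{S/B}(x) + (1-f_2)},
\qquad
\frac{\partial L_{M_1/M_2}}{\partial L_{S/B}}
  = \frac{f_1 - f_2}{\bigl(f_2\, L_{S/B} + 1 - f_2\bigr)^2} > 0 .
\]
```

    Since L_{M_1/M_2} is strictly increasing in L_{S/B}, the two likelihood ratios induce the same decision boundaries, and by the Neyman-Pearson lemma the optimal mixture-versus-mixture classifier is also the optimal signal-versus-background classifier.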

    On similarity solutions of a boundary layer problem with an upstream moving wall

    The problem of a boundary layer on a flat plate which has a constant velocity opposite in direction to that of the uniform mainstream is examined. It was previously shown that the solution of this boundary value problem depends crucially on the parameter which is the ratio of the velocity of the plate to the velocity of the free stream. In particular, it was proved that a solution exists only if this parameter does not exceed a certain critical value, and numerical evidence was adduced to show that this solution is nonunique. Using the Crocco formulation, the present work proves this nonuniqueness. Also considered are the analyticity of solutions and the derivation of upper bounds on the critical value of the wall velocity parameter.
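
    For orientation, the similarity reduction leads to a Blasius-type two-point boundary value problem. The sketch below uses one common normalization; the paper's scalings and sign conventions may differ:

```latex
% Stream function psi = (U nu x)^{1/2} f(eta), with eta = y (U/(nu x))^{1/2}.
\[
f''' + \tfrac{1}{2}\, f f'' = 0, \qquad
f(0) = 0, \qquad f'(0) = -\lambda, \qquad f'(\infty) = 1,
\]
% where lambda > 0 is the wall-to-stream velocity ratio and the minus sign
% encodes the upstream-moving wall.
```

    Reported computations for this problem place the critical velocity ratio near 0.354, with two solution branches below it; proving that nonuniqueness rigorously is the contribution described above.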

    Pileup Mitigation with Machine Learning (PUMML)

    Pileup involves the contamination of the energy distribution arising from the primary collision of interest (leading vertex) by radiation from soft collisions (pileup). We develop a new technique for removing this contamination using machine learning and convolutional neural networks. The network takes as input the energy distributions of charged leading-vertex particles, charged pileup particles, and all neutral particles, and outputs the energy distribution of particles coming from the leading vertex alone. The PUMML algorithm performs remarkably well at eliminating pileup distortion on a wide range of simple and complex jet observables. We test the robustness of the algorithm in a number of ways and discuss how the network can be trained directly on data. Comment: 20 pages, 8 figures, 2 tables. Updated to JHEP version.
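
    As a rough illustration of the input/output structure described here, the following is a minimal PyTorch sketch. The layer sizes, kernel widths, grid resolution, and loss are our own assumptions; only the three-channel input and one-channel output follow the abstract:

```python
# Minimal sketch of a PUMML-style convolutional cleaner (assumptions ours).
import torch
import torch.nn as nn

class PileupCleaner(nn.Module):
    def __init__(self):
        super().__init__()
        # Inputs: 3 "images" on a calorimeter-like grid:
        # charged leading-vertex, charged pileup, and all neutral energy.
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            # Output: 1 image, the predicted neutral leading-vertex energy.
            nn.Conv2d(16, 1, kernel_size=3, padding=1), nn.ReLU(),
        )

    def forward(self, x):      # x: (batch, 3, H, W)
        return self.net(x)     # -> (batch, 1, H, W)

model = PileupCleaner()
x = torch.rand(8, 3, 25, 25)       # toy batch of 25x25-pixel jet images
target = torch.rand(8, 1, 25, 25)  # true neutral leading-vertex energy
loss = nn.functional.mse_loss(model(x), target)  # per-pixel regression loss
loss.backward()
```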

    Learning to Classify from Impure Samples with High-Dimensional Data

    A persistent challenge in practical classification tasks is that labeled training sets are not always available. In particle physics, this challenge is surmounted by the use of simulations. These simulations accurately reproduce most features of data, but cannot be trusted to capture all of the complex correlations exploitable by modern machine learning methods. Recent work in weakly supervised learning has shown that simple, low-dimensional classifiers can be trained using only the impure mixtures present in data. Here, we demonstrate that complex, high-dimensional classifiers can also be trained on impure mixtures using weak supervision techniques, with performance comparable to what could be achieved with pure samples. Using weak supervision will therefore allow us to avoid relying exclusively on simulations for high-dimensional classification. This work opens the door to a new regime in which complex models are trained directly on data, providing direct access to probe the underlying physics. Comment: 6 pages, 2 tables, 2 figures. v2: updated to match PRD version.
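
    A minimal toy of this weak-supervision setup (our own construction on Gaussian pseudo-data, not the paper's benchmark): train on mixture labels only, then evaluate against the true labels:

```python
# CWoLa-style weak supervision on a toy problem: the classifier never sees
# true labels or class fractions, only which mixed sample an event came from.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def sample(n, f_sig):
    """Mixture of 'signal' N(+1, 1) and 'background' N(-1, 1) in 5 dims."""
    is_sig = rng.random(n) < f_sig
    x = rng.normal(np.where(is_sig, 1.0, -1.0)[:, None], 1.0, size=(n, 5))
    return x, is_sig

# Two mixed samples with different (unknown-to-the-classifier) fractions.
x1, _ = sample(20000, 0.7)
x2, _ = sample(20000, 0.3)
X = np.vstack([x1, x2])
y_mix = np.concatenate([np.ones(len(x1)), np.zeros(len(x2))])  # mixture label

clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=200).fit(X, y_mix)

# Evaluate against the *true* labels: the mixture-trained score is
# (asymptotically) monotone in the likelihood ratio, so the AUC is high.
x_test, sig_test = sample(20000, 0.5)
print("AUC vs truth:", roc_auc_score(sig_test, clf.predict_proba(x_test)[:, 1]))
```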

    OmniFold: A Method to Simultaneously Unfold All Observables

    Collider data must be corrected for detector effects ("unfolded") before they can be compared with many theoretical calculations and with measurements from other experiments. Unfolding is traditionally done for individual, binned observables without including all information relevant for characterizing the detector response. We introduce OmniFold, an unfolding method that iteratively reweights a simulated dataset, using machine learning to capitalize on all available information. Our approach is unbinned, works for arbitrarily high-dimensional data, and naturally incorporates information from the full phase space. We illustrate this technique on a realistic jet substructure example from the Large Hadron Collider and compare it to standard binned unfolding methods. This new paradigm enables the simultaneous measurement of all observables, including those not yet invented at the time of the analysis. Comment: 8 pages, 3 figures, 1 table, 1 poem; v2: updated to approximate PRL version.
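
    The following is a heavily simplified schematic of the iterative reweighting on toy Gaussian pseudo-data (our implementation; the paper's networks and datasets are far richer). Each iteration reweights the simulation toward data at detector level, then turns the pulled-back weights into a function of the particle-level event alone:

```python
# Schematic OmniFold-style iteration on toy data (assumptions ours).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 5000
sim_gen = rng.normal(0.0, 1.0, size=(n, 1))             # particle level
sim_det = sim_gen + rng.normal(0.0, 0.5, size=(n, 1))   # detector smearing
truth = rng.normal(0.3, 1.2, size=(n, 1))               # nature != simulation
data_det = truth + rng.normal(0.0, 0.5, size=(n, 1))

def reweight(x_from, x_to, w_from, w_to):
    """Estimate w(x) ~ p_to(x)/p_from(x) via the classifier likelihood-ratio trick."""
    X = np.vstack([x_from, x_to])
    y = np.concatenate([np.zeros(len(x_from)), np.ones(len(x_to))])
    clf = GradientBoostingClassifier().fit(
        X, y, sample_weight=np.concatenate([w_from, w_to]))
    p = np.clip(clf.predict_proba(x_from)[:, 1], 1e-6, 1 - 1e-6)
    return p / (1.0 - p)

w_gen = np.ones(n)
for _ in range(3):  # a few iterations typically suffice in this toy
    # Step 1 (detector level): reweight simulation toward data; the weights
    # pull back to particle level because events are paired.
    w_pulled = w_gen * reweight(sim_det, data_det, w_gen, np.ones(n))
    # Step 2 (particle level): replace the pulled weights by a function of
    # the generated event only, updating the unfolded weights.
    w_gen = w_gen * reweight(sim_gen, sim_gen, w_gen, w_pulled)

print("mean of unfolded gen distribution:",
      np.average(sim_gen[:, 0], weights=w_gen))
```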

    A rescaled method for RBF approximation

    In the recent paper [8], a new method to compute stable kernel-based interpolants has been presented. This rescaled interpolation method combines the standard kernel interpolation with a properly defined rescaling operation, which smooths the oscillations of the interpolant. Although promising, this procedure lacks a systematic theoretical investigation. Through our analysis, this novel method can be understood as standard kernel interpolation by means of a properly rescaled kernel. This point of view allows us to consider its error and stability properties.
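
    Concretely, in our notation: given a kernel K, nodes x_1, ..., x_n, the standard interpolant s_f of f, and the interpolant s_1 of the constant function 1, the construction reads:

```latex
\[
\hat{s}(x) = \frac{s_f(x)}{s_1(x)}, \qquad
s_f(x) = \sum_{j=1}^{n} c_j\, K(x, x_j), \qquad
s_f(x_i) = f(x_i), \qquad s_1(x_i) = 1,
\]
% so that \hat{s}(x_i) = f(x_i); equivalently, \hat{s} is the standard
% interpolant of f for the rescaled kernel
\[
\hat{K}(x, y) = \frac{K(x, y)}{s_1(x)\, s_1(y)} .
\]
```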

    A rescaled method for RBF approximation

    A new method to compute stable kernel-based interpolants has been presented by the second and third authors. This rescaled interpolation method combines the standard kernel interpolation with a properly defined rescaling operation, which smooths the oscillations of the interpolant. Although promising, this procedure lacks a systematic theoretical investigation. Through our analysis, this novel method can be understood as standard kernel interpolation by means of a properly rescaled kernel. This point of view allows us to consider its error and stability properties. First, we prove that the method is an instance of Shepard's method when certain weight functions are used. In particular, the method can reproduce constant functions. Second, it is possible to define a modified set of cardinal functions strictly related to the ones of the non-rescaled kernel. Through these functions, we define a Lebesgue function for the rescaled interpolation process, and study its maximum, the Lebesgue constant, in different settings. Also, a preliminary theoretical result on the estimation of the interpolation error is presented. As an application, we couple our method with a partition of unity algorithm. This setting seems to be the most promising, and we illustrate its behavior with some experiments.
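
    As a quick numerical illustration of the rescaled interpolant (our toy implementation; the Gaussian kernel, shape parameter, and node set are arbitrary choices):

```python
# Rescaled kernel interpolation in 1D: interpolate f and the constant 1
# with the same kernel, then take the pointwise ratio.
import numpy as np

def gauss_kernel(x, y, eps=2.0):
    # Gaussian kernel matrix K[i, j] = exp(-(eps * (x_i - y_j))^2)
    return np.exp(-(eps * (x[:, None] - y[None, :])) ** 2)

rng = np.random.default_rng(1)
nodes = np.sort(rng.uniform(0.0, 1.0, 15))
f = lambda t: np.sin(2.0 * np.pi * t)

K = gauss_kernel(nodes, nodes)
c_f = np.linalg.solve(K, f(nodes))             # coefficients interpolating f
c_1 = np.linalg.solve(K, np.ones_like(nodes))  # coefficients interpolating 1

t = np.linspace(0.0, 1.0, 200)
Kt = gauss_kernel(t, nodes)
s_f, s_1 = Kt @ c_f, Kt @ c_1
s_rescaled = s_f / s_1                         # the rescaled interpolant

# By construction the rescaled interpolant of f == 1 is identically 1
# (constant reproduction, the Shepard-like property mentioned above).
print("max |s_rescaled - f| on the grid:", np.max(np.abs(s_rescaled - f(t))))
```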

    A window into a public programme for prevention of mother-to-child transmission of HIV: evidence from a prospective clinical trial

    Objectives. To evaluate the efficacy of the antenatal, intrapartum and postnatal antiretroviral components of a public service prevention of mother-to-child transmission (PMTCT) programme in infants. Design. Analysis of prospectively collected screening data on demographic and MTCT-related interventions and the HIV infection status of infants, determined by HIV-specific DNA polymerase chain reaction. Setting. Tygerberg Children's Hospital, Western Cape, South Africa. Subjects. HIV-infected women and their infants identified through participation in a public service PMTCT programme and referred for possible participation in a prospective study of isoniazid prophylaxis. Interventions. Key components of the programme include voluntary counselling and testing, administration of zidovudine to the mother from between 28 and 34 weeks' gestation and to the newborn infant for the first week, single-dose nevirapine to the mother in labour and to the newborn shortly after birth, and free formula for 6 months. Main outcome measures. Number and percentage of HIV-infected infants and extent of exposure to antenatal, intrapartum and postnatal antiretrovirals. Results. Of 656 infants with a median age of 12.6 weeks, screened from 1 April 2005 through May 2006, 39 were HIV-infected, giving a transmission rate of 5.9% (95% confidence interval (CI) 4.4 - 8.0%). Antenatal prophylaxis was significantly associated with reduced transmission (odds ratio (OR) 0.43 (95% CI 0.21 - 0.94)), whereas the intrapartum and postpartum components were not (p=0.85 and p=0.84, respectively). In multivariable analysis the antenatal component remained significant (OR 0.40 (95% CI 0.19 - 0.90)). Conclusions. The antenatal phase is the most important antiretroviral component of the PMTCT programme, allowing the most opportunity for intervention.