
    Recent results from NA48/2 and NA62 experiments at CERN

    The NA48/2 and NA62-$R_K$ experiments at the CERN SPS collected a large sample of charged kaon decays in flight. NA62-$R_K$ ran in 2007-08 with a highly efficient minimum-bias trigger for decays into electrons. A preliminary measurement of the electromagnetic transition form factor slope of the $\pi^0$ from $1.05\times 10^{6}$ fully reconstructed $\pi^0$ Dalitz decays is presented. The obtained value $a = (3.70 \pm 0.53_\text{stat} \pm 0.36_\text{syst})\times 10^{-2}$ represents a $5.8\sigma$ observation of a non-zero slope in the time-like region of momentum transfer. An upper limit on the rate of the lepton number violating decay $K^\pm\to\pi^\mp\mu^\pm\mu^\pm$ is reported from $\sim 1.6\times 10^{11}$ $K^\pm$ decays collected by NA48/2 in 2003-04: $\mathcal{B} < 8.6\times 10^{-11}$ at 90% CL. Searches for heavy sterile neutrino $N_4$ and inflaton $\chi$ resonances in $K^\pm\to\pi\mu\mu$ decays are reported. No signal is observed, and upper limits on the products $\mathcal{B}(K^\pm\to\mu^\pm N_4)\mathcal{B}(N_4\to\pi^\mp\mu^\pm)$ and $\mathcal{B}(K^\pm\to\pi^\pm \chi)\mathcal{B}(\chi\to\mu^+\mu^-)$ are set in the range $10^{-10}$-$10^{-9}$ for resonance lifetimes up to $100~\text{ps}$. The result of a search for a dark photon with the same sample of decays is also reported. In the absence of an observed signal, the limits on the mixing parameter $\varepsilon^2$ in the mass range $9$-$70~\text{MeV}/c^2$ are improved. Comment: 10 pages, 6 figures, talk given at HQL 2016, Blacksburg, 22-27 May 2016
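    For context, the slope $a$ is the leading coefficient of the conventional linear parametrization of the $\pi^0$ transition form factor probed in the Dalitz decay $\pi^0\to e^+e^-\gamma$ (a standard relation, stated here for reference rather than taken from the abstract):
    \[ \mathcal{F}(x) \simeq 1 + a\,x, \qquad x = \left(\frac{m_{e^+e^-}}{m_{\pi^0}}\right)^{2}, \quad 0 < x < 1, \]
    so the value quoted above corresponds to a small but non-zero linear rise of $\mathcal{F}$ in the time-like region covered by the $e^+e^-$ pair.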

    Results and prospects for $K\to\pi\nu\bar{\nu}$ at NA62 and KOTO

    The $K\to\pi\nu\bar{\nu}$ ultra-rare decays are precisely computed in the Standard Model (SM) and are ideal probes for physics beyond the SM. The NA62 experiment at the CERN SPS is designed to measure the charged channel with a precision of 10%. The statistics collected in 2016 is sufficient to reach the SM sensitivity. The KOTO experiment at J-PARC aims at reaching the SM sensitivity before performing a measurement with $\sim 100$ signal events. The NA62 preliminary result for the charged channel is presented, together with the current experimental status of the neutral channel and the prospects of both experiments for the coming years. Comment: 6 pages, 2 figures, Talk presented at the MESON2018 conference

    Static Pricing Problems under Mixed Multinomial Logit Demand

    Price differentiation is a common strategy for many transport operators. In this paper, we study a static multiproduct price optimization problem with demand given by a continuous mixed multinomial logit model. To solve this new problem, we design an efficient iterative optimization algorithm that asymptotically converges to the optimal solution. To this end, a linear optimization (LO) problem is formulated, based on the trust-region approach, to find a "good" feasible solution and approximate the problem from below. Another LO problem is designed using piecewise linear relaxations to approximate the optimization problem from above. Then, we develop a new branching method to tighten the optimality gap. Numerical experiments show the effectiveness of our method on a published, non-trivial parking choice model.
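    The kind of objective such an algorithm bounds can be written down directly. The sketch below is a minimal Python illustration with hypothetical parameters (three products, an opt-out option, and a normally distributed price coefficient approximated by Monte Carlo draws); it is not the authors' formulation or code. The objective is cheap to evaluate pointwise but generally not concave in the prices, which is what motivates the bounding scheme described above.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical continuous mixed-MNL demand: the price coefficient varies
    # across the population; the continuous mixture is approximated by draws.
    N_DRAWS = 1000
    beta_price = rng.normal(loc=-0.4, scale=0.1, size=N_DRAWS)  # assumed taste heterogeneity
    asc = np.array([1.0, 0.5, 0.2])   # assumed alternative-specific constants, 3 products

    def expected_revenue(prices):
        """Expected revenue per customer for a given price vector."""
        # Utility per draw and product: V[r, j] = asc[j] + beta_price[r] * prices[j]
        v = asc[None, :] + beta_price[:, None] * prices[None, :]
        expv = np.exp(v)
        # Opt-out alternative with utility normalised to 0.
        probs = expv / (1.0 + expv.sum(axis=1, keepdims=True))
        demand = probs.mean(axis=0)   # simulated purchase probabilities
        return float(demand @ prices)

    # Crude grid search, only to show how the objective is evaluated; the paper's
    # LO-based lower/upper approximations replace this brute-force enumeration.
    grid = np.linspace(0.5, 5.0, 8)
    best = max((expected_revenue(np.array(p)), p)
               for p in ((a, b, c) for a in grid for b in grid for c in grid))
    print("best grid revenue %.3f at prices %s" % (best[0], str(best[1])))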

    Heavy neutrino searches and NA62 status

    The NA62 experiment at the CERN SPS recorded a large sample of $K^+\to\mu^+\nu_\mu$ decays in 2007. A peak search in the missing mass spectrum of this decay is performed. In the absence of an observed signal, the limits obtained on $\mathcal{B}(K^+\to\mu^+\nu_h)$ and on the mixing matrix element $|U_{\mu4}|^2$ are reported. The upgraded NA62 experiment started data taking in 2015, with the aim of measuring the branching fraction of the $K^+\to\pi^+\nu\bar{\nu}$ decay. An update on the status of the experiment is presented. Comment: 8 pages, 7 figures, Talk given at the 52nd Rencontres de Moriond (EW session), La Thuile, 18-25 March 2017
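    For reference, the missing mass used in such a peak search follows from two-body kinematics (a standard relation, not specific to this analysis):
    \[ m_\text{miss}^2 = (P_K - P_\mu)^2 , \]
    which for a $K^+\to\mu^+\nu_h$ decay peaks at $m_{\nu_h}^2$, so a heavy neutral lepton would appear as a narrow excess over the smooth $K^+\to\mu^+\nu_\mu$ spectrum.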

    Fighting pickpocketing using a choice-based resource allocation model

    Inspired by European actions to fight organized crime, we develop a choice-based resource allocation model that can help policy makers reduce the number of pickpocket attempts. In this model, the policy maker needs to allocate a limited budget over local and central protective resources as well as over potential pickpocket locations, while keeping in mind the thieves' random preferences towards potential pickpocket locations. We prove that the optimal budget allocation is proportional to (i) the thieves' sensitivity towards protective resources and (ii) the initial attractiveness of the potential pickpocket locations. On top of this, we also study two variants of our choice-based resource allocation model: one where pickpocket probabilities are forced to be equal over the pickpocket locations, and one where the decision-making process of the thief becomes deterministic, with preferences known to the policy maker. For both variants, we also derive the optimal budget allocation and compare it with the initial budget allocation using numerical experiments. Finally, we illustrate how these optimal budget allocations perform against various other budget allocations proposed by policy makers from the field.
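    As a rough illustration of what "choice-based" means here, the sketch below assumes a simple logit model in which a thief's utility for a location falls linearly in the protection placed there; the attractiveness values, sensitivity parameter, and budget splits are all hypothetical and are not taken from the paper.

    import numpy as np

    # Hypothetical logit model of a thief's location choice: utility is the
    # location's baseline attractiveness minus sensitivity times the amount
    # of protective resource allocated to it.
    attractiveness = np.array([2.0, 1.0, 0.5])   # assumed baseline attractiveness
    sensitivity = 0.8                            # assumed sensitivity to protection

    def attempt_probabilities(allocation):
        """Probability that a thief attempts each location (logit choice)."""
        utility = attractiveness - sensitivity * np.asarray(allocation)
        expu = np.exp(utility - utility.max())   # numerically stable softmax
        return expu / expu.sum()

    # Compare a uniform budget split with one weighted by attractiveness.
    budget = 3.0
    uniform = np.full(3, budget / 3)
    weighted = budget * attractiveness / attractiveness.sum()
    print("uniform split :", attempt_probabilities(uniform))
    print("weighted split:", attempt_probabilities(weighted))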

    Enhancing Discrete Choice Models with Representation Learning

    In discrete choice modeling (DCM), model misspecifications may lead to limited predictability and biased parameter estimates. In this paper, we propose a new approach for estimating choice models in which we divide the systematic part of the utility specification into (i) a knowledge-driven part, and (ii) a data-driven one, which learns a new representation from available explanatory variables. Our formulation increases the predictive power of standard DCMs without sacrificing their interpretability. We show the effectiveness of our formulation by augmenting the utility specification of the Multinomial Logit (MNL) and the Nested Logit (NL) models with a new non-linear representation arising from a Neural Network (NN), leading to new choice models referred to as the Learning Multinomial Logit (L-MNL) and Learning Nested Logit (L-NL) models. Using multiple publicly available datasets based on revealed and stated preferences, we show that our models outperform the traditional ones, both in terms of predictive performance and accuracy in parameter estimation. All source code of the models is shared to promote open science. Comment: 35 pages, 12 tables, 6 figures, +11 p. Appendix
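    A minimal PyTorch sketch of the utility decomposition described above follows; the layer sizes, variable split, and training step are assumptions for illustration and do not reproduce the authors' released code or exact architecture.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LMNLStyle(nn.Module):
        """Sketch of an L-MNL-style utility: an interpretable linear part plus a
        learned correction from a small neural net (hypothetical layout)."""

        def __init__(self, n_linear, n_learned, n_alternatives, hidden=16):
            super().__init__()
            # Knowledge-driven part: one generic taste parameter per hand-specified variable.
            self.beta = nn.Parameter(torch.zeros(n_linear))
            # Data-driven part: an MLP mapping the remaining variables to one
            # utility correction per alternative.
            self.rep = nn.Sequential(
                nn.Linear(n_learned, hidden), nn.ReLU(),
                nn.Linear(hidden, n_alternatives),
            )

        def forward(self, x_linear, x_learned):
            # x_linear: (batch, n_alternatives, n_linear) alternative-specific attributes
            # x_learned: (batch, n_learned) characteristics fed to the NN term
            v_knowledge = (x_linear * self.beta).sum(dim=-1)   # (batch, n_alternatives)
            v_learned = self.rep(x_learned)                    # (batch, n_alternatives)
            return F.log_softmax(v_knowledge + v_learned, dim=-1)

    # Usage: minimise the negative log-likelihood of observed choices.
    model = LMNLStyle(n_linear=2, n_learned=4, n_alternatives=3)
    x_lin, x_nn = torch.randn(8, 3, 2), torch.randn(8, 4)
    y = torch.randint(0, 3, (8,))
    loss = nn.NLLLoss()(model(x_lin, x_nn), y)
    loss.backward()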

    Stochastic Optimization with Adaptive Batch Size: Discrete Choice Models as a Case Study

    The 2.5 quintillion bytes of data created each day bring new opportunities, but also new and stimulating challenges, for the discrete choice community. Opportunities, because more and larger data sets will undoubtedly become available in the future. Challenges, because insights can only be discovered if models can be estimated, which is not simple on such large datasets. In this paper, inspired by the good practices and the intensive use of stochastic gradient methods in the ML field, we introduce an algorithm called Window Moving Average - Adaptive Batch Size (WMA-ABS), which is used to improve the efficiency of stochastic second-order methods. We present preliminary results indicating that our algorithms outperform the standard second-order methods, especially for large datasets. This constitutes a first step towards showing that stochastic algorithms can finally find their place in the optimization of Discrete Choice Models.
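    The batch-size adaptation idea can be sketched in a few lines. The loop below is an illustrative Python mock-up, not the WMA-ABS algorithm itself: it uses a plain gradient step where the paper targets second-order methods, and the window length, growth factor, and tolerance are arbitrary.

    import numpy as np

    def adaptive_batch_sgd(obj_grad, theta, data, batch0=1000, window=5,
                           growth=2.0, tol=1e-4, lr=0.1, max_iter=500):
        """Illustrative adaptive-batch-size loop: the batch grows whenever a
        window moving average of recent objective improvements stalls, so early
        iterations are cheap and later ones use (almost) the full dataset."""
        rng = np.random.default_rng(0)
        batch, improvements, prev = batch0, [], None
        for _ in range(max_iter):
            idx = rng.choice(len(data), size=min(batch, len(data)), replace=False)
            f, g = obj_grad(theta, data[idx])   # stochastic objective and gradient
            theta = theta - lr * g              # first-order step, for the sketch only
            if prev is not None:
                improvements.append(prev - f)
            prev = f
            if len(improvements) >= window:
                if np.mean(improvements[-window:]) < tol:   # progress has stalled
                    if batch >= len(data):
                        break                               # full batch and converged
                    batch = int(min(growth * batch, len(data)))
                    improvements.clear()
        return theta

    # Example usage on a toy least-squares problem with hypothetical data.
    X = np.random.default_rng(1).normal(size=(100_000, 3))
    y = X @ np.array([1.0, -2.0, 0.5])
    def obj_grad(theta, batch):
        Xb, yb = batch[:, :-1], batch[:, -1]
        r = Xb @ theta - yb
        return 0.5 * np.mean(r**2), Xb.T @ r / len(yb)
    theta_hat = adaptive_batch_sgd(obj_grad, np.zeros(3), np.column_stack([X, y]))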

    Estimation of discrete choice models with hybrid stochastic adaptive batch size algorithms

    The emergence of Big Data has enabled new research perspectives in the discrete choice community. While the techniques to estimate Machine Learning models on a massive amount of data are well established, they have not yet been fully explored for the estimation of statistical Discrete Choice Models based on the random utility framework. In this article, we provide new ways of dealing with large datasets in the context of Discrete Choice Models. We achieve this by proposing new efficient stochastic optimization algorithms and extensively testing them alongside existing approaches. We develop these algorithms based on three main contributions: the use of a stochastic Hessian, the modification of the batch size, and a change of optimization algorithm depending on the batch size. A comprehensive experimental comparison of fifteen optimization algorithms is conducted across ten benchmark Discrete Choice Model cases. The results indicate that the HAMABS algorithm, a hybrid adaptive batch size stochastic method, is the best performing algorithm across the optimization benchmarks. This algorithm speeds up the optimization time by a factor of 23 on the largest model compared to existing algorithms used in practice. The integration of the new algorithms in Discrete Choice Model estimation software will significantly reduce the time required for model estimation and therefore enable researchers and practitioners to explore new approaches for the specification of choice models. Comment: 43 pages
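    The third contribution, switching optimizer as the batch grows, can be illustrated in isolation. The function below is a hypothetical sketch of such a hybrid update, not the HAMABS algorithm: the switching threshold, damping, and step size are invented for the example.

    import numpy as np

    def hybrid_update(theta, grad, hessian, batch_fraction,
                      switch_at=0.6, lr=0.1, damping=1e-4):
        """Hypothetical hybrid step: cheap first-order updates while the batch
        is a small fraction of the data, damped Newton steps once the batch is
        large enough for the stochastic Hessian to be trustworthy."""
        if batch_fraction < switch_at:
            return theta - lr * grad                   # stochastic first-order regime
        h = hessian + damping * np.eye(len(theta))     # regularised stochastic Hessian
        return theta - np.linalg.solve(h, grad)        # second-order regime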

    Neutral pion transition form factor measurement and run control at the NA62 experiment

    The measurement of the $\pi^0$ electromagnetic transition form factor (TFF) slope $a$ is performed in the time-like region of momentum transfer using a sample of $1.1\times 10^{6}$ $\pi^0\to e^+e^-\gamma$ Dalitz decays collected by the NA62-$R_K$ experiment in 2007. The event selection, the fit procedure and the study of the systematic effects are presented. The final result, $a = (3.68 \pm 0.51_\text{stat} \pm 0.25_\text{syst})\times 10^{-2}$, is the most precise to date and represents the first evidence of a non-zero $\pi^0$ TFF slope at more than $3\sigma$ significance. The NA62 experiment based at the CERN SPS is currently taking data and aims at measuring the branching fraction of the $K\to\pi\nu\bar{\nu}$ ultra-rare decay with 10% precision and less than 10% background. A complex trigger and data acquisition system is in place to record the data collected by the various detectors used to reach this goal. The Run Control system of the experiment is meant to supervise and control them in a simple, transparent way. The choices made to address the requirements for the system and the most important aspects of its implementation are discussed.