
    Search for mixing of D0 and its antiparticle using neural networks

    Mixing is a process in which a particle spontaneously turns into its antiparticle. The Standard Model of particle physics, at the box-diagram level, predicts that mixing of a D0 should occur approximately once every ten billion decays, while other theories predict much larger mixing rates. Measurement of the D0 mixing rate is therefore an important test of the Standard Model: a mixing rate larger than the Standard Model prediction could be evidence of physics beyond the Standard Model and would be a major discovery. Since the D0 has zero electric charge and a lifetime of only 4×10⁻¹³ seconds, current elementary-particle detector technology requires that the D0 be studied through its decay products. Other processes produce decay products similar to those of a mixed D0, so these processes must be distinguished from the signal. In this analysis, neural networks are used to help determine whether a D0 decay involved mixing or another process. Neural networks are models of complicated functions that, given a number of inputs, attempt to predict the value of one or more outputs. In this study, the inputs are observables measured by the BaBar detector at the Stanford Linear Accelerator Center. The neural network output is binary, with 1 representing signal candidates and 0 representing background events. The signal mode is a D0 decaying into a K+, an electron, and an antineutrino, whereas the D0 normally decays into a K-, a positron, and a neutrino. In this study we use the neural network program MLPfit. MLPfit requires training data, that is, inputs for which the correct output is already known, and uses it to develop the network. The networks were trained with MLPfit on Monte Carlo simulation data from the BaBar experiment. The goal is to produce a neural network that yields a cleaner data sample, with fewer background events, than the network currently used in the experiment. Comparisons show that the MLPfit-trained networks perform comparably to that network.

    Advisor: Richard D. Kass
    Sigma Xi
    Arts and Sciences Undergraduate Research Scholarship
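
    The following is a minimal sketch of the general technique described above; it is not the MLPfit program or the BaBar analysis code, and all variable names, the toy data, and the selection cut are illustrative assumptions. It shows a small multilayer perceptron, written here in Python with NumPy, that maps a few event observables to a single output near 1 for signal and near 0 for background, trained on labeled examples standing in for Monte Carlo simulation.

    # Minimal illustrative sketch (not MLPfit): a one-hidden-layer neural network
    # that maps event observables to a single output near 1 for signal and near
    # 0 for background, trained on labeled examples. The toy data below merely
    # stand in for simulated BaBar events.
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Toy stand-in for labeled Monte Carlo: n_inputs observables per candidate,
    # label 1 = signal-like, label 0 = background-like.
    n_events, n_inputs, n_hidden = 2000, 5, 8
    X = rng.normal(size=(n_events, n_inputs))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # made-up separation rule

    # Weights of the multilayer perceptron (input -> hidden -> output).
    W1 = rng.normal(scale=0.1, size=(n_inputs, n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.1, size=n_hidden)
    b2 = 0.0

    lr = 0.1
    for epoch in range(200):
        # Forward pass.
        h = np.tanh(X @ W1 + b1)      # hidden-layer activations
        out = sigmoid(h @ W2 + b2)    # network output in (0, 1)

        # Backpropagate the mean cross-entropy loss.
        d_out = (out - y) / n_events
        dW2 = h.T @ d_out
        db2 = d_out.sum()
        d_h = np.outer(d_out, W2) * (1.0 - h ** 2)
        dW1 = X.T @ d_h
        db1 = d_h.sum(axis=0)

        # Gradient-descent update.
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2

    # Candidates whose output exceeds a chosen cut are kept as signal-like.
    keep = sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.9
    print(f"selected {keep.sum()} of {n_events} candidates as signal-like")

    In a real analysis the cut on the network output would be chosen to balance signal efficiency against background rejection; the value 0.9 above is arbitrary.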