
    Galactic Shapiro Delay to the Crab Pulsar and limit on Einstein's Equivalence Principle Violation

    We calculate the total galactic Shapiro delay to the Crab pulsar by including the contributions from the dark matter as well as the baryonic matter along the line of sight. The delay due to the dark matter potential is about 3.4 days. For the baryonic matter, we include the contributions from both the bulge and the disk, which are approximately 0.12 and 0.32 days, respectively. The total delay from all the matter distributions is therefore 3.84 days. We also calculate the limit on violations of Einstein's equivalence principle using observations of "nano-shot" giant pulses from the Crab pulsar with time delay $<0.4$ ns, as well as the time differences between radio and optical photons observed from this pulsar. Using the former, we obtain a limit on violation of Einstein's equivalence principle in terms of the PPN parameter $\Delta\gamma < 2.41\times 10^{-15}$. From the time difference between simultaneous optical and radio observations, we get $\Delta\gamma < 1.54\times 10^{-9}$. We also point out differences between our calculation of the Shapiro delay and those in two recent papers (arXiv:1612.00717 and arXiv:1608.07657), which used the same observations to obtain corresponding limits on $\Delta\gamma$.
    Comment: 4 pages, 1 figure. Accepted for publication in Eur. Phys. Journal
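    As a quick numerical check, the quoted nano-shot bound follows from the stated delays under the commonly used relation $\Delta\gamma \lesssim 2\Delta t/\delta t_{\rm Shapiro}$ (an assumption here; the paper's exact prescription may differ). A minimal Python sketch:

```python
DAY = 86400.0  # seconds per day

# Shapiro-delay contributions quoted in the abstract (in days)
t_dark_matter = 3.4
t_bulge = 0.12
t_disk = 0.32
t_shapiro = (t_dark_matter + t_bulge + t_disk) * DAY  # total: 3.84 days, in seconds

# "Nano-shot" giant-pulse time delay quoted in the abstract
dt = 0.4e-9  # seconds

# Assumed relation (not spelled out in the abstract): Delta_gamma < 2*dt/t_shapiro
print(2 * dt / t_shapiro)  # ~2.41e-15, matching the quoted limit
```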

    Higher Dimensional Metrics of Colliding Gravitational Plane Waves

    We give a higher even-dimensional extension of vacuum colliding gravitational plane waves with combinations of collinearly and non-collinearly polarized four-dimensional metrics. The singularity structure of the space-time depends on the parameters of the solution.
    Comment: 4 pages, RevTeX

    Constraints on differential Shapiro delay between neutrinos and photons from IceCube-170922A

    On 22 September 2017, the IceCube Collaboration detected a neutrino with an energy of about 290 TeV from the direction of the gamma-ray blazar TXS 0506+056, located at a distance of about 1.75 Gpc. Around the same time, enhanced gamma-ray flaring was observed by multiple telescopes, giving rise to only the second coincident astrophysical neutrino/photon observation after SN 1987A. We point out that, for this event, both the neutrinos and the photons encountered a Shapiro delay of about 6300 days along the way from the source. From this delay and the relative difference between the neutrino and photon arrival times, one can constrain violations of Einstein's Weak Equivalence Principle (WEP) for TeV neutrinos. We constrain such violations of the WEP using the Parameterized Post-Newtonian (PPN) parameter $\gamma$, obtaining $|\gamma_{\nu}-\gamma_{\rm EM}| < 5.5\times 10^{-2}$, assuming a time difference of 175 days between the neutrino and photon arrival times.
    Comment: 5 pages
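    As a consistency check, the same hedged relation used above, $|\gamma_{\nu}-\gamma_{\rm EM}| \lesssim 2\Delta t/\delta t_{\rm Shapiro}$, reproduces this bound from the quoted numbers: $2 \times 175~\mathrm{d}/6300~\mathrm{d} \approx 0.056$, consistent with the stated $5.5\times 10^{-2}$.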

    Identifying electrons with deep learning methods

    This thesis is about applying the tools of machine learning to an important problem of experimental particle physics: identifying signal electrons produced in proton-proton collisions at the Large Hadron Collider. In Chapter 1, we provide some background on the Large Hadron Collider and explain why it was built. We give further details about ATLAS, one of the biggest detectors at the Large Hadron Collider. We then define the electron identification task and explain the importance of solving it. Finally, we give detailed information about the dataset we use for this task. In Chapter 2, we give a brief introduction to the fundamental principles of machine learning. Starting with the definition and types of learning tasks, we discuss various ways to represent inputs. Then we present what to learn from the inputs as well as how to do it. Finally, we look at the problems that arise when learning is "overdone", i.e. overfitting. In Chapter 3, we motivate the choice of architecture for our task, especially for the parts that take sequential images as inputs. We then present the results of our experiments and show that our model performs much better than the algorithms the ATLAS collaboration currently uses. Finally, we discuss future directions for further improving our results. In Chapter 4, we discuss two concepts: out-of-distribution generalization and the flatness of the loss surface. We claim that algorithms that bring a model into a wide, flat minimum of its training loss surface also generalize better on out-of-distribution tasks. We present the results of applying two such algorithms to our dataset and show that they support this claim. We end with our conclusions.
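    The abstract does not name the two flat-minimum-seeking algorithms it evaluates. One widely used algorithm of this kind is Sharpness-Aware Minimization (SAM); the sketch below of a single SAM-style training step is purely illustrative and is not claimed to be the thesis's implementation:

```python
import torch

def sam_step(model, loss_fn, x, y, optimizer, rho=0.05):
    """One SAM-style update: ascend to a nearby worst-case point within
    radius rho, take the gradient there, then descend from the original
    weights. Preferring minima that are robust to such perturbations
    favors wide, flat regions of the loss surface."""
    # First pass: gradient at the current weights.
    loss_fn(model(x), y).backward()

    # Perturb the weights along the normalized gradient direction.
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm() for g in grads]))
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            eps.append(e)
    model.zero_grad()

    # Second pass: the "sharpness-aware" gradient at the perturbed point.
    loss_fn(model(x), y).backward()

    # Restore the original weights and step with the second gradient.
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)
    optimizer.step()
    optimizer.zero_grad()
```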