11 research outputs found

    Testing the Origin of High-Energy Cosmic Rays

    Recent accurate measurements of cosmic-ray (CR) protons and nuclei by ATIC-2, CREAM, and PAMELA reveal: a) unexpected spectral hardening in the spectra of CR species above a few hundred GeV per nucleon, b) a harder spectrum of He compared to protons, and c) softening of the CR spectra just below the break energy. These newly discovered features may offer a clue to the origin of the observed high-energy Galactic CRs. We discuss possible interpretations of these spectral features and make predictions for the secondary CR fluxes and secondary-to-primary ratios, the anisotropy of CRs, and the diffuse Galactic gamma-ray emission in different phenomenological scenarios. Our predictions can be tested by currently running or near-future high-energy astrophysics experiments. Comment: 23 pages, 14 color figures. Accepted for publication in Ap
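
    For illustration, the spectral hardening and break described above are often modelled with a smoothly broken power law. Below is a minimal Python sketch of such a model; all parameter values are illustrative placeholders, not fits to the ATIC-2, CREAM, or PAMELA data.

```python
import numpy as np

def broken_power_law(E, norm=1.0, gamma1=2.8, gamma2=2.6, E_break=300.0, s=5.0):
    """Smoothly broken power law in energy per nucleon (GeV/n).

    gamma1 and gamma2 are the spectral indices below and above the break,
    and s controls the sharpness of the transition. All values here are
    illustrative placeholders, not fitted results.
    """
    return norm * E**(-gamma1) * (1.0 + (E / E_break)**s)**((gamma1 - gamma2) / s)

E = np.logspace(1, 5, 200)   # 10 GeV/n to 100 TeV/n
flux = broken_power_law(E)

# "Hardening" means the local spectral index flattens from gamma1 toward
# gamma2 as E crosses E_break.
local_index = -np.gradient(np.log(flux), np.log(E))
```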

    A comparison of optimisation algorithms for high-dimensional particle and astrophysics applications

    Optimisation problems are ubiquitous in particle and astrophysics: they involve locating the optimum of a complicated function of many parameters that may be computationally expensive to evaluate. We describe a number of global optimisation algorithms that are not yet widely used in particle astrophysics, benchmark them against random sampling and existing techniques, and perform a detailed comparison of their performance on a range of test functions. These include four analytic test functions of varying dimensionality, and a realistic example derived from a recent global fit of weak-scale supersymmetry. Although the best algorithm to use depends on the function being investigated, we are able to present general conclusions about the relative merits of random sampling, Differential Evolution, Particle Swarm Optimisation, the Covariance Matrix Adaptation Evolution Strategy, Bayesian Optimisation, Grey Wolf Optimisation, and the PyGMO Artificial Bee Colony, Gaussian Particle Filter, and Adaptive Memory Programming for Global Optimisation algorithms.
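
    To give a flavour of such a benchmark, here is a minimal sketch comparing pure random sampling against SciPy's Differential Evolution on the Rastrigin test function at a roughly matched evaluation budget. The budget, bounds, and algorithm settings are illustrative assumptions, not those used in the paper.

```python
import numpy as np
from scipy.optimize import differential_evolution

def rastrigin(x):
    """Standard multimodal test function; global minimum 0 at the origin."""
    x = np.asarray(x)
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

dim, budget = 10, 20_000
bounds = [(-5.12, 5.12)] * dim
rng = np.random.default_rng(0)

# Baseline: pure random sampling with the full evaluation budget.
samples = rng.uniform(-5.12, 5.12, size=(budget, dim))
best_random = min(rastrigin(s) for s in samples)

# Differential Evolution with a roughly matched budget
# (about popsize * dim evaluations per generation).
result = differential_evolution(rastrigin, bounds, popsize=15,
                                maxiter=budget // (15 * dim),
                                seed=0, polish=False)
print(f"random sampling best:        {best_random:.2f}")
print(f"differential evolution best: {result.fun:.2f}")
```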

    Mind the gap: The discrepancy between simulation and reality drives interpretations of the Galactic Center Excess

    The origin of the so-called Galactic Center Excess (GCE) in GeV gamma rays has been debated for more than 10 years. What makes this excess so interesting is the possibility of interpreting it as additional radiation consistent with that expected from dark matter annihilation. Alternatively, the excess could come from undetected point sources. In this work, we examine the following questions: since the majority of the previously reported interpretations of this excess are highly dependent on simulation, how does the model used for the simulation affect the interpretations? Are such uncertainties taken into account? When different models lead to different conclusions, there may be a general gap between these simulations and reality that influences our conclusions. To investigate these questions, we build an ultra-fast and powerful inference pipeline based on convolutional deep ensemble networks and test the interpretations with a wide range of different models that simulate the excess. We find that our conclusions (dark matter or not) depend strongly on the type of simulation, and that this dependence is not revealed by systematic uncertainties. Furthermore, we measure whether reality lies in the simulation parameter space and conclude that there is a gap to reality in all simulated models. Our approach offers a means to assess the severity of the reality gap in future works. Our work questions the validity of conclusions about the GCE (dark matter) drawn in other works: has the reality gap been closed, and is the model correct at the same time?
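
    For readers unfamiliar with deep ensembles, here is a minimal PyTorch sketch of the underlying idea: several independently initialized CNNs whose averaged output gives a prediction and whose spread gives an uncertainty estimate. The architecture and input shapes are toy assumptions, not the paper's actual pipeline.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Toy CNN classifier for sky-map patches; a stand-in for the paper's networks."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# A deep ensemble: independently initialized networks whose averaged
# softmax output gives the prediction, and whose member-to-member spread
# serves as an (epistemic) uncertainty estimate.
ensemble = [SmallCNN() for _ in range(5)]
maps = torch.randn(8, 1, 64, 64)   # fake gamma-ray sky patches
with torch.no_grad():
    probs = torch.stack([m(maps).softmax(dim=1) for m in ensemble])
mean_prob = probs.mean(dim=0)      # ensemble prediction
spread = probs.std(dim=0)          # disagreement across ensemble members
```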

    Identification of point sources in gamma rays using U-shaped convolutional neural networks and a data challenge

    Context. At GeV energies, the sky is dominated by interstellar emission from the Galaxy. With limited statistics and spatial resolution, accurately separating point sources is therefore challenging. Aims. Here we present the first application of deep-learning-based algorithms to automatically detect and classify point sources in gamma-ray data. For concreteness we refer to this approach as AutoSourceID. Methods. To detect point sources, we utilized U-shaped convolutional networks for image segmentation and k-means for source clustering and localization. We also explored the Centroid-Net algorithm, which is designed to find and count objects. Using two algorithms allows for a cross-check of the results, while a combination of their results can be used to improve performance. The training data are based on 9.5 years of exposure from the Fermi Large Area Telescope (Fermi-LAT); we used source properties of active galactic nuclei (AGNs) and pulsars (PSRs) from the fourth Fermi-LAT source catalog, in addition to several models of background interstellar emission. The results of the localization algorithm are fed into a classification neural network that is trained to separate the three general source classes (AGNs, PSRs, and FAKE sources). Results. We compared our localization algorithms qualitatively with traditional methods and find them to have similar detection thresholds. We also demonstrate the robustness of our source localization algorithms to modifications in the interstellar emission models, which presents a clear advantage over traditional methods. The classification network is able to discriminate between the three classes with a typical accuracy of ∼70%, as long as balanced data sets are used in classification training. We have published our training data sets and analysis scripts online and invite the community to join the data challenge aimed at improving the localization and classification of gamma-ray point sources. Key words: catalogs / gamma rays: general / astroparticle physics / methods: numerical / methods: data analysis / techniques: image processing. ⋆ https://github.com/bapanes/AutoSourceI
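
    To illustrate the segmentation-then-clustering step, here is a minimal sketch of turning a U-Net-style binary mask into source positions with k-means. The synthetic mask, image size, and fixed cluster count are toy assumptions; the paper's pipeline is more elaborate.

```python
import numpy as np
from sklearn.cluster import KMeans

# Suppose `mask` is a binary segmentation map produced by a U-Net,
# where 1 marks pixels belonging to candidate point sources. Here we
# fabricate one with three small circular blobs.
mask = np.zeros((128, 128), dtype=int)
yy, xx = np.ogrid[:128, :128]
for cy, cx in [(30, 40), (90, 100), (64, 20)]:   # three fake sources
    mask[(yy - cy)**2 + (xx - cx)**2 < 9] = 1

coords = np.argwhere(mask == 1)   # (row, col) of all active pixels
n_sources = 3                     # in practice estimated, e.g. from connected components
centers = KMeans(n_clusters=n_sources, n_init=10, random_state=0).fit(coords).cluster_centers_
print(centers)                    # estimated source positions in pixel coordinates
```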

    AutoSourceID-Light. Fast Optical Source Localization via U-Net and Laplacian of Gaussian

    Aims. With the ever-increasing survey speed of optical wide-field telescopes and the importance of discovering transients while they are still young, rapid and reliable source localization is paramount. We present AutoSourceID-Light (ASID-L), an innovative framework that uses computer vision techniques that can naturally deal with large amounts of data and rapidly localize sources in optical images. Methods. We show that the AutoSourceID-Light algorithm, based on U-shaped networks and enhanced with a Laplacian of Gaussian filter (Chen et al. 1987), enables outstanding performance in the localization of sources. A U-Net (Ronneberger et al. 2015) discerns the sources in the images from many different artifacts and passes the result to a Laplacian of Gaussian filter, which then estimates the exact location. Results. Application to optical images from the MeerLICHT telescope demonstrates the great speed and localization power of the method. We compare the results with the widely used SExtractor (Bertin & Arnouts 1996) and show that our method outperforms it. AutoSourceID-Light rapidly detects more sources not only in fields of low and moderate crowding, but particularly in areas with more than 150 sources per square arcminute.
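
    To illustrate the second stage, here is a minimal sketch of Laplacian-of-Gaussian peak finding on a U-Net-style probability map using SciPy. The synthetic map, filter sigma, and detection threshold are illustrative assumptions, not ASID-L's actual settings.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, maximum_filter

# Suppose `prob_map` is the U-Net output: a per-pixel probability that the
# pixel belongs to a source. A LoG filter sharpens blob centres; local
# maxima of its (negated) response then give the source positions.
rng = np.random.default_rng(2)
yy, xx = np.mgrid[:256, :256]
prob_map = rng.random((256, 256)) * 0.05          # low-level background noise
for cy, cx in [(50, 60), (180, 200)]:             # two fake point sources
    prob_map += np.exp(-((yy - cy)**2 + (xx - cx)**2) / (2 * 2.0**2))

log_response = -gaussian_laplace(prob_map, sigma=2.0)  # bright blobs -> positive peaks
is_peak = (log_response == maximum_filter(log_response, size=5)) & (log_response > 0.05)
positions = np.argwhere(is_peak)
print(positions)   # detected source centres as (row, col) pixel coordinates
```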