    Prediction and understanding of soft proton contamination in XMM-Newton: a machine learning approach

    One of the major and, unfortunately, unforeseen sources of background for the current generation of X-ray telescopes is (soft) protons with energies of a few tens to hundreds of keV that are concentrated by the mirrors. One such telescope is the European Space Agency's (ESA) X-ray Multi-Mirror Mission (XMM-Newton), which loses about 40% of its observing time to this background contamination. This loss of observing time affects all the major broad science goals of the observatory, ranging from cosmology to the astrophysics of neutron stars and black holes. The soft proton background could also dramatically impact future large X-ray missions such as ESA's planned Athena mission (http://www.the-athena-x-ray-observatory.eu/). The physical processes that trigger this background are still poorly understood. We use a Machine Learning (ML) approach to identify the relevant parameters and to develop a model that predicts the background contamination, using 12 years of XMM observations. As predictors we use the location of the satellite together with solar and geomagnetic activity parameters. We found that the contamination is most strongly related to the distance in the southern direction, Z (the XMM observations were in the southern hemisphere), the solar wind radial velocity, and the location on magnetospheric magnetic field lines. We derived simple empirical models for the first two predictors individually, and an ML model that uses an ensemble of the predictors (Extra Trees Regressor) and gives better performance. Based on our analysis, future missions should minimize observations during times associated with high solar wind speed and avoid closed magnetic field lines, especially at the dusk flank region in the southern hemisphere.
    Comment: 20 pages, 11 figures
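    The abstract names an Extra Trees Regressor trained on satellite location and solar/geomagnetic predictors. A minimal sketch of that kind of ensemble regression, using scikit-learn with entirely synthetic, illustrative data (the feature names and target are assumptions, not the paper's dataset):

```python
# Hedged sketch: an Extra Trees ensemble predicting a "background" target
# from three hypothetical predictors (distance Z in the southern direction,
# solar wind radial velocity, a magnetic-field-line location proxy).
# Data here is synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(0)
n = 500
X = rng.uniform(size=(n, 3))  # columns: Z, v_sw, field-line proxy
# Synthetic target tied mostly to the first two predictors, plus noise.
y = 2.0 * X[:, 0] + 1.5 * X[:, 1] + 0.1 * rng.normal(size=n)

model = ExtraTreesRegressor(n_estimators=100, random_state=0)
model.fit(X, y)
# Relative importance of each predictor, analogous to how the study
# ranks Z and solar wind speed above other parameters.
print(model.feature_importances_)
```

    Feature importances from tree ensembles are one common way to delineate which predictors drive a model, which matches how the abstract reports its ranking of parameters.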

    Programming Shared Virtual Memory on the Intel Paragon (TM) Supercomputer

    Programming distributed memory systems forces the user to handle the problem of data locality. With message passing, the user not only has to map the application onto the processors in a way that optimizes data locality but also has to explicitly program access to remote data. Shared virtual memory (SVM) systems free the user from the second task; nevertheless, the user is still responsible for optimizing data locality by selecting a well-suited work distribution. We describe a programming environment that is based on the Advanced Shared Virtual Memory system, an SVM implementation for the Paragon (TM) Supercomputer, and on SVM-Fortran, a shared memory parallel programming language with language constructs for work distribution. Programming tools integrate program text and dynamic performance data to help the user optimize data locality.

    Compiling SVM-Fortran for the Intel Paragon XP/S

    SVM-Fortran is a language designed for programming highly parallel systems with a global address space. We describe a compiler for SVM-Fortran that generates code for parallel machines; our current target machine is the Intel Paragon XP/S with an SVM extension called ASVM. Performance numbers are given for several applications and compared to results obtained with corresponding HPF versions.

    Compiling Data Parallel Languages for Shared Virtual Memory Systems

    This deliverable gives a detailed language specification of a data-parallel programming language for shared virtual memory systems. In addition to data parallelism, SVM-Fortran supports task parallelism. The language provides features for locality optimization that extend the HPF template concept. Because SVM systems provide a global address space, a less restrictive concept can be implemented: it is not the compiler's task to analyze array references. Unlike templates in HPF, SVM-Fortran templates are used for work distribution, e.g. scheduling parallel loops. The advantage of this design is that computing the schedule from the distribution is much cheaper than computing the work distribution from a reference to a distributed array, as has to be done in HPF. This is especially true for irregular distributions. This deliverable also describes the compile-time and run-time techniques applied in the implementation of SVM-Fortran on the Inte..
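    The core claim above is that a loop schedule can be computed directly from a work distribution. A minimal sketch of that idea in Python (not SVM-Fortran itself; the block distribution and function name are assumptions chosen for illustration):

```python
# Hedged sketch: deriving a parallel-loop schedule directly from a block
# work distribution, in the spirit of template-based work distribution.
# No array-reference analysis is needed; the schedule follows from the
# distribution alone.
def block_schedule(n_iters, n_procs):
    """Assign iterations 0..n_iters-1 to processors in contiguous blocks."""
    base, extra = divmod(n_iters, n_procs)
    schedule, start = [], 0
    for p in range(n_procs):
        size = base + (1 if p < extra else 0)  # spread the remainder
        schedule.append(range(start, start + size))
        start += size
    return schedule

# Example: 10 loop iterations distributed over 3 processors.
print([list(r) for r in block_schedule(10, 3)])
# -> [[0, 1, 2, 3], [4, 5, 6], [7, 8, 9]]
```

    An HPF compiler would instead have to infer the work distribution from references to a distributed array; here the schedule is a cheap, direct function of the declared distribution, which is the design advantage the abstract describes, especially for irregular distributions.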

    Parallelizing applications with SVM-Fortran

    SVM-Fortran is a language extension of Fortran 77 developed by KFA for shared-memory parallel programming on distributed memory systems. It provides special language features for optimizing data locality and load balancing. SVM-Fortran is designed for shared virtual memory systems as well as for highly parallel computers with a hardware-based global address space. The article describes the implementation of SVM-Fortran on the Intel Paragon, as well as parallelization aspects and performance results for several applications.