235 research outputs found

    A synoptic comparison of the MHD and the OPAL equations of state

    A detailed comparison is carried out between two popular equations of state (EOS), the Mihalas-Hummer-Däppen (MHD) and the OPAL equations of state, which have found widespread use in solar and stellar modeling during the past two decades. They are part of two independent efforts to recalculate stellar opacities: the international Opacity Project (OP) and the Livermore-based OPAL project. We examine the differences between the two equations of state in a broad sense, over the whole applicable rho-T range and for three different chemical mixtures. Such a global comparison highlights both their differences and their similarities. We find that omitting a questionable hard-sphere correction, tau, to the Coulomb interaction in the MHD formulation greatly improves the agreement between the MHD and OPAL EOS. We also find signs of differences that could stem from quantum effects not yet included in the MHD EOS, and differences in the ionization zones that are probably caused by differences in the mechanisms for pressure ionization. Our analysis not only gives a clearer perception of the limitations of each equation of state for astrophysical applications, but also serves as guidance for future work on the physical issues behind the differences. The outcome should be an improvement of both equations of state. Comment: 33 pages, 26 figures. Corrected discussion of Basu & Antia, 2004, ApJ, 606, L85-L8
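
    The global comparison described above amounts to evaluating both EOS on a common density-temperature grid and mapping the relative differences in thermodynamic quantities. A minimal sketch of that bookkeeping follows; the tables are toy ideal-gas placeholders, not the MHD or OPAL distribution formats, and all grid choices are illustrative assumptions.

```python
# Sketch: map the relative pressure difference between two tabulated EOS on a
# shared (log rho, log T) grid. The tables below are toy ideal-gas placeholders
# standing in for the real MHD and OPAL grids; real formats and values differ.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

log_rho = np.linspace(-10.0, 2.0, 121)   # log10 density in g cm^-3
log_T = np.linspace(3.5, 8.0, 91)        # log10 temperature in K

def toy_logP(mu):
    """Ideal-gas log10 pressure for mean molecular weight mu (placeholder EOS)."""
    rho = 10.0 ** log_rho[:, None]
    T = 10.0 ** log_T[None, :]
    k_B, m_u = 1.380649e-16, 1.6605390666e-24   # cgs constants
    return np.log10(rho * k_B * T / (mu * m_u))

# Hypothetical stand-ins for the two EOS tables (slightly different mu).
mhd = RegularGridInterpolator((log_rho, log_T), toy_logP(0.61))
opal = RegularGridInterpolator((log_rho, log_T), toy_logP(0.62))

def relative_pressure_difference(lr, lt):
    """(P_MHD - P_OPAL) / P_OPAL at a given (log10 rho, log10 T) point."""
    p1, p2 = 10.0 ** mhd((lr, lt)), 10.0 ** opal((lr, lt))
    return (p1 - p2) / p2

# Scan a few illustrative (rho, T) points, e.g. along a stellar-envelope track.
for lr, lt in [(-6.0, 4.0), (-3.0, 5.0), (-1.0, 6.0)]:
    print(lr, lt, float(relative_pressure_difference(lr, lt)))
```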

    Measurements of solar irradiance and effective temperature as a probe of solar interior magnetic fields

    We argue that a variety of solar data suggest that the activity-cycle timescale variability of the total irradiance is produced by structural adjustments of the solar interior. Assuming these adjustments are induced by variations of internal magnetic fields, we use measurements of the total irradiance and effective temperature over the period from 1978 to 1992 to infer the magnitude and location of the magnetic field. Using an updated stellar evolution model, which includes magnetic fields, we find that the observations can be explained by fields whose peak values range from 120 kG to 2.3 kG, located in the convection zone between $0.959R_{\sun}$ and $0.997R_{\sun}$, respectively. The corresponding maximal radius changes are 17 km when the magnetic field is located at $0.959R_{\sun}$ and 3 km when it is located at $0.997R_{\sun}$. At these depths, the $W$ parameter (defined by $\Delta \ln R / \Delta \ln L$, where $R$ and $L$ are the radius and luminosity) ranges from 0.02 to 0.006. All these predictions are consistent with helioseismology and recent measurements carried out by the MDI experiment on SOHO. Comment: 8 pages, 8 figures, to appear in Ap
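
    The $W$ parameter quoted above is simply the ratio of the fractional radius change to the fractional luminosity change. A minimal sketch is shown below; the 0.1% luminosity variation used there is an illustrative assumption (a typical solar-cycle irradiance amplitude), not the paper's measured value.

```python
# Sketch: W = Delta ln R / Delta ln L for small radius and luminosity changes.
# The luminosity perturbation is an illustrative placeholder, not the paper's data.
R_sun = 6.957e10          # cm
L_sun = 3.828e33          # erg s^-1

delta_R = 17.0e5          # cm, the 17 km radius change quoted in the abstract
delta_L = L_sun * 1.0e-3  # hypothetical 0.1% luminosity change, illustration only

W = (delta_R / R_sun) / (delta_L / L_sun)   # first-order Delta ln R / Delta ln L
print(f"W = {W:.3g}")
```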

    Recent Advances in Modeling Stellar Interiors

    Advances in stellar interior modeling are being driven by new data from large-scale surveys and high-precision photometric and spectroscopic observations. Here we focus on single stars in normal evolutionary phases; we will not discuss the many advances in modeling star formation, interacting binaries, supernovae, or neutron stars. We review briefly: 1) updates to the input physics of stellar models; 2) progress in two- and three-dimensional evolution and hydrodynamic models; 3) insights from oscillation data used to infer stellar interior structure and validate model predictions (asteroseismology). We close by highlighting a few outstanding problems, e.g., the driving mechanisms for hybrid gamma Dor/delta Sct star pulsations, the cause of giant eruptions seen in luminous blue variables such as eta Car and P Cyg, and the solar abundance problem. Comment: Proceedings for an invited talk at the High Energy Density Laboratory Astrophysics 2010 conference, Caltech, March 2010, submitted for a special issue of Astrophysics and Space Science; 7 pages; 5 figures

    New Insights into White-Light Flare Emission from Radiative-Hydrodynamic Modeling of a Chromospheric Condensation

    (abridged) The heating mechanism at high densities during M dwarf flares is poorly understood. Spectra of M dwarf flares in the optical and near-ultraviolet wavelength regimes have revealed three continuum components during the impulsive phase: 1) an energetically dominant blackbody component with a color temperature of T \sim 10,000 K in the blue-optical, 2) a smaller amount of Balmer continuum emission in the near-ultraviolet at lambda < 3646 Angstroms and 3) an apparent pseudo-continuum of blended high-order Balmer lines. These properties are not reproduced by models that employ a typical "solar-type" flare heating level in nonthermal electrons, and therefore our understanding of these spectra is limited to a phenomenological interpretation. We present a new 1D radiative-hydrodynamic model of an M dwarf flare from precipitating nonthermal electrons with a large energy flux of 10^{13} erg cm^{-2} s^{-1}. The simulation produces bright continuum emission from a dense, hot chromospheric condensation. For the first time, the observed color temperature and Balmer jump ratio are produced self-consistently in a radiative-hydrodynamic flare model. We find that a T \sim 10,000 K blackbody-like continuum component and a small Balmer jump ratio result from optically thick Balmer and Paschen recombination radiation, and thus the properties of the flux spectrum are caused by blue light escaping over a larger physical depth range compared to red and near-ultraviolet light. To model the near-ultraviolet pseudo-continuum previously attributed to overlapping Balmer lines, we include the extra Balmer continuum opacity from Landau-Zener transitions that result from merged, high-order energy levels of hydrogen in a dense, partially ionized atmosphere. This reveals a new diagnostic of ambient charge density in the densest regions of the atmosphere that are heated during dMe and solar flares. Comment: 50 pages, 2 tables, 13 figures. Accepted for publication in the Solar Physics Topical Issue, "Solar and Stellar Flares". Version 2 (June 22, 2015): updated to include comments by Guest Editor. The final publication is available at Springer via http://dx.doi.org/10.1007/s11207-015-0708-
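
    The two continuum diagnostics named in the abstract, the blue-optical color temperature and the Balmer jump ratio, can be illustrated with a toy calculation: fit a Planck function to the blue-optical flux and take a flux ratio across the 3646 Angstrom Balmer edge. The wavelength windows and the pure-blackbody input spectrum below are illustrative assumptions, not the paper's definitions or model output.

```python
# Sketch: color temperature from the blue-optical slope of a spectrum, and a
# Balmer-jump ratio taken as flux just blueward over flux just redward of 3646 A.
# The "observed" spectrum is a pure 10,000 K blackbody, used only for illustration.
import numpy as np
from scipy.optimize import curve_fit

h, c, k_B = 6.626e-27, 2.998e10, 1.381e-16     # cgs constants

def planck_lambda(wavelength_cm, T):
    """Planck specific intensity B_lambda(T) in cgs units."""
    x = h * c / (wavelength_cm * k_B * T)
    return 2 * h * c**2 / wavelength_cm**5 / np.expm1(x)

# Toy spectrum: a 10,000 K blackbody sampled in an illustrative blue-optical window.
wl_fit = np.linspace(4000e-8, 4800e-8, 50)     # cm (4000-4800 A)
flux_fit = planck_lambda(wl_fit, 1.0e4)

# Color temperature: the T of the Planck function that best matches that window.
(T_color,), _ = curve_fit(planck_lambda, wl_fit, flux_fit, p0=[8.0e3])
print(f"color temperature ~ {T_color:.0f} K")

# Balmer jump ratio: flux just blueward vs. just redward of the 3646 A edge
# (close to 1 for a featureless blackbody; flare spectra show a modest jump).
ratio = planck_lambda(3600e-8, 1.0e4) / planck_lambda(3700e-8, 1.0e4)
print(f"Balmer jump ratio (toy) ~ {ratio:.2f}")
```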

    Graph Neural Networks for low-energy event classification & reconstruction in IceCube

    IceCube, a cubic-kilometer array of optical sensors built to detect atmospheric and astrophysical neutrinos between 1 GeV and 1 PeV, is deployed 1.45 km to 2.45 km below the surface of the ice sheet at the South Pole. The classification and reconstruction of events from the in-ice detectors play a central role in the analysis of data from IceCube. Reconstructing and classifying events is a challenge due to the irregular detector geometry, inhomogeneous scattering and absorption of light in the ice and, below 100 GeV, the relatively low number of signal photons produced per event. To address this challenge, it is possible to represent IceCube events as point cloud graphs and use a Graph Neural Network (GNN) as the classification and reconstruction method. The GNN is capable of distinguishing neutrino events from cosmic-ray backgrounds, classifying different neutrino event types, and reconstructing the deposited energy, direction and interaction vertex. Based on simulation, we provide a comparison in the 1 GeV–100 GeV energy range to the state-of-the-art maximum likelihood techniques used in current IceCube analyses, including the effects of known systematic uncertainties. For neutrino event classification, the GNN increases the signal efficiency by 18% at a fixed background rate, compared to current IceCube methods. Alternatively, the GNN offers a reduction of the background (i.e. false positive) rate by over a factor of 8 (to below half a percent) at a fixed signal efficiency. For the reconstruction of energy, direction, and interaction vertex, the resolution improves by an average of 13%–20% compared to current maximum likelihood techniques in the energy range of 1 GeV–30 GeV. The GNN, when run on a GPU, is capable of processing IceCube events at a rate nearly double the median IceCube trigger rate of 2.7 kHz, which opens the possibility of using low energy neutrinos in online searches for transient events. Peer Reviewed
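
    Representing an event as a point-cloud graph means that each triggered sensor becomes a node (position, time, charge) with edges to its nearest neighbours, over which a message-passing network aggregates information. The sketch below is a generic nearest-neighbour message-passing step in plain NumPy with untrained toy weights; it is not the collaboration's GNN architecture or code.

```python
# Sketch: one k-nearest-neighbour message-passing step over an "event" given as
# a point cloud of sensor hits (x, y, z, time, charge). Generic illustration only.
import numpy as np

rng = np.random.default_rng(0)
hits = rng.normal(size=(30, 5))            # 30 hypothetical hits, 5 features each
positions = hits[:, :3]                    # spatial coordinates used to build edges

def knn_edges(pos, k=4):
    """Indices of the k nearest neighbours of every node (excluding itself)."""
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return np.argsort(d, axis=1)[:, :k]    # shape (n_nodes, k)

def message_passing(features, neighbours, W_self, W_msg):
    """New node features = ReLU(W_self x_i + mean_j W_msg x_j)."""
    msgs = features[neighbours].mean(axis=1)          # aggregate neighbour features
    out = features @ W_self.T + msgs @ W_msg.T
    return np.maximum(out, 0.0)

neighbours = knn_edges(positions)
W_self = rng.normal(scale=0.1, size=(16, 5))          # untrained toy weights
W_msg = rng.normal(scale=0.1, size=(16, 5))
embedding = message_passing(hits, neighbours, W_self, W_msg)

# A graph-level summary (e.g. for classification or energy regression) can then
# be obtained by pooling over nodes.
event_summary = embedding.mean(axis=0)
print(event_summary.shape)                 # (16,)
```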

    Testing Hadronic Interaction Models with Cosmic Ray Measurements at the IceCube Neutrino Observatory

    The IceCube Neutrino Observatory provides the opportunity to perform unique measurements of cosmic-ray air showers with its combination of a surface array and a deep detector. Electromagnetic particles and low-energy muons (∼GeV) are detected by IceTop, while a bundle of high-energy muons (≳400 GeV) can be measured in coincidence in IceCube. Predictions of air-shower observables based on simulations show a strong dependence on the choice of the high-energy hadronic interaction model. By reconstructing different composition-dependent observables, one can provide strong tests of hadronic interaction models, as these measurements should be consistent with one another. In this work, we present an analysis of air-shower data between 2.5 and 80 PeV, comparing the composition interpretations of measurements of the surface muon density, the slope of the IceTop lateral distribution function, and the energy loss of the muon bundle, using the models Sibyll 2.1, QGSJet-II.04, and EPOS-LHC. We observe inconsistencies in all models under consideration, suggesting that none of them gives an adequate description of the experimental data. The results furthermore imply a significant uncertainty in the determination of the cosmic-ray mass composition through indirect measurements.
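
    The consistency test sketched in the abstract can be thought of as converting each observable, via a given hadronic model's predictions, into an estimate of the mean logarithmic mass and then checking whether those estimates agree. The snippet below shows only that bookkeeping, using entirely made-up calibration lines and measurement values; it carries no physical content and is not the analysis code.

```python
# Sketch: per-observable <lnA> estimates under one hadronic model, and a simple
# consistency check among them. All numbers are made-up placeholders.
import numpy as np

# Hypothetical linear "calibration" under a given model: observable = a + b * <lnA>,
# one entry per composition-sensitive observable.
calibration = {
    "muon_density":  (0.10, 0.50),
    "ldf_slope":     (2.00, -0.30),
    "bundle_e_loss": (1.50, 0.40),
}

# Hypothetical measured values (same keys); placeholders, not IceCube data.
measured = {"muon_density": 1.05, "ldf_slope": 1.40, "bundle_e_loss": 2.35}

def ln_a_estimates(calib, data):
    """Invert each calibration line to get an <lnA> estimate per observable."""
    return {k: (data[k] - a) / b for k, (a, b) in calib.items()}

estimates = ln_a_estimates(calibration, measured)
values = np.array(list(estimates.values()))
print(estimates)
print("spread of <lnA> estimates:", values.max() - values.min())
# A large spread for every model tested would signal the kind of internal
# inconsistency reported in the abstract.
```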

    Combining Maximum-Likelihood with Deep Learning for Event Reconstruction in IceCube

    The field of deep learning has become increasingly important for particle physics experiments, yielding a multitude of advances, predominantly in event classification and reconstruction tasks. Many of these applications have been adopted from other domains. However, data in the field of physics are unique in the context of machine learning, insofar as their generation process and the laws and symmetries they abide by are usually well understood. Most commonly used deep learning architectures fail to make use of this available information. In contrast, more traditional likelihood-based methods are capable of exploiting domain knowledge, but they are often limited by computational complexity. In this contribution, a hybrid approach is presented that utilizes generative neural networks to approximate the likelihood, which may then be used in a traditional maximum-likelihood setting. Domain knowledge, such as invariances and detector characteristics, can easily be incorporated into this approach. The hybrid approach is illustrated by the example of event reconstruction in IceCube.
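
    The hybrid idea is that a generative model acts as a fast surrogate for the per-sensor expectations, which are then plugged into an ordinary likelihood that a standard optimizer maximizes. The sketch below uses a trivial stand-in "network" and a Poisson likelihood; the surrogate, its parameters, and the toy event are hypothetical illustrations, not the method's actual implementation.

```python
# Sketch: maximum-likelihood fit where a (stand-in) generative model maps event
# parameters to expected per-sensor charges, combined with a Poisson likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(1)
sensor_positions = rng.uniform(-50, 50, size=(60, 3))   # hypothetical sensor layout

def surrogate_expectation(params, positions):
    """Stand-in for a trained generative network: expected charge per sensor
    given event parameters (x, y, z, log-energy). Purely illustrative."""
    vertex, log_e = params[:3], params[3]
    r = np.linalg.norm(positions - vertex, axis=1) + 1.0
    return np.exp(log_e) / r**2

def neg_log_likelihood(params, observed, positions):
    """Poisson negative log-likelihood of observed charges given expectations."""
    mu = surrogate_expectation(params, positions) + 1e-9
    return np.sum(mu - observed * np.log(mu) + gammaln(observed + 1.0))

# Toy "event": charges drawn from the surrogate at some chosen true parameters.
true_params = np.array([5.0, -10.0, 2.0, 6.0])
observed = rng.poisson(surrogate_expectation(true_params, sensor_positions))

fit = minimize(neg_log_likelihood, x0=np.array([0.0, 0.0, 0.0, 5.0]),
               args=(observed, sensor_positions), method="Nelder-Mead")
print("reconstructed vertex and log-energy:", fit.x)
```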

    A Search for Neutrinos from Decaying Dark Matter in Galaxy Clusters and Galaxies with IceCube

    The observed dark matter abundance in the Universe can be explained with non-thermal, heavy dark matter models. In order for dark matter to still be present today, its lifetime has to far exceed the age of the Universe. In these scenarios, dark matter decay can produce highly energetic neutrinos, along with other Standard Model particles. To date, the IceCube Neutrino Observatory, located at the geographic South Pole, is the world's largest neutrino telescope. In 2013, the IceCube collaboration reported the first observation of high-energy astrophysical neutrinos. Since then, IceCube has collected a large amount of astrophysical neutrino data with energies up to tens of PeV, allowing us to probe heavy dark matter models with neutrinos. We search the IceCube data for neutrinos from decaying dark matter in galaxy clusters and galaxies. The targeted dark matter masses range from 10 TeV to 10 PeV. In this contribution, we present the method and sensitivities of the analysis.
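
    For decaying dark matter, the expected neutrino flux from a target scales as the decay rate per unit mass times the dark matter column density toward that target (the D-factor). The sketch below writes that scaling out; the lifetime, D-factor, and spectrum shape are placeholder assumptions, not values from the analysis.

```python
# Sketch: expected neutrino flux from decaying dark matter toward one target,
#   dPhi/dE = (1 / (4*pi * m_DM * tau)) * (dN/dE) * D,
# where D is the integral of the DM density along the line of sight and over the
# target's solid angle. Numbers below are placeholders, not analysis inputs.
import numpy as np

m_dm = 1.0e6          # GeV (1 PeV), within the 10 TeV - 10 PeV range of the search
tau = 1.0e28          # s, hypothetical dark matter lifetime
D_factor = 1.0e22     # GeV cm^-2, placeholder column density for some target

def dN_dE(E):
    """Placeholder decay spectrum: flat and cut off at m_DM / 2."""
    return np.where(E < m_dm / 2, 2.0 / m_dm, 0.0)

def flux(E):
    """Neutrinos per cm^2 per s per GeV from the chosen target."""
    return dN_dE(E) / (4 * np.pi * m_dm * tau) * D_factor

energies = np.logspace(4, 6, 5)    # GeV
print(flux(energies))
```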

    New Flux Limits in the Low Relativistic Regime for Magnetic Monopoles at IceCube

    Magnetic monopoles are hypothetical particles that carry magnetic charge. Depending on their velocity, different light production mechanisms exist to facilitate detection. In this work, a previously unused light production mechanism, luminescence of ice, is introduced. This mechanism is nearly independent of the velocity of the incident magnetic monopole and becomes the only viable source of light in the low relativistic regime (0.1-0.55 c). An analysis searching for magnetic monopoles in this regime in seven years of IceCube data is presented. While no magnetic monopole detection can be claimed, a new flux limit in the low relativistic regime is presented, improving on the previous best flux limit by two orders of magnitude.
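
    With no observed signal, a flux upper limit follows from the Poisson upper limit on the number of events divided by the exposure (effective area times solid angle times livetime times efficiency). The sketch below assumes zero observed events with negligible background, for which the Feldman-Cousins 90% upper limit on the expected count is about 2.44 events; the exposure numbers are placeholders, not the analysis values.

```python
# Sketch: 90% C.L. flux upper limit from a null result,
#   Phi_90 = N_90 / (A_eff * Omega * T * efficiency),
# assuming zero observed events and negligible background (N_90 ~ 2.44).
# The exposure values below are placeholders, not the analysis numbers.
import numpy as np

N_90 = 2.44                      # events, 90% C.L. upper limit for n_obs = 0, b = 0
A_eff = 1.0e10                   # cm^2, hypothetical effective area (~1 km^2)
omega = 4 * np.pi                # sr, assume sensitivity over the full sky
livetime = 7 * 3.156e7           # s, roughly seven years of data
efficiency = 0.5                 # hypothetical signal efficiency after cuts

phi_90 = N_90 / (A_eff * omega * livetime * efficiency)
print(f"flux upper limit ~ {phi_90:.2e} cm^-2 s^-1 sr^-1")
```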

    Indirect search for dark matter in the Galactic Centre with IceCube

    Even though there are strong astrophysical and cosmological indications supporting the existence of dark matter, its exact nature remains unknown. Assuming that dark matter is composed of Weakly Interacting Massive Particles (WIMPs), we expect it to produce Standard Model particles when annihilating or decaying. These Standard Model particles could in turn yield neutrinos that can be detected by the IceCube neutrino telescope. The Milky Way is expected to be permeated by a dark matter halo whose density increases towards its centre. This halo is expected to yield the strongest dark matter annihilation signal at Earth of any celestial object, making it an ideal target for indirect searches. In this contribution, we present the sensitivities of an indirect search for dark matter in the Galactic Centre using IceCube data. This low-energy dark matter search covers dark matter masses ranging from 5 GeV to 1 TeV. The sensitivities obtained for this analysis show considerable improvements over previous IceCube results in the considered energy range.
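
    For annihilating WIMPs, the expected neutrino flux from the Galactic Centre scales with the annihilation cross section over the squared mass and with the line-of-sight integral of the squared dark matter density (the J-factor). A minimal sketch of that scaling follows; the J-factor, cross section, and spectrum are placeholder assumptions, not results of the search.

```python
# Sketch: expected neutrino flux from annihilating dark matter,
#   dPhi/dE = (<sigma v> / (8*pi * m_DM^2)) * (dN/dE) * J,
# with J the line-of-sight integral of rho^2 over the target's solid angle.
# All numbers are placeholders for illustration, not values from the analysis.
import numpy as np

m_dm = 100.0          # GeV, within the 5 GeV - 1 TeV range of the search
sigma_v = 3.0e-26     # cm^3 s^-1, the canonical thermal-relic cross section
J_factor = 1.0e23     # GeV^2 cm^-5, placeholder J-factor for the Galactic Centre

def dN_dE(E):
    """Placeholder annihilation spectrum, flat and cut off at the DM mass."""
    return np.where(E < m_dm, 2.0 / m_dm, 0.0)

def flux(E):
    """Neutrinos per cm^2 per s per GeV from the Galactic Centre region."""
    return sigma_v / (8 * np.pi * m_dm**2) * dN_dE(E) * J_factor

energies = np.array([10.0, 50.0, 90.0])   # GeV
print(flux(energies))
```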