Genes for de novo biosynthesis of omega-3 polyunsaturated fatty acids are widespread in animals
Marine ecosystems are responsible for virtually all production of omega-3 (ω3) long-chain polyunsaturated fatty acids (PUFA), which are essential nutrients for vertebrates. Current consensus is that marine microbes account for this production, given their possession of key enzymes including methyl-end (or "ωx") desaturases. ωx desaturases have also been described in a small number of invertebrate animals, but their precise distribution has not been systematically explored. This study identifies 121 ωx desaturase sequences from 80 species within the Cnidaria, Rotifera, Mollusca, Annelida, and Arthropoda. Horizontal gene transfer has contributed to this hitherto unknown widespread distribution. Functional characterization of animal ωx desaturases provides evidence that multiple invertebrates have the ability to produce ω3 PUFA de novo and further biosynthesize ω3 long-chain PUFA. This finding represents a fundamental revision in our understanding of ω3 long-chain PUFA production in global food webs, by revealing that numerous widespread and abundant invertebrates have the endogenous capacity to make significant contributions beyond that coming from marine microbes.
Detecting a stochastic gravitational wave background with the Laser Interferometer Space Antenna
The random superposition of many weak sources will produce a stochastic
background of gravitational waves that may dominate the response of the LISA
(Laser Interferometer Space Antenna) gravitational wave observatory. Unless
something can be done to distinguish between a stochastic background and
detector noise, the two will combine to form an effective noise floor for the
detector. Two methods have been proposed to solve this problem. The first is to
cross-correlate the output of two independent interferometers. The second is an
ingenious scheme for monitoring the instrument noise by operating LISA as a
Sagnac interferometer. Here we derive the optimal orbital alignment for
cross-correlating a pair of LISA detectors, and provide the first analytic
derivation of the Sagnac sensitivity curve.
Comment: 9 pages, 11 figures. Significant changes to the noise estimate.
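To make the cross-correlation idea concrete, here is a minimal numerical sketch (Python), not the paper's analysis: a stochastic signal common to two detectors survives in the time-averaged product of their outputs, while independent instrument noise averages away. The signal and noise amplitudes below are arbitrary assumptions.

# Illustrative sketch only: why cross-correlating two detectors isolates a
# common stochastic background from independent instrument noise.
# Signal and noise levels here are arbitrary, not LISA values.
import numpy as np

rng = np.random.default_rng(42)
n_samples = 1_000_000

h = rng.normal(0.0, 1.0, n_samples)      # common stochastic background
n1 = rng.normal(0.0, 5.0, n_samples)     # independent noise, detector 1
n2 = rng.normal(0.0, 5.0, n_samples)     # independent noise, detector 2

d1 = h + n1                              # output of detector 1
d2 = h + n2                              # output of detector 2

auto = np.mean(d1 * d1)                  # ~ Var(h) + Var(n1): noise-dominated
cross = np.mean(d1 * d2)                 # ~ Var(h): noise terms average away

print(f"auto-power  <d1*d1> = {auto:.2f}  (background buried in noise)")
print(f"cross-power <d1*d2> = {cross:.2f} (estimates background power ~1)")

In the real case the same logic is applied in the frequency domain, and the paper's optimization of the orbital alignment enters through how well the two detectors' responses to the background overlap, which this toy model ignores.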
An improved method for measuring muon energy using the truncated mean of dE/dx
The measurement of muon energy is critical for many analyses in large
Cherenkov detectors, particularly those that involve separating
extraterrestrial neutrinos from the atmospheric neutrino background. Muon
energy has traditionally been determined by measuring the specific energy loss
(dE/dx) along the muon's path and relating the dE/dx to the muon energy.
Because high-energy muons (E_mu > 1 TeV) lose energy randomly, the spread in
dE/dx values is quite large, leading to a typical energy resolution of 0.29 in
log10(E_mu) for a muon observed over a 1 km path length in the IceCube
detector. In this paper, we present an improved method that uses a truncated
mean and other techniques to determine the muon energy. The muon track is
divided into separate segments with individual dE/dx values. The elimination of
segments with the highest dE/dx results in an overall dE/dx that is more
closely correlated to the muon energy. This method results in an energy
resolution of 0.22 in log10(E_mu), which gives a 26% improvement. This
technique is applicable to any large water or ice detector and potentially to
large scintillator or liquid argon detectors.
Comment: 12 pages, 16 figures.
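The truncated-mean estimator itself is simple; the following sketch (Python) illustrates the idea with a toy loss profile. The 60% keep-fraction and the toy mixture of quasi-continuous ionization plus occasional large stochastic showers are illustrative assumptions, not IceCube's tuned values.

# Illustrative sketch of a truncated-mean dE/dx estimator: drop the segments
# with the largest losses before averaging, so rare large stochastic showers
# do not dominate the energy estimate.
import numpy as np

def truncated_mean_dedx(segment_dedx, keep_fraction=0.6):
    """Average the lowest `keep_fraction` of per-segment dE/dx values."""
    ordered = np.sort(np.asarray(segment_dedx, dtype=float))
    n_keep = max(1, int(len(ordered) * keep_fraction))
    return ordered[:n_keep].mean()

# Toy muon: smooth ionization losses plus occasional large stochastic showers.
rng = np.random.default_rng(7)
n_segments = 20
ionization = rng.normal(1.0, 0.1, n_segments)              # quasi-continuous losses
showers = rng.exponential(5.0, n_segments) * (rng.random(n_segments) < 0.2)
segments = ionization + showers

print(f"plain mean dE/dx:     {segments.mean():.2f}")
print(f"truncated mean dE/dx: {truncated_mean_dedx(segments):.2f}")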
All-particle cosmic ray energy spectrum measured with 26 IceTop stations
We report on a measurement of the cosmic ray energy spectrum with the IceTop
air shower array, the surface component of the IceCube Neutrino Observatory at
the South Pole. The data used in this analysis were taken between June and
October, 2007, with 26 surface stations operational at that time, corresponding
to about one third of the final array. The fiducial area used in this analysis
was 0.122 km^2. The analysis investigated the energy spectrum from 1 to 100 PeV
measured for three different zenith angle ranges between 0° and 46°.
Because of the isotropy of cosmic rays in this energy range, the spectra from
all zenith angle intervals have to agree. The cosmic-ray energy spectrum was
determined under different assumptions on the primary mass composition. Good
agreement of spectra in the three zenith angle ranges was found for the
assumption of pure protons and for a simple two-component model. For zenith angles
θ < 30°, where the mass dependence is smallest, the knee in the
cosmic-ray energy spectrum was observed between 3.5 and 4.32 PeV, depending on the
composition assumption. Spectral indices above the knee range from -3.08 to
-3.11, depending on the assumed primary mass composition. Moreover, an indication
of a flattening of the spectrum above 22 PeV was observed.
Comment: 38 pages, 17 figures.
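As an illustration of how a knee position and the spectral index above it can be extracted, the sketch below fits a broken power law to a toy binned spectrum (Python). The generated data, break position and indices are assumptions for demonstration only and do not reproduce the IceTop unfolding or its composition-dependent treatment.

# Illustrative sketch: locating a spectral "knee" by fitting a broken power law
# to a binned energy spectrum. The toy spectrum below is generated with an
# assumed break at 4 PeV and indices -2.7 / -3.1; it is not IceTop data.
import numpy as np
from scipy.optimize import curve_fit

def broken_power_law_log(log10_e, log10_n0, gamma1, gamma2, log10_eknee):
    """log10(flux), continuous at the knee, index gamma1 below and gamma2 above."""
    below = log10_n0 + gamma1 * (log10_e - log10_eknee)
    above = log10_n0 + gamma2 * (log10_e - log10_eknee)
    return np.where(log10_e < log10_eknee, below, above)

rng = np.random.default_rng(1)
log10_e = np.linspace(0.0, 2.0, 40)          # log10(E / PeV), i.e. 1 to 100 PeV
truth = broken_power_law_log(log10_e, -6.0, -2.7, -3.1, np.log10(4.0))
log10_flux = truth + rng.normal(0.0, 0.02, log10_e.size)   # toy statistical scatter

popt, _ = curve_fit(broken_power_law_log, log10_e, log10_flux,
                    p0=[-6.0, -2.5, -3.0, 0.5])
print(f"fitted knee:      {10**popt[3]:.1f} PeV")
print(f"index above knee: {popt[2]:.2f}")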
Decoherence and coherent population transfer between two coupled systems
We show that an arbitrary system described by two dipole moments exhibits coherent superpositions of internal states that can be completely decoupled from the dissipative interactions (responsible for decoherence) and an external driving laser field. These superpositions, known as dark or trapping states, can be completely stable or can coherently interact with the remaining states. We examine the master equation describing the dissipative evolution of the system, identify conditions for population trapping, and classify processes that can transfer population to these undriven and nondecaying states. It is shown that coherent transfers are possible only if the two systems are nonidentical, that is, the transitions have different frequencies and/or decay rates. In particular, we find that the trapping conditions can involve both coherent and dissipative interactions, and depending on the energy level structure of the system, the population can be trapped in a linear superposition of two or more bare states, in a dressed state corresponding to an eigenstate of the system plus external fields or, in some cases, in one of the excited states of the system. A comprehensive analysis is presented of the different processes that are responsible for population trapping, and we illustrate these ideas with three examples of two coupled systems: single V- and Lambda-type three-level atoms and two nonidentical two-level atoms, which are known to exhibit dark states. We show that the effect of population trapping does not necessarily require decoupling of the antisymmetric superposition from the dissipative interactions. We also find that the vacuum-induced coherent coupling between the systems could be easily observed in Lambda-type atoms. Our analysis of the population trapping in two nonidentical atoms shows that the atoms can be driven into a maximally entangled state which is completely decoupled from the dissipative interaction.
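As a minimal sketch of the kind of trapping condition discussed above, and only for the special case of maximal cross-damping, Gamma_12 = sqrt(Gamma_1 Gamma_2), which is an assumption not made in the paper's general treatment, the dissipative part of the master equation has a single collective jump operator, and the single-excitation state it annihilates is decoupled from spontaneous emission:

S^- = \sqrt{\Gamma_1}\,\sigma_1^- + \sqrt{\Gamma_2}\,\sigma_2^-,
\qquad
|D\rangle = \frac{\sqrt{\Gamma_2}\,|e_1 g_2\rangle - \sqrt{\Gamma_1}\,|g_1 e_2\rangle}{\sqrt{\Gamma_1 + \Gamma_2}},
\qquad
S^-\,|D\rangle = 0 .

For identical systems this reduces to the familiar antisymmetric superposition; for nonidentical decay rates the decoupled state is an unequally weighted superposition, consistent with the role of nonidentical transitions emphasized above.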
Graph Neural Networks for low-energy event classification & reconstruction in IceCube
IceCube, a cubic-kilometer array of optical sensors built to detect atmospheric and astrophysical neutrinos between 1 GeV and 1 PeV, is deployed 1.45 km to 2.45 km below the surface of the ice sheet at the South Pole. The classification and reconstruction of events from the in-ice detectors play a central role in the analysis of data from IceCube. Reconstructing and classifying events is a challenge due to the irregular detector geometry, inhomogeneous scattering and absorption of light in the ice and, below 100 GeV, the relatively low number of signal photons produced per event. To address this challenge, it is possible to represent IceCube events as point cloud graphs and use a Graph Neural Network (GNN) as the classification and reconstruction method. The GNN is capable of distinguishing neutrino events from cosmic-ray backgrounds, classifying different neutrino event types, and reconstructing the deposited energy, direction and interaction vertex. Based on simulation, we provide a comparison in the 1 GeV–100 GeV energy range to the current state-of-the-art maximum likelihood techniques used in current IceCube analyses, including the effects of known systematic uncertainties. For neutrino event classification, the GNN increases the signal efficiency by 18% at a fixed background rate, compared to current IceCube methods. Alternatively, the GNN offers a reduction of the background (i.e. false positive) rate by over a factor of 8 (to below half a percent) at a fixed signal efficiency. For the reconstruction of energy, direction, and interaction vertex, the resolution improves by an average of 13%–20% compared to current maximum likelihood techniques in the energy range of 1 GeV–30 GeV. The GNN, when run on a GPU, is capable of processing IceCube events at a rate nearly double the median IceCube trigger rate of 2.7 kHz, which opens the possibility of using low-energy neutrinos in online searches for transient events.
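For orientation only, the sketch below shows the point-cloud-graph representation in a hand-rolled form (plain PyTorch): hits become graph nodes with position, time and charge features, edges connect each hit to its nearest neighbours, and a single message-passing layer feeds a pooled classifier head. The layer sizes, the k = 4 neighbourhood and the feature choice are assumptions for illustration; this is not the architecture or training setup used in the IceCube work.

# Minimal, hand-rolled sketch of the point-cloud-graph idea (plain PyTorch,
# not IceCube's GNN). Each event is a set of sensor hits with position, time
# and charge; edges connect each hit to its k nearest neighbours.
import torch
import torch.nn as nn

def knn_edges(pos, k=4):
    """Indices of the k nearest neighbours of every hit (excluding itself)."""
    dist = torch.cdist(pos, pos)                      # (N, N) pairwise distances
    dist.fill_diagonal_(float("inf"))
    return dist.topk(k, largest=False).indices        # (N, k)

class TinyEventGNN(nn.Module):
    def __init__(self, n_features=5, hidden=32, n_classes=3):
        super().__init__()
        self.message = nn.Sequential(nn.Linear(2 * n_features, hidden), nn.ReLU())
        self.readout = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                     nn.Linear(hidden, n_classes))

    def forward(self, hits, neighbours):
        # hits: (N, n_features) = (x, y, z, time, charge); neighbours: (N, k)
        neigh_feats = hits[neighbours]                          # (N, k, F)
        centre = hits.unsqueeze(1).expand_as(neigh_feats)       # (N, k, F)
        messages = self.message(torch.cat([centre, neigh_feats], dim=-1))
        node_state = messages.mean(dim=1)                       # aggregate over edges
        event_state = node_state.mean(dim=0)                    # pool over hits
        return self.readout(event_state)                        # class scores

# One toy event with 20 hits; the first three feature columns are positions.
hits = torch.randn(20, 5)
edges = knn_edges(hits[:, :3], k=4)
scores = TinyEventGNN()(hits, edges)
print(scores)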
SND@LHC: The Scattering and Neutrino Detector at the LHC
SND@LHC is a compact and stand-alone experiment designed to perform measurements with neutrinos produced at the LHC in the very forward pseudo-rapidity region. The experiment is located 480 m downstream of the ATLAS interaction point, in the TI18 tunnel. The detector is a hybrid system based on an 830 kg target made of tungsten plates, interleaved with emulsion and electronic trackers, which also acts as an electromagnetic calorimeter, followed by a hadronic calorimeter and a muon identification system. The detector is able to distinguish interactions of all three neutrino flavours, which allows probing the physics of heavy flavour production at the LHC in the very forward region. This region is of particular interest for future circular colliders and for very high energy astrophysical neutrino experiments. The detector is also able to search for the scattering of Feebly Interacting Particles. In its first phase, the detector will operate throughout LHC Run 3 and collect a total of 250 fb^-1 of data.
Neutrino oscillation studies with IceCube-DeepCore
IceCube, a gigaton-scale neutrino detector located at the South Pole, was primarily designed to search for astrophysical neutrinos with energies of a PeV and higher. This goal has been achieved with the detection of the highest energy neutrinos to date. At the other end of the energy spectrum, the DeepCore extension lowers the energy threshold of the detector to approximately 10 GeV and opens the door for oscillation studies using atmospheric neutrinos. An analysis of the disappearance of these neutrinos has been completed, with results that are complementary to those of dedicated oscillation experiments. Following a review of the detector principle and performance, the analysis method and the results are detailed. Finally, the future prospects of IceCube-DeepCore and the next generation of neutrino experiments at the South Pole (IceCube-Gen2, specifically the PINGU sub-detector) are briefly discussed.
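The sensitivity of roughly 10 GeV atmospheric neutrinos that cross the Earth can be seen from the standard two-flavour vacuum survival probability, sketched below (Python). The analysis itself uses a full three-flavour treatment with matter effects; the oscillation parameter values in the sketch are typical global-fit-like numbers chosen for illustration.

# Two-flavour vacuum approximation for atmospheric nu_mu disappearance,
# shown only to illustrate why ~10-25 GeV neutrinos crossing the Earth are
# sensitive to oscillations. Parameter values below are assumptions.
import math

def numu_survival(energy_gev, baseline_km,
                  sin2_2theta23=0.99, dm2_ev2=2.5e-3):
    """P(nu_mu -> nu_mu) in the two-flavour vacuum approximation."""
    phase = 1.27 * dm2_ev2 * baseline_km / energy_gev
    return 1.0 - sin2_2theta23 * math.sin(phase) ** 2

earth_diameter_km = 12742.0   # vertically up-going neutrinos crossing the Earth
for e in (5.0, 10.0, 25.0, 50.0):
    print(f"E = {e:5.1f} GeV  P(numu->numu) = "
          f"{numu_survival(e, earth_diameter_km):.2f}")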
A muon-track reconstruction exploiting stochastic losses for large-scale Cherenkov detectors
IceCube is a cubic-kilometer Cherenkov telescope operating at the South Pole. The main goal of IceCube is the detection of astrophysical neutrinos and the identification of their sources. High-energy muon neutrinos are observed via the secondary muons produced in charged-current interactions with nuclei in the ice. Currently, the best performing muon track directional reconstruction is based on a maximum likelihood method using the arrival time distribution of Cherenkov photons registered by the experiment's photomultipliers. A known systematic shortcoming of the prevailing method is the assumption of a continuous energy loss along the muon track. However, at energies above 1 TeV the light yield from muons is dominated by stochastic showers. This paper discusses a generalized ansatz in which the expected arrival time distribution is parametrized by a stochastic muon energy loss pattern. This more realistic parametrization of the loss profile leads to an improvement in muon angular resolution of up to 20% for through-going tracks, and up to a factor of 2 for starting tracks, over existing algorithms. Additionally, the procedure to estimate the directional reconstruction uncertainty has been improved to be more robust against numerical errors.
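A toy version of the loss-pattern ansatz is sketched below (Python): the expected charge at each sensor is written as a linear combination of energy losses placed on discrete track segments, and the per-segment losses are then fitted from the observed charges. The 1/r^2 light-yield model, the geometry and the non-negative least-squares fit are stand-ins for IceCube's photon-arrival-time likelihood and are assumptions made only for illustration.

# Sketch of the loss-pattern idea only: model the expected charge at each
# sensor as a linear combination of energy losses on track segments, then fit
# the per-segment losses. Geometry and numbers are invented for illustration.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)
segment_z = np.linspace(0.0, 1000.0, 21)          # segment centres along the track (m)
sensor_z = rng.uniform(0.0, 1000.0, 60)           # toy sensor positions along the track
sensor_r = rng.uniform(20.0, 150.0, 60)           # perpendicular distances to the track (m)

# Light-yield matrix: charge expected at sensor i per unit loss in segment j.
d2 = (sensor_z[:, None] - segment_z[None, :]) ** 2 + sensor_r[:, None] ** 2
yield_matrix = 1.0e4 / d2

true_losses = np.zeros(segment_z.size)
true_losses[[4, 11, 16]] = [50.0, 300.0, 120.0]   # a few large stochastic showers
observed = rng.poisson(yield_matrix @ true_losses).astype(float)

fitted_losses, _ = nnls(yield_matrix, observed)   # non-negative loss pattern
print("segments with fitted loss > 10:", np.flatnonzero(fitted_losses > 10.0))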
Galaxy Clusters Associated with Short GRBs. II. Predictions for the Rate of Short GRBs in Field and Cluster Early-Type Galaxies
We determine the relative rates of short GRBs in cluster and field early-type
galaxies as a function of the age probability distribution of their
progenitors, P(τ) ∝ τ^n. This analysis takes advantage of the
difference in the growth of stellar mass in clusters and in the field, which
arises from the combined effects of the galaxy stellar mass function, the
early-type fraction, and the dependence of star formation history on mass and
environment. This approach complements the use of the early- to late-type host
galaxy ratio, with the added benefit that the star formation histories of
early-type galaxies are simpler than those of late-type galaxies, and any
systematic differences between progenitors in early- and late-type galaxies are
removed. We find that the ratio varies from R(cluster)/R(field) ~ 0.5 for n =
-2 to ~ 3 for n = 2. Current observations indicate a ratio of about 2,
corresponding to n ~ 0 - 1. This is similar to the value inferred from the
ratio of short GRBs in early- and late-type hosts, but it differs from the
value of n ~ -1 for NS binaries in the Milky Way. We stress that this general
approach can be easily modified with improved knowledge of the effects of
environment and mass on the build-up of stellar mass, as well as the effect of
globular clusters on the short GRB rate. It can also be used to assess the age
distribution of Type Ia supernova progenitors.
Comment: ApJ accepted version.
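A toy version of the rate calculation is sketched below (Python): the present-day short GRB rate in each environment is the delay-time distribution P(τ) ∝ τ^n convolved with the build-up of early-type stellar mass, and the cluster-to-field ratio is evaluated at the current epoch. The two Gaussian mass-assembly histories (clusters forming stars earlier than the field) are invented placeholders, not the observationally derived histories used in the paper, so the printed ratios only reproduce the qualitative trend with n.

# Toy version of the rate calculation described above: convolve a progenitor
# delay-time distribution P(tau) ~ tau^n with the build-up of early-type
# stellar mass in clusters and in the field. The mass-assembly histories are
# invented placeholders, not the ones used in the paper.
import numpy as np

t0 = 13.0                                    # cosmic time considered (Gyr)
tau = np.linspace(0.1, t0, 2000)             # progenitor delay times (Gyr)

def mass_formed_rate(t, t_peak):
    """Toy star-formation history: an earlier burst for clusters than for the field."""
    return np.exp(-0.5 * ((t - t_peak) / 1.5) ** 2)

def grb_rate_today(n, t_peak):
    """Present-day rate: sum over delay times of SFR(t0 - tau) * P(tau), P(tau) ~ tau^n."""
    dtau = tau[1] - tau[0]
    return np.sum(mass_formed_rate(t0 - tau, t_peak) * tau ** n) * dtau

for n in (-2, -1, 0, 1, 2):
    ratio = grb_rate_today(n, t_peak=2.0) / grb_rate_today(n, t_peak=5.0)
    print(f"n = {n:+d}  toy R(cluster)/R(field) = {ratio:.2f}")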