Short term X-ray spectral variability of the quasar PDS 456 observed in a low flux state
We present an analysis of the 2013 Suzaku campaign on the nearby luminous
quasar PDS 456, covering a total duration of ~1 Ms and a net exposure of 455
ks. During these observations, the X-ray flux was suppressed by a factor of >10
in the soft X-ray band when compared to other epochs. We investigated the
broadband continuum by constructing a spectral energy distribution, making use
of the optical/UV photometry and hard X-ray spectra from the later
XMM-Newton/NuSTAR campaign in 2014. The high energy part of this low flux state
cannot be accounted for by self-consistent accretion disc and corona models
without attenuation by absorbing gas, which partially covers a substantial
fraction of the line of sight towards the X-ray source. Two absorption layers
are required, of differing column densities,
with average covering factors of ~80% (with typical 5% variations) and 60%
(10-15%), respectively. In these observations PDS 456 displays significant
short term X-ray spectral variability, on timescales of ~100 ks, which can be
accounted for by variable covering of the absorbing gas. The partial covering
absorber prefers a substantial outflow velocity, at
the >99.9% confidence level, over the case of a static (zero-velocity) absorber. This is
consistent with the velocity of the highly ionised outflow responsible for the
blueshifted iron K absorption profile. We therefore suggest that the partial
covering clouds could be the denser, or clumpy part of an inhomogeneous
accretion disc wind. Finally we estimate the size-scale of the X-ray source
from its variability. The radial extent of the X-ray emitter is found to be of
the order of ~15-20 gravitational radii, although the hard X-ray (>2 keV) emission may
originate from a more compact or patchy corona of hot electrons, which is ~6-8
gravitational radii in size.
Comment: 38 pages, 13 figures, accepted for publication in MNRAS
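For reference, a variable partial covering absorber of the kind described in this abstract is conventionally modelled as an intrinsic continuum seen partly through absorbing gas; the notation below is generic (standard in X-ray spectral fitting), not taken from the paper itself:

```latex
F_{\rm obs}(E) \;=\; \Big[\, f_{\rm cov}\, e^{-\sigma(E)\, N_{\rm H}} \;+\; \big(1 - f_{\rm cov}\big) \Big]\, F_{\rm intr}(E),
```

where $f_{\rm cov}$ is the covering fraction, $N_{\rm H}$ the column density of the absorber, and $\sigma(E)$ the energy-dependent absorption cross-section; variations in $f_{\rm cov}$ on ~100 ks timescales then drive the observed spectral variability.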
How to reconstruct the geometry of a Middle Triassic feeding system: clues from clinopyroxene textures in lava flows from Cima Pape (Southern Alps, Italy)
No abstract available
Training Curricula for Open Domain Answer Re-Ranking
In precision-oriented tasks like answer ranking, it is more important to rank many relevant answers highly than to retrieve all relevant answers. It follows that a good ranking strategy would be to learn how to identify the easiest correct answers first (i.e., assign a high ranking score to answers that have characteristics that usually indicate relevance, and a low ranking score to those with characteristics that do not), before incorporating more complex logic to handle difficult cases (e.g., semantic matching or reasoning). In this work, we apply this idea to the training of neural answer rankers using curriculum learning. We propose several heuristics to estimate the difficulty of a given training sample. We show that the proposed heuristics can be used to build a training curriculum that down-weights difficult samples early in the training process. As the training process progresses, our approach gradually shifts to weighting all samples equally, regardless of difficulty. We present a comprehensive evaluation of our proposed idea on three answer ranking datasets. Results show that our approach improves the performance of two leading neural ranking architectures, namely BERT and ConvKNRM, using both pointwise and pairwise losses. When applied to a BERT-based ranker, our method yields up to a 4% improvement in MRR and a 9% improvement in P@1 (compared to the model trained without a curriculum). This results in models that can achieve comparable performance to more expensive state-of-the-art techniques.
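The schedule described above - down-weighting difficult samples early and annealing toward uniform weights - can be sketched as follows; the difficulty scores and the linear annealing schedule are illustrative assumptions, not the paper's exact heuristics:

```python
# Sketch of a difficulty-based training curriculum. Each sample is assumed
# to carry a precomputed heuristic difficulty in [0, 1] (1 = hardest).
# Early in training, hard samples receive a small loss weight; the weights
# are linearly annealed toward uniform (1.0) as training progresses.
# All names and the schedule itself are illustrative, not the paper's.

def curriculum_weight(difficulty: float, step: int, total_steps: int) -> float:
    """Loss weight for one sample at a given training step."""
    progress = min(step / total_steps, 1.0)  # 0 -> 1 over training
    # Start at (1 - difficulty), interpolate linearly toward 1.0.
    return (1.0 - progress) * (1.0 - difficulty) + progress

# A hard sample (difficulty=0.75) early vs. late in training:
print(curriculum_weight(0.75, step=0, total_steps=1000))     # 0.25 (down-weighted)
print(curriculum_weight(0.75, step=1000, total_steps=1000))  # 1.0  (uniform)
```

At step 0 a sample's weight is simply one minus its difficulty; by the end of training every sample, easy or hard, contributes equally to the loss.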
Expansion via Prediction of Importance with Contextualization
The identification of relevance with little textual context is a primary challenge in passage retrieval. We address this problem with a representation-based ranking approach that: (1) explicitly models the importance of each term using a contextualized language model; (2) performs passage expansion by propagating the importance to similar terms; and (3) grounds the representations in the lexicon, making them interpretable. Passage representations can be pre-computed at index time to reduce query-time latency. We call our approach EPIC (Expansion via Prediction of Importance with Contextualization). We show that EPIC significantly outperforms prior importance-modeling and document expansion approaches. We also observe that the performance is additive with the current leading first-stage retrieval methods, further narrowing the gap between inexpensive and cost-prohibitive passage ranking approaches. Specifically, EPIC achieves an MRR@10 of 0.304 on the MS-MARCO passage ranking dataset with 78ms average query latency on commodity hardware. We also find that the latency is further reduced to 68ms by pruning document representations, with virtually no difference in effectiveness.
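The representation-based scoring described above can be sketched as a sparse dot product over precomputed term weights; the toy weights and the expansion term below are illustrative stand-ins, not EPIC's actual model outputs (which come from a contextualized language model):

```python
# Minimal sketch of representation-based scoring in the spirit of EPIC:
# passages are precomputed into sparse term -> importance maps at index
# time, so query-time scoring is just a sparse dot product. The weights
# here are hand-picked stand-ins; in EPIC they are predicted by a
# contextualized language model, and expansion propagates importance
# to similar terms not present in the passage text.

from typing import Dict

def score(query_terms: Dict[str, float], passage_repr: Dict[str, float]) -> float:
    """Sparse dot product between query and precomputed passage representation."""
    return sum(w * passage_repr.get(t, 0.0) for t, w in query_terms.items())

# Passage representation, precomputed offline. Expansion has added
# "vehicle" - a term absent from the passage - with propagated importance.
passage = {"car": 0.75, "engine": 0.5, "vehicle": 0.25}
query = {"vehicle": 1.0, "engine": 1.0}

print(score(query, passage))  # 0.75
```

Because the passage side is fixed at index time, the only query-time work is building the (short) query term map and taking the dot product, which is what keeps latency low.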
Efficient Document Re-Ranking for Transformers by Precomputing Term Representations
Deep pretrained transformer networks are effective at various ranking tasks, such as question answering and ad-hoc document ranking. However, their computational expense renders them cost-prohibitive in practice. Our proposed approach, called PreTTR (Precomputing Transformer Term Representations), considerably reduces the query-time latency of deep transformer networks (up to a 42x speedup on web document ranking), making these networks more practical to use in a real-time ranking scenario. Specifically, we precompute part of the document term representations at indexing time (without a query), and merge them with the query representation at query time to compute the final ranking score. Due to the large size of the token representations, we also propose an effective approach to reduce the storage requirement by training a compression layer to match attention scores. Our compression technique reduces the required storage by up to 95%, and it can be applied without a substantial degradation in ranking performance.
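The precompute-and-merge idea can be sketched as follows; the toy "encoder" stands in for the query-independent lower transformer layers, and all names and the merge rule are illustrative assumptions rather than PreTTR's actual implementation:

```python
# Conceptual sketch of the PreTTR idea: run the expensive, query-independent
# document-side computation once at indexing time, cache the term
# representations, and at query time combine the cached document vectors
# with freshly computed query vectors into a score. The "encoder" below is
# a toy stand-in for the lower transformer layers; the merge (a max dot
# product) is likewise illustrative, not the paper's scoring head.

from typing import Dict, List

def encode(tokens: List[str]) -> List[float]:
    # Toy per-token "embedding" (hash-based), standing in for the
    # query-independent transformer layers.
    return [float(hash(t) % 100) / 100.0 for t in tokens]

# Index time: precompute and store document term representations.
doc_cache: Dict[str, List[float]] = {}

def index_document(doc_id: str, tokens: List[str]) -> None:
    doc_cache[doc_id] = encode(tokens)

# Query time: only the short query is encoded; the cached document
# representation is merged with it to produce the ranking score.
def rank_score(query_tokens: List[str], doc_id: str) -> float:
    q = encode(query_tokens)
    d = doc_cache[doc_id]
    return max(qi * di for qi in q for di in d)
```

The compression layer mentioned in the abstract would sit between `encode` and the cache, shrinking the stored vectors before they are written to the index.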
A deep X-ray view of the bare AGN Ark 120. III. X-ray timing analysis and multiwavelength variability
We present the spectral/timing properties of the bare Seyfert galaxy Ark 120 through a deep ~420ks XMM-Newton campaign plus recent NuSTAR observations and a ~6-month Swift monitoring campaign. We investigate the spectral decomposition through fractional rms, covariance and difference spectra, finding the mid- to long-timescale (~day-year) variability to be dominated by a relatively smooth, steep component, peaking in the soft X-ray band. Additionally, we find evidence for variable FeK emission red-ward of the FeK-alpha core on long timescales, consistent with previous findings. We detect a clearly-defined power spectrum which we model with a power law with a slope of alpha ~ 1.9. By extending the power spectrum to lower frequencies through the inclusion of Swift and RXTE data, we find tentative evidence of a high-frequency break, consistent with existing scaling relations. We also explore frequency-dependent Fourier time lags, detecting a negative ('soft') lag for the first time in this source with the 0.3-1 keV band lagging behind the 1-4 keV band with a time delay of ~900s. Finally, we analyze the variability in the optical and UV bands using the Optical/UV Monitor on-board XMM-Newton and the UVOT on-board Swift and search for time-dependent correlations between the optical/UV/X-ray bands. We find tentative evidence for the U-band emission lagging behind the X-rays with a time delay of 2.4 +/- 1.8 days, which we discuss in the context of disc reprocessing.
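The frequency-dependent Fourier time lags mentioned above are conventionally defined from the cross spectrum of the two band light curves; in generic notation (not taken from the paper itself):

```latex
C_{XY}(f) \;=\; \big\langle X^{*}(f)\, Y(f) \big\rangle, \qquad
\tau(f) \;=\; \frac{\arg C_{XY}(f)}{2\pi f},
```

where $X(f)$ and $Y(f)$ are the Fourier transforms of the hard- and soft-band light curves and the average is taken over light-curve segments and frequency bins; with a suitable sign convention, a negative $\tau(f)$ corresponds to the softer band lagging the harder one, as in the ~900s soft lag reported here.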
Numerical solution of the two-dimensional Helmholtz equation with variable coefficients by the radial integration boundary integral and integro-differential equation methods
This is the author's accepted manuscript. The final published article is available from the link below. Copyright @ 2012 Taylor & Francis. This paper presents new formulations of the boundary-domain integral equation (BDIE) and the boundary-domain integro-differential equation (BDIDE) methods for the numerical solution of the two-dimensional Helmholtz equation with variable coefficients. When the material parameters are variable (with constant or variable wave number), a parametrix is adopted to reduce the Helmholtz equation to a BDIE or BDIDE. However, when material parameters are constant (with variable wave number), the standard fundamental solution for the Laplace equation is used in the formulation. The radial integration method is then employed to convert the domain integrals arising in both BDIE and BDIDE methods into equivalent boundary integrals. The resulting formulations lead to pure boundary integral and integro-differential equations with no domain integrals. Numerical examples are presented for several simple problems, for which exact solutions are available, to demonstrate the efficiency of the proposed methods.
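For context, the variable-coefficient Helmholtz equation addressed by such BDIE/BDIDE formulations typically takes the form (the symbols here are generic notation, not necessarily the paper's):

```latex
\nabla \cdot \big( a(\mathbf{x})\, \nabla u(\mathbf{x}) \big) \;+\; k^{2}(\mathbf{x})\, u(\mathbf{x}) \;=\; f(\mathbf{x}), \qquad \mathbf{x} \in \Omega,
```

where $a(\mathbf{x})$ is the variable material parameter and $k(\mathbf{x})$ the (possibly variable) wave number. When $a$ is variable no simple fundamental solution exists, so a parametrix is used instead; when $a$ is constant the equation reduces to a Laplace-type operator plus lower-order terms, which is why the Laplace fundamental solution suffices in that case, as the abstract notes.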
Minimizing power consumption in virtualized cellular networks
Cellular network nodes should be dynamically switched on/off based on the load requirements of the network, to save power and minimize inter-cell interference. This must be done taking into account global interference effects, which requires a centralized approach. In this paper, we present an architecture, realized within the Flex5GWare EU project, that manages a large-scale cellular network, switching nodes on and off based on load requirements and context data. We describe the architectural framework and the optimization model that is used to decide the activity state of the nodes. We present simulation results showing that the framework adapts to the minimum power level based on the cell loads.
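A minimal sketch of the centralized on/off decision, assuming a simple greedy handover policy rather than the Flex5GWare optimization model itself (cell names, capacities and the neighbour map are all illustrative):

```python
# Illustrative sketch (NOT the Flex5GWare optimization model) of the
# centralized decision: switch off lightly loaded cells whenever their
# traffic can be absorbed by a neighbouring cell with spare capacity.

def plan_active_cells(loads, capacity, neighbours):
    """loads: {cell: load}; neighbours: {cell: [cells that can cover it]}.
    Returns (set of cells kept on, reassigned per-cell loads)."""
    load = dict(loads)
    active = set(load)
    # Consider the most lightly loaded cells for switch-off first.
    for cell in sorted(load, key=load.get):
        for nb in neighbours.get(cell, []):
            if nb in active and nb != cell and load[nb] + load[cell] <= capacity:
                load[nb] += load[cell]  # hand over this cell's traffic
                load[cell] = 0.0
                active.discard(cell)    # power the cell down
                break
    return active, load

# Three cells, one nearly idle: it is switched off and its load handed over.
active, load = plan_active_cells(
    {"A": 0.7, "B": 0.1, "C": 0.5},
    capacity=1.0,
    neighbours={"B": ["A", "C"], "C": ["A"]},
)
print(sorted(active))  # ['A', 'C']
```

A real deployment would replace the greedy pass with the paper's optimization model, which also accounts for the global interference effects that motivate the centralized architecture.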
The epigenetics of inflammaging: The contribution of age-related heterochromatin loss and locus-specific remodelling and the modulation by environmental stimuli
A growing body of evidence indicates that inflammaging - the chronic, low-grade inflammation state characteristic of the elderly - is the result of genetic as well as environmental or stochastic factors. Some of these, such as the accumulation of senescent cells, persist during aging or accompany its progression, and seem to be sufficient to initiate the aging process and to fuel it. Others, like exposure to environmental compounds or infections, are temporary and resolve within a (relatively) short time. In both cases, however, a cellular memory of the event can be established by means of epigenetic modulation of the genome. In this review we will specifically discuss the relationship between epigenetics and inflammaging. In particular, we will show how age-associated epigenetic modifications concerned with heterochromatin loss and gene-specific remodelling can promote inflammaging. Furthermore, we will recall how exposure to specific nutritional, environmental and microbial stimuli can affect the rate of inflammaging through epigenetic mechanisms, also touching on the recent insights offered by the concept of trained immunity.