
    An Advanced Leakage Scheme for Neutrino Treatment in Astrophysical Simulations

    We present an Advanced Spectral Leakage (ASL) scheme to model neutrinos in the context of core-collapse supernovae (CCSNe) and compact binary mergers. Based on previous gray leakage schemes, the ASL scheme computes the neutrino cooling rates by interpolating local production and diffusion rates (relevant in optically thin and thick regimes, respectively) separately for discretized values of the neutrino energy. Trapped neutrino components are also modeled, based on equilibrium and timescale arguments. The better accuracy achieved by the spectral treatment allows a more reliable computation of neutrino heating rates in optically thin conditions. The scheme has been calibrated and tested against Boltzmann transport in the context of Newtonian spherically symmetric models of CCSNe. ASL shows very good qualitative and partial quantitative agreement for key quantities from collapse to a few hundred milliseconds after core bounce. We demonstrate the adaptability and flexibility of the ASL scheme by coupling it to an axisymmetric Eulerian code and to a three-dimensional smoothed particle hydrodynamics code to simulate core collapse. The neutrino treatment presented here is therefore ideal for large parameter-space explorations, parametric studies, high-resolution tests, code development, and long-term modeling of asymmetric configurations, where more detailed neutrino treatments are not available or are currently too expensive computationally.
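
    For orientation, leakage schemes combine a local neutrino production timescale with a diffusion timescale into a single effective loss rate. A common gray-scheme choice, quoted here only as a generic sketch (the exact smoothing function used by ASL is specified in the paper), is the interpolation

        % Generic leakage interpolation between production and diffusion regimes.
        % Optically thin (t_diff << t_prod): R_eff -> R_prod.
        % Optically thick (t_diff >> t_prod): R_eff -> R_prod * t_prod / t_diff.
        R_{\mathrm{eff}} = \frac{R_{\mathrm{prod}}}{1 + t_{\mathrm{diff}}/t_{\mathrm{prod}}}

    The spectral aspect of ASL then amounts to evaluating such an interpolation separately in each discretized neutrino energy bin rather than once for the gray (energy-integrated) rates.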

    Long-term Intrinsic Rhythm Evaluation in Dogs with Atrioventricular Block

    Background: Atrioventricular block (AVB) is a conduction abnormality along the atrioventricular node that, depending on etiology, may lead to different outcomes. Objectives: To evaluate variations of intrinsic rhythm (IR) in dogs that underwent pacemaker implantation (PMI). Animals: Medical records of 92 dogs affected by 3rd-degree atrioventricular block (3AVB), advanced 2nd-degree AVB (2AVB), paroxysmal 3AVB, 2:1 2AVB, or 3AVB with atrial fibrillation (AF) were retrospectively reviewed. Methods: The patient IR was documented with telemetry at 1 (95% CI, 1-2), 33 (95% CI, 28-35), 105 (95% CI, 98-156), and 275 (95% CI, 221-380) days after PMI. According to the AVB grade at the different examinations, AVB was classified as progressed, regressed, or unchanged. Results: In 48 dogs, 3AVB remained unchanged, whereas in 7 it regressed. Eight cases of 2AVB progressed, 3 regressed, and 2 remained unchanged. Eight cases of paroxysmal 3AVB progressed and 3 remained unchanged. Four dogs affected by 2:1 2AVB progressed, 2 regressed, and 1 remained unchanged. All cases of 3AVB with AF remained unchanged. Regression occurred within 30 days after PMI, whereas progression was documented at any time. Variations in IR were associated with the type of AVB (P < .03) and time of follow-up (P < .0001). Conclusions and clinical importance: The degree of AVB assessed at the time of PMI should not be considered definitive, because more than one-third of the cases in this study either progressed or regressed. Additional studies are necessary to elucidate possible causes of transient AVB in dogs.

    Lavagnone (Desenzano del Garda): new excavations and palaeoecology of a Bronze Age pile dwelling in northern Italy

    Lavagnone is a lacustrine basin, today turned into a peat bog, which was continuously settled for about 1,000 years during the Early, Middle, and Late Bronze Ages. Since 1991, research has been carried out under the supervision of R. C. de Marinis (Università degli Studi di Milano) in four different areas of the basin in order to reconstruct the features of the settlement and the changes that occurred over the course of time. Palynological and palaeobotanical analyses, conducted since 2002 in cooperation with CNR-IDPA (Milano), focus on determining the palaeoenvironmental signature, both past and present, of the anthropogenic exploitation of the basin.

    Interpretable Ranking Using LambdaMART (Abstract)

    In this talk we present the main results of a short paper appearing at SIGIR 2022 [1]. Interpretable Learning to Rank (LtR) is an emerging field within the research area of explainable AI, aiming at developing intelligible and accurate predictive models. While most previous research efforts focus on creating post-hoc explanations, in this talk we investigate how to train effective and intrinsically interpretable ranking models. Developing these models is particularly challenging, and it also requires finding a trade-off between ranking quality and model complexity. State-of-the-art rankers, made of either large ensembles of trees or several neural layers, in fact exploit an unlimited number of feature interactions, making them black boxes. Previous approaches to intrinsically interpretable ranking models, such as Neural RankGAM [2], address this issue by avoiding interactions between features, thus incurring a significant performance drop with respect to full-complexity models. Conversely, we propose Interpretable LambdaMART, an interpretable LtR solution based on LambdaMART that is able to train effective and intelligible models by exploiting a limited and controlled number of pairwise feature interactions. Exhaustive and reproducible experiments conducted on three publicly available LtR datasets show that our approach outperforms the current state-of-the-art solution for interpretable ranking by a large margin, with an nDCG gain of up to 8%.
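
    As a rough illustration of a controlled interaction budget (not the authors' implementation), a LambdaMART-style ranker restricted to a fixed set of pairwise feature interactions can be sketched with LightGBM's lambdarank objective and interaction constraints; all data and parameter values below are made up.

        # Sketch: a LambdaMART-style ranker restricted to a controlled set of
        # pairwise feature interactions. Each tree may only combine features
        # that appear together in one of the allowed groups.
        import numpy as np
        import lightgbm as lgb

        rng = np.random.default_rng(0)
        n_queries, docs_per_query, n_features = 100, 10, 6
        X = rng.normal(size=(n_queries * docs_per_query, n_features))
        y = rng.integers(0, 3, size=n_queries * docs_per_query)  # graded relevance
        group = [docs_per_query] * n_queries                     # docs per query

        # Allow only the pairwise interactions (0,1), (2,3), (4,5).
        allowed_pairs = [[0, 1], [2, 3], [4, 5]]

        ranker = lgb.LGBMRanker(
            objective="lambdarank",
            n_estimators=200,
            num_leaves=8,
            interaction_constraints=allowed_pairs,
        )
        ranker.fit(X, y, group=group)
        scores = ranker.predict(X[:docs_per_query])  # scores for one query's docs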

    r-process nucleosynthesis in the early Universe through fast mergers of compact binaries in triple systems

    Surface abundance observations of halo stars hint at the occurrence of r-process nucleosynthesis at low metallicity ([Fe/H] < -3), possibly within the first 10^8 yr after the formation of the first stars. Possible loci of early-Universe r-process nucleosynthesis are the ejecta of either black hole-neutron star or neutron star-neutron star binary mergers. Here we study the effect of the inclination-eccentricity oscillations raised by a tertiary (e.g. a star) on the coalescence time scale of the inner compact object binaries. Our results are highly sensitive to the assumed initial distribution of the inner binary semi-major axes. Distributions with mostly wide compact object binaries are most affected by the third object, resulting in a strong increase (by more than a factor of 2) in the fraction of fast coalescences. If instead the distribution preferentially populates very close compact binaries, general relativistic precession prevents the third body from increasing the inner binary eccentricity to very high values. In this last case, the fraction of coalescing binaries is increased much less by tertiaries, but the fraction of binaries that would coalesce within 10^8 yr even without a third object is already high. Our results provide additional support to the compact object merger scenario for r-process nucleosynthesis. Comment: 20 pages, 9 figures, accepted for publication in PAS
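
    For background, the inclination-eccentricity oscillations invoked here are the Lidov-Kozai oscillations; at quadrupole order and in the test-particle limit they obey the standard relations (textbook results, not findings of this paper)

        % Maximum eccentricity of the inner binary for initial mutual
        % inclination i_0, and the characteristic oscillation timescale.
        e_{\max} = \sqrt{1 - \tfrac{5}{3}\cos^2 i_0}, \qquad
        t_{\mathrm{LK}} \sim \frac{P_{\mathrm{out}}^2}{P_{\mathrm{in}}}\,
        \frac{m_1 + m_2 + m_3}{m_3}\,\bigl(1 - e_{\mathrm{out}}^2\bigr)^{3/2}

    The suppression by general relativistic precession mentioned above arises when the 1PN apsidal precession period of the inner binary is shorter than t_LK, so the perturbation from the third body averages out before extreme eccentricities can build up.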

    Expansion via Prediction of Importance with Contextualization

    The identification of relevance with little textual context is a primary challenge in passage retrieval. We address this problem with a representation-based ranking approach that: (1) explicitly models the importance of each term using a contextualized language model; (2) performs passage expansion by propagating the importance to similar terms; and (3) grounds the representations in the lexicon, making them interpretable. Passage representations can be pre-computed at index time to reduce query-time latency. We call our approach EPIC (Expansion via Prediction of Importance with Contextualization). We show that EPIC significantly outperforms prior importance-modeling and document expansion approaches. We also observe that the performance is additive with the current leading first-stage retrieval methods, further narrowing the gap between inexpensive and cost-prohibitive passage ranking approaches. Specifically, EPIC achieves an MRR@10 of 0.304 on the MS-MARCO passage ranking dataset with 78 ms average query latency on commodity hardware. We also find that the latency is further reduced to 68 ms by pruning document representations, with virtually no difference in effectiveness.
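
    A minimal sketch of the core mechanism, assuming a BERT encoder and untrained projection heads (importance_proj and vocab_proj are illustrative names, not EPIC's actual architecture, which is specified in the paper):

        # Score each passage term's importance with a contextualized LM and
        # expand the passage over the vocabulary; the pooled vector can be
        # pre-computed and stored at index time.
        import torch
        from transformers import AutoTokenizer, AutoModel

        tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
        encoder = AutoModel.from_pretrained("bert-base-uncased")
        importance_proj = torch.nn.Linear(encoder.config.hidden_size, 1)
        vocab_proj = torch.nn.Linear(encoder.config.hidden_size, tokenizer.vocab_size)

        enc = tokenizer("the manhattan project developed nuclear weapons",
                        return_tensors="pt")
        with torch.no_grad():
            hidden = encoder(**enc).last_hidden_state               # (1, seq, h)
            # Non-negative importance score per term.
            imp = torch.nn.functional.softplus(importance_proj(hidden)).squeeze(-1)
            # Expansion: each term distributes weight over similar vocabulary
            # terms; pooling yields one lexicon-grounded passage vector.
            vocab_scores = torch.log1p(torch.relu(vocab_proj(hidden)))  # (1, seq, V)
            passage_vec = (imp.unsqueeze(-1) * vocab_scores).max(dim=1).values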

    Learning Early Exit Strategies for Additive Ranking Ensembles

    Modern search engine ranking pipelines are commonly based on large machine-learned ensembles of regression trees. We propose LEAR, a novel learned technique aimed at reducing the average number of trees traversed by documents to accumulate the scores, thus reducing the overall query response time. LEAR exploits a classifier that predicts whether a document can exit the ensemble early because it is unlikely to be ranked among the final top-k results. The early exit decision occurs at a sentinel point, i.e., after having evaluated a limited number of trees, and the partial scores are exploited to filter out non-promising documents. We evaluate LEAR by deploying it in a production-like setting, adopting a state-of-the-art algorithm for ensemble traversal. We provide a comprehensive experimental evaluation on two public datasets. The experiments show that LEAR significantly improves the efficiency of query processing without hindering ranking quality. In detail, on the first dataset LEAR achieves a speedup of 3x without any loss in NDCG@10, while on the second dataset the speedup is larger than 5x with a negligible NDCG@10 loss (< 0.05%).
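
    A minimal sketch of the early-exit mechanism (the fixed threshold below stands in for LEAR's learned classifier, and trees are plain callables rather than an optimized traversal algorithm):

        # After a sentinel number of trees, drop documents whose partial score
        # makes a final top-k placement unlikely.
        import numpy as np

        def score_with_early_exit(trees, docs, sentinel, threshold, k):
            """trees: list of callables mapping a doc to a partial score."""
            partial = np.zeros(len(docs))
            alive = np.arange(len(docs))
            for t, tree in enumerate(trees):
                partial[alive] += np.array([tree(docs[i]) for i in alive])
                if t + 1 == sentinel:
                    order = np.argsort(-partial[alive])
                    keep = partial[alive] >= threshold
                    keep[order[:k]] = True  # always retain the current top-k
                    alive = alive[keep]
            return partial  # exited docs keep their sentinel-time partial score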

    Efficient Document Re-Ranking for Transformers by Precomputing Term Representations

    Deep pretrained transformer networks are effective at various ranking tasks, such as question answering and ad-hoc document ranking. However, their computational expense makes them cost-prohibitive in practice. Our proposed approach, called PreTTR (Precomputing Transformer Term Representations), considerably reduces the query-time latency of deep transformer networks (up to a 42x speedup on web document ranking), making these networks more practical to use in a real-time ranking scenario. Specifically, we precompute part of the document term representations at indexing time (without a query), and merge them with the query representation at query time to compute the final ranking score. Because of the large size of the token representations, we also propose an effective approach to reduce the storage requirement by training a compression layer to match attention scores. Our compression technique reduces the required storage by up to 95%, and it can be applied without a substantial degradation in ranking performance.
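
    A minimal sketch of the precomputation idea, assuming a HuggingFace BERT encoder; this simplification omits PreTTR's compression layer and does not adjust position embeddings for the concatenation, so it only illustrates the layer-splitting mechanism:

        # Run document tokens through the first l transformer layers at index
        # time, store the hidden states, and at query time run only the
        # remaining layers over the concatenated [query; document] states.
        import torch
        from transformers import AutoTokenizer, AutoModel

        l = 10  # layers computed independently for query and document
        tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
        model = AutoModel.from_pretrained("bert-base-uncased")

        def states_after_layer_l(text):
            enc = tokenizer(text, return_tensors="pt")
            with torch.no_grad():
                out = model(**enc, output_hidden_states=True)
            return out.hidden_states[l], enc["attention_mask"]

        # Index time: precompute and store the document's layer-l states.
        doc_states, doc_mask = states_after_layer_l("the cat sat on the mat")

        # Query time: compute query states, then run the joint layers l..L.
        q_states, q_mask = states_after_layer_l("where did the cat sit")
        states = torch.cat([q_states, doc_states], dim=1)
        mask = torch.cat([q_mask, doc_mask], dim=1)
        ext = (1.0 - mask[:, None, None, :].to(states.dtype)) * -1e9
        with torch.no_grad():
            for layer in model.encoder.layer[l:]:
                states = layer(states, attention_mask=ext)[0]
        # A small scoring head over states[:, 0] would produce the final score.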

    Quality versus efficiency in document scoring with learning-to-rank models

    Learning-to-Rank (LtR) techniques leverage machine learning algorithms and large amounts of training data to induce high-quality ranking functions. Given a set of documents and a user query, these functions are able to precisely predict a score for each of the documents, which is in turn exploited to rank them effectively. Although the scoring efficiency of LtR models is critical in several applications (e.g., it directly impacts the response time and throughput of Web query processing), it has received relatively little attention so far. The goal of this work is to experimentally investigate the scoring efficiency of LtR models along with their ranking quality. Specifically, we show that machine-learned ranking models exhibit a quality versus efficiency trade-off. For example, each family of LtR algorithms has tuning parameters that can influence both effectiveness and efficiency, where higher ranking quality is generally obtained with more complex and expensive models. Moreover, LtR algorithms that learn complex models, such as those based on forests of regression trees, are generally more expensive and more effective than algorithms that induce simpler models like linear combinations of features. We extensively analyze the quality versus efficiency trade-off of a wide spectrum of state-of-the-art LtR algorithms, and we propose a sound methodology to devise the most effective ranker given a time budget. To guarantee reproducibility, we use publicly available datasets, and we contribute an open source C++ framework providing optimized, multi-threaded implementations of the most effective tree-based learners: Gradient Boosted Regression Trees (GBRT), Lambda-MART (λ-MART), and the first public-domain implementation of Oblivious Lambda-MART, an algorithm that induces forests of oblivious regression trees. We investigate how the different training parameters impact the quality versus efficiency trade-off, and provide a thorough comparison of several algorithms in the quality-cost space. The experiments conducted show that there is no overall best algorithm; the optimal choice depends on the time budget.
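
    In its simplest form, such a methodology reduces to measuring a (quality, scoring cost) pair for each trained model and picking the most effective model that fits the time budget; the numbers below are made up for illustration.

        # Pick the most effective ranker whose per-document scoring cost
        # fits within the given time budget.
        candidates = [
            # (model, NDCG@10, scoring cost in microseconds per document)
            ("linear",        0.701,   1.5),
            ("gbrt-small",    0.742,  18.0),
            ("lambdamart-xl", 0.768, 240.0),
        ]

        def best_within_budget(models, budget_us):
            feasible = [m for m in models if m[2] <= budget_us]
            return max(feasible, key=lambda m: m[1]) if feasible else None

        print(best_within_budget(candidates, budget_us=50.0))  # gbrt-small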

    Training Curricula for Open Domain Answer Re-Ranking

    In precision-oriented tasks like answer ranking, it is more important to rank many relevant answers highly than to retrieve all relevant answers. It follows that a good ranking strategy is to learn how to identify the easiest correct answers first (i.e., assign a high ranking score to answers that have characteristics that usually indicate relevance, and a low ranking score to those with characteristics that do not), before incorporating more complex logic to handle difficult cases (e.g., semantic matching or reasoning). In this work, we apply this idea to the training of neural answer rankers using curriculum learning. We propose several heuristics to estimate the difficulty of a given training sample. We show that the proposed heuristics can be used to build a training curriculum that down-weights difficult samples early in the training process. As training progresses, our approach gradually shifts to weighting all samples equally, regardless of difficulty. We present a comprehensive evaluation of our proposed idea on three answer ranking datasets. Results show that our approach leads to superior performance for two leading neural ranking architectures, namely BERT and ConvKNRM, using both pointwise and pairwise losses. When applied to a BERT-based ranker, our method yields up to a 4% improvement in MRR and a 9% improvement in P@1 (compared to the model trained without a curriculum). This results in models that can achieve performance comparable to more expensive state-of-the-art techniques.
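
    A minimal sketch of such a curriculum weight, assuming a difficulty heuristic normalized to [0, 1] (the paper's actual heuristics and schedule differ):

        # Early in training, down-weight samples the heuristic marks as hard;
        # anneal linearly toward uniform weights as training progresses.
        def curriculum_weight(difficulty, step, curriculum_steps):
            progress = min(step / curriculum_steps, 1.0)
            easy_weight = 1.0 - difficulty       # easy samples weighted highly
            return (1.0 - progress) * easy_weight + progress * 1.0

        # A hard sample (difficulty 0.9) at the start vs. end of the curriculum:
        print(curriculum_weight(0.9, step=0, curriculum_steps=1000))     # 0.1
        print(curriculum_weight(0.9, step=1000, curriculum_steps=1000))  # 1.0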