
    How Different are Pre-trained Transformers for Text Ranking?

    In recent years, large pre-trained transformers have led to substantial gains in performance over traditional retrieval models and feedback approaches. However, these results are primarily based on the MS MARCO/TREC Deep Learning Track, a very particular setup, and our understanding of why and how these models work better is fragmented at best. We analyze effective BERT-based cross-encoders versus traditional BM25 ranking for the passage retrieval task, where the largest gains have been observed, and investigate two main questions. On the one hand, what is similar? To what extent does the neural ranker already encompass the capacity of traditional rankers? Is the gain in performance due to a better ranking of the same documents (prioritizing precision)? On the other hand, what is different? Can it effectively retrieve documents missed by traditional systems (prioritizing recall)? We discover substantial differences in the notion of relevance, identifying strengths and weaknesses of BERT that may inspire research for future improvement. Our results contribute to our understanding of (black-box) neural rankers relative to (well-understood) traditional rankers and help explain the particular experimental setting of MS MARCO-based test collections.
    Comment: ECIR 202
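    The compared setup can be pictured as a two-stage pipeline: BM25 produces a candidate list, and a BERT cross-encoder re-scores each query-passage pair. The sketch below is a minimal illustration of that pipeline in Python, assuming the rank_bm25 and sentence-transformers packages and the publicly available "cross-encoder/ms-marco-MiniLM-L-6-v2" checkpoint; it is not the authors' evaluation code, and the toy passages are made up.

    # Minimal two-stage sketch: BM25 candidate retrieval, then cross-encoder re-ranking.
    # Assumes `pip install rank_bm25 sentence-transformers`; the model name below is one
    # public MS MARCO cross-encoder checkpoint, used here purely for illustration.
    from rank_bm25 import BM25Okapi
    from sentence_transformers import CrossEncoder

    passages = [
        "BM25 is a lexical ranking function based on term and document frequencies.",
        "Cross-encoders feed the query and the passage jointly through a transformer.",
        "Population density methods simulate large groups of spiking neurons.",
    ]
    query = "how do transformer rankers differ from BM25"

    # Stage 1: traditional lexical retrieval over whitespace-tokenized passages.
    bm25 = BM25Okapi([p.lower().split() for p in passages])
    bm25_scores = bm25.get_scores(query.lower().split())
    candidates = sorted(range(len(passages)), key=lambda i: -bm25_scores[i])[:2]

    # Stage 2: neural re-ranking of the BM25 candidates with a BERT cross-encoder.
    reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
    ce_scores = reranker.predict([(query, passages[i]) for i in candidates])
    reranked = [i for _, i in sorted(zip(ce_scores, candidates), reverse=True)]

    for rank, i in enumerate(reranked, start=1):
        ce = float(ce_scores[candidates.index(i)])
        print(rank, f"bm25={bm25_scores[i]:.2f}", f"cross-encoder={ce:.2f}", passages[i])

    The two questions in the abstract map onto these stages: re-ordering the BM25 candidates probes whether the neural ranker mainly improves precision, while scoring passages that BM25 never returned probes whether it improves recall.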

    pMIIND-an MPI-based population density simulation framework

    MIIND [1] is the first publicly available implementation of population density algorithms. Like neural mass models, they model at the population level rather than that of individual neurons, but unlike neural mass models, they consider the full neuronal state space. The central concept is a population density: a probability distribution function that represents the probability of a neuron being in a certain part of state space. Neurons move through state space by their own intrinsic dynamics or driven by synaptic input. When individual spikes do not matter and only population-averaged quantities are considered, these methods outperform direct simulations using neuron point models by a factor of 10 or more, whilst (at the population level) producing identical results to simulations of spiking neurons. This is in general not true for neural mass models. Population density methods also relate closely to analytic evaluations of population dynamics.
    The evolution of the density function is given by a partial differential equation (PDE). In [3] a generic method was presented for solving this equation efficiently, both for small synaptic efficacies (the diffusion limit, where the PDE becomes a Fokker-Planck equation) and for large ones (finite jumps). We demonstrated that for leaky-integrate-and-fire (LIF) neurons this method reproduces analytic results [1] and needs of the order of 0.2 s to model 1 s of simulation time of an infinitely large population of spiking LIF neurons. We have now developed this method to apply to any 1D neuron point model [3], not just LIF neurons, and demonstrated the technique on quadratic-integrate-and-fire neurons. We are therefore in the position to model large heterogeneous networks of spiking neurons very efficiently.
    A potential bottleneck is MIIND's serial simulation loop. We developed an MPI implementation of MIIND's central simulation loop starting from a fresh code base and addressed serialization, which is now done at the level of individual cores. The central assumption in the setup is that firing rates are communicated, not individual spikes, so bandwidth requirements are low. Latency is potentially a problem, but with the use of latency-hiding techniques good scalability for up to 64 cores has been achieved on dedicated clusters. The scalability was verified with a simple model of cortical waves in a hexagonal network of populations with balanced excitation-inhibition. pMIIND is available on SourceForge, through its git repository: git://http://miind.sourceforge.net. A CMake-based install procedure is provided. Since pMIIND is set up as a C++ framework, it is possible to define one's own algorithms and still take advantage of the MPI-based simulation loop.
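    The core density update (drift under the neuron's intrinsic dynamics, finite synaptic jumps, and threshold crossings converted into a population firing rate) can be illustrated with a toy discretization. The Python/NumPy sketch below is a deliberately simplified illustration of that idea for LIF neurons; it is not the MIIND or pMIIND numerics, and all parameter values are made up for the example.

    # Toy 1D population density for leaky-integrate-and-fire (LIF) neurons:
    # a probability mass over membrane-potential bins, evolved by leak drift and
    # finite synaptic jumps; threshold crossings yield a population firing rate.
    # Illustrative only; not the MIIND/pMIIND numerics. Parameters are hypothetical.
    import numpy as np

    tau, nu_in, h = 20e-3, 800.0, 0.05        # membrane time constant [s], input rate [Hz], jump size
    v_reset, v_th = 0.0, 1.0                  # reset and threshold potential (dimensionless)
    dt, n_bins = 1e-4, 200
    v = np.linspace(v_reset, v_th, n_bins)    # bin centres
    dv = v[1] - v[0]
    rho = np.zeros(n_bins)
    rho[0] = 1.0                              # all probability mass starts at reset

    def step(rho):
        # 1) Leak drift: every neuron relaxes towards rest, so mass moves to lower bins.
        idx = np.clip(np.round(v * (1.0 - dt / tau) / dv).astype(int), 0, n_bins - 1)
        drifted = np.zeros(n_bins)
        np.add.at(drifted, idx, rho)
        # 2) Finite synaptic jumps: with probability p a neuron jumps up by `jump` bins.
        p = nu_in * dt
        jump = int(round(h / dv))
        shifted = np.zeros(n_bins)
        shifted[jump:] = drifted[:n_bins - jump]
        fired = drifted[n_bins - jump:].sum()  # mass pushed across threshold
        new_rho = (1.0 - p) * drifted + p * shifted
        new_rho[0] += p * fired                # fired mass re-enters at reset
        return new_rho, p * fired / dt         # updated density and population rate [Hz]

    for _ in range(int(0.2 / dt)):             # evolve the density for 200 ms
        rho, rate = step(rho)
    print(f"population firing rate after 200 ms: {rate:.1f} Hz")

    In a pMIIND-style parallel run, each population would evolve its own density like this locally and exchange only scalar firing rates with other MPI ranks per time step, which is why bandwidth requirements stay low.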

    Parsimonious language models for a terabyte of text
