
    Acceleration of ListNet for ranking using reconfigurable architecture

    Document ranking orders query results by relevance using ranking models. ListNet is a well-known approach for constructing and training learning-to-rank models. Compared with traditional learning approaches, ListNet delivers better accuracy, but it is computationally too expensive for learning models from large data sets, owing to the large number of permutations and documents involved in computing the gradients. The long training time currently limits the practicality of ListNet in ranking applications such as breaking-news search and stock prediction, and the situation worsens as data-set sizes grow. To tackle the challenge of long training time, this thesis optimises the ListNet algorithm and designs hardware accelerators for ListNet training using Field Programmable Gate Arrays (FPGAs), making the algorithm more practical for real-world applications. The contributions of this thesis are: 1) A novel computation method for the ListNet ranking algorithm, which exposes more fine-grained parallelism for FPGA implementation. 2) A weighted sampling method that takes ranking positions into account, together with an effective quantisation method for FPGA devices; the proposed design achieves a 4.42x speedup over a GPU implementation while preserving accuracy. 3) A fully reconfigurable architecture for ListNet training using multiple bitstream kernels, which achieves higher model accuracy than pure fixed-point training and better throughput than pure floating-point training. By applying these techniques, this thesis accelerates the ListNet ranking algorithm on FPGAs, achieving significant speed improvements over CPU and GPU implementations.
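    For context on the cost the thesis targets: ListNet's exact loss ranges over all n! orderings of the documents (the Plackett-Luce permutation probability), which is why gradient computation is expensive, while practical implementations use the top-1 form, which reduces each query to a cross-entropy between two softmax distributions. The following is a minimal NumPy sketch of that standard top-1 ListNet loss and gradient; it is an illustration, not the thesis's FPGA design, and the function and variable names are assumptions of this sketch.

```python
import numpy as np

def listnet_top1_loss_and_grad(scores, labels):
    """Top-1 ListNet loss: cross-entropy between the softmax of the
    human relevance labels and the softmax of the model scores.
    The exact loss sums over all n! permutations (Plackett-Luce);
    the top-1 form reduces that to one softmax pair per query."""
    p_label = np.exp(labels - labels.max())
    p_label /= p_label.sum()            # target top-1 distribution
    p_score = np.exp(scores - scores.max())
    p_score /= p_score.sum()            # model top-1 distribution
    loss = -np.sum(p_label * np.log(p_score + 1e-12))
    grad = p_score - p_label            # d(loss)/d(scores)
    return loss, grad

# Toy query with four documents: graded relevance labels and model scores.
labels = np.array([2.0, 1.0, 0.0, 0.0])
scores = np.array([0.5, 1.2, -0.3, 0.1])
loss, grad = listnet_top1_loss_and_grad(scores, labels)
print(loss, grad)
```

    Because the loss decomposes into independent softmax evaluations per query, the per-document exponentials and normalisations can be computed in parallel, which is the kind of fine-grained parallelism an FPGA design can exploit.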

    Stochastic Top-k ListNet

    ListNet is a well-known listwise learning-to-rank model and has gained much attention in recent years. A particular problem of ListNet, however, is the high computation complexity of model training, mainly due to the large number of object permutations involved in computing the gradients. This paper proposes a stochastic ListNet approach which computes the gradient within a bounded permutation subset. It significantly reduces the computation complexity of model training and allows extension to Top-k models, which is impossible with the conventional implementation based on full-set permutations. Meanwhile, the new approach utilizes partial ranking information of human labels, which helps improve model quality. Our experiments demonstrated that the stochastic ListNet method indeed leads to better ranking performance and speeds up model training remarkably.
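    To make the idea concrete, here is a minimal sketch (this author's illustration, not the paper's code) of a stochastic Top-k ListNet gradient step: instead of enumerating all n!/(n-k)! top-k prefixes, it samples a bounded number of them and averages the gradient of the Plackett-Luce negative log-likelihood over that subset. The position-weighted sampler and all hyperparameters here are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def topk_prefix_nll_grad(scores, prefix):
    """Negative log Plackett-Luce probability of one top-k prefix
    and its gradient with respect to the document scores."""
    n = len(scores)
    grad = np.zeros(n)
    nll = 0.0
    remaining = np.ones(n, dtype=bool)
    for d in prefix:
        s = np.where(remaining, scores, -np.inf)
        p = np.exp(s - s.max())
        p /= p.sum()                 # softmax over the remaining documents
        nll -= np.log(p[d] + 1e-12)
        grad += p                    # softmax term of d(-log P)/d(scores)
        grad[d] -= 1.0               # minus the indicator of the chosen doc
        remaining[d] = False
    return nll, grad

def stochastic_topk_listnet_grad(scores, ideal_order, k=2, n_samples=8):
    """Average the gradient over a bounded random subset of top-k
    prefixes instead of all n!/(n-k)! of them."""
    n = len(scores)
    # Illustrative sampler: favour documents ranked highly by the labels
    # (the paper's sampler also uses label ranking information).
    weights = 1.0 / (np.arange(n) + 1)
    weights /= weights.sum()
    total = np.zeros(n)
    for _ in range(n_samples):
        prefix = rng.choice(ideal_order, size=k, replace=False, p=weights)
        _, g = topk_prefix_nll_grad(scores, prefix)
        total += g
    return total / n_samples

scores = np.array([0.5, 1.2, -0.3, 0.1])
ideal_order = np.array([0, 1, 3, 2])   # documents sorted by relevance label
print(stochastic_topk_listnet_grad(scores, ideal_order))
```

    The key point is that the cost per step is O(n_samples * k * n) rather than growing with the number of permutations, which is what makes the Top-k extension tractable.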