A sample decreasing threshold greedy-based algorithm for big data summarisation

Abstract

As the scale of datasets used in big data applications expands rapidly, there have been increasing efforts to develop faster algorithms. This paper addresses big data summarisation problems through the submodular maximisation approach and proposes an efficient algorithm for maximising general non-negative submodular objective functions subject to k-extendible system constraints. Leveraging a random sampling process and a decreasing threshold strategy, this work proposes an algorithm named Sample Decreasing Threshold Greedy (SDTG). The proposed algorithm obtains an expected approximation guarantee of $\frac{1}{1+k}-\epsilon$ for maximising monotone submodular functions and of $\frac{k}{(1+k)^2}-\epsilon$ in non-monotone cases, with an expected computational complexity of $O\!\left(\frac{n}{(1+k)\epsilon}\ln\frac{r}{\epsilon}\right)$. Here, $r$ is the largest size of feasible solutions, and $\epsilon \in \left(0, \frac{1}{1+k}\right)$ is an adjustable design parameter that trades off the approximation ratio against the computational complexity. The performance of the proposed algorithm is validated and compared with that of benchmark algorithms through experiments with a movie recommendation system built on a real-world database.
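The abstract names only the two core ingredients of SDTG: a random sampling step over the ground set and a decreasing threshold rule for accepting elements. The Python sketch below is a minimal illustration of how these two ingredients can fit together, not the paper's exact procedure; the objective oracle f, the feasibility oracle is_feasible, the sampling probability p = 1/(1+k), and the stopping rule are assumptions introduced here for concreteness.

```python
import random


def sdtg(ground_set, f, is_feasible, k, eps):
    """Illustrative sketch of a sample decreasing threshold greedy loop.

    f(S)           -- submodular objective evaluated on a set S (assumed oracle)
    is_feasible(S) -- membership oracle for the k-extendible system (assumed oracle)
    k, eps         -- constraint parameter and trade-off parameter from the abstract
    """
    # Random sampling step: keep each element independently with probability p.
    # The choice p = 1/(1+k) is an assumption made for this sketch.
    p = 1.0 / (1.0 + k)
    sample = [e for e in ground_set if random.random() < p]

    solution = set()
    d = max((f({e}) for e in sample), default=0.0)  # largest singleton value in the sample
    r = len(sample)                                 # crude stand-in for the max feasible size
    if d <= 0 or r == 0:
        return solution

    # Decreasing threshold loop: the acceptance threshold shrinks geometrically
    # by a factor (1 - eps) until it falls below eps * d / r.
    threshold = d
    while threshold >= eps * d / r:
        for e in sample:
            if e in solution:
                continue
            candidate = solution | {e}
            gain = f(candidate) - f(solution)
            # Accept e if it keeps the solution feasible and its marginal
            # gain clears the current threshold.
            if gain >= threshold and is_feasible(candidate):
                solution = candidate
        threshold *= (1.0 - eps)
    return solution


if __name__ == "__main__":
    # Toy example: a monotone coverage objective under a cardinality constraint,
    # one of the simplest instances of a k-extendible system (k = 1).
    sets = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1, 4, 5}}
    f = lambda S: len(set().union(*(sets[e] for e in S))) if S else 0.0
    is_feasible = lambda S: len(S) <= 2
    print(sdtg(list(sets), f, is_feasible, k=1, eps=0.1))
```

In sketches of this kind, the geometric decay of the threshold by (1 − ϵ) is what gives rise to a ln(r/ϵ) factor in the number of passes, while evaluating only the sampled fraction of the ground set accounts for a 1/(1+k) reduction in oracle calls, consistent with the complexity stated in the abstract.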
