12 research outputs found

    Solving Medium-Density Subset Sum Problems in Expected Polynomial Time: An Enumeration Approach

    Full text link
    The subset sum problem (SSP) can be briefly stated as: given a target integer $E$ and a set $A$ containing $n$ positive integers $a_j$, find a subset of $A$ summing to $E$. The density $d$ of an SSP instance is defined as the ratio of $n$ to $m$, where $m$ is the logarithm of the largest integer in $A$. Based on the structural and statistical properties of subset sums, we present an improved enumeration scheme for SSP and implement it as a complete and exact algorithm (EnumPlus). The algorithm always equivalently reduces an instance to a low-density one, and then solves it by enumeration. Through this approach, we show that it is possible to design a single algorithm that efficiently solves instances of arbitrary density in a uniform way. Furthermore, our algorithm has a considerable performance advantage over previous algorithms. First, it extends the density range in which SSP can be solved in expected polynomial time: it solves SSP in expected $O(n\log n)$ time when the density $d \geq c\cdot\sqrt{n}/\log n$, while the previously best density range is $d \geq c\cdot n/(\log n)^{2}$. In addition, the overall expected time and space requirements in the average case are proven to be $O(n^5\log n)$ and $O(n^5)$ respectively. Second, in the worst case, it slightly improves the previously best time complexity of exact algorithms for SSP: the worst-case time complexity of our algorithm is proved to be $O((n-6)2^{n/2}+n)$, while the previously best result is $O(n\cdot 2^{n/2})$. Comment: 11 pages, 1 figure
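
    For reference, the $O(n\cdot 2^{n/2})$ baseline mentioned above corresponds to the classic meet-in-the-middle approach. The sketch below is only an illustration of the problem and its density parameter, not the paper's EnumPlus algorithm; the instance values are made up.

    from bisect import bisect_left
    from math import log2

    def subset_sums(items):
        """Return a sorted list of all subset sums of `items`."""
        sums = {0}
        for a in items:
            sums |= {s + a for s in sums}
        return sorted(sums)

    def meet_in_the_middle(a, target):
        """Return True iff some subset of `a` sums to `target` (meet-in-the-middle style)."""
        half = len(a) // 2
        left, right = subset_sums(a[:half]), subset_sums(a[half:])
        for s in left:
            # Binary search for target - s among the right-half sums.
            i = bisect_left(right, target - s)
            if i < len(right) and right[i] == target - s:
                return True
        return False

    def density(a):
        """Density d = n / log2(max a_j), as defined in the abstract above."""
        return len(a) / log2(max(a))

    A = [3, 34, 4, 12, 5, 2]                  # hypothetical instance
    print(meet_in_the_middle(A, 9))           # True (3 + 4 + 2, among others)
    print(round(density(A), 2))               # ~1.18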

    Ternary Syndrome Decoding with Large Weight

    Get PDF
    The Syndrome Decoding problem is at the core of many code-based cryptosystems. In this paper, we study ternary Syndrome Decoding with large weight. This problem was introduced in the Wave signature scheme but has never been thoroughly studied. We perform an algorithmic study of this problem, which results in an update of the Wave parameters. On a more fundamental level, we show that ternary Syndrome Decoding with large weight is a significantly harder problem than binary Syndrome Decoding, which could have several applications for the design of code-based cryptosystems.
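
    For context, a Syndrome Decoding instance over the field with $q$ elements consists of a parity-check matrix $H$, a syndrome $s$, and a weight $w$, and asks for an error vector $e$ of Hamming weight $w$ with $He = s$. The sketch below merely verifies a candidate solution in the ternary case ($q = 3$) to illustrate the problem statement; the toy matrix and vectors are made up and unrelated to Wave.

    import numpy as np

    def is_sd_solution(H, s, e, w, q=3):
        """Check whether e is a weight-w solution of the Syndrome Decoding
        instance (H, s, w) over the field with q elements (q=3: ternary)."""
        e = np.asarray(e) % q
        weight_ok = np.count_nonzero(e) == w
        syndrome_ok = np.array_equal((H @ e) % q, np.asarray(s) % q)
        return weight_ok and syndrome_ok

    H = np.array([[1, 2, 0, 1],
                  [0, 1, 1, 2]])
    e = np.array([1, 0, 2, 1])            # candidate error vector, weight 3
    s = (H @ e) % 3                       # its syndrome
    print(is_sd_solution(H, s, e, w=3))   # True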

    On Near-Linear-Time Algorithms for Dense Subset Sum

    Get PDF
    In the Subset Sum problem we are given a set of $n$ positive integers $X$ and a target $t$, and are asked whether some subset of $X$ sums to $t$. Natural parameters for this problem that have been studied in the literature are $n$ and $t$, as well as the maximum input number $\mathrm{mx}_X$ and the sum of all input numbers $\Sigma_X$. In this paper we study the dense case of Subset Sum, where all these parameters are polynomial in $n$. In this regime, standard pseudo-polynomial algorithms solve Subset Sum in polynomial time $n^{O(1)}$. Our main question is: When can dense Subset Sum be solved in near-linear time $\tilde{O}(n)$? We provide an essentially complete dichotomy by designing improved algorithms and proving conditional lower bounds, thereby determining essentially all settings of the parameters $n, t, \mathrm{mx}_X, \Sigma_X$ for which dense Subset Sum is in time $\tilde{O}(n)$. For notational convenience we assume without loss of generality that $t \ge \mathrm{mx}_X$ (as larger numbers can be ignored) and $t \le \Sigma_X/2$ (using symmetry). Then our dichotomy reads as follows:
    - By reviving and improving an additive-combinatorics-based approach by Galil and Margalit [SICOMP'91], we show that Subset Sum is in near-linear time $\tilde{O}(n)$ if $t \gg \mathrm{mx}_X \Sigma_X/n^2$.
    - We prove a matching conditional lower bound: If Subset Sum is in near-linear time for any setting with $t \ll \mathrm{mx}_X \Sigma_X/n^2$, then the Strong Exponential Time Hypothesis and the Strong k-Sum Hypothesis fail.
    We also generalize our algorithm from sets to multi-sets, albeit with non-matching upper and lower bounds.
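
    The "standard pseudo-polynomial algorithms" referred to above are exemplified by Bellman's $O(nt)$ dynamic program; the minimal sketch below (not the near-linear-time algorithm of the paper) shows why the dense regime is polynomial, on a made-up instance.

    def subset_sum_dp(X, t):
        """Bellman's O(n*t) dynamic program: reachable[v] is True iff some
        subset of X sums to v. Pseudo-polynomial, hence polynomial-time
        whenever t is polynomial in n (the dense regime discussed above)."""
        reachable = [False] * (t + 1)
        reachable[0] = True
        for x in X:
            # Iterate downwards so each element is used at most once.
            for v in range(t, x - 1, -1):
                if reachable[v - x]:
                    reachable[v] = True
        return reachable[t]

    print(subset_sum_dp([1, 4, 4, 6, 9], 14))  # True (4 + 4 + 6)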

    Privacy-Preserving Distributed Learning with Secret Gradient Descent

    Full text link
    In many important application domains of machine learning, data is a privacy-sensitive resource. In addition, due to the growing complexity of the models, single actors typically do not have sufficient data to train a model on their own. Motivated by these challenges, we propose Secret Gradient Descent (SecGD), a method for training machine learning models on data that is spread over different clients while preserving the privacy of the training data. We achieve this by letting each client add temporary noise to the information they send to the server during the training process. They also share this noise in separate messages with the server, which can then subtract it from the previously received values. By routing all data through an anonymization network such as Tor, we prevent the server from knowing which messages originate from the same client, which in turn allows us to show that breaking a client's privacy is computationally intractable as it would require solving a hard instance of the subset sum problem. This setup allows SecGD to work in the presence of only two honest clients and a malicious server, and without the need for peer-to-peer connections. Comment: 13 pages, 1 figure
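
    A minimal sketch of the additive-masking idea described above: each client sends its noisy update and the noise itself in separate, unlinkable messages, and the server subtracts the noise after aggregation. Names and shapes are illustrative, and the sketch omits the Tor-based anonymization and the subset-sum hardness argument.

    import numpy as np

    rng = np.random.default_rng(0)

    def client_messages(gradient, noise_scale=10.0):
        """Each client sends two separate messages: its gradient masked with
        temporary noise, and the noise itself (unlinkable to the first message
        when routed through an anonymization network such as Tor)."""
        noise = rng.normal(scale=noise_scale, size=gradient.shape)
        return gradient + noise, noise

    def server_aggregate(masked_grads, noises):
        """The server sums the masked gradients and subtracts the separately
        received noise terms, recovering only the aggregate gradient."""
        return sum(masked_grads) - sum(noises)

    grads = [np.array([0.1, -0.2]), np.array([0.3, 0.0]), np.array([-0.1, 0.4])]  # hypothetical client gradients
    msgs = [client_messages(g) for g in grads]
    agg = server_aggregate([m for m, _ in msgs], [n for _, n in msgs])
    print(np.allclose(agg, sum(grads)))   # True: only the aggregate is recovered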

    Approximating Knapsack and Partition via Dense Subset Sums

    Full text link
    Knapsack and Partition are two important additive problems whose fine-grained complexities in the $(1-\varepsilon)$-approximation setting are not yet settled. In this work, we make progress on both problems by giving improved algorithms.
    - Knapsack can be $(1-\varepsilon)$-approximated in $\tilde O(n + (1/\varepsilon)^{2.2})$ time, improving the previous $\tilde O(n + (1/\varepsilon)^{2.25})$ by Jin (ICALP'19). There is a known conditional lower bound of $(n+1/\varepsilon)^{2-o(1)}$ based on the $(\min,+)$-convolution hypothesis.
    - Partition can be $(1-\varepsilon)$-approximated in $\tilde O(n + (1/\varepsilon)^{1.25})$ time, improving the previous $\tilde O(n + (1/\varepsilon)^{1.5})$ by Bringmann and Nakos (SODA'21). There is a known conditional lower bound of $(1/\varepsilon)^{1-o(1)}$ based on the Strong Exponential Time Hypothesis.
    Both of our new algorithms apply the additive-combinatorial results on dense subset sums by Galil and Margalit (SICOMP'91) and Bringmann and Wellnitz (SODA'21). Such techniques had not been explored in the context of Knapsack prior to our work. In addition, we design several new methods to speed up the divide-and-conquer steps which naturally arise in solving additive problems. Comment: To appear in SODA 2023. Corrects minor mistakes in Lemma 3.3 and Lemma 3.5 in the proceedings version of this paper.
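
    Neither improved algorithm above is easy to sketch briefly, but the textbook profit-scaling FPTAS conveys what a $(1-\varepsilon)$-approximation for Knapsack means: round profits down to multiples of $K = \varepsilon\, p_{\max}/n$ and run the exact profit-indexed dynamic program. The sketch below is that textbook routine, not the algorithm of the paper; the instance is made up.

    def knapsack_fptas(items, capacity, eps):
        """Profit-scaling FPTAS for 0/1 Knapsack. `items` is a list of
        (profit, weight) pairs. Returns a (1 - eps)-approximate profit."""
        n = len(items)
        pmax = max(p for p, _ in items)
        K = eps * pmax / n
        scaled = [int(p // K) for p, _ in items]
        P = sum(scaled)
        INF = float("inf")
        # min_weight[q] = minimum weight achieving scaled profit exactly q.
        min_weight = [0] + [INF] * P
        for (_, w), sp in zip(items, scaled):
            for q in range(P, sp - 1, -1):
                if min_weight[q - sp] + w < min_weight[q]:
                    min_weight[q] = min_weight[q - sp] + w
        best_q = max(q for q in range(P + 1) if min_weight[q] <= capacity)
        return best_q * K   # within a (1 - eps) factor of the optimal profit

    items = [(60, 10), (100, 20), (120, 30)]             # hypothetical (profit, weight) pairs
    print(knapsack_fptas(items, capacity=50, eps=0.1))   # 220.0 (the optimum is 220)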