    Two new results about quantum exact learning

    We present two new results about exact learning by quantum computers. First, we show how to exactly learn a $k$-Fourier-sparse $n$-bit Boolean function from $O(k^{1.5}(\log k)^2)$ uniform quantum examples for that function. This improves over the bound of $\widetilde{\Theta}(kn)$ uniformly random classical examples (Haviv and Regev, CCC'15). Our main tool is an improvement of Chang's lemma for the special case of sparse functions. Second, we show that if a concept class $\mathcal{C}$ can be exactly learned using $Q$ quantum membership queries, then it can also be learned using $O\left(\frac{Q^2}{\log Q}\log|\mathcal{C}|\right)$ classical membership queries. This improves the previous best simulation result (Servedio and Gortler, SICOMP'04) by a $\log Q$ factor.
    Comment: v3: 21 pages. Small corrections and clarifications
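
    To make the central notion concrete: a Boolean function is $k$-Fourier-sparse if at most $k$ of its $2^n$ Fourier coefficients are non-zero. The brute-force Python sketch below only illustrates the definition; it is not the paper's learning algorithm (which only receives quantum examples), and `parity` and `majority` are example functions chosen here, not taken from the paper.

        from itertools import product

        def fourier_coefficients(f, n):
            """All 2^n Fourier coefficients of f: {0,1}^n -> {-1,+1}.

            hat_f(S) = 2^{-n} * sum_x f(x) * (-1)^{<S,x>}, with the set S
            encoded as an n-bit mask. Brute force, exponential in n.
            """
            coeffs = {}
            for S in range(2 ** n):
                total = 0
                for x in product((0, 1), repeat=n):
                    chi = (-1) ** sum(b for i, b in enumerate(x) if (S >> i) & 1)
                    total += f(x) * chi
                coeffs[S] = total / 2 ** n
            return coeffs

        def fourier_sparsity(f, n, tol=1e-9):
            """The k for which f is k-Fourier-sparse."""
            return sum(1 for c in fourier_coefficients(f, n).values() if abs(c) > tol)

        parity = lambda x: (-1) ** sum(x)              # a single character: 1-sparse
        majority = lambda x: 1 if sum(x) >= 2 else -1  # 4-sparse for n = 3
        print(fourier_sparsity(parity, 3), fourier_sparsity(majority, 3))  # 1 4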

    Computational linear algebra over finite fields

    We present algorithms for the efficient computation of linear algebra problems over finite fields.
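
    The abstract does not spell out which problems are covered, but the basic building block looks as follows: a generic dense Gauss-Jordan elimination over a prime field GF(p). This is a minimal sketch for illustration, not the paper's (presumably more refined) algorithms; `solve_mod_p` is a name chosen here.

        def solve_mod_p(A, b, p):
            """Solve A x = b over GF(p), p prime; one solution or None if inconsistent."""
            n, m = len(A), len(A[0])
            M = [[a % p for a in row] + [bi % p] for row, bi in zip(A, b)]
            row, pivots = 0, []
            for col in range(m):
                piv = next((r for r in range(row, n) if M[r][col]), None)
                if piv is None:
                    continue
                M[row], M[piv] = M[piv], M[row]
                inv = pow(M[row][col], p - 2, p)      # inverse via Fermat's little theorem
                M[row] = [v * inv % p for v in M[row]]
                for r in range(n):                    # eliminate the column everywhere else
                    if r != row and M[r][col]:
                        f = M[r][col]
                        M[r] = [(v - f * w) % p for v, w in zip(M[r], M[row])]
                pivots.append(col)
                row += 1
            if any(M[r][m] for r in range(row, n)):   # a row 0 = nonzero: no solution
                return None
            x = [0] * m                               # free variables set to 0
            for r, c in enumerate(pivots):
                x[c] = M[r][m]
            return x

        # 2x + y = 1 and x + 3y = 4 over GF(7)  ->  x = 4, y = 0
        print(solve_mod_p([[2, 1], [1, 3]], [1, 4], 7))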

    Sparse Learning over Infinite Subgraph Features

    We present a supervised learning algorithm for graph data (a set of graphs) that handles arbitrary twice-differentiable loss functions and sparse linear models over all possible subgraph features. To date, it has been shown that several types of sparse learning, such as AdaBoost, LPBoost, LARS/LASSO, and sparse PLS regression, can be performed over all possible subgraph features. Particular emphasis is placed on simultaneous learning of relevant features from an infinite set of candidates. We first generalize the techniques used in these preceding studies to derive a unifying bounding technique for arbitrary separable functions. We then carefully use this bound to make block coordinate gradient descent feasible over infinite subgraph features, resulting in a fast-converging algorithm that can solve a wider class of sparse learning problems over graph data. We also empirically study the differences from existing approaches in convergence behavior, selected subgraph features, and search-space sizes, and we discuss several previously unnoticed issues in sparse learning over all possible subgraph features.
    Comment: 42 pages, 24 figures, 4 tables
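
    As a rough illustration of the coordinate-wise step such sparse learners repeat: in LASSO-style coordinate descent, a feature enters the model only when the magnitude of its gradient term exceeds the regularization weight, and it is this kind of threshold test that the paper's bounding technique evaluates over the implicit, infinite set of subgraph features without enumerating them. The sketch below runs over an explicit finite feature matrix and is not the paper's graph-mining algorithm.

        import numpy as np

        def lasso_cd(X, y, lam, iters=200):
            """Cyclic coordinate descent for 0.5*||y - Xw||^2 + lam*||w||_1.

            The soft-threshold step keeps coordinate j at zero unless
            |rho_j| > lam -- the entry test mentioned in the lead-in.
            """
            n, d = X.shape
            w = np.zeros(d)
            r = y.copy()                               # residual y - Xw
            col_sq = (X ** 2).sum(axis=0)
            for _ in range(iters):
                for j in range(d):
                    if col_sq[j] == 0:
                        continue
                    rho = X[:, j] @ r + col_sq[j] * w[j]   # partial-residual correlation
                    w_new = np.sign(rho) * max(abs(rho) - lam, 0) / col_sq[j]
                    if w_new != w[j]:
                        r -= (w_new - w[j]) * X[:, j]      # keep the residual in sync
                        w[j] = w_new
            return w

        rng = np.random.default_rng(0)
        X = rng.standard_normal((50, 10))
        w_true = np.zeros(10); w_true[:3] = [2.0, -1.5, 1.0]
        y = X @ w_true + 0.1 * rng.standard_normal(50)
        print(np.round(lasso_cd(X, y, lam=5.0), 2))  # mostly zero outside the first 3 coords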

    Finite Boolean Algebras for Solid Geometry using Julia's Sparse Arrays

    The goal of this paper is to introduce a new method in the computer-aided geometry of solid modeling. We put forth a novel algebraic technique to evaluate any variadic expression between polyhedral d-solids (d = 2, 3) with regularized operators of union, intersection, and difference, i.e., any CSG tree. The result is obtained in three steps: first, by computing an independent set of generators for the d-space partition induced by the input; then, by reducing the solid expression to an equivalent logical formula between Boolean terms made of zeros and ones; and, finally, by evaluating this expression using bitwise operators. This method is implemented in Julia using sparse arrays. The computational evaluation of every possible solid expression, usually denoted as CSG (Constructive Solid Geometry), is thus reduced to an equivalent logical expression of a finite set algebra over the cells of a space partition and solved by native bitwise operators.
    Comment: revised version submitted to Computer-Aided Geometric Design
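
    The core trick is easy to demonstrate: once each solid is encoded as a bitmask over the cells of a common space partition, every regularized Boolean operation, and hence any CSG tree, reduces to native bitwise operators. The toy partition below (a hypothetical 1D segmentation into 8 cells, not the paper's 2D/3D arrangement, and in Python rather than Julia) is chosen only to keep the masks readable.

        # Bit i is 1 iff cell i of the partition lies inside the solid.
        A = 0b00111100   # solid A occupies cells 2..5
        B = 0b11110000   # solid B occupies cells 4..7

        union        = A | B           # 0b11111100
        intersection = A & B           # 0b00110000
        difference   = A & ~B          # 0b00001100  (A minus B)

        # Any CSG tree evaluates the same way, e.g. (A ∪ B) − (A ∩ B):
        sym_diff = (A | B) & ~(A & B)
        print(f"{sym_diff:08b}")       # 11001100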

    On the Complexity and Approximation of Binary Evidence in Lifted Inference

    Lifted inference algorithms exploit symmetries in probabilistic models to speed up inference. They show impressive performance when calculating unconditional probabilities in relational models, but often resort to non-lifted inference when computing conditional probabilities. The reason is that conditioning on evidence breaks many of the model's symmetries, which can preempt standard lifting techniques. Recent theoretical results show, for example, that conditioning on evidence corresponding to binary relations is #P-hard, suggesting that no lifting is to be expected in the worst case. In this paper, we balance this negative result by identifying the Boolean rank of the evidence as a key parameter for characterizing the complexity of conditioning in lifted inference. In particular, we show that conditioning on binary evidence with bounded Boolean rank is efficient. This opens up the possibility of approximating evidence by a low-rank Boolean matrix factorization, which we investigate both theoretically and empirically.
    Comment: To appear in Advances in Neural Information Processing Systems 26 (NIPS), Lake Tahoe, USA, December 2013
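
    For a sense of the key parameter: the Boolean rank of a 0/1 matrix is the smallest $r$ such that it equals a Boolean product $U \circ V$ with inner dimension $r$, or equivalently the minimum number of all-ones rectangles covering its 1-entries. The brute-force check below is exponential and only meant to make the definition concrete on a tiny matrix; it is not the low-rank approximation method studied in the paper.

        from itertools import combinations, product

        def boolean_rank(B):
            """Minimum number of all-ones rectangles covering exactly the 1s of B."""
            n, m = len(B), len(B[0])
            ones = {(i, j) for i in range(n) for j in range(m) if B[i][j]}
            if not ones:
                return 0
            rects = []                        # candidate rank-1 pieces inside the 1s
            for rows in product((0, 1), repeat=n):
                for cols in product((0, 1), repeat=m):
                    cells = {(i, j) for i in range(n) if rows[i]
                                    for j in range(m) if cols[j]}
                    if cells and cells <= ones:
                        rects.append(cells)
            for r in range(1, len(ones) + 1): # smallest r whose rectangles cover all 1s
                for combo in combinations(rects, r):
                    if set().union(*combo) == ones:
                        return r

        B = [[1, 1, 0],
             [1, 1, 1],
             [0, 1, 1]]
        print(boolean_rank(B))  # 2: two overlapping 2x2 blocks cover the 1s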