    Nearly Optimal Private Convolution

    We study computing the convolution of a private input x with a public input h, while satisfying the guarantees of (ε, δ)-differential privacy. Convolution is a fundamental operation, intimately related to Fourier transforms. In our setting, the private input may represent a time series of sensitive events or a histogram of a database of confidential personal information. Convolution then captures important primitives, including linear filtering, which is an essential tool in time series analysis, and aggregation queries on projections of the data. We give a nearly optimal algorithm for computing convolutions while satisfying (ε, δ)-differential privacy. Surprisingly, we follow the simple strategy of adding independent Laplacian noise to each Fourier coefficient and bounding the privacy loss using the composition theorem of Dwork, Rothblum, and Vadhan. We derive a closed-form expression for the optimal noise to add to each Fourier coefficient using convex programming duality. Our algorithm is very efficient -- it is essentially no more computationally expensive than a Fast Fourier Transform. To prove near optimality, we use the recent discrepancy lower bounds of Muthukrishnan and Nikolov and derive a spectral lower bound using a characterization of discrepancy in terms of determinants.
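    The noise-adding strategy above is simple enough to sketch. The following is a minimal Python illustration, not the paper's calibrated algorithm: it perturbs each Fourier coefficient of the private input with independent Laplace noise and convolves in the frequency domain. The uniform noise_scale and the helper name private_convolution are assumptions for illustration; the paper derives optimal per-coefficient noise scales via convex programming duality.

        # Minimal sketch: noisy circular convolution obtained by adding
        # independent Laplace noise to each Fourier coefficient of the
        # private input. The uniform noise scale is a simplifying assumption.
        import numpy as np

        def private_convolution(x, h, noise_scale, seed=None):
            """Noisy circular convolution of private x with public h."""
            rng = np.random.default_rng(seed)
            n = len(x)
            X = np.fft.fft(x)
            # Perturb real and imaginary parts of every coefficient.
            noise = rng.laplace(scale=noise_scale, size=n) \
                  + 1j * rng.laplace(scale=noise_scale, size=n)
            H = np.fft.fft(h, n)  # zero-pad the public filter to length n
            return np.real(np.fft.ifft((X + noise) * H))

        # Example: privately smooth a sensitive event-count series.
        x = np.random.default_rng(0).poisson(5.0, size=64).astype(float)
        h = np.ones(4) / 4.0
        y = private_convolution(x, h, noise_scale=2.0, seed=1)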

    Faster truncated integer multiplication

    We present new algorithms for computing the low n bits or the high n bits of the product of two n-bit integers. We show that these problems may be solved in asymptotically 75% of the time required to compute the full 2n-bit product, assuming that the underlying integer multiplication algorithm relies on computing cyclic convolutions of real sequences.
    Comment: 28 pages
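    To fix the definitions (the 25% speedup itself is the paper's contribution and is not reproduced here), a short Python sketch of what the truncated products compute, assuming n-bit nonnegative integers:

        # The low product is (a*b) mod 2**n; the high product is (a*b) >> n.
        # Python's built-in big integers make the definitions one-liners.
        def low_product(a, b, n):
            """Low n bits of the 2n-bit product a*b."""
            return (a * b) & ((1 << n) - 1)

        def high_product(a, b, n):
            """High n bits of the 2n-bit product a*b."""
            return (a * b) >> n

        n = 8
        a, b = 0b10110101, 0b01101110  # two 8-bit integers
        assert (high_product(a, b, n) << n) | low_product(a, b, n) == a * b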

    Improvements on "Fast space-variant elliptical filtering using box splines"

    It is well known that box filters can be efficiently computed using pre-integration and local finite differences [Crow1984, Heckbert1986, Viola2001]. By generalizing this idea and combining it with a non-standard variant of the Central Limit Theorem, a constant-time or O(1) algorithm was proposed in [Chaudhury2010] that allowed one to perform space-variant filtering using Gaussian-like kernels. The algorithm was based on the observation that both isotropic and anisotropic Gaussians can be approximated using certain bivariate splines called box splines. The attractive feature of the algorithm was that it allowed one to continuously control the shape and size (covariance) of the filter, and that it had a fixed computational cost per pixel, irrespective of the size of the filter. The algorithm, however, offered only limited control over the covariance and accuracy of the Gaussian approximation. In this work, we propose some improvements by appropriately modifying the algorithm in [Chaudhury2010].
    Comment: 7 figures
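    The pre-integration and finite-difference idea that the abstract builds on [Crow1984] is easy to illustrate in one dimension. The sketch below is a toy, not the box-spline algorithm of [Chaudhury2010]: once a prefix sum of the signal is computed, any box filter costs O(1) per sample, independent of the filter width.

        # O(1)-per-sample box filter via pre-integration (prefix sums)
        # followed by a local finite difference.
        import numpy as np

        def box_filter_1d(signal, radius):
            """Mean over a window of width 2*radius+1, O(1) per sample."""
            csum = np.concatenate(([0.0], np.cumsum(signal)))  # pre-integration
            n = len(signal)
            hi = np.minimum(np.arange(n) + radius + 1, n)
            lo = np.maximum(np.arange(n) - radius, 0)
            # Finite difference of the prefix sums yields each window sum.
            return (csum[hi] - csum[lo]) / (hi - lo)

        x = np.sin(np.linspace(0, 3 * np.pi, 50))
        smoothed = box_filter_1d(x, radius=3)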

    Network Sketching: Exploiting Binary Structure in Deep CNNs

    Convolutional neural networks (CNNs) with deep architectures have substantially advanced the state of the art in computer vision tasks. However, deep networks are typically resource-intensive and thus difficult to deploy on mobile devices. Recently, CNNs with binary weights have shown compelling efficiency, whereas the accuracy of such models is usually unsatisfactory in practice. In this paper, we introduce network sketching as a novel technique for pursuing binary-weight CNNs, targeting more faithful inference and a better trade-off for practical applications. Our basic idea is to exploit binary structure directly in pre-trained filter banks and produce binary-weight models via tensor expansion. The whole process can be treated as a coarse-to-fine model approximation, akin to the pencil-drawing steps of outlining and shading. To further speed up the generated models, namely the sketches, we also propose an associative implementation of binary tensor convolutions. Experimental results demonstrate that a proper sketch of AlexNet (or ResNet) outperforms existing binary-weight models by large margins on the ImageNet large-scale classification task, while the memory committed to network parameters increases only slightly.
    Comment: To appear in CVPR 2017
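    The coarse-to-fine approximation can be sketched concretely under the common formulation W ≈ sum_i a_i * B_i, with each B_i a binary tensor over {-1, +1}. The greedy residual-fitting rule below (B = sign of the residual, scale chosen by least squares) is an illustrative assumption, not necessarily the paper's exact expansion procedure.

        # Greedy expansion of a real filter tensor into scaled binary tensors:
        # each step "shades in" the residual left by the previous terms.
        import numpy as np

        def sketch_filter(W, num_terms):
            """Approximate W by sum_i scales[i] * binaries[i]."""
            residual = W.astype(float).copy()
            scales, binaries = [], []
            for _ in range(num_terms):
                B = np.where(residual >= 0, 1.0, -1.0)    # binary direction
                a = np.sum(B * residual) / residual.size  # least-squares scale
                scales.append(a)
                binaries.append(B)
                residual -= a * B                         # refine the residual
            return scales, binaries

        rng = np.random.default_rng(0)
        W = rng.normal(size=(3, 3, 16))  # stand-in for a pre-trained filter
        scales, binaries = sketch_filter(W, num_terms=4)
        approx = sum(a * B for a, B in zip(scales, binaries))
        print(np.linalg.norm(W - approx) / np.linalg.norm(W))  # relative error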