2,491 research outputs found

    Compression of Deep Neural Networks on the Fly

    Thanks to their state-of-the-art performance, deep neural networks are increasingly used for object recognition. To achieve these results, they rely on millions of trainable parameters. However, when targeting embedded applications, the size of these models becomes problematic, effectively ruling out their use on smartphones and other resource-limited devices. In this paper we introduce a novel compression method for deep neural networks that is applied during the learning phase: an extra regularization term is added to the cost function of the fully-connected layers. We combine this method with Product Quantization (PQ) of the trained weights for further savings in storage. We evaluate our method on two data sets (MNIST and CIFAR-10), on which we achieve significantly larger compression rates than state-of-the-art methods.
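    The abstract names Product Quantization of trained fully-connected weights as the second compression stage. Below is a minimal sketch of that stage only, assuming a scikit-learn KMeans codebook per sub-vector; the layer shapes, sub-vector count, and codebook size are illustrative, and the paper's regularization term is not shown.

```python
import numpy as np
from sklearn.cluster import KMeans

def product_quantize(W, n_subvectors=4, n_centroids=256, seed=0):
    """Compress a fully-connected weight matrix with product quantization.

    Each row of W is split into n_subvectors chunks; every chunk is replaced
    by the index of its nearest centroid in a small per-chunk codebook.
    Returns the codebooks and the integer codes needed to rebuild W.
    """
    rows, cols = W.shape
    assert cols % n_subvectors == 0, "columns must split evenly into sub-vectors"
    d = cols // n_subvectors
    codebooks, codes = [], []
    for s in range(n_subvectors):
        chunk = W[:, s * d:(s + 1) * d]
        km = KMeans(n_clusters=n_centroids, n_init=4, random_state=seed).fit(chunk)
        codebooks.append(km.cluster_centers_)
        codes.append(km.labels_.astype(np.uint8 if n_centroids <= 256 else np.uint16))
    return codebooks, np.stack(codes, axis=1)

def reconstruct(codebooks, codes):
    """Rebuild an approximate weight matrix from PQ codebooks and codes."""
    return np.hstack([cb[codes[:, s]] for s, cb in enumerate(codebooks)])

# Toy usage: a 512x256 "trained" weight matrix stored as 8-bit codes per sub-vector.
W = np.random.randn(512, 256).astype(np.float32)
codebooks, codes = product_quantize(W, n_subvectors=8, n_centroids=256)
W_hat = reconstruct(codebooks, codes)
print("relative reconstruction error:", np.linalg.norm(W - W_hat) / np.linalg.norm(W))
```

    Storage drops because each row keeps only one small integer per sub-vector plus the shared codebooks, instead of full floating-point weights.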

    Optimal Compression of Floating-point Astronomical Images Without Significant Loss of Information

    We describe a compression method for floating-point astronomical images that gives compression ratios of 6 to 10 while still preserving the scientifically important information in the image. The pixel values are first preprocessed by quantizing them into scaled integer intensity levels, which removes some of the incompressible noise in the image. The integers are then losslessly compressed using the fast and efficient Rice algorithm and stored in a portable FITS format file. Quantizing an image more coarsely gives greater compression, but it also increases the noise and degrades the precision of the photometric and astrometric measurements in the quantized image. Dithering the pixel values during the quantization process can greatly improve the precision of measurements in the images. This is especially important if the analysis algorithm relies on the mode or the median, which would themselves be quantized to the same discrete levels if the pixel values are not dithered. We perform a series of experiments on both synthetic and real astronomical CCD images to quantitatively demonstrate that the magnitudes and positions of stars in the quantized images can be measured with the predicted amount of precision. In order to encourage wider use of these image compression methods, we have made available a pair of general-purpose image compression programs, called fpack and funpack, which can be used to compress any FITS format image.
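    To make the quantize-with-dithering step concrete, here is a hedged sketch of subtractive dithered quantization of a floating-point image. The quantization step is tied to a user-supplied noise estimate; the Rice coding stage is omitted (fpack/funpack handle that), and in practice the dither offsets are regenerated from a seed stored in the header rather than saved per pixel as they are here.

```python
import numpy as np

def dithered_quantize(image, sigma, q=4.0, seed=0):
    """Quantize floating-point pixels to integers with subtractive dithering.

    The quantization step is sigma / q, so each level spans a fraction of the
    background noise. A uniform random offset in [0, 1) is added before
    taking the floor and subtracted again on restore, which decorrelates the
    quantization error and keeps the background mode/median well behaved.
    """
    step = sigma / q
    rng = np.random.default_rng(seed)
    dither = rng.random(image.shape)              # one offset per pixel
    q_int = np.floor(image / step + dither).astype(np.int32)
    return q_int, step, dither

def restore(q_int, step, dither):
    """Invert the quantization (up to half a quantization step of error)."""
    return (q_int - dither + 0.5) * step

# Toy usage: a noisy flat field with background ~1000 counts and sigma ~10 counts.
img = 1000.0 + 10.0 * np.random.default_rng(1).standard_normal((64, 64))
codes, step, dither = dithered_quantize(img, sigma=10.0)
rec = restore(codes, step, dither)
print("max quantization error:", np.abs(img - rec).max(), "vs half step", step / 2)
```

    A coarser step (smaller q) yields fewer distinct integer levels and therefore better Rice compression, at the cost of larger quantization error, which is the trade-off the abstract describes.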

    Anisotropic Particles Strengthen Granular Pillars under Compression

    We probe the effects of particle shape on the global and local behavior of a two-dimensional granular pillar, acting as a proxy for a disordered solid, under uniaxial compression. This geometry allows for direct measurement of the global material response as well as tracking of every individual particle trajectory. Drawing connections between local structure and local dynamics is generally challenging in amorphous materials, where atomic positions are known with lower precision, so this study aims to elucidate such connections. We vary local interactions by using three different particle shapes: discrete circular grains (monomers), pairs of grains bonded together (dimers), and groups of three grains bonded in a triangle (trimers). We find that dimers substantially strengthen the pillar and that the degree of this effect is determined by orientational order in the initial condition. In addition, while the three particle shapes form void regions at distinct rates, a metric that quantifies packing anisotropy shows that anisotropies in the local amorphous structure remain robust. Finally, we highlight connections between local deformation rates and local structure.
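    The abstract refers to a metric that quantifies local packing anisotropy from tracked particle positions but does not define it, so the following is only a hedged sketch of one common choice: the normalized eigenvalue spread of a per-particle fabric tensor built from unit vectors to neighboring particles. The function name, neighbor cutoff, and normalization are assumptions, not the paper's definition.

```python
import numpy as np

def local_packing_anisotropy(positions, cutoff):
    """Per-particle anisotropy from a 2D fabric tensor of neighbor directions.

    For each particle, accumulate the outer products of unit vectors to
    neighbors within `cutoff`; the normalized eigenvalue difference of that
    2x2 tensor is 0 for an isotropic neighborhood and approaches 1 when the
    neighbors are strongly aligned along one direction.
    """
    n = len(positions)
    anisotropy = np.zeros(n)
    for i in range(n):
        rij = positions - positions[i]
        dist = np.linalg.norm(rij, axis=1)
        mask = (dist > 0) & (dist < cutoff)
        if mask.sum() < 2:
            continue  # too few neighbors to define a direction
        units = rij[mask] / dist[mask, None]
        fabric = units.T @ units / mask.sum()      # 2x2 fabric tensor, trace 1
        ev = np.linalg.eigvalsh(fabric)            # ascending eigenvalues
        anisotropy[i] = (ev[1] - ev[0]) / (ev[1] + ev[0])
    return anisotropy

# Toy usage: 200 random particle centers in a 10x10 box, cutoff of 1.5 diameters.
pts = np.random.default_rng(0).uniform(0, 10, size=(200, 2))
a = local_packing_anisotropy(pts, cutoff=1.5)
print("mean local anisotropy:", a.mean())
```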

    Compact Hash Codes for Efficient Visual Descriptors Retrieval in Large Scale Databases

    In this paper we present an efficient method for visual descriptor retrieval based on compact hash codes computed using a multiple k-means assignment. The method has been applied to the problem of approximate nearest neighbor (ANN) search of local and global visual content descriptors, and it has been tested on different datasets: three large-scale public datasets of up to one billion descriptors (BIGANN) and, supported by recent progress in convolutional neural networks (CNNs), also on the CIFAR-10 and MNIST datasets. Experimental results show that, despite its simplicity, the proposed method achieves very high performance, making it superior to more complex state-of-the-art methods.
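    The abstract does not spell out how the multiple k-means assignment becomes a hash code, so the sketch below shows one plausible reading under stated assumptions: each descriptor sets the bits of its m nearest centroids in a single k-means codebook, and retrieval compares codes by Hamming distance. Function names, the bit width, and m are illustrative, not the paper's exact construction.

```python
import numpy as np
from sklearn.cluster import KMeans

def train_codebook(descriptors, n_bits=64, seed=0):
    """Learn one centroid per hash bit with k-means."""
    return KMeans(n_clusters=n_bits, n_init=4, random_state=seed).fit(descriptors)

def multi_assignment_code(descriptors, codebook, m=4):
    """Binary code: set the bits of the m nearest centroids of each descriptor."""
    dists = codebook.transform(descriptors)                # (N, n_bits) distances
    nearest = np.argsort(dists, axis=1)[:, :m]             # indices of m closest centroids
    codes = np.zeros((len(descriptors), dists.shape[1]), dtype=bool)
    np.put_along_axis(codes, nearest, True, axis=1)
    return codes

def hamming_search(query_code, db_codes, k=5):
    """Return indices of the k database codes closest in Hamming distance."""
    d = np.count_nonzero(db_codes != query_code, axis=1)
    return np.argsort(d)[:k]

# Toy usage with random 128-D vectors standing in for SIFT/CNN descriptors.
rng = np.random.default_rng(0)
db = rng.standard_normal((5000, 128)).astype(np.float32)
cb = train_codebook(db, n_bits=64)
db_codes = multi_assignment_code(db, cb, m=4)
query = db[:1] + 0.05 * rng.standard_normal((1, 128)).astype(np.float32)
q_codes = multi_assignment_code(query, cb, m=4)
print("top matches for perturbed query:", hamming_search(q_codes[0], db_codes))
```

    Assigning each descriptor to several centroids instead of one makes nearby descriptors more likely to share bits, which is what allows such short codes to stand in for the full vectors during search.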