
    Fast Computational Algorithms for the Discrete Wavelet Transform and Applications of Localized Orthonormal Bases in Signal Classification

    We construct an algorithm for implementing the discrete wavelet transform by means of matrices in SO_2(R) for orthonormal compactly supported wavelets and matrices in SL_m(R), m ≥ 2, for compactly supported biorthogonal wavelets. We show that in 1 dimension the total operation count using this algorithm can be reduced to about 50% of the conventional convolution-and-downsampling-by-2 operation for both orthonormal and biorthogonal filters. In the special case of biorthogonal symmetric odd-odd filters, we show an implementation yielding a total operation count of about 38% of the conventional method. In 2 dimensions we show an implementation of this algorithm yielding a reduction in the total operation count of about 70% when the filters are orthonormal, a reduction of about 62% for general biorthogonal filters, and a reduction of about 70% if the filters are symmetric odd-odd filters. We further extend these results to 3 dimensions. We also show how the SO_2(R) method for implementing the discrete wavelet transform may be exploited to compute short FIR filters, and we construct edge mappings that aim to improve on the conventional methods' preservation of regularity. We also consider a two-class waveform discrimination problem. A statistical space-frequency analysis is performed on a training data set using the LDB algorithm of N. Saito and R. Coifman. The success of the algorithm on this particular problem is evaluated on a disjoint test data set.
    Comment: 127 pages, 25 figures, LaTeX2
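    As a hedged illustration (not the paper's exact algorithm or operation counts): the sketch below writes one level of an orthonormal DWT two ways, first as the conventional convolution-and-downsampling-by-2 step and then as a single SO_2(R) rotation applied to consecutive sample pairs. The Haar filter, the highpass sign convention, and the pairing layout are assumptions made for brevity; longer compactly supported orthonormal filters factor into a cascade of such 2x2 rotations interleaved with delays.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(16)                 # even-length test signal

# (a) conventional route: convolve, then keep every second output sample.
# Haar filters used purely for illustration; the highpass sign convention
# here is d[n] = (x[2n+1] - x[2n]) / sqrt(2).
h = np.array([1.0,  1.0]) / np.sqrt(2)      # Haar lowpass
g = np.array([1.0, -1.0]) / np.sqrt(2)      # Haar highpass
low_conv  = np.convolve(x, h)[1::2]
high_conv = np.convolve(x, g)[1::2]

# (b) rotation route: each pair (x[2n], x[2n+1]) is sent through one
# rotation by pi/4; det(R) = +1, so R lies in SO_2(R).
c, s = np.cos(np.pi / 4), np.sin(np.pi / 4)
R = np.array([[ c, s],
              [-s, c]])
pairs = x.reshape(-1, 2).T                  # row 0: even samples, row 1: odd samples
low_rot, high_rot = R @ pairs

print(np.allclose(low_conv, low_rot), np.allclose(high_conv, high_rot))  # True True
```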

    Geometric deep learning: going beyond Euclidean data

    Many scientific fields study data with an underlying structure that is a non-Euclidean space. Some examples include social networks in computational social sciences, sensor networks in communications, functional networks in brain imaging, regulatory networks in genetics, and meshed surfaces in computer graphics. In many applications, such geometric data are large and complex (in the case of social networks, on the scale of billions), and are natural targets for machine learning techniques. In particular, we would like to use deep neural networks, which have recently proven to be powerful tools for a broad range of problems in computer vision, natural language processing, and audio analysis. However, these tools have been most successful on data with an underlying Euclidean or grid-like structure, and in cases where the invariances of these structures are built into the networks used to model them. Geometric deep learning is an umbrella term for emerging techniques attempting to generalize (structured) deep neural models to non-Euclidean domains such as graphs and manifolds. The purpose of this paper is to overview different examples of geometric deep learning problems and present available solutions, key difficulties, applications, and future research directions in this nascent field.
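    As a minimal, hedged illustration of the kind of model this survey covers (not a construction taken from the paper): a basic graph convolution layer in the spectral/GCN style, where a grid convolution is replaced by feature propagation over a symmetrically normalized graph adjacency. The function name, the toy graph, and the ReLU nonlinearity are assumptions made for the example.

```python
import numpy as np

def graph_conv(adj, feats, weight):
    """One propagation step on a graph with adjacency `adj` (n x n),
    node features `feats` (n x d_in), and a learned matrix `weight` (d_in x d_out).
    Rule: relu( D^{-1/2} (A + I) D^{-1/2} X W ), a common graph-convolution form."""
    a_hat = adj + np.eye(adj.shape[0])          # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))  # symmetric normalization
    return np.maximum(d_inv_sqrt @ a_hat @ d_inv_sqrt @ feats @ weight, 0.0)

# toy 4-node path graph, 3 input features per node, 2 output features
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 3))
weight = rng.standard_normal((3, 2))
print(graph_conv(adj, feats, weight).shape)     # (4, 2)
```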