
    Weak* Properties of Weighted Convolution Algebras

    Suppose that $L^1(\omega)$ is a weighted convolution algebra on $\mathbb{R}^+ = [0,\infty)$ with the weight $\omega(t)$ normalized so that the corresponding space $M(\omega)$ of measures is the dual space of the space $C_0(1/\omega)$ of continuous functions. Suppose that $\phi : L^1(\omega) \to L^1(\omega_0)$ is a continuous nonzero homomorphism, where $L^1(\omega_0)$ is also a weighted convolution algebra. If $L^1(\omega) * f$ is norm dense in $L^1(\omega)$, we show that $L^1(\omega_0) * \phi(f)$ is (relatively) weak* dense in $L^1(\omega_0)$, and we identify the norm closure of $L^1(\omega_0) * \phi(f)$ with the convergence set for a particular semigroup. When $\phi$ is weak* continuous, it is enough for $L^1(\omega) * f$ to be weak* dense in $L^1(\omega)$. We also give sufficient conditions for, and characterizations of, the weak* continuity of $\phi$. In addition, we show that, for all nonzero $f$ in $L^1(\omega)$, the sequence $f^n/\|f^n\|$ converges weak* to $0$. When $\omega$ is regulated, $f^{n+1}/\|f^n\|$ converges to $0$ in norm.
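
    For background, the objects in this abstract are the standard weighted convolution algebras on the half-line; a minimal LaTeX summary of the usual definitions (the submultiplicativity condition is the standard assumption making $L^1(\omega)$ a Banach algebra, stated here as context rather than quoted from the paper):

        % Standard setup for a weighted convolution algebra on R^+.
        \[
          L^1(\omega) = \Big\{ f : \|f\|_\omega = \int_0^\infty |f(t)|\,\omega(t)\,dt < \infty \Big\},
          \qquad
          (f * g)(t) = \int_0^t f(t-s)\,g(s)\,ds .
        \]
        % Submultiplicativity of the weight gives the algebra inequality:
        \[
          \omega(s+t) \le \omega(s)\,\omega(t) \quad (s,t \ge 0),
          \qquad\text{so}\qquad
          \|f * g\|_\omega \le \|f\|_\omega\, \|g\|_\omega .
        \]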

    Confidence Propagation through CNNs for Guided Sparse Depth Regression

    Generally, convolutional neural networks (CNNs) process data on a regular grid, e.g., data generated by ordinary cameras. Designing CNNs for sparse and irregularly spaced input data is still an open research problem with numerous applications in autonomous driving, robotics, and surveillance. In this paper, we propose an algebraically-constrained normalized convolution layer for CNNs with highly sparse input that has a smaller number of network parameters compared to related work. We propose novel strategies for determining the confidence from the convolution operation and propagating it to consecutive layers. We also propose an objective function that simultaneously minimizes the data error while maximizing the output confidence. To integrate structural information, we also investigate fusion strategies to combine depth and RGB information in our normalized convolution network framework. In addition, we introduce the use of output confidence as auxiliary information to improve the results. The capabilities of our normalized convolution network framework are demonstrated for the problem of scene depth completion. Comprehensive experiments are performed on the KITTI-Depth and the NYU-Depth-v2 datasets. The results clearly demonstrate that the proposed approach achieves superior performance while requiring only about 1-5% of the number of parameters compared to state-of-the-art methods.
    Comment: 14 pages, 14 figures
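
    A minimal PyTorch-style sketch of the normalized-convolution idea described above. The class and variable names are illustrative assumptions, not the authors' code; the non-negative applicability weights (obtained here via softplus) stand in for the algebraic constraint the abstract mentions.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class NormalizedConv2d(nn.Module):
            """Confidence-propagating normalized convolution (sketch).

            x is the sparse input and c a confidence map of the same shape
            (0 = missing, 1 = observed). Weighting the data by c and
            renormalizing keeps missing pixels from contaminating the output.
            """
            def __init__(self, in_ch, out_ch, kernel_size=3, padding=1, eps=1e-8):
                super().__init__()
                self.weight = nn.Parameter(
                    0.1 * torch.randn(out_ch, in_ch, kernel_size, kernel_size))
                self.bias = nn.Parameter(torch.zeros(out_ch))
                self.padding = padding
                self.eps = eps

            def forward(self, x, c):
                w = F.softplus(self.weight)  # non-negative applicability weights
                num = F.conv2d(x * c, w, padding=self.padding)
                den = F.conv2d(c, w, padding=self.padding)
                out = num / (den + self.eps) + self.bias.view(1, -1, 1, 1)
                # Propagated confidence: fraction of observed support per output.
                c_out = den / (w.sum(dim=(1, 2, 3)).view(1, -1, 1, 1) + self.eps)
                return out, c_out

    For depth completion, x would be the sparse depth map and c its validity mask; stacking such layers carries a confidence map alongside the features through the network.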

    Propagating Confidences through CNNs for Sparse Data Regression

    In most computer vision applications, convolutional neural networks (CNNs) operate on dense image data generated by ordinary cameras. Designing CNNs for sparse and irregularly spaced input data is still an open problem with numerous applications in autonomous driving, robotics, and surveillance. To tackle this challenging problem, we introduce an algebraically-constrained convolution layer for CNNs with sparse input and demonstrate its capabilities for the scene depth completion task. We propose novel strategies for determining the confidence from the convolution operation and propagating it to consecutive layers. Furthermore, we propose an objective function that simultaneously minimizes the data error while maximizing the output confidence. Comprehensive experiments are performed on the KITTI depth benchmark and the results clearly demonstrate that the proposed approach achieves superior performance while requiring three times fewer parameters than the state-of-the-art methods. Moreover, our approach produces a continuous pixel-wise confidence map enabling information fusion, state inference, and decision support.
    Comment: To appear in the British Machine Vision Conference (BMVC 2018)
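
    The objective sketched in this abstract (minimize data error while maximizing output confidence) can be written roughly as follows; the functional form and the trade-off weight lam are illustrative assumptions, not the paper's formula.

        import torch

        def confidence_aware_loss(pred, conf_out, target, valid_mask, lam=0.1):
            """Illustrative loss: masked data error minus a reward for high
            output confidence (lam is a hypothetical trade-off weight)."""
            n_valid = valid_mask.sum().clamp(min=1)
            data_err = (valid_mask * (pred - target).abs()).sum() / n_valid
            return data_err - lam * conf_out.mean()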

    On powers of Stieltjes moment sequences, II

    We consider the set of Stieltjes moment sequences for which every positive power is again a Stieltjes moment sequence, and we prove an integral representation of the logarithm of the moment sequence in analogy to the Lévy-Khinchin representation. We use the result to construct product convolution semigroups with moments of all orders and to calculate their Mellin transforms. As an application we construct a positive generating function for the orthonormal Hermite polynomials.
    Comment: preprint, 21 pages
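
    For background (the standard definition, not taken from the paper): a sequence $(s_n)_{n \ge 0}$ is a Stieltjes moment sequence when

        \[
          s_n = \int_0^\infty x^n \, d\mu(x), \qquad n = 0, 1, 2, \ldots,
        \]

    for some positive measure $\mu$ on $[0,\infty)$. The class studied here consists of those $(s_n)$ for which $(s_n^c)_{n \ge 0}$ is again a Stieltjes moment sequence for every $c > 0$.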

    Shannon Multiresolution Analysis on the Heisenberg Group

    We present a notion of frame multiresolution analysis on the Heisenberg group, abbreviated FMRA, and study its properties. Using the irreducible representations of this group, we shall define a sinc-type function which is our starting point for obtaining the scaling function. Further, we shall give a concrete example of a wavelet FMRA on the Heisenberg group which is analogous to the Shannon MRA on $\mathbb{R}$.
    Comment: 17 pages
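
    For comparison, the classical Shannon MRA on $\mathbb{R}$ that the construction parallels (standard material, included as background):

        \[
          \varphi(x) = \operatorname{sinc}(x) = \frac{\sin \pi x}{\pi x},
          \qquad
          \widehat{\varphi} = \chi_{[-1/2,\,1/2]},
        \]
        \[
          V_j = \{ f \in L^2(\mathbb{R}) : \operatorname{supp} \widehat{f} \subseteq [-2^{j-1},\, 2^{j-1}] \},
        \]

    where $\{\varphi(\cdot - k)\}_{k \in \mathbb{Z}}$ is an orthonormal basis of $V_0$ and $V_j \subset V_{j+1}$.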

    A multivariate version of the disk convolution

    We present an explicit product formula for the spherical functions of the compact Gelfand pairs $(G, K_1) = (SU(p+q), SU(p) \times SU(q))$ with $p \ge 2q$, which can be considered as the elementary spherical functions of one-dimensional $K$-type for the Hermitian symmetric spaces $G/K$ with $K = S(U(p) \times U(q))$. Due to results of Heckman, they can be expressed in terms of Heckman-Opdam Jacobi polynomials of type $BC_q$ with specific half-integer multiplicities. By analytic continuation with respect to the multiplicity parameters we obtain positive product formulas for the extensions of these spherical functions as well as associated compact and commutative hypergroup structures parametrized by real $p \in\, ]2q-1, \infty[$. We also obtain explicit product formulas for the involved continuous two-parameter family of Heckman-Opdam Jacobi polynomials with regular, but not necessarily positive, multiplicities. The results of this paper extend well-known results for the disk convolutions for $q = 1$ to higher rank.
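
    For orientation, the kind of product formula at issue is the classical functional equation for spherical functions of a Gelfand pair $(G, K)$ (standard background, not the paper's explicit formula):

        \[
          \varphi(g)\,\varphi(h) = \int_K \varphi(g\,k\,h)\, dk,
          \qquad g, h \in G,
        \]

    where $dk$ is normalized Haar measure on $K$. An explicit product formula rewrites the right-hand side as an integral of $\varphi$ against an explicit probability measure, which is what yields the associated hypergroup structure.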