1,610 research outputs found

    Adaptive lattice bilinear filters

    Get PDF
    Journal Article: This paper presents two fast least squares lattice algorithms for adaptive nonlinear filters equipped with bilinear system models. Bilinear models are attractive for adaptive filtering applications because they can approximate a large class of nonlinear systems adequately, and usually with considerable parsimony in the number of coefficients required. The lattice filter formulation transforms the nonlinear filtering problem into an equivalent multichannel linear filtering problem and then uses multichannel lattice filtering algorithms to solve it. The lattice filters perform a Gram-Schmidt orthogonalization of the input data and have very good numerical properties. Furthermore, the computational complexity of the algorithms is an order of magnitude smaller than that of previously available methods. The first of the two approaches is an equation error algorithm that uses the measured desired response signal directly to compute the adaptive filter outputs. This method is conceptually very simple; however, it will result in biased system models in the presence of measurement noise. The second approach is an approximate least squares output error solution. In this case, past samples of the output of the adaptive system itself are used to produce the filter output at the current time. Results of several experiments that demonstrate and compare the properties of the adaptive bilinear filters are also presented in this paper. These results indicate that the output error algorithm is less sensitive to output measurement noise than the equation error method.
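    Editorial aside: a minimal NumPy sketch of the bilinear input-output relation that this and the following entry build on. The array names, lengths, and coefficient layout are illustrative and are not taken from the paper, which works with a lattice formulation rather than this direct form.

```python
import numpy as np

def bilinear_output(x_recent, y_past, a, b, c):
    """One output sample of a bilinear model (illustrative coefficient layout).

    x_recent : current and past inputs  [x(n), x(n-1), ..., x(n-N+1)]
    y_past   : past outputs             [y(n-1), ..., y(n-M)]
    a (N,), b (M,), c (N, M) : linear input, linear feedback and
                               input-output cross-term coefficients.

    y(n) = sum_i a[i] x(n-i) + sum_j b[j] y(n-j)
         + sum_{i,j} c[i, j] x(n-i) y(n-j)
    """
    return a @ x_recent + b @ y_past + x_recent @ c @ y_past
```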

    Adaptive algorithms for identifying recursive nonlinear systems

    Get PDF
    Journal Article: This paper presents two fast least-squares lattice algorithms for adaptive non-linear filters equipped with system models involving nonlinear feedback. Such models can approximate a large class of non-linear systems adequately, and usually with considerable parsimony in the number of coefficients required. For simplicity of presentation, we consider the bilinear system model in the paper, even though the results are applicable to more general system models. The computational complexity of the algorithms is an order of magnitude smaller than that of previously available methods. Results of several experiments that demonstrate the properties of the adaptive bilinear filters and compare their performance with two other, computationally more expensive algorithms are also presented in this paper.
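    Editorial aside: the equation error / output error distinction drawn in the preceding abstract comes down to which past samples fill the feedback and cross-term slots of the regressor. A hedged sketch for the bilinear case; the names and the flattening order are illustrative only.

```python
import numpy as np

def bilinear_regressors(x_recent, d_past, yhat_past):
    """Regressor vectors for the two adaptation strategies.

    Equation error: past samples of the measured desired response d(.)
    fill the feedback slots (conceptually simple, but biased when d(.)
    contains measurement noise).
    Output error: past outputs yhat(.) of the adaptive filter itself are
    used instead.
    """
    def build(feedback):
        cross = np.outer(x_recent, feedback).ravel()   # x(n-i) * feedback(n-j) terms
        return np.concatenate([x_recent, feedback, cross])

    return build(d_past), build(yhat_past)
```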

    Learning Sparse High Dimensional Filters: Image Filtering, Dense CRFs and Bilateral Neural Networks

    Full text link
    Bilateral filters have widespread use due to their edge-preserving properties. The common use case is to manually choose a parametric filter type, usually a Gaussian filter. In this paper, we will generalize the parametrization and in particular derive a gradient descent algorithm so the filter parameters can be learned from data. This derivation allows us to learn high dimensional linear filters that operate in sparsely populated feature spaces. We build on the permutohedral lattice construction for efficient filtering. The ability to learn more general forms of high-dimensional filters can be used in several diverse applications. First, we demonstrate the use in applications where single filter applications are desired for runtime reasons. Further, we show how this algorithm can be used to learn the pairwise potentials in densely connected conditional random fields and apply these to different image segmentation tasks. Finally, we introduce layers of bilateral filters in CNNs and propose bilateral neural networks for the use of high-dimensional sparse data. This view provides new ways to encode model structure into network architectures. A diverse set of experiments empirically validates the use of general forms of filters.
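    Editorial aside: for context, the brute-force computation that high-dimensional (bilateral) filtering performs, and that the permutohedral lattice approximates in roughly linear time; the paper then replaces the fixed Gaussian with a filter learned by gradient descent. The names below are illustrative and the cost is quadratic in the number of points.

```python
import numpy as np

def gaussian_feature_filter(features, values, sigma=1.0):
    """Brute-force Gaussian filtering in an arbitrary feature space.

    features : (P, D) feature vector per point (e.g. pixel position + colour)
    values   : (P, C) signal to be filtered
    Each output is a Gaussian-weighted average of all inputs, with weights
    determined by feature-space distance.
    """
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    w = np.exp(-0.5 * d2 / sigma ** 2)
    w /= w.sum(axis=1, keepdims=True)   # normalize each row of weights
    return w @ values
```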

    Superpixel Convolutional Networks using Bilateral Inceptions

    Full text link
    In this paper we propose a CNN architecture for semantic image segmentation. We introduce a new 'bilateral inception' module that can be inserted in existing CNN architectures and performs bilateral filtering, at multiple feature scales, between superpixels in an image. The feature spaces for bilateral filtering and other parameters of the module are learned end-to-end using standard backpropagation techniques. The bilateral inception module addresses two issues that arise with general CNN segmentation architectures. First, this module propagates information between (super) pixels while respecting image edges, thus using the structured information of the problem for improved results. Second, the layer recovers a full resolution segmentation result from the lower resolution solution of a CNN. In the experiments, we modify several existing CNN architectures by inserting our inception module between the last CNN (1x1 convolution) layers. Empirical results on three different datasets show reliable improvements not only in comparison to the baseline networks, but also in comparison to several dense-pixel prediction techniques such as CRFs, while being competitive in runtime.
    Comment: European Conference on Computer Vision (ECCV), 201
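    Editorial aside: a single-scale, fixed-bandwidth sketch of the two roles described above, namely edge-respecting propagation between superpixels and recovery of a full-resolution result. The actual module learns the feature transformation and mixes several scales; the names and shapes here are illustrative only.

```python
import numpy as np

def superpixel_bilateral_step(sp_feats, sp_logits, sp_index, sigma=1.0):
    """Propagate scores between superpixels, then scatter back to pixels.

    sp_feats  : (S, D) one feature vector per superpixel (the bilateral space)
    sp_logits : (S, C) per-superpixel class scores pooled from the CNN
    sp_index  : (H, W) superpixel id of every pixel in the full-resolution image
    """
    d2 = ((sp_feats[:, None, :] - sp_feats[None, :, :]) ** 2).sum(-1)
    w = np.exp(-0.5 * d2 / sigma ** 2)
    refined = (w / w.sum(axis=1, keepdims=True)) @ sp_logits
    return refined[sp_index]   # (H, W, C) full-resolution scores
```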

    Adaptive polynomial filters

    Get PDF
    Journal Article: While linear filters are useful in a large number of applications and relatively simple from conceptual and implementational viewpoints, there are many practical situations that require nonlinear processing of the signals involved. This article explains adaptive nonlinear filters equipped with polynomial models of nonlinearity. The polynomial systems considered are those nonlinear systems whose output signals can be related to the input signals through a truncated Volterra series expansion or a recursive nonlinear difference equation. The Volterra series expansion can model a large class of nonlinear systems and is attractive in filtering applications because the expansion is a linear combination of nonlinear functions of the input signal. The basic ideas behind the development of gradient and recursive least-squares adaptive Volterra filters are first discussed, followed by adaptive algorithms using system models involving recursive nonlinear difference equations. Such systems are attractive because they may be able to approximate many nonlinear systems with great parsimony in the use of coefficients. Also discussed are current research trends, new results, and problem areas associated with these nonlinear filters. A lattice structure for polynomial models is also described.
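    Editorial aside: as a concrete reference point for the gradient adaptive Volterra filters mentioned above, a minimal sketch of a second-order truncated Volterra filter and an LMS-type update. The kernel shapes and step size are illustrative; the recursive (nonlinear difference equation) and lattice variants discussed in the article are not shown.

```python
import numpy as np

def volterra2_output(x_recent, h1, h2):
    """Output of a second-order truncated Volterra filter.

    x_recent : [x(n), x(n-1), ..., x(n-N+1)]
    h1 (N,)  : linear kernel,  h2 (N, N) : quadratic kernel
    y(n) = sum_i h1[i] x(n-i) + sum_{i,j} h2[i, j] x(n-i) x(n-j)
    """
    return h1 @ x_recent + x_recent @ h2 @ x_recent

def lms_step(x_recent, d, h1, h2, mu=1e-3):
    """One gradient (LMS-type) update of the kernels toward the desired
    response d; h1 and h2 must be float arrays for the in-place update."""
    e = d - volterra2_output(x_recent, h1, h2)
    h1 += mu * e * x_recent                     # output is linear in h1 ...
    h2 += mu * e * np.outer(x_recent, x_recent)  # ... and in h2
    return e
```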

    Volterra and general polynomial related filtering

    Get PDF
    Journal Article: This paper presents a review of polynomial filtering and, in particular, of the truncated Volterra filters. Following the introduction of the general properties of such filters, issues such as efficient realizations, design, adaptive algorithms and stability are discussed.

    The discrete-time bounded-real lemma in digital filtering

    Get PDF
    The Lossless Bounded-Real lemma is developed in the discrete-time domain, based only on energy balance arguments. The results are used to prove a discrete-time version of the general Bounded-Real lemma, based on a matrix spectral-factorization result that permits a transfer matrix embedding process. Some applications of the results in digital filter theory are finally outlined.
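    Editorial aside: for orientation, a standard statement of the bounded-real property for a stable, real-rational, single-input single-output transfer function H(z). The notation is assumed here rather than taken from the paper, whose contribution is the state-space characterization of this property from energy balance arguments.

```latex
\text{Bounded real: } \sum_{n}\lvert y(n)\rvert^{2} \le \sum_{n}\lvert x(n)\rvert^{2}
\ \text{for every finite-energy input}
\;\Longleftrightarrow\; \lvert H(e^{j\omega})\rvert \le 1 \ \ \forall\,\omega;
\qquad
\text{lossless bounded real: } \lvert H(e^{j\omega})\rvert = 1 \ \ \forall\,\omega.
```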

    Learning Task-Specific Generalized Convolutions in the Permutohedral Lattice

    Full text link
    Dense prediction tasks typically employ encoder-decoder architectures, but the prevalent convolutions in the decoder are not image-adaptive and can lead to boundary artifacts. Different generalized convolution operations have been introduced to counteract this. We go beyond these by leveraging guidance data to redefine their inherent notion of proximity. Our proposed network layer builds on the permutohedral lattice, which performs sparse convolutions in a high-dimensional space allowing for powerful non-local operations despite small filters. Multiple features with different characteristics span this permutohedral space. In contrast to prior work, we learn these features in a task-specific manner by generalizing the basic permutohedral operations to learnt feature representations. As the resulting objective is complex, a carefully designed framework and learning procedure are introduced, yielding rich feature embeddings in practice. We demonstrate the general applicability of our approach in different joint upsampling tasks. When adding our network layer to state-of-the-art networks for optical flow and semantic segmentation, boundary artifacts are removed and the accuracy is improved.
    Comment: To appear at GCPR 201
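    Editorial aside: a simplified stand-in for the data flow that the layer generalizes: points are splatted onto a sparse grid, blurred across neighbouring occupied cells, and sliced back out. The real permutohedral lattice uses a simplex lattice with barycentric splatting, a [1 2 1] blur along lattice directions, and learned feature spaces; the regular grid, names, and neighbourhood rule below are illustrative only.

```python
import numpy as np
from collections import defaultdict

def grid_splat_blur_slice(features, values, cell_size=1.0):
    """Splat/blur/slice filtering on a regular grid (illustrative only).

    features : (P, D) position of each point in the filtering feature space
    values   : (P, C) signal to be filtered
    """
    # Splat: scatter each point's value (plus a homogeneous weight) onto
    # its enclosing grid cell.
    cells = defaultdict(lambda: np.zeros(values.shape[1] + 1))
    keys = np.floor(features / cell_size).astype(int)
    for key, v in zip(map(tuple, keys), values):
        cells[key][:-1] += v
        cells[key][-1] += 1.0

    # Blur: average each occupied cell with its occupied axis neighbours.
    blurred = {}
    for key, acc in cells.items():
        total, count = acc.copy(), 1
        for axis in range(len(key)):
            for step in (-1, 1):
                nacc = cells.get(key[:axis] + (key[axis] + step,) + key[axis + 1:])
                if nacc is not None:
                    total += nacc
                    count += 1
        blurred[key] = total / count

    # Slice: read the filtered, weight-normalized value back at each point.
    return np.stack([blurred[k][:-1] / max(blurred[k][-1], 1e-12)
                     for k in map(tuple, keys)])
```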