    On optimal and near-optimal turbo decoding using generalized max operator

    Motivated by a recently published robust geometric programming approximation, a generalized approach for efficiently approximating the max* operator is presented. Using this approach, the max* operator is approximated by means of a generic yet very simple max operator, instead of the additional correction term that previous approximation methods require. Several turbo decoding algorithms are then obtained with optimal and near-optimal bit error rate (BER) performance depending on a single parameter, namely the number of piecewise linear (PWL) approximation terms. It turns out that the well-known max-log-MAP algorithm can be viewed as a special case of this generalized approach. Furthermore, the decoding complexity of the most popular previously published methods is estimated, for the first time, in a unified way through hardware synthesis results, showing the practical implementation advantages of the proposed approximations over these methods.
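    The max* (Jacobian logarithm) operator satisfies max*(a, b) = log(e^a + e^b) = max(a, b) + log(1 + e^-|a-b|), where the correction term is convex in |a - b| and can therefore be approximated from below by the max of a few tangent lines, leaving only max operations. A minimal sketch of that idea follows; the tangent-line construction and the knot placement are illustrative assumptions, not the paper's optimized PWL coefficients.

```python
import math

def max_star_exact(a, b):
    """Exact max* (Jacobian logarithm): log(e^a + e^b)."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_star_pwl(a, b, knots=(0.0, 1.0, 2.5)):
    """Sketch: approximate the convex correction g(z) = log(1 + e^-z)
    by the max of its tangent lines at the given knots, so max* is
    evaluated with max operations only -- no added correction term.
    With knots=() this degenerates to plain max, i.e. max-log-MAP."""
    z, m = abs(a - b), max(a, b)
    if not knots:
        return m
    pieces = []
    for zk in knots:
        g = math.log1p(math.exp(-zk))        # g(z_k)
        s = -1.0 / (1.0 + math.exp(zk))      # g'(z_k)
        pieces.append(m + g + s * (z - zk))  # tangent line at z_k
    return max(pieces)

print(max_star_exact(1.2, 0.9))   # ~1.754
print(max_star_pwl(1.2, 0.9))     # ~1.743; more knots -> tighter
```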

    Stein-MAP: A Sequential Variational Inference Framework for Maximum A Posteriori Estimation

    State estimation poses substantial challenges in robotics, often involving encounters with multimodality in real-world scenarios. To address these challenges, it is essential to calculate maximum a posteriori (MAP) sequences from joint probability distributions of latent states and observations over time. However, this generally involves a trade-off between approximation error and computational complexity. In this article, we propose a new method for MAP sequence estimation called Stein-MAP, which effectively manages multimodality with fewer approximation errors while significantly reducing computational and memory burdens. Our key contribution lies in the introduction of a sequential variational inference framework designed to handle temporal dependencies among transition states within dynamical system models. The framework integrates Stein's identity from probability theory with reproducing kernel Hilbert space (RKHS) theory, enabling computationally efficient MAP sequence estimation. As a MAP sequence estimator, Stein-MAP has a computational complexity of O(N), where N is the number of particles, in contrast to the O(N^2) complexity of the Viterbi algorithm. The proposed method is empirically validated through real-world experiments on range-only (wireless) localization. The results demonstrate a substantial enhancement in state estimation compared to existing methods. A remarkable feature of Stein-MAP is that it attains improved state estimation with only 40 to 50 particles, as opposed to the 1,000 particles that a particle filter or its variants require.
    Comment: 13 pages
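    A flavor of the Stein's-identity/RKHS machinery the abstract invokes can be given with a plain Stein variational gradient descent (SVGD) update on a bimodal target. This is only a sketch of that ingredient under invented settings (double-well density, RBF kernel bandwidth, step size); it is not the paper's Stein-MAP sequence estimator and does not reproduce its O(N) complexity.

```python
import numpy as np

def svgd_step(x, grad_logp, h=0.5, eps=0.1):
    """One SVGD update with an RBF kernel: particles follow the
    kernel-smoothed gradient plus a repulsion term (from Stein's
    identity) that keeps them spread over all modes."""
    diff = x[:, None] - x[None, :]            # diff[j, i] = x_j - x_i
    k = np.exp(-diff**2 / (2 * h**2))         # kernel matrix k(x_j, x_i)
    grad_k = -diff / h**2 * k                 # d k(x_j, x_i) / d x_j
    phi = (k * grad_logp(x)[:, None] + grad_k).mean(axis=0)
    return x + eps * phi

# Hypothetical bimodal (double-well) target with modes near +/-2:
# log p(x) = -(x^2 - 4)^2 / 8  =>  grad log p(x) = -x (x^2 - 4) / 2
grad_logp = lambda x: -0.5 * x * (x**2 - 4.0)

x = np.random.default_rng(0).normal(size=50)  # 50 particles, cf. the 40-50 above
for _ in range(300):
    x = svgd_step(x, grad_logp)
print(np.sort(x)[[0, 24, -1]])  # particles should cover both modes
```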

    Approximation of high-dimensional parametric PDEs

    Parametrized families of PDEs arise in various contexts such as inverse problems, control and optimization, risk assessment, and uncertainty quantification. In most of these applications, the number of parameters is large or perhaps even infinite. Thus, the development of numerical methods for these parametric problems is faced with the possible curse of dimensionality. This article is directed at (i) identifying and understanding which properties of parametric equations allow one to avoid this curse and (ii) developing and analyzing effective numerical methods which fully exploit these properties and, in turn, are immune to the growth in dimensionality. The first part of this article studies the smoothness and approximability of the solution map, that is, the map a ↦ u(a), where a is the parameter value and u(a) is the corresponding solution to the PDE. It is shown that for many relevant parametric PDEs this map is typically holomorphic in the parameters and also highly anisotropic, in that the relevant parameters are of widely varying importance in describing the solution. These two properties are then exploited to establish convergence rates of n-term approximations to the solution map, for which each term is separable in the parametric and physical variables. These results reveal that, at least on a theoretical level, the solution map can be well approximated by discretizations of moderate complexity, thereby showing how the curse of dimensionality is broken. This theoretical analysis is carried out through concepts of approximation theory such as best n-term approximation, sparsity, and n-widths. These notions determine a priori the best possible performance of numerical methods and thus serve as a benchmark for concrete algorithms. The second part of this article turns to the development of numerical algorithms based on the theoretically established sparse separable approximations. The numerical methods studied fall into two general categories. The first uses polynomial expansions in terms of the parameters to approximate the solution map. The second searches for suitable low-dimensional spaces for simultaneously approximating all members of the parametric family. The numerical implementation of these approaches is carried out through adaptive and greedy algorithms. An a priori analysis of the performance of these algorithms establishes how well they meet the theoretical benchmarks.
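    The best n-term separable approximation idea can be made concrete with a toy two-parameter "solution map" in place of an actual PDE solve; the function u, the anisotropy weight 0.2, and the truncation degrees below are invented for the sketch.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

# Hypothetical anisotropic "solution map" on [-1,1]^2 (not a real PDE
# solve): the first parameter matters far more than the second.
u = lambda y1, y2: 1.0 / (3.0 + y1 + 0.2 * y2)

deg = 8
nodes, w = leggauss(24)
P = np.array([legval(nodes, np.eye(deg)[i]) for i in range(deg)])  # P[i, q] = L_i(node_q)
U = u(nodes[:, None], nodes[None, :])                              # samples on the tensor grid

# Tensorized Legendre coefficients c[j, k] by Gauss quadrature, with
# the (2j+1)/2 normalization of Legendre polynomials on [-1, 1].
c = np.einsum('q,r,qr,jq,kr->jk', w, w, U, P, P)
c *= np.outer(2 * np.arange(deg) + 1, 2 * np.arange(deg) + 1) / 4.0

# A best n-term approximation keeps the n largest coefficients; their
# fast, anisotropic decay is what breaks the curse of dimensionality.
mags = np.sort(np.abs(c).ravel())[::-1]
for n in (1, 4, 16):
    print(n, mags[n:].sum())   # l1 tail: crude bound on the n-term error
```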

    Embedding based on function approximation for large scale image search

    The objective of this paper is to design an embedding method that maps local features describing an image (e.g., SIFT) to a higher-dimensional representation useful for the image retrieval problem. First, motivated by the relationship between the linear approximation of a nonlinear function in high-dimensional space and the state-of-the-art feature representation used in image retrieval, i.e., VLAD, we propose a new approach for the approximation. The embedded vectors resulting from the function approximation process are then aggregated to form a single representation for image retrieval. Second, in order to make the proposed embedding method applicable to large-scale problems, we further derive a fast version in which the embedded vectors can be computed efficiently, i.e., in closed form. We compare the proposed embedding methods with the state of the art in the context of image search under various settings: when the images are represented by medium-length vectors, short vectors, or binary vectors. The experimental results show that the proposed embedding methods outperform the state of the art on the standard public image retrieval benchmarks.
    Comment: Accepted to TPAMI 2017. The implementation and precomputed features of the proposed F-FAemb are released at the following link: http://tinyurl.com/F-FAem
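    For context, here is a minimal sketch of the VLAD baseline that the proposed embedding generalizes, with random stand-ins for the SIFT descriptors and the codebook; the power-law and L2 normalizations shown are the common choices, not necessarily the paper's exact pipeline.

```python
import numpy as np

def vlad(descriptors, codebook):
    """Minimal VLAD: assign each local descriptor to its nearest
    codeword and accumulate the residuals per codeword."""
    k, d = codebook.shape
    dist2 = ((descriptors[:, None, :] - codebook[None]) ** 2).sum(-1)
    assign = dist2.argmin(axis=1)                # nearest codeword
    v = np.zeros((k, d))
    for x, a in zip(descriptors, assign):
        v[a] += x - codebook[a]                  # residual aggregation
    v = np.sign(v) * np.sqrt(np.abs(v))          # power-law normalization
    return v.ravel() / (np.linalg.norm(v) + 1e-12)  # L2-normalized

# 100 SIFT-like 128-D descriptors, 16 codewords -> one 2048-D vector
rng = np.random.default_rng(0)
desc, cb = rng.normal(size=(100, 128)), rng.normal(size=(16, 128))
print(vlad(desc, cb).shape)                      # (2048,)
```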

    Efficient SDP Inference for Fully-connected CRFs Based on Low-rank Decomposition

    Conditional Random Fields (CRFs) have been widely used in a variety of computer vision tasks. Conventional CRFs typically define edges between neighboring image pixels, resulting in a sparse graph on which efficient inference can be performed. However, these CRFs fail to model long-range contextual relationships. Fully-connected CRFs have thus been proposed. While efficient approximate inference methods exist for such CRFs, they are usually sensitive to initialization and make strong assumptions. In this work, we develop an efficient yet general algorithm for inference on fully-connected CRFs. The algorithm is based on a scalable SDP algorithm and a low-rank approximation of the similarity/kernel matrix. The core of the proposed algorithm is a tailored quasi-Newton method that takes advantage of the low-rank matrix approximation when solving the specialized SDP dual problem. Experiments demonstrate that our method can be applied to fully-connected CRFs that could not be solved previously, such as pixel-level image co-segmentation.
    Comment: 15 pages. A conference version of this work appears in Proc. IEEE Conference on Computer Vision and Pattern Recognition, 201
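    One named ingredient, the low-rank approximation of the similarity/kernel matrix, can be sketched with a standard Nyström factorization; the Gaussian kernel over per-pixel features, the landmark count, and gamma are illustrative assumptions here, and this is not the paper's tailored quasi-Newton SDP solver.

```python
import numpy as np

def nystrom_factor(X, r=32, gamma=0.5, seed=0):
    """Nystrom low-rank factor L with K ~= L @ L.T for the dense
    Gaussian similarity matrix K_ij = exp(-gamma * ||x_i - x_j||^2),
    so products K @ v cost O(n * r) instead of O(n^2)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=r, replace=False)     # landmark points
    d2 = ((X[:, None, :] - X[idx][None, :, :]) ** 2).sum(-1)
    C = np.exp(-gamma * d2)                   # n x r cross-similarities
    W = C[idx]                                # r x r landmark block
    vals, vecs = np.linalg.eigh(W + 1e-8 * np.eye(r))   # regularized
    return C @ (vecs * vals ** -0.5) @ vecs.T           # C @ W^(-1/2)

X = np.random.default_rng(1).random((500, 5))  # e.g. (x, y, L, a, b) per pixel
L = nystrom_factor(X)
v = np.random.default_rng(2).random(500)
print((L @ (L.T @ v)).shape)                  # O(n*r) surrogate for K @ v
```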