3,175 research outputs found

    Approximation learning methods of Harmonic Mappings in relation to Hardy Spaces

    A new Hardy space approach to the Dirichlet-type problem, based on Tikhonov regularization and reproducing kernel Hilbert spaces, is discussed in this paper; the problem turns out to be a typical extremal problem posed on the upper half of the complex plane. When the problem is considered in the Hardy space, its optimization operator is greatly simplified and an efficient algorithm becomes possible. This is realized mainly through the reproducing property of functions in the Hardy space of the upper half-plane, and a detailed algorithm is proposed. Moreover, harmonic mappings, which are significant geometric transformations, are commonly used in applications such as image processing, since they describe energy-minimizing mappings between manifolds. In particular, for planar mappings between two Euclidean planar regions, the harmonic mapping exists and is unique, which is guaranteed by the existence of the corresponding harmonic function. This property is attractive, and simulation results are presented to demonstrate applications such as planar shape distortion and surface registration. Comment: 2016 3rd International Conference on Informative and Cybernetics for Computational Social Systems (ICCSS)
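
    The reproducing-property machinery the abstract relies on can be made concrete in a few lines. Below is a minimal sketch of Tikhonov-regularized fitting with the Szegő kernel of the Hardy space of the upper half-plane, k(z, w) = i / (2π(z − w̄)); the nodes, data, and regularization weight are hypothetical, and the snippet illustrates the general technique, not the paper's specific algorithm.

        import numpy as np

        # Szego reproducing kernel of the Hardy space H^2 of the upper
        # half-plane: k(z, w) = i / (2*pi*(z - conj(w))).
        def szego_kernel(z, w):
            return 1j / (2 * np.pi * (z - np.conj(w)))

        # Hypothetical interpolation nodes in the upper half-plane and
        # made-up boundary-type data (not taken from the paper).
        rng = np.random.default_rng(0)
        nodes = rng.uniform(-2, 2, 8) + 1j * rng.uniform(0.5, 2, 8)
        values = np.sin(nodes.real) / (1 + nodes.imag)

        # Gram matrix K[i, j] = k(z_i, z_j) is Hermitian positive definite,
        # so the Tikhonov-regularized system (K + lam*I) c = v is well posed.
        K = szego_kernel(nodes[:, None], nodes[None, :])
        lam = 1e-6
        coef = np.linalg.solve(K + lam * np.eye(len(nodes)), values)

        # The fitted function f(z) = sum_j c_j k(z, z_j) lies in H^2 and
        # approximately reproduces the prescribed data at the nodes.
        f = lambda z: szego_kernel(z, nodes) @ coef
        print("max error at nodes:", np.abs(K @ coef - values).max())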

    Approaching the Rate-Distortion Limit with Spatial Coupling, Belief Propagation and Decimation

    We investigate an encoding scheme for lossy compression of a binary symmetric source based on simple spatially coupled Low-Density Generator-Matrix codes. The degree of the check nodes is regular, and that of the code-bits is Poisson distributed with an average depending on the compression rate. The performance of a low-complexity Belief Propagation Guided Decimation algorithm is excellent. The algorithmic rate-distortion curve approaches the optimal curve of the ensemble as the width of the coupling window grows. Moreover, as the check degree grows, both curves approach the ultimate Shannon rate-distortion limit. The Belief Propagation Guided Decimation encoder is based on the posterior measure of a binary symmetric test-channel. This measure can be interpreted as a random Gibbs measure at a "temperature" directly related to the noise level of the test-channel. We investigate the links between the algorithmic performance of the Belief Propagation Guided Decimation encoder and the phase diagram of this Gibbs measure. The phase diagram is investigated by means of the cavity method of spin glass theory, which predicts a number of phase transition thresholds. In particular, the dynamical and condensation "phase transition temperatures" (equivalently, test-channel noise thresholds) are computed. We observe that: (i) the dynamical temperature of the spatially coupled construction saturates towards the condensation temperature; (ii) for large degrees the condensation temperature approaches the temperature (i.e. noise level) corresponding to the information-theoretic Shannon test-channel noise parameter of rate-distortion theory. This provides heuristic insight into the excellent performance of the Belief Propagation Guided Decimation algorithm. The paper contains an introduction to the cavity method. Comment: 32 pages
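
    For reference, the Shannon limit invoked above is the rate-distortion function of a binary symmetric source under Hamming distortion, and the test-channel noise enters the Gibbs measure as an inverse temperature. The display below records these standard relations in one common convention (the factor convention for β varies across the literature); it is background, not the paper's derivation:

        R(D) = 1 - h_2(D), \qquad 0 \le D \le \tfrac{1}{2}, \qquad
        h_2(p) = -p \log_2 p - (1 - p) \log_2 (1 - p),

        \mu_\beta(u) \propto \exp\bigl( -2\beta \, d_H(uG, y) \bigr), \qquad
        \beta = \tfrac{1}{2} \ln \tfrac{1 - p}{p},

    where G is the LDGM generator matrix, y the source word, d_H the Hamming distance, and p the test-channel flip probability; the Shannon-optimal test channel has p equal to the target distortion D.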

    Efficient sphere-covering and converse measure concentration via generalized coding theorems

    Suppose A is a finite set equipped with a probability measure P, and let M be a "mass" function on A. We give a probabilistic characterization of the most efficient way in which A^n can be almost-covered using spheres of a fixed radius. An almost-covering is a subset C_n of A^n such that the union of the spheres centered at the points of C_n has probability close to one with respect to the product measure P^n. An efficient covering is one with small mass M^n(C_n); n is typically large. With different choices for M and the geometry on A, our results give various corollaries as special cases, including Shannon's data compression theorem, a version of Stein's lemma (in hypothesis testing), and a new converse to some measure concentration inequalities on discrete spaces. Under mild conditions, we generalize our results to abstract spaces and non-product measures. Comment: 29 pages. See also http://www.stat.purdue.edu/~yiannis
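
    The almost-covering notion is easy to probe numerically. The sketch below draws a random codebook C_n in {0,1}^n and Monte Carlo estimates the P^n-probability of the union of Hamming spheres centered at its points; every parameter (block length, radius, codebook size, source bias) is invented for illustration:

        import numpy as np

        rng = np.random.default_rng(1)
        n, radius, codebook_size, num_samples = 20, 4, 200, 2000  # hypothetical

        # Random codebook C_n in {0,1}^n and i.i.d. samples from the product
        # measure P^n with P(1) = 0.3.
        codebook = rng.integers(0, 2, size=(codebook_size, n))
        points = (rng.random((num_samples, n)) < 0.3).astype(int)

        # A sample is covered if some codeword lies within Hamming distance
        # `radius`; the mean of the indicator estimates P^n(union of spheres).
        dists = (points[:, None, :] != codebook[None, :, :]).sum(axis=2)
        coverage = (dists.min(axis=1) <= radius).mean()
        print(f"estimated P^n(union of spheres) ~ {coverage:.3f}")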

    Bounds on inference

    Lower bounds for the average probability of error of estimating a hidden variable X given an observation of a correlated random variable Y, and Fano's inequality in particular, play a central role in information theory. In this paper, we present a lower bound for the average estimation error based on the marginal distribution of X and the principal inertias of the joint distribution matrix of X and Y. Furthermore, we discuss an information measure based on the sum of the largest principal inertias, called k-correlation, which generalizes maximal correlation. We show that k-correlation satisfies the Data Processing Inequality and is convex in the conditional distribution of Y given X. Finally, we investigate how to answer a fundamental question in inference and privacy: given an observation Y, can we estimate a function f(X) of the hidden random variable X with an average error below a certain threshold? We provide a general method for answering this question using an approach based on rate-distortion theory. Comment: Allerton 2013 with extended proof, 10 pages
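
    Computationally, the principal inertias are the squared non-trivial singular values of the normalized joint-distribution matrix D_X^{-1/2} P_{X,Y} D_Y^{-1/2} (the standard correspondence-analysis convention), and maximal correlation is the largest non-trivial singular value. A small sketch of that computation, with a made-up joint pmf:

        import numpy as np

        # Hypothetical joint pmf of (X, Y); rows index X, columns index Y.
        P = np.array([[0.18, 0.02, 0.10],
                      [0.05, 0.25, 0.05],
                      [0.07, 0.03, 0.25]])

        px, py = P.sum(axis=1), P.sum(axis=0)  # marginals of X and Y

        # Q = D_X^{-1/2} P D_Y^{-1/2} has singular values 1 = s_0 >= s_1 >= ...;
        # the principal inertias are s_1^2, s_2^2, ... (trivial s_0 dropped).
        Q = P / np.sqrt(np.outer(px, py))
        s = np.linalg.svd(Q, compute_uv=False)
        inertias = s[1:] ** 2

        k = 2
        print("maximal correlation:", s[1])            # Renyi maximal correlation
        print(f"{k}-correlation:", inertias[:k].sum())  # sum of k largest inertias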