
    Multiresolution vector quantization

    Multiresolution source codes are data compression algorithms yielding embedded source descriptions. The decoder of a multiresolution code can build a source reproduction by decoding the embedded bit stream in part or in whole. All decoding procedures start at the beginning of the binary source description and decode some fraction of that string. Decoding a small portion of the binary string gives a low-resolution reproduction; decoding more yields a higher-resolution reproduction; and so on. Multiresolution vector quantizers are block multiresolution source codes. This paper introduces algorithms for designing fixed- and variable-rate multiresolution vector quantizers. Experiments on synthetic data demonstrate performance close to the theoretical performance limit. Experiments on natural images demonstrate performance improvements of up to 8 dB over tree-structured vector quantizers. Some of the lessons learned through multiresolution vector quantizer design lend insight into the design of more sophisticated multiresolution codes.
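
    As a rough illustration of the embedded-description idea (a sketch only, not the fixed- or variable-rate design algorithms introduced in the paper), the following Python snippet builds a toy two-resolution vector quantizer with hypothetical hand-picked codebooks: decoding the stage-1 index alone yields a coarse reproduction, and decoding the stage-2 index refines it.

```python
import numpy as np

# Toy two-resolution vector quantizer (illustrative sketch; the
# codebooks are hand-picked, not designed by the paper's algorithms).
# Stage 1: a coarse codebook. Stage 2: a refinement codebook for the
# stage-1 residual. Decoding only the stage-1 index gives the
# low-resolution reproduction; decoding both indices refines it.

coarse = np.array([[-1.0, -1.0], [1.0, 1.0]])      # 1-bit first stage
refine = np.array([[-0.3, 0.0], [0.3, 0.0],
                   [0.0, -0.3], [0.0, 0.3]])       # 2-bit second stage

def encode(x):
    i = int(np.argmin(((coarse - x) ** 2).sum(axis=1)))  # nearest coarse cell
    r = x - coarse[i]                                    # stage-1 residual
    j = int(np.argmin(((refine - r) ** 2).sum(axis=1)))  # nearest refinement
    return i, j                                          # embedded description

def decode(i, j=None):
    xhat = coarse[i].copy()                # low-resolution reproduction
    if j is not None:
        xhat += refine[j]                  # higher-resolution reproduction
    return xhat

x = np.random.default_rng(0).normal(size=2)
i, j = encode(x)
print("low-res distortion :", float(((x - decode(i)) ** 2).sum()))
print("high-res distortion:", float(((x - decode(i, j)) ** 2).sum()))
```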

    Basic concepts and models in continuum damage mechanics

    In this paper, we present some basic elements of the macroscopic modelling of damage. We then recall the general approach to continuum damage based on the thermodynamics of irreversible processes and its application to isotropic damage modelling. The study of damage-induced anisotropy is treated by considering a second-order tensorial damage variable. Finally, we present an original macroscopic approach that addresses the question of unilateral effects due to microcrack closure.
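
    The isotropic starting point the abstract alludes to is standard and can be stated in one line; the relations below are textbook continuum damage mechanics (strain-equivalence form), not equations quoted from the paper.

```latex
% Standard isotropic damage relations (textbook background, not
% reproduced from the paper): a scalar damage variable D and the
% effective stress carried by the undamaged material skeleton.
\[
  D \in [0, 1], \qquad
  \tilde{\sigma} = \frac{\sigma}{1 - D}, \qquad
  \varepsilon = \frac{\tilde{\sigma}}{E} = \frac{\sigma}{E\,(1 - D)} .
\]
% D = 0 is the virgin material and D = 1 full rupture; anisotropic
% models replace D by a second-order tensor, as the paper discusses.
```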

    Improved bounds for the rate loss of multiresolution source codes

    We present new bounds for the rate loss of multiresolution source codes (MRSCs). For an $M$-resolution code, the rate loss at the $i$th resolution with distortion $D_i$ is defined as $L_i = R_i - R(D_i)$, where $R_i$ is the rate achievable by the MRSC at stage $i$ and $R(\cdot)$ is the rate-distortion function. This rate loss describes the performance degradation of the MRSC relative to the best single-resolution code with the same distortion. For two-resolution source codes, three scenarios are of particular interest: (i) both resolutions are equally important; (ii) the rate loss at the first resolution is zero ($L_1 = 0$); (iii) the rate loss at the second resolution is zero ($L_2 = 0$). The work of Lastras and Berger (IEEE Trans. Inform. Theory, vol. 47, pp. 918-926, Mar. 2001) gives constant upper bounds for the rate loss of an arbitrary memoryless source in scenarios (i) and (ii) and an asymptotic bound for scenario (iii) as $D_2$ approaches 0. We focus on the squared error distortion measure and (a) prove that for scenario (iii) $L_1 < 1.1610$ for all $D_2 < 0.7250$; (c) tighten the Lastras-Berger bound for scenario (i) from $L_i \le 1/2$ to $L_i < 0.3802$, $i \in \{1, 2\}$; and (d) generalize the bounds for scenarios (ii) and (iii) to $M$-resolution codes with $M \ge 2$. We also present upper bounds for the rate losses of additive MRSCs (AMRSCs). An AMRSC is a special MRSC in which each resolution describes an incremental reproduction and the $k$th-resolution reconstruction equals the sum of the first $k$ incremental reproductions. We obtain two bounds on the rate loss of AMRSCs: one primarily good for low-rate coding and another that depends on the source entropy.
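
    To make the definition concrete, the minimal Python sketch below evaluates $L_i = R_i - R(D_i)$ for a unit-variance memoryless Gaussian source under squared error, for which $R(D) = \tfrac{1}{2}\log_2(\sigma^2/D)$; the per-resolution operating points $(R_i, D_i)$ are hypothetical inputs, not results from the paper.

```python
import math

# Rate loss L_i = R_i - R(D_i) for a unit-variance memoryless Gaussian
# source under squared error, with R(D) = 0.5 * log2(sigma^2 / D) the
# classical rate-distortion function. The operating points below are
# hypothetical MRSC rates, not measurements from the paper.

def gaussian_rd(D, var=1.0):
    """Rate-distortion function of a Gaussian source, squared error."""
    return 0.5 * math.log2(var / D) if D < var else 0.0

stages = [(1.2, 0.25), (2.3, 0.06)]          # hypothetical (R_i, D_i) pairs
for i, (R, D) in enumerate(stages, start=1):
    L = R - gaussian_rd(D)
    print(f"resolution {i}: R_i = {R}, R(D_i) = {gaussian_rd(D):.4f}, "
          f"rate loss L_i = {L:.4f}")
```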

    On-line PCA with Optimal Regrets

    We carefully investigate the on-line version of PCA, where in each trial a learning algorithm plays a k-dimensional subspace and suffers the compression loss on the next instance when it is projected into the chosen subspace. In this setting, we analyze two popular on-line algorithms, Gradient Descent (GD) and Exponentiated Gradient (EG). We show that both algorithms are essentially optimal in the worst case. This comes as a surprise, since EG is known to perform sub-optimally when the instances are sparse. This different behavior of EG for PCA is mainly related to the non-negativity of the loss in this case, which makes the PCA setting qualitatively different from other settings studied in the literature. Furthermore, we show that when considering regret bounds as a function of a loss budget, EG remains optimal and strictly outperforms GD. Next, we study an extension of the PCA setting in which Nature is allowed to play dense instances, namely positive matrices with bounded largest eigenvalue. Again we show that EG is optimal and strictly better than GD in this setting.
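
    A minimal sketch of the on-line PCA protocol follows, as an illustration of the setting only: it plays the subspace as an orthonormal basis, suffers the compression loss, and takes a plain gradient-style step followed by re-orthonormalization, whereas the GD and EG algorithms analyzed in the paper operate on a convex (density-matrix) parameterization of the subspace. All parameters below are made up for the demo.

```python
import numpy as np

# Minimal on-line PCA protocol (illustration of the setting only; the
# paper's GD and EG algorithms act on a convex, density-matrix
# parameterization of the subspace, which this sketch does not use).
# Each trial: play a k-dimensional subspace, observe an instance x,
# suffer the compression loss ||x - P x||^2, then update.

rng = np.random.default_rng(1)
d, k, eta = 10, 3, 0.1
U, _ = np.linalg.qr(rng.normal(size=(d, k)))   # orthonormal basis of subspace

total_loss = 0.0
for t in range(500):
    x = rng.normal(size=d)
    x[:k] *= 3.0                                  # planted high-variance directions
    proj = U @ (U.T @ x)                          # projection onto played subspace
    total_loss += float(np.sum((x - proj) ** 2))  # compression loss this trial
    U += eta * np.outer(x - proj, U.T @ x)        # gradient step along the manifold
    U, _ = np.linalg.qr(U)                        # re-orthonormalize the basis

print("average compression loss:", total_loss / 500)
```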

    Simple, compact and robust approximate string dictionary

    This paper is concerned with practical implementations of approximate string dictionaries that allow edit errors. In this problem, we have as input a dictionary $D$ of $d$ strings of total length $n$ over an alphabet of size $\sigma$. Given a bound $k$ and a pattern $x$ of length $m$, a query has to return all the strings of the dictionary that are at edit distance at most $k$ from $x$, where the edit distance between two strings $x$ and $y$ is defined as the minimum cost of a sequence of edit operations that transforms $x$ into $y$. The cost of a sequence of operations is defined as the sum of the costs of the operations involved in the sequence. In this paper, we assume that each of these operations has unit cost and consider only three operations: deletion of one character, insertion of one character, and substitution of a character by another. We present a practical implementation of the data structure we recently proposed, which works only for one error, and extend the scheme to $2 \le k < m$. Our implementation has many desirable properties: it has a very fast and space-efficient building algorithm, the dictionary data structure is compact, and it has fast and robust query time. Finally, our data structure is simple to implement as it only uses basic techniques from the literature, mainly hashing (linear probing and hash signatures) and succinct data structures (bitvectors supporting rank queries).

    Comment: Accepted to a journal (19 pages, 2 figures).
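
    The unit-cost edit distance used in the queries is the classical Levenshtein distance; a minimal dynamic-programming implementation of that definition (the distance itself, not the paper's dictionary structure) is sketched below.

```python
# Unit-cost edit distance (Levenshtein): minimum number of
# single-character insertions, deletions and substitutions turning
# x into y. Classical O(|x| * |y|) dynamic program, kept to two rows.

def edit_distance(x: str, y: str) -> int:
    m, n = len(x), len(y)
    prev = list(range(n + 1))              # row for the empty prefix of x
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1,                           # delete x[i-1]
                         cur[j - 1] + 1,                        # insert y[j-1]
                         prev[j - 1] + (x[i - 1] != y[j - 1]))  # substitute
        prev = cur
    return prev[n]

assert edit_distance("kitten", "sitting") == 3
```

    A query then amounts to reporting every dictionary string $y$ with edit_distance(x, y) at most $k$; the point of the paper's structure is to answer this without a naive scan over all $d$ strings.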

    Return times, recurrence densities and entropy for actions of some discrete amenable groups

    Results of Wyner and Ziv and of Ornstein and Weiss show that if one observes the first $k$ outputs of a finite-valued ergodic process, then the waiting time until this block appears again is almost surely asymptotic to $2^{hk}$, where $h$ is the entropy of the process. We examine this phenomenon when the allowed return times are restricted to some subset of times, and generalize the results to processes parameterized by other discrete amenable groups. We also obtain a uniform density version of the waiting time results: for a process on $s$ symbols, within a given realization, the density of the initial $k$-block within larger $n$-blocks approaches $2^{-hk}$, uniformly in $n > s^k$, as $k$ tends to infinity. Again, similar results hold for processes with other indexing groups.

    Comment: To appear in Journal d'Analyse Mathématique.
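
    A rough numerical illustration of the recurrence law (an i.i.d. binary special case with made-up parameters, not an experiment from the paper): for a Bernoulli($p$) source, $\log_2$ of the waiting time until the initial $k$-block recurs, divided by $k$, should concentrate near the entropy rate $h$.

```python
import math, random

# Illustration of the Wyner-Ziv / Ornstein-Weiss recurrence law for an
# i.i.d. binary source with P(1) = p: the waiting time T_k until the
# initial k-block reappears grows like 2^(h k), so log2(T_k) / k
# should be close to the entropy rate h. Parameters are hypothetical.

random.seed(0)
p, k, trials = 0.3, 10, 200
h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)   # entropy rate

def waiting_time():
    seq = [random.random() < p for _ in range(k)]    # initial k-block
    block = list(seq)
    t = 0
    while True:
        t += 1
        seq.append(random.random() < p)              # extend the realization
        if seq[t:t + k] == block:                    # block recurs at time t
            return t

avg = sum(math.log2(waiting_time()) / k for _ in range(trials)) / trials
print(f"entropy h = {h:.4f}, average log2(return time)/k = {avg:.4f}")
```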