
    Adaptive multiresolution schemes with local time stepping for two-dimensional degenerate reaction-diffusion systems

    We present a fully adaptive multiresolution scheme for spatially two-dimensional, possibly degenerate reaction-diffusion systems, focusing on combustion models and on models of pattern formation and chemotaxis in mathematical biology. Solutions in these applications exhibit steep gradients and, in the degenerate case, sharp fronts and discontinuities. The multiresolution scheme is based on finite volume discretizations with explicit time stepping. The multiresolution representation of the solution is stored in a graded tree. By a thresholding procedure, namely the elimination of leaves that are smaller than a threshold value, substantial data compression and CPU time reduction are attained. The threshold value is chosen optimally, in the sense that the total error of the adaptive scheme decays at the same rate as that of the reference finite volume scheme. Since chemical reactions involve a large range of temporal scales but are spatially well localized (especially in the combustion model), a locally varying adaptive time stepping strategy is applied. It turns out that local time stepping accelerates the adaptive multiresolution method by a factor of two, while the error remains controlled.
    Comment: 27 pages, 14 figures
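    The thresholding step can be illustrated with a 1D Haar multiresolution transform, a simplified stand-in for the graded-tree finite-volume representation described above (the level-independent threshold and function names are illustrative, not the paper's exact construction):

```python
import numpy as np

def haar_compress(u, eps):
    """Forward Haar multiresolution transform with hard thresholding:
    detail coefficients smaller than eps are eliminated (the 'small leaves')."""
    c = np.asarray(u, dtype=float)
    details = []
    while len(c) > 1:
        avg = 0.5 * (c[0::2] + c[1::2])   # coarser-level cell averages
        det = 0.5 * (c[0::2] - c[1::2])   # detail (fluctuation) coefficients
        det[np.abs(det) < eps] = 0.0      # thresholding: drop small details
        details.append(det)
        c = avg
    return c, details

def haar_reconstruct(c, details):
    """Invert the transform from the thresholded representation."""
    for det in reversed(details):
        fine = np.empty(2 * len(c))
        fine[0::2] = c + det
        fine[1::2] = c - det
        c = fine
    return c
```

    For a solution with a steep front, almost all details away from the front fall below the threshold and are zeroed out, so storage shrinks while the reconstruction error stays of the order of the threshold times the number of levels.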

    Efficient depth map compression exploiting correlation with texture data in multiresolution predictive image coders

    New 3D applications such as 3DTV and FVV require not only a large amount of data but also high-quality visual rendering. Based on one or several depth maps, intermediate views can be synthesized using a depth image-based rendering technique. Many compression schemes have been proposed for texture-plus-depth data, but exploiting the correlation between the two representations to enhance compression performance is still an open research issue. In this paper, we present a novel compression scheme that aims at improving depth coding through a joint depth/texture coding scheme. The method is an extension of the LAR (Locally Adaptive Resolution) codec, initially designed for 2D images. The LAR coding framework provides functionality such as lossy/lossless compression, low complexity, resolution and quality scalability, and quality control. Experimental results address both lossless and lossy compression, considering state-of-the-art techniques in the two domains (JPEG-LS, JPEG XR). Subjective results on intermediate view synthesis after depth map coding show that the proposed method significantly improves visual quality.

    Adaptive segmenting of non-stationary signals

    Many data compression techniques rely on the low entropy and/or the large degree of autocorrelation exhibited by stationary signals. In non-stationary signals, however, these characteristics are not constant, resulting in reduced data compression efficiency. An adaptive scheme is developed that divides non-stationary signals into smaller locally stationary segments, thereby improving overall efficiency. Two principal issues arise in implementing this procedure. The first is practical: an exhaustive search of all possible segmentations is in general computationally prohibitive, so dynamic programming is applied to reduce the expense of the search. The second involves choosing a cost function that is appropriate for a particular compression method. Two cost functions are employed here, one based on entropy and the other on correlation. It is shown that, with an appropriate cost function, an adaptively segmented signal offers better data compression efficiency than an unsegmented or arbitrarily segmented signal.
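    The dynamic-programming search over cut positions can be sketched as follows, using an entropy-based cost; the per-segment penalty (which discourages over-segmentation) and the discrete-symbol assumption are illustrative, not the paper's exact formulation:

```python
import math

def entropy_cost(seg):
    """Coding cost of one segment: empirical entropy (bits/symbol) x length."""
    n = len(seg)
    counts = {}
    for s in seg:
        counts[s] = counts.get(s, 0) + 1
    return -sum(c * math.log2(c / n) for c in counts.values())

def optimal_segments(x, penalty=2.0):
    """best[i] is the minimum total cost of encoding x[:i]; filling it by
    dynamic programming avoids the exhaustive search over all segmentations."""
    n = len(x)
    best = [math.inf] * (n + 1)
    cut = [0] * (n + 1)
    best[0] = 0.0
    for i in range(1, n + 1):
        for j in range(i):
            c = best[j] + entropy_cost(x[j:i]) + penalty
            if c < best[i]:
                best[i], cut[i] = c, j
    segs, i = [], n
    while i > 0:                        # backtrack the optimal cut positions
        segs.append((cut[i], i))
        i = cut[i]
    return segs[::-1]
```

    On a sequence of ten zeros followed by ten ones, the entropy of each pure half is zero while the entropy of the whole is one bit per symbol, so the search places a single cut exactly at the change point. The double loop is O(n^2) segment evaluations, versus the exponential number of full segmentations.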

    A multiresolution space-time adaptive scheme for the bidomain model in electrocardiology

    This work deals with the numerical solution of the monodomain and bidomain models of the electrical activity of myocardial tissue. The bidomain model is a system consisting of a possibly degenerate parabolic PDE coupled with an elliptic PDE for the transmembrane and extracellular potentials, respectively. This system of two scalar PDEs is supplemented by a time-dependent ODE modeling the evolution of the so-called gating variable. In the simpler sub-case of the monodomain model, the elliptic PDE reduces to an algebraic equation. Two simple models for the membrane and ionic currents are considered, the Mitchell-Schaeffer model and the simpler FitzHugh-Nagumo model. Since typical solutions of the bidomain and monodomain models exhibit wavefronts with steep gradients, we propose a finite volume scheme enriched by a fully adaptive multiresolution method, whose basic purpose is to concentrate computational effort on zones of strong variation of the solution. Time adaptivity is achieved by two alternative devices, namely locally varying time stepping and a Runge-Kutta-Fehlberg-type adaptive time integration. A series of numerical examples demonstrates that these methods are efficient and sufficiently accurate to simulate the electrical activity in myocardial tissue with affordable effort. In addition, an optimal threshold for discarding non-significant information in the multiresolution representation of the solution is derived, and the numerical efficiency and accuracy of the method are measured in terms of CPU time speed-up, memory compression, and errors in different norms.
    Comment: 25 pages, 41 figures
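    As an illustration of the simpler membrane kinetics mentioned above, the classical dimensionless FitzHugh-Nagumo system can be advanced with the same kind of explicit time stepping; the parameter values below are the common textbook ones, not necessarily those used in the paper:

```python
def fhn_step(v, w, dt, I=0.5, eps=0.08, a=0.7, b=0.8):
    """One explicit Euler step of the FitzHugh-Nagumo kinetics:
    v is the membrane potential, w the recovery (gating) variable."""
    dv = v - v**3 / 3.0 - w + I   # cubic excitation nonlinearity
    dw = eps * (v + a - b * w)    # slow gating dynamics
    return v + dt * dv, w + dt * dw

# Drive the cell through repeated excitation-recovery cycles.
v, w = -1.0, 1.0
trace = []
for _ in range(5000):
    v, w = fhn_step(v, w, dt=0.05)
    trace.append(v)
```

    With a constant applied current I in the oscillatory regime, the trajectory relaxes onto a limit cycle whose fast upstroke and slow recovery are exactly the steep-wavefront behavior that motivates the space-time adaptivity above.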

    Compressing Sparse Sequences under Local Decodability Constraints

    We consider a variable-length source coding problem subject to local decodability constraints. In particular, we investigate the blocklength scaling behavior attainable by encodings of r-sparse binary sequences, under the constraint that any source bit can be correctly decoded upon probing at most d codeword bits. We consider both adaptive and non-adaptive access models, and derive upper and lower bounds that often coincide up to constant factors. Notably, such a characterization for the fixed-blocklength analog of our problem remains unknown, despite considerable research over the last three decades. Connections to communication complexity are also briefly discussed.
    Comment: 8 pages, 1 figure. First five pages to appear in 2015 International Symposium on Information Theory. This version contains supplementary material.

    Adaptive Wavelet Collocation Method for Simulation of Time Dependent Maxwell's Equations

    This paper investigates an adaptive wavelet collocation time domain method for the numerical solution of Maxwell's equations. In this method a computational grid is dynamically adapted at each time step by using the wavelet decomposition of the field at that time instant. In regions where the fields are highly localized, the method assigns more grid points; in regions where the fields are sparse, there are fewer grid points. On the adapted grid, update schemes with high spatial order and explicit time stepping are formulated. The method has a high compression rate, which substantially reduces the computational cost and allows efficient use of computational resources. This adaptive wavelet collocation method is especially suitable for the simulation of guided-wave optical devices.
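    The grid-adaptation criterion can be sketched with interpolating-wavelet details on [0, 1]; this is a deliberate simplification (linear prediction from the next coarser level, level-independent threshold, hypothetical function names), not the scheme's actual high-order operators:

```python
import math

def adapt_grid(f, levels, eps):
    """Keep a fine-level collocation point only where the interpolation error
    from the next coarser level (the detail coefficient) exceeds eps, so grid
    points concentrate where the field is strongly localized."""
    keep = [0.0, 1.0]
    for level in range(1, levels + 1):
        n = 2 ** level
        for k in range(1, n, 2):                    # points new at this level
            predicted = 0.5 * (f((k - 1) / n) + f((k + 1) / n))
            if abs(f(k / n) - predicted) > eps:     # significant detail
                keep.append(k / n)
    return sorted(keep)
```

    Applied to a field with a steep transition, the retained points cluster around the transition while flat regions are represented by only a handful of coarse points, which is the source of the compression rate the abstract refers to.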