143 research outputs found

    Half-tapering strategy for conditional simulation with large datasets

    Gaussian conditional realizations are routinely used for risk assessment and planning in a variety of Earth sciences applications. Conditional realizations can be obtained by first creating unconditional realizations that are then post-conditioned by kriging. Many efficient algorithms are available for the first step, so the bottleneck resides in the second step. Instead of doing the conditional simulations with the desired covariance (F approach) or with a tapered covariance (T approach), we propose to use the tapered covariance only in the conditioning step (Half-Taper or HT approach). This speeds up the computations and reduces memory requirements for the conditioning step while keeping the right short-scale variations in the realizations. A criterion based on the mean square error of the simulation is derived to help anticipate the similarity of HT to F. Moreover, an index is used to predict the sparsity of the kriging matrix in the conditioning step. Guidelines for the choice of the taper function are discussed. The distributions of a series of 1D, 2D and 3D scalar response functions are compared for the F, T and HT approaches; the distributions obtained indicate a much better similarity to F with HT than with T. Comment: 39 pages, 2 tables and 11 figures.
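
    As a rough illustration of the HT idea, the sketch below conditions a 1D unconditional realization by simple kriging, using the full covariance to generate the realization but a tapered (compactly supported) covariance in the kriging system. The exponential covariance, spherical taper, grid, and data values are illustrative assumptions, not the paper's setup.

    # Minimal 1D sketch of the Half-Taper (HT) approach described above:
    # full covariance for the unconditional realization, tapered covariance
    # only in the post-conditioning kriging step (sparse kriging matrix).
    import numpy as np

    def expo_cov(h, sill=1.0, rng=10.0):
        # Exponential covariance (illustrative choice).
        return sill * np.exp(-3.0 * np.abs(h) / rng)

    def spherical_taper(h, taper_range=6.0):
        # Compactly supported spherical taper; zero beyond taper_range.
        u = np.abs(h) / taper_range
        t = 1.0 - 1.5 * u + 0.5 * u**3
        t[u >= 1.0] = 0.0
        return t

    grid = np.arange(0.0, 100.0, 1.0)           # simulation grid
    obs_x = np.array([10.0, 35.0, 60.0, 85.0])  # data locations (on the grid)
    obs_z = np.array([1.2, -0.4, 0.8, 0.1])     # conditioning data

    H_gg = np.abs(grid[:, None] - grid[None, :])
    H_go = np.abs(grid[:, None] - obs_x[None, :])
    H_oo = np.abs(obs_x[:, None] - obs_x[None, :])

    # Step 1: unconditional realization with the full (F) covariance.
    L = np.linalg.cholesky(expo_cov(H_gg) + 1e-8 * np.eye(grid.size))
    z_uc = L @ np.random.standard_normal(grid.size)

    # Step 2: post-condition by simple kriging of the data residuals,
    # but with the tapered covariance (the "half-taper" shortcut).
    C_oo_t = expo_cov(H_oo) * spherical_taper(H_oo)
    C_go_t = expo_cov(H_go) * spherical_taper(H_go)
    resid = obs_z - z_uc[np.searchsorted(grid, obs_x)]
    z_cond = z_uc + C_go_t @ np.linalg.solve(C_oo_t, resid)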

    Progressive transmission and display of static images

    Progressive image transmission has been studied for some time in association with image displays connected to remote image sources over communication channels whose data rate is too low to give subjectively near-instantaneous transmission. Part of the work presented in this thesis addresses the progressive transmission problem under the constraint that the final displayed image is exactly identical to the source image, with no redundant data transmitted. The remainder of the work is concerned with producing the subjectively best image for display from the information transmitted throughout the progression. Quad-tree and binary-tree based progressive transmission techniques are reviewed, especially an exactly invertible table-based binary-tree technique. An algorithm is presented that replaces the table look-up in this technique, typically reducing implementation cost, and results are presented for the subjective improvement obtained by interpolating the display images. The relevance of the interpolation technique to focusing the progressive sequence on some part of the image is also discussed. Some aspects of transform coding for progressive transmission are reviewed, in particular intermediate image resolution and, most importantly, problems associated with making the coding exactly invertible. Starting with the two-dimensional case, an algorithm is developed that, judged by the progressively displayed image, can mimic the behaviour of a linear transform while being exactly invertible (no quantisation). This leads to a mean/difference transform similar to the binary-tree technique. The mimic algorithm is extended to operate in n dimensions and used to mimic an eight-dimensional cosine transform. Photographic and numerical results of applying this algorithm to image data are presented. An area transform, interpolation to disguise block boundaries, and bit allocation to coefficients, all based on the cosine mimic transform, are developed and results are presented.
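
    The exactly invertible mean/difference idea mentioned above can be illustrated with the standard integer S-transform, a well-known lossless pairwise mean/difference transform; this is a generic sketch, not the thesis's table-based binary-tree scheme.

    # Standard integer S-transform: an exactly invertible mean/difference
    # pair (generic illustration, not the thesis's table-based technique).
    def s_forward(a: int, b: int) -> tuple[int, int]:
        mean = (a + b) >> 1   # integer (floor) mean
        diff = a - b          # exact difference
        return mean, diff

    def s_inverse(mean: int, diff: int) -> tuple[int, int]:
        a = mean + ((diff + 1) >> 1)
        b = a - diff
        return a, b

    # Round-trip check: reconstruction is exact, with no quantisation.
    for a, b in [(0, 255), (17, 18), (200, 3)]:
        assert s_inverse(*s_forward(a, b)) == (a, b)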

    The 1993 Space and Earth Science Data Compression Workshop

    The Earth Observing System Data and Information System (EOSDIS) is described in terms of its data volume, data rate, and data distribution requirements. Opportunities for data compression in EOSDIS are discussed

    Increasing Accuracy Performance through Optimal Feature Extraction Algorithms

    This research developed models and techniques to improve the three key modules of popular recognition systems: preprocessing, feature extraction, and classification. Improvements were made in four key areas: processing speed, algorithm complexity, storage space, and accuracy. The focus was on the application areas of face, traffic sign, and speaker recognition. In the preprocessing module of facial and traffic sign recognition, improvements were made through the use of grayscaling and anisotropic diffusion. In the feature extraction module, improvements were made in two ways: first, through the use of mixed transforms, and second, through a convolutional neural network (CNN) tailored to specific datasets. The mixed transform system consists of various combinations of the Discrete Wavelet Transform (DWT) and the Discrete Cosine Transform (DCT), which have a reliable track record for image feature extraction. For the proposed CNN, a neuroevolution system was used to determine the characteristics and layout of a CNN that best extracts image features for particular datasets. In the speaker recognition system, the improvement to the feature extraction module consisted of a quantized spectral covariance matrix and a two-dimensional Principal Component Analysis (2DPCA) function. In the classification module, enhancements were made in visual recognition through the use of two neural networks: a multilayer sigmoid network and a convolutional neural network. Results show that the proposed improvements in the three modules led to an increase in accuracy as well as reduced algorithmic complexity, with corresponding reductions in storage space and processing time.
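
    A minimal sketch of a DWT + DCT mixed-transform feature extractor of the general kind described above follows; the Haar wavelet, single decomposition level, and number of retained coefficients are illustrative assumptions, not the dissertation's configuration.

    # Hypothetical DWT + DCT mixed-transform feature extractor (illustrative
    # choices of wavelet, decomposition level, and retained coefficients).
    import numpy as np
    import pywt
    from scipy.fft import dct

    def mixed_transform_features(image: np.ndarray, keep: int = 8) -> np.ndarray:
        # Single-level 2D Haar DWT: keep the low-pass approximation band.
        cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), "haar")
        # Separable 2D DCT of the approximation band.
        coeffs = dct(dct(cA, axis=0, norm="ortho"), axis=1, norm="ortho")
        # Retain the low-frequency (top-left) block as the feature vector.
        return coeffs[:keep, :keep].ravel()

    # Example: a 64x64 image yields an 8x8 = 64-dimensional feature vector.
    features = mixed_transform_features(np.random.rand(64, 64))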

    Scaling Multidimensional Inference for Big Structured Data

    In information technology, big data is a collection of data sets so large and complex that it becomes difficult to process using traditional data processing applications [151]. In a world of increasing sensor modalities, cheaper storage, and more data-oriented questions, we are quickly passing the limits of tractable computation using traditional statistical analysis methods. Methods that often show great results on simple data have difficulty processing complicated multidimensional data. Accuracy alone can no longer justify unwarranted memory use and computational complexity. Improving the scaling properties of these methods for multidimensional data is the only way to keep them relevant. In this work we explore methods for improving the scaling properties of parametric and nonparametric models. Namely, we exploit the structure of the data to lower the complexity of a specific family of problems. The two types of structure considered in this work are distributed optimization with separable constraints (Chapters 2-3) and scaling Gaussian processes for multidimensional lattice input (Chapters 4-5). By improving the scaling of these methods, we can expand their use to a wide range of previously intractable applications and open the door to new research questions.
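
    For the lattice-structured Gaussian process part, the standard way to exploit such structure is the Kronecker trick: on a Cartesian grid the kernel matrix factors as a Kronecker product, so inference only ever decomposes the small per-dimension kernel matrices. A minimal sketch, with an illustrative RBF kernel and grid sizes:

    # Kronecker-structured GP solve on a 2D lattice (illustrative kernel/sizes).
    import numpy as np

    def rbf(x, lengthscale=0.2):
        # Squared-exponential kernel on a 1D input set.
        d = x[:, None] - x[None, :]
        return np.exp(-0.5 * (d / lengthscale) ** 2)

    # Per-dimension inputs; the full kernel is K = K1 kron K2 (2000 x 2000),
    # but only the small 50x50 and 40x40 factors are ever decomposed.
    x1, x2 = np.linspace(0, 1, 50), np.linspace(0, 1, 40)
    e1, Q1 = np.linalg.eigh(rbf(x1))
    e2, Q2 = np.linalg.eigh(rbf(x2))

    y = np.random.standard_normal((50, 40))  # observations on the grid
    noise = 1e-2

    # alpha = (K + noise*I)^{-1} y, computed via the per-dimension eigenbases.
    tilde = Q1.T @ y @ Q2                        # project into the eigenbasis
    tilde /= e1[:, None] * e2[None, :] + noise   # divide by eigenvalues + noise
    alpha = Q1 @ tilde @ Q2.T                    # map back to grid values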