Half-tapering strategy for conditional simulation with large datasets
Gaussian conditional realizations are routinely used for risk assessment and
planning in a variety of Earth sciences applications. Conditional realizations
can be obtained by first creating unconditional realizations that are then
post-conditioned by kriging. Many efficient algorithms are available for the
first step, so the bottleneck resides in the second step. Instead of doing the
conditional simulations with the desired covariance (F approach) or with a
tapered covariance (T approach), we propose to use the taper covariance only in
the conditioning step (Half-Taper or HT approach). This speeds up the
computations and reduces memory requirements for the conditioning step while
also preserving the correct short-scale variations in the realizations. A criterion
based on mean square error of the simulation is derived to help anticipate the
similarity of HT to F. Moreover, an index is used to predict the sparsity of
the kriging matrix for the conditioning step. Some guidelines for the choice of the
taper function are discussed. The distributions of a series of 1D, 2D and 3D
scalar response functions are compared for F, T and HT approaches. The
distributions obtained indicate a much better similarity to F with HT than with
T.
Comment: 39 pages, 2 tables, and 11 figures.
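As a concrete illustration of the post-conditioning step, the sketch below applies simple kriging of the residuals with the taper applied only in the conditioning system, as in the HT approach. The exponential covariance, the Wendland-type taper, and all parameter values are illustrative assumptions, not the authors' choices:

```python
import numpy as np
from scipy.spatial.distance import cdist

def exp_cov(h, range_=1.0):
    # exponential covariance model (assumed for illustration)
    return np.exp(-h / range_)

def wendland_taper(h, radius=1.5):
    # Wendland-type compactly supported taper: exactly zero beyond `radius`
    t = np.clip(h / radius, 0.0, 1.0)
    return (1.0 - t) ** 4 * (4.0 * t + 1.0)

def half_taper_condition(x_data, z_data, x_sim, z_uncond_data, z_uncond_sim,
                         range_=1.0, radius=1.5):
    """Post-condition an unconditional realization by simple kriging of the
    residuals, using the tapered covariance only in the conditioning step (HT)."""
    h_dd = cdist(x_data, x_data)
    h_sd = cdist(x_sim, x_data)
    # tapered covariance: sparse in practice, enabling sparse solvers
    K = exp_cov(h_dd, range_) * wendland_taper(h_dd, radius)
    k = exp_cov(h_sd, range_) * wendland_taper(h_sd, radius)
    w = np.linalg.solve(K, z_data - z_uncond_data)  # weights applied to residuals
    return z_uncond_sim + k @ w
```

Because the same (tapered) covariance appears on both sides of the kriging system, the conditioned field still honours the data exactly at the data locations.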
Progressive transmission and display of static images
Progressive image transmission has been studied for some time in association with image displays connected to remote image sources over communications channels whose data rate is insufficient to give subjectively near-instantaneous transmission. Part of the work presented in this thesis addresses the progressive transmission problem under the constraint that the final displayed image is exactly identical to the source image, with no redundant data transmitted. The remainder of the work is concerned with producing the subjectively best image for display from the information transmitted throughout the progression. Quad-tree and binary-tree based progressive transmission techniques are reviewed, especially an exactly invertible, table-based binary-tree technique. An algorithm is presented that replaces the table look-up in this technique, typically reducing implementation cost, and results are presented for the subjective improvement obtained by interpolating the display images. The relevance of the interpolation technique to focusing the progressive sequence on some part of the image is also discussed.
Some aspects of transform coding for progressive transmission are reviewed, including intermediate image resolutions and, most importantly, the problems associated with making the coding exactly invertible. Starting with the two-dimensional case, an algorithm is developed that, judged by the progressive display image, can mimic the behaviour of a linear transform while remaining exactly invertible (no quantisation). This leads to a mean/difference transform similar to the binary-tree technique. The mimic algorithm is extended to operate in n dimensions and used to mimic an eight-dimensional cosine transform. Photographic and numerical results of the application of this algorithm to image data are presented. An area transform, interpolation to disguise block boundaries, and bit allocation to coefficients, all based on the cosine mimic transform, are developed and results presented.
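The exactly invertible mean/difference idea can be illustrated with an integer pair transform (a minimal sketch in the spirit of the reversible S-transform, not the thesis's table-based algorithm): the floor-mean drops one bit, which is recovered from the parity carried by the difference.

```python
def mean_diff_forward(a, b):
    # integer mean/difference pair: floor-mean plus difference is exactly invertible
    l = (a + b) >> 1   # floor of the mean (one bit of precision apparently lost)
    h = a - b          # difference (its parity carries the lost bit)
    return l, h

def mean_diff_inverse(l, h):
    # reconstruct exactly, with no quantisation error
    a = l + ((h + 1) >> 1)
    b = a - h
    return a, b
```

The pair (l, h) has the same total information as (a, b), so a progression can transmit the coarse means first and refine with differences while remaining losslessly invertible.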
The 1993 Space and Earth Science Data Compression Workshop
The Earth Observing System Data and Information System (EOSDIS) is described in terms of its data volume, data rate, and data distribution requirements. Opportunities for data compression in EOSDIS are discussed.
Time-domain Compressive Beamforming for Medical Ultrasound Imaging
Over the past 10 years, Compressive Sensing has gained a lot of visibility in the medical imaging research community. Its most compelling feature is the ability to perform perfect reconstructions of under-sampled signals using l1-minimization. Of course, that counter-intuitive feature has a cost: the missing information is compensated for by a priori knowledge of the signal under certain mathematical conditions. This technology is currently used in some commercial MRI scanners to increase the acquisition rate, decreasing discomfort for the patient while increasing patient turnover. For echography, the applications range from fast 3D echocardiography to simplified, cheaper echography systems.
Real-time ultrasound imaging scanners have been available for nearly 50 years. During those 50 years, much has changed in their architecture, electronics, and technologies. However, one component remains: the beamformer. From analog to software beamformers, the technology has evolved and brought much diversity to the world of beam formation. Currently, most commercial scanners use several focalized ultrasonic pulses to probe tissue. The time between two consecutive focalized pulses is not compressible, limiting the frame rate: one must wait for a pulse to propagate from the probe to the deepest point imaged and back before firing a new pulse.
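The round-trip limit can be made concrete with a quick calculation (the depth and line count below are illustrative assumptions; 1540 m/s is the usual soft-tissue sound speed):

```python
# round-trip-limited frame rate for a focused-line scanner (illustrative numbers)
c = 1540.0      # speed of sound in soft tissue, m/s
depth = 0.15    # imaging depth, m (assumed)
n_lines = 128   # focused transmits per frame (assumed)

t_line = 2 * depth / c            # wait for the echo from the deepest point
frame_rate = 1.0 / (n_lines * t_line)   # roughly 40 frames per second here
```

A single-wave (one transmit) acquisition removes the factor n_lines, which is the source of the order-of-magnitude frame-rate gains discussed below.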
In this work, we propose to outline the development of a novel software beamforming technique that uses Compressive Sensing. Time-domain Compressive Beamforming (t-CBF) uses computational models and regularization to reconstruct de-cluttered ultrasound images. One of the main features of t-CBF is its use of only one transmit wave to insonify the tissue. Single-wave imaging brings high frame rates to the modality, for example allowing a physician to see precisely the movements of the heart walls or valves during a heart cycle. t-CBF takes into account the geometry of the probe as well as its physical parameters to improve resolution and attenuate artifacts commonly seen in single-wave imaging such as side lobes.
In this thesis, we define a mathematical framework for the beamforming of ultrasonic data compatible with Compressive Sensing. Then, we investigate its capabilities in terms of resolution and super-resolution on simple simulations. Finally, we adapt t-CBF to real-life ultrasonic data. In particular, we reconstruct 2D cardiac images at a frame rate 100-fold higher than typical values.
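The thesis's actual solver is not specified in this summary; as a generic sketch of the l1-minimization step underlying Compressive Sensing reconstruction, here is ISTA applied to min_x 0.5||Ax − y||² + λ||x||₁ (the matrix A, λ, and problem sizes are illustrative assumptions):

```python
import numpy as np

def soft_threshold(x, t):
    # proximal operator of the l1 norm
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam=0.1, n_iter=500):
    """Iterative Shrinkage-Thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # gradient step on the quadratic term, then shrinkage on the l1 term
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x
```

With a suitable measurement model A (in t-CBF, built from the probe geometry and wave propagation), a sparse scene can be recovered from a single under-sampled acquisition.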
Increasing Accuracy Performance through Optimal Feature Extraction Algorithms
This research developed models and techniques to improve the three key modules of popular recognition systems: preprocessing, feature extraction, and classification. Improvements were made in four key areas: processing speed, algorithm complexity, storage space, and accuracy. The focus was on the application areas of face, traffic sign, and speaker recognition. In the preprocessing module of facial and traffic sign recognition, improvements were made through the use of grayscaling and anisotropic diffusion. In the feature extraction module, improvements were made in two different ways: first, through the use of mixed transforms, and second, through a convolutional neural network (CNN) that best fits specific datasets. The mixed transform system consists of various combinations of the Discrete Wavelet Transform (DWT) and Discrete Cosine Transform (DCT), which have a reliable track record for image feature extraction. In terms of the proposed CNN, a neuroevolution system was used to determine the characteristics and layout of a CNN to best extract image features for particular datasets. In the speaker recognition system, the improvement to the feature extraction module comprised a quantized spectral covariance matrix and a two-dimensional Principal Component Analysis (2DPCA) function. In the classification module, enhancements were made in visual recognition through the use of two neural networks: the multilayer sigmoid and convolutional neural network. Results show that the proposed improvements in the three modules led to an increase in accuracy as well as reduced algorithmic complexity, with corresponding reductions in storage space and processing time.
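The exact DWT/DCT combinations are not given in this summary; below is a minimal sketch of one plausible mixed transform, a one-level Haar approximation band followed by a naive orthonormal DCT-II, with the first coefficients kept as a feature vector (all choices are assumptions for illustration):

```python
import numpy as np

def haar_ll(img):
    # one-level 2-D Haar transform, keeping only the low-low (approximation) band
    a = (img[0::2, :] + img[1::2, :]) / 2.0
    return (a[:, 0::2] + a[:, 1::2]) / 2.0

def dct2(block):
    # naive orthonormal DCT-II applied along both dimensions of a square block
    n = block.shape[0]
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    C = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    C[0, :] = np.sqrt(1.0 / n)
    return C @ block @ C.T

def mixed_features(img, n_coeffs=16):
    # DWT approximation band, then DCT; keep leading coefficients (zig-zag omitted)
    return dct2(haar_ll(img)).flatten()[:n_coeffs]
```

The DWT step discards high-frequency detail and halves each dimension, so the subsequent DCT works on a smaller, smoother block, which is the usual rationale for such mixed transforms.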
Scaling Multidimensional Inference for Big Structured Data
In information technology, big data is a collection of data sets so large and complex that it becomes difficult to process using traditional data processing applications [151]. In a world of increasing sensor modalities, cheaper storage, and more data-oriented questions, we are quickly passing the limits of tractable computation using traditional statistical analysis methods. Methods that often show great results on simple data have difficulty processing complicated multidimensional data. Accuracy alone can no longer justify unwarranted memory use and computational complexity; improving the scaling properties of these methods for multidimensional data is the only way to keep them relevant. In this work we explore methods for improving the scaling properties of parametric and nonparametric models. Namely, we focus on the structure of the data to lower the complexity of a specific family of problems. The two types of structure considered in this work are distributed optimization with separable constraints (Chapters 2-3) and scaling Gaussian processes for multidimensional lattice input (Chapters 4-5). By improving the scaling of these methods, we can expand their use to a wide range of applications which were previously intractable and open the door to new research questions.
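For the Gaussian-process-on-a-lattice case, the standard structural trick is that a product kernel on a grid gives a Kronecker-factored covariance, so the cubic-cost solve reduces to per-axis eigendecompositions. A minimal sketch (not necessarily the construction used in Chapters 4-5; the kernel and noise level are assumed):

```python
import numpy as np

def kron_gp_solve(K1, K2, y, noise=1e-2):
    """Solve (K1 kron K2 + noise*I) vec(alpha) = vec(y) for a 2-D lattice,
    where y has shape (n1, n2), using per-axis eigendecompositions."""
    w1, Q1 = np.linalg.eigh(K1)
    w2, Q2 = np.linalg.eigh(K2)
    Y = Q1.T @ y @ Q2                 # rotate into the joint eigenbasis
    S = np.outer(w1, w2) + noise      # eigenvalues of the Kronecker product + noise
    return Q1 @ (Y / S) @ Q2.T        # rotate back; alpha on the lattice
```

The dense system is n1*n2 by n1*n2, but this route only factorizes n1-by-n1 and n2-by-n2 matrices, which is what makes grid-structured GP inference scale to large multidimensional data.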