
    Transform based image denoising

    Image denoising is the recovery of a quality image from a noisy image corrupted by channel noise during transmission. Without denoising, it becomes very difficult to carry out further analysis on such images. In this paper, transform-based image denoising techniques are proposed to remove this noise. The workflow begins with the generation of sub-band coefficients using transform techniques such as the DCT, DWT and SWT. These coefficients then undergo spatial filtering with order-statistic filters (min, max, median, etc.), and the inverse transform is applied to the processed coefficients to generate the denoised image. The result is a noise-free quality image that can be used for further analysis.
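    The pipeline described above (transform, order-statistic filtering of the sub-bands, inverse transform) can be sketched as follows. This is a minimal illustration using a one-level 2-D Haar DWT built from NumPy and a median filter on the detail sub-bands; all function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import median_filter

def haar2d(img):
    """One-level 2-D Haar transform; image sides must be even."""
    a = (img[0::2] + img[1::2]) / 2.0        # row averages
    d = (img[0::2] - img[1::2]) / 2.0        # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0     # approximation band
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0     # detail bands
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2], img[1::2] = a + d, a - d
    return img

def denoise(img, size=3):
    """Median-filter the detail sub-bands, keep LL intact, invert."""
    LL, LH, HL, HH = haar2d(img)
    LH, HL, HH = (median_filter(b, size=size) for b in (LH, HL, HH))
    return ihaar2d(LL, LH, HL, HH)
```

    The same skeleton applies with the DCT or SWT substituted for the Haar step, or with min/max filters in place of the median.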

    Wavelet thresholding for multiple noisy images

    This correspondence addresses the recovery of an image from its multiple noisy copies. The standard method is to compute the weighted average of these copies. Since the wavelet thresholding technique has been shown to effectively denoise a single noisy copy, we consider in this paper combining the two operations of averaging and thresholding. Because thresholding is a nonlinear technique, averaging then thresholding or thresholding then averaging produce different estimators. By modeling the signal wavelet coefficients as Laplacian distributed and the noise as Gaussian, our investigation finds the optimal ordering to depend on the number of available copies and on the signal-to-noise ratio. We then propose thresholds that are nearly optimal under the assumed model for each ordering. With the optimal and near-optimal thresholds, the two methods yield similar performance, and both show considerable improvement over merely averaging.
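    The two orderings contrasted above can be sketched with a one-level Haar transform and soft thresholding. The threshold formula here is the generic universal threshold (with the noise scale shrunk by sqrt(n) after averaging n copies), a simplifying assumption rather than the paper's near-optimal thresholds.

```python
import numpy as np

def haar(x):
    """One-level orthonormal Haar transform of a 1-D signal."""
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def ihaar(a, d):
    x = np.empty(a.size * 2)
    x[0::2], x[1::2] = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
    return x

def soft(d, t):
    return np.sign(d) * np.maximum(np.abs(d) - t, 0.0)

def avg_then_thresh(copies, sigma):
    n = len(copies)
    a, d = haar(np.mean(copies, axis=0))
    # after averaging n copies the noise scale drops by sqrt(n)
    t = (sigma / np.sqrt(n)) * np.sqrt(2 * np.log(d.size))
    return ihaar(a, soft(d, t))

def thresh_then_avg(copies, sigma):
    t = sigma * np.sqrt(2 * np.log(copies[0].size // 2))
    return np.mean([ihaar(a, soft(d, t)) for a, d in map(haar, copies)], axis=0)
```

    Because thresholding is nonlinear, the two functions generally return different estimates from the same copies, which is exactly the ordering question the paper studies.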

    Rate-distortion optimized geometrical image processing

    Since geometrical features, like edges, represent one of the most important perceptual elements of an image, efficient exploitation of such geometrical information is a key ingredient of many image processing tasks, including compression, denoising and feature extraction. Therefore, the challenge for the image processing community is to design efficient geometrical schemes which can capture the intrinsic geometrical structure of natural images. This thesis focuses on developing computationally efficient tree-based algorithms for attaining the optimal rate-distortion (R-D) behavior for certain simple classes of geometrical images, such as piecewise polynomial images with polynomial boundaries. A good approximation of this class allows one to develop good approximation and compression schemes for images with strong geometrical features and, as experimental results show, also for real-life images. We first investigate both one-dimensional (1-D) and two-dimensional (2-D) piecewise polynomial signals. For the 1-D case, our scheme is based on binary tree segmentation of the signal. This scheme approximates the signal segments using polynomial models and utilizes an R-D optimal bit allocation strategy among the different signal segments. The scheme further encodes similar neighbors jointly and is called the prune-join algorithm. This achieves the correct exponentially decaying R-D behavior, D(R) ~ 2^(-cR), thus improving over classical wavelet schemes. We also show that the computational complexity of the scheme is O(N log N). We then extend this scheme to the 2-D case using a quadtree, which also achieves an exponentially decaying R-D behavior for the piecewise polynomial image model, with a low computational cost of O(N log N). Again, the key is an R-D optimized prune and join strategy. We further analyze the R-D performance of the proposed tree algorithms for piecewise smooth signals.
We show that the proposed algorithms achieve the oracle-like polynomially decaying asymptotic R-D behavior for both the 1-D and 2-D scenarios. Theoretical as well as numerical results show that the proposed schemes outperform wavelet based coders in the 2-D case. We then consider two interesting image processing problems, namely denoising and stereo image compression, in the framework of the tree structured segmentation. For the denoising problem, we present a tree based algorithm which performs denoising by compressing the noisy image and achieves improved visual quality by capturing geometrical features, like edges, of images more precisely compared to wavelet based schemes. We then develop a novel rate-distortion optimized disparity based coding scheme for stereo images. The main novelty of the proposed algorithm is that it performs the joint coding of disparity information and the residual image to achieve better R-D performance in comparison to standard block-based stereo image coders.
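    The prune step of the binary-tree segmentation described above can be sketched as follows: each segment is approximated by a low-order polynomial, and a subtree is pruned whenever the parent's Lagrangian cost D + lambda*R is no worse than its children's. The rate model (a fixed cost per retained segment) is a simplifying assumption, not the thesis' actual coder, and the join step is omitted.

```python
import numpy as np

def fit_cost(y, deg=1):
    """Squared error of a least-squares polynomial fit to segment y."""
    t = np.arange(y.size)
    coef = np.polyfit(t, y, deg)
    return float(np.sum((np.polyval(coef, t) - y) ** 2))

def prune(y, lam, min_len=4, deg=1):
    """Return the list of (start, end) segments after R-D pruning."""
    def rec(lo, hi):
        # parent cost: distortion plus a flat per-segment rate charge
        cost_parent = fit_cost(y[lo:hi], deg) + lam
        if hi - lo < 2 * min_len:
            return cost_parent, [(lo, hi)]
        mid = (lo + hi) // 2
        cl, segl = rec(lo, mid)
        cr, segr = rec(mid, hi)
        if cl + cr < cost_parent:          # keep the split
            return cl + cr, segl + segr
        return cost_parent, [(lo, hi)]     # prune to the parent
    return rec(0, y.size)[1]
```

    On a piecewise polynomial input the pruning collapses smooth regions into single segments while keeping splits at the discontinuities, which is the source of the exponential R-D decay.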

    Generalized Wavelet Thresholding: Estimation and Hypothesis Testing with Applications to Array Comparative Genomic Hybridization

    Wavelets have gained considerable popularity within the statistical arena in the context of nonparametric regression. When modeling data of the form y = f + ε, the objective is to estimate the unknown `true' function f with small risk, based on sampled data y contaminated with random (usually Gaussian) noise ε. Wavelet shrinkage and thresholding techniques have proved to be quite effective in recovering the true function f, particularly when f is spatially inhomogeneous. Recently, Johnstone and Silverman (2005b) proposed using empirical Bayes methods for level-dependent threshold selection in wavelet shrinkage. Using the posterior median estimator, their approach amounts to a random thresholding procedure with impressive mean squared error (MSE) results. At each level, their approach considers a two-component mixture prior for each of the wavelet coefficients independently. This mixture prior inherently assumes that the wavelet coefficients are symmetrically distributed about zero. Depending on the choice of wavelet filter and the attributes of the true function, it may be the case that neither the magnitude nor the number of the positive coefficients is equal to that of the negative coefficients. Inspired by the work of Zhang (2005) and Zhang et al. (2007), this thesis introduces a random generalized thresholding procedure in the wavelet domain that does not require the symmetry assumption; it uses a three-component mixture prior that handles the positive and negative coefficients separately. It is demonstrated that the proposed generalized wavelet thresholding procedure performs quite well when estimating f from a single sampled realization y. As in Johnstone and Silverman (2005b), the performance of the Maximal Overlap Discrete Wavelet Transform (MODWT) is substantially better than that of the standard Discrete Wavelet Transform (DWT) in terms of MSE and visual quality.
An additional advantage of the MODWT is that it is well-defined for any number of sampled points N, i.e., N need not be a power of two. The proposed procedure also performs well when estimating f from multiple noisy realizations y_i, i = 1,...,n. In most, if not all, of the shrinkage and generalized shrinkage techniques considered, the noise standard deviation is assumed to be known and constant across the length of the function. In reality, it is typically not known and must be estimated. In the single realization setting, the estimate is usually taken to be a constant based on the median absolute deviation of the empirical wavelet coefficients at the finest decomposition level. With multiple realizations, there are more estimation options available. Various estimation options for a constant variance are examined via simulation. The results indicate that three of the six estimates considered are reasonable choices. The case of heterogeneous variances across the length of the function is also briefly explored via simulation. Finally, an inferential procedure is proposed that first removes noise from individual observations via the generalized wavelet thresholding procedure, and then uses newly proposed F-like statistics (Cui et al., 2005; Hwang and Liu, 2006; Zhou, 2007) to compare populations of sampled observations. To demonstrate its applicability, the aforementioned statistical work is applied to datasets generated from Array Comparative Genomic Hybridization (aCGH) experiments.
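    Two ingredients mentioned above can be sketched under simplifying assumptions: (1) the standard MAD estimate of the noise standard deviation from finest-level wavelet details, and (2) a toy asymmetric soft threshold that treats positive and negative coefficients separately, as a stand-in for the three-component mixture idea. The thesis' empirical-Bayes machinery is far richer than this sketch.

```python
import numpy as np

def mad_sigma(detail):
    """MAD-based noise scale; 0.6745 is the Gaussian consistency factor."""
    return np.median(np.abs(detail - np.median(detail))) / 0.6745

def asym_soft(d, t_pos, t_neg):
    """Soft-threshold positive and negative coefficients with separate
    thresholds, dropping the usual symmetry assumption."""
    out = np.zeros_like(d)
    pos, neg = d > 0, d < 0
    out[pos] = np.maximum(d[pos] - t_pos, 0.0)
    out[neg] = np.minimum(d[neg] + t_neg, 0.0)
    return out
```

    Setting t_pos = t_neg recovers ordinary soft thresholding, so the asymmetric rule strictly generalizes the symmetric one.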

    Deriving probabilistic short-range forecasts from a deterministic high-resolution model

    In order to take full advantage of short-range forecasts from deterministic high-resolution NWP models, the direct model output must be addressed in a probabilistic framework. A promising approach is mesoscale ensemble prediction. However, its operational use is still hampered by conceptual deficiencies and large computational costs. This study tackles two relevant issues: (1) the representation of model-related forecast uncertainty in mesoscale ensemble prediction systems and (2) the development of post-processing procedures that retrieve additional probabilistic information from a single model simulation. Special emphasis is placed on mesoscale forecast uncertainty of summer precipitation and 2m-temperature in Europe. The source of forecast guidance is the deterministic high-resolution model Lokal-Modell (LM) of the German Weather Service. This study gains more insight into the effect and usefulness of stochastic parametrisation schemes in the representation of short-range forecast uncertainty. A stochastic parametrisation scheme is implemented into the LM in an attempt to simulate the stochastic effect of sub-grid scale processes. Experimental ensembles show that the scheme has a substantial effect on the forecast of precipitation amount. However, objective verification reveals that the ensemble does not attain better forecast goodness than a single LM simulation. Urgent issues for future research are identified. In the context of statistical post-processing, two schemes are designed: the neighbourhood method and wavelet smoothing. Both approaches fall under the framework of estimating a large array of statistical parameters on the basis of a single realisation of each parameter. The neighbourhood method is based on the notion of spatio-temporal ergodicity, including explicit corrections for enhanced predictability from topographic forcing.
The neighbourhood method derives estimates of quantiles, exceedance probabilities and expected values at each grid point of the LM. If the post-processed precipitation forecast is formulated in terms of probabilities or quantiles, it attains clear superiority in comparison to the raw model output. Wavelet smoothing originates from the field of image denoising and includes concepts of multiresolution analysis and non-parametric regression. In this study, the method is used to produce estimates of the expected value, but it may be easily extended to the additional estimation of exceedance probabilities. Wavelet smoothing is not only computationally more efficient than the neighbourhood method, but automatically adapts the amount of spatial smoothing to local properties of the underlying data. The method apparently detects deterministically predictable temperature patterns on the basis of statistical guidance only.
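    The neighbourhood idea described above can be sketched as follows: treat the values in a spatial window around each grid point as a sample from that point's forecast distribution and estimate an exceedance probability from it. The window size and the ergodicity assumption are simplifications; the study additionally corrects for topographic forcing, which this sketch omits.

```python
import numpy as np

def exceedance_prob(field, threshold, radius=1):
    """P(value > threshold) estimated from a (2r+1)^2 neighbourhood,
    clipped at the domain edges."""
    ny, nx = field.shape
    prob = np.empty_like(field, dtype=float)
    for i in range(ny):
        for j in range(nx):
            win = field[max(i - radius, 0):i + radius + 1,
                        max(j - radius, 0):j + radius + 1]
            prob[i, j] = np.mean(win > threshold)
    return prob
```

    Quantiles and expected values follow the same pattern, with np.quantile or np.mean applied to the window instead of the exceedance indicator.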

    Experimental and numerical investigation of powder and aerosol behaviour in an air flow

    In this work, the formation of an aerosol from a powder is investigated both numerically and experimentally. An aerosol is a suspension of solid or liquid particles (discrete phase) in a fluid (continuous phase). Since aerosols are of growing importance in many scientific and technical fields (such as meteorology, pharmacy, biology and physics), it is increasingly important to be able to predict their behaviour. To this end, the interactions between the two phases were investigated, with both the interaction among the discrete particles and that between the particles and the fluid considered theoretically. On this basis, the particle model already integrated into a commercial computational fluid dynamics (CFD) solver was extended so that high particle densities, such as those occurring in powders, could be treated numerically. To compare the numerically computed aerosol behaviour of the developed model with the behaviour of real aerosols, the powder to be simulated must first be characterised correctly. Therefore, various experiments were carried out to characterise four different powders. Particle sizes and shapes were measured with an automated optical method using a microscope, and the tangential friction forces between the particles were determined: the angle of repose was used to determine static friction and a shear cell to determine sliding friction, with the stick-slip regime between static and sliding friction also characterised in more detail. In addition, the tangential rolling motion, which is decelerated by rolling friction, was investigated. For this purpose, an inclined plane was developed that makes it possible to infer the rolling friction from the avalanche velocity of the powder.
To confirm the numerical aerosol computations experimentally, the trajectories of the particles in the aerosol were recorded and compared with those of the computed particles. An experimental set-up was developed in which the particle dynamics in a wind tunnel could be studied by recording multiple-exposure images with a high-speed camera. For the multiple exposures, a laser operating on the master oscillation power amplifier (MOPA) principle was used, which can generate arbitrary light-pulse shapes. In addition, in these experiments all aerosol particles were separated from the main flow of the wind tunnel with a virtual impactor and counted with an optical particle counter. Subsequently, the dispersion of a powder bed and the particle behaviour in a nozzle and during wall impaction were investigated experimentally and numerically. First agreements between the numerical model and the experiments were already observed. A clear difference was seen in the agglomerate stability, which was still too low in the numerical model, so that additional forces, for example capillary forces, should be included in the future. The experiments also yielded a deeper understanding of the dispersion of particles from a powder bed. It was thus shown that, with this experimental set-up and numerical computation with a more complex particle model, it is possible to gain a deeper understanding of particle behaviour, which is of great importance in many scientific and technical fields.
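    The discrete-phase picture underlying the particle model above can be illustrated with a minimal sketch: a single particle relaxing toward the fluid velocity under Stokes drag, integrated with explicit Euler. All parameter values are illustrative assumptions, not taken from the thesis, and particle-particle forces are omitted.

```python
import numpy as np

def track(v_fluid, tau, v0=0.0, dt=1e-4, steps=2000):
    """Particle velocity history under Stokes drag.

    tau is the particle response time rho_p * d**2 / (18 * mu);
    small tau means the particle follows the flow closely."""
    v = v0
    hist = []
    for _ in range(steps):
        v += dt * (v_fluid - v) / tau   # drag acceleration toward the flow
        hist.append(v)
    return np.array(hist)
```

    In a full CFD coupling, v_fluid would be interpolated from the flow field at the particle position each step, and additional forces (gravity, adhesion, capillary forces) would enter the same update.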