Image Restoration Using Joint Statistical Modeling in Space-Transform Domain
This paper presents a novel strategy for high-fidelity image restoration by
characterizing both local smoothness and nonlocal self-similarity of natural
images in a unified statistical manner. The main contributions are threefold.
First, from the perspective of image statistics, a joint statistical modeling
(JSM) in an adaptive hybrid space-transform domain is established, which offers
a powerful mechanism of combining local smoothness and nonlocal self-similarity
simultaneously to ensure a more reliable and robust estimation. Second, a new
form of the minimization functional for solving the image inverse problem is
formulated using JSM under a regularization-based framework. Finally, to make JSM
tractable and robust, a new Split-Bregman based algorithm is developed to
efficiently solve the above severely underdetermined inverse problem associated
with theoretical proof of convergence. Extensive experiments on image
inpainting, image deblurring and mixed Gaussian plus salt-and-pepper noise
removal applications verify the effectiveness of the proposed algorithm.
Comment: 14 pages, 18 figures, 7 tables, to be published in IEEE Transactions
on Circuits and Systems for Video Technology (TCSVT). High-resolution PDF version
and Code can be found at: http://idm.pku.edu.cn/staff/zhangjian/IRJSM
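The Split-Bregman scheme the abstract mentions can be illustrated on a much simpler problem than the paper's JSM functional: 1-D total-variation denoising. The following is a generic sketch under assumed parameters (`weight`, `lam`, iteration count are ours), not the authors' algorithm:

```python
import numpy as np

def shrink(x, t):
    """Soft-thresholding: the closed-form proximal step of the l1 term."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def tv_split_bregman(f, weight=0.5, lam=1.0, n_iter=60):
    """Denoise a 1-D signal by min_u 0.5||u-f||^2 + weight*||Du||_1."""
    n = f.size
    D = np.diff(np.eye(n), axis=0)       # forward-difference operator
    A = np.eye(n) + lam * D.T @ D        # system matrix of the u-step
    d = np.zeros(n - 1)                  # auxiliary variable, d ~ Du
    b = np.zeros(n - 1)                  # Bregman variable
    u = f.copy()
    for _ in range(n_iter):
        u = np.linalg.solve(A, f + lam * D.T @ (d - b))  # quadratic u-step
        Du = D @ u
        d = shrink(Du + b, weight / lam)                 # l1 d-step
        b = b + Du - d                                   # Bregman update
    return u

# toy demonstration on a piecewise-constant signal with Gaussian noise
clean = np.repeat([0.0, 1.0], 64)
noisy = clean + 0.3 * np.random.default_rng(3).standard_normal(clean.size)
denoised = tv_split_bregman(noisy)
```

The split turns the non-smooth TV term into a cheap shrinkage step, leaving only a well-conditioned linear solve per iteration.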
An Enhanced Fletcher-Reeves-Like Conjugate Gradient Method for Image Restoration
Noise is an unavoidable aspect of modern camera technology, causing a decline in the overall visual quality of images. Efforts are underway to diminish noise without compromising essential image features such as edges, corners, and other intricate structures. Numerous noise-reduction techniques have been proposed, each with its own benefits and drawbacks. Image denoising is a basic challenge in image processing. In this study we describe a two-phase approach for removing impulse noise. In the first phase, the adaptive median filter (AMF) identifies salt-and-pepper noise candidates. The second phase minimizes an edge-preserving regularization function using a novel hybrid conjugate gradient approach. To generate the new, improved search direction, the algorithm combines two well-known, successful conjugate gradient techniques. The descent property and global convergence of the new method are proven. The numerical results reveal that, when applied to image restoration, the new algorithm is superior to the classical Fletcher-Reeves (FR) method in terms of maintaining image quality and efficiency.
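The detection phase of such two-phase schemes can be sketched as follows. This is a minimal, generic adaptive median filter (function name and parameters are ours); the paper's CG-based restoration phase is not reproduced:

```python
import numpy as np

def amf_detect(img, smax=7):
    """Flag salt-and-pepper candidates with an adaptive median filter.
    A pixel is a noise candidate if it equals a window extreme once the
    window median is itself not an impulse."""
    pad = smax // 2
    padded = np.pad(img, pad, mode="reflect")
    flags = np.zeros(img.shape, dtype=bool)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            for r in range(1, pad + 1):     # grow the window 3x3, 5x5, ...
                win = padded[i + pad - r:i + pad + r + 1,
                             j + pad - r:j + pad + r + 1]
                zmin, zmed, zmax = win.min(), np.median(win), win.max()
                if zmin < zmed < zmax:      # median is not an impulse
                    flags[i, j] = not (zmin < img[i, j] < zmax)
                    break
            else:                           # window hit smax: treat as noise
                flags[i, j] = True
    return flags

# toy demonstration: inject impulses into a smooth gradient image
img = 0.2 + 0.6 * np.add.outer(np.arange(16.0), np.arange(16.0)) / 30.0
positions = [(3, 4), (8, 2), (12, 11)]
for k, (i, j) in enumerate(positions):
    img[i, j] = 0.0 if k % 2 == 0 else 1.0  # salt / pepper values
flags = amf_detect(img)
```

Only the flagged pixels would then enter the edge-preserving minimization, which is what keeps uncorrupted detail intact.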
Multiplicative Noise Removal Using L1 Fidelity on Frame Coefficients
We address the denoising of images contaminated with multiplicative noise,
e.g. speckle noise. Classical ways to solve such problems are filtering,
statistical (Bayesian) methods, variational methods, and methods that convert
the multiplicative noise into additive noise (using a logarithmic function),
shrinkage of the coefficients of the log-image data in a wavelet basis or in a
frame, and transform back the result using an exponential function. We propose
a method composed of several stages: we use the log-image data and apply a
deliberately suboptimal hard-thresholding to its curvelet transform; then we
apply a variational method in which we minimize a specialized criterion composed
of an L1 data-fitting term on the thresholded coefficients and a Total
Variation (TV) regularization term in the image domain; the restored image is
an exponential of the obtained minimizer, weighted in a way that the mean of
the original image is preserved. Our restored images combine the advantages of
shrinkage and variational methods and avoid their main drawbacks. For the
minimization stage, we propose a properly adapted fast minimization scheme
based on Douglas-Rachford splitting. The existence of a minimizer of our
specialized criterion being proven, we demonstrate the convergence of the
minimization scheme. The obtained numerical results outperform the main
alternative methods.
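The first stage of the pipeline, converting multiplicative noise to additive noise and hard-thresholding a frame expansion of the log-image, can be sketched as follows. The Fourier basis stands in here for the curvelet frame, and the variational (L1 + TV) refinement stage is not reproduced; function name and parameters are ours:

```python
import numpy as np

def log_threshold_denoise(f, keep=10):
    """Take the log (multiplicative -> additive noise), keep only the
    `keep` largest transform coefficients (hard-thresholding), transform
    back with exp, and rescale so the mean of the input is preserved."""
    v = np.log(f)
    V = np.fft.rfft(v)
    idx = np.argsort(np.abs(V))[:-keep]   # all but the `keep` largest
    V[idx] = 0.0                          # hard-thresholding
    u = np.exp(np.fft.irfft(V, n=f.size))
    return u * (f.mean() / u.mean())      # mean-preserving reweighting

# toy demonstration: piecewise-constant signal with gamma-distributed speckle
clean = np.repeat([1.0, 2.0], 128)
speckle = np.random.default_rng(1).gamma(10.0, 1.0 / 10.0, clean.shape)
noisy = clean * speckle                   # multiplicative noise, mean 1
denoised = log_threshold_denoise(noisy)
```

The exponential map brings the estimate back to the image domain, and the mean rescaling plays the role of the weighting described in the abstract.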
Solving Unconstrained Optimization Problems by a New Conjugate Gradient Method with Sufficient Descent Property
Some conjugate gradient methods have strong convergence properties but are numerically unstable, and vice versa. Improving such methods to obtain both strong convergence and numerical stability is an interesting idea. In this paper, a new hybrid conjugate gradient method is introduced, based on the Fletcher (CD) formula, which has strong convergence, and the Liu and Storey (LS) formula, which gives good numerical results. The new directions satisfy the sufficient descent property independently of the line search. Under some mild assumptions, the global convergence of the new hybrid method is proved. Numerical results on unconstrained CUTEst test problems show that the new algorithm is very robust and efficient.
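A hybrid CD/LS direction can be sketched on a quadratic test problem. The combination rule used below, a simple non-negative truncation `beta = max(0, min(beta_LS, beta_CD))`, is an illustrative assumption; the paper's exact hybridization may differ, as may its line search:

```python
import numpy as np

def hybrid_cg(A, b, n_iter=100, tol=1e-10):
    """Minimise f(x) = 0.5 x'Ax - b'x with a hybrid CD/LS direction."""
    x = np.zeros_like(b)
    g = A @ x - b                        # gradient of the quadratic
    d = -g
    for _ in range(n_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = -(g @ d) / (d @ A @ d)   # exact line search on a quadratic
        x = x + alpha * d
        g_new = A @ x - b
        denom = -(d @ g)                 # common CD/LS denominator
        beta_cd = (g_new @ g_new) / denom            # Fletcher (CD)
        beta_ls = (g_new @ (g_new - g)) / denom      # Liu-Storey (LS)
        beta = max(0.0, min(beta_ls, beta_cd))       # illustrative hybrid
        d = -g_new + beta * d            # new search direction
        g = g_new
    return x

# toy demonstration on a symmetric positive-definite system
rng = np.random.default_rng(7)
M = rng.standard_normal((20, 20))
A = M.T @ M + 20.0 * np.eye(20)
b = rng.standard_normal(20)
x = hybrid_cg(A, b)
```

With exact line search on a quadratic the two formulas coincide with classical CG, so the solver recovers the exact minimizer; the hybrid rule only matters for general nonlinear objectives with inexact line searches.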
A Framework for Directional and Higher-Order Reconstruction in Photoacoustic Tomography
Photoacoustic tomography is a hybrid imaging technique that combines high
optical tissue contrast with high ultrasound resolution. Direct reconstruction
methods such as filtered backprojection, time reversal and least squares suffer
from curved-line artefacts and blurring, especially in the case of limited angles
or strong noise. In recent years, there has been great interest in regularised
iterative methods. These methods employ prior knowledge on the image to provide
higher quality reconstructions. However, easy comparisons between regularisers
and their properties are limited, since many tomography implementations heavily
rely on the specific regulariser chosen. To overcome this bottleneck, we
present a modular reconstruction framework for photoacoustic tomography. It
enables easy comparisons between regularisers with different properties, e.g.
nonlinear, higher-order or directional. We solve the underlying minimisation
problem with an efficient first-order primal-dual algorithm. Convergence rates
are optimised by choosing an operator dependent preconditioning strategy. Our
reconstruction methods are tested on challenging 2D synthetic and experimental
data sets. They outperform direct reconstruction approaches for strong noise
levels and limited angle measurements, offering immediate benefits in terms of
acquisition time and quality. This work provides a basic platform for the
investigation of future advanced regularisation methods in photoacoustic
tomography.
Comment: submitted to "Physics in Medicine and Biology". Changes from v1 to
v2: regularisation with directional wavelets has been added; new experimental
tests have been included.
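The first-order primal-dual algorithm underlying such frameworks can be illustrated on the simplest regularised problem, 1-D TV denoising. This is a plain, unpreconditioned Chambolle-Pock sketch with parameters of our choosing; the paper's forward operator and operator-dependent preconditioning are not reproduced:

```python
import numpy as np

def tv_pdhg(f, weight=0.3, n_iter=200):
    """Primal-dual iteration for min_u 0.5||u-f||^2 + weight*||Du||_1."""
    n = f.size
    tau = sigma = 0.4                   # tau*sigma*||D||^2 = 0.64 < 1
    u = f.copy()
    u_bar = u.copy()
    p = np.zeros(n - 1)                 # dual variable for Du
    for _ in range(n_iter):
        # dual ascent + projection onto the l-inf ball of radius `weight`
        p = np.clip(p + sigma * np.diff(u_bar), -weight, weight)
        u_old = u
        # D^T p with zero boundary conditions
        DTp = np.concatenate(([-p[0]], -np.diff(p), [p[-1]]))
        # primal descent; prox of 0.5||u-f||^2 is a weighted average
        u = (u_old - tau * DTp + tau * f) / (1.0 + tau)
        u_bar = 2 * u - u_old           # over-relaxation step
    return u

# toy demonstration on a noisy step signal
clean = np.repeat([0.0, 1.0], 64)
noisy = clean + 0.3 * np.random.default_rng(5).standard_normal(clean.size)
denoised = tv_pdhg(noisy)
```

Swapping the regulariser here means changing only the dual projection step, which is precisely the modularity the framework advertises.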
Mitigation of impulsive noise in OFDM channels using ANN technique
Abstract: Orthogonal frequency division multiplexing (OFDM) is a modulation scheme used to transmit signals across power line communication (PLC) channels owing to its robustness against several known PLC problems. However, this scheme is strongly affected by impulsive noise (IN), which often corrupts the transmitted bits. Different impulsive-noise error-correcting methods have been introduced to remove impulsive noise in OFDM systems; however, these techniques suffer from limitations and require a high signal-to-noise ratio (SNR) to operate. In this paper, an effective impulsive-noise error-correcting technique is designed using three well-known artificial neural network training algorithms (Levenberg-Marquardt, scaled conjugate gradient, and Bayesian regularization). The findings suggest that both the Bayesian regularization and Levenberg-Marquardt ANN techniques can effectively remove the impulsive noise present in an OFDM channel while requiring the least SNR.
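The core idea, training a small network to map impulse-corrupted samples back to clean ones, can be sketched as below. Everything here is a stand-in: plain gradient descent replaces the Levenberg-Marquardt, scaled-CG, and Bayesian-regularization trainers named in the abstract, and a sine wave replaces a real OFDM/PLC waveform:

```python
import numpy as np

rng = np.random.default_rng(0)
N, win, H = 2000, 5, 16
t = np.arange(N + win)
clean = np.sin(2 * np.pi * t / 50)
noisy = clean.copy()
hits = rng.random(noisy.shape) < 0.05             # 5% impulsive-noise rate
noisy[hits] += rng.choice([-5.0, 5.0], size=hits.sum())

X = np.stack([noisy[i:i + win] for i in range(N)])  # sliding windows
y = clean[win // 2: win // 2 + N]                   # clean centre sample

# one hidden tanh layer, trained by plain gradient descent on MSE
W1 = rng.normal(0.0, 0.5, (win, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.5, (H, 1));   b2 = np.zeros(1)
lr, first_loss = 0.01, None
for epoch in range(300):
    h = np.tanh(X @ W1 + b1)                # hidden layer
    pred = (h @ W2 + b2).ravel()
    err = pred - y
    loss = float((err ** 2).mean())
    if first_loss is None:
        first_loss = loss                   # loss of the random init
    g = 2.0 * err[:, None] / N              # dL/dpred
    gW2, gb2 = h.T @ g, g.sum(0)
    gh = (g @ W2.T) * (1.0 - h ** 2)        # backprop through tanh
    gW1, gb1 = X.T @ gh, gh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
```

Second-order trainers such as Levenberg-Marquardt typically reach a lower loss in far fewer epochs on small networks like this, which is one reason they are favoured in the paper's comparison.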
A Review of Fault Diagnosing Methods in Power Transmission Systems
Transient stability is important in power systems. Disturbances such as faults must be segregated to restore transient stability. A comprehensive review of fault-diagnosing methods in power transmission systems is presented in this paper. Typically, voltage and current samples are used for analysis. Three tasks, fault detection, classification, and location, are presented separately to convey a more logical and comprehensive understanding of the concepts. Feature extraction and transformation, together with dimensionality-reduction methods, are discussed. Fault classification and location techniques largely use artificial intelligence (AI) and signal processing methods. After the discussion of the overall methods and concepts, advancements and future aspects are discussed. Generalized strengths and weaknesses of different AI and machine-learning-based algorithms are assessed. A comparison of different fault detection, classification, and location methods is also presented, considering features, inputs, complexity, system used, and results. This paper may serve as a guideline for researchers to understand the different methods and techniques in this field.