
    Blind Image Denoising using Supervised and Unsupervised Learning

    Image denoising is an important problem in image processing and computer vision. In real-world applications, denoising is often a pre-processing step (a so-called low-level vision task) that precedes higher-level tasks such as image segmentation, object detection, and recognition. Traditional denoising algorithms often make idealized assumptions about the noise (e.g., additive white Gaussian or Poisson). The noise in real-world images, such as high-ISO photos and microscopic fluorescence images, is more complex, however, and the performance of traditional approaches degrades rapidly on such data. This blind image denoising problem has remained open in the literature. In this project, we report two competing approaches to blind image denoising, supervised and unsupervised learning, and describe the principles, performance, differences, merits, and technical potential of several blind denoising algorithms.

    Supervised learning trains a regression model, such as a CNN, on a large number of pairs of corrupted and clean images; the feed-forward convolutional neural network learns to separate the noise from the image. CNNs are attractive because their deep architecture exploits image characteristics, they parallelize well on modern GPUs, and advances in regularization and learning methods make them practical to train. Integrating residual learning and batch normalization is effective in speeding up training and improving denoising performance. Here, basic statistical reasoning about signal reconstruction is applied to map corrupted observations to clean targets.

    Recently, deep learning algorithms that do not require ground-truth training images have been investigated. Noise2Noise (N2N) is an unsupervised training method developed for several applications, including denoising under Gaussian and Poisson noise. With N2N, we observe that a network can often learn to turn bad images into good ones just by looking at bad images; an experimental study shows that training on noisy targets reaches performance levels close to training on clean targets. Noise2Void (N2V) is a self-supervised method that goes one step further: it requires neither clean images nor paired noisy images for training, and it is trained directly on the very image that is to be denoised, which other methods cannot do. This makes it useful for datasets where neither clean references nor noisy image pairs are available, e.g., biomedical image data.
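    The supervised model described above follows the residual-learning recipe. Below is a minimal sketch, assuming PyTorch; the class name DnCNNLite and the depth and width are illustrative choices, not the exact architecture used in this project.

```python
# Minimal DnCNN-style residual denoiser (sketch; hyperparameters are
# illustrative assumptions, not the exact configuration of the report).
import torch
import torch.nn as nn

class DnCNNLite(nn.Module):
    """Feed-forward CNN that predicts the noise residual."""
    def __init__(self, channels=1, features=64, depth=8):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1),
                  nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            # Batch normalization between convolutions speeds up training,
            # as noted in the abstract.
            layers += [nn.Conv2d(features, features, 3, padding=1),
                       nn.BatchNorm2d(features),
                       nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(features, channels, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # Residual learning: the network estimates the noise, and the
        # denoised image is the input minus that estimate.
        return x - self.body(x)

model = DnCNNLite()
noisy = torch.rand(1, 1, 64, 64)
target = torch.rand(1, 1, 64, 64)  # clean image (supervised) or a second
                                   # noisy shot of the scene (Noise2Noise)
loss = nn.functional.mse_loss(model(noisy), target)
loss.backward()
```

    Noise2Void removes the need for any second image by hiding a few pixels and predicting them from their surroundings. The masking below (substituting a neighbouring pixel value) is a simplified stand-in for the scheme in the N2V paper, reusing the hypothetical model above.

```python
def n2v_step(model, img, n_masked=64):
    """One Noise2Void-style training step on a single noisy image (sketch)."""
    _, _, h, w = img.shape
    ys = torch.randint(1, h - 1, (n_masked,))
    xs = torch.randint(1, w - 1, (n_masked,))
    masked = img.clone()
    # Replace each chosen pixel with a neighbour so the network cannot
    # simply copy its own input value at that location.
    masked[0, 0, ys, xs] = img[0, 0, ys + 1, xs]
    pred = model(masked)
    # The loss is computed only at the masked (blind-spot) pixels.
    return ((pred[0, 0, ys, xs] - img[0, 0, ys, xs]) ** 2).mean()
```

    Note how little changes between the three regimes: supervised training and Noise2Noise differ only in the target tensor, while Noise2Void drops the target entirely and supervises through the blind-spot mask.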

    Motion blur in digital images - analysis, detection and correction of motion blur in photogrammetry

    Unmanned aerial vehicles (UAVs) have become an interesting and active research topic in photogrammetry. Current research is based on images acquired by a UAV, which have high ground resolution and good spectral and radiometric resolution due to the low flight altitude combined with a high-resolution camera. UAV image flights are also cost-effective and have become attractive for many applications, including change detection in small-scale areas. One of the main problems preventing full automation of UAV image processing is the degradation caused by blur from camera movement during image acquisition. This can result from the normal flight movement of the UAV as well as from strong winds, turbulence, or sudden operator inputs. Such blur disturbs the visual analysis and interpretation of the data, causes errors, and can degrade the accuracy of automatic photogrammetric processing algorithms. The detection and removal of blurred images is currently done manually, which is both time-consuming and error-prone, particularly for large image sets. Increasing the quality of data processing therefore requires an automated process that is both reliable and fast.

    This thesis demonstrates the negative effect that blurred images have on photogrammetric processing. It shows that even small amounts of blur have serious impacts on target detection and slow down processing because human intervention is required; larger blur can make an image completely unusable, so such images need to be excluded from processing. To exclude blurred images from large datasets, an algorithm was developed that detects blur caused by linear camera displacement. The method is modeled on how humans detect blur: people judge whether an image is blurred most reliably by comparing it to other images. The algorithm simulates this procedure by creating a comparison image through image processing; because the comparison image is generated internally, the method is independent of additional images. However, the calculated blur value, named SIEDS (saturation image edge difference standard deviation), is not on its own an absolute number from which to judge whether an image is blurred. To reach a reliable judgement of image sharpness, the SIEDS value has to be compared with the SIEDS values of other images in the same dataset. This algorithm enables the exclusion of blurred images, so that photogrammetric processing can proceed without them.

    It is also possible to restore blurred images with deblurring techniques. Deblurring is a widely researched topic, often based on Wiener or Richardson-Lucy deconvolution, both of which require precise knowledge of the blur path and extent. Even with knowledge of the blur kernel, the correction introduces errors such as ringing, and the deblurred image appears muddy and not completely sharp. In the study reported here, overlapping images are used to support the deblurring process, and an algorithm based on the Fourier transform is presented. It works well in flat areas, but the need for geometrically correct sharp images for deblurring may limit its application. Another way to enhance the images is unsharp masking, which improves them significantly and makes photogrammetric processing more successful. However, deblurring must focus on geometrically correct restoration to ensure geometrically correct measurements. Furthermore, a novel edge-shifting approach was developed that aims at geometrically correct deblurring; the idea appears promising but requires more advanced programming.
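    As an illustration of the detection idea, here is a minimal sketch of a SIEDS-style score, assuming OpenCV and NumPy. The thesis's exact filtering pipeline is not reproduced; the Gaussian kernel size and the Laplacian edge operator are assumptions standing in for whatever the algorithm actually uses.

```python
import cv2
import numpy as np

def sieds_like(bgr):
    """No-reference blur score on the saturation channel (sketch).

    The comparison image is created internally by re-blurring: a sharp
    image loses a lot of edge energy when re-blurred, while an already
    blurred image loses little, so the edge difference separates the two.
    """
    sat = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)[:, :, 1].astype(np.float32)
    reblurred = cv2.GaussianBlur(sat, (9, 9), 0)
    diff = cv2.Laplacian(sat, cv2.CV_32F) - cv2.Laplacian(reblurred, cv2.CV_32F)
    # The standard deviation of the edge difference is only meaningful
    # relative to other images of the same dataset, as the abstract stresses.
    return float(np.std(diff))
```

    On the correction side, the classical baselines named in the abstract are available in scikit-image. The uniform PSF below is a crude stand-in for a real motion-blur kernel, which in practice has to be estimated from the camera displacement.

```python
import numpy as np
from scipy.signal import convolve2d
from skimage import data, img_as_float, restoration, filters

sharp = img_as_float(data.camera())
psf = np.ones((5, 5)) / 25.0  # assumed PSF; real motion blur is directional
blurred = convolve2d(sharp, psf, mode="same", boundary="symm")

# Wiener and Richardson-Lucy deconvolution both require the PSF, and ringing
# near strong edges is exactly the failure mode described above.
wiener_est = restoration.wiener(blurred, psf, balance=0.1)
rl_est = restoration.richardson_lucy(blurred, psf, 30)

# Unsharp masking needs no kernel knowledge, which is why it is the more
# robust enhancement step, though it does not guarantee geometric accuracy.
sharpened = filters.unsharp_mask(blurred, radius=2, amount=1.5)
```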