229 research outputs found

    Adapting image processing and clustering methods to productive efficiency analysis and benchmarking: A cross disciplinary approach

    Get PDF
    This dissertation explores interdisciplinary applications of computational methods in quantitative economics. In particular, it focuses on problems in productive efficiency analysis and benchmarking that are hard to approach or solve using conventional methods. In productive efficiency analysis, null or zero efficiency estimates are often produced when the residuals have the wrong skewness or too low a kurtosis relative to the distributional assumption on the inefficiency term. This thesis uses deconvolution, a technique traditionally used in image processing for noise removal, to develop a fully non-parametric method for efficiency estimation. Publications 1 and 2 are devoted to this topic, focusing on the cross-sectional and panel cases, respectively. Through Monte Carlo simulations and empirical applications to Finnish electricity distribution network data and Finnish banking data, the results show that the Richardson-Lucy blind deconvolution method is insensitive to distributional assumptions and robust to data noise levels and heteroscedasticity in efficiency estimation. In benchmarking, which can be the next step after productive efficiency analysis, the 'best practice' target may not operate in the same environment as the DMU under study. This can render the benchmarks impractical to follow and hinder managers from making correct decisions on the performance improvement of a DMU. This dissertation proposes a clustering-based benchmarking framework in Publication 3. The empirical study on the Finnish electricity distribution network shows that the novelty of the proposed framework lies not only in its consideration of differences in the operational environment among DMUs, but also in its great flexibility. We conducted a comparative analysis of different combinations of clustering and efficiency estimation techniques using computational simulations and empirical applications to Finnish electricity distribution network data, on the basis of which Publication 4 specifies an efficient combination for benchmarking in energy regulation. This dissertation endeavors to solve problems in quantitative economics using interdisciplinary approaches. The methods developed benefit this field, and the way we approach the problems opens a new perspective.
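    The Richardson-Lucy scheme the dissertation builds on is, at its core, a simple multiplicative fixed-point iteration. The sketch below shows the classical non-blind update in one dimension, assuming a known kernel; the blind variants the publications rely on alternate this same update between the signal and the kernel. Function and parameter names are illustrative, not taken from the thesis.

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=30):
    """Classical (non-blind) Richardson-Lucy iteration, 1-D sketch.

    observed : blurred, noisy, non-negative signal
    psf      : known point spread function, normalised to sum to 1
    """
    observed = np.asarray(observed, dtype=float)
    estimate = np.full_like(observed, observed.mean())  # flat initial guess
    psf_mirror = psf[::-1]
    for _ in range(n_iter):
        conv = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(conv, 1e-12)      # guard divide-by-zero
        estimate *= np.convolve(ratio, psf_mirror, mode="same")
    return estimate
```

    The multiplicative form keeps the estimate non-negative throughout, which is one reason the method suits quantities like inefficiency terms that cannot go below zero.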

    Efficient Methodologies for Single-Image Blind Deconvolution and Deblurring

    Get PDF

    Recent Progress in Image Deblurring

    Full text link
    This paper comprehensively reviews recent developments in image deblurring, covering non-blind/blind and spatially invariant/variant deblurring techniques. These techniques share the objective of inferring a latent sharp image from one or several corresponding blurry images, while blind deblurring techniques must also derive an accurate blur kernel. Given the critical role of image restoration in modern imaging systems, which must provide high-quality images under complex conditions such as motion, undesirable lighting, and imperfect system components, image deblurring has attracted growing attention in recent years. From the viewpoint of how they handle ill-posedness, which is a crucial issue in deblurring tasks, existing methods can be grouped into five categories: Bayesian inference frameworks, variational methods, sparse representation-based methods, homography-based modeling, and region-based methods. Despite this progress, image deblurring, especially the blind case, remains limited by complex application conditions that make the blur kernel spatially variant and hard to obtain. This review provides a holistic understanding of and deep insight into image deblurring. An analysis of the empirical evidence for representative methods, practical issues, and a discussion of promising future directions are also presented. Comment: 53 pages, 17 figures
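    All the surveyed methods start from the same forward model, y = k ⊛ x + n: a latent sharp image x convolved with a blur kernel k, plus noise n. A minimal illustration of why regularisation is needed for this ill-posed inversion is the classical Wiener filter, sketched below for the non-blind case; the kernel is assumed known, and the noise-to-signal ratio `nsr` stands in for a proper noise estimate.

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, nsr=0.01):
    """Non-blind Wiener deconvolution in the Fourier domain (sketch).

    blurred : 2-D blurred image
    kernel  : 2-D blur kernel (any size; zero-padded to the image shape)
    nsr     : assumed noise-to-signal power ratio, the regulariser
    """
    B = np.fft.fft2(blurred)
    K = np.fft.fft2(kernel, s=blurred.shape)
    # Naive inversion B / K blows up where |K| is small; the nsr term
    # damps exactly those frequencies -- this is the regularisation.
    X = np.conj(K) * B / (np.abs(K) ** 2 + nsr)
    return np.real(np.fft.ifft2(X))
```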

    Application of regularized Richardson–Lucy algorithm for deconvolution of confocal microscopy images

    Get PDF
    Although confocal microscopes have a considerably smaller contribution of out-of-focus light than widefield microscopes, confocal images can still be enhanced mathematically if the optical and data-acquisition effects are accounted for. For that, several deconvolution algorithms have been proposed. As a practical solution, maximum-likelihood algorithms with regularization have been used. However, the choice of regularization parameters is often unclear, although it has a considerable effect on the result of the deconvolution process. The aims of this work were to find good estimates of the deconvolution parameters and to develop an open-source software package that would allow testing of different deconvolution algorithms and be easy to use in practice. Here, the Richardson–Lucy algorithm has been implemented together with total variation regularization in the open-source software package IOCBio Microscope. The influence of total variation regularization on the deconvolution process is determined by a single parameter. We derived a formula to estimate this regularization parameter automatically from the images as the algorithm progresses. To assess the effectiveness of this algorithm, synthetic images were composed on the basis of confocal images of rat cardiomyocytes. From the analysis of the deconvolved results, we determined under which conditions our estimate of the total variation regularization parameter gives good results. The estimated total variation regularization parameter can be monitored during the deconvolution process and used as a stopping criterion. An inverse relation between the optimal regularization parameter and the peak signal-to-noise ratio of an image is shown. Finally, we demonstrate the use of the developed software by deconvolving images of rat cardiomyocytes with stained mitochondria and sarcolemma obtained by confocal and widefield microscopes.
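    The paper's update couples the Richardson–Lucy step with a total-variation correction controlled by that single parameter. The sketch below transcribes the standard RL-TV multiplicative update in one dimension with a fixed regularisation weight `lam`; the paper's contribution, estimating this parameter automatically as the iterations progress, is not reproduced here.

```python
import numpy as np

def rl_tv(observed, psf, lam=0.002, n_iter=50):
    """Richardson-Lucy with total-variation regularisation, 1-D sketch.

    lam plays the role of the single TV parameter discussed in the paper,
    fixed here rather than estimated automatically during the iterations.
    """
    observed = np.asarray(observed, dtype=float)
    u = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1]
    eps = 1e-8
    for _ in range(n_iter):
        # Standard Richardson-Lucy multiplicative step
        conv = np.convolve(u, psf, mode="same")
        rl = u * np.convolve(observed / np.maximum(conv, eps),
                             psf_mirror, mode="same")
        # TV correction: divergence of the normalised gradient of u
        g = np.gradient(u)
        div = np.gradient(g / np.maximum(np.abs(g), eps))
        u = rl / np.maximum(1.0 - lam * div, eps)
    return u
```

    Monitoring how `lam` would have to change to keep the correction balanced against the data term is what the paper turns into a stopping criterion.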

    Computational Imaging Approach to Recovery of Target Coordinates Using Orbital Sensor Data

    Get PDF
    This dissertation addresses the components necessary for simulating image-based recovery of a target's position using orbital image sensors. Each component is considered in detail, focusing on the effect that design choices and system parameters have on the accuracy of the position estimate. Changes in sensor resolution, varying amounts of blur, differences in image noise level, the selection of algorithms used for each component, and lag introduced by excessive processing time all contribute to the accuracy of the recovered target coordinates. Using physical targets and sensors in this scenario would be cost-prohibitive in the exploratory setting posed; therefore, a simulated target path is generated using Bezier curves, which approximate representative paths followed by the targets of interest. Orbital trajectories for the sensors are designed on an elliptical model representative of the motion of physical orbital sensors. Images from each sensor are simulated based on the position and orientation of the sensor, the position of the target, and the imaging parameters selected for the experiment (resolution, noise level, blur level, etc.). Post-processing of the simulated imagery seeks to reduce noise and blur and to increase resolution. The only information available to a fully implemented system for calculating the target position are the sensor position and orientation vectors and the images from each sensor. From these data we develop a reliable method of recovering the target position and analyze the impact on near-real-time processing. We also discuss the influence of adjustments to system components on overall capabilities and address the potential system size, weight, and power requirements of realistic implementation approaches.
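    As an illustration of the path-simulation step, the sketch below samples a Bezier curve from a set of control points using De Casteljau's recursion. The waypoints are invented for the example; the dissertation's actual path parameters are not reproduced.

```python
import numpy as np

def bezier_path(control_points, n_samples=100):
    """Sample a Bezier curve from its control points (De Casteljau).

    Returns n_samples points along the smooth path defined by the
    control polygon, for any number of control points and dimensions.
    """
    pts = np.asarray(control_points, dtype=float)
    path = []
    for t in np.linspace(0.0, 1.0, n_samples):
        p = pts.copy()
        while len(p) > 1:                        # De Casteljau recursion
            p = (1.0 - t) * p[:-1] + t * p[1:]
        path.append(p[0])
    return np.array(path)

# e.g. a smooth 2-D path through four hypothetical waypoints
path = bezier_path([(0, 0), (2, 5), (6, 4), (9, 1)], n_samples=50)
```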

    Selected Topics in Bayesian Image/Video Processing

    Get PDF
    In this dissertation, three problems in image deblurring, inpainting, and virtual content insertion are solved in a Bayesian framework. Camera shake, motion, or defocus during exposure leads to image blur. Single-image deblurring has achieved remarkable results by solving a MAP problem, but there is no perfect solution due to inaccurate image priors and estimators. In the first part, a new non-blind deconvolution algorithm is proposed. The image prior is represented by a Gaussian Scale Mixture (GSM) model, which is estimated from non-blurry images as training data. Our experimental results on a total of twelve natural images show that more details are restored than by previous deblurring algorithms. In augmented reality, it is a challenging problem to insert virtual content into video streams by blending it with spatial and temporal information. A generic virtual content insertion (VCI) system is introduced in the second part. To the best of my knowledge, it is the first successful system to insert content onto building facades from street-view video streams. Without knowing the camera positions, the geometry model of a building facade is established using a combined detection and tracking strategy. Moreover, motion stabilization, dynamic registration, and color harmonization contribute to the excellent augmentation performance of this automatic VCI system. Coding efficiency is an important objective in video coding. In recent years, video coding standards have developed by adding new tools; however, this requires numerous modifications to complex coding systems. It is therefore desirable to consider alternative standard-compliant approaches that do not modify the codec structure. In the third part, an exemplar-based data-pruning video compression scheme for intra frames is introduced. Data pruning is used as a pre-processing tool to remove part of the video data before encoding. At the decoder, missing data are reconstructed by a sparse linear combination of similar patches. The novelty is to create a patch library to exploit the similarity of patches. The scheme achieves an average 4% bit-rate reduction on some high-definition videos.
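    The decoder-side step in the third part, reconstructing pruned data as a linear combination of similar patches, can be made concrete with a small sketch. The function below, whose name, arguments, and k-nearest selection are illustrative assumptions rather than the dissertation's actual scheme, ranks library patches by distance on the surviving pixels and fits least-squares combination weights on those pixels.

```python
import numpy as np

def reconstruct_patch(observed, mask, library, k=5):
    """Fill a pruned patch from an exemplar patch library (sketch).

    observed : flattened patch; pruned pixels hold arbitrary values
    mask     : boolean array, True where pixels survived pruning
    library  : (n_patches, patch_size) array of exemplar patches
    k        : number of most similar exemplars to combine
    """
    # Rank exemplars by distance on the surviving pixels only
    dist = np.linalg.norm(library[:, mask] - observed[mask], axis=1)
    nearest = library[np.argsort(dist)[:k]]
    # Least-squares weights fitted on the known pixels...
    w, *_ = np.linalg.lstsq(nearest[:, mask].T, observed[mask], rcond=None)
    # ...then applied to the whole patches to fill in the missing ones
    return nearest.T @ w
```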

    Blind Image Deconvolution using Approximate Greatest Common Divisor and Approximate Polynomial Factorisation

    Get PDF
    Images play a significant role in diverse areas of everyday modern life. Examples of areas where the use of images is routine include medicine, forensic investigations, engineering applications, and astronomical science. The procedures and methods that depend on image processing would benefit considerably from images that are free of blur. Most images are unfortunately affected by noise and blur that result from the practical limitations of image sourcing systems. The blurring and noise effects render the image less useful. An efficient method for image restoration is hence important for many applications. Restoration of true images from blurred images is the inverse of the naturally occurring problem of true image convolution through a blurring function. The deconvolution of images from blurred images is a non-trivial task. One challenge is that the computation of the mathematical function that represents the blurring process, known as the point spread function (PSF), is an ill-posed problem, i.e. an infinite number of solutions are possible for given inexact data. The blind image deconvolution (BID) problem is the central subject of this thesis. There are a number of approaches for solving the BID problem, including statistical methods and linear algebraic methods. The approach adopted in this research study falls within the class of linear algebraic methods. Polynomial linear algebra offers a way of computing the PSF size and its components without requiring any prior knowledge about the true image or the blurring PSF. This research study has developed a BID method for image restoration based on approximate greatest common divisor (AGCD) algorithms, specifically, the approximate polynomial factorization (APF) algorithm of two polynomials. The developed method uses the Sylvester resultant matrix algorithm in the computation of the AGCD and the QR decomposition for computing the degree of the AGCD. It is shown that the AGCD is equal to the PSF and that the deblurred image can be computed from the coprime polynomials. In practice, the PSF can be spatially variant or invariant. PSF spatial invariance means that the blurred image pixels are the convolution of the true image pixels with the same PSF. Some PSF bivariate functions, in particular separable functions, can be further simplified as the multiplication of two univariate polynomials. This research study focuses on the invariant separable and non-separable PSF cases. The performance of state-of-the-art image restoration methods varies in terms of computational speed and accuracy. In addition, most of these methods require prior knowledge about the true image and the blurring function, which in a significant number of applications is an impractical requirement. The development of image restoration methods that require no prior knowledge about the true image and the blurring function is hence desirable. Previous attempts at developing BID methods resulted in methods that are robust against noise perturbations; however, their good performance is limited to blurring functions of small size. In addition, even for blurring functions of small size, these methods require the size of the blurring function to be known and an estimate of the noise level present in the blurred image.
The developed method performs better than the other state-of-the-art methods; in particular, it determines the correct size and coefficients of the PSF and then uses them to recover the original image. It does not require any prior knowledge about the PSF, which is a prerequisite for the other methods.
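    The key objects in the method are easy to state concretely. The sketch below builds the Sylvester resultant matrix of two univariate polynomials and reads off the GCD degree from its rank deficiency, deg(gcd) = m + n − rank(S). The thesis computes this rank with a QR decomposition and handles noise via AGCD tolerances; plain SVD thresholding is used here for brevity.

```python
import numpy as np

def sylvester(f, g):
    """Sylvester resultant matrix of two polynomials given by their
    coefficient vectors (highest degree first)."""
    m, n = len(f) - 1, len(g) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):                 # n shifted copies of f
        S[i, i:i + m + 1] = f
    for i in range(m):                 # m shifted copies of g
        S[n + i, i:i + n + 1] = g
    return S

def gcd_degree(f, g, tol=1e-8):
    """Degree of the (approximate) GCD: deg = m + n - rank(S)."""
    S = sylvester(f, g)
    rank = int(np.sum(np.linalg.svd(S, compute_uv=False) > tol))
    return S.shape[0] - rank

# e.g. f = (x-1)(x-2) and g = (x-1)(x-3) share one root: GCD degree is 1
print(gcd_degree([1, -3, 2], [1, -4, 3]))   # -> 1
```

    In the deblurring setting, f and g are taken from the blurred image data, the computed AGCD plays the role of the PSF, and the deblurred image follows from the coprime factors.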
