44 research outputs found

    Adapting image processing and clustering methods to productive efficiency analysis and benchmarking: A cross disciplinary approach

    This dissertation explores interdisciplinary applications of computational methods in quantitative economics. In particular, it focuses on problems in productive efficiency analysis and benchmarking that are hard to approach or solve using conventional methods. In productive efficiency analysis, null or zero values are often produced due to wrong skewness or low kurtosis of the inefficiency distribution relative to the distributional assumption on the inefficiency term. This thesis uses the deconvolution technique, traditionally used in image processing for noise removal, to develop a fully non-parametric method for efficiency estimation. Publications 1 and 2 are devoted to this topic, focusing on the cross-sectional case and the panel case, respectively. Through Monte Carlo simulations and empirical applications to Finnish electricity distribution network data and Finnish banking data, the results show that the Richardson-Lucy blind deconvolution method is insensitive to the distributional assumptions and robust to data noise levels and heteroscedasticity in efficiency estimation. In benchmarking, which can be the next step after productive efficiency analysis, the 'best practice' target may not operate under the same operational environment as the DMU under study. This would render the benchmarks impractical to follow and hinder managers from making correct decisions on the performance improvement of a DMU. This dissertation proposes a clustering-based benchmarking framework in Publication 3. The empirical study on the Finnish electricity distribution network reveals that the novelty of the proposed framework lies not only in its consideration of the differences in operational environment among DMUs, but also in its extreme flexibility.
    We conducted a comparative analysis of different combinations of clustering and efficiency estimation techniques, using computational simulations and empirical applications to Finnish electricity distribution network data, based on which Publication 4 specifies an efficient combination for benchmarking in energy regulation. This dissertation endeavors to solve problems in quantitative economics using interdisciplinary approaches. The methods developed benefit this field, and the way we approach the problems opens a new perspective
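    The Richardson-Lucy scheme named above is, at its core, a multiplicative fixed-point iteration. As a rough illustration only (not the publications' code, and non-blind: the kernel is assumed known here, whereas the blind variant also estimates it), a minimal 1-D version might look like:

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=50):
    """Minimal 1-D Richardson-Lucy deconvolution (illustrative sketch).

    observed : noisy, blurred signal (non-negative)
    psf      : point-spread function, assumed known here; the blind
               variant used in the publications also estimates the psf.
    """
    psf = psf / psf.sum()            # normalise the kernel
    psf_flipped = psf[::-1]          # adjoint of the convolution
    estimate = np.full_like(observed, observed.mean())
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)  # avoid division by zero
        estimate *= np.convolve(ratio, psf_flipped, mode="same")
    return estimate
```

    Because each update is multiplicative, a nonnegative starting point keeps every iterate nonnegative, which is one reason the method is attractive for density-like quantities such as an inefficiency distribution.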

    Image Recovery Using Partitioned-Separable Paraboloidal Surrogate Coordinate Ascent Algorithms

    Iterative coordinate ascent algorithms have been shown to be useful for image recovery, but are poorly suited to parallel computing due to their sequential nature. This paper presents a new fast-converging parallelizable algorithm for image recovery that can be applied to a very broad class of objective functions. This method is based on paraboloidal surrogate functions and a concavity technique. The paraboloidal surrogates simplify the optimization problem. The idea of the concavity technique is to partition pixels into subsets that can be updated in parallel to reduce the computation time. For fast convergence, pixels within each subset are updated sequentially using a coordinate ascent algorithm. The proposed algorithm is guaranteed to monotonically increase the objective function and intrinsically accommodates nonnegativity constraints. A global convergence proof is summarized. Simulation results show that the proposed algorithm requires less elapsed time for convergence than iterative coordinate ascent algorithms. With four parallel processors, the proposed algorithm yields a speedup factor of 3.77 relative to single-processor coordinate ascent algorithms for a three-dimensional (3-D) confocal image restoration problem.
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/86024/1/Fessler72.pd
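    The partition-then-update-sequentially idea can be illustrated on a toy concave quadratic objective. The hypothetical sketch below omits the paraboloidal surrogate construction entirely (a quadratic objective is already its own surrogate) and runs serially; it only shows the group-wise update order and the clipped 1-D maximization that keeps the nonnegativity constraint:

```python
import numpy as np

def partitioned_coordinate_ascent(A, b, groups, n_sweeps=100):
    """Toy grouped coordinate ascent for f(x) = -0.5 x'Ax + b'x, x >= 0.

    `groups` partitions the coordinates; in the paper's scheme each group
    could be handled by a separate processor, while coordinates *within*
    a group are updated sequentially.  A must be symmetric positive
    definite so that f is strictly concave.
    """
    n = len(b)
    x = np.zeros(n)
    for _ in range(n_sweeps):
        for group in groups:        # each group could run on its own processor
            for j in group:         # sequential updates within a group
                # exact 1-D maximizer of f in x_j, clipped to x_j >= 0
                r = b[j] - A[j] @ x + A[j, j] * x[j]
                x[j] = max(r / A[j, j], 0.0)
    return x
```

    With all coordinates in one group this reduces to plain nonnegative coordinate ascent; splitting the coordinates trades some per-sweep progress for parallelism, which is the trade-off the paper quantifies.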

    Tikhonov-type iterative regularization methods for ill-posed inverse problems: theoretical aspects and applications

    Ill-posed inverse problems arise in many fields of science and engineering. The ill-conditioning and the large dimension make the task of numerically solving such problems very challenging. In this thesis we construct several algorithms for solving ill-posed inverse problems. Starting from the classical Tikhonov regularization method, we develop iterative methods that enhance the performance of the originating method. To ensure the accuracy of the constructed algorithms, we insert a priori knowledge about the exact solution and strengthen the regularization term. By exploiting the structure of the problem, we are also able to achieve fast computation even when the size of the problem becomes very large. We construct algorithms that enforce constraints on the reconstruction, such as nonnegativity or flux conservation, and exploit enhanced versions of the Euclidean norm using a regularization operator and different semi-norms, such as the Total Variation, for the regularization term. For most of the proposed algorithms we provide efficient strategies for the choice of the regularization parameters, which, in most cases, rely on knowledge of the norm of the noise that corrupts the data. For each method we analyze the theoretical properties in the finite-dimensional case or in the more general case of Hilbert spaces. Numerical examples demonstrate the good performance of the proposed algorithms in terms of both accuracy and efficiency

    Improving Range Estimation of a 3D FLASH LADAR via Blind Deconvolution

    The purpose of this research effort is to improve and characterize range estimation in a three-dimensional FLASH LAser Detection And Ranging (3D FLASH LADAR) sensor by investigating spatial-dimension blurring effects. The myriad of emerging applications for 3D FLASH LADAR, both as a primary and as a supplemental sensor, necessitates superior performance, including accurate range estimates. Along with range information, this sensor also provides an imaging, or laser vision, capability. Consequently, accurate range estimates would also greatly aid the image quality of a target or remote scene under interrogation. Unlike previous efforts, this research accounts for pixel coupling by defining the range-image mathematical model as a convolution between the system's spatial impulse response and the object (target or remote scene) at a particular range slice. Using this model, improved range estimation is possible through object restoration from the data observations. Object estimation is principally performed by deriving a blind-deconvolution Generalized Expectation Maximization (GEM) algorithm, with the range determined from the estimated object by a normalized correlation method. Theoretical derivations and simulation results are verified with experimental data of a bar target taken from a 3D FLASH LADAR system in a laboratory environment. Additionally, among other factors, the range-separation estimation variance is a function of two LADAR design parameters (range sampling interval and transmitted pulse width), which can be optimized using the expected range resolution between two point sources. Using both Cramér-Rao bound (CRB) theory and an unbiased estimator, an investigation is carried out that finds the optimal pulse width for several range-sampling scenarios using a range resolution metric
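    The normalized correlation step mentioned above can be sketched for a single pixel's time history. This is an illustrative, simplified version with integer-sample resolution only (the names are hypothetical, and it is not the GEM algorithm from the research):

```python
import numpy as np

def range_by_normalized_correlation(waveform, pulse, dt):
    """Estimate a target's range delay by normalized cross-correlation.

    waveform : received samples for one pixel along the range (time) axis
    pulse    : transmitted pulse shape, sampled at the same interval dt
    Returns the delay of the best match, in the same units as dt.
    """
    p = pulse - pulse.mean()
    p = p / np.linalg.norm(p)            # unit-norm, zero-mean template
    n = len(pulse)
    scores = []
    for k in range(len(waveform) - n + 1):
        w = waveform[k:k + n] - waveform[k:k + n].mean()
        denom = np.linalg.norm(w)
        scores.append(0.0 if denom == 0 else float(w @ p) / denom)
    return int(np.argmax(scores)) * dt
```

    Because the score is normalized, the estimate is insensitive to per-pixel gain and offset; converting the delay to range is then a matter of multiplying by c/2.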

    Image Restoration

    This book presents a sample of recent contributions by researchers from around the world in the field of image restoration. The book consists of 15 chapters organized in three main sections (Theory, Applications, Interdisciplinarity). Topics cover different aspects of the theory of image restoration, but this book is also an occasion to highlight some new research topics related to the emergence of original imaging devices. From these arise some truly challenging problems related to image reconstruction/restoration that open the way to new fundamental scientific questions closely related to the world we interact with

    Variable metric line-search based methods for nonconvex optimization

    The aim of this thesis is to propose novel iterative first-order methods tailored for a wide class of nonconvex, nondifferentiable optimization problems, in which the objective function is given by the sum of a differentiable, possibly nonconvex function and a convex, possibly nondifferentiable term. Such problems have become ubiquitous in scientific applications such as image or signal processing, where the first term plays the role of the fit-to-data term, describing the relation between the desired object and the measured data, whereas the second one is the penalty term, aimed at restricting the search for the object itself to those satisfying specific properties. Our approach is twofold: on one hand, we accelerate the proposed methods by making use of suitable adaptive strategies to choose the involved parameters; on the other hand, we ensure convergence by imposing a sufficient decrease condition on the objective function at each iteration. Our first contribution is the development of a novel proximal-gradient method, named the Variable Metric Inexact Line-search based Algorithm (VMILA). The proposed approach is innovative from several points of view. First of all, VMILA allows one to adopt a variable metric in the computation of the proximal point with relative freedom of choice: the only assumption we make is that the involved parameters belong to bounded sets. This is unusual with respect to state-of-the-art proximal-gradient methods, where the parameters are usually chosen by means of a fixed rule or tightly tied to the Lipschitz constant of the problem.
    Second, we introduce an inexactness criterion for computing the proximal point which can be practically implemented in some cases of interest. This aspect assumes a relevant importance whenever the proximal operator is not available in closed form, which is often the case. Third, the VMILA iterates are computed by performing a line search along the feasible direction according to a specific Armijo-like condition, which can be considered an extension of the classical Armijo rule proposed in the context of differentiable optimization. The second contribution is given for a special instance of the previously considered optimization problem, where the convex term is assumed to be a finite sum of the indicator functions of closed, convex sets. In other words, we consider a problem of constrained differentiable optimization in which the constraints have a separable structure. The most suited method to deal with this problem is undoubtedly the nonlinear Gauss-Seidel (GS), or block coordinate descent, method, where the minimization of the objective function is cyclically alternated over each block of variables of the problem. In this thesis, we propose an inexact version of the GS scheme, named the Cyclic Block Generalized Gradient Projection (CBGGP) method, in which the partial minimization over each block of variables is performed inexactly by means of a fixed number of gradient projection steps. The novelty of the proposed approach consists in the introduction of non-Euclidean metrics in the computation of the gradient projection. As for VMILA, the sufficient decrease of the function is imposed by means of a block version of the Armijo line search. For both methods, we prove that each limit point of the sequence of iterates is stationary, without any convexity assumption. In the case of VMILA, strong convergence of the iterates to a stationary point is also proved when the objective function satisfies the Kurdyka-Lojasiewicz property.
Extensive numerical experience in image processing applications, such as image deblurring and denoising in presence of non-Gaussian noise, image compression, phase estimation and image blind deconvolution, shows the flexibility of our methods in addressing different nonconvex problems, as well as their ability to effectively accelerate the progress towards the solution of the treated problem
