242 research outputs found

    Economic inexact restoration for derivative-free expensive function minimization and applications

    The Inexact Restoration approach has proved to be an adequate tool for handling the problem of minimizing an expensive function within an arbitrary feasible set by using different degrees of precision in the objective function. The Inexact Restoration framework allows one to obtain suitable convergence and complexity results for an approach that rationally combines low- and high-precision evaluations. In the present research, it is recognized that many problems with expensive objective functions are nonsmooth and sometimes even discontinuous. With this in mind, the Inexact Restoration approach is extended to the nonsmooth or discontinuous case. Although optimization phases that rely on smoothness cannot be used in this case, basic convergence and complexity results are recovered. A derivative-free optimization phase is defined, and the subproblems that arise in this phase are solved using a regularization approach that takes advantage of different notions of stationarity. The new methodology is applied to the problem of reproducing a controlled experiment that mimics the failure of a dam.
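    The interplay between evaluation precision and step size described above can be sketched in a few lines. Everything here (the toy objective, the constant-bias noise model, the names `noisy_f` and `ir_minimize`, and all update constants) is an illustrative assumption, not the paper's actual algorithm:

    ```python
    def noisy_f(x, precision):
        """Hypothetical expensive objective (x - 2)^2 evaluated inexactly:
        the low-precision error is simulated here as a bias of size `precision`."""
        return (x - 2.0) ** 2 + precision

    def ir_minimize(x0, step=0.5, precision=1.0, tol=1e-3, max_iter=200):
        """Derivative-free sketch of the inexact-restoration idea: poll x +/- step
        using cheap low-precision values; when neither candidate improves by more
        than the current error level, 'restore' accuracy (shrink `precision`)
        and refine the poll step."""
        x = x0
        fx = noisy_f(x, precision)
        for _ in range(max_iter):
            improved = False
            for c in (x - step, x + step):
                fc = noisy_f(c, precision)
                if fc < fx - precision:    # progress beyond the noise level
                    x, fx = c, fc
                    improved = True
                    break
            if not improved:
                precision *= 0.25          # restoration: higher accuracy...
                step *= 0.5                # ...and a finer poll step
                fx = noisy_f(x, precision)
                if step < tol:
                    break
        return x
    ```

    Most iterations use cheap low-precision values; accuracy is raised only when the current precision can no longer certify progress, which is the economy the abstract refers to.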

    A stochastic first-order trust-region method with inexact restoration for finite-sum minimization

    We propose a stochastic first-order trust-region method with inexact function and gradient evaluations for solving finite-sum minimization problems. At each iteration, the function and the gradient are approximated by sampling. The sample size in gradient approximations is smaller than the sample size in function approximations, and the latter is determined by a deterministic rule inspired by the inexact restoration method, which allows the sample size to decrease at some iterations. The trust-region step is then either accepted or rejected using a suitable merit function, which combines the function estimate with a measure of accuracy in the evaluation. We show that the proposed method eventually reaches full precision in evaluating the objective function, and we provide a worst-case complexity result on the number of iterations required to achieve full precision. We validate the proposed algorithm on nonconvex binary classification problems, showing good performance in terms of cost and accuracy and the important feature that no burdensome tuning of the parameters involved is required.
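    The sampling structure the abstract describes — a gradient sample smaller than the function sample, a merit test, and a sample-size rule that moves toward full precision on rejection — can be sketched on a toy one-dimensional finite sum. This is an illustrative simplification under assumed names (`f_sample`, `g_sample`, `stochastic_tr`), not the paper's algorithm:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N = 200
    a = rng.normal(size=N)                       # data: f_i(x) = 0.5 * (x - a_i)^2

    def f_sample(x, idx):
        return 0.5 * np.mean((x - a[idx]) ** 2)

    def g_sample(x, idx):
        return np.mean(x - a[idx])

    def stochastic_tr(x=5.0, radius=1.0, n_f=20, eta=0.1, iters=100):
        """Sketch: the gradient sample is half the function sample; a rejected
        step triggers the restoration-style rule that doubles the function
        sample size, driving the method toward full precision."""
        for _ in range(iters):
            idx_f = rng.choice(N, size=n_f, replace=False)
            g = g_sample(x, idx_f[: n_f // 2])       # cheaper gradient estimate
            s = float(np.clip(-g, -radius, radius))  # first-order trust-region step
            pred = -g * s                            # decrease of the model f + g*s
            ared = f_sample(x, idx_f) - f_sample(x + s, idx_f)
            if ared >= eta * pred:                   # merit test: accept the step
                x += s
            else:                                    # reject: shrink the region...
                radius *= 0.5
                n_f = min(N, 2 * n_f)                # ...and refine the sample
        return x, n_f
    ```

    When the sampled gradient and the sampled function disagree, the merit test fails and the sample grows, which mirrors the eventual-full-precision behaviour the paper proves.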

    Inexact restoration with subsampled trust-region methods for finite-sum minimization

    Convex and nonconvex finite-sum minimization arises in many scientific computing and machine learning applications. Recently, first-order and second-order methods in which objective functions, gradients and Hessians are approximated by randomly sampling components of the sum have received great attention. We propose a new trust-region method which employs suitable approximations of the objective function, gradient and Hessian built via random subsampling techniques. The choice of the sample size is deterministic and ruled by the inexact restoration approach. We discuss local and global convergence properties for finding approximate first- and second-order optimal points, as well as function-evaluation complexity results. Numerical experience shows that the new procedure is more efficient, in terms of overall computational cost, than the standard trust-region scheme with subsampled Hessians.
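    A second-order variant of the subsampling idea can be sketched on a small least-squares finite sum: gradient and Hessian come from a subsample, the trust-region step is the (projected) Newton step of the subsampled model, and a deterministic rule grows the sample on rejection. All names and constants below are illustrative assumptions, not the paper's method:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N, d = 100, 2
    A = rng.normal(size=(N, d))
    b = A @ np.array([1.0, -2.0]) + 0.1 * rng.normal(size=N)
    # finite sum: f(x) = (1/2N) * ||A x - b||^2

    def sub_f(x, idx):
        return 0.5 * np.mean((A[idx] @ x - b[idx]) ** 2)

    def subsampled_tr(x, radius=1.0, n=10, iters=100):
        """Sketch: derivatives use a subsample half the size of the function
        sample; a rejected step doubles the sample size (deterministic,
        inexact-restoration-flavoured growth rule)."""
        for _ in range(iters):
            idx = rng.choice(N, size=n, replace=False)
            sub = idx[: n // 2]                           # cheaper derivative sample
            g = A[sub].T @ (A[sub] @ x - b[sub]) / len(sub)
            H = A[sub].T @ A[sub] / len(sub)              # subsampled Hessian
            s = np.linalg.solve(H + 1e-8 * np.eye(d), -g) # Newton step of the model
            norm = np.linalg.norm(s)
            if norm > radius:                             # project into trust region
                s *= radius / norm
            pred = -(g @ s + 0.5 * s @ H @ s)             # predicted model decrease
            if sub_f(x, idx) - sub_f(x + s, idx) >= 0.1 * pred:
                x = x + s                                 # accept the step
            else:
                radius *= 0.5                             # reject: shrink region...
                n = min(N, 2 * n)                         # ...and refine the sample
        return x
    ```

    The saving over a standard scheme is that the Hessian is never formed from all N components unless rejections force the sample up to full size.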

    Mathematics and Algorithms in Tomography

    This is the eighth Oberwolfach conference on the mathematics of tomography. Modalities represented at the workshop included X-ray tomography, sonar, radar, seismic imaging, ultrasound, electron microscopy, impedance imaging, photoacoustic tomography, elastography, vector tomography, and texture analysis.

    Variational Domain Decomposition For Parallel Image Processing

    Many important techniques in image processing rely on partial differential equation (PDE) problems, which exhibit spatial couplings between the unknowns throughout the whole image plane. Therefore, a straightforward spatial splitting into independent subproblems and subsequent parallel solving aimed at diminishing the total computation time does not lead to the solution of the original problem. Typically, significant errors occur at the local boundaries between the subproblems. For that reason, most PDE-based image processing algorithms are not directly amenable to coarse-grained parallel computing, but only to fine-grained parallelism, e.g. on the level of the particular arithmetic operations involved in the specific solving procedure. In contrast, Domain Decomposition (DD) methods provide several different approaches to decompose PDE problems spatially so that the merged local solutions converge to the original, global one. These methods divide into two main classes, overlapping and non-overlapping, according to whether the adjacent subdomains on which the local problems are defined overlap. Furthermore, the classical DD methods --- studied intensively in the past thirty years --- are primarily applied to linear PDE problems, whereas some of the current important image processing approaches involve solving nonlinear problems, e.g. Total Variation (TV)-based approaches. Among the linear DD methods, non-overlapping methods are favored, since in general they require significantly fewer data exchanges between the particular processing nodes during the parallel computation and therefore achieve higher scalability. For that reason, the theoretical and empirical focus of this work lies primarily on non-overlapping methods, whereas for the overlapping methods we mainly confine ourselves to presenting the most important algorithms. 
    With the linear non-overlapping DD methods, we first concentrate on the theoretical foundation, which serves as the basis for gradually deriving the different algorithms thereafter. Although we make a connection between the very early methods on two subdomains and the current two-level methods on arbitrary numbers of subdomains, the experimental studies focus on two prototypical methods applied to the model problem of estimating the optic flow, where different numerical aspects, such as the influence of the number of subdomains on the convergence rate, are explored. In particular, we present results of experiments conducted on a PC-cluster (a distributed-memory parallel computer based on low-cost PC hardware with up to 144 processing nodes) which show a very good scalability of non-overlapping DD methods. With respect to nonlinear non-overlapping DD methods, we pursue two distinct approaches, both applied to nonlinear, PDE-based image denoising. The first approach draws upon the theory of optimal control and has been successfully employed for the domain decomposition of the Navier-Stokes equations. The second nonlinear DD approach, on the other hand, relies on convex programming and on the decomposition of the corresponding minimization problems. Besides the main subject of parallelization by DD methods, we also investigate the linear model problem of motion estimation itself, namely by proposing and empirically studying a new variational approach for the estimation of turbulent flows in the area of fluid mechanics.
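    The basic DD mechanism — local solves exchanging boundary data until the merged solution matches the global one — is easiest to sketch with the classical overlapping (alternating Schwarz) variant on the simplest model problem, the 1-D Laplace equation, where each subdomain solve reduces to linear interpolation between its boundary values. The thesis focuses on non-overlapping methods; this toy sketch only illustrates the convergence-through-boundary-exchange idea:

    ```python
    import numpy as np

    # 1-D Laplace equation u'' = 0 on [0, 1], u(0) = 0, u(1) = 1;
    # the exact global solution is u(x) = x.
    n = 101
    x = np.linspace(0.0, 1.0, n)
    u = np.zeros(n)
    u[-1] = 1.0

    def solve_subdomain(u, lo, hi):
        """Exact solve of u'' = 0 on the grid slice [lo, hi], with Dirichlet
        data taken from the current iterate: a line between u[lo] and u[hi]."""
        u[lo:hi + 1] = np.linspace(u[lo], u[hi], hi - lo + 1)

    # Overlapping alternating Schwarz with two subdomains [0, 60] and [40, 100]:
    # each solve reads the latest values of the other subdomain at its boundary.
    for _ in range(25):
        solve_subdomain(u, 0, 60)
        solve_subdomain(u, 40, 100)
    ```

    The error contracts by a fixed factor per sweep that depends on the overlap width (here (2/3)^2 per double sweep), which is the classical trade-off: more overlap means faster convergence but more data exchange — the motivation the text gives for preferring non-overlapping methods on distributed hardware.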

    Information theoretic regularization in diffuse optical tomography

    Diffuse optical tomography (DOT) retrieves the spatially distributed optical characteristics of a medium from external measurements. Recovering these parameters of interest involves solving a non-linear and severely ill-posed inverse problem. In this thesis we propose methods towards the regularization of DOT via the introduction of spatially unregistered, a priori information from alternative high-resolution anatomical modalities, using the information theory concepts of joint entropy (JE) and mutual information (MI). Such functionals evaluate the similarity between the reconstructed optical image and the prior image, while bypassing the multi-modality barrier manifested as the incommensurate relation between the gray-value representations of corresponding anatomical features in the modalities involved. By introducing structural a priori information in the image reconstruction process, we aim to improve the spatial resolution and quantitative accuracy of the solution. A further condition for the accurate incorporation of a priori information is the establishment of correct alignment between the prior image and the probed anatomy in a common coordinate system. However, limited information regarding the probed anatomy is known prior to the reconstruction process. In this work we explore the possibility of spatially registering the prior image simultaneously with the solution of the reconstruction problem. We provide a thorough explanation of the theory from an imaging perspective, accompanied by preliminary results obtained by numerical simulations as well as experimental data. In addition we compare the performance of MI and JE. Finally, we propose a method for fast joint entropy evaluation and optimization, which we later employ for the information theoretic regularization of DOT. The main areas involved in this thesis are: inverse problems, image reconstruction & regularization, diffuse optical tomography and medical image registration.
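    The JE and MI functionals at the core of this approach are standard information-theoretic quantities computed from a joint gray-value histogram of the two images, which is exactly what lets them compare modalities with incommensurate intensity scales. A minimal sketch (the function name is an assumption, and this is a plain histogram estimator, not the thesis's fast evaluation scheme):

    ```python
    import numpy as np

    def joint_entropy_mi(img_a, img_b, bins=32):
        """Joint entropy H(A,B) and mutual information
        MI = H(A) + H(B) - H(A,B), estimated from a joint
        gray-value histogram of the two images (in bits)."""
        h, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        p = h / h.sum()                        # joint probability table
        nz = p > 0
        h_ab = -np.sum(p[nz] * np.log2(p[nz]))
        pa, pb = p.sum(axis=1), p.sum(axis=0)  # marginals of each image
        h_a = -np.sum(pa[pa > 0] * np.log2(pa[pa > 0]))
        h_b = -np.sum(pb[pb > 0] * np.log2(pb[pb > 0]))
        return h_ab, h_a + h_b - h_ab
    ```

    MI is large (and JE small) when the gray values of one image predict those of the other, regardless of whether the intensity mappings agree — which is why such functionals can score the similarity between an optical reconstruction and an anatomical prior.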