
    VI Workshop on Computational Data Analysis and Numerical Methods: Book of Abstracts

    The VI Workshop on Computational Data Analysis and Numerical Methods (WCDANM) will be held on June 27-29, 2019, in the Department of Mathematics of the University of Beira Interior (UBI), Covilhã, Portugal. It is a unique opportunity to disseminate scientific research in Mathematics in general, with particular relevance to Computational Data Analysis and Numerical Methods in theoretical and practical settings, with special emphasis on applications in Medicine, Biology, Biotechnology, Engineering, Industry, Environmental Sciences, Finance, Insurance, Management and Administration. The meeting will provide a forum for discussion and debate of ideas of interest to the scientific community at large, and is expected to foster new scientific collaborations among colleagues, notably in Masters and PhD projects. The event is open to the entire scientific community (with or without a communication/poster).

    A regularization approach for reconstruction and visualization of 3-D data

    This thesis is about surface reconstruction from range images using extensions of Tikhonov regularization that produce splines applicable to n-dimensional data. The central idea is that these splines can be obtained through regularization theory, using a trade-off between fidelity to the data and smoothness; as a consequence, they are applicable both to interpolation and to approximation of exact or noisy data. We propose a variational framework that includes the data and a priori information about the solution, given in the form of functionals. We solve optimization problems that extend Tikhonov theory in order to include functionals with local and global features that can be tuned by regularization parameters. The a priori information is analyzed in terms of the geometric and physical properties of the functionals and then added to the variational formulation. The results are tested on surface-reconstruction data and show remarkable reproducing and approximating properties. We use surface reconstruction to illustrate practical applications, but our approach has many others. At the core of our approach are the general theory of inverse problems and the application of some abstract ideas from functional analysis. The splines obtained are linear combinations of fundamental solutions of certain partial differential operators common in elasticity theory, and no prior assumption is made about a statistical model for the input data, so the method can be viewed in terms of nonparametric statistical inference. These splines can be implemented in a very stable form and applied to both interpolation and smoothing problems.
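    As a rough illustration of the fidelity-to-data versus smoothness trade-off at the heart of this approach (a minimal 1-D sketch, not the thesis's n-dimensional spline construction), the following Python snippet smooths noisy samples by Tikhonov regularization with a discrete second-difference penalty; the penalty matrix and the parameter `lam` are illustrative choices:

```python
import numpy as np

def tikhonov_smooth(y, lam):
    """Smooth a 1-D signal by minimizing ||x - y||^2 + lam * ||D2 x||^2,
    where D2 is the second-difference operator (a discrete curvature penalty)."""
    n = len(y)
    # Second-difference matrix D2 of shape (n-2, n)
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]
    # Normal equations: (I + lam * D2^T D2) x = y
    A = np.eye(n) + lam * D2.T @ D2
    return np.linalg.solve(A, y)

# Noisy samples of a smooth function
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(t.size)

x_interp = tikhonov_smooth(y, lam=1e-6)  # small lam: nearly interpolates the data
x_smooth = tikhonov_smooth(y, lam=10.0)  # large lam: heavy smoothing
```

    Varying `lam` moves the solution between interpolation of exact data and approximation of noisy data, which is the same mechanism the thesis tunes through its regularization parameters.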

    Status and Future Perspectives for Lattice Gauge Theory Calculations to the Exascale and Beyond

    In this and a set of companion whitepapers, the USQCD Collaboration lays out a program of science and computing for lattice gauge theory. These whitepapers describe how calculations using lattice QCD (and other gauge theories) can aid the interpretation of ongoing and upcoming experiments in particle and nuclear physics, as well as inspire new ones. Comment: 44 pages; one of the USQCD whitepapers.

    Motion blur removal from photographs

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 135-143).

    One of the long-standing challenges in photography is motion blur. Blur artifacts are generated by relative motion between a camera and a scene during exposure. While blur can be reduced by using a shorter exposure, this comes at an unavoidable trade-off with increased noise, so it is desirable to remove blur computationally. To remove blur, we need to (i) estimate how the image is blurred (i.e., the blur kernel, or point-spread function) and (ii) restore a natural-looking image through deconvolution. Blur kernel estimation is challenging because the algorithm needs to distinguish the correct image-blur pair from incorrect ones that can also adequately explain the blurred image. Deconvolution is also difficult because the algorithm needs to restore high-frequency image content attenuated by blur. In this dissertation, we address several aspects of these challenges.

    We introduce the insight that a blur kernel can be estimated by analyzing edges in a blurred photograph. Edge profiles in a blurred image encode projections of the blur kernel, from which we can recover the blur using the inverse Radon transform. This method is computationally attractive and well suited to images with many edges. Blurred edge profiles can also serve as additional cues for existing kernel estimation algorithms; we introduce a method to integrate this information into a maximum-a-posteriori kernel estimation framework and show its benefits.

    Deconvolution algorithms restore information attenuated by blur using an image prior that exploits the heavy-tailed gradient profile of natural images. We show, however, that such a sparse prior does not accurately model textures, thereby degrading texture renditions in restored images. To address this issue, we introduce a content-aware image prior that adapts its characteristics to local textures and improves the quality of textures in restored images. Sometimes even the content-aware image prior is insufficient for restoring rich textures. This can be addressed by matching the restored image's gradient distribution to that of the original image, estimated directly from the blurred image; this new deconvolution technique, called iterative distribution reweighting (IDR), improves the visual realism of reconstructed images.

    Subject motion can also cause blur. Removing subject motion blur is especially challenging because the blur is often spatially variant. In this dissertation, we address a restricted class of subject motion blur in which the subject moves locally at a constant velocity, and we design a new computational camera that improves local motion estimation while reducing the image information loss due to blur.

    by Taeg Sang Cho, Ph.D.
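    To make step (ii), non-blind deconvolution with a known kernel, concrete, here is a minimal Python sketch using the classic Wiener filter in the frequency domain; this is a standard textbook baseline for illustration only, not the thesis's content-aware prior or IDR method, and the toy image, kernel, and `snr` constant are assumptions:

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, snr=1e-3):
    """Regularized (Wiener-style) deconvolution in the Fourier domain.
    `snr` is a constant regularizer that damps frequencies the blur attenuated."""
    H = np.fft.fft2(kernel, s=blurred.shape)  # transfer function of the kernel
    B = np.fft.fft2(blurred)
    G = np.conj(H) / (np.abs(H) ** 2 + snr)   # Wiener filter
    return np.real(np.fft.ifft2(G * B))

# Toy example: blur a synthetic image with a 9-pixel horizontal motion kernel
rng = np.random.default_rng(1)
img = rng.random((64, 64))
kernel = np.zeros((64, 64))
kernel[0, :9] = 1.0 / 9.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel)))

restored = wiener_deconvolve(blurred, kernel)
```

    The `snr` term plays the role of the image prior in this simple baseline; the thesis's contribution lies in replacing such a fixed regularizer with priors adapted to local image content.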

    Probabilistic Numerical Linear Algebra for Machine Learning

    Machine learning models are becoming increasingly essential in domains where critical decisions must be made under uncertainty, such as public policy, medicine or robotics. For a model to be useful for decision-making, it must convey a degree of certainty in its predictions. Bayesian models are well-suited to such settings due to their principled uncertainty quantification, given a set of assumptions about the problem and the data-generating process. While in theory inference in a Bayesian model is fully specified, in practice numerical approximations have a significant impact on the resulting posterior. Model-based decisions are therefore determined not just by the data but also by the numerical method, which raises the question of how we can account for the adverse impact of numerical approximations on inference.

    Arguably the most common numerical task in scientific computing is the solution of linear systems, which arise in probabilistic inference, graph theory, differential equations and optimization. In machine learning, these systems are typically large-scale, subject to noise and arise from generative processes; these characteristics call for specialized solvers. In this thesis, we propose a class of probabilistic linear solvers, which infer the solution to a linear system and can themselves be interpreted as learning algorithms. Importantly, they can leverage problem structure and propagate their error to the prediction of the underlying probabilistic model.

    Next, we apply such solvers to accelerate Gaussian process inference. While Gaussian processes are a principled and flexible model class, for large datasets inference is computationally prohibitive in both time and memory due to the required computations with the kernel matrix. We show that by approximating the posterior with a probabilistic linear solver, we can invest an arbitrarily small amount of computation and still obtain a provably coherent prediction that quantifies uncertainty exactly. Finally, we demonstrate that Gaussian process hyperparameter optimization can similarly be accelerated by leveraging structural prior knowledge in the model via preconditioning of iterative methods. Combined with modern parallel hardware, this enables training Gaussian process models on datasets with hundreds of thousands of data points.

    In summary, we demonstrate that interpreting numerical methods in linear algebra as probabilistic learning algorithms unlocks significant performance improvements for Gaussian process models. Crucially, we show how to account for the impact of numerical approximations on model predictions via uncertainty quantification, which enables an explicit trade-off between computational resources and confidence in a prediction. The techniques developed in this thesis have advanced the understanding of probabilistic linear solvers, shifted the goalposts of what can be expected from Gaussian process approximations, and defined the way large-scale Gaussian process hyperparameter optimization is performed in GPyTorch, arguably the most popular library for Gaussian processes in Python.
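    As a rough sketch of the compute-versus-accuracy trade-off described above (illustrative only, not the thesis's probabilistic solver or GPyTorch internals), the following Python snippet approximates a Gaussian process posterior mean by running conjugate gradients on the kernel system for a fixed iteration budget; the RBF kernel, toy data, and `max_iters` budget are assumptions:

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=0.5):
    """Squared-exponential (RBF) kernel matrix between two 1-D input sets."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale**2)

def conjugate_gradients(A, b, max_iters):
    """Plain CG for A x = b; stopping early trades accuracy for compute."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iters):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy 1-D regression data
rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0.0, 1.0, 300))
y = np.sin(6 * X) + 0.1 * rng.standard_normal(X.size)

K = rbf_kernel(X, X) + 0.01 * np.eye(X.size)  # kernel matrix plus noise term
Xs = np.linspace(0.0, 1.0, 50)                # test inputs

# Posterior mean k(X*, X) @ K^{-1} y, with K^{-1} y approximated by CG
v = conjugate_gradients(K, y, max_iters=20)   # small budget, rough solve
mean = rbf_kernel(Xs, X) @ v
```

    Increasing `max_iters` spends more computation for a more accurate solve; the probabilistic solvers proposed in the thesis additionally quantify the error remaining at any such budget and propagate it into the model's predictive uncertainty.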