
    An iterative thresholding algorithm for linear inverse problems with a sparsity constraint

    We consider linear inverse problems where the solution is assumed to have a sparse expansion on an arbitrary pre-assigned orthonormal basis. We prove that replacing the usual quadratic regularizing penalties by weighted l^p-penalties on the coefficients of such expansions, with 1 ≤ p ≤ 2, still regularizes the problem. If p < 2, regularized solutions of such l^p-penalized problems have sparser expansions with respect to the basis under consideration. To compute the corresponding regularized solutions we propose an iterative algorithm that amounts to a Landweber iteration with thresholding (or nonlinear shrinkage) applied at each iteration step. We prove that this algorithm converges in norm. We also review some potential applications of this method.
    Comment: 30 pages, 3 figures; this is version 2. Changes with respect to v1: small correction in the proof (but not the statement) of Lemma 3.15; description of Besov spaces in the introduction and Appendix A clarified (and corrected); smaller point size (making 30 instead of 38 pages).
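
    For the p = 1 penalty, where the shrinkage step is componentwise soft-thresholding, the iteration described above can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's code: the problem sizes, the step size 1/||K||², and the penalty weight alpha are all chosen for the demo.

```python
import numpy as np

def soft_threshold(x, t):
    # Componentwise soft-thresholding: the shrinkage map for the l^1 penalty.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def iterative_thresholding(K, y, alpha, n_iter=2000):
    # Landweber iteration with thresholding applied at each step.
    # A step size 1/L with L >= ||K||_2^2 keeps the iteration stable.
    L = np.linalg.norm(K, 2) ** 2
    x = np.zeros(K.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - K.T @ (K @ x - y) / L, alpha / L)
    return x

# Recover a 3-sparse coefficient vector from 40 noisy linear measurements.
rng = np.random.default_rng(0)
K = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[3, 17, 60]] = [1.5, -2.0, 1.0]
y = K @ x_true + 0.01 * rng.standard_normal(40)
x_hat = iterative_thresholding(K, y, alpha=0.1)
```

    With a quadratic penalty the same data would yield a dense solution; the thresholding step is what produces the sparser expansion.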

    Tikhonov-type iterative regularization methods for ill-posed inverse problems: theoretical aspects and applications

    Ill-posed inverse problems arise in many fields of science and engineering. The ill-conditioning and the large dimension make the task of numerically solving this kind of problem very challenging. In this thesis we construct several algorithms for solving ill-posed inverse problems. Starting from the classical Tikhonov regularization method, we develop iterative methods that improve on the performance of the originating method. To ensure the accuracy of the constructed algorithms, we incorporate a priori knowledge of the exact solution and strengthen the regularization term. By exploiting the structure of the problem we are also able to achieve fast computation even when the size of the problem becomes very large. We construct algorithms that enforce constraints on the reconstruction, such as nonnegativity or flux conservation, and exploit enhanced versions of the Euclidean norm using a regularization operator and different semi-norms, such as Total Variation, for the regularization term. For most of the proposed algorithms we provide efficient strategies for the choice of the regularization parameters, which, most of the time, rely on knowledge of the norm of the noise that corrupts the data. For each method we analyze the theoretical properties in the finite-dimensional case or in the more general setting of Hilbert spaces. Numerical examples demonstrate the good performance of the proposed algorithms in terms of both accuracy and efficiency.
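
    The starting point described above (classical Tikhonov regularization with a parameter choice based on the known noise norm) can be shown in a minimal NumPy sketch on a small ill-conditioned system. This is not one of the thesis's algorithms: the Hilbert-matrix test problem, the candidate grid, and the function names are all illustrative.

```python
import numpy as np

def tikhonov(A, b, mu):
    # Minimize ||A x - b||^2 + mu * ||x||^2 via the normal equations.
    return np.linalg.solve(A.T @ A + mu * np.eye(A.shape[1]), A.T @ b)

def choose_mu(A, b, delta, mus):
    # Discrepancy-principle-style sweep: among the candidates, take the
    # largest mu whose residual still stays below the noise norm delta.
    ok = [mu for mu in mus
          if np.linalg.norm(A @ tikhonov(A, b, mu) - b) <= delta]
    return max(ok) if ok else min(mus)

# A classic ill-conditioned test problem: the 10 x 10 Hilbert matrix.
n = 10
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
rng = np.random.default_rng(1)
noise = 1e-4 * rng.standard_normal(n)
b = A @ x_true + noise
delta = np.linalg.norm(noise)  # the known norm of the noise
mu = choose_mu(A, b, delta, [10.0 ** -k for k in range(1, 12)])
x_reg = tikhonov(A, b, mu)
```

    A naive solve amplifies the noise by roughly the condition number of A, while the regularized solution stays close to x_true.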

    Advances in Motion Estimators for Applications in Computer Vision

    abstract: Motion estimation is a core task in computer vision, and many applications utilize optical flow methods as fundamental tools to analyze motion in images and videos. Optical flow is the apparent motion of objects in image sequences that results from relative motion between the objects and the imaging perspective. Today, optical flow fields are utilized to solve problems in various areas such as object detection and tracking, interpolation, and visual odometry. In this dissertation, three problems from different areas of computer vision and the solutions that make use of modified optical flow methods are explained. The contributions of this dissertation are approaches and frameworks that introduce i) a new optical flow-based interpolation method to achieve minimally divergent velocimetry data, ii) a framework that improves the accuracy of change detection algorithms in synthetic aperture radar (SAR) images, and iii) a set of new methods to integrate proton magnetic resonance spectroscopy (1H-MRSI) data into three-dimensional (3D) neuronavigation systems for tumor biopsies. In the first application, an optical flow-based approach for the interpolation of minimally divergent velocimetry data is proposed. The velocimetry data of incompressible fluids contain signals that describe the flow velocity. The approach uses the additional flow velocity information to guide the interpolation process towards reduced divergence in the interpolated data. In the second application, a framework that mainly consists of optical flow methods and other image processing and computer vision techniques is proposed to improve object extraction from synthetic aperture radar images. The framework is used to distinguish between actual motion and motion detected due to misregistration in SAR image sets; it can lead to more accurate and meaningful change detection and improve object extraction from SAR datasets.
    In the third application, a set of new methods that aim to improve upon the current state of the art in neuronavigation through the use of detailed three-dimensional (3D) 1H-MRSI data is proposed. The result is a progressive form of online MRSI-guided neuronavigation that is demonstrated through phantom validation and clinical application.
    Doctoral Dissertation, Electrical Engineering, 201
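
    The quantity the first application drives down, the divergence of the interpolated velocity field, is easy to compute for sampled 2D data. Below is a hypothetical sketch (not the dissertation's method): a central-difference divergence operator in NumPy, checked on an analytically divergence-free vortex field.

```python
import numpy as np

def divergence(u, v, dx, dy):
    # Discrete divergence du/dx + dv/dy of a sampled 2D velocity field,
    # using central differences in the interior (np.gradient).
    return np.gradient(u, dx, axis=1) + np.gradient(v, dy, axis=0)

# A rigid vortex (u, v) = (-y, x) is divergence-free; the discrete
# operator should report (numerically) zero everywhere.
y, x = np.mgrid[-1.0:1.0:64j, -1.0:1.0:64j]
u, v = -y, x
div = divergence(u, v, dx=x[0, 1] - x[0, 0], dy=y[1, 0] - y[0, 0])
```

    An optical flow-guided interpolator of incompressible-flow data would penalize exactly this quantity over the interpolated samples.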

    Artificial Intelligence for Science in Quantum, Atomistic, and Continuum Systems

    Advances in artificial intelligence (AI) are fueling a new paradigm of discoveries in the natural sciences. Today, AI has started to advance the natural sciences by improving, accelerating, and enabling our understanding of natural phenomena at a wide range of spatial and temporal scales, giving rise to a new area of research known as AI for science (AI4Science). Being an emerging research paradigm, AI4Science is unique in that it is an enormous and highly interdisciplinary area. Thus, a unified and technical treatment of this field is needed yet challenging. This work aims to provide a technically thorough account of a subarea of AI4Science, namely AI for quantum, atomistic, and continuum systems. These areas aim at understanding the physical world from the subatomic (wavefunctions and electron density) and atomic (molecules, proteins, materials, and interactions) to the macro (fluids, climate, and subsurface) scale, and form an important subarea of AI4Science. A unique advantage of focusing on these areas is that they largely share a common set of challenges, thereby allowing a unified and foundational treatment. A key common challenge is how to capture physics first principles, especially symmetries, in natural systems by deep learning methods. We provide an in-depth yet intuitive account of techniques to achieve equivariance to symmetry transformations. We also discuss other common technical challenges, including explainability, out-of-distribution generalization, knowledge transfer with foundation and large language models, and uncertainty quantification. To facilitate learning and education, we provide categorized lists of resources that we found to be useful. We strive to be thorough and unified, and hope this initial effort may trigger more community interest and efforts to further advance AI4Science.
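
    The symmetry requirement mentioned above can be made concrete with a small check: features built from pairwise distances of a point cloud are invariant under any rotation and translation, which is one reason they are a common building block of equivariant architectures. The point cloud and helper names below are illustrative, not taken from the survey.

```python
import numpy as np

def pairwise_distances(X):
    # Distance matrix of a point cloud: invariant under rigid motions.
    diff = X[:, None, :] - X[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def random_rotation(rng):
    # Orthonormalize a random 3x3 matrix; flip one column if needed so
    # the determinant is +1 (a proper rotation, no reflection).
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    if np.linalg.det(Q) < 0:
        Q[:, 0] = -Q[:, 0]
    return Q

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))            # a toy molecular geometry
R, t = random_rotation(rng), rng.standard_normal(3)
D_before = pairwise_distances(X)
D_after = pairwise_distances(X @ R.T + t)  # same cloud, rigidly moved
```

    A model whose inputs are only such invariant features is automatically invariant; equivariant outputs (forces, vector fields) require the transformation to act on the outputs as well.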

    Variational Methods and Numerical Algorithms for Geometry Processing

    In this work we address the problem of shape partitioning, which enables the decomposition of an arbitrary-topology object into smaller and more manageable pieces called partitions. Several applications in Computer Aided Design (CAD), Computer Aided Manufacturing (CAM) and Finite Element Analysis (FEA) rely on object partitioning that provides a high-level insight into the data useful for further processing. In particular, we are interested in 2-manifold partitioning, since the boundaries of tangible physical objects can be mathematically defined by two-dimensional manifolds embedded into three-dimensional Euclidean space. To that aim, a preliminary shape analysis is performed based on shape-characterizing scalar/vector functions defined on a closed Riemannian 2-manifold. The detected shape features are used to drive the partitioning process in two directions: a human-based partitioning and a thickness-based partitioning. In particular, we focus on the Shape Diameter Function, which recovers volumetric information from the surface, thus providing a natural link between the object’s volume and its boundary; we consider the spectral decomposition of suitably defined affinity matrices, which provides multi-dimensional spectral coordinates of the object’s vertices; and we introduce a novel basis of sparse and localized quasi-eigenfunctions of the Laplace-Beltrami operator called Lp Compressed Manifold Modes. The partitioning problem, which can be considered a particular inverse problem, is formulated as a variational regularization problem whose solution provides the so-called piecewise constant/smooth partitioning function. The functional to be minimized consists of a fidelity term to a given data set and a regularization term which promotes sparsity, such as, for example, the Lp norm with p ∈ (0, 1) and other parameterized, non-convex penalty functions with a positive parameter that controls the degree of non-convexity.
    The proposed partitioning variational models, inspired by the well-known Mumford-Shah models for recovering piecewise smooth/constant functions, incorporate a non-convex regularizer for minimizing the boundary lengths. The derived non-convex non-smooth optimization problems are solved by efficient numerical algorithms based on Proximal Forward-Backward Splitting and Alternating Direction Method of Multipliers strategies, also employing Convex Non-Convex approaches. Finally, we investigate the application of surface partitioning to patch-based surface quadrangulation. To that aim, the 2-manifold is first partitioned into zero-genus patches that capture the object’s arbitrary topology; then, for each patch, a quad-based minimal surface is created and evolved by a Lagrangian-based PDE evolution model to the original shape to obtain the final semi-regular quad mesh. The evolution is supervised by asymptotically area-uniform tangential redistribution for the quads.
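
    Of the ingredients above, the spectral coordinates are the easiest to sketch: build an affinity matrix, normalize its graph Laplacian, and read per-vertex coordinates off the first nontrivial eigenvectors. The toy below is a generic spectral-partitioning demo on a 2D point cloud, not the thesis's mesh pipeline or its Lp Compressed Manifold Modes; it shows the sign of the Fiedler vector recovering a two-part partition.

```python
import numpy as np

def spectral_coordinates(points, sigma, k=1):
    # Gaussian affinity, symmetrically normalized graph Laplacian, and
    # its first k nontrivial eigenvectors as per-vertex coordinates.
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    deg = W.sum(axis=1)
    L = np.eye(len(points)) - W / np.sqrt(np.outer(deg, deg))
    _, vecs = np.linalg.eigh(L)      # eigenvalues in ascending order
    return vecs[:, 1:k + 1]          # skip the trivial eigenvector

# Two well-separated clusters: the first nontrivial eigenvector (the
# Fiedler vector) changes sign exactly between them.
rng = np.random.default_rng(0)
A = 0.2 * rng.standard_normal((20, 2))
B = 0.2 * rng.standard_normal((20, 2)) + np.array([4.0, 0.0])
coords = spectral_coordinates(np.vstack([A, B]), sigma=1.0)
labels = (coords[:, 0] > 0).astype(int)
```

    On a mesh, the same construction runs over vertices with affinities derived from the shape-characterizing functions, and the resulting coordinates feed the variational partitioning model.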

    Contribution to Data Science: Time Series, Uncertainty Quantification and Applications

    Time series analysis is an essential tool in modern statistical analysis, with a myriad of real data problems having temporal components that need to be studied to gain a better understanding of the temporal dependence structure in the data. For example, in the stock market, it is of significant importance to identify the ups and downs of stock prices, for which time series analysis is crucial. Most of the existing literature on time series deals with linear time series, or works under a Gaussianity assumption. However, there are multiple instances where the time series shows nonlinear trends, or where the underlying error structure is non-Gaussian. In such instances, nonlinear time series analysis is essential. That can be achieved by using a nonlinear parametric structure or by using nonparametric approaches. In Chapter 2, we propose a quadratic prediction procedure that provides better prediction accuracy when there is non-linearity or non-Gaussianity in the time series, together with a quantification of the prediction gain obtained from quadratic prediction. We also characterize, in terms of the bispectrum of the underlying process, the processes for which quadratic prediction will always give a better result than linear prediction. We provide ample simulation studies and two real data analyses to substantiate the theoretical results obtained. Chapter 3 deals with polyspectral means, a higher-order version of spectral means, which give important insights into a time series in the presence of non-linearity. We propose an estimate of the polyspectral mean and derive its asymptotic distribution. We also propose a linearity test based on the obtained asymptotic normality result. Finally, we provide a simulation study and a real-world data analysis to illustrate possible applications of polyspectral means in real-world scenarios. The next part of the thesis deals with real data analysis.
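
    The idea behind Chapter 2 can be illustrated on a toy process whose linear autocorrelations vanish but which is quadratically predictable. The model, its coefficients, and the least-squares comparison below are invented for the illustration; the chapter's actual procedure and its bispectral characterization are more general.

```python
import numpy as np

# x_t = e_t + 0.8 * e_{t-1}^2: uncorrelated with its own past (Gaussian
# innovations have zero third moment), yet predictable from x_{t-1}^2.
rng = np.random.default_rng(0)
e = rng.standard_normal(20_001)
x = e[1:] + 0.8 * e[:-1] ** 2
x = x - x.mean()

# One-step prediction of x_t from x_{t-1}: linear vs linear + quadratic.
y, x_prev = x[1:], x[:-1]
designs = {
    "linear": np.column_stack([np.ones_like(x_prev), x_prev]),
    "quadratic": np.column_stack([np.ones_like(x_prev), x_prev, x_prev ** 2]),
}
mse = {}
for name, D in designs.items():
    coef, *_ = np.linalg.lstsq(D, y, rcond=None)
    mse[name] = np.mean((y - D @ coef) ** 2)
```

    The gap between the two in-sample errors is a crude proxy for the prediction gain that the chapter quantifies exactly.
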
    Chapter 4 is devoted to an election-prediction algorithm which utilizes hashtag information, the dynamic network structure in social media data, and opinion polls. We propose two algorithms, one using the network structure (THANOS) and one without (THOS). Both methods performed better than existing election prediction algorithms, and for closely fought elections the one using the network structure gave much closer predictions than the one without. Chapter 5 proposes a bot-detection algorithm for social media data. Inorganic accounts, commonly known as bots, are used extensively for spreading malicious information and false propaganda, and it is of significant importance to identify them as quickly as possible. We extract several temporal and semantic features and use standard machine learning algorithms to identify the inorganic accounts. The final chapter deals with the bootstrap in extreme value analysis. Efron's bootstrap is known to be inconsistent for extreme value statistics. It is known that the m out of n bootstrap works in this scenario when m = o(n); however, not much work has been done on finding the optimal choice of m. In Chapter 6, we propose an optimal choice of m that optimizes the convergence rate of the bootstrap. We provide a real-world data analysis using the AQI levels of several cities around the world.
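
    The m out of n idea from the final chapter is easy to demonstrate on the textbook failure case, the sample maximum. The sketch below uses m = sqrt(n) purely for illustration; choosing m optimally is precisely the question Chapter 6 addresses.

```python
import numpy as np

def m_out_of_n_bootstrap(x, stat, m, n_boot=1000, rng=None):
    # Resample m observations with replacement and recompute the
    # statistic; m = o(n) restores consistency for non-regular
    # statistics (such as the maximum) where the full bootstrap fails.
    rng = np.random.default_rng() if rng is None else rng
    return np.array([stat(rng.choice(x, size=m, replace=True))
                     for _ in range(n_boot)])

rng = np.random.default_rng(0)
n = 2000
x = rng.uniform(0.0, 1.0, size=n)
m = int(n ** 0.5)  # an illustrative o(n) choice, not the optimal one
boot_small = m_out_of_n_bootstrap(x, np.max, m, rng=rng)
boot_full = m_out_of_n_bootstrap(x, np.max, n, n_boot=500, rng=rng)
```

    With m = n, the resampled maximum collapses onto the observed maximum with probability about 1 - 1/e, so the bootstrap distribution is degenerate; with m much smaller than n, it spreads out as the theory requires.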
