
    Image Outlier filtering (IOF): A Machine learning based DWT optimization Approach

    In this paper an image outlier filtering technique is introduced: a hybrid model in which a discrete wavelet transform (DWT) is optimized by support vector machine (SVM) regression. Outlier filtering of RGB images uses a DWT model, the Optimal-HAAR wavelet changeover (OHC), which is optimized by the Least Squares Support Vector Machine (LS-SVM). The LS-SVM regression predicts hyper-coefficients obtained using a QPSO model. Both mathematical models are discussed in brief in this paper: (i) OHC, which yields better performance and reduces complexity, resulting in an Optimized FHT; (ii) QPSO in which the least good particle is replaced with the new best obtained particle, resulting in “Optimized Least Significant Particle based QPSO” (OLSP-QPSO). The proposed cross model, which optimizes the DWT by LS-SVM to perform outlier filtering, is compared with linear and nonlinear noise removal standards.
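As a rough, self-contained sketch of the wavelet-domain part of such a pipeline only, the following applies a one-level Haar transform and suppresses detail coefficients flagged as outliers by a MAD rule. This is a generic illustration, not the paper's OHC/LS-SVM/QPSO method; all names and the threshold rule are assumptions.

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar wavelet transform."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # detail coefficients
    return a, d

def inverse_haar_step(a, d):
    """Exact inverse of haar_step."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def filter_outliers(x, k=3.0):
    """Zero out detail coefficients more than k MADs from their median."""
    a, d = haar_step(x)
    med = np.median(d)
    mad = np.median(np.abs(d - med)) + 1e-12
    d = np.where(np.abs(d - med) > k * mad, 0.0, d)
    return inverse_haar_step(a, d)
```

A signal with uniform detail coefficients passes through unchanged, while an isolated spike has its detail coefficient suppressed.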

    Novel Sparse Recovery Algorithms for 3D Debris Localization using Rotating Point Spread Function Imagery

    An optical imager that exploits off-center image rotation to encode both the lateral and depth coordinates of point sources in a single snapshot can perform 3D localization and tracking of space debris. When actively illuminated, unresolved space debris, which can be regarded as a swarm of point sources, can scatter a fraction of the laser irradiance back into the imaging sensor. Determining the source locations and fluxes is a large-scale sparse 3D inverse problem, for which we have developed efficient and effective algorithms based on sparse recovery using non-convex optimization. Numerical simulations illustrate the efficiency and stability of the algorithms. Comment: 16 pages. arXiv admin note: substantial text overlap with arXiv:1804.0400
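The abstract's sparse inverse problem is solved with non-convex optimization; as a hedged baseline for what "sparse recovery" means here, a plain convex ISTA iteration for the l1-regularized least-squares surrogate (not the authors' non-convex algorithm) can be sketched as:

```python
import numpy as np

def ista(A, b, lam=0.1, step=None, iters=500):
    """Iterative shrinkage-thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)            # gradient of the quadratic term
        z = x - step * g                 # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x
```

With the exact step size 1/L, each iteration monotonically decreases the objective, so the data misfit ends below that of the zero initializer.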

    Sparse Identification of Truncation Errors

    This work presents a data-driven approach to the identification of spatial and temporal truncation errors for linear and nonlinear discretization schemes of Partial Differential Equations (PDEs). Motivated by the central role of truncation errors, for example in the creation of implicit Large Eddy schemes, we introduce the Sparse Identification of Truncation Errors (SITE) framework to automatically identify the terms of the modified differential equation from simulation data. We build on recent advances in the field of data-driven discovery and control of complex systems and combine them with classical work on modified differential equation analysis by Warming, Hyett, Lerat and Peyret. We augment a sparse-regression-rooted approach with appropriate preconditioning routines to aid the identification of the individual modified differential equation terms. Such a custom algorithm pipeline allows attenuation of multicollinearity effects as well as automatic tuning of the sparse regression hyperparameters using the Bayesian information criterion (BIC). As a proof of concept, we constrain the analysis to finite difference schemes and leave other numerical schemes open for future inquiry. Test cases include the linear advection equation with a forward-time, backward-space discretization, the Burgers' equation with a MacCormack predictor-corrector scheme and the Korteweg-de Vries equation with a Zabusky and Kruskal discretization scheme. Based on variation studies, we derive guidelines for the selection of discretization parameters, preconditioning approaches and sparse regression algorithms. The results showcase highly accurate predictions, underlining the promise of SITE for the analysis and optimization of discretization schemes where analytic derivation of modified differential equations is infeasible. Comment: 25 pages, 26 figures, 3 tables, submitted to the Journal of Computational Physics, "code available at https://github.com/tumaer/truncationerror", Stephan Thaler and Ludger Paehler share first authorship
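The core machinery described above, sparse regression over a library of candidate terms plus BIC-based model scoring, can be sketched generically as follows. This is an illustration of sequentially thresholded least squares, not the SITE implementation itself.

```python
import numpy as np

def stlsq(Theta, y, thresh=0.1, iters=10):
    """Sequentially thresholded least squares: sparse fit y ~ Theta @ xi."""
    xi = np.linalg.lstsq(Theta, y, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(xi) < thresh
        xi[small] = 0.0                     # prune weak library terms
        big = ~small
        if big.any():                       # refit on the surviving terms
            xi[big] = np.linalg.lstsq(Theta[:, big], y, rcond=None)[0]
    return xi

def bic(Theta, y, xi):
    """Bayesian information criterion for a fitted sparse model."""
    n = y.size
    k = np.count_nonzero(xi)
    rss = np.sum((y - Theta @ xi) ** 2)
    return n * np.log(rss / n + 1e-300) + k * np.log(n)
```

In a SITE-like workflow the threshold would be swept and the model with the lowest BIC retained.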

    Non-convex optimization for 3D point source localization using a rotating point spread function

    We consider the high-resolution imaging problem of 3D point source image recovery from 2D data using a method based on point spread function (PSF) engineering. The method involves a new technique, recently proposed by S. Prasad, based on the use of a rotating PSF with a single lobe to obtain depth from defocus. The amount of rotation of the PSF encodes the depth position of the point source. Applications include high-resolution single molecule localization microscopy as well as the problem addressed in this paper: localization of space debris using a space-based telescope. The localization problem is discretized on a cubical lattice, where the coordinates of nonzero entries represent the 3D locations and the values of these entries the fluxes of the point sources. Finding the locations and fluxes of the point sources is a large-scale sparse 3D inverse problem. A new nonconvex regularization method with a data-fitting term based on Kullback-Leibler (KL) divergence is proposed for 3D localization under the Poisson noise model. In addition, we propose a new scheme for estimating the source fluxes from the KL data-fitting term. Numerical experiments illustrate the efficiency and stability of the algorithms, which are trained on a random subset of image data before being applied to other images. Our 3D localization algorithms can be readily applied to other kinds of depth-encoding PSFs as well. Comment: 28 pages
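For the Poisson model, a standard building block behind KL-based data fitting is the multiplicative expectation-maximization (Richardson-Lucy style) update; a bare-bones version, without the paper's non-convex regularizer or flux-estimation scheme, is:

```python
import numpy as np

def kl_poisson(A, b, iters=200):
    """Multiplicative EM updates decreasing the KL data-fitting term
    sum(Ax - b*log(Ax)) subject to x >= 0, for a nonnegative operator A."""
    eps = 1e-12
    x = np.ones(A.shape[1])
    norm = np.sum(A, axis=0) + eps          # column sums, the EM normalizer
    for _ in range(iters):
        ratio = b / (A @ x + eps)
        x *= (A.T @ ratio) / norm           # multiplicative step keeps x >= 0
    return x
```

Each update is guaranteed not to increase the KL divergence between the data and the forward model, which is why this iteration is a natural fit for Poisson noise.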

    Generalised cellular neural networks (GCNNs) constructed using particle swarm optimisation for spatio-temporal evolutionary pattern identification

    Particle swarm optimization (PSO) is introduced to implement a new constructive learning algorithm for training generalized cellular neural networks (GCNNs) for the identification of spatio-temporal evolutionary (STE) systems. The basic idea of the new PSO-based learning algorithm is to successively approximate the desired signal by progressively pursuing relevant orthogonal projections. This new algorithm will thus be referred to as the orthogonal projection pursuit (OPP) algorithm, which is similar in mechanism to the conventional projection pursuit approach. A novel two-stage hybrid training scheme is proposed for constructing a parsimonious GCNN model. In the first stage, the orthogonal projection pursuit algorithm is applied to adaptively and successively augment the network, where adjustable parameters of the associated units are optimized using a particle swarm optimizer. The network model produced at the first stage may be redundant. In the second stage, a forward orthogonal regression (FOR) algorithm, aided by mutual information estimation, is applied to refine and improve the initially trained network. The effectiveness and performance of the proposed method are validated by applying the new modeling framework to a spatio-temporal evolutionary system identification problem.
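The particle swarm optimizer used for the unit parameters can be illustrated with a minimal global-best PSO loop; this is a generic sketch with conventional default parameters, not the OPP/GCNN training scheme:

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm minimization of f over R^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))     # particle positions
    v = np.zeros_like(x)                           # particle velocities
    pbest = x.copy()                               # per-particle best positions
    pbest_val = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()         # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, float(f(g))
```

On a smooth unimodal test function such as the sphere, the swarm contracts rapidly onto the minimizer.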

    Machine Learning Techniques and Applications For Ground-based Image Analysis

    Ground-based whole sky cameras have opened up new opportunities for monitoring the earth's atmosphere. These cameras are an important complement to satellite images by providing geoscientists with cheaper, faster, and more localized data. The images captured by whole sky imagers can have high spatial and temporal resolution, which is an important prerequisite for applications such as solar energy modeling, cloud attenuation analysis, local weather prediction, etc. Extracting valuable information from the huge amount of image data by detecting and analyzing the various entities in these images is challenging. However, powerful machine learning techniques have become available to aid with the image analysis. This article provides a detailed walk-through of recent developments in these techniques and their applications in ground-based imaging. We aim to bridge the gap between computer vision and remote sensing with the help of illustrative examples. We demonstrate the advantages of using machine learning techniques in ground-based image analysis via three primary applications -- segmentation, classification, and denoising.
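Of the three applications, segmentation is the easiest to sketch: an unsupervised k-means clustering of pixel features (e.g. RGB values) separates sky from cloud regions in the simplest setting. This is a generic illustration, not a specific method from the article:

```python
import numpy as np

def kmeans(pixels, k=2, iters=50, seed=0):
    """Plain k-means on flattened pixel feature vectors (e.g. RGB triples)."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each pixel to its nearest center
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = np.argmin(d, axis=1)
        # move each center to the mean of its assigned pixels
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers
```

On two well-separated pixel populations the algorithm recovers the partition regardless of which points seed the centers.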

    An alternating direction method of multipliers for inverse lithography problem

    We propose an alternating direction method of multipliers (ADMM) to solve an optimization problem stemming from inverse lithography. The objective functional of the optimization problem includes three terms: the misfit between the image on the wafer and the target pattern, a penalty term which ensures the mask is binary, and a total variation regularization term. By variable splitting, we introduce an augmented Lagrangian for the original objective functional. In the ADMM framework, the optimization problem is divided into several subproblems, each of which can be solved efficiently. We give a convergence analysis of the proposed method. In particular, instead of solving the subproblem involving the sigmoid imaging function, we directly solve the one with the threshold-truncation imaging function, which admits an analytical solution. We also provide many numerical examples to illustrate the effectiveness of the method.
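The variable-splitting pattern described here is the standard ADMM recipe; a compact, generic instance for an l1-regularized least-squares split (not the lithography functional with its binary-mask and total-variation terms) is:

```python
import numpy as np

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=300):
    """ADMM for min 0.5*||Ax - b||^2 + lam*||z||_1 s.t. x = z (variable splitting)."""
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))   # factor once, reuse every iteration
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)                                 # scaled dual variable
    for _ in range(iters):
        # x-update: quadratic subproblem, solved via the cached Cholesky factor
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        # z-update: proximal step of the l1 term (soft thresholding)
        w = x + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)
        # dual ascent on the splitting constraint x = z
        u = u + x - z
    return z
```

Each subproblem has a closed form, which mirrors the abstract's point that the splitting makes every subproblem efficiently solvable.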

    Point Spread Function Engineering for 3D Imaging of Space Debris using a Continuous Exact l0 Penalty (CEL0) Based Algorithm

    We consider three-dimensional (3D) localization and imaging of space debris from only one two-dimensional (2D) snapshot image. The technique involves an optical imager that exploits off-center image rotation to encode both the lateral and depth coordinates of point sources, with the latter being encoded in the angle of rotation of the PSF. We formulate 3D localization as a large-scale sparse 3D inverse problem in discretized form. A recently developed penalty called continuous exact l0 (CEL0) is applied to this problem under the Gaussian noise model. Numerical experiments and comparisons illustrate the efficiency of the algorithm. Comment: 12 pages. arXiv admin note: substantial text overlap with arXiv:1809.10541, arXiv:1804.0400
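The CEL0 relaxation replaces the discontinuous l0 count with a continuous penalty that shares its global minimizers. Its elementwise form, as given in the general CEL0 literature (here `a` plays the role of the column-norm parameter of the forward operator; the wiring into the debris problem is not reproduced), can be sketched as:

```python
import numpy as np

def cel0_penalty(x, lam, a):
    """Continuous exact l0 (CEL0) relaxation of lam*||x||_0, elementwise:
    phi(t) = lam - 0.5*a^2*(|t| - sqrt(2*lam)/a)^2  for |t| <= sqrt(2*lam)/a,
    phi(t) = lam                                     otherwise."""
    t = np.abs(np.asarray(x, dtype=float))
    thresh = np.sqrt(2.0 * lam) / a
    quad = lam - 0.5 * a ** 2 * (t - thresh) ** 2
    return np.where(t <= thresh, quad, lam)
```

The penalty vanishes at zero, rises smoothly, and saturates at lam beyond the threshold, mimicking the l0 count while remaining continuous.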

    Learning to Optimize: A Primer and A Benchmark

    Learning to optimize (L2O) is an emerging approach that leverages machine learning to develop optimization methods, aiming to reduce the laborious iterations of hand engineering. It automates the design of an optimization method based on its performance on a set of training problems. This data-driven procedure generates methods that can efficiently solve problems similar to those in the training set. In sharp contrast, the typical and traditional designs of optimization methods are theory-driven, so they obtain performance guarantees over the classes of problems specified by the theory. The difference makes L2O suitable for repeatedly solving a certain type of optimization problem over a specific distribution of data, while it typically fails on out-of-distribution problems. The practicality of L2O depends on the type of target optimization, the chosen architecture of the method to learn, and the training procedure. This new paradigm has motivated a community of researchers to explore L2O and report their findings. This article is poised to be the first comprehensive survey and benchmark of L2O for continuous optimization. We set up taxonomies, categorize existing works and research directions, present insights, and identify open challenges. We also benchmark many existing L2O approaches on a few representative optimization problems. For reproducible research and fair benchmarking purposes, we have released our software implementation and data in the package Open-L2O at https://github.com/VITA-Group/Open-L2O
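In its most stripped-down form, L2O amounts to selecting an optimizer's hyperparameters from a set of training problems and reusing them on similar ones. The toy sketch below "learns" a gradient-descent step size this way; it is far simpler than the learned architectures the survey covers and is purely illustrative:

```python
import numpy as np

def gd(f_grad, x0, lr, steps=50):
    """Plain gradient descent with a fixed step size."""
    x = x0.copy()
    for _ in range(steps):
        x = x - lr * f_grad(x)
    return x

def learn_step_size(train_problems, candidates, steps=50):
    """'Learning to optimize' at its simplest: pick the step size that
    minimizes the total final loss over the training problems."""
    best_lr, best_loss = None, np.inf
    for lr in candidates:
        loss = 0.0
        for f, f_grad, x0 in train_problems:
            loss += f(gd(f_grad, x0, lr, steps))
        if loss < best_loss:
            best_lr, best_loss = lr, loss
    return best_lr
```

On a family of quadratics the selected step size is the one that converges on every training problem without diverging on the stiffest one, which is exactly the distribution-specific behavior the abstract describes.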

    Structural adaptive anisotropic recursive filter for blind medical image deconvolution

    The performance of radiographic diagnosis and therapeutic intervention heavily depends on the quality of acquired images. Over the decades, a range of pre-processing techniques for image enhancement has been explored. Among the most recent proposals is iterative blind image deconvolution, which aims to identify the inherent point spread function that degrades images during acquisition. Thus far, the technique has been known for its poor convergence and stability, and it was recently superseded by recursive image filtering with non-negativity and support constraints. However, the latter requires a priori knowledge of intrinsic properties of the imaging sensor, e.g., distribution, noise floor and field of view. Most importantly, since a homogeneity assumption is implied by the deconvolution, the recovered degrading function is global, disregarding the fidelity of the underlying objects. This paper proposes a modified recursive filtering scheme with similar non-negativity constraints that also takes into account the local anisotropic structure of the content. The experiments reported herein demonstrate its superior convergence while preserving crucial image features.
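The "local anisotropic structure" idea can be illustrated with a classic edge-aware (Perona-Malik style) diffusion step, shown here in 1D for brevity. This is a generic illustration of structure-adaptive filtering, not the authors' constrained recursive deconvolution filter:

```python
import numpy as np

def anisotropic_smooth(img, iters=20, kappa=0.1, step=0.2):
    """Perona-Malik style edge-aware smoothing (1D): diffusion is attenuated
    where the local gradient is large, so edges are preserved while flat,
    noisy regions are smoothed."""
    u = img.astype(float).copy()
    for _ in range(iters):
        g = np.diff(u)                        # forward differences
        c = 1.0 / (1.0 + (g / kappa) ** 2)    # edge-stopping diffusivity in (0, 1]
        flux = c * g
        u[1:-1] += step * (flux[1:] - flux[:-1])  # explicit diffusion update
    return u
```

On a noisy step signal the flat regions are denoised while the step's contrast survives, since the diffusivity collapses at the large edge gradient.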