
    Image reconstruction in optical interferometry: Benchmarking the regularization

    With the advent of infrared long-baseline interferometers with more than two telescopes, both the size and the completeness of interferometric data sets have significantly increased, allowing images to be reconstructed from models with no a priori assumptions. Our main objective is to analyze the multiple parameters of the image reconstruction process, with particular attention to the regularization term, and to study their behavior in different situations. A secondary goal is to derive practical rules for users. Using the Multi-aperture image Reconstruction Algorithm (MiRA), we performed systematic tests of 11 commonly used regularization terms. The tests cover different astrophysical objects, different (u,v) plane coverages and several signal-to-noise ratios in order to determine the minimal configuration needed to reconstruct an image. We establish a methodology and introduce the mean-square error (MSE) as the metric for discussing the results. From the ~24000 simulations performed for the benchmarking of image reconstruction with MiRA, we are able to classify the different regularizations according to the type of observation, and we find typical values of the regularization weight. A minimal (u,v) coverage is required to reconstruct an acceptable image, whereas no limits are found over the studied range of signal-to-noise ratios. We also show that super-resolution can be achieved, with performance that improves as the (u,v) coverage fills in. Image reconstruction with sufficient (u,v) coverage is shown to be reliable, and the choice of the main reconstruction parameters is tightly constrained. We recommend that efforts to develop interferometric infrastructures concentrate first on the number of telescopes to combine, and second on improving the accuracy and sensitivity of the arrays. Comment: 15 pages, 16 figures; accepted in A&A.
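
    The paper's exact MSE definition is not given in the abstract; a minimal sketch of a flux-normalised mean-square error between a reconstructed image and the reference object, with the normalisation and the function name as assumptions, could look as follows.

```python
import numpy as np

def image_mse(reconstruction, reference):
    """Mean-square error between a reconstructed image and the reference object.

    Both images are normalised to unit total flux so that the score does not
    depend on the arbitrary absolute scaling of the reconstruction.
    """
    r = reconstruction / reconstruction.sum()
    t = reference / reference.sum()
    return float(np.mean((r - t) ** 2))
```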

    Generalized Forward-Backward Splitting

    This paper introduces the generalized forward-backward splitting algorithm for minimizing convex functions of the form F + \sum_{i=1}^n G_i, where F has a Lipschitz-continuous gradient and the G_i's are simple in the sense that their Moreau proximity operators are easy to compute. While the forward-backward algorithm cannot deal with more than n = 1 non-smooth function, our method generalizes it to the case of arbitrary n. Our method makes explicit use of the regularity of F in the forward step, and the proximity operators of the G_i's are applied in parallel in the backward step. This allows the generalized forward-backward algorithm to efficiently address an important class of convex problems. We prove its convergence in infinite dimension, and its robustness to errors in the computation of the proximity operators and of the gradient of F. Examples on inverse problems in imaging demonstrate the advantage of the proposed method in comparison to other splitting algorithms. Comment: 24 pages, 4 figures.
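
    As an illustration, a minimal sketch of one common form of the generalized forward-backward iteration is given below; the function names (grad_F, proxes), the uniform weights and the fixed step size are assumptions, and the paper's exact scheme (in particular its step-size and relaxation rules) may differ.

```python
import numpy as np

def generalized_forward_backward(x0, grad_F, proxes, L, n_iter=200, lam=1.0):
    """Sketch of a generalized forward-backward iteration for
    min_x F(x) + sum_i G_i(x), where F is smooth with an L-Lipschitz
    gradient and each G_i is accessed through its proximity operator.

    proxes: list of callables prox_i(v, gamma) = prox_{gamma * G_i}(v).
    """
    n = len(proxes)
    w = np.full(n, 1.0 / n)               # weights summing to one
    gamma = 1.0 / L                       # forward (gradient) step size
    x = x0.copy()
    z = [x0.copy() for _ in range(n)]     # one auxiliary variable per G_i
    for _ in range(n_iter):
        g = grad_F(x)                     # forward step: explicit gradient of F
        for i, prox in enumerate(proxes): # backward steps: proxes applied in parallel
            z[i] = z[i] + lam * (prox(2.0 * x - z[i] - gamma * g, gamma / w[i]) - x)
        x = sum(wi * zi for wi, zi in zip(w, z))
    return x
```

    For n = 1 this reduces to the usual forward-backward (proximal-gradient) iteration.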

    Non-convex regularization in remote sensing

    In this paper, we study the effect of different regularizers and their implications for high-dimensional image classification and sparse linear unmixing. Although kernelization and sparse methods are widely accepted solutions for processing high-dimensional data, we present here a study of the impact of the form of regularization used and of its parametrization. We consider regularization via the traditional squared (l2) and sparsity-promoting (l1) norms, as well as more unconventional non-convex regularizers (lp and the Log-Sum Penalty). We compare their properties and advantages on several classification and linear unmixing tasks and provide advice on the choice of the best regularizer for the problem at hand. Finally, we also provide a fully functional toolbox for the community. Comment: 11 pages, 11 figures.
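
    For concreteness, the four penalties being compared can be written down directly; the function names and the eps parameter of the log-sum penalty are illustrative choices, not the toolbox's API.

```python
import numpy as np

def l2_penalty(w, lam):
    return lam * np.sum(w ** 2)                          # squared l2: shrinks, never zeroes

def l1_penalty(w, lam):
    return lam * np.sum(np.abs(w))                       # l1: convex, sparsity-promoting

def lp_penalty(w, lam, p=0.5):
    return lam * np.sum(np.abs(w) ** p)                  # lp with 0 < p < 1: non-convex

def log_sum_penalty(w, lam, eps=1e-3):
    return lam * np.sum(np.log(1.0 + np.abs(w) / eps))   # log-sum penalty: non-convex
```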

    Vast volatility matrix estimation for high-frequency financial data

    High-frequency data observed on the prices of financial assets are commonly modeled by diffusion processes with micro-structure noise, and realized volatility-based methods are often used to estimate integrated volatility. For problems involving a large number of assets, the estimation objects we face are volatility matrices of large size. The existing volatility estimators work well for a small number of assets but perform poorly when the number of assets is very large. In fact, they are inconsistent when both the number, p, of assets and the average sample size, n, of the price data on the p assets go to infinity. This paper proposes a new type of estimator for the integrated volatility matrix and establishes asymptotic theory for the proposed estimators in a framework that allows both n and p to approach infinity. The theory shows that the proposed estimators achieve high convergence rates under a sparsity assumption on the integrated volatility matrix. Numerical studies demonstrate that the proposed estimators perform well for large p and complex price and volatility models. The proposed method is applied to real high-frequency financial data. Comment: Published at http://dx.doi.org/10.1214/09-AOS730 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
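
    The abstract does not spell the estimator out, but sparsity-based large volatility matrix estimators typically include an elementwise thresholding step applied to a pilot (realized-covariance-type) estimate; a minimal sketch of that step, with the threshold tau left to the user, is shown below. This is a generic illustration, not the paper's full procedure.

```python
import numpy as np

def threshold_volatility_matrix(sigma_hat, tau):
    """Hard-threshold the off-diagonal entries of a pilot estimate of the
    integrated volatility matrix, keeping the diagonal (the individual
    integrated volatilities) intact."""
    out = np.where(np.abs(sigma_hat) >= tau, sigma_hat, 0.0)
    np.fill_diagonal(out, np.diag(sigma_hat))
    return out
```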

    Improvement of image quality of time-domain diffuse optical tomography with lp sparsity regularization

    An lp (0 < p ≤ 1) sparsity regularization is applied to time-domain diffuse optical tomography with a gradient-based nonlinear optimization scheme to improve the spatial resolution and the robustness to noise. The expression of the lp sparsity regularization is reformulated as a differentiable function of a parameter to avoid the difficulty in calculating its gradient in the optimization process. The regularization parameter is selected by the L-curve method. Numerical experiments show that the lp sparsity regularization improves the spatial resolution and recovers the difference in the absorption coefficients between two targets, although a target with a small absorption coefficient may disappear due to the strong effect of the lp sparsity regularization when the value of p is too small. The lp sparsity regularization with small p values strongly localizes the target, and the reconstructed region of the target becomes smaller as the value of p decreases. A phantom experiment validates the numerical simulations.
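
    The exact differentiable reformulation is not given in the abstract; a standard smoothing with the same purpose replaces |x_i|^p by (x_i^2 + eps)^{p/2}, which is differentiable everywhere, as in the sketch below (the eps value is an assumption).

```python
import numpy as np

def smoothed_lp(x, p=0.5, eps=1e-6):
    """Differentiable surrogate of the lp sparsity term sum_i |x_i|^p."""
    return np.sum((x ** 2 + eps) ** (p / 2.0))

def smoothed_lp_grad(x, p=0.5, eps=1e-6):
    """Gradient of the surrogate, usable in a gradient-based optimization scheme."""
    return p * x * (x ** 2 + eps) ** (p / 2.0 - 1.0)
```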

    Non-line-of-sight reconstruction via structure sparsity regularization

    Non-line-of-sight (NLOS) imaging allows objects around a corner to be imaged, which enables potential applications in fields such as autonomous driving, robotic vision, medical imaging and security monitoring. However, the quality of reconstruction is challenged by low signal-to-noise ratio (SNR) measurements. In this study, we present a regularization method, referred to as structure sparsity (SS) regularization, for denoising in NLOS reconstruction. By exploiting the prior knowledge of structural sparseness, we incorporate nuclear-norm penalization into the cost function of the directional light-cone transform (DLCT) model of the NLOS imaging system. This incorporation effectively integrates the neighborhood information associated with the directional albedo, thereby facilitating the denoising process. The reconstruction is then obtained by optimizing the directional albedo model with SS regularization using the fast iterative shrinkage-thresholding algorithm (FISTA). Notably, robust reconstruction of occluded objects is observed. Through comprehensive evaluations on both synthetic and experimental datasets, we demonstrate that the proposed approach yields high-quality reconstructions, surpassing state-of-the-art reconstruction algorithms, especially in scenarios involving short exposure and low-SNR measurements. Comment: 8 pages, 5 figures.
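
    The backward step of FISTA with a nuclear-norm penalty is singular value thresholding; a minimal sketch of that proximal operator (independent of the paper-specific DLCT forward model) is given below.

```python
import numpy as np

def singular_value_thresholding(M, tau):
    """Proximal operator of tau * ||M||_* (nuclear norm): soft-threshold
    the singular values of M and rebuild the matrix."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt
```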

    A fast patch-dictionary method for whole image recovery

    Various algorithms have been proposed for dictionary learning. Among those for image processing, many use image patches to form dictionaries. This paper focuses on whole-image recovery from corrupted linear measurements. We address the open issue of representing an image by overlapping patches: the overlapping leads to an excessive number of dictionary coefficients to determine. With very few exceptions, this issue has limited the applications of image-patch methods to local tasks such as denoising, inpainting, cartoon-texture decomposition, super-resolution and image deblurring, for which one can process a few patches at a time. Our focus is global imaging tasks such as compressive sensing and medical image recovery, where the whole image is encoded together, making it either impossible or very ineffective to update a few patches at a time. Our strategy is to divide the sparse recovery into multiple subproblems, each of which handles a subset of non-overlapping patches, and then to average the results of the subproblems to yield the final recovery. This simple strategy is surprisingly effective in terms of both quality and speed. In addition, we accelerate computation of the learned dictionary by applying a recent block proximal-gradient method, which not only has a lower per-iteration complexity but also takes fewer iterations to converge than the current state of the art. We also establish that our algorithm globally converges to a stationary point. Numerical results on synthetic data demonstrate that our algorithm can recover a more faithful dictionary than two state-of-the-art methods. Combining our whole-image recovery and dictionary-learning methods, we numerically simulate image inpainting, compressive sensing recovery and deblurring. Our recovery is more faithful than those of a total variation method and a method based on overlapping patches.
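
    A heavily simplified sketch of the split-and-average strategy is given below; the recovery on a single grid of non-overlapping patches (sparse coding against the learned dictionary under the measurement constraint) is abstracted into a hypothetical callback recover_on_grid, so only the averaging over shifted grids described above is shown.

```python
import numpy as np

def recover_whole_image(recover_on_grid, image_shape, offsets):
    """Each offset defines one grid of non-overlapping patches; the
    corresponding subproblem is solved by `recover_on_grid(offset)` and the
    resulting whole-image recoveries are averaged."""
    acc = np.zeros(image_shape)
    for offset in offsets:
        acc += recover_on_grid(offset)   # one subproblem per non-overlapping grid
    return acc / len(offsets)
```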

    Multi-scale Mining of fMRI data with Hierarchical Structured Sparsity

    Inverse inference, or "brain reading", is a recent paradigm for analyzing functional magnetic resonance imaging (fMRI) data, based on pattern recognition and statistical learning. By predicting cognitive variables related to brain activation maps, this approach aims at decoding brain activity. Inverse inference takes into account the multivariate information between voxels and is currently the only way to assess how precisely some cognitive information is encoded by the activity of neural populations within the whole brain. However, it relies on a prediction function that is plagued by the curse of dimensionality, since there are far more features than samples, i.e., more voxels than fMRI volumes. To address this problem, different methods have been proposed, such as univariate feature selection, feature agglomeration and regularization techniques. In this paper, we consider a sparse hierarchical structured regularization. Specifically, the penalization we use is constructed from a tree obtained by spatially-constrained agglomerative clustering. This approach encodes the spatial structure of the data at different scales into the regularization, which makes the overall prediction procedure more robust to inter-subject variability. The regularization induces the selection of spatially coherent predictive brain regions simultaneously at different scales. We test our algorithm on real data acquired to study the mental representation of objects, and we show that it not only delineates meaningful brain regions but also yields better prediction accuracy than reference methods.
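
    Written out, the tree-structured penalty is a weighted sum of Euclidean norms over groups of voxels, one group per node of the clustering tree (so the groups are nested across scales); the sketch below assumes the groups and their weights are produced by the spatially-constrained agglomerative clustering step.

```python
import numpy as np

def tree_structured_penalty(w, groups, etas):
    """Hierarchical structured-sparsity penalty: sum_g eta_g * ||w[g]||_2,
    where each group g lists the voxels below one node of the tree."""
    return sum(eta * np.linalg.norm(w[np.asarray(g)]) for g, eta in zip(groups, etas))
```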

    Non-line-of-sight imaging with arbitrary illumination and detection pattern

    Non-line-of-sight (NLOS) imaging aims at reconstructing targets obscured from the direct line of sight. Existing NLOS imaging algorithms require dense measurements at rectangular grid points over a large area of the relay surface, which severely limits their applicability to variable relay scenarios in practical applications such as robotic vision, autonomous driving, rescue operations and remote sensing. In this work, we propose a Bayesian framework for NLOS imaging with no specific requirements on the spatial pattern of the illumination and detection points. By introducing virtual confocal signals, we design a confocal-complemented signal-object collaborative regularization (CC-SOCR) algorithm for high-quality reconstruction. Our approach is capable of reconstructing both the albedo and the surface normal of hidden objects with fine details under the most general relay settings. Moreover, with a regular relay surface, coarse rather than dense measurements are enough for our approach, so the acquisition time can be reduced significantly. As demonstrated in multiple experiments, the new framework substantially enhances the applicability of NLOS imaging. Comment: main article: 32 pages with 8 figures; supplementary information: 49 pages with 26 figures.