
    Accelerated High-Resolution Photoacoustic Tomography via Compressed Sensing

    Current 3D photoacoustic tomography (PAT) systems offer either high image quality or high frame rates but are not able to deliver high spatial and temporal resolution simultaneously, which limits their ability to image dynamic processes in living tissue. A particular example is the planar Fabry-Perot (FP) scanner, which yields high-resolution images but takes several minutes to sequentially map the photoacoustic field on the sensor plane, point-by-point. However, as the spatio-temporal complexity of many absorbing tissue structures is rather low, the data recorded in such a conventional, regularly sampled fashion is often highly redundant. We demonstrate that combining variational image reconstruction methods using spatial sparsity constraints with the development of novel PAT acquisition systems capable of sub-sampling the acoustic wave field can dramatically increase the acquisition speed while maintaining a good spatial resolution: First, we describe and model two general spatial sub-sampling schemes. Then, we discuss how to implement them using the FP scanner and demonstrate the potential of these novel compressed sensing PAT devices through simulated data from a realistic numerical phantom and through measured data from a dynamic experimental phantom as well as from in-vivo experiments. Our results show that images with good spatial resolution and contrast can be obtained from highly sub-sampled PAT data if variational image reconstruction methods that describe the tissue structures with suitable sparsity constraints are used. In particular, we examine the use of total variation regularization enhanced by Bregman iterations. These novel reconstruction strategies offer new opportunities to dramatically increase the acquisition speed of PAT scanners that employ point-by-point sequential scanning, as well as reducing the channel count of parallelized schemes that use detector arrays. (Comment: submitted to "Physics in Medicine and Biology".)
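The sparsity-constrained recovery idea can be illustrated on a toy sub-sampled linear problem. The sketch below reconstructs a sparse signal from far fewer measurements than unknowns using an l1 penalty solved by iterative soft-thresholding (ISTA); the random measurement matrix, sparsity level, and regularization weight are illustrative assumptions, not the paper's photoacoustic forward operator or its TV/Bregman scheme.

```python
import numpy as np

def ista(A, y, lam=0.01, n_iter=2000):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L           # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
n, m = 200, 60                                  # 200 unknowns, only 60 measurements
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = rng.normal(0, 1, 8)   # 8-sparse signal
A = rng.normal(0, 1 / np.sqrt(m), (m, n))       # random sub-sampling matrix
y = A @ x_true                                  # sub-sampled, noiseless data
x_rec = ista(A, y)
err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
```

With 60 measurements of 200 unknowns, the sparse signal is recovered with small relative error, mirroring the abstract's point that heavily sub-sampled data suffice when a suitable sparsity constraint is enforced.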

    Two-dimensional magnetotelluric inversion of blocky geoelectrical structures

    This paper demonstrates that there are alternative approaches to the magnetotelluric (MT) inverse problem based on different types of geoelectrical models. The traditional approach uses smooth models to describe the conductivity distribution in underground formations. In this paper, we present a new approach, based on approximating the geology by models with blocky conductivity structures. We can select one or another class of inverse models by choosing between different stabilizing functionals in the regularization method. The final decision as to which approach should be used for a specific MT data set is made on the basis of available geological information. This paper describes a new way of stabilizing two-dimensional MT inversion using a minimum support functional and shows the improvement that it provides over traditional methods for geoelectrical models with blocky structures. The new method is applied to MT data collected for crustal imaging in the Carrizo Plain in California and to MT data collected for mining exploration by INCO Exploration.
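The minimum support stabilizer referred to here is commonly written as s(m) = sum_i m_i^2 / (m_i^2 + e^2): it smoothly counts the cells where the model departs from the background, so it penalizes spread-out anomalies more than compact, blocky ones. A minimal numerical sketch (the model vectors and focusing parameter e are illustrative):

```python
import numpy as np

def min_support(m, e=1e-2):
    """Minimum support stabilizer: a smooth count of the cells where the
    model parameter deviates from the (zero) background."""
    return np.sum(m**2 / (m**2 + e**2))

x = np.linspace(0, 1, 100)
blocky = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)   # compact, blocky anomaly
smooth = np.exp(-((x - 0.5) / 0.2) ** 2)             # spread-out, smooth anomaly
# Both deviate from the background, but the blocky model occupies far fewer
# cells, so minimizing this functional favors blocky structures.
```

Here the blocky model's penalty is essentially its number of nonzero cells (20), while the smooth Gaussian anomaly, spread over most of the domain, is penalized much more heavily.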

    Regularization strategy for the layered inversion of airborne TEM data: application to VTEM data acquired over the basin of Franceville (Gabon)

    Airborne transient electromagnetic (TEM) surveying is a cost-effective method to image the distribution of electrical conductivity in the ground. We consider layered-earth inversion to interpret large data sets covering hundreds of kilometres. Different strategies can be used to solve this inverse problem; they all amount to managing the a priori information so as to avoid mathematical instability and provide the most plausible model of conductivity at depth. To obtain a fast and realistic inversion program, we tested three kinds of regularization: two are based on the standard Tikhonov procedure, which minimizes not only the data misfit function but a balanced objective function with additional terms constraining the lateral and vertical smoothness of the conductivity; the third reduces the condition number of the kernel by changing the layout of the layers before minimizing the data misfit function. Finally, in order to obtain a more realistic distribution of conductivity, notably by removing negative conductivity values, we suggest an additional recursive filter based on inverting the logarithm of the conductivity. All these methods are tested on synthetic and real data sets. The synthetic data, computed by 2.5D modelling, demonstrate that the methods provide equivalent quality in terms of data misfit and accuracy of the resulting image; their main limitation arises on targets with sharp 2D geometries. The real data case uses helicopter-borne TEM data acquired over the basin of Franceville (Gabon), where borehole conductivity logs show good accuracy of the inverted models in most areas, with some biased depths where strong lateral changes occur.
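The logarithmic reparametrization mentioned at the end can be sketched with a toy linear forward model: inverting for p = log(sigma) instead of sigma guarantees positive conductivities by construction. The operator, damping value, and damped Gauss-Newton solver below are illustrative assumptions, not the authors' airborne-TEM code:

```python
import numpy as np

def invert_log_sigma(G, y, alpha=1e-2, n_iter=50):
    """Damped Gauss-Newton inversion of y = G @ sigma, parametrized as
    sigma = exp(p): the log transform keeps conductivities strictly positive."""
    p = np.zeros(G.shape[1])                     # start from sigma = 1
    for _ in range(n_iter):
        sigma = np.exp(p)
        J = G * sigma                            # Jacobian: G @ diag(sigma)
        r = G @ sigma - y
        dp = np.linalg.solve(J.T @ J + alpha * np.eye(len(p)),
                             -(J.T @ r + alpha * p))
        p += np.clip(dp, -1.0, 1.0)              # simple step-length control
    return np.exp(p)

rng = np.random.default_rng(2)
G = rng.normal(size=(20, 10))                    # toy linear forward operator
sigma_true = rng.uniform(0.5, 2.0, 10)           # positive true conductivities
y = G @ sigma_true
sigma_rec = invert_log_sigma(G, y)
rel_misfit = np.linalg.norm(G @ sigma_rec - y) / np.linalg.norm(y)
```

Whatever the data, the recovered model is positive everywhere, which is the point of the log-conductivity filter in the abstract.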

    Multilevel Approach For Signal Restoration Problems With Toeplitz Matrices

    We present a multilevel method for discrete ill-posed problems arising from the discretization of Fredholm integral equations of the first kind. In this method, we use the Haar wavelet transform to define restriction and prolongation operators within a multigrid-type iteration. The choice of the Haar wavelet operator has the advantage of preserving matrix structure, such as Toeplitz, between grids, which can be exploited to obtain faster solvers on each level where an edge-preserving Tikhonov regularization is applied. Finally, we present results that indicate the promise of this approach for restoration of signals and images with edges
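The structure-preservation property is easy to verify directly: restricting a Toeplitz matrix in Galerkin fashion (R A R^T) with the orthonormal Haar averaging operator yields a Toeplitz matrix on the coarse grid as well. A minimal sketch with an illustrative exponentially decaying kernel:

```python
import numpy as np

def haar_restrict(A):
    """Galerkin coarse-grid operator R @ A @ R.T, where R is the orthonormal
    Haar averaging operator (rows [1, 1]/sqrt(2), stride 2)."""
    n = A.shape[0] // 2
    R = np.zeros((n, 2 * n))
    idx = np.arange(n)
    R[idx, 2 * idx] = 1 / np.sqrt(2)
    R[idx, 2 * idx + 1] = 1 / np.sqrt(2)
    return R @ A @ R.T

# Symmetric Toeplitz "blurring" matrix built from a decaying kernel.
c = np.exp(-np.arange(8) / 2.0)
i, j = np.indices((8, 8))
A = c[np.abs(i - j)]
A2 = haar_restrict(A)
# Every diagonal of A2 is constant, i.e. the coarse matrix is again Toeplitz,
# so fast Toeplitz solvers remain available on every level of the hierarchy.
```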

    A Computationally Efficient Tool for Assessing the Depth Resolution in Potential-Field Inversion

    In potential-field inversion problems, it can be difficult to obtain reliable information about the source distribution with respect to depth. Moreover, the spatial resolution of the solution decreases with depth; in fact, the more ill-posed the problem and the noisier the data, the less reliable the depth information. Building on early work on depth resolution, defined in terms of the singular value decomposition (SVD), we introduce a tool, APPROXDRP, that uses approximations of the singular vectors obtained by the iterative Lanczos bidiagonalization algorithm, making it well suited for large-scale problems. This tool allows a computational and visual analysis of how much depth resolution can be obtained from the given data in a potential-field inversion problem. We show that, when used in combination with a plot of the approximate SVD quantities, APPROXDRP can successfully reveal the limitations of depth resolution resulting from noise in the data. This allows a reliable analysis of the retrievable depth information and effectively guides the user in choosing the optimal number of iterations for a given problem.
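The approximate singular vectors referred to here come from Golub-Kahan-Lanczos bidiagonalization: after k steps, the SVD of a small (k+1) x k bidiagonal matrix approximates the leading singular triplets of A at the cost of k matrix-vector products per side. A minimal sketch on an illustrative ill-posed (Hilbert-like) matrix, not the APPROXDRP tool itself:

```python
import numpy as np

def lanczos_bidiag(A, k, rng):
    """Golub-Kahan-Lanczos bidiagonalization with full reorthogonalization.
    The SVD of the small (k+1) x k bidiagonal factor B approximates the
    leading singular values/vectors of A."""
    m, n = A.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k)); B = np.zeros((k + 1, k))
    u = rng.normal(size=m)
    U[:, 0] = u / np.linalg.norm(u)
    beta, v_prev = 0.0, np.zeros(n)
    for j in range(k):
        v = A.T @ U[:, j] - beta * v_prev
        v -= V[:, :j] @ (V[:, :j].T @ v)          # reorthogonalize against V
        alpha = np.linalg.norm(v)
        V[:, j] = v / alpha
        u = A @ V[:, j] - alpha * U[:, j]
        u -= U[:, :j + 1] @ (U[:, :j + 1].T @ u)  # reorthogonalize against U
        beta = np.linalg.norm(u)
        U[:, j + 1] = u / beta
        B[j, j], B[j + 1, j] = alpha, beta
        v_prev = V[:, j]
    return U, B, V

# Ill-posed test matrix with rapidly decaying singular values (Hilbert matrix).
n = 50
i, j = np.indices((n, n))
A = 1.0 / (i + j + 1.0)
U, B, V = lanczos_bidiag(A, 8, np.random.default_rng(3))
sv_approx = np.linalg.svd(B, compute_uv=False)    # approximate singular values
sv_true = np.linalg.svd(A, compute_uv=False)      # reference (feasible here only)
```

After only 8 iterations, the leading approximate singular values match the true ones closely, which is why the Lanczos-based approximation scales to problems where a full SVD is out of reach.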

    Linear Precoding Based on Polynomial Expansion: Large-Scale Multi-Cell MIMO Systems

    Large-scale MIMO systems can yield a substantial improvement in spectral efficiency for future communication systems. Due to the finer spatial resolution achieved by a huge number of antennas at the base stations, these systems have been shown to be robust to inter-user interference, and the use of linear precoding is asymptotically optimal. However, most precoding schemes exhibit high computational complexity as the system dimensions increase. For example, the near-optimal regularized zero-forcing (RZF) precoder requires the inversion of a large matrix. This motivated our companion paper, where we proposed to solve the issue in single-cell multi-user systems by approximating the matrix inverse by a truncated polynomial expansion (TPE), whose polynomial coefficients are optimized to maximize the system performance. We showed that the proposed TPE precoding with a small number of coefficients comes close to the performance of RZF but never exceeds it. In a realistic multi-cell scenario involving large-scale multi-user MIMO systems, the optimization of RZF precoding has thus far not been feasible. This is mainly attributed to the high complexity of the scenario and the non-linear impact of the necessary regularization parameters. On the other hand, the scalar weights in TPE precoding offer a tractable handle for throughput optimization. Following the same methodology as in the companion paper, we exploit random matrix theory to derive a deterministic expression for the asymptotic SINR of each user. We also provide an optimization algorithm to approximate the weights that maximize network-wide weighted max-min fairness. The optimized weights can be used to mimic the user throughput distribution of RZF precoding. Using simulations, we compare the network throughput of TPE precoding with that of the suboptimal RZF scheme and show that our scheme can achieve higher throughput using a TPE order of only 3.
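The TPE idea of replacing a matrix inverse by a low-order matrix polynomial can be sketched with plain Neumann-series weights (the paper optimizes the coefficients instead, which is what allows a small order to match RZF performance). The dimensions, regularization value, and channel statistics below are illustrative:

```python
import numpy as np

def tpe_inverse(M, order):
    """Truncated polynomial expansion of M^{-1} with simple Neumann weights:
    M^{-1} ~ alpha * sum_{l=0}^{order} (I - alpha*M)^l, where alpha is set
    from the extreme eigenvalues so the expansion converges.
    (Optimized, rather than Neumann, coefficients are the paper's approach.)"""
    ev = np.linalg.eigvalsh(M)                  # M is Hermitian positive definite
    alpha = 2.0 / (ev[0] + ev[-1])
    T = np.eye(M.shape[0]) - alpha * M
    S = np.eye(M.shape[0], dtype=M.dtype)
    P = np.eye(M.shape[0], dtype=M.dtype)
    for _ in range(order):
        P = P @ T                               # accumulate the next power
        S = S + P
    return alpha * S

rng = np.random.default_rng(1)
K, N = 16, 64                                   # users, base-station antennas
H = (rng.normal(size=(K, N)) + 1j * rng.normal(size=(K, N))) / np.sqrt(2 * N)
M = H @ H.conj().T + 0.1 * np.eye(K)            # RZF-style regularized Gram matrix
M_inv = np.linalg.inv(M)
err = lambda L: np.linalg.norm(tpe_inverse(M, L) - M_inv) / np.linalg.norm(M_inv)
```

The approximation error decreases as the TPE order grows, trading a large matrix inversion for a handful of matrix products that parallelize well in hardware.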