
    Motion-corrected Fourier ptychography

    Fourier ptychography (FP) is a recently proposed computational imaging technique for high space-bandwidth-product imaging. In real setups such as endoscopes and transmission electron microscopes, common sample motion severely degrades the FP reconstruction and limits its practicability. In this paper, we propose a novel FP reconstruction method to efficiently correct for unknown sample motion. Specifically, we adaptively update the sample's Fourier spectrum from low-spatial-frequency regions towards high-spatial-frequency ones, with an additional motion-recovery and phase-offset-compensation procedure for each sub-spectrum. Benefiting from phase-retrieval redundancy theory, the required large overlap between adjacent sub-spectra offers an accurate guide for successful motion recovery. Experimental results on both simulated and real captured data show that the proposed method can correct for unknown sample motion with a standard deviation of up to 10% of the field-of-view scale. We have released our source code for non-commercial use, and the method may find wide application in related FP platforms such as endoscopy and transmission electron microscopy.
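The motion-recovery step exploits the overlap between adjacent sub-spectra. As a minimal, hypothetical sketch of the underlying idea (not the authors' released code), an unknown integer-pixel translation between two overlapping views can be recovered by phase correlation:

```python
import numpy as np

def phase_correlation_shift(ref, moved):
    """Recover an integer-pixel shift via the normalized cross-power spectrum."""
    cross = np.fft.fft2(moved) * np.conj(np.fft.fft2(ref))
    cross /= np.abs(cross) + 1e-12            # keep phase information only
    corr = np.fft.ifft2(cross).real           # peaks at the shift
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peak indices to signed shifts.
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
view = rng.random((64, 64))
shifted = np.roll(view, (5, -3), axis=(0, 1))  # simulated sample motion
```

Here the search is global; in the paper's setting the analogous search runs per sub-spectrum, with a phase-offset compensation that this sketch omits.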

    Maximum A Posteriori Deconvolution of Sparse Spike Trains


    Unsupervised Bayesian convex deconvolution based on a field with an explicit partition function

    This paper proposes a non-Gaussian Markov field with a special feature: an explicit partition function. To the best of our knowledge, this is an original contribution. Moreover, the explicit expression of the partition function enables the development of an unsupervised, edge-preserving, convex deconvolution method. The method is fully Bayesian and produces an estimate in the sense of the posterior mean, numerically calculated by means of a Markov chain Monte Carlo technique. The approach is particularly effective, and the computational practicability of the method is shown on a simple simulated example.
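The posterior-mean estimator computed here by MCMC can be illustrated with a toy random-walk Metropolis sampler (a generic one-dimensional Gaussian target for illustration, not the paper's Markov field):

```python
import numpy as np

# Random-walk Metropolis: the posterior mean is estimated by averaging
# the chain after burn-in. Toy N(2, 1) target, known mean for checking.
rng = np.random.default_rng(3)
log_post = lambda theta: -0.5 * (theta - 2.0) ** 2  # log-density up to a constant

theta, chain = 0.0, []
for _ in range(20000):
    proposal = theta + rng.normal(0.0, 1.0)
    # Accept with probability min(1, post(proposal) / post(theta)).
    if np.log(rng.random()) < log_post(proposal) - log_post(theta):
        theta = proposal
    chain.append(theta)
posterior_mean = np.mean(chain[5000:])  # discard burn-in
```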

    Highly efficient Bayesian joint inversion for receiver-based data and its application to lithospheric structure beneath the southern Korean Peninsula

    With the deployment of extensive seismic arrays, systematic and efficient parameter and uncertainty estimation is of increasing importance and can provide reliable regional models for crustal and upper-mantle structure. We present an efficient Bayesian method for the joint inversion of surface-wave dispersion and receiver-function data that combines trans-dimensional (trans-D) model selection in an optimization phase with subsequent rigorous parameter uncertainty estimation. Parameter and uncertainty estimation depend strongly on the chosen parametrization, such that meaningful regional comparison requires quantitative model selection that can be carried out efficiently at several sites. While significant progress has been made for model selection (e.g. trans-D inference) at individual sites, the lack of efficiency can prohibit application to large data volumes or cause questionable results due to lack of convergence. Studies that address large numbers of data sets have mostly ignored model selection in favour of more efficient or simpler estimation techniques (i.e. focusing on uncertainty estimation but employing ad hoc model choices). Our approach consists of a two-phase inversion that combines trans-D optimization to select the most probable parametrization with subsequent Bayesian sampling for uncertainty estimation given that parametrization. The trans-D optimization is implemented by replacing the likelihood function with the Bayesian information criterion (BIC). The BIC provides constraints on model complexity that facilitate the search for an optimal parametrization. Parallel tempering (PT) is applied as the optimization algorithm; after optimization, the optimal model choice is identified by the minimum BIC value across all PT chains. Uncertainty estimation is then carried out in fixed dimension. Data errors are estimated as part of the inference problem by a combination of empirical and hierarchical estimation: data covariance matrices are estimated from data residuals (the difference between prediction and observation) and periodically updated, and a scaling factor for the covariance-matrix magnitude is estimated as part of the inversion. The inversion is applied to both simulated and observed data consisting of phase- and group-velocity dispersion curves (Rayleigh wave) and receiver functions. The simulation results show that model complexity and important features are well estimated by the fixed-dimensional posterior probability density. Observed data for stations in different tectonic regions of the southern Korean Peninsula are considered. The results are consistent with published results, but important features are better constrained than in previous regularized inversions and are more consistent across stations; for example, the resolution of crustal and Moho interfaces, and the absolute values and gradients of velocities in the lower crust and upper mantle, are better constrained.
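Replacing the likelihood with the BIC penalizes model complexity; the effect can be sketched in a toy fixed-form setting (polynomial-degree selection with a Gaussian-likelihood BIC, not the paper's layered-Earth parametrization):

```python
import numpy as np

def bic(y, y_pred, k):
    # Gaussian-likelihood BIC up to a constant: k*ln(n) + n*ln(RSS/n).
    n = y.size
    rss = np.sum((y - y_pred) ** 2)
    return k * np.log(n) + n * np.log(rss / n)

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
y = 1.0 + 2.0 * x - 3.0 * x**2 + 0.05 * rng.standard_normal(x.size)  # true degree 2

scores = {}
for degree in range(6):
    coeffs = np.polyfit(x, y, degree)
    scores[degree] = bic(y, np.polyval(coeffs, x), degree + 1)
best = min(scores, key=scores.get)  # the penalty recovers the true complexity
```

Underfit models pay through the residual term, overfit models through the k·ln(n) penalty, which is the same trade-off the trans-D optimization exploits.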

    Wireless Channel Equalization in Digital Communication Systems

    Our modern society has transformed into an information-demanding system, seeking voice, video, and data in quantities that could not be imagined even a decade ago. The mobility of communicators has added further challenges, one of which is to conceive a highly reliable and fast communication system unaffected by the problems caused by multipath-fading wireless channels. Our quest is to remove one of the obstacles to ultimately fast and reliable wireless digital communication, namely Inter-Symbol Interference (ISI), whose intensity can make channel noise inconsequential by comparison. The theoretical background for wireless-channel modeling and adaptive signal processing is covered in the first two chapters of the dissertation. The approach of this thesis is not based on a single methodology; rather, several algorithms and configurations are proposed and examined to fight the ISI problem. There are two main categories of channel-equalization techniques: supervised (trained) and unsupervised (blind) modes. We have studied the application of a new, specially modified neural network requiring a very short training period for proper channel equalization in supervised mode; the promising performance of this network is presented in Chapter 4. For blind modes, two distinct methodologies are presented and studied. Chapter 3 covers the concept of multiple cooperative algorithms for the cases of two and three cooperating algorithms; the select-absolutely-larger-equalized-signal and majority-vote methods are used in the two- and three-algorithm systems, respectively. Many of the demonstrated results are encouraging for further research. Chapter 5 applies the general concept of simulated annealing to blind-mode equalization. A limited strategy of constant annealing noise is tested on the simple algorithms used in the multiple-algorithm systems. Convergence to local stationary points of the cost function in parameter space is clearly demonstrated, which justifies the use of additional noise; the capability of added random noise to release the algorithm from local traps is established in several cases.
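As an illustration of blind-mode (unsupervised) equalization in the same spirit, the classic constant-modulus algorithm (CMA) adapts the equalizer from the received signal's modulus statistics alone. This is a standard textbook algorithm, not one of the thesis's specific cooperative or annealing schemes; the channel taps below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
# BPSK symbols distorted by a simple minimum-phase ISI channel.
symbols = rng.choice([-1.0, 1.0], size=5000)
channel = np.array([1.0, 0.4, 0.2])
received = np.convolve(symbols, channel, mode="full")[: symbols.size]

# Constant Modulus Algorithm: adapts without any training symbols.
n_taps, mu, R2 = 11, 1e-3, 1.0   # R2 = E|s|^4 / E|s|^2 = 1 for BPSK
w = np.zeros(n_taps)
w[n_taps // 2] = 1.0             # center-spike initialization
for n in range(n_taps, received.size):
    x = received[n - n_taps : n][::-1]   # most recent sample first
    y = w @ x
    w -= mu * y * (y * y - R2) * x       # stochastic gradient of E[(y^2 - R2)^2]

equalized = np.convolve(received, w, mode="full")[: received.size]
```

The constant-modulus cost is blind to a delay and a sign flip of the symbols, the same ambiguity any blind equalizer must resolve downstream.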

    A Bayesian approach to initial model inference in cryo-electron microscopy

    Single-particle cryo-electron microscopy (cryo-EM) is widely used to study the structure of macromolecular assemblies. Tens of thousands of noisy two-dimensional images of the assembly, viewed from different directions, are used to infer its three-dimensional structure. The first step is to estimate a low-resolution initial model and initial image orientations. This is a challenging, ill-posed inverse problem with many unknowns, including an unknown orientation for each two-dimensional image. Obtaining a good initial model is crucial for the success of the subsequent refinement step. In this thesis we introduce new algorithms for estimating an initial model in cryo-EM, based on a coarse representation of the electron density. The contribution of the thesis divides into two parts: one relating to the model and the other to the algorithms. The first main contribution is the use of Gaussian mixture models to represent electron densities in reconstruction algorithms. We use spherical (isotropic) mixture components with unknown positions, sizes, and weights. We show that this representation offers many advantages over the traditional grid-based representation used by other reconstruction algorithms; for example, it significantly reduces the number of parameters needed to represent the three-dimensional electron density, which leads to fast and robust algorithms. The second main contribution is the development of Markov chain Monte Carlo (MCMC) algorithms within a Bayesian framework for estimating the parameters of the mixture models. The first algorithm is a Gibbs sampler: it starts from the standard Gibbs sampling algorithm for fitting Gaussian mixture models to point clouds and extends it to work with images, to handle projections from three dimensions to two, and to account for unknown rotations and translations. The second algorithm takes a different approach: it modifies the forward model to assume Gaussian noise and uses sampling algorithms such as Hamiltonian Monte Carlo (HMC) to sample the positions of the mixture components and the image orientations. We provide extensive tests of both algorithms on simulated and experimental data and compare them to other initial-model algorithms.
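A key advantage of the isotropic mixture representation is that its two-dimensional projection is analytic: rotating the model and integrating along the viewing axis leaves each component a two-dimensional isotropic Gaussian of the same width, so no numerical integration over a voxel grid is needed. A minimal sketch with a hypothetical two-component model (orthographic projection; no CTF, noise, or translations):

```python
import numpy as np

def project_mixture(centers, sigmas, weights, R, n=32, extent=3.0):
    # Project a 3-D isotropic Gaussian mixture to 2-D: rotate the centers,
    # drop z; each component stays an isotropic Gaussian with the same sigma.
    xy = (centers @ R.T)[:, :2]
    axis = np.linspace(-extent, extent, n)
    X, Y = np.meshgrid(axis, axis)
    img = np.zeros((n, n))
    for (cx, cy), s, w in zip(xy, sigmas, weights):
        img += w / (2 * np.pi * s**2) * np.exp(
            -((X - cx) ** 2 + (Y - cy) ** 2) / (2 * s**2))
    return img

centers = np.array([[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0]])
Ry = np.array([[0.0, 0.0, 1.0],      # 90-degree rotation about the y-axis
               [0.0, 1.0, 0.0],
               [-1.0, 0.0, 0.0]])
top = project_mixture(centers, [0.3, 0.3], [1.0, 1.0], np.eye(3))
side = project_mixture(centers, [0.3, 0.3], [1.0, 1.0], Ry)  # blobs overlap
```

Seen from the side, both components project onto the same point, so the peak roughly doubles; inferring orientations amounts to explaining such view-dependent changes.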

    Iterative blind deconvolution and its application in characterization of eddy current NDE signals

    Eddy current techniques are widely used to detect and characterize defects in steam generator tubes in nuclear power plants. Although defect characterization is crucial for successful inspection, it is often difficult due to the finite size of the probes used. A feasible solution is to model the defect data as the convolution of the defect surface profile and the probe response, so that deconvolution algorithms can be used to remove the effect of the probe on the signal. This thesis presents an iterative blind deconvolution method based on the Richardson-Lucy algorithm to address the defect-characterization problem; another iterative blind deconvolution method, based on Wiener filtering, is used as a performance comparison. A preprocessing algorithm is introduced to remove noise and thus enhance performance. Two new convergence criteria are proposed to solve the convergence problem. Different types of initial estimate of the point spread function (PSF) are used and their impacts on performance are compared. Results of applying the method to synthetic data, calibration data, and field data are presented.
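The non-blind core of the Richardson-Lucy iteration, which blind variants alternate between the image and PSF estimates, can be sketched in one dimension (a hypothetical Gaussian PSF stands in for the probe response; the thesis's preprocessing and convergence criteria are omitted):

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=50):
    # Multiplicative RL update: estimate <- estimate * corr(observed / blur, psf).
    estimate = np.full_like(observed, observed.mean())
    psf_flipped = psf[::-1]  # correlation = convolution with the flipped PSF
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_flipped, mode="same")
    return estimate

# A sharp "defect profile" blurred by a broad probe response.
profile = np.zeros(64)
profile[30:34] = 1.0
psf = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)
psf /= psf.sum()
blurred = np.convolve(profile, psf, mode="same")
restored = richardson_lucy(blurred, psf)
```

The multiplicative form keeps the estimate non-negative, which is one reason RL suits physical profiles such as defect depths.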

    Analysis, Design, and Generalization of Electrochemical Impedance Spectroscopy (EIS) Inversion Algorithms

    We introduce a framework for analyzing and designing EIS inversion algorithms. Our framework stems from the observation of four features common to well-defined EIS inversion algorithms, namely (1) the representation of unknown distributions, (2) the minimization of a metric of error to estimate parameters arising from the chosen representation, subject to constraints on (3) the complexity-control parameters, and (4) a means for choosing optimal control-parameter values. These features must be present to overcome the ill-posed nature of EIS inversion problems. We review three established EIS inversion algorithms to illustrate the pervasiveness of these features, and show the utility of the framework by resolving ambiguities concerning three more algorithms. The framework is then used to design the generalized EIS inversion (gEISi) algorithm, which uses a Gaussian-basis-function representation, a modality control parameter, and cross-validation for choosing the optimal control-parameter value. The gEISi algorithm is applicable to the generalized EIS inversion problem, which allows for a wider range of underlying models. We also consider the construction of credible intervals for distributions arising from the algorithm. The algorithm is able to accurately reproduce distributions that have been difficult to obtain using existing algorithms. It is provided gratis in the repository https://github.com/suryaeff/gEISi.git. Comment: 46 pages, to be submitted to the Journal of the Electrochemical Society
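Feature (4), choosing the control-parameter value by cross-validation, can be sketched for a generic linear inversion with a Gaussian basis (feature 1) and a ridge penalty standing in for the complexity control (feature 3). All names and values below are hypothetical, not taken from the gEISi code:

```python
import numpy as np

rng = np.random.default_rng(5)
# Toy ill-posed linear inversion: smooth Gaussian basis, sparse true weights.
t = np.linspace(-3.0, 3.0, 60)
Phi = np.exp(-0.5 * ((t[:, None] - t[None, ::4]) / 0.8) ** 2)
true_w = np.zeros(Phi.shape[1])
true_w[[4, 10]] = [1.0, 0.7]
data = Phi @ true_w + 0.02 * rng.standard_normal(t.size)

def ridge_fit(lmbda, rows):
    # Penalized least squares (feature 2) at one control-parameter value.
    P = Phi[rows]
    return np.linalg.solve(P.T @ P + lmbda * np.eye(P.shape[1]), P.T @ data[rows])

# Feature 4: pick the control parameter by hold-out cross-validation.
train = np.arange(t.size) % 2 == 0
test = ~train
errors = {lmbda: float(np.sum((Phi[test] @ ridge_fit(lmbda, train) - data[test]) ** 2))
          for lmbda in [1e-6, 1e-4, 1e-2, 1.0, 100.0]}
best = min(errors, key=errors.get)
```

Heavily oversmoothed fits predict the held-out points poorly, so cross-validation steers the control parameter away from the extremes.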

    An Examination of Some Significant Approaches to Statistical Deconvolution

    We examine statistical approaches to two significant areas of deconvolution: Blind Deconvolution (BD) and Robust Deconvolution (RD) for stochastic stationary signals. For BD, we review some major classical and new methods in a unified framework of non-Gaussian signals. The first class of algorithms we examine is the family of Minimum Entropy Deconvolution (MED) algorithms. We discuss the similarities between them despite their differences in origin and motivation. We give new theoretical results concerning the behaviour and generality of these algorithms and give evidence of scenarios where they may fail; in some cases, we present new modifications to the algorithms to overcome these shortfalls. Following our discussion of the MED algorithms, we look at a recently proposed BD algorithm based on the correntropy function, a function defined as a combination of the autocorrelation and the entropy functions, and examine its BD performance compared with MED algorithms. We find that BD carried out via correntropy matching cannot be straightforwardly interpreted as simultaneous moment matching, due to the breakdown of the correntropy expansion in terms of moments. Other issues, such as maximum/minimum-phase ambiguity and computational complexity, suggest that careful attention is required before establishing the correntropy algorithm as a superior alternative to existing BD techniques. For the problem of RD, we give a categorisation of the different kinds of uncertainty encountered in estimation and discuss the techniques required to solve each individual case. Primarily, we tackle the overlooked cases of robustification of deconvolution filters based on an estimated blurring response or an estimated signal spectrum. We do this by utilising existing methods derived from criteria such as minimax MSE with imposed uncertainty bands and penalised MSE. In particular, we revisit the Modified Wiener Filter (MWF), which offers simplicity and flexibility in giving improved robust deconvolution relative to the standard plug-in Wiener Filter (WF).
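The correntropy function at the heart of the reviewed BD algorithm blends second-order (correlation-like) and higher-order (entropy-like) information through a kernel. A minimal Gaussian-kernel version, evaluated on hypothetical signals:

```python
import numpy as np

def correntropy(x, lag, sigma=1.0):
    # Gaussian-kernel correntropy V(lag) = E[ k_sigma(x_t - x_{t-lag}) ].
    if lag == 0:
        return 1.0  # kernel of a zero difference
    d = x[lag:] - x[:-lag]
    return float(np.mean(np.exp(-d ** 2 / (2 * sigma ** 2))))

rng = np.random.default_rng(4)
white = rng.standard_normal(4000)   # temporally unstructured signal
smooth = np.cumsum(white) / 50.0    # hypothetical slowly varying signal
```

Unlike the autocorrelation, V depends on all even moments of the lagged difference, which is why the moment-expansion caveat in the abstract matters.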

    Tomographic imaging of combustion zones using tunable diode laser absorption spectroscopy (TDLAS)

    This work concentrates on enabling the use of a specific variant of tunable diode laser absorption spectroscopy (TDLAS) for tomographically reconstructing spatially varying temperatures and gas concentrations with as few reconstruction artifacts as possible. The specific variant of TDLAS used here is wavelength modulation spectroscopy with second-harmonic detection (WMS-2f), which uses the wavelength-dependent absorbance of two different spectroscopic transitions to determine temperature and concentration values. Traditionally, WMS-2f has been applied to domains where temperature, although unknown, was spatially largely invariant, while concentration was constant and known to a reasonable approximation (±10%). In the case of unknown temperatures and concentrations with large spatial variations, such techniques break down, since TDLAS is a line-of-sight (LOS) technique. To alleviate this problem, computed tomographic methods were developed and used to convert LOS projection data measured with WMS-2f TDLAS into spatially resolved local measurements. These locally reconstructed measurements were used to determine the temperature and concentration at points inside the flame, following a new temperature- and concentration-determination strategy for WMS-2f that was also developed for this work. Specifically, the vibrational transitions (in the 1.39-1.44 micron range) of water vapor (H2O) in an axisymmetric laminar flame issuing from a standard flat-flame burner (McKenna burner) were probed using telecom-grade diode lasers. The temperature and concentration of water vapor inside this flame were reconstructed using an axisymmetric Abel deconvolution method. The two sources of error in Abel deconvolution, regularization errors and perturbation errors, were analyzed, and strategies for their mitigation are discussed. Numerical studies also revealed the existence of a third kind of error: the tomographic TDLAS artifact. For 2D tomography, studies were conducted to determine the required number of views, the number of rays per view, the orientation of the views, and the best-performing algorithm. Finally, data from 1D tomography were extrapolated to 2D, and the reconstructions were benchmarked against the results of 1D tomography.
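The Abel deconvolution step, which maps axisymmetric LOS projections back to radial profiles, can be sketched by onion peeling: concentric shells give an upper-triangular chord-length system solved from the outermost ray inward. This is one standard discretization, assumed here for illustration rather than taken from this work:

```python
import numpy as np

def onion_peel_matrix(n, dr=1.0):
    # Chord length of ray i (offset y_i) through shell j: projection = A @ profile.
    r = np.arange(n + 1) * dr        # shell boundaries
    y = (np.arange(n) + 0.5) * dr    # ray offsets at mid-shell
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):        # ray i only crosses shells j >= i
            outer = np.sqrt(max(r[j + 1] ** 2 - y[i] ** 2, 0.0))
            inner = np.sqrt(max(r[j] ** 2 - y[i] ** 2, 0.0))
            A[i, j] = 2.0 * (outer - inner)
    return A

n = 40
profile = np.exp(-(((np.arange(n) + 0.5) / 10.0) ** 2))  # radial "temperature" proxy
A = onion_peel_matrix(n)
projection = A @ profile             # noise-free LOS data
recovered = np.linalg.solve(A, projection)
```

With noisy projections this triangular solve amplifies measurement error, the perturbation error discussed above, which is what motivates regularized Abel inversion.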