378 research outputs found

    Mask-guided Spatial-Temporal Graph Neural Network for Multi-frequency Electrical Impedance Tomography

    Get PDF

    Image Recovery Using Partitioned-Separable Paraboloidal Surrogate Coordinate Ascent Algorithms

    Full text link
    Iterative coordinate ascent algorithms have been shown to be useful for image recovery, but are poorly suited to parallel computing due to their sequential nature. This paper presents a new fast-converging, parallelizable algorithm for image recovery that can be applied to a very broad class of objective functions. The method is based on paraboloidal surrogate functions and a concavity technique. The paraboloidal surrogates simplify the optimization problem. The idea of the concavity technique is to partition pixels into subsets that can be updated in parallel to reduce the computation time. For fast convergence, pixels within each subset are updated sequentially using a coordinate ascent algorithm. The proposed algorithm is guaranteed to monotonically increase the objective function and intrinsically accommodates nonnegativity constraints. A global convergence proof is summarized. Simulation results show that the proposed algorithm requires less elapsed time for convergence than iterative coordinate ascent algorithms. With four parallel processors, the proposed algorithm yields a speedup factor of 3.77 relative to single-processor coordinate ascent algorithms for a three-dimensional (3-D) confocal image restoration problem.
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/86024/1/Fessler72.pd
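    A minimal sketch may help make the partitioned coordinate-ascent structure concrete. The Python fragment below is an illustrative assumption, not the authors' implementation: it maximizes a simple quadratic objective Phi(x) = -0.5*||y - A x||^2 subject to x >= 0, visiting pixel subsets in turn and updating each pixel with its 1-D quadratic (paraboloidal) surrogate. The paper's actual contribution, the surrogate construction that lets the subsets be updated simultaneously on separate processors while preserving the monotone increase, is not reproduced here.

```python
import numpy as np

def partitioned_coordinate_ascent(A, y, n_subsets=4, n_iters=20):
    """Illustrative grouped coordinate ascent for Phi(x) = -0.5*||y - A x||^2, x >= 0.

    Pixels are partitioned into `n_subsets` groups (in the paper each group
    would be handled by one processor); within a group, pixels are updated
    sequentially.  For this quadratic objective the 1-D surrogate curvature
    A_j^T A_j is exact; a non-quadratic likelihood would use an upper bound.
    """
    n = A.shape[1]
    x = np.zeros(n)
    r = y - A @ x                          # running residual y - A x
    curv = np.sum(A * A, axis=0)           # per-pixel surrogate curvatures
    subsets = np.array_split(np.arange(n), n_subsets)
    for _ in range(n_iters):
        for group in subsets:              # one group per (hypothetical) processor
            for j in group:                # sequential coordinate ascent within a group
                if curv[j] == 0.0:
                    continue
                grad_j = A[:, j] @ r                       # dPhi/dx_j
                x_new = max(0.0, x[j] + grad_j / curv[j])  # surrogate maximizer, clipped
                r -= A[:, j] * (x_new - x[j])              # keep residual consistent
                x[j] = x_new
    return x

# toy usage on a small synthetic problem
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 30))
y = A @ np.abs(rng.standard_normal(30)) + 0.01 * rng.standard_normal(60)
x_hat = partitioned_coordinate_ascent(A, y)
```

    Each pixel update maximizes its 1-D quadratic exactly under the nonnegativity clip, so no update can decrease Phi, mirroring the monotonicity property stated in the abstract.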

    A New Statistical Reconstruction Method for the Computed Tomography Using an X-Ray Tube with Flying Focal Spot

    Get PDF
    This paper presents a new image reconstruction method for spiral cone-beam tomography scanners in which an X-ray tube with a flying focal spot is used. The method is based on principles related to the statistical model-based iterative reconstruction (MBIR) methodology. The proposed approach is a continuous-to-continuous data model approach, and the forward model is formulated as a shift-invariant system. This makes it possible to avoid a nutating reconstruction-based approach, e.g. the advanced single-slice rebinning (ASSR) methodology that is usually applied in computed tomography (CT) scanners whose X-ray tubes have a flying focal spot. In turn, the proposed approach significantly accelerates the reconstruction processing and, in general, greatly simplifies the entire reconstruction procedure. Additionally, it improves the quality of the reconstructed images in comparison with traditional algorithms, as confirmed by extensive simulations. It is worth noting that the main purpose of introducing statistical reconstruction methods to medical CT scanners is to reduce the impact of measurement noise on the quality of tomographic images and, consequently, to reduce the dose of X-ray radiation absorbed by a patient. A series of computer simulations followed by doctors' assessments has been performed, which indicates how large a reduction of the absorbed dose can be achieved using the reconstruction approach presented here.
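    As a hedged illustration of what a shift-invariant, continuous-to-continuous forward model looks like (the abstract does not give the authors' kernel or geometry, so the symbols below are generic assumptions), the image entering the iterative statistical fit can be written as a 2-D convolution of the attenuation distribution with a fixed reconstruction kernel:

```latex
% Generic shift-invariant continuous-to-continuous model (illustrative only):
% \mu is the attenuation distribution, h a kernel fixed by the scan geometry,
% and \tilde{\mu} the image that the iterative statistical fit operates on.
\tilde{\mu}(x, y) \;=\; \iint \mu(\bar{x}, \bar{y})\,
    h\bigl(x - \bar{x},\, y - \bar{y}\bigr)\, d\bar{x}\, d\bar{y}
```

    Because the kernel depends only on the shift (x - x̄, y - ȳ), the same relation holds over the whole field of view; this is the property the abstract credits with avoiding a nutating, ASSR-style rebinning step.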

    Electrical resistance tomography imaging of concrete

    Get PDF

    Acceleration Methods for MRI

    Full text link
    Acceleration methods are a critical area of research for MRI. Two of the most important acceleration techniques involve parallel imaging and compressed sensing. These advanced signal processing techniques have the potential to drastically reduce scan times and provide radiologists with new information for diagnosing disease. However, many of these new techniques require solving difficult optimization problems, which motivates the development of more advanced algorithms to solve them. In addition, acceleration methods have not reached maturity in some applications, which motivates the development of new models tailored to these applications. This dissertation makes advances in three different areas of acceleration. The first is the development of a new algorithm (called the B1-based Adaptive Restart Iterative Soft Thresholding Algorithm, or BARISTA) that solves a parallel MRI optimization problem with compressed sensing assumptions. BARISTA is shown to be 2-3 times faster and more robust to parameter selection than current state-of-the-art variable splitting methods. The second contribution is the extension of BARISTA ideas to non-Cartesian trajectories, which also leads to a 2-3 times acceleration over previous methods. The third contribution is the development of a new model for functional MRI that enables a factor of 3-4 acceleration in the effective temporal resolution of functional MRI scans. Several variations of the new model are proposed, with an ROC curve analysis showing that a combined low-rank/sparsity model gives the best performance in identifying the resting-state motor network.
    PhD, Biomedical Engineering
    University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/120841/1/mmuckley_1.pd
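    The abstract names the ingredients of BARISTA (iterative soft thresholding for a sparsity-regularized reconstruction, momentum, adaptive restart) without giving details, so the following Python fragment is only a generic FISTA-with-restart sketch under assumed names: the plain matrix A stands in for the dissertation's coil-combined MRI encoding operator, and the B1-based diagonal majorizer that gives BARISTA its name is replaced by a scalar Lipschitz constant.

```python
import numpy as np

def soft_threshold(z, t):
    """Elementwise soft-thresholding (the proximal map of t*||.||_1)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def fista_with_restart(A, y, lam, n_iters=200):
    """Minimize 0.5*||A x - y||^2 + lam*||x||_1 with FISTA momentum and a
    gradient-based adaptive restart; a sketch, not the BARISTA majorizer."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the data-term gradient
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iters):
        x_prev = x
        grad = A.T @ (A @ z - y)
        x = soft_threshold(z - grad / L, lam / L)
        if grad @ (x - x_prev) > 0:      # restart momentum if it points "uphill"
            z, t = x.copy(), 1.0
        else:
            t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            z = x + ((t - 1.0) / t_next) * (x - x_prev)
            t = t_next
    return x
```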

    Application of constrained optimisation techniques in electrical impedance tomography

    Get PDF
    A constrained optimisation technique is described for the reconstruction of temporal resistivity images. The approach solves the inverse problem by optimising a cost function under constraints, in the form of normalised boundary potentials. Mathematical models have been developed for two different data collection methods for the chosen criterion. Both of these models express the reconstructed image in terms of one-dimensional (1-D) Lagrange multiplier functions. The reconstruction problem becomes one of estimating these 1-D functions from the normalised boundary potentials. These models are based on a cost criterion that minimises the variance between the reconstructed resistivity distribution and the true resistivity distribution. The methods presented in this research extend the algorithms previously developed for X-ray systems. Computational efficiency is enhanced by exploiting the structure of the associated system matrices. The structure of the system matrices was preserved in the Electrical Impedance Tomography (EIT) implementations by applying a weighting, due to the non-linear current distribution, during the backprojection of the Lagrange multiplier functions. In order to obtain the best possible reconstruction, it is important to consider the effects of noise in the boundary data. This is achieved by using a fast algorithm which matches the statistics of the error in the approximate inverse of the associated system matrix with the statistics of the noise error in the boundary data. This yields the optimum solution with the available boundary data. Novel approaches have been developed to produce the Lagrange multiplier functions. Two alternative methods are given for the design of VLSI implementations of hardware accelerators to improve computational efficiency. These accelerators are designed to implement parallel geometries and are modelled using a verification description language to assess their performance capabilities.
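    A hedged notational sketch of the constrained formulation described above (the thesis' exact functionals are not reproduced in the abstract, so the symbols below are illustrative assumptions): the reconstruction minimises the variance between the reconstructed and true resistivity distributions subject to matching the normalised boundary potentials, with the constraint adjoined through a one-dimensional Lagrange multiplier function defined on the boundary,

```latex
% Illustrative Lagrangian for the constrained EIT reconstruction:
% \sigma             reconstructed resistivity distribution
% V_meas(s)          normalised measured boundary potentials (boundary coordinate s)
% V_model(s;\sigma)  potentials predicted by the forward model
% \lambda(s)         1-D Lagrange multiplier function on the boundary \partial\Omega
\mathcal{L}(\sigma, \lambda) \;=\;
    \mathbb{E}\!\bigl[\,\lVert \sigma - \sigma_{\mathrm{true}} \rVert^{2}\,\bigr]
  \;+\; \oint_{\partial\Omega} \lambda(s)\,
        \bigl(V_{\mathrm{model}}(s;\sigma) - V_{\mathrm{meas}}(s)\bigr)\, ds
```

    Estimating λ(s) from the boundary data and backprojecting it, with the weighting for the non-linear current distribution mentioned in the abstract, then yields the reconstructed image.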

    Advanced maximum entropy approaches for medical and microscopy imaging

    Get PDF
    The maximum entropy framework is a cornerstone of statistical inference, which is employed at a growing rate for constructing models capable of describing and predicting biological systems, particularly complex ones, from empirical datasets. In these high-yield applications, determining exact probability distribution functions with only minimal information about data characteristics and without utilizing human subjectivity is of particular interest. In this thesis, an automated procedure of this kind for univariate and bivariate data is employed to reach this objective by combining the maximum entropy method with an appropriate optimization method. The only necessary characteristics of the random variables are their continuity and the ability to approximate them as independent and identically distributed. In this work, we concisely present two numerical probabilistic algorithms and apply them to estimate the univariate and bivariate models of the available data. In the first case, a combination of the maximum entropy method, Newton's method, and the Bayesian maximum a posteriori approach leads to the estimation of the kinetic parameters together with arterial input functions (AIFs) in cases without any measurement of the AIF. The results show that the AIF can reliably be determined from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) data by the maximum entropy method. The kinetic parameters can then be obtained. Using the developed method, a good data fit and thus a more accurate prediction of the kinetic parameters are achieved, which, in turn, leads to a more reliable application of DCE-MRI. In the bivariate case, we consider colocalization as a quantitative analysis in fluorescence microscopy imaging. The method proposed in this case combines the Maximum Entropy Method (MEM) and a Gaussian copula, which we call the Maximum Entropy Copula (MEC). This novel method is capable of measuring the spatial and nonlinear correlation of signals to obtain the colocalization of markers in fluorescence microscopy images. Based on the results, MEC is able to identify co- and anti-colocalization even in high-background situations. The main point here is that determining a joint distribution from its marginals is an important inverse problem that has a unique solution when a proper copula is chosen, according to Sklar's theorem. The developed combination of a Gaussian copula with univariate maximum entropy marginal distributions enables the determination of a unique bivariate distribution. A colocalization parameter can therefore be obtained via Kendall's tau, which is commonly employed in the copula literature. In general, the importance of applying these algorithms to biological data lies in their higher accuracy, faster computation, and lower cost compared with other approaches. The broad applicability and success of these algorithms in various contexts depend on their conceptual simplicity and mathematical validity. Afterward, a probability density is estimated by iteratively refining trial cumulative distribution functions, where better estimates are identified using a scoring function that recognizes irregular fluctuations. This criterion resists under- and over-fitting the data and serves as an alternative to the Bayesian criterion. Uncertainty induced by statistical fluctuations in random samples is reflected by multiple estimates of the probability density.
In addition, scaled quantile residual plots are introduced as a useful diagnostic for visualizing the quality of the estimated probability densities. The Kullback-Leibler divergence is an appropriate measure for indicating the convergence of the estimates of the probability density function (PDF) to the actual PDF as the sample size increases. The findings indicate the general applicability of this method to high-yield statistical inference.
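    The copula step described above can be sketched in a few lines of Python. This is a simplified, hypothetical illustration, not the thesis' MEC implementation: the maximum-entropy marginal densities are replaced here by empirical ranks, the Gaussian-copula correlation is fitted from normal scores, and Kendall's tau is recovered from the Gaussian-copula relation tau = (2/pi)*arcsin(rho); all function and variable names are assumptions.

```python
import numpy as np
from scipy import stats

def gaussian_copula_colocalization(ch1, ch2):
    """Illustrative colocalization measure for two image channels via a
    Gaussian copula (empirical-rank marginals stand in for the thesis'
    maximum-entropy marginals)."""
    x = ch1.ravel().astype(float)
    y = ch2.ravel().astype(float)
    # pseudo-observations: approximately uniform marginals
    u = stats.rankdata(x) / (x.size + 1)
    v = stats.rankdata(y) / (y.size + 1)
    # normal scores and Gaussian-copula correlation
    zu, zv = stats.norm.ppf(u), stats.norm.ppf(v)
    rho = np.corrcoef(zu, zv)[0, 1]
    # Kendall's tau implied by the Gaussian copula
    tau_copula = (2.0 / np.pi) * np.arcsin(rho)
    # empirical Kendall's tau for comparison
    tau_emp, _ = stats.kendalltau(x, y)
    return rho, tau_copula, tau_emp

# toy usage: two synthetic, positively "colocalized" channels
rng = np.random.default_rng(1)
base = rng.gamma(2.0, size=(64, 64))
ch1 = base + 0.3 * rng.standard_normal((64, 64))
ch2 = base + 0.3 * rng.standard_normal((64, 64))
print(gaussian_copula_colocalization(ch1, ch2))
```

    A tau near +1 indicates colocalization, near -1 anti-colocalization, and near 0 independence, mirroring the co-/anti-colocalization behaviour described in the abstract.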
