Image formation in synthetic aperture radio telescopes
Next-generation radio telescopes will be much larger and more sensitive, will have much wider observation bandwidths, and will be capable of pointing multiple beams simultaneously. Obtaining the sensitivity, resolution and dynamic range supported by the receivers requires the development of new signal processing techniques for array and atmospheric calibration, as well as new imaging techniques that are both more accurate and more computationally efficient, since data volumes will be much larger. This paper provides a tutorial overview of existing image formation techniques and outlines some of the future directions needed for information extraction from future radio telescopes. We describe the imaging process from the measurement equation to deconvolution, both as a Fourier inversion problem and as an array processing estimation problem. The latter formulation enables the development of more advanced techniques based on state-of-the-art array processing. We demonstrate the techniques on simulated and measured radio telescope data.
Comment: 12 pages
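The Fourier-inversion view mentioned in the abstract can be sketched in a few lines: the measured visibilities are samples of the sky's Fourier transform, so gridding them onto a regular uv-plane and applying an inverse FFT yields the "dirty image". This is only a minimal sketch; real pipelines add weighting, convolutional gridding and w-term corrections, and the function name and cell-size parameter here are illustrative assumptions.

```python
import numpy as np

def dirty_image(u, v, vis, npix=128, du=1.0):
    """Grid visibilities onto a uv-plane and inverse-FFT to a dirty image.

    u, v : baseline coordinates in wavelengths
    vis  : complex visibility samples
    du   : uv-cell size in wavelengths (illustrative; sets the field of view)
    """
    grid = np.zeros((npix, npix), dtype=complex)
    iu = np.round(u / du).astype(int) % npix        # nearest-cell gridding
    iv = np.round(v / du).astype(int) % npix
    np.add.at(grid, (iv, iu), vis)                  # accumulate coincident samples
    # add the Hermitian-conjugate samples so the image comes out real
    np.add.at(grid, ((-iv) % npix, (-iu) % npix), np.conj(vis))
    return np.fft.fftshift(np.fft.ifft2(grid)).real
```

A unit point source at the phase centre has unit visibility on every baseline, so its dirty image peaks at the image centre; deconvolution (e.g. CLEAN) then removes the sidelobe pattern imposed by the incomplete uv-coverage.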
Multifrequency Aperture-Synthesizing Microwave Radiometer System (MFASMR). Volume 1
Background material and a systems analysis of a multifrequency aperture-synthesizing microwave radiometer system are presented. The system was found not to exhibit high performance, because much of the available thermal power is not used in constructing the image and because the image that can be formed has a resolution of only ten lines. An analysis of image reconstruction is given, and the system is compared with conventional aperture synthesis systems.
Blind deconvolution of medical ultrasound images: parametric inverse filtering approach
©2007 IEEE. DOI: 10.1109/TIP.2007.910179
The problem of reconstruction of ultrasound images by means of blind deconvolution has long been recognized as one of the central problems in medical ultrasound imaging. In this paper, this problem is addressed by proposing a blind deconvolution method that is innovative in several ways. In particular, the method is based on parametric inverse filtering, whose parameters are optimized using two-stage processing. At the first stage, partial information on the point spread function is recovered. Subsequently, this information is used to explicitly constrain the spectral shape of the inverse filter. From this perspective, the proposed methodology can be viewed as a "hybridization" of two standard strategies in blind deconvolution, which are based on either concurrent or successive estimation of the point spread function and the image of interest. Moreover, evidence is provided that the "hybrid" approach can outperform the standard ones in a number of important practical cases. Additionally, the present study introduces a different approach to parameterizing the inverse filter. Specifically, we propose to model the inverse transfer function as a member of a principal shift-invariant subspace.
It is shown that such a parameterization results in considerably more stable reconstructions compared to standard parameterization methods. Finally, it is shown how the inverse filters designed in this way can be used to deconvolve the images in a non-blind manner so as to further improve their quality. The usefulness and practicability of all the introduced innovations are proven in a series of both in silico and in vivo experiments. Finally, it is shown that the proposed deconvolution algorithms are capable of improving the resolution of ultrasound images by factors of 2.24 or 6.52 (as judged by the autocorrelation criterion), depending on the type of regularization method used.
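The non-blind building block underlying inverse filtering can be illustrated with a plain regularized inverse filter. This is only that ingredient, not the paper's two-stage, spectrally constrained estimator; the function name and the scalar regularizer are illustrative assumptions.

```python
import numpy as np

def regularized_inverse_filter(image, psf, eps=1e-4):
    """Deconvolve with the regularized inverse filter G = H* / (|H|^2 + eps).

    image : blurred 2-D array
    psf   : point spread function, same shape, centred at index [0, 0]
    eps   : regularization constant bounding the filter gain where H ~ 0
    """
    H = np.fft.fft2(psf)
    G = np.conj(H) / (np.abs(H) ** 2 + eps)     # stable pseudo-inverse of H
    return np.fft.ifft2(np.fft.fft2(image) * G).real
```

The paper's approach additionally constrains the spectral shape of the inverse filter using PSF information recovered in a first stage; the constant `eps` here is the crudest such constraint.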
Digital Signal Processing
Contains an introduction and reports on seventeen research projects. Supported by: U.S. Navy - Office of Naval Research (Contract N00014-77-C-0266); Amoco Foundation Fellowship; U.S. Navy - Office of Naval Research (Contract N00014-81-K-0742); National Science Foundation (Grant ECS80-07102); U.S. Army Research Office (Contract DAAG29-81-K-0073); Hughes Aircraft Company Fellowship; American Edwards Labs. Grant; Whitaker Health Sciences Fund; Pfeiffer Foundation Grant; Schlumberger-Doll Research Center Fellowship; Government of Pakistan Scholarship; U.S. Navy - Office of Naval Research (Contract N00014-77-C-0196); National Science Foundation (Grant ECS79-15226); Hertz Foundation Fellowship.
Bayesian astrostatistics: a backward look to the future
This perspective chapter briefly surveys: (1) past growth in the use of
Bayesian methods in astrophysics; (2) current misconceptions about both
frequentist and Bayesian statistical inference that hinder wider adoption of
Bayesian methods by astronomers; and (3) multilevel (hierarchical) Bayesian
modeling as a major future direction for research in Bayesian astrostatistics,
exemplified in part by presentations at the first ISI invited session on
astrostatistics, commemorated in this volume. It closes with an intentionally
provocative recommendation for astronomical survey data reporting, motivated by
the multilevel Bayesian perspective on modeling cosmic populations: that
astronomers cease producing catalogs of estimated fluxes and other source
properties from surveys. Instead, summaries of likelihood functions (or
marginal likelihood functions) for source properties should be reported (not
posterior probability density functions), including nontrivial summaries (not
simply upper limits) for candidate objects that do not pass traditional
detection thresholds.
Comment: 27 pp, 4 figures. A lightly revised version of a chapter in "Astrostatistical Challenges for the New Astronomy" (Joseph M. Hilbe, ed., Springer, New York, forthcoming in 2012), the inaugural volume for the Springer Series in Astrostatistics. Version 2 has minor clarifications and an additional reference.
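The reporting recommendation can be made concrete with a toy example: for a candidate source in a photon-counting survey with a known expected background, one can tabulate the log-likelihood of the source flux on a grid and report that curve, which stays informative even when the maximum sits at zero flux (a non-detection). The function names and the known-background assumption are illustrative.

```python
import numpy as np

def flux_log_likelihood(counts, background, fluxes):
    """Poisson log-likelihood ln L(F) for source flux F (expected source
    counts), given observed counts and a known expected background > 0."""
    lam = np.asarray(fluxes) + background
    # ln Poisson(counts | lam), dropping ln(counts!) -- constant in F
    return counts * np.log(lam) - lam

def likelihood_summary(counts, background, fmax=20.0, n=2001):
    """Report the likelihood curve itself: a flux grid, the normalized
    log-likelihood, and the maximizing flux (zero for non-detections)."""
    F = np.linspace(0.0, fmax, n)
    ll = flux_log_likelihood(counts, background, F)
    k = int(np.argmax(ll))
    return F, ll - ll[k], F[k]
```

For counts = 5 and background = 3 the curve peaks at F = 2; for counts = 1 it peaks at F = 0, yet the reported curve still quantifies how strongly larger fluxes are disfavoured, which a bare upper limit would discard.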
Image Restoration
This book represents a sample of recent contributions of researchers all around the world in the field of image restoration. The book consists of 15 chapters organized in three main sections (Theory, Applications, Interdisciplinarity). Topics cover several different aspects of the theory of image restoration, but this book is also an occasion to highlight some new topics of research related to the emergence of original imaging devices. From these arise some truly challenging problems of image reconstruction/restoration that open the way to new fundamental scientific questions closely related to the world we interact with.
Constrained least-squares digital image restoration
The design of a digital image restoration filter must address four concerns: the completeness of the underlying imaging system model, the validity of the restoration metric used to derive the filter, the computational efficiency of the algorithm for computing the filter values, and the ability to apply the filter in the spatial domain. Consistent with these four concerns, this dissertation presents a constrained least-squares (CLS) restoration filter for digital image restoration. The CLS restoration filter is based on a comprehensive, continuous-input/discrete-processing/continuous-output (c/d/c) imaging system model that accounts for acquisition blur, spatial sampling, additive noise and imperfect image reconstruction. The c/d/c model-based CLS restoration filter can be applied rigorously and is easier to compute than the corresponding c/d/c model-based Wiener restoration filter. The CLS restoration filter can be efficiently implemented in the spatial domain as a small convolution kernel. Simulated restorations are used to illustrate the CLS filter's performance for a range of imaging conditions. Restoration studies based, in part, on an actual Forward Looking Infrared (FLIR) imaging system show that the CLS restoration filter can be used for effective range reduction. The CLS restoration filter is also successfully tested on blurred and noisy radiometric images of the earth's outgoing radiation field from a satellite-borne scanning radiometer used by the National Aeronautics and Space Administration (NASA) for atmospheric research.
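For reference, the textbook frequency-domain CLS filter with a Laplacian smoothness constraint can be sketched as below. This is the standard formulation, not the dissertation's c/d/c model-based derivation, and the parameter value is illustrative.

```python
import numpy as np

def cls_restore(blurred, psf, gamma=1e-4):
    """Textbook frequency-domain CLS filter:
        Fhat(u,v) = H*(u,v) / (|H(u,v)|^2 + gamma * |P(u,v)|^2),
    where P is the discrete Laplacian. The smoothness term gamma*|P|^2
    suppresses noise amplification where the blur response H is small.
    """
    rows, cols = blurred.shape
    lap = np.zeros((rows, cols))
    lap[0, 0] = 4.0                                         # discrete Laplacian
    lap[0, 1] = lap[1, 0] = lap[0, -1] = lap[-1, 0] = -1.0  # centred at [0, 0]
    H = np.fft.fft2(psf)                                    # psf centred at [0, 0]
    P = np.fft.fft2(lap)
    G = np.conj(H) / (np.abs(H) ** 2 + gamma * np.abs(P) ** 2)
    return np.fft.ifft2(np.fft.fft2(blurred) * G).real
```

Because |P| vanishes at DC while |H| is largest there, the filter passes low frequencies essentially untouched and rolls off exactly where deblurring would amplify noise, which is the constraint the least-squares derivation encodes.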
Advanced maximum entropy approaches for medical and microscopy imaging
The maximum entropy framework is a cornerstone of statistical inference, employed at a growing rate for constructing models capable of describing and predicting biological systems, particularly complex ones, from empirical datasets. In these high-yield applications, determining exact probability distribution functions with only minimal information about data characteristics, and without introducing human subjectivity, is of particular interest. In this thesis, an automated procedure of this kind for univariate and bivariate data is employed to reach this objective by combining the maximum entropy method with an appropriate optimization method. The only necessary characteristics of the random variables are that they be continuous and approximately independent and identically distributed. In this work, we concisely present two numerical probabilistic algorithms and apply them to estimate univariate and bivariate models of the available data.
In the first case, a combination of the maximum entropy method, Newton's method, and the Bayesian maximum a posteriori approach leads to the estimation of kinetic parameters together with arterial input functions (AIFs) in cases without any measurement of the AIF. The results show that the AIF can reliably be determined from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) data by the maximum entropy method; the kinetic parameters can then be obtained. The developed method achieves a good data fit and thus a more accurate prediction of the kinetic parameters, which in turn leads to a more reliable application of DCE-MRI.
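A generic sketch of the MEM-plus-Newton ingredient (not the DCE-MRI pipeline itself): given a few target moments, Newton iterations on the dual problem recover the exponential-family density of maximum entropy. The grid, the choice of power moments, and the iteration count are illustrative assumptions.

```python
import numpy as np

def maxent_density(x, moments, iters=60):
    """Maximum-entropy density on the grid x whose power moments
    E[x^k], k = 1..K, match `moments`, found by Newton iterations
    on the dual (Lagrange-multiplier) problem."""
    K = len(moments)
    phi = np.vstack([x ** (k + 1) for k in range(K)])   # feature functions
    lam = np.zeros(K)                                   # Lagrange multipliers
    dx = x[1] - x[0]
    for _ in range(iters):
        e = lam @ phi
        p = np.exp(e - e.max())
        p /= p.sum() * dx                               # normalised density
        m = phi @ p * dx                                # current moments
        g = m - np.asarray(moments)                     # dual gradient
        cov = (phi * p) @ phi.T * dx - np.outer(m, m)   # dual Hessian
        lam -= np.linalg.solve(cov + 1e-12 * np.eye(K), g)
    return p
```

With mean and variance constraints the result is the Gaussian, which is the classic sanity check for maximum-entropy estimators.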
In the bivariate case, we consider colocalization as a quantitative analysis in fluorescence microscopy imaging. The method proposed here combines the Maximum Entropy Method (MEM) with a Gaussian copula, which we call the Maximum Entropy Copula (MEC). This novel method can measure the spatial and nonlinear correlation of signals to obtain the colocalization of markers in fluorescence microscopy images. Based on the results, MEC is able to identify co- and anti-colocalization even in high-background situations. The main point here is that determining a joint distribution from its marginals is an important inverse problem, which has a unique solution once a proper copula is chosen, according to Sklar's theorem. The developed combination of a Gaussian copula with univariate maximum entropy marginal distributions enables the determination of a unique bivariate distribution. A colocalization parameter can therefore be obtained via Kendall's τ, which is commonly employed in the copula literature.
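A minimal sketch of the Gaussian-copula step, assuming colocalization is summarized by Kendall's τ of the fitted copula; the maximum-entropy marginal estimation is omitted, with empirical ranks standing in for it, and the function name is illustrative.

```python
import numpy as np
from statistics import NormalDist

def gaussian_copula_tau(x, y):
    """Kendall's tau of a Gaussian copula fitted to two signals.

    The signals are mapped to normal scores through their empirical
    ranks, so the estimate depends only on their joint ordering (the
    copula) and is invariant to the marginal distributions.
    """
    n = len(x)
    inv = NormalDist().inv_cdf
    u = (np.argsort(np.argsort(x)) + 0.5) / n           # ranks -> (0, 1)
    v = (np.argsort(np.argsort(y)) + 0.5) / n
    zx = np.array([inv(t) for t in u])                  # normal scores
    zy = np.array([inv(t) for t in v])
    rho = np.corrcoef(zx, zy)[0, 1]                     # copula parameter
    return (2.0 / np.pi) * np.arcsin(rho)               # tau of a Gaussian copula
```

Because only ranks enter, any strictly increasing distortion of either channel (detector gain, log-compression) leaves the estimate unchanged, which is exactly the copula-level invariance the method relies on.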
In general, the value of applying these algorithms to biological data lies in their higher accuracy, faster computation, and lower cost compared with alternative solutions. Their broad applicability and success in various contexts rest on their conceptual plainness and mathematical validity.
Afterward, a probability density is estimated by iteratively refining trial cumulative distribution functions, in which the more appropriate estimates are identified by a scoring function that recognizes irregular fluctuations. This criterion guards against under- and overfitting the data, as an alternative to employing a Bayesian criterion. Uncertainty induced by statistical fluctuations in random samples is reflected by multiple estimates of the probability density. In addition, scaled quantile residual plots are introduced as a useful diagnostic for visualizing the quality of the estimated probability densities. The Kullback-Leibler divergence is an appropriate measure of the convergence of the estimates to the actual probability density function (PDF) as the sample size grows. The findings indicate the general applicability of this method to high-yield statistical inference.
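The KL-based convergence check is straightforward to compute once estimated and reference densities are sampled on a common grid; a minimal sketch, with names illustrative:

```python
import numpy as np

def kl_divergence(p, q, dx):
    """Kullback-Leibler divergence D(p || q) of densities sampled on a
    uniform grid with spacing dx (assumes q > 0 wherever p > 0)."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])) * dx)
```

For two unit-variance Gaussians whose means differ by delta, D = delta^2 / 2, a handy closed form for checking that an estimated PDF is approaching the true one.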