
    Bayesian inference for inverse problems

    Traditionally, the MaxEnt workshops start with a tutorial day. This paper summarizes my talk at the 2001 workshop at Johns Hopkins University. The main aim of the talk is to show how Bayesian inference naturally gives us all the tools we need to solve real inverse problems: from simple inversion, where the forward model and all its input parameters are assumed to be known exactly, up to more realistic problems of myopic or blind inversion, where the forward model may be uncertain and the data noisy. Starting with an introduction to inverse problems through a few examples and an explanation of their ill-posed nature, I briefly present the main classical deterministic methods, such as data matching and classical regularization, to show their limitations. I then present the main classical probabilistic methods based on likelihood, information theory and maximum entropy, and the Bayesian inference framework for such problems. I show that the Bayesian framework not only generalizes all these methods, but also provides natural tools, for example, for inferring the uncertainty of the computed solutions, for estimating the hyperparameters, or for handling myopic or blind inversion problems. Finally, through a deconvolution example, I present a few state-of-the-art methods based on Bayesian inference, particularly designed for some of the mass spectrometry data processing problems. Comment: Presented at MaxEnt01. To appear in Bayesian Inference and Maximum Entropy Methods, B. Fry (Ed.), AIP Proceedings. 20 pages, 13 Postscript figures
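    The link between Bayesian inference and classical regularization mentioned above can be made concrete: for a linear forward model y = Hx + n with Gaussian noise and a Gaussian prior on x, the MAP estimate reduces to Tikhonov-regularized least squares. The sketch below illustrates this on a hypothetical 1-D deconvolution problem (the blur width, regularization strength and test signal are illustrative assumptions, not from the paper):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical forward model: a Gaussian blurring (convolution) matrix.
    n = 50
    i_idx, j_idx = np.mgrid[0:n, 0:n]
    H = np.exp(-0.5 * ((i_idx - j_idx) / 2.0) ** 2)
    H /= H.sum(axis=1, keepdims=True)

    # Sparse test signal: two spikes, observed through blur plus noise.
    x_true = np.zeros(n)
    x_true[15] = 1.0
    x_true[35] = 0.5
    y = H @ x_true + 0.01 * rng.standard_normal(n)

    # MAP estimate under Gaussian noise + Gaussian prior: lam acts as the
    # noise-to-prior variance ratio (Tikhonov regularization strength).
    lam = 1e-2
    x_map = np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y)
    ```

    In the Bayesian reading, lam is not a hand-tuned knob but a ratio of hyperparameters that can itself be estimated from the data, which is one of the advantages the abstract highlights.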

    Weak gravitational lensing with DEIMOS

    We introduce a novel method for weak-lensing measurements, which is based on a mathematically exact deconvolution of the moments of the apparent brightness distribution of galaxies from the telescope's PSF. No assumptions on the shape of the galaxy or the PSF are made. The (de)convolution equations are exact for unweighted moments only, while in practice a compact weight function needs to be applied to the noisy images to ensure that the moment measurement yields significant results. We employ a Gaussian weight function, whose centroid and ellipticity are iteratively adjusted to match the corresponding quantities of the source. The change of the moments caused by the application of the weight function can then be corrected by considering higher-order weighted moments of the same source. Because of the form of the deconvolution equations, even an incomplete weighting correction leads to excellent shear estimates if galaxies and PSF are measured with a weight function of identical size. We demonstrate the accuracy and capabilities of this new method in the context of weak gravitational lensing measurements with a set of specialized tests and show its competitive performance on the GREAT08 challenge data. A complete C++ implementation of the method can be requested from the authors. Comment: 7 pages, 3 figures, fixed typo in Eq. 1
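    The core measurement loop described above, a Gaussian weight whose centroid is iteratively re-matched to the source before second moments and an ellipticity are read off, can be sketched as follows. This is a minimal illustration, not the authors' C++ implementation: it iterates only the centroid (not the weight ellipticity) and omits the higher-order deweighting and PSF deconvolution steps:

    ```python
    import numpy as np

    def weighted_moments(img, sigma_w, n_iter=5):
        """Gaussian-weighted centroid, second moments, and complex ellipticity."""
        ny, nx = img.shape
        yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
        xc, yc = nx / 2.0, ny / 2.0                  # initial centroid guess
        for _ in range(n_iter):
            # Re-centre the circular Gaussian weight on the weighted centroid.
            w = np.exp(-0.5 * ((xx - xc) ** 2 + (yy - yc) ** 2) / sigma_w ** 2)
            f = img * w
            flux = f.sum()
            xc, yc = (f * xx).sum() / flux, (f * yy).sum() / flux
        # Weighted second moments about the converged centroid.
        w = np.exp(-0.5 * ((xx - xc) ** 2 + (yy - yc) ** 2) / sigma_w ** 2)
        f = img * w
        flux = f.sum()
        q11 = (f * (xx - xc) ** 2).sum() / flux
        q22 = (f * (yy - yc) ** 2).sum() / flux
        q12 = (f * (xx - xc) * (yy - yc)).sum() / flux
        chi = (q11 - q22 + 2j * q12) / (q11 + q22)   # complex ellipticity
        return (xc, yc), (q11, q12, q22), chi
    ```

    For a circular source the returned ellipticity should vanish; the paper's point is that measuring galaxy and PSF with weights of identical size keeps the subsequent moment deconvolution accurate even when the weighting correction is truncated.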

    Calibration Challenges for Future Radio Telescopes

    Instruments for radio astronomical observations have come a long way. While the first telescopes were based on very large dishes and 2-antenna interferometers, current instruments consist of dozens of steerable dishes, whereas future instruments will be even larger distributed sensor arrays with a hierarchy of phased array elements. For such arrays to provide meaningful output (images), accurate calibration is of critical importance. Calibration must solve for the unknown antenna gains and phases, as well as the unknown atmospheric and ionospheric disturbances. Future telescopes will have a large number of elements and a large field of view. In this case the parameters are strongly direction dependent, resulting in a large number of unknown parameters even if appropriately constrained physical or phenomenological descriptions are used. This makes calibration a daunting parameter estimation task, which is reviewed from a signal processing perspective in this article. Comment: 12 pages, 7 figures, 20 subfigures. The title quoted in the metadata is the title after release/final editing
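    The antenna-gain part of the estimation problem above is often written as V_ij ≈ g_i · conj(g_j) · M_ij, where V are measured visibilities and M is a known sky-model matrix, and solved by alternating per-antenna linear least-squares updates (StEFCal-style). The sketch below is a hedged illustration of that idea under idealized, noiseless assumptions, not the article's own algorithm:

    ```python
    import numpy as np

    def solve_gains(V, M, n_iter=300):
        """Estimate complex antenna gains g such that V ~ (g g^H) * M (elementwise)."""
        n = V.shape[0]
        g = np.ones(n, dtype=complex)
        for _ in range(n_iter):
            g_new = np.empty_like(g)
            for i in range(n):
                # With the other gains fixed, row i is linear in g_i:
                # V[i, j] = g_i * (conj(g_j) * M[i, j])  =>  least squares for g_i.
                z = np.conj(g) * M[i, :]
                g_new[i] = np.vdot(z, V[i, :]) / np.vdot(z, z).real
            g = 0.5 * (g + g_new)   # averaging step aids convergence
        return g
    ```

    The solution is only defined up to a global phase, one of the degeneracies that makes calibration of large, direction-dependent arrays the "daunting parameter estimation task" the article reviews.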

    Exploiting flow dynamics for super-resolution in contrast-enhanced ultrasound

    Ultrasound localization microscopy offers new radiation-free diagnostic tools for vascular imaging deep within the tissue. Sequential localization of echoes returned from inert microbubbles flowing at low concentration within the bloodstream reveals the vasculature with capillary resolution. Despite its high spatial resolution, low microbubble concentrations dictate the acquisition of tens of thousands of images, over the course of several seconds to tens of seconds, to produce a single super-resolved image, since each echo must be well separated from adjacent microbubbles. Such long acquisition times and stringent constraints on microbubble concentration are undesirable in many clinical scenarios. To address these restrictions, sparsity-based approaches have recently been developed. These methods reduce the total acquisition time dramatically, while maintaining good spatial resolution in settings with considerable microbubble overlap. Here, we further improve sparsity-based super-resolution ultrasound imaging by exploiting the inherent flow of microbubbles and utilizing their motion kinematics. In doing so, we also provide quantitative measurements of microbubble velocities. Our method relies on simultaneous tracking and super-localization of individual microbubbles in a frame-by-frame manner and, as such, may be suitable for real-time implementation. We demonstrate the effectiveness of the proposed approach on both simulations and in-vivo contrast-enhanced human prostate scans, acquired with a clinically approved scanner. Comment: 11 pages, 9 figures
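    The tracking ingredient, linking localized bubbles across frames to recover velocities, can be sketched in its simplest form as nearest-neighbour matching between consecutive frames. This is a toy illustration under assumed inputs (idealized, already-localized positions), not the paper's joint sparse-recovery-and-tracking scheme:

    ```python
    import numpy as np

    def track_velocities(frames, dt, max_dist):
        """Nearest-neighbour linking of per-frame localizations.

        frames: list of (N_k, 2) arrays of microbubble positions per frame.
        Returns an array of per-link velocity vectors (displacement / dt).
        """
        velocities = []
        for prev, curr in zip(frames[:-1], frames[1:]):
            for p in prev:
                d = np.linalg.norm(curr - p, axis=1)
                j = np.argmin(d)
                if d[j] <= max_dist:            # gate: reject implausible jumps
                    velocities.append((curr[j] - p) / dt)
        return np.array(velocities)
    ```

    The per-link velocities are exactly the quantitative flow measurements the abstract refers to; the frame-by-frame structure is what makes a real-time implementation plausible.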

    The IPAC Image Subtraction and Discovery Pipeline for the intermediate Palomar Transient Factory

    We describe the near real-time transient-source discovery engine for the intermediate Palomar Transient Factory (iPTF), currently in operations at the Infrared Processing and Analysis Center (IPAC), Caltech. We coin this system the IPAC/iPTF Discovery Engine (or IDE). We review the algorithms used for PSF-matching, image subtraction, detection, photometry, and machine-learned (ML) vetting of extracted transient candidates. We also review the performance of our ML classifier. For a limiting signal-to-noise ratio of 4 in relatively unconfused regions, "bogus" candidates from processing artifacts and imperfect image subtractions outnumber real transients by ~ 10:1. This can be considerably higher for image data with inaccurate astrometric and/or PSF-matching solutions. Despite this occasionally high contamination rate, the ML classifier is able to identify real transients with an efficiency (or completeness) of ~ 97% for a maximum tolerable false-positive rate of 1% when classifying raw candidates. All subtraction-image metrics, source features, ML probability-based real-bogus scores, contextual metadata from other surveys, and possible associations with known Solar System objects are stored in a relational database for retrieval by the various science working groups. We review our efforts in mitigating false-positives and our experience in optimizing the overall system in response to the multitude of science projects underway with iPTF. Comment: 66 pages, 21 figures, 7 tables, accepted by PASP
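    The operating point quoted above (~97% efficiency at a 1% false-positive rate) corresponds to a simple thresholding of the classifier's real-bogus scores. The sketch below shows that calculation on synthetic scores; the function name and distributions are illustrative, not from the IDE pipeline:

    ```python
    import numpy as np

    def efficiency_at_fpr(scores_real, scores_bogus, max_fpr=0.01):
        """Pick the score threshold with false-positive rate <= max_fpr and
        report the resulting efficiency (completeness) on real transients."""
        thresh = np.quantile(scores_bogus, 1.0 - max_fpr)  # e.g. 99th pct of bogus
        efficiency = np.mean(scores_real >= thresh)
        return efficiency, thresh
    ```

    Sweeping max_fpr traces out the classifier's ROC curve; the pipeline's 1% choice trades a small loss of completeness for a roughly hundredfold reduction of the ~10:1 bogus contamination reaching human vetters.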

    KiDS-i-800: Comparing weak gravitational lensing measurements in same-sky surveys

    We present a weak gravitational lensing analysis of 815 square degrees of i-band imaging from the Kilo-Degree Survey (KiDS-i-800). In contrast to the deep r-band observations, which take priority during excellent seeing conditions and form the primary KiDS dataset (KiDS-r-450), the complementary yet shallower KiDS-i-800 spans a wide range of observing conditions. The overlapping KiDS-i-800 and KiDS-r-450 imaging therefore provides a unique opportunity to assess the robustness of weak lensing measurements. In our analysis, we introduce two new `null' tests. The `nulled' two-point shear correlation function uses a matched catalogue to show that the calibrated KiDS-i-800 and KiDS-r-450 shear measurements agree at the level of 1 ± 4%. We use five galaxy lens samples to determine a `nulled' galaxy-galaxy lensing signal from the full KiDS-i-800 and KiDS-r-450 surveys and find that the measurements agree to 7 ± 5% when the KiDS-i-800 source redshift distribution is calibrated using either spectroscopic redshifts, or the 30-band photometric redshifts from the COSMOS survey. Comment: 24 pages, 20 figures. Submitted to MNRAS. Comments welcome
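    The statistic underlying the first null test is the two-point shear correlation function ξ₊(θ) = ⟨e_t e_t'⟩ + ⟨e_× e_×'⟩, here sketched as a brute-force pair-count estimator; in the paper's nulled version, the estimator would be fed the difference of matched i-band and r-band shears, so that agreement of the two catalogues drives the signal to zero. This O(n²) illustration is an assumption-laden sketch, not the survey pipeline:

    ```python
    import numpy as np

    def xi_plus(x, y, e1, e2, bins):
        """Brute-force xi_plus estimator, binned in pair separation."""
        e = e1 + 1j * e2
        xi = np.zeros(len(bins) - 1)
        counts = np.zeros(len(bins) - 1)
        n = len(x)
        for i in range(n):
            for j in range(i + 1, n):
                dx, dy = x[j] - x[i], y[j] - y[i]
                theta = np.hypot(dx, dy)
                rot = np.exp(-2j * np.arctan2(dy, dx))   # rotate into the pair frame
                ei, ej = e[i] * rot, e[j] * rot          # tangential/cross components
                k = np.searchsorted(bins, theta) - 1
                if 0 <= k < len(xi):
                    # xi_+ = <e_t e_t'> + <e_x e_x'> = Re(ei * conj(ej))
                    xi[k] += (ei * np.conj(ej)).real
                    counts[k] += 1
        return xi / np.maximum(counts, 1)
    ```

    Production analyses use tree-based pair counting (e.g. TreeCorr-style codes) rather than the double loop, but the estimator is the same.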

    Improving PSF modelling for weak gravitational lensing using new methods in model selection

    A simple theoretical framework for the description and interpretation of spatially correlated modelling residuals is presented, and the resulting tools are found to provide a useful aid to model selection in the context of weak gravitational lensing. The description is focused upon the specific problem of modelling the spatial variation of a telescope point spread function (PSF) across the instrument field of view, a crucial stage in lensing data analysis, but the technique may be used to rank competing models wherever data are described empirically. As such it may, with further development, provide useful extra information when used in combination with existing model selection techniques such as the Akaike and Bayesian Information Criteria, or the Bayesian evidence. Two independent diagnostic correlation functions are described and the interpretation of these functions demonstrated using a simulated PSF anisotropy field. The efficacy of these diagnostic functions as an aid to the correct choice of empirical model is then demonstrated by analyzing results for a suite of Monte Carlo simulations of random PSF fields with varying degrees of spatial structure, and it is shown how the diagnostic functions can be related to requirements for precision cosmic shear measurement. The limitations of the technique, and opportunities for improvements and applications to fields other than weak gravitational lensing, are discussed. Comment: 18 pages, 12 figures. Modified to match version accepted for publication in MNRAS
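    The basic idea of a residual-correlation diagnostic can be sketched as the mean pairwise product of model residuals binned by separation: an under-fitted PSF model leaves spatial structure, so this function is significantly non-zero at small separations, while residuals from a well-chosen model scatter around zero. This is a generic illustration of the concept, not the paper's specific pair of diagnostic functions:

    ```python
    import numpy as np

    def residual_correlation(x, y, r, bins):
        """Mean product of residuals r at positions (x, y), binned by separation."""
        corr = np.zeros(len(bins) - 1)
        counts = np.zeros(len(bins) - 1)
        n = len(x)
        for i in range(n):
            for j in range(i + 1, n):
                sep = np.hypot(x[j] - x[i], y[j] - y[i])
                k = np.searchsorted(bins, sep) - 1
                if 0 <= k < len(corr):
                    corr[k] += r[i] * r[j]
                    counts[k] += 1
        return corr / np.maximum(counts, 1)
    ```

    Ranking candidate PSF models by how flat (consistent with zero) this function is complements likelihood-penalty criteria such as AIC/BIC, which see only the residual amplitudes and not their spatial arrangement.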

    Characterizing 51 Eri b from 1-5 μm: a partly cloudy exoplanet

    We present spectro-photometry spanning 1-5 μm of 51 Eridani b, a 2-10 M_Jup planet discovered by the Gemini Planet Imager Exoplanet Survey. In this study, we present new K1 (1.90-2.19 μm) and K2 (2.10-2.40 μm) spectra taken with the Gemini Planet Imager, as well as updated L_P (3.76 μm) and new M_S (4.67 μm) photometry from the NIRC2 Narrow camera. The new data were combined with J (1.13-1.35 μm) and H (1.50-1.80 μm) spectra from the discovery epoch with the goal of better characterizing the planet properties. The 51 Eri b photometry is redder than field brown dwarfs as well as known young T-dwarfs with similar spectral type (between T4-T8), and we propose that 51 Eri b might be in the process of undergoing the transition from L-type to T-type. We used two complementary atmosphere model grids, including either deep iron/silicate clouds or sulfide/salt clouds in the photosphere, spanning a range of cloud properties, including fully cloudy, cloud-free and patchy/intermediate-opacity clouds. Model fits suggest that 51 Eri b has an effective temperature ranging between 605-737 K, a solar metallicity, a surface gravity of log(g) = 3.5-4.0 dex, and that the atmosphere requires patchy clouds to model the SED. From the model atmospheres, we infer a luminosity for the planet of log L/L_⊙ = -5.83 to -5.93, leaving 51 Eri b in the unique position of being one of the only directly imaged planets consistent with having formed via a cold-start scenario. Comparisons of the planet SED against warm-start models indicate that the planet luminosity is best reproduced by a planet formed via core accretion with a core mass between 15 and 127 M_⊕. Comment: 27 pages, 19 figures. Accepted for publication in The Astronomical Journal