
    Singularity-sensitive gauge-based radar rainfall adjustment methods for urban hydrological applications

    Gauge-based radar rainfall adjustment techniques have been widely used to improve the applicability of radar rainfall estimates to large-scale hydrological modelling. However, their use for urban hydrological applications is limited because they were mostly developed based upon Gaussian approximations and therefore tend to smooth out so-called "singularities" (features of a non-Gaussian field) that can be observed in the fine-scale rainfall structure. Overlooking these singularities can be critical, given that their distribution is highly consistent with that of local extreme magnitudes, and may cause large errors in the subsequent urban hydrological modelling. To address this limitation and improve the applicability of adjustment techniques at urban scales, a method is proposed herein which incorporates a local singularity analysis into existing adjustment techniques and preserves the singularity structures throughout the adjustment process. In this paper the proposed singularity analysis is incorporated into the Bayesian merging technique, and the performance of the resulting singularity-sensitive method is compared with that of the original (non-singularity-sensitive) Bayesian technique and the commonly used mean-field bias adjustment. The test uses as case studies four storm events observed during 2011 in the Portobello catchment (53 km²; Edinburgh, UK), for which radar estimates, dense rain gauge and sewer flow records, as well as a recently calibrated urban drainage model, were available. The results suggest that, in general, the proposed singularity-sensitive method can effectively preserve the non-normality in the local rainfall structure, while retaining the ability of the original adjustment techniques to generate nearly unbiased estimates. Moreover, its ability to preserve the non-normality in rainfall estimates often leads to better reproduction of the urban drainage system's dynamics, particularly of peak runoff flows.
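
    As a rough sketch of the core idea (not the authors' implementation), the Python below estimates local singularity exponents by regressing log window-averaged rainfall against log window size, factors the singular component out of the field before gauge-based merging, and multiplies it back in afterwards. The helper names and the relative scale `eps` are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_singularity(field, scales=(1, 2, 3)):
    """Estimate local singularity exponents alpha(x) of a 2-D rainfall field
    by regressing log window-averaged intensity on log window size."""
    sizes = [2 * s + 1 for s in scales]
    log_means = np.stack([
        np.log(uniform_filter(field.astype(float), size=k, mode="nearest") + 1e-12)
        for k in sizes
    ])
    # slope of log<mean> versus log<size>; in two dimensions, slope = alpha - 2
    slope = np.polyfit(np.log(sizes), log_means.reshape(len(sizes), -1), 1)[0]
    return slope.reshape(field.shape) + 2.0

def split_singular(field, eps=0.1):
    """Factor the field into a singular component and a 'de-singularised'
    residual: merge the residual with gauge data, then multiply the
    singular factor back in to restore local extremes."""
    alpha = local_singularity(field)
    singular = eps ** (alpha - 2.0)
    return field / singular, singular
```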

    A Statistical Model for Simultaneous Template Estimation, Bias Correction, and Registration of 3D Brain Images

    Template estimation plays a crucial role in computational anatomy, since it provides reference frames for performing statistical analysis of the underlying anatomical population variability. Models for template estimation need to account for variability across sites and image acquisition protocols. To account for such variability, we propose a generative template estimation model that makes simultaneous inference of bias fields in individual images, deformations for image registration, and variance hyperparameters. In contrast, existing maximum a posteriori based methods need to rely on either bias-invariant similarity measures or robust image normalization. Results on synthetic and real brain MRI images demonstrate the capability of the model to capture heterogeneity in intensities and to provide a reliable template estimation from registration.
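
    The paper's generative model infers bias fields, deformations and variance hyperparameters jointly; as a far simpler stand-in for the bias part alone, the sketch below fits a smooth multiplicative bias field as a low-order polynomial surface in the log-intensity domain by least squares. Function and parameter names are illustrative.

```python
import numpy as np

def estimate_bias_field(image, order=2, eps=1e-6):
    """Least-squares fit of a smooth multiplicative bias field as a low-order
    2-D polynomial surface in the log-intensity domain; the corrected image
    is image / bias. A standard simplification, not the paper's model."""
    ny, nx = image.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    xx = xx / (nx - 1) - 0.5
    yy = yy / (ny - 1) - 0.5
    # design matrix with all monomials x^i * y^j of total degree <= order
    cols = [xx ** i * yy ** j
            for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.stack([c.ravel() for c in cols], axis=1)
    coef, *_ = np.linalg.lstsq(A, np.log(image.ravel() + eps), rcond=None)
    return np.exp(A @ coef).reshape(image.shape)
```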

    Monte Carlo-based Noise Compensation in Coil Intensity Corrected Endorectal MRI

    Background: Prostate cancer is one of the most common forms of cancer found in males, making early diagnosis important. Magnetic resonance imaging (MRI) has been useful in visualizing and localizing tumor candidates, and with the use of endorectal coils (ERC) the signal-to-noise ratio (SNR) can be improved. The coils introduce intensity inhomogeneities, and the surface coil intensity correction built into MRI scanners is used to reduce these inhomogeneities. However, this correction, typically performed at the scanner level, leads to noise amplification and noise level variations. Methods: In this study, we introduce a new Monte Carlo-based noise compensation approach for coil intensity corrected endorectal MRI which allows for effective noise compensation and preservation of details within the prostate. The approach accounts for the ERC SNR profile via a spatially-adaptive noise model for correcting non-stationary noise variations. Such a method is particularly useful for improving the image quality of coil intensity corrected endorectal MRI when correction is performed at the scanner level and the original raw data is not available. Results: SNR and contrast-to-noise ratio (CNR) analysis in patient experiments demonstrates an average improvement of 11.7 dB and 11.2 dB, respectively, over uncorrected endorectal MRI, and strong performance compared to existing approaches. Conclusions: A new noise compensation method was developed to improve the quality of coil intensity corrected endorectal MRI data when correction is performed at the scanner level and the original raw data is not available.
    Comment: 23 pages
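
    A generic Monte Carlo posterior-mean sketch of the stated ingredients, not the authors' method: candidate intensities are drawn from each pixel's neighbourhood and weighted by a Gaussian likelihood whose standard deviation follows a spatially varying noise map (standing in for the ERC SNR profile). `sigma_map` and all names are assumed inputs.

```python
import numpy as np

def mc_noise_compensate(image, sigma_map, n_samples=64, radius=5, seed=0):
    """For each pixel, draw candidate intensities from its neighbourhood and
    weight them by a Gaussian likelihood with the local (spatially varying)
    noise standard deviation sigma_map[i, j]; return the weighted mean."""
    rng = np.random.default_rng(seed)
    ny, nx = image.shape
    out = np.empty((ny, nx))
    for i in range(ny):
        for j in range(nx):
            patch = image[max(0, i - radius):i + radius + 1,
                          max(0, j - radius):j + radius + 1].ravel()
            cand = rng.choice(patch, size=min(n_samples, patch.size),
                              replace=False)
            w = np.exp(-0.5 * ((cand - image[i, j]) / sigma_map[i, j]) ** 2)
            out[i, j] = np.sum(w * cand) / (np.sum(w) + 1e-12)
    return out
```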

    Improving the applicability of radar rainfall estimates for urban pluvial flood modelling and forecasting

    This work explores the possibility of improving the applicability of radar rainfall estimates (whose accuracy is generally insufficient) to the verification and operation of urban storm-water drainage models by employing a number of local gauge-based radar rainfall adjustment techniques. The adjustment techniques tested in this work include a simple mean-field bias (MFB) adjustment, as well as a more complex Bayesian radar-raingauge data merging method which aims at better preserving the spatial structure of rainfall fields. In addition, a novel technique (namely, local singularity analysis) is introduced and shown to improve the Bayesian method by better capturing and reproducing storm patterns and peaks. Two urban catchments were used as case studies: the Cranbrook catchment (9 km²) in north-east London, and the Portobello catchment (53 km²) in the east of Edinburgh. In the former, the potential benefits of gauge-based adjusted radar rainfall estimates in an operational context were analysed, whereas in the latter the potential benefits of adjusted estimates for model verification purposes were explored. Different rainfall inputs, including raingauge, original radar and the aforementioned merged estimates, were fed into the urban drainage models of the two catchments, and the hydraulic outputs were compared against available flow and depth records. On the whole, the tested adjustment techniques improved the applicability of radar rainfall estimates to urban hydrological applications, with the Bayesian-based methods, in particular the singularity-sensitive one, providing more realistic and accurate rainfall fields which result in better reproduction of the urban drainage system's dynamics. Further testing is still necessary to better assess the benefits of these adjustment methods, identify their shortcomings and improve them accordingly.
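
    The simplest of the tested techniques, mean-field bias adjustment, can be stated in a few lines: the whole radar field is scaled by the ratio of gauge totals to collocated radar totals. A minimal sketch, with array names assumed:

```python
import numpy as np

def mean_field_bias_adjust(radar_field, gauge_obs, radar_at_gauges):
    """Mean-field bias (MFB) adjustment: one multiplicative factor for the
    whole field, B = sum(gauges) / sum(radar at gauge locations)."""
    bias = gauge_obs.sum() / max(radar_at_gauges.sum(), 1e-12)
    return bias * radar_field
```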

    Recent Progress in Image Deblurring

    This paper comprehensively reviews the recent development of image deblurring, including non-blind/blind and spatially invariant/variant techniques. These techniques share the objective of inferring a latent sharp image from one or several corresponding blurry images, while blind deblurring techniques must additionally derive an accurate blur kernel. Given the critical role of image restoration in modern imaging systems, which must provide high-quality images under complex conditions such as motion, undesirable lighting, and imperfect system components, image deblurring has attracted growing attention in recent years. From the viewpoint of how they handle the ill-posedness that is central to deblurring tasks, existing methods can be grouped into five categories: Bayesian inference frameworks, variational methods, sparse representation-based methods, homography-based modeling, and region-based methods. Despite considerable progress, image deblurring, especially in the blind case, remains limited by complex application conditions that make the blur kernel difficult to obtain and often spatially variant. We provide a holistic understanding and deep insight into image deblurring in this review. An analysis of the empirical evidence for representative methods, practical issues, and a discussion of promising future directions are also presented.
    Comment: 53 pages, 17 figures
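
    As a concrete instance of the non-blind, spatially invariant case, here is a minimal Wiener deconvolution sketch, one of the classical baselines such reviews cover; `nsr` is an assumed noise-to-signal power ratio.

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, nsr=1e-2):
    """Frequency-domain Wiener deconvolution. Assumes the kernel's origin is
    at index (0, 0), e.g. after np.fft.ifftshift of a centred PSF."""
    H = np.fft.fft2(kernel, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
    return np.real(np.fft.ifft2(F_hat))
```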

    D³PO - Denoising, Deconvolving, and Decomposing Photon Observations

    The analysis of astronomical images is a non-trivial task. The D³PO algorithm addresses the inference problem of denoising, deconvolving, and decomposing photon observations. Its primary goal is the simultaneous but individual reconstruction of the diffuse and point-like photon flux given a single photon count image, where the fluxes are superimposed. In order to discriminate between these morphologically different signal components, a probabilistic algorithm is derived in the language of information field theory based on a hierarchical Bayesian parameter model. The signal inference exploits prior information on the spatial correlation structure of the diffuse component and the brightness distribution of the spatially uncorrelated point-like sources. A maximum a posteriori solution and a solution minimizing the Gibbs free energy of the inference problem using variational Bayesian methods are discussed. Since the derivation of the solution is not dependent on the underlying position space, the implementation of the D³PO algorithm uses the NIFTY package to ensure applicability to various spatial grids and at any resolution. The fidelity of the algorithm is validated by the analysis of simulated data, including a realistic high energy photon count image showing a 32 × 32 arcmin² observation with a spatial resolution of 0.1 arcmin. In all tests the D³PO algorithm successfully denoised, deconvolved, and decomposed the data into a diffuse and a point-like signal estimate for the respective photon flux components.
    Comment: 22 pages, 8 figures, 2 tables, accepted by Astronomy & Astrophysics; refereed version, 1 figure added, results unchanged, software available at http://www.mpa-garching.mpg.de/ift/d3po
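
    A toy penalised-likelihood analogue of the diffuse/point decomposition, far simpler than the information-field-theoretic model and workable only at toy image sizes; all names and penalty weights are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def decompose_counts(counts, lam_smooth=10.0, lam_sparse=1.0):
    """Split a Poisson count image into a smooth diffuse component and a
    sparse point-like component by penalised maximum likelihood. Gradients
    are taken numerically, so this only scales to small images."""
    shape, n = counts.shape, counts.size

    def objective(z):
        d = np.exp(z[:n]).reshape(shape)      # diffuse flux (positive)
        p = np.exp(z[n:]).reshape(shape)      # point-like flux (positive)
        flux = d + p
        nll = np.sum(flux - counts * np.log(flux + 1e-12))   # Poisson NLL
        gy, gx = np.gradient(np.log(d + 1e-12))
        smooth = lam_smooth * np.sum(gy ** 2 + gx ** 2)      # diffuse smoothness
        sparse = lam_sparse * np.sum(p)                      # point sparsity
        return nll + smooth + sparse

    z0 = np.full(2 * n, np.log(counts.mean() / 2 + 1e-3))
    res = minimize(objective, z0, method="L-BFGS-B")
    return np.exp(res.x[:n]).reshape(shape), np.exp(res.x[n:]).reshape(shape)
```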

    Modeling and inference of multisubject fMRI data

    Functional magnetic resonance imaging (fMRI) is a rapidly growing technique for studying the brain in action. Since its creation [1], [2], cognitive scientists have been using fMRI to understand how we remember, manipulate, and act on information in our environment. Working with magnetic resonance physicists, statisticians, and engineers, these scientists are pushing the frontiers of knowledge of how the human brain works. The design and analysis of single-subject fMRI studies have been well described: for example, [3], chapters 10 and 11 of [4], and chapters 11 and 14 of [5] all give accessible overviews of fMRI methods for one subject. In contrast, while the appropriate manner in which to analyze a group of subjects has been the topic of several recent papers, we feel it has not been covered well in introductory texts and review papers. Therefore, in this article, we bring together old and new work on so-called group modeling of fMRI data, using a consistent notation to make the methods more accessible and comparable.
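
    The simplest widely used group model is the two-stage "summary statistics" approach: fit each subject's GLM separately, then treat the per-subject contrast estimates as data for a random-effects test. A minimal sketch, with the input array name assumed:

```python
import numpy as np
from scipy import stats

def group_level_ttest(first_level_betas):
    """first_level_betas: array of shape (n_subjects, n_voxels) holding each
    subject's first-level contrast estimates. A one-sample t-test per voxel
    is the simplest random-effects group model."""
    t, p = stats.ttest_1samp(first_level_betas, popmean=0.0, axis=0)
    return t, p
```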

    A Bayesian Heteroscedastic GLM with Application to fMRI Data with Motion Spikes

    We propose a voxel-wise general linear model with autoregressive noise and heteroscedastic noise innovations (GLMH) for analyzing functional magnetic resonance imaging (fMRI) data. The model is analyzed from a Bayesian perspective and has the benefit of automatically down-weighting time points close to motion spikes in a data-driven manner. We develop a highly efficient Markov chain Monte Carlo (MCMC) algorithm that allows for Bayesian variable selection among the regressors to model both the mean (i.e., the design matrix) and the variance. This makes it possible to include a broad range of explanatory variables in both the mean and the variance (e.g., time trends, activation stimuli, head motion parameters and their temporal derivatives), and to compute the posterior probability of inclusion from the MCMC output. Variable selection is also applied to the lags in the autoregressive noise process, making it possible to infer the lag order from the data simultaneously with all other model parameters. We use both simulated data and real fMRI data from OpenfMRI to illustrate the importance of properly modeling heteroscedasticity in fMRI data analysis. Our results show that the GLMH tends to detect more brain activity than its homoscedastic counterpart by allowing the variance to change over time depending on the degree of head motion.
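
    The paper's inference is MCMC with variable selection; as a much cruder point-estimate sketch of the same down-weighting idea, one can run a feasible-GLS loop in which the log noise variance is linear in variance regressors (e.g. head-motion magnitude). All names are illustrative.

```python
import numpy as np

def heteroscedastic_glm(y, X, Z, n_iter=5):
    """Feasible GLS: model log Var(e_t) = Z[t] @ gamma (Z should include an
    intercept column), so time points near motion spikes get small weights.
    A point-estimate caricature of the GLMH, not the paper's MCMC."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        resid = y - X @ beta
        gamma = np.linalg.lstsq(Z, np.log(resid ** 2 + 1e-12), rcond=None)[0]
        w = np.exp(-Z @ gamma)               # inverse-variance weights
        Xw = X * w[:, None]
        beta = np.linalg.solve(Xw.T @ X, Xw.T @ y)
    return beta, np.exp(Z @ gamma)           # coefficients, variance profile
```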