
    The SENSE-Isomorphism Theoretical Image Voxel Estimation (SENSE-ITIVE) Model for Reconstruction and Observing Statistical Properties of Reconstruction Operators

    The acquisition of sub-sampled data from an array of receiver coils has become a common means of reducing data acquisition time in MRI. Of the various techniques used in parallel MRI, SENSitivity Encoding (SENSE) is one of the most common, making use of a complex-valued weighted least squares estimation to unfold the aliased images. It was recently shown in Bruce et al. [Magn. Reson. Imag. 29(2011):1267-1287] that when the SENSE model is represented in terms of a real-valued isomorphism, it assumes a skew-symmetric covariance between receiver coils, as well as an identity covariance structure between voxels. In this manuscript, we show that not only is the skew-symmetric coil covariance unlike that of real data, but the estimated covariance structure between voxels over a time series of experimental data is not an identity matrix. As such, a new model, entitled SENSE-ITIVE, is described with both revised coil and voxel covariance structures. Both the SENSE and SENSE-ITIVE models are represented in terms of real-valued isomorphisms, allowing a statistical analysis of the reconstructed voxel means, variances, and correlations that result from the different coil and voxel covariance structures used in the reconstruction process. It is shown through both theoretical and experimental illustrations that the misspecification of the coil and voxel covariance structures in the SENSE model results in a lower standard deviation in each voxel of the reconstructed images, and thus an artificially inflated SNR, compared to the standard deviation and SNR of the SENSE-ITIVE model, in which both the coil and voxel covariances are appropriately accounted for. It is also shown that the reconstruction operations of the two models induce different correlations, and consequently that different correlations are estimated throughout the reconstructed time series. These differences in correlations could result in meaningful differences in the interpretation of results.
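    The weighted least-squares unfolding at the heart of SENSE can be sketched numerically. This is a generic illustration, not the paper's model: the coil sensitivities, noise covariance, and voxel values below are invented. For R-fold acceleration, each set of R folded voxels is unfolded from L coil measurements as v = (S^H Psi^-1 S)^-1 S^H Psi^-1 a, where Psi is the coil noise covariance.

```python
import numpy as np

rng = np.random.default_rng(0)
L, R = 8, 2  # receiver coils, acceleration factor (voxels folded together)

# Hypothetical complex coil sensitivities at the R aliased voxel locations.
S = rng.standard_normal((L, R)) + 1j * rng.standard_normal((L, R))

# Hermitian positive-definite coil noise covariance (not assumed identity).
A = rng.standard_normal((L, L)) + 1j * rng.standard_normal((L, L))
Psi = A @ A.conj().T + L * np.eye(L)

true_v = np.array([1.0 + 0.5j, -0.3 + 0.2j])  # made-up voxel values
a = S @ true_v                                 # aliased coil measurements

# Weighted least-squares unfolding: v = (S^H Psi^-1 S)^-1 S^H Psi^-1 a
W = np.linalg.inv(Psi)
v_hat = np.linalg.solve(S.conj().T @ W @ S, S.conj().T @ W @ a)
```

    In the noise-free case any positive-definite weighting recovers the true voxel values; with noise, the choice of covariance structure changes the variance of the reconstruction, which is the kind of misspecification effect the abstract discusses.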

    De-aliasing Undersampled Volume Images for Visualization

    We present and illustrate a new technique, Image Correlation Supersampling (ICS), for resampling volume data that are undersampled in one dimension. The resulting data satisfy the sampling theorem, so many visualization algorithms that assume the theorem is satisfied can be applied to them. Without the supersampling, these visualization algorithms create artifacts due to aliasing. The assumptions made in developing the algorithm are often satisfied by data that are undersampled temporally. Through this supersampling we can completely characterize phenomena with measurements at a coarser temporal sampling rate than would otherwise be necessary. This can save acquisition time and storage space, permit the study of faster phenomena, and allow their study without introducing aliasing artifacts. The resampling technique relies on a priori knowledge of the measured phenomenon and applies, in particular, to scalar concentration measurements of fluid flow. Because of the characteristics of fluid flow, an image deformation that takes each slice image to the next can be used to calculate intermediate slice images at arbitrarily fine spacing. We determine the deformation with an automatic, multi-resolution algorithm.
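    The deformation-based resampling can be illustrated in one dimension. This is a minimal sketch under simplifying assumptions (a pure, exactly known translational deformation), not the paper's multi-resolution algorithm: intermediate slices are obtained by resampling along a scaled version of the deformation that takes one slice to the next.

```python
import numpy as np

nx = 128
x = np.arange(nx, dtype=float)
slice0 = np.exp(-((x - 40.0) ** 2) / 30.0)  # a Gaussian concentration profile
d = 12.0  # displacement taking slice0 to slice1 (invented)

def warp(profile, frac):
    """Resample profile along the scaled deformation frac * d."""
    return np.interp(x - frac * d, x, profile)

slice1 = warp(slice0, 1.0)      # the next measured slice, peak near x = 52
slice_half = warp(slice0, 0.5)  # an intermediate slice, peak near x = 46
```

    Any intermediate fraction gives a plausible in-between slice, so the effective sampling along the undersampled dimension can be made arbitrarily fine.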

    Tree-structured complementary filter banks using all-pass sections

    Tree-structured complementary filter banks are developed with transfer functions that are simultaneously all-pass complementary and power complementary. Using a formulation based on unitary transforms and all-pass functions, we obtain analysis and synthesis filter banks which are related through a transposition operation, such that the cascade of analysis and synthesis filter banks achieves an all-pass function. The simplest structure is obtained using a Hadamard transform, which is shown to correspond to a binary tree structure. Tree structures can be generated for a variety of other unitary transforms as well. In addition, given a tree-structured filter bank where the number of bands is a power of two, simple methods are developed to generate complementary filter banks with an arbitrary number of channels, which retain the transpose relationship between analysis and synthesis banks, and allow for any combination of bandwidths. The structural properties of the filter banks are illustrated with design examples, and multirate applications are outlined.
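    The two-channel building block, a 2-point Hadamard (sum/difference) combination of two all-pass branches, can be verified numerically. This is a generic sketch with invented all-pass coefficients, not a design from the paper: since each branch has unit magnitude on the unit circle, the pair is power complementary (|H0|^2 + |H1|^2 = 1) and all-pass complementary (H0 + H1 is all-pass).

```python
import numpy as np

def allpass(a, w):
    """Frequency response of the first-order all-pass (a + z^-1)/(1 + a*z^-1)."""
    zinv = np.exp(-1j * w)
    return (a + zinv) / (1 + a * zinv)

w = np.linspace(0, np.pi, 512)
A0 = allpass(0.3, w) * allpass(-0.5, w)  # one branch: a cascade of sections
A1 = allpass(0.7, w)                     # the other branch (coefficients invented)

# 2-point Hadamard (sum/difference) combination of the all-pass branches:
H0 = (A0 + A1) / 2
H1 = (A0 - A1) / 2

power = np.abs(H0) ** 2 + np.abs(H1) ** 2  # identically 1: power complementary
total = H0 + H1                            # equals A0: all-pass complementary
```

    Repeating the sum/difference split on each branch yields the binary tree structure the abstract associates with the Hadamard transform.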

    HYDRA: Hybrid Deep Magnetic Resonance Fingerprinting

    Purpose: Magnetic resonance fingerprinting (MRF) methods typically rely on dictionary matching to map the temporal MRF signals to quantitative tissue parameters. Such approaches suffer from inherent discretization errors, as well as high computational complexity as the dictionary size grows. To alleviate these issues, we propose a HYbrid Deep magnetic ResonAnce fingerprinting approach, referred to as HYDRA. Methods: HYDRA involves two stages: a model-based signature restoration phase and a learning-based parameter restoration phase. Signal restoration is implemented using low-rank based de-aliasing techniques, while parameter restoration is performed using a deep nonlocal residual convolutional neural network. The designed network is trained on synthesized MRF data simulated with the Bloch equations and fast imaging with steady-state precession (FISP) sequences. In test mode, it takes a temporal MRF signal as input and produces the corresponding tissue parameters. Results: We validated our approach on both synthetic data and anatomical data generated from a healthy subject. The results demonstrate that, in contrast to conventional dictionary-matching based MRF techniques, our approach significantly improves inference speed by eliminating the time-consuming dictionary matching operation, and alleviates discretization errors by outputting continuous-valued parameters. We further avoid the need to store a large dictionary, thus reducing memory requirements. Conclusions: Our approach demonstrates advantages in terms of inference speed, accuracy, and storage requirements over competing MRF methods.
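    For contrast, the conventional dictionary-matching baseline that HYDRA replaces can be sketched in a few lines. Everything here is synthetic: random unit-norm atoms stand in for Bloch-simulated signal evolutions, and the (T1, T2) grid is made up. The sketch shows why cost and discretization error grow with dictionary size: each query is an inner product against every atom, and the answer is restricted to grid values.

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 200, 1000  # timepoints per signal, number of dictionary atoms

# Hypothetical dictionary: in real MRF each column would be a Bloch-simulated
# evolution for one (T1, T2) grid point; random atoms stand in here.
D = rng.standard_normal((T, N))
D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
params = rng.uniform(0.1, 3.0, size=(N, 2))  # made-up (T1, T2) per atom, in s

def dictionary_match(signal):
    """Return the grid parameters of the atom best correlated with signal."""
    idx = np.argmax(np.abs(D.T @ signal))    # O(T * N) work per query
    return params[idx]

query = 2.5 * D[:, 123]             # a scaled copy of a dictionary signal
matched = dictionary_match(query)   # recovers params[123], but only grid values
```

    A network that regresses parameters directly, as in HYDRA, replaces the argmax over N atoms with a single forward pass and is not restricted to the grid.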

    Retrieving shallow shear-wave velocity profiles from 2D seismic-reflection data with severely aliased surface waves

    The inversion of surface-wave phase-velocity dispersion curves provides a reliable method to derive near-surface shear-wave velocity profiles. In this work, we invert phase-velocity dispersion curves estimated from 2D seismic-reflection data. These data cannot be used to image the first 50 m with seismic-reflection processing techniques due to the presence of indistinct first breaks and significant NMO-stretching of the shallow reflections. A surface-wave analysis was proposed to derive information about the near surface in order to complement the seismic-reflection stacked sections, which are satisfactory for depths between 50 and 700 m. In order to perform the analysis, we had to overcome some problems, such as the short acquisition time and the large receiver spacing, which resulted in severe spatial aliasing. The analysis consists of spatial partitioning of each line in segments, picking of the phase-velocity dispersion curves for each segment in the f-k domain, and inversion of the picked curves using the neighborhood algorithm. The spatial aliasing is successfully circumvented by continuously tracking the surface-wave modal curves in the f-k domain. This enables us to sample the curves up to a frequency of 40 Hz, even though most components beyond 10 Hz are spatially aliased. The inverted 2D VS sections feature smooth horizontal layers, and a sensitivity analysis yields a penetration depth of 20–25 m. The results suggest that long profiles may be more efficiently surveyed by using a large receiver separation and dealing with the spatial aliasing in the described way, rather than ensuring that no spatially aliased surface waves are acquired.
    Fil: Onnis, Luciano Emanuel. Consejo Nacional de Investigaciones Científicas y Técnicas. Oficina de Coordinación Administrativa Ciudad Universitaria. Instituto de Física de Buenos Aires. Universidad de Buenos Aires. Facultad de Ciencias Exactas y Naturales. Instituto de Física de Buenos Aires; Argentina. Universidad de Buenos Aires. Facultad de Ciencias Exactas y Naturales. Departamento de Física; Argentina
    Fil: Osella, Ana Maria. Consejo Nacional de Investigaciones Científicas y Técnicas. Oficina de Coordinación Administrativa Ciudad Universitaria. Instituto de Física de Buenos Aires. Universidad de Buenos Aires. Facultad de Ciencias Exactas y Naturales. Instituto de Física de Buenos Aires; Argentina. Universidad de Buenos Aires. Facultad de Ciencias Exactas y Naturales. Departamento de Física; Argentina
    Fil: Carcione, Jose M. Istituto Nazionale di Oceanografia e di Geofisica Sperimentale; Italia
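    Spatial aliasing from a large receiver spacing can be demonstrated with a synthetic example. This is a generic sketch with invented numbers (a 20 Hz wave at 300 m/s phase velocity, 10 m receiver spacing), not the survey geometry of the paper: the true wavenumber exceeds the Nyquist wavenumber pi/dx, so the spectral peak appears at the folded wavenumber 2*pi/dx - k0 rather than at k0.

```python
import numpy as np

f0 = 20.0                  # Hz (invented)
c = 300.0                  # surface-wave phase velocity, m/s (invented)
k0 = 2 * np.pi * f0 / c    # true wavenumber, ~0.419 rad/m
dx = 10.0                  # receiver spacing, m; Nyquist pi/dx ~ 0.314 rad/m
n = 64
x = np.arange(n) * dx
u = np.cos(k0 * x)         # one temporal-frequency component across the array

# Wavenumber spectrum of the sampled wavefield:
k = 2 * np.pi * np.fft.rfftfreq(n, d=dx)
k_peak = k[np.argmax(np.abs(np.fft.rfft(u)))]

k_alias = 2 * np.pi / dx - k0  # wavenumber folded about the Nyquist limit
# k_peak lands near k_alias, not k0: the wave appears spatially aliased.
```

    Tracking a modal curve continuously through the f-k domain, as the abstract describes, lets one undo this folding and keep picking the dispersion curve beyond the nominal aliasing limit.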

    Data-flow Analysis of Programs with Associative Arrays

    Dynamic programming languages, such as PHP, JavaScript, and Python, provide built-in data structures including associative arrays and objects with similar semantics: object properties can be created at run-time and accessed via arbitrary expressions. While a high level of security and safety of applications written in these languages can be of particular importance (consider a web application storing sensitive data and providing its functionality worldwide), dynamic data structures pose significant challenges for data-flow analysis, making traditional static verification methods both unsound and imprecise. In this paper, we propose a sound and precise approach for value and points-to analysis of programs with associative-array-like data structures, upon which data-flow analyses can be built. We implemented our approach in a web-application domain, in an analyzer of PHP code.
    Comment: In Proceedings ESSS 2014, arXiv:1405.055
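    The core difficulty, writes through statically unknown keys, can be sketched with a toy abstract domain. This is our own illustration, not the paper's analysis: statically known keys receive strong updates, while a write through an unknown key is a weak update that may alias any entry, so every entry must absorb the written values.

```python
# Toy value analysis for associative arrays (illustrative only).
UNKNOWN = object()  # stands for a key computed from an arbitrary expression

class AbstractArray:
    def __init__(self):
        self.entries = {}     # statically known key -> set of possible values
        self.unknown = set()  # values written through unknown keys

    def write(self, key, values):
        if key is UNKNOWN:
            # Weak update: an unknown key may alias any existing entry.
            self.unknown |= values
            for vs in self.entries.values():
                vs |= values
        else:
            # Strong update: the key is statically known, old values are killed.
            self.entries[key] = set(values)

    def read(self, key):
        if key is UNKNOWN:
            out = set(self.unknown)
            for vs in self.entries.values():
                out |= vs
            return out
        return self.entries.get(key, set()) | self.unknown

a = AbstractArray()
a.write("x", {1})       # e.g. $a['x'] = 1
a.write(UNKNOWN, {2})   # e.g. $a[$i] = 2 with $i unknown
values_of_x = a.read("x")  # {1, 2}: 'x' may have been overwritten
```

    Soundness forces the read of "x" to include both values; a more precise analysis narrows the set of keys an expression can denote to keep such sets small.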

    An Intercomparison Between Divergence-Cleaning and Staggered Mesh Formulations for Numerical Magnetohydrodynamics

    In recent years, several different strategies have emerged for evolving the magnetic field in numerical MHD. Some of these methods can be classified as divergence-cleaning schemes, where one evolves the magnetic field components just like any other variable in a higher-order Godunov scheme; the fact that the magnetic field is divergence-free is imposed post facto via a divergence-cleaning step. Other schemes for evolving the magnetic field rely on a staggered mesh formulation, which is inherently divergence-free. The claim has been made that the two approaches are equivalent. In this paper we cross-compare three divergence-cleaning schemes, based on scalar and vector divergence cleaning, with a popular divergence-free scheme. All schemes are applied to the same stringent test problem. Several deficiencies in all the divergence-cleaning schemes become clearly apparent, with the scalar divergence-cleaning schemes performing worse than the vector divergence-cleaning scheme. The vector divergence-cleaning scheme also shows some deficiencies relative to the staggered mesh divergence-free scheme. The differences can be explained by realizing that all the divergence-cleaning schemes are based on a Poisson solver, which introduces a non-locality into the scheme, though other subtler points of difference are also catalogued. By using several diagnostics that are routinely used in the study of turbulence, it is shown that the differences in the schemes produce measurable differences in physical quantities that are of interest in such studies.