1,178 research outputs found

    Deconvolution‐based distortion correction of EPI using analytic single‐voxel point‐spread functions

    Purpose To develop a postprocessing algorithm that corrects geometric distortions due to spatial variations of the static magnetic field amplitude, B0, and effects from relaxation during signal acquisition in EPI. Theory and Methods An analytic, complex point‐spread function is deduced for k‐space trajectories of EPI variants and applied to corresponding acquisitions in a resolution phantom and in human volunteers at 3 T. With the analytic point‐spread function and experimental maps of B0 (and, optionally, the effective transverse relaxation time, T2*) as input, a point‐spread function matrix operator is devised for distortion correction by a Tikhonov‐regularized deconvolution in image space. The point‐spread function operator provides additional information for an appropriate correction of the signal intensity distribution. A previous image combination algorithm for acquisitions with opposite phase blip polarities is adapted to the proposed method to recover destructively interfering signal contributions. Results Applications of the proposed deconvolution‐based distortion correction (“DecoDisCo”) algorithm demonstrate excellent distortion correction and superior performance in recovering an undistorted intensity distribution compared with a multifrequency reconstruction. Examples include full and partial Fourier standard EPI scans as well as double‐shot center‐out trajectories. Compared with other distortion‐correction approaches, DecoDisCo permits additional deblurring to obtain sharper images in cases of significant T2* effects. Conclusion Robust, high‐quality distortion correction of EPI acquisitions is feasible by regularized deconvolution with an analytic point‐spread function. The general algorithm, which is publicly released on GitHub, can be straightforwardly adapted for specific EPI variants or other acquisition schemes.
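The core numerical step described above can be illustrated in miniature. The following is a minimal 1-D sketch of Tikhonov-regularized deconvolution with a known point-spread function, written in the image-space matrix form the abstract describes; it is not the authors' DecoDisCo code, and the PSF, signal, and regularization weight are arbitrary stand-ins chosen for illustration.

```python
import numpy as np

def psf_matrix(psf, n):
    """Build an n x n convolution matrix from a short 1-D PSF kernel."""
    A = np.zeros((n, n))
    half = len(psf) // 2
    for i in range(n):
        for k, w in enumerate(psf):
            j = i + k - half
            if 0 <= j < n:
                A[i, j] = w
    return A

def tikhonov_deconvolve(y, A, lam=1e-3):
    """Solve min ||A x - y||^2 + lam ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# Blur a simple box signal with a symmetric PSF, then deconvolve it.
x_true = np.zeros(64)
x_true[20:28] = 1.0
A = psf_matrix(np.array([0.25, 0.5, 0.25]), 64)
y = A @ x_true                       # "distorted" measurement
x_rec = tikhonov_deconvolve(y, A)    # regularized reconstruction
```

The regularization weight `lam` trades off sharpness against noise amplification, which is the same trade governing the deblurring behavior mentioned in the abstract.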

    Mitigating susceptibility-induced distortions in high-resolution 3DEPI fMRI at 7T

    Geometric distortion is a major limiting factor for spatial specificity in high-resolution fMRI using EPI readouts and is exacerbated at higher field strengths due to increased B0 field inhomogeneity. Prominent correction schemes are based on B0 field-mapping or on acquiring reverse phase-encoded (reversed-PE) data. However, to date, comparisons of these techniques in the context of fMRI have only been performed on 2DEPI data, either at lower field or lower resolution. In this study, we investigate distortion compensation in the context of sub-millimetre 3DEPI data at 7T. B0 field-mapping and reversed-PE distortion correction techniques were applied to both partial-coverage BOLD-weighted and whole-brain MT-weighted 3DEPI data with matched distortion. Qualitative assessment showed overall improvement in cortical alignment for both correction techniques in both 3DEPI fMRI and whole-brain MT-3DEPI datasets. The distortion-corrected MT-3DEPI images were quantitatively evaluated by comparing cortical alignment with an anatomical reference using Dice coefficient (DC) and correlation ratio (CR) measures. These showed that B0 field-mapping and reversed-PE methods both improved correspondence between the MT-3DEPI and anatomical data, with more substantial improvements consistently obtained using the reversed-PE approach. Regional analyses demonstrated that the largest benefit of distortion correction, and in particular of the reversed-PE approach, occurred in frontal and temporal regions where susceptibility-induced distortions are known to be greatest but had not led to complete signal dropout. In conclusion, distortion correction based on reversed-PE data has shown the greater capacity for achieving faithful alignment with anatomical data in the context of high-resolution fMRI at 7T using 3DEPI.
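The Dice coefficient used above for the quantitative evaluation is straightforward to compute. The following is a hypothetical illustration with toy masks (the function name and the shifted "ribbon" masks are ours, not the study's); a value of 1 means perfect overlap.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """DC = 2 |A intersect B| / (|A| + |B|) for boolean arrays."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two toy masks, one shifted by two rows to mimic residual distortion.
ref = np.zeros((32, 32), dtype=bool); ref[10:20, 10:20] = True
mov = np.zeros((32, 32), dtype=bool); mov[12:22, 10:20] = True
dc = dice_coefficient(ref, mov)  # 8 of 10 rows overlap -> 0.8
```

A correction scheme that brings the EPI data into better register with the anatomical reference raises this overlap score toward 1.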

    Development Of A High Performance Mosaicing And Super-Resolution Algorithm

    In this dissertation, a high-performance mosaicing and super-resolution algorithm is described. The scale-invariant feature transform (SIFT)-based mosaicing algorithm builds an initial mosaic, which is iteratively updated by the robust super-resolution algorithm to achieve the final high-resolution mosaic. Two different types of datasets are used for testing: high-altitude balloon data and unmanned aerial vehicle data. To evaluate the algorithm, five performance metrics are employed: mean square error, peak signal-to-noise ratio, singular value decomposition, slope of the reciprocal singular value curve, and cumulative probability of blur detection. Extensive testing shows that the proposed algorithm is effective in improving the captured aerial data and that the performance metrics accurately quantify that improvement.
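Two of the five metrics named above, mean square error and peak signal-to-noise ratio, can be sketched directly; the function names and test images below are illustrative stand-ins, not the dissertation's code.

```python
import numpy as np

def mse(ref, test):
    """Mean square error between two images; 0 means identical."""
    ref = np.asarray(ref, dtype=float)
    test = np.asarray(test, dtype=float)
    return np.mean((ref - test) ** 2)

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to ref."""
    e = mse(ref, test)
    return float('inf') if e == 0 else 10.0 * np.log10(peak ** 2 / e)

rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(64, 64)).astype(float)
slightly_noisy = clean + rng.normal(0.0, 5.0, clean.shape)
very_degraded = clean + 25.0  # constant bias, larger error
```

On a super-resolved mosaic, a higher PSNR against a reference frame indicates that the reconstruction recovered more of the original detail.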

    Neural Representations of Visual Motion Processing in the Human Brain Using Laminar Imaging at 9.4 Tesla

    During natural behavior, much of the motion signal falling on our eyes is due to our own movements. Therefore, to perceive motion in our environment correctly, it is important to parse visual motion signals into those caused by self-motion, such as eye or head movements, and those caused by external motion. The neural mechanisms underlying this task, which are also required for a stable perception of the world during pursuit eye movements, are not fully understood. Both perceptual stability and the perception of real-world (i.e. objective) motion are the product of integration between motion signals on the retina and efference copies of eye movements. The central aim of this thesis is to examine whether different levels of cortical depth or distinct columnar structures of visual motion regions are differentially involved in disentangling signals related to self-motion, objective motion, or object motion. Based on previous studies reporting segregated populations of voxels in high-level visual areas such as V3A, V6, and MST responding predominantly to either retinal or extra-retinal (‘real’) motion, we speculated that such voxels reside within laminar or columnar functional units. We used ultra-high-field (9.4T) fMRI along with an experimental paradigm that independently manipulated retinal and extra-retinal motion signals (smooth pursuit), while controlling for effects of eye movements, to investigate whether processing of real-world motion in human V5/MT, putative MST (pMST), and V1 is associated with differential laminar signal intensities. We also examined motion integration across cortical depths in human motion areas V3A and V6, which have strong objective-motion responses.
We found a unique, condition-specific laminar profile in human area V6, showing reduced mid-layer responses to retinal motion only, suggestive of an inhibitory retinal contribution to motion integration in mid layers or, alternatively, an excitatory contribution in deep and superficial layers. We also found evidence that in V5/MT and pMST, processing related to retinal, objective, and pursuit motion is either integrated or colocalized at the scale of our resolution. In contrast, in V1, independent functional processes seem to drive the response to retinal and objective motion on the one hand, and to pursuit signals on the other. The lack of differential signals across depth in these regions suggests either that a columnar rather than laminar segregation governs these functions, or that the methods used were unable to detect differential neural laminar processing. Furthermore, the thesis provides a thorough analysis of the technical modalities relevant to data acquisition and analysis at ultra-high field in the context of laminar fMRI. Relying on our technical implementations, we conducted two high-resolution fMRI experiments that allowed us to further investigate the laminar organization of self-induced and externally induced motion cues in human high-level visual areas and to form speculations about the site and mechanisms of their integration.

    Computational Imaging Approach to Recovery of Target Coordinates Using Orbital Sensor Data

    This dissertation addresses the components necessary for simulation of an image-based recovery of the position of a target using orbital image sensors. Each component is considered in detail, focusing on the effect that design choices and system parameters have on the accuracy of the position estimate. Changes in sensor resolution, varying amounts of blur, differences in image noise level, selection of algorithms used for each component, and lag introduced by excessive processing time all contribute to the accuracy of the recovered target coordinates. Using physical targets and sensors in this scenario would be cost-prohibitive in the exploratory setting posed; therefore, a simulated target path is generated using Bezier curves, which approximate representative paths followed by the targets of interest. Orbital trajectories for the sensors are designed on an elliptical model representative of the motion of physical orbital sensors. Images from each sensor are simulated based on the position and orientation of the sensor, the position of the target, and the imaging parameters selected for the experiment (resolution, noise level, blur level, etc.). Post-processing of the simulated imagery seeks to reduce noise and blur and increase resolution. The only information available to a fully implemented system for calculating the target position is the sensor position and orientation vectors and the images from each sensor. From these data, we develop a reliable method of recovering the target position and analyze the impact on near-real-time processing. We also discuss the influence of adjustments to system components on overall capabilities and address the potential system size, weight, and power requirements of realistic implementation approaches.
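The Bezier-curve path generation mentioned above is a small, self-contained computation. The following is a minimal sketch that samples a cubic Bezier curve from four control points to produce a smooth 2-D target path; the control-point values are arbitrary illustration choices, not parameters from the dissertation.

```python
import numpy as np

def bezier_path(p0, p1, p2, p3, n=100):
    """Evaluate a cubic Bezier curve at n parameter values in [0, 1]."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# A smooth arc from (0, 0) to (4, 0) pulled upward by two control points.
path = bezier_path(np.array([0.0, 0.0]), np.array([1.0, 2.0]),
                   np.array([3.0, 2.0]), np.array([4.0, 0.0]))
```

The curve interpolates its first and last control points, so a simulated target can be made to pass through chosen start and end coordinates while the interior control points shape the trajectory.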

    Enhanced phase congruency feature-based image registration for multimodal remote sensing imagery

    Multimodal image registration is an essential image processing task in remote sensing. Basically, multimodal image registration searches for the optimal alignment between images captured by different sensors for the same scene, to provide better visualization and more informative images. Manual image registration is tedious and requires considerable effort, so developing automated image registration is crucial for a faster and more reliable solution. However, image registration faces many challenges from the nature of remote sensing imagery, the environment, and the technical shortcomings of current methods, which cause three issues: intensive processing power, local intensity variation, and rotational distortion. Since not all image details are significant, relying on salient features is more efficient in terms of processing power. Thus, a feature-based registration method was adopted to avoid intensive processing. The proposed method resolves the rotational distortion issue using Oriented FAST and Rotated BRIEF (ORB) to produce rotation-invariant features. However, since ORB is not intensity-invariant, it cannot support multimodal data on its own. To overcome the intensity variation issue, Phase Congruency (PC) was integrated with ORB to introduce ORB-PC feature extraction, generating features invariant to rotational distortion and local intensity variation. However, the solution is not complete, since the ORB-PC matching rate falls below expectation. Enhanced ORB-PC was proposed to solve the matching issue by modifying the feature descriptor. While better feature matches were achieved, the high number of outliers from multimodal data makes common outlier removal methods unsuccessful. Therefore, Normalized Barycentric Coordinate System (NBCS) outlier removal was utilized to find precise matches even with a high number of outliers. Experiments were conducted to verify the registration qualitatively and quantitatively.
    The qualitative experiment shows that the proposed method has a broader and better feature distribution, while the quantitative evaluation indicates an 18% improvement in registration accuracy compared to related works.
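The descriptor-matching step that precedes outlier removal in pipelines like the one above is commonly a nearest-neighbour search with Lowe's ratio test. The following is a toy sketch of that generic step using random stand-in descriptors; it is not the ORB-PC or NBCS implementation, and the function name and threshold are our own illustrative choices.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Return (i, j) pairs passing the nearest/second-nearest ratio test."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # Accept only matches clearly closer than the runner-up.
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

rng = np.random.default_rng(1)
desc_b = rng.normal(size=(20, 32))                     # reference descriptors
desc_a = desc_b[:5] + rng.normal(scale=0.01, size=(5, 32))  # near-duplicates
matches = match_descriptors(desc_a, desc_b)
```

Even with a strict ratio test, multimodal data can leave many false matches, which is why a dedicated outlier-removal stage such as NBCS is still needed afterwards.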

    Image Mosaicing and Super-resolution


    Deep learning for inverse problems in remote sensing: super-resolution and SAR despeckling

    The abstract is provided in the attachment.