
    Minimum Variance Approaches to Ultrasound Pixel-Based Beamforming.

    We analyze the principles underlying minimum variance distortionless response (MVDR) beamforming in order to integrate it into a pixel-based algorithm. A challenge is posed by the low echo signal-to-noise ratio (eSNR) when calculating beamformer contributions at pixels far away from the beam centreline. Together with the well-known scarcity of samples for covariance matrix estimation, this reduces beamformer performance and degrades image quality. To address this challenge, we implement the MVDR algorithm in two different ways. First, we develop the conventional minimum variance pixel-based (MVPB) beamformer, which performs the MVDR after the pixel-based superposition step. This combines methods from the literature, extended over multiple transmits to increase the eSNR. We then propose the coherent MVPB beamformer, where the MVDR is applied to data within individual transmits. Based on pressure field analysis, we develop new algorithms to improve the data alignment and matrix estimation, and hence overcome the low-eSNR issue. The methods are demonstrated on data acquired with an ultrasound open platform. The results show that the coherent MVPB beamformer substantially outperforms the conventional MVPB in a series of experiments, including phantom and in vivo studies. Compared to the unified pixel-based beamformer, the newest delay-and-sum algorithm in [1], the coherent MVPB performs well in regions that conform to the diffuse scattering assumptions on which the minimum variance principles are based. It produces poorer results for parts of the image that are dominated by specular reflections.
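
    As a point of reference for the MVDR principle discussed above, the sketch below shows a per-pixel minimum variance estimate on delay-aligned channel data, using the standard subaperture (spatial smoothing) covariance estimate with diagonal loading. It is a minimal illustration of the general technique, not the authors' MVPB implementation; the subaperture length and loading factor are assumed values.

        # Minimal sketch of a per-pixel MVDR (Capon) estimate on delay-aligned
        # channel data, using subaperture (spatial smoothing) covariance
        # estimation with diagonal loading. Illustrative only; L and the
        # loading factor are assumed values, not the papers' settings.
        import numpy as np

        def mvdr_pixel(aligned, L=16, loading=1e-2):
            """aligned: (N,) complex delay-aligned samples, one per channel."""
            N = aligned.shape[0]
            K = N - L + 1                          # overlapping subapertures
            R = np.zeros((L, L), dtype=complex)
            for k in range(K):                     # spatial smoothing
                x = aligned[k:k + L]
                R += np.outer(x, x.conj())
            R /= K
            R += loading * np.trace(R).real / L * np.eye(L)   # diagonal loading
            a = np.ones(L, dtype=complex)          # steering vector after alignment
            Ri_a = np.linalg.solve(R, a)
            w = Ri_a / (a.conj() @ Ri_a)           # w = R^-1 a / (a^H R^-1 a)
            # Apply the weights to each subaperture and average the outputs.
            return np.mean([w.conj() @ aligned[k:k + L] for k in range(K)])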

    Real-time GPU-based software beamformer designed for advanced imaging methods research

    High computational demand is known to be a technical hurdle for real-time implementation of advanced methods like synthetic aperture imaging (SAI) and plane wave imaging (PWI) that work with the pre-beamform data of each array element. In this paper, we present the development of a software beamformer for SAI and PWI with real-time parallel processing capacity. Our beamformer design comprises a pipelined group of graphics processing units (GPUs) that are hosted within the same computer workstation. During operation, each available GPU is assigned to perform demodulation and beamforming for one frame of pre-beamform data acquired from one transmit firing (e.g. point firing for SAI). To facilitate parallel computation, the GPUs have been programmed to treat the calculation of depth pixels from the same image scanline as a block of processing threads that can be executed concurrently, and to repeat this process for all scanlines to obtain the entire frame of image data, i.e. a low-resolution image (LRI). To reduce processing latency due to repeated access of each GPU's global memory, we have made use of each thread block's fast shared memory (to store an entire line of pre-beamform data during demodulation), created texture memory pointers, and utilized global memory caches (to stream repeatedly used data samples during beamforming). Based on this beamformer architecture, a prototype platform has been implemented for SAI and PWI, and its LRI processing throughput has been measured for test datasets with a 40 MHz sampling rate, 32 receive channels, and imaging depths between 5-15 cm. When using two Fermi-class GPUs (GTX-470), our beamformer can compute LRIs of 512-by-255 pixels at over 3200 fps and 1300 fps for imaging depths of 5 cm and 15 cm, respectively. This processing throughput is roughly 3.2 times that of a Tesla-class GPU (GTX-275). © 2010 IEEE. The 2010 IEEE International Ultrasonics Symposium, San Diego, CA, 11-14 October 2010. In Proceedings of IEEE IUS, 2010, p. 1920-192.
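
    The sketch below illustrates the per-transmit low-resolution image formation that each GPU is assigned: every pixel is a delay-and-sum over the receive channels, with the depth pixels of one scanline mapping to one block of threads on the GPU. The NumPy version expresses the same arithmetic serially and assumes a plane-wave transmit geometry for simplicity; the element layout, sound speed, and sampling parameters are illustrative.

        # Sketch of per-transmit low-resolution image (LRI) formation: each
        # pixel is a delay-and-sum over the receive channels. On the GPU, the
        # depth pixels of one scanline map to one thread block; this NumPy
        # version expresses the same arithmetic serially. Plane-wave transmit
        # geometry and all parameters are illustrative assumptions.
        import numpy as np

        def das_lri(rf, elem_x, z_axis, x_axis, c=1540.0, fs=40e6, t0=0.0):
            """rf: (n_chan, n_samp) pre-beamform data for one transmit firing;
            elem_x: (n_chan,) element lateral positions [m];
            z_axis, x_axis: pixel depths and lateral positions [m]."""
            n_chan, n_samp = rf.shape
            lri = np.zeros((len(z_axis), len(x_axis)))
            for ix, x in enumerate(x_axis):        # one scanline per thread block
                for iz, z in enumerate(z_axis):    # depth pixels run concurrently
                    t_tx = z / c                   # plane-wave transmit delay
                    t_rx = np.sqrt(z**2 + (x - elem_x)**2) / c   # receive delays
                    idx = np.round((t_tx + t_rx - t0) * fs).astype(int)
                    valid = (idx >= 0) & (idx < n_samp)
                    lri[iz, ix] = rf[np.arange(n_chan)[valid], idx[valid]].sum()
            return lri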

    GPU-based beamformer: Fast realization of plane wave compounding and synthetic aperture imaging

    Although they show potential to improve ultrasound image quality, plane wave (PW) compounding and synthetic aperture (SA) imaging are computationally demanding and are known to be challenging to implement in real-time. In this work, we have developed a novel beamformer architecture with the real-time parallel processing capacity needed to enable fast realization of PW compounding and SA imaging. The beamformer hardware comprises an array of graphics processing units (GPUs) that are hosted within the same computer workstation. Their parallel computational resources are controlled by a pixel-based software processor that includes the operations of analytic signal conversion, delay-and-sum beamforming, and recursive compounding as required to generate images from the channel-domain data samples acquired using PW compounding and SA imaging principles. When using two GTX-480 GPUs for beamforming and one GTX-470 GPU for recursive compounding, the beamformer can compute compounded 512 × 255 pixel PW and SA images at throughputs of over 4700 fps and 3000 fps, respectively, for imaging depths of 5 cm and 15 cm (32 receive channels, 40 MHz sampling rate). Its processing capacity can be further increased if additional GPUs or more advanced models of GPU are used. © 2011 IEEE.
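
    The recursive compounding step mentioned above can be summarized as a running sum over the most recent low-resolution images: the compounded image is updated by adding the newest LRI and subtracting the oldest, rather than re-summing the whole buffer for every display frame. The sketch below shows this idea under an assumed buffer length; it is not the authors' GPU implementation.

        # Sketch of recursive compounding: keep a running sum of the most
        # recent LRIs, adding the newest and subtracting the oldest instead of
        # re-summing the whole buffer every frame. Buffer length is an assumed
        # value; this is not the authors' GPU implementation.
        import numpy as np
        from collections import deque

        class RecursiveCompounder:
            def __init__(self, shape, n_compound=16):
                self.buffer = deque(maxlen=n_compound)
                self.hri = np.zeros(shape, dtype=complex)  # running compound

            def push(self, lri):
                if len(self.buffer) == self.buffer.maxlen:
                    self.hri -= self.buffer[0]     # drop the oldest LRI
                self.buffer.append(lri)            # deque evicts it automatically
                self.hri += lri                    # add the newest LRI
                return np.abs(self.hri)            # envelope for display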

    FPGA-Based Portable Ultrasound Scanning System with Automatic Kidney Detection

    Although bedside diagnosis using portable ultrasound scanning (PUS) offers comfortable examination with various clinical advantages, ultrasound scanners in general suffer from a poor signal-to-noise ratio, and physicians who operate the device at the point of care may not be adequately trained to perform high-level diagnosis. Such scenarios can be avoided by incorporating ambient intelligence in PUS. In this paper, we propose an architecture for a PUS system whose abilities include automated kidney detection in real time. Automated kidney detection is performed by training the Viola–Jones algorithm with a kidney data set covering diversified shapes and sizes. The kidney detection algorithm is observed to deliver very good detection accuracy. The proposed PUS with the kidney detection algorithm is implemented on a single Xilinx Kintex-7 FPGA, integrated with a Raspberry Pi ARM processor running at 900 MHz.
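
    For illustration, a trained Viola–Jones cascade of the kind described above can be applied to a B-mode frame with OpenCV as sketched below. The cascade file name and the detection parameters are hypothetical stand-ins; the authors' classifier and its training data are not specified here.

        # Sketch of applying a trained Viola-Jones cascade to a B-mode frame
        # with OpenCV. The cascade file name and detection parameters are
        # hypothetical stand-ins for the authors' trained kidney classifier.
        import cv2

        def detect_kidney(frame_gray, cascade_path="kidney_cascade.xml"):
            """frame_gray: 8-bit grayscale B-mode image (NumPy array)."""
            cascade = cv2.CascadeClassifier(cascade_path)
            # Returns (x, y, w, h) bounding boxes for detected kidney regions.
            return cascade.detectMultiScale(frame_gray, scaleFactor=1.1,
                                            minNeighbors=5, minSize=(60, 60))

        # Usage: boxes = detect_kidney(cv2.imread("bmode.png", cv2.IMREAD_GRAYSCALE))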

    A Spatial Coherence Approach to Minimum Variance Beamforming for Plane-Wave Compounding.

    A new approach to implementing minimum variance distortionless response (MVDR) beamforming is introduced for coherent plane-wave compounding (CPWC). MVDR requires the covariance matrix of the incoming signal to be estimated, and a spatial smoothing approximation is usually adopted to prevent this calculation from being underconstrained. In the new approach, we analyze MVDR as a spatial filter that decorrelates signals received at individual channels before summation. Based on this analysis, we develop two MVDR beamformers that do not use any spatial smoothing. In the first, MVDR weights are applied to the received signals after accumulating the data over transmits at different angles, while the second involves weighting the data collected in individual transmits and compounding over the transducer elements. In both cases, the covariance matrix is estimated using a set of slightly different combinations of the echo data. We show that the sufficient statistic for this estimation can be described by approximating the correlation among the backscattered ultrasound signals with their spatial coherence. Using the van Cittert-Zernike theorem, their statistical similarity is assessed by relating the spatial coherence to the profile of the source intensity. Both spatial-coherence-based MVDR beamformers are evaluated on data sets acquired from simulation, phantom, and in vivo studies. Imaging results show that they offer improvements over simple coherent compounding in terms of spatial and contrast resolution. They also outperform other existing MVDR-based methods in the literature that have been applied to CPWC.
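
    For background on the van Cittert-Zernike relation invoked above: under diffuse scattering, the spatial coherence of the backscattered signal across the receive aperture follows the Fourier transform of the source intensity profile, which for a uniform rectangular transmit aperture at its focus is a triangle function of element lag. The sketch below computes this textbook prediction; it is not the authors' covariance estimator, and the aperture size is an assumed value.

        # Textbook van Cittert-Zernike prediction: for diffuse scattering, the
        # spatial coherence across the receive aperture follows the Fourier
        # transform of the source intensity profile, i.e. a triangle function
        # of element lag for a uniform rectangular aperture at its focus.
        # Not the authors' estimator; the aperture size is an assumed value.
        import numpy as np

        def predicted_coherence(n_elements=64):
            """Predicted coherence versus element lag for a uniform aperture."""
            lags = np.arange(n_elements)
            return 1.0 - lags / n_elements

        # Usage: compare predicted_coherence(64) with the lag-domain correlation
        # measured from delay-aligned channel data to assess their similarity.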

    Ultrasound pixel-based beamforming with phase alignments of focused beams

    We previously developed unified pixel-based beamforming to generate high-resolution sonograms, based on field pattern analysis. In this framework, we found that the transmit wave-shape away from the focus could be characterized by two spherical pulses. These correspond to the maximal and minimal distances from the imaging point to the active aperture. The beamformer uses this model to select the highest-energy signals from the backscattered data. A spatiotemporal interpolation formula provides a smooth transition in regions near the focal depth where there is no dominant reflected pulse. In this paper, we show that the unified pixel-based approach is less robust at lower center frequencies: the interpolated data are suboptimal for a longer transmit wave-shape, and as a result the spatial resolution at the focal depth is lower than that in other regions. By further exploring the field pattern, we propose a beamformer that is more robust to variations in beam-width. The new method, named coherent pixel-based beamforming, aligns and compounds the pulse data directly in the transition regions. In simulation and phantom studies, the coherent pixel-based approach is shown to outperform the unified pixel-based approach in spatial resolution. It helps regain optimal resolution at the focal depth while still maintaining good image quality in other regions. We also demonstrate the new method on in vivo data, where its improvements over unified pixel-based beamforming are shown on scanned objects with more complicated structure.
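
    The two-spherical-pulse model referenced above can be illustrated by computing, for a focused transmit, the earliest and latest arrival times of the per-element pulses at an imaging point; these bound the transmit wave-shape away from the focus. The sketch below shows this geometric calculation under assumed aperture and focus parameters; how the pulses are then selected, aligned, and compounded follows the papers, not this code.

        # Sketch of the two-spherical-pulse geometry: for a focused transmit,
        # compute the earliest and latest arrival times of the per-element
        # pulses at a pixel; these bound the transmit wave-shape away from the
        # focus. Aperture and focus parameters are illustrative assumptions.
        import numpy as np

        def two_pulse_times(px, pz, elem_x, focus_x, focus_z, c=1540.0):
            """elem_x: lateral positions of the active-aperture elements (z = 0);
            returns (t_first, t_last) arrival times at pixel (px, pz) in seconds."""
            d_to_focus = np.hypot(elem_x - focus_x, focus_z)   # element-to-focus
            fire_delay = (d_to_focus.max() - d_to_focus) / c   # focusing delays
            d_to_pixel = np.hypot(elem_x - px, pz)             # element-to-pixel
            arrivals = fire_delay + d_to_pixel / c             # per-element arrivals
            return arrivals.min(), arrivals.max()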