
    Analysis and Optimization of Synthetic Aperture Ultrasound Imaging Using the Effective Aperture Approach

    An effective aperture approach is used as a tool for the analysis and parameter optimization of the most widely used ultrasound imaging systems: phased array systems, compounding systems, and synthetic aperture imaging systems. Two characteristics of an imaging system, the effective aperture function and the corresponding two-way radiation pattern, provide information about two of the most important parameters of the images an ultrasound system produces: lateral resolution and contrast. During design, optimization of the effective aperture function therefore leads to an optimal choice of the system parameters that influence the lateral resolution and contrast of the resulting images. It is shown that the effective aperture approach can be used to optimize a sparse synthetic transmit aperture (STA) imaging system. A new two-stage algorithm is proposed that optimizes both the positions of the transmit elements and the weights of the receive elements. The proposed system employs a 64-element array with only four active elements used during transmit. The numerical results show that Hamming apodization gives the best compromise between image contrast and lateral resolution.
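    The central computation behind the effective aperture approach can be sketched briefly: the effective aperture of a linear array is the convolution of the transmit and receive aperture functions, and the two-way radiation pattern is its Fourier transform. The element positions and sizes below are illustrative assumptions, not the paper's optimized layout.

```python
import numpy as np

def effective_aperture(tx_weights, rx_weights):
    """Effective aperture of a linear array: the convolution of the
    transmit and receive aperture functions."""
    return np.convolve(tx_weights, rx_weights)

def two_way_pattern_db(eff_ap, n_fft=1024):
    """Two-way radiation pattern: normalized magnitude of the Fourier
    transform of the effective aperture, in dB."""
    spec = np.abs(np.fft.fftshift(np.fft.fft(eff_ap, n_fft)))
    return 20 * np.log10(spec / spec.max() + 1e-12)

# Illustrative sparse STA layout: 64 receive elements, 4 transmitters.
tx = np.zeros(64)
tx[[0, 21, 42, 63]] = 1.0
patterns = {
    "rect": two_way_pattern_db(effective_aperture(tx, np.ones(64))),
    "hamming": two_way_pattern_db(effective_aperture(tx, np.hamming(64))),
}
```

    Comparing the two patterns shows the usual trade the abstract mentions: Hamming receive apodization lowers the sidelobes (better contrast) while widening the mainlobe (worse lateral resolution).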

    Overlapped Fourier coding for optical aberration removal

    We present an imaging procedure that simultaneously optimizes a camera’s resolution and retrieves a sample’s phase over a sequence of snapshots. The technique, termed overlapped Fourier coding (OFC), first digitally pans a small aperture across a camera’s pupil plane with a spatial light modulator. At each aperture location, a unique image is acquired. The OFC algorithm then fuses these low-resolution images into a full-resolution estimate of the complex optical field incident upon the detector. Simultaneously, the algorithm utilizes redundancies within the acquired dataset to computationally estimate and remove unknown optical aberrations and system misalignments via simulated annealing. The result is an imaging system that can computationally overcome its optical imperfections to offer enhanced resolution, at the expense of taking multiple snapshots over time.
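    The panning-and-acquisition step of OFC can be sketched as a simple forward model: a small circular aperture is shifted across the pupil (Fourier) plane, and only intensity is recorded at each position. The grid, radius, and field below are illustrative assumptions; the actual SLM geometry and the fusion/annealing recovery are not shown.

```python
import numpy as np

def pan_aperture_stack(field, centers, radius):
    """Record one low-resolution intensity image per aperture position:
    keep only the spectrum inside a circular pupil-plane mask, then
    propagate back to the detector, which sees intensity only."""
    n = field.shape[0]
    fy, fx = np.meshgrid(np.arange(n) - n // 2, np.arange(n) - n // 2,
                         indexing="ij")
    spectrum = np.fft.fftshift(np.fft.fft2(field))
    stack = []
    for cy, cx in centers:
        mask = (fy - cy) ** 2 + (fx - cx) ** 2 <= radius ** 2
        low_res = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))
        stack.append(np.abs(low_res) ** 2)   # phase is lost at the detector
    return np.array(stack)

# Overlapping 3x3 grid of aperture centers, as the panning step requires.
rng = np.random.default_rng(0)
field = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
centers = [(y, x) for y in (-8, 0, 8) for x in (-8, 0, 8)]
stack = pan_aperture_stack(field, centers, radius=12)
```

    The overlap between neighboring apertures is what creates the dataset redundancy that the recovery algorithm later exploits to estimate aberrations.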

    In situ correction of liquid meniscus in cell culture imaging system based on parallel Fourier ptychographic microscopy (96 Eyes)

    We collaborated with Amgen and spent five years designing and fabricating next-generation multi-well plate imagers based on Fourier ptychographic microscopy (FPM). A 6-well imager (Emsight) and a low-cost parallel microscopic system (96 Eyes) based on parallel FPM were reported in our previous work. However, the effect of the liquid meniscus on image quality is much stronger than anticipated, introducing obvious wavevector misalignment and additional image aberration. To this end, an adaptive wavevector correction (AWC-FPM) algorithm and a pupil recovery improvement strategy are presented to solve these challenges in situ. In addition, dual-channel fluorescence excitation is added to obtain structural information for microbiologists. Experiments are presented to verify their performance. The accuracy of the angular resolution with our algorithm is within 0.003 rad. These algorithms make the FPM approach more robust and practical and can be extended to other FPM-based applications facing similar challenges.

    Experimental 3-D Ultrasound Imaging with 2-D Sparse Arrays using Focused and Diverging Waves

    Three-dimensional ultrasound (3-D US) imaging methods based on 2-D array probes are increasingly investigated. However, experimental testing of new 3-D US approaches is hampered by the need to control very large numbers of probe elements. Although this problem may be overcome by the use of 2-D sparse arrays, only a few experimental results have so far corroborated the validity of this approach. In this paper, we experimentally compare the performance of a fully wired 1024-element (32 × 32) array, taken as the reference, to that of a 256-element random array and of an “optimized” 2-D sparse array, in both focused and compounded diverging wave (DW) transmission modes. The experimental results in 3-D focused mode show that the resolution and contrast produced by the optimized sparse array are close to those of the full array while using only 25% of the elements. Furthermore, the experimental results in 3-D DW mode and 3-D focused mode are compared for the first time; they show that both contrast and resolution are higher when using 3-D DW at volume rates up to 90 volumes per second, a 36× speed-up over the focused mode.

    Algorithm and Hardware Design for High Volume Rate 3-D Medical Ultrasound Imaging

    Ultrasound B-mode imaging is an increasingly significant medical imaging modality for clinical applications. Compared to other imaging modalities such as computed tomography (CT) or magnetic resonance imaging (MRI), ultrasound imaging has the advantage of being safe, inexpensive, and portable. While two-dimensional (2-D) ultrasound imaging is very popular, three-dimensional (3-D) ultrasound imaging provides distinct advantages over its 2-D counterpart by providing volumetric imaging, which leads to more accurate analysis of tumors and cysts. However, the amount of data received at the front end of a 3-D system is extremely large, making it impractical for power-constrained portable systems. In this thesis, algorithm and hardware design techniques to support a hand-held 3-D ultrasound imaging system are proposed. Synthetic aperture sequential beamforming (SASB) is chosen since its computations can be split into two stages, where the output of Stage 1 is significantly smaller than the input. This characteristic enables Stage 1 to run in the front end while Stage 2 can be sent out to be processed elsewhere. The contributions of this thesis are as follows. First, 2-D SASB is extended to 3-D. Techniques to increase the volume rate of 3-D SASB through a new multi-line firing scheme and the use of a linear chirp as the excitation waveform are presented. A new sparse array design is proposed that not only reduces the number of active transducers but also avoids the image degradation caused by grating lobes. A combination of these techniques increases the volume rate of 3-D SASB by 4× without introducing extra computations at the front end. Next, algorithmic techniques to further reduce the Stage 1 computations in the front end are presented. These include reducing the number of distinct apodization coefficients and operating on narrow-bit-width fixed-point data. A 3-D die-stacked architecture is designed for the front end. This highly parallel architecture enables the signals received by 961 active transducers to be digitized, routed by a network-on-chip, and processed in parallel. The processed data are accumulated through a bus-based structure. This architecture is synthesized using a TSMC 28 nm technology node, and the estimated power consumption of the front end is less than 2 W. Finally, the Stage 2 computations are mapped onto a reconfigurable multi-core architecture, TRANSFORMER, which supports different types of on-chip memory banks and run-time reconfigurable connections between general processing elements and memory banks. The matched filtering and beamforming steps in Stage 2 are mapped onto TRANSFORMER with different memory configurations. Gem5 simulations show that the private cache mode yields shorter execution time and higher computation efficiency than other cache modes. The overall execution time for Stage 2 is 14.73 ms. The average power consumption and the average giga-operations-per-second per watt in a 14 nm technology node are 0.14 W and 103.84, respectively. (Doctoral Dissertation, Engineering, 201)
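    The Stage 1 data reduction that makes SASB attractive for a front end can be sketched as a plain delay-and-sum over the receive channels; the delays, weights, and sizes here are illustrative stand-ins, not the thesis's fixed-point design.

```python
import numpy as np

def stage1_beamform(channel_data, delays, apod):
    """First-stage delay-and-sum: delay, weight, and sum the per-element
    channel data into a single line, so only this much smaller output
    has to leave the power-constrained front end."""
    n_ch, n_samp = channel_data.shape
    line = np.zeros(n_samp)
    for ch in range(n_ch):
        d = int(delays[ch])                 # integer-sample focusing delay
        line[:n_samp - d] += apod[ch] * channel_data[ch, d:]
    return line
```

    Per emission, the output of this step is smaller than its input by roughly a factor of the channel count, which is what makes it feasible to ship Stage 2 off the probe for processing elsewhere.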

    A probabilistic approach for the optimisation of ultrasonic array inspection techniques

    Ultrasonic arrays are now used routinely for the inspection of engineering structures in order to maintain their integrity and assess their performance. Such inspections are usually optimised manually using empirical measurements and parametric studies, which are laborious, time-consuming, and may not yield an optimal approach. In this paper, a general framework for the optimisation of ultrasonic array inspection techniques in NDE is presented. The defect detection rate is set as the main inspection objective and used to assess the performance of the optimisation framework. Statistical modelling of the inspection is used to formulate the optimisation problem and incorporate inspection uncertainty such as crack type and location, material properties, and geometry. A genetic algorithm is used to solve the global optimisation problem. As a demonstration, the optimisation framework is used with two objective functions based on array signal amplitude and signal-to-noise ratio (SNR). The optimal use of plane B-scan and total focusing method (TFM) imaging algorithms is also investigated. The performance of the optimisation scheme is explored in simulation and then validated experimentally. It has been found that, for the inspection scenarios considered, TFM provides better detectability in a statistical sense than plane B-scan imaging in scenarios where inspection uncertainty is expected.
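    A minimal real-coded genetic algorithm of the kind the framework relies on can be sketched as follows; the selection scheme, operators, and the toy stand-in objective (a smooth peak over two inspection parameters) are assumptions for illustration, not the paper's actual detection-rate objective.

```python
import random

def genetic_optimize(objective, bounds, pop_size=30, generations=50,
                     mutation_rate=0.1, seed=0):
    """Minimal real-coded GA: truncation selection keeps the top half,
    uniform crossover mixes two parents, Gaussian mutation perturbs
    genes within bounds. `objective` is maximized."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=objective, reverse=True)[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            child = [min(hi, max(lo, g + rng.gauss(0, 0.1 * (hi - lo))))
                     if rng.random() < mutation_rate else g
                     for g, (lo, hi) in zip(child, bounds)]
            children.append(child)
        pop = elite + children          # elitism: the best never regresses
    return max(pop, key=objective)

# Toy stand-in for an SNR-style objective over two inspection parameters
# (e.g. array standoff and steering angle), peaking at (0.3, 0.5).
snr_like = lambda p: -(p[0] - 0.3) ** 2 - (p[1] - 0.5) ** 2
best = genetic_optimize(snr_like, bounds=[(0.0, 1.0), (0.0, 1.0)])
```

    In the paper's setting the objective would be evaluated through the statistical inspection model rather than a closed-form function, which is exactly why a derivative-free global search like a GA is the natural fit.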

    Satellite SAR Interferometry for Earth’s Crust Deformation Monitoring and Geological Phenomena Analysis

    Synthetic aperture radar interferometry (InSAR) and the related processing techniques provide a unique tool for the quantitative measurement of the Earth’s surface deformation associated with certain geophysical processes (such as volcanic eruptions, landslides and earthquakes), thus making possible long-term monitoring of surface deformation and analysis of relevant geodynamic phenomena. This chapter provides an application-oriented perspective on spaceborne InSAR technology, with emphasis on subsequent geophysical investigations. First, the fundamentals of radar interferometry and differential interferometry, as well as error sources, are briefly introduced. Emphasis is then placed on the realistic simulation of the underlying geophysical processes, offering an unfolded perspective on both analytical and numerical approaches for modeling deformation sources. Finally, various experimental investigations conducted by acquiring SAR multitemporal observations over areas subject to deformation processes of particular geological interest are presented and discussed.
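    The core of the differential interferometry introduced in the chapter's first part reduces to a few lines: the wrapped phase of the complex interferogram between two co-registered single-look-complex (SLC) images scales to line-of-sight displacement by lambda/(4*pi). The wavelength and arrays below are illustrative, and the sign convention varies between sensors and processors.

```python
import numpy as np

def los_displacement(slc1, slc2, wavelength):
    """Map the wrapped interferometric phase between two co-registered
    SLC images to line-of-sight displacement. Two-way travel gives the
    lambda/(4*pi) scaling; sign convention is processor-dependent."""
    phase = np.angle(slc1 * np.conj(slc2))   # wrapped to (-pi, pi]
    return wavelength * phase / (4.0 * np.pi)

# Toy check with a C-band-like wavelength (5.6 cm): a 1 rad phase
# offset corresponds to about 4.5 mm of line-of-sight motion.
slc1 = np.exp(1j * 1.0) * np.ones((2, 2))
slc2 = np.ones((2, 2), dtype=complex)
d = los_displacement(slc1, slc2, wavelength=0.056)
```

    Because the phase is wrapped, displacements larger than a quarter wavelength per pixel pair require phase unwrapping, one of the error sources the chapter discusses.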

    Joint Optimization of Vertical Component Gravity and Seismic P-wave First Arrivals by Simulated Annealing

    Simultaneous joint seismic-gravity optimization improves P-wave velocity models in areas with sharp lateral velocity contrasts. Optimization is achieved using simulated annealing, a metaheuristic global optimization algorithm that does not require an accurate initial model. Balancing the seismic-gravity objective function is accomplished by a novel approach based on the analysis of Pareto charts. Gravity modeling uses a newly developed convolution model, while seismic modeling utilizes the highly efficient Vidale eikonal equation traveltime generation technique. Synthetic tests show that joint optimization improves velocity model accuracy and provides velocity control below the deepest headwave raypath. Restricted offset-range migration analysis provides insights into both pre-critical and gradient reflections in the dataset. Detailed first-arrival picking followed by trial velocity modeling remediates inconsistent data. We use a set of highly refined first-arrival picks to compare the results of a convergent joint seismic-gravity optimization with the Plotrefa and SeisOpt Pro velocity modeling software packages. Plotrefa uses a nonlinear least-squares approach that is dependent on the initial model and produces shallow velocity artifacts. SeisOpt Pro utilizes the simulated annealing algorithm, also produces shallow velocity artifacts, and is limited to depths above the deepest raypath. Joint optimization increases the depth of constrained velocities, improving reflector coherency at depth. Kirchhoff prestack depth migrations reveal that joint optimization ameliorates shallow velocity artifacts. Seismic and gravity data from the San Emidio geothermal field of the northwest Basin and Range province demonstrate that joint optimization changes interpretation outcomes: the prior shallow-valley interpretation gives way to a deep-valley model, while shallow reflectors that could have been interpreted as antiformal folds are flattened. Furthermore, joint optimization provides a clearer picture of the range-front fault. This technique can readily be applied to existing datasets and could replace the existing strategy of forward modeling to match gravity data.
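    The joint-inversion machinery described above can be sketched with a minimal simulated annealing loop over a weighted two-term misfit; the placeholder quadratic misfits, the weight, and all tuning constants are illustrative assumptions, not the study's seismic or gravity forward models.

```python
import math
import random

def simulated_annealing(misfit, x0, step, t0=1.0, cooling=0.995,
                        iters=2000, seed=0):
    """Minimal simulated annealing: uphill moves are accepted with
    probability exp(-delta/T), letting the search escape local minima
    without needing an accurate starting model."""
    rng = random.Random(seed)
    x, fx = list(x0), misfit(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(iters):
        cand = [g + rng.gauss(0.0, step) for g in x]
        fc = misfit(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling                    # geometric cooling schedule
    return best, fbest

def joint_misfit(m, w=0.5):
    """Weighted joint objective; w would come from the Pareto analysis."""
    seismic = (m[0] - 2.0) ** 2         # placeholder traveltime misfit
    gravity = (m[1] + 1.0) ** 2         # placeholder gravity misfit
    return w * seismic + (1.0 - w) * gravity

best, fbest = simulated_annealing(joint_misfit, [0.0, 0.0], step=0.3)
```

    The Pareto-chart balancing described in the abstract amounts to sweeping the weight w, plotting the two misfit terms against each other, and picking a weight near the knee of that trade-off curve.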