42 research outputs found

    SAR Image Formation via Subapertures and 2D Backprojection

    Radar imaging requires wide bandwidth and a long coherent processing interval, resulting in range and Doppler migration over the observation period. This migration must be compensated to image a scene of interest at full resolution, and many algorithms with differing strengths and weaknesses are available for the task. Here, a subaperture-based imaging algorithm is proposed, which first forms range-Doppler (RD) images from slow-time sub-intervals and then coherently integrates the resulting coarse-resolution RD maps to produce a full-resolution SAR image. A two-dimensional backprojection-style approach performs distortion-free integration of these RD maps. The technique retains many of the advantages of traditional backprojection; moreover, the architecture of the algorithm is chosen such that several steps are shared with typical target detection algorithms. Because these shared steps involve no compromise in data quality, the method allows high-quality imaging while preserving the data for implementation of detection algorithms. Additionally, the algorithm benefits from computational savings that make it an excellent candidate for implementation in a simultaneous SAR-GMTI architecture.
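
    The backprojection idea underlying this approach can be illustrated with a toy example: range-compressed pulses are coherently summed over an image grid, with a phase correction for the two-way range to each pixel. This is a minimal generic sketch with invented geometry and parameters, not the subaperture RD-map algorithm the abstract describes:

```python
import numpy as np

# toy time-domain backprojection (invented geometry and parameters, not the
# subaperture RD-map algorithm from the abstract)
c, fc, fs = 3e8, 1e9, 2e9                      # wave speed, carrier, range sampling
xs = np.linspace(-50.0, 50.0, 64)              # platform positions along track
alt = 500.0                                    # platform standoff distance
target_x = 5.0                                 # point target on the ground line

# simulate range-compressed pulses: an impulse at the target's range bin,
# carrying the two-way carrier phase
nbins = 8192
pulses = np.zeros((64, nbins), dtype=complex)
for i, xp in enumerate(xs):
    r = np.hypot(target_x - xp, alt)
    pulses[i, int(round(r / c * fs))] = np.exp(-4j * np.pi * fc * r / c)

# backproject onto a 1-D image grid: look up each pixel's range bin in every
# pulse and remove the two-way phase before summing coherently
grid = np.linspace(0.0, 10.0, 201)
img = np.zeros_like(grid, dtype=complex)
for i, xp in enumerate(xs):
    r = np.hypot(grid - xp, alt)
    idx = np.round(r / c * fs).astype(int)
    img += pulses[i, idx] * np.exp(4j * np.pi * fc * r / c)

peak = grid[int(np.argmax(np.abs(img)))]       # sharpest response at the target
```

    The magnitude image peaks at the simulated target position; the subaperture variant described in the abstract replaces the per-pulse inner loop with coarse-resolution RD maps integrated in the same backprojection style.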

    Time domain based image generation for synthetic aperture radar on field programmable gate arrays

    Aerial images are important in different scenarios, including surface cartography, surveillance, disaster control, and height map generation. Synthetic Aperture Radar (SAR) is one way to generate these images even through clouds and in the absence of daylight. For wide and easy use of this technology, SAR systems should be small, mounted on Unmanned Aerial Vehicles (UAVs), and able to process images in real time. Since UAVs are small and lightweight, more robust (but also more complex) time-domain algorithms are required for good image quality in the presence of heavy turbulence. Typically the SAR data set size does not allow for ground transmission and processing, while the UAV size does not allow for large systems with high power consumption to process the data on board. A small and energy-efficient signal processing system is therefore required. To fill the gap between existing systems that are capable of either high-speed processing or low power consumption, the focus of this thesis is the analysis, design, and implementation of such a system. A survey shows that most architectures have either too high a power budget or too little processing capability to meet real-time requirements for time-domain processing. Therefore, a Field Programmable Gate Array (FPGA) based system is designed, as it allows for high performance and low power consumption. The Global Backprojection (GBP) is implemented, as it is the standard time-domain algorithm and allows the highest image quality for arbitrary trajectories at a complexity of O(N³). To satisfy real-time requirements under all circumstances, the accelerated Fast Factorized Backprojection (FFBP) algorithm, with a complexity of O(N² log N), is implemented as well, allowing a trade-off between image quality and processing time. Additionally, algorithm and design are enhanced to correct the failing assumptions for Frequency Modulated Continuous Wave (FMCW) Radio Detection And Ranging (Radar) data at high velocities. Such sensors offer high-resolution data at considerably low transmit power, which is especially interesting for UAVs. A full analysis of all algorithms is carried out to design a highly utilized architecture for maximum throughput. The process covers the analysis of mathematical steps and approximations for hardware speedup, the analysis of code dependencies for instruction parallelism, and the analysis of streaming capabilities, including memory access and caching strategies, as well as parallelization considerations and pipeline analysis. Each architecture is described in detail with its surrounding control structure. As proof of concept, the architectures are mapped onto a Virtex 6 FPGA, and results on resource utilization, runtime, and image quality are presented and discussed. A special framework allows the design to be scaled and ported to other FPGAs easily and enables maximum resource utilization and speedup. The result is a set of streaming architectures capable of massive parallelization with a minimum of system stalls. It is shown that real-time time-domain processing on FPGAs with strict power budgets is possible with the GBP (mid-sized images) and the FFBP (any image size, with a trade-off in quality), enabling the UAV scenario.
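
    The complexity trade-off the thesis exploits can be made concrete with a back-of-envelope count: for an N×N image formed from N pulses, GBP costs O(N³) while FFBP costs O(N² log N), so the factorized algorithm's advantage grows roughly as N / log₂ N. This is illustrative arithmetic only, not a thesis measurement:

```python
import math

# back-of-envelope operation counts contrasting GBP's O(N^3) with FFBP's
# O(N^2 log N) for an N x N image formed from N pulses (illustrative only)
def gbp_ops(n):
    return n ** 3                  # every pulse contributes to every pixel

def ffbp_ops(n):
    return n ** 2 * math.log2(n)   # factorized merging of subapertures

speedups = {n: gbp_ops(n) / ffbp_ops(n) for n in (256, 1024, 4096)}
for n, s in speedups.items():
    print(f"N={n}: FFBP needs ~{s:.0f}x fewer operations")
```

    The growing gap is why the thesis can offer FFBP as the real-time fallback for large images while reserving GBP for mid-sized ones.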

    Machine Learning for Beamforming in Audio, Ultrasound, and Radar

    Multi-sensor signal processing plays a crucial role in several everyday technologies, from correctly understanding speech on smart home devices to ensuring aircraft fly safely. A specific type of multi-sensor signal processing called beamforming forms a central part of this thesis. Beamforming combines the information from several spatially distributed sensors to directionally filter information, boosting the signal from a certain direction while suppressing others. The idea of beamforming is key to the domains of audio, ultrasound, and radar. Machine learning is the other central part of this thesis. Machine learning, and especially its sub-field of deep learning, has enabled breakneck progress on several problems that were previously thought intractable. Today, machine learning powers many of the cutting-edge systems we see on the internet for image classification, speech recognition, language translation, and more. In this dissertation, we look at beamforming pipelines in audio, ultrasound, and radar through a machine learning lens and endeavor to improve different parts of the pipelines using ideas from machine learning. We start in the audio domain and derive a machine learning inspired beamformer to tackle the problem of ensuring the audio captured by a camera matches its visual content, a problem we term audiovisual zooming. Staying in the audio domain, we then demonstrate how deep learning can be used to improve the perceptual quality of speech by repairing clipping, codec distortions, and gaps in speech. Transitioning to the ultrasound domain, we improve the performance of short-lag spatial coherence ultrasound imaging by exploiting the differences in tissue texture at each short-lag value using robust principal component analysis. Next, we use deep learning as an alternative to beamforming in ultrasound and improve the information extraction pipeline by simultaneously generating both a segmentation map and a high-quality B-mode image directly from raw received ultrasound data. Finally, we move to the radar domain and study how deep learning can be used to improve signal quality in ultra-wideband synthetic aperture radar by suppressing radio frequency interference, random spectral gaps, and contiguous block spectral gaps. Because the networks are trained and applied on raw single-aperture data prior to beamforming, the approach can work with myriad sensor geometries and different beamforming equations, a crucial requirement in synthetic aperture radar.
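
    Beamforming in its simplest narrowband form, delay-and-sum on a uniform linear array, can be sketched in a few lines; the array size and element spacing below are arbitrary illustrative choices, not values from the dissertation:

```python
import numpy as np

# delay-and-sum beamforming sketch for a uniform linear array under a
# narrowband plane-wave model (array size and spacing are invented here)
N, D = 8, 0.5                                   # elements, spacing in wavelengths

def steering(theta):
    """Array response to a plane wave arriving from angle theta (radians)."""
    return np.exp(-2j * np.pi * D * np.arange(N) * np.sin(theta))

def das_power(x, theta):
    """Output power when the array is steered toward theta."""
    w = steering(theta) / N                     # uniform delay-and-sum weights
    return np.abs(np.vdot(w, x)) ** 2

# a noiseless plane wave from 20 degrees: the angular scan peaks there,
# boosting that direction while suppressing others
x = steering(np.deg2rad(20.0))
angles = np.deg2rad(np.linspace(-90.0, 90.0, 361))
response = np.array([das_power(x, a) for a in angles])
best_deg = np.degrees(angles[int(np.argmax(response))])
```

    The half-wavelength spacing avoids grating lobes, so the scan has a single mainlobe at the source direction; this is the classical baseline that the machine-learning beamformers in the dissertation build on or replace.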

    Improving the Image Quality of Synthetic Transmit Aperture Ultrasound Images - Achieving Real-Time In-Vivo Imaging


    A Study in GPS-Denied Navigation Using Synthetic Aperture Radar

    In modern navigation systems, GPS is vital to accurately piloting a vehicle. This is especially true for autonomous vehicles, such as UAVs, which have no pilot. Unfortunately, GPS signals can be easily jammed or spoofed. For example, canyons and urban cities create environments where the sky is obstructed and GPS signals are unreliable. Additionally, hostile actors can transmit signals intended to block or spoof GPS. In these situations, it is important to find a means of navigation that doesn’t rely on GPS. Navigating without GPS means that other sensors or instruments must be used to replace the information lost from GPS; examples include cameras, altimeters, magnetometers, and radar. The work presented in this thesis shows how radar can be used to navigate without GPS. Specifically, synthetic aperture radar (SAR) is used, a method of processing radar data to form images of a landscape similar to images captured with a camera. SAR presents its own unique set of benefits and challenges. One major benefit is that it can image an area even at night or through cloud cover. Additionally, SAR can image a wide swath of land at an angle that would be difficult for a camera to achieve. However, SAR is more computationally complex than other imaging sensors, and image quality depends heavily on the quality of available navigation information: in general, good navigation data is required to form well-focused SAR images. The research here explores the reverse problem, where SAR images are formed without good navigation data and good navigation data is then inferred from the images. This thesis performs feasibility studies and real-data implementations that show how SAR can be used for navigation in the absence of GPS. Derivations and background materials are provided, along with validation methods and additional discussion of the results of each portion of the research.

    Factorized Geometrical Autofocus for Synthetic Aperture Radar Processing

    This paper describes a factorized geometrical autofocus (FGA) algorithm specifically suitable for ultrawideband synthetic aperture radar. The strategy is integrated in a fast factorized back-projection chain and relies on varying track parameters step by step to obtain a sharp image; focus measures are provided by an object function (intensity correlation). The FGA algorithm has been successfully applied to synthetic and real (Coherent All RAdio BAnd System II) data sets, i.e., with false track parameters introduced prior to processing to set up constrained problems involving one geometrical quantity. Resolution (3 dB in azimuth and slant range) and peak-to-sidelobe ratio measurements in FGA images are comparable with reference results (within a few percent and tenths of a decibel), demonstrating the capacity to compensate for residual space-variant range cell migration. Finally, the FGA algorithm is also benchmarked (visually) against the phase gradient algorithm to emphasize the advantage of a geometrical autofocus approach.
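
    The parametric autofocus strategy, varying one parameter and scoring each candidate image with a focus objective, can be illustrated with a 1-D toy. Here a single quadratic phase error stands in for a false track parameter, and an intensity-squared sharpness metric plays the role of the object function; all values are invented for illustration and this is not the FGA chain itself:

```python
import numpy as np

# toy parametric autofocus: score candidate corrections with an image
# sharpness objective and keep the best (a quadratic phase error stands in
# for the unknown geometrical quantity; all values are invented)
n = 256
t = np.linspace(-0.5, 0.5, n)
scene = np.zeros(n, dtype=complex)
scene[[60, 130, 200]] = [1.0, 2.0, 1.5]        # three point targets
data = np.fft.ifft(scene)                      # ideal phase history
alpha_true = 40.0
data = data * np.exp(1j * alpha_true * t**2)   # uncompensated phase error

def sharpness(alpha):
    """Intensity-squared sharpness of the image after correcting with alpha."""
    img = np.fft.fft(data * np.exp(-1j * alpha * t**2))
    return np.sum(np.abs(img) ** 4)

# step the candidate parameter and keep the sharpest image
alphas = np.arange(0.0, 80.0, 1.0)
best_alpha = alphas[int(np.argmax([sharpness(a) for a in alphas]))]
```

    Total image energy is constant under the phase correction, so the quartic metric is maximized when energy is concentrated into focused points, which is exactly what happens at the true parameter value.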

    Technique-Based Exploitation Of Low Grazing Angle SAR Imagery Of Ship Wakes

    The effort to understand the effect a ship has on the water is a field of study several hundred years old, accelerated during the industrial revolution, when the efficiency of a ship’s engine and hull determined the utility of the burgeoning, globally important sea lines of communication. The dawn of radar sensing and electronic computation has expanded this field still further, and new ground is still being broken. This thesis addresses a niche area of synthetic aperture radar imagery of ship wakes, specifically imaging geometries with a low grazing angle, where significant non-linear effects are often dominant in the environment. The nuances of synthetic aperture radar processing compound with the low grazing angle geometry to produce unusual artefacts within the imagery, and it is the understanding of these artefacts that is central to this thesis. A sub-aperture synthetic aperture radar technique is applied to real data alongside coarse modelling of a ship and its wake, before a full hydrodynamic model of a ship’s wake is developed from first principles. The model is validated through comparison with previously developed work. The analysis shows that the resultant artefacts are a combination of individual synthetic aperture radar anomalies and the reaction of the radar energy to the ambient sea surface and spike events.

    Front-end receiver for miniaturised ultrasound imaging

    Point of care ultrasonography has been the focus of extensive research over the past few decades. Miniaturised, wireless systems have been envisaged for new application areas, such as capsule endoscopy, implantable ultrasound and wearable ultrasound. The hardware constraints of such small-scale systems are severe, and tradeoffs between power consumption, size, data bandwidth and cost must be carefully balanced. To address these challenges, two synthetic aperture receiver architectures are proposed and compared. The architectures target highly miniaturised, low cost, B-mode ultrasound imaging systems. The first architecture utilises quadrature (I/Q) sampling to minimise the signal bandwidth and computational load. Synthetic aperture beamforming is carried out using a single-channel, pipelined protocol in order to minimise system complexity and power consumption. A digital beamformer dynamically apodises and focuses the data by interpolating and applying complex phase rotations to the I/Q samples. The beamformer is implemented on a Spartan-6 FPGA and consumes 296mW for a frame rate of 7Hz. The second architecture employs compressive sensing within the finite rate of innovation (FRI) framework to further reduce the data bandwidth. Signals are sampled below the Nyquist frequency, and then transmitted to a digital back-end processor, which reconstructs I/Q components non-linearly, and then carries out synthetic aperture beamforming. Both architectures were tested in hardware using a single-channel analogue front-end (AFE) that was designed and fabricated in AMS 0.35μm CMOS. The AFE demodulates RF ultrasound signals sequentially into I/Q components, and comprises a low-noise preamplifier, mixer, programmable gain amplifier (PGA) and lowpass filter. A variable gain low noise preamplifier topology is used to enable quasi-exponential time-gain control (TGC). The PGA enables digital selection of three gain values (15dB, 22dB and 25.5dB). 
    The bandwidth of the lowpass filter is also selectable between 1.85MHz, 510kHz and 195kHz to allow for testing of both architectural frameworks. The entire AFE consumes 7.8 mW and occupies an area of 1.5 mm × 1.5 mm. In addition to the AFE, this thesis also presents the design of a pseudodifferential, log-domain multiplier-filter or “multer” which demodulates low-RF signals in the current domain. This circuit targets high-impedance transducers such as capacitive micromachined ultrasound transducers (CMUTs) and offers a 20dB improvement in dynamic range over the voltage-mode AFE. Its bandwidth is also electronically tunable. The circuit was implemented in 0.35μm BiCMOS and simulated in Cadence; however, no fabrication results were obtained for this circuit. B-mode images were obtained for both architectures. The quadrature SAB method yields a higher image SNR and a 9% lower root mean squared error with respect to the RF-beamformed reference image than the compressive SAB method. Thus, while both architectures achieve a significant reduction in sampling rate, system complexity and area, the quadrature SAB method achieves better image quality. Future work may involve the addition of multiple receiver channels and the development of an integrated system-on-chip.
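
    The quadrature demodulation step the AFE performs can be sketched numerically: the RF echo is mixed with cosine and negative-sine local oscillators at the carrier and low-pass filtered to yield I and Q, from which envelope and phase follow. The signal parameters and the crude moving-average filter below are illustrative stand-ins, not the thesis circuit:

```python
import numpy as np

# quadrature (I/Q) demodulation sketch (signal values and the moving-average
# filter are illustrative stand-ins, not the thesis AFE)
fs, fc = 40e6, 5e6
t = np.arange(0.0, 20e-6, 1.0 / fs)            # 800 samples
env = np.exp(-((t - 10e-6) / 2e-6) ** 2)       # Gaussian echo envelope
rf = env * np.cos(2 * np.pi * fc * t + 0.7)    # echo with unknown phase 0.7 rad

def lowpass(x, k=33):
    """Crude moving-average low-pass filter to reject the 2*fc mixing term."""
    return np.convolve(x, np.ones(k) / k, mode="same")

i_ch = lowpass(rf * 2 * np.cos(2 * np.pi * fc * t))
q_ch = lowpass(rf * -2 * np.sin(2 * np.pi * fc * t))
mag = np.hypot(i_ch, q_ch)                     # demodulated envelope
phase = np.arctan2(q_ch, i_ch)[len(t) // 2]    # echo phase at the peak
```

    Both the echo envelope and its carrier phase are recovered from the two baseband channels; this is the information the digital beamformer then focuses with complex phase rotations.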

    Three Dimensional Bistatic Tomography Using HDTV

    The thesis begins with a review of the principles of diffraction and reflection tomography, starting with the analytic solution to the inhomogeneous Helmholtz equation, after linearization by the Born approximation (the weak-scatterer solution), and arriving at the Filtered Back Projection (Propagation) method of reconstruction. This is followed by a heuristic derivation more directly couched in the radar imaging context, without the rigor of the general inverse problem solution and more closely resembling an imaging turntable or inverse synthetic aperture radar. The heuristic derivation leads into the concept of the line integral and projections (the Radon Transform), followed by more general geometries where the plane wave approximation is invalid. We proceed next to a study of the dependence of reconstruction on the space-frequency trajectory, combining the spatial aperture and waveform. Two- and three-dimensional apertures, monostatic and bistatic, fully and sparsely sampled and including partial apertures, with controlled waveforms (CW and pulsed, with and without modulation), define the filling of k-space and the concomitant reconstruction performance. The theoretical developments in the first half of the thesis are applied to the specific example of bistatic tomographic imaging using High Definition Television (HDTV), the United States counterpart of DVB-T. Modeling of the HDTV waveform using pseudonoise modulation to represent the hybrid 8VSB HDTV scheme, together with the move-stop-move approximation, established the imaging potential, employing an idealized target of 18 isotropic scatterers. As the move-stop-move approximation places a limitation on integration time (in cross correlation/pulse compression) due to transmitter/receiver motion, an exact solution for compensation of Doppler distortion is derived. The concept is tested with the assembly and flight test of a bistatic radar system employing software-defined radios (SDR). A three-dimensional, bistatic collection aperture, exploiting an elevated commercial HDTV transmitter, is focused to demonstrate the principle. This work, to the best of our knowledge, represents a first in the formation of three-dimensional images using bistatically exploited television transmitters.
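
    The projection and backprojection machinery behind the Radon transform can be demonstrated on a toy scene: each projection collects line integrals of the scene at one view angle, and smearing the projections back over the grid rebuilds a peak at a point target. This is unfiltered backprojection with invented dimensions, a simplification of the filtered reconstruction the thesis uses:

```python
import numpy as np

# Radon-transform toy: projections are line integrals at each view angle;
# unfiltered backprojection smears them back over the grid (dimensions are
# invented; the thesis reconstruction uses filtered back-projection)
n = 65
y, x = np.mgrid[-32:33, -32:33]
scene = np.zeros((n, n))
scene[32 + 8, 32 + 5] = 1.0                    # point target at (x, y) = (5, 8)

angles = np.deg2rad(np.arange(0, 180, 2))      # 90 view angles
bins = np.arange(-46, 47)                      # signed-distance bins
backproj = np.zeros_like(scene)
for th in angles:
    s = x * np.cos(th) + y * np.sin(th)        # pixel distance to the detector line
    proj = np.array([scene[np.abs(s - b) < 0.5].sum() for b in bins])
    idx = np.round(s).astype(int) + 46         # each pixel's bin in this view
    backproj += proj[idx]                      # smear the projection back

py, px = np.unravel_index(np.argmax(backproj), backproj.shape)
```

    The target pixel accumulates one count per view while other pixels accumulate only where lines cross, so the reconstruction peaks at the point target; the ramp filter of filtered backprojection would additionally remove the characteristic 1/r blur around it.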

    Phase Unwrapping in the Presence of Strong Turbulence

    Phase unwrapping in the presence of branch points using a least-squares wave-front reconstructor requires the use of a Postprocessing Congruence Operation (PCO). Branch cuts in the unwrapped phase are altered by the addition of a constant parameter h to the rotational component when applying the PCO. Past research has shown that selecting a value of h which minimizes the proportion of irradiance in the pupil plane adjacent to branch cuts helps to maximize the performance of adaptive-optics (AO) systems in strong turbulence. In continuation of this objective, this research focuses on optimizing the PCO while accounting for the cumulative effects of the integral control law. Several optimizations are developed and compared using wave-optics simulations. The most successful optimization is shown to reduce the normalized variance of the Strehl ratio across a wide range of turbulence strengths and frame rates, including decreases of up to 25 percent compared to a non-optimized PCO algorithm. AO systems which depend on high, steady Strehl ratio values stand to benefit from these algorithms.
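
    The congruence property at the heart of the PCO can be shown in one dimension: a least-squares phase estimate is generally not congruent to the measured wrapped phase modulo 2π, and adding back the rewrapped residual restores congruence for any choice of the constant h, which only moves where the 2π jumps (branch cuts) fall. A minimal sketch with invented values:

```python
import numpy as np

# congruence property behind the PCO in one dimension: adding the rewrapped
# residual makes the estimate congruent to the measured wrapped phase modulo
# 2*pi for any constant h, which only shifts where the 2*pi jumps land
# (all values invented for illustration)
def wrap(p):
    return np.angle(np.exp(1j * p))            # wrap into (-pi, pi]

x = np.linspace(0.0, 4.0, 200)
true_phase = 3.0 * x ** 2                      # smooth "turbulent" phase screen
measured = wrap(true_phase)                    # what the sensor sees
ls_est = 0.98 * true_phase                     # stand-in least-squares solution

def pco(ls, meas, h=0.0):
    """Make ls congruent to meas modulo 2*pi; h shifts the branch cuts."""
    return ls + wrap(meas - ls + h) - h

unwrapped = pco(ls_est, measured)
residual = wrap(unwrapped - measured)          # ~0 everywhere: congruent
```

    Because the least-squares error here stays below π, the congruent output also recovers the true phase exactly; in strong turbulence the freedom in h is what the research above optimizes.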