54 research outputs found

    A Generalized Phase Gradient Autofocus Algorithm

    The phase gradient autofocus (PGA) algorithm has seen widespread use and success within the synthetic aperture radar (SAR) imaging community. However, its use and success have largely been limited to collection geometries where either the polar format algorithm (PFA) or the range migration algorithm is suitable for SAR image formation. In this work, a generalized phase gradient autofocus (GPGA) algorithm is developed which is applicable with both the PFA and the backprojection algorithm (BPA), thereby directly supporting a wide range of collection geometries and SAR imaging modalities. The GPGA algorithm preserves the four crucial signal processing steps comprising the PGA algorithm, while alleviating the PGA constraint of using a single scatterer per range cut for phase error estimation. Moreover, the GPGA algorithm, whether using the PFA or BPA, yields an approximate maximum marginal likelihood estimate (MMLE) of the phase errors, having marginalized over the unknown complex-valued reflectivities of the selected scatterers. Also in this work, a new approximate MMLE, termed the max-semidefinite relaxation (Max-SDR) phase estimator, is proposed for use with the GPGA algorithm. The Max-SDR phase estimator provides a phase error estimate with a worst-case approximation bound relative to the solution set of MMLEs (i.e., the solution set of the non-deterministic polynomial-time hard (NP-hard) GPGA phase estimation problem). Moreover, a specialized interior-point method is presented for performing Max-SDR phase estimation more efficiently by exploiting the low-rank structure typically associated with the GPGA phase estimation problem. Lastly, simulation and experimental results produced by applying the GPGA algorithm with the PFA and BPA are presented.
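The four PGA steps the abstract refers to (circular shifting, windowing, phase-gradient estimation, and phase correction) can be sketched in NumPy as below. This is a generic single iteration of classical PGA under simplifying assumptions (a fixed window width, one dominant scatterer per range bin), not the GPGA variant developed in the work:

```python
import numpy as np

def pga_iteration(img):
    """One iteration of the classic four-step PGA on a complex SAR image.

    img: 2-D complex array, rows are range bins, columns are azimuth samples.
    Returns the corrected image and the estimated azimuth phase error.
    """
    n_rng, n_az = img.shape

    # Step 1: circularly shift the brightest scatterer in each range bin
    # to the image centre, removing its linear (Doppler) phase component.
    shifted = np.empty_like(img)
    for i in range(n_rng):
        peak = int(np.argmax(np.abs(img[i])))
        shifted[i] = np.roll(img[i], n_az // 2 - peak)

    # Step 2: window around the centre to isolate the scatterers' blur.
    win = np.zeros(n_az)
    half = n_az // 8                      # fixed width; real PGA adapts it
    win[n_az // 2 - half : n_az // 2 + half] = 1.0
    shifted = shifted * win

    # Step 3: estimate the phase-error gradient in the aperture domain,
    # combining the contributions of all range bins.
    g = np.fft.ifft(shifted, axis=1)
    num = np.sum(np.imag(np.conj(g[:, :-1]) * np.diff(g, axis=1)), axis=0)
    den = np.sum(np.abs(g[:, :-1]) ** 2, axis=0)
    grad = num / np.maximum(den, 1e-12)

    # Step 4: integrate the gradient, remove any residual linear trend,
    # and apply the conjugate phase correction in the aperture domain.
    phi = np.concatenate(([0.0], np.cumsum(grad)))
    phi = phi - np.polyval(np.polyfit(np.arange(n_az), phi, 1), np.arange(n_az))
    corrected = np.fft.fft(np.fft.ifft(img, axis=1) * np.exp(-1j * phi), axis=1)
    return corrected, phi
```

In practice the window is shrunk between iterations and the procedure repeats until the estimated phase error converges.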

    Autofocus and Back-Projection in Synthetic Aperture Radar Imaging.

    Spotlight-mode Synthetic Aperture Radar (SAR) imaging has received considerable attention due to its ability to produce high-resolution images of scene reflectivity. One of the main challenges in successful image recovery is the problem of defocusing, which occurs due to inaccuracies in the estimated round-trip delays of the transmitted radar pulses. The problem is most widely studied for far-field imaging scenarios with a small range of look angles, since the problem formulation can be significantly simplified under the assumptions of planar wavefronts and one-dimensional defocusing. In practice, however, these assumptions are frequently violated. MultiChannel Autofocus (MCA) is a subspace-based approach to the defocusing problem that was originally proposed for far-field imaging with a small range of look angles. A key motivation behind MCA is the observation that there exists a low-return region within the recovered image, due to the weak illumination near the edges of the antenna footprint. The strength of the MCA formulation is that it can be easily extended to more realistic scenarios with polar-format data, spherical wavefronts, and arbitrary terrain, due to its flexible linear-algebraic framework. The main aim of this thesis is to devise a more broadly effective autofocus approach by adapting MCA to the aforementioned scenarios. By forming the solution space in a domain where the defocusing effect is truly one-dimensional, we show that drastically improved restorations can be obtained for applications with small to fairly wide ranges of look angles. When the terrain topography is known, we utilize the versatile backprojection-based imaging methods in the model formulations for MCA to accurately account for the underlying geometry. The proposed extended MCA shows reductions in RMSE of up to 50% when the underlying terrain is highly elevated.
We also analyze the effects of the filtering step, the amount of wavefront curvature, the shape of the terrain, and the flight path of the radar on the image reconstructed via backprojection. Finally, we discuss the selection of low-return constraints and the importance of using terrain elevation within the MCA formulation.
Ph.D., Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/135868/1/zzon_1.pd
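The linear-algebraic flavour of MCA can be illustrated with a minimal sketch. Assuming a one-dimensional, azimuth-invariant blur, every pixel known to be low-return contributes one homogeneous equation in the unknown restoration filter, which is then recovered as the singular vector associated with the smallest singular value. The function name and the simple circular-convolution model are illustrative assumptions, not the extended MCA formulation developed in the thesis:

```python
import numpy as np

def mca_restore(g, low_mask):
    """Subspace (MCA-style) autofocus sketch.

    g:        2-D complex defocused image, azimuth along axis 1.
    low_mask: boolean array, True at pixels known to be low-return.
    Assumes the focused image equals the circular convolution of each
    row of g with a single unknown 1-D restoration filter h.
    """
    n_rng, n_az = g.shape
    rows, cols = np.nonzero(low_mask)

    # Each low-return pixel gives one homogeneous equation in h:
    #   sum_k g[r, (c - k) mod n_az] * h[k] = 0
    A = np.empty((rows.size, n_az), dtype=complex)
    for eq, (r, c) in enumerate(zip(rows, cols)):
        A[eq] = g[r, (c - np.arange(n_az)) % n_az]

    # h is the right-singular vector with the smallest singular value.
    _, _, vh = np.linalg.svd(A)
    h = np.conj(vh[-1])

    # Apply the restoration filter to every row via the FFT.
    f = np.fft.ifft(np.fft.fft(g, axis=1) * np.fft.fft(h), axis=1)
    return f, h
```

The restored image is determined only up to a complex scale factor, as with any blind restoration of this kind.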

    Space-Variant Post-Filtering for Wavefront Curvature Correction in Polar-Formatted Spotlight-Mode SAR Imagery


    Bistatic synthetic aperture radar imaging using Fourier methods


    Motion Compensation for Near-Range Synthetic Aperture Radar Applications

    The work focuses on the analysis of the influence of motion errors on near-range SAR applications and on the design of dedicated motion measurement and compensation algorithms. First, a novel metric for determining the optimum antenna beamwidth is proposed. Then, a comprehensive investigation of the influence of motion errors on the SAR image is provided. On this basis, new algorithms for motion measurement and compensation using low-cost inertial measurement units (IMUs) are developed and successfully demonstrated.
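As a minimal illustration of the kind of correction involved (a hypothetical function under narrow-beam, range-invariant assumptions): a line-of-sight displacement dr lengthens the round-trip path by 2*dr, producing a phase error of 4*pi*dr/lambda that can be removed pulse by pulse.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def motion_compensate(data, los_error_m, fc_hz):
    """First-order motion compensation sketch.

    data:        2-D complex range-compressed data, pulses along axis 0.
    los_error_m: per-pulse line-of-sight displacement from the nominal
                 track, e.g. as measured by an IMU (metres).
    fc_hz:       radar centre frequency in Hz.

    A displacement dr changes the round-trip path by 2*dr, i.e. a phase
    of 4*pi*dr/lambda, which is removed from every range bin of the pulse.
    """
    lam = C / fc_hz
    correction = np.exp(-1j * 4.0 * np.pi * np.asarray(los_error_m) / lam)
    return data * correction[:, None]
```

Full motion compensation additionally handles range- and beam-dependent effects, which this narrow-beam sketch deliberately ignores.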

    Opportunistic radar imaging using a multichannel receiver

    Bistatic Synthetic Aperture Radars have a physically separated transmitter and receiver, where one or both are moving. Besides the advantages of reduced procurement and maintenance costs, the receiving system can sense passively while remaining covert, which offers obvious tactical advantages. In this work, spaceborne monostatic SARs are used as emitters of opportunity with a stationary ground-based receiver. The imaging mode of SAR systems over land is usually a wide-swath mode such as ScanSAR or TOPSAR, in which the antenna scans the area of interest in range to image a larger swath at the expense of degraded cross-range resolution compared to the conventional stripmap mode. In the bistatic geometry considered here, the signals from the sidelobes of the scanning beams illuminating the adjacent sub-swath are exploited to produce images with high cross-range resolution from data obtained from a SAR system operating in wide-swath mode. To achieve this, the SAR inverse problem is rigorously formulated and solved using a Maximum A Posteriori estimation method, providing enhanced cross-range resolution compared to that obtained by classical burst-mode SAR processing. This dramatically increases the number of useful images that can be produced using emitters of opportunity. Signals from any radar satellite within the receiving band of the receiver can be used, thus further decreasing the revisit time of the area of interest. As a comparison, a compressive sensing-based method is critically analysed and proves more sensitive to off-grid targets and only suited to sparse scenes. The novel SAR imaging method is demonstrated using simulated data and real measurements from C-band satellites such as RADARSAT-2 and ESA’s satellites ERS-2, ENVISAT and Sentinel-1A.
In addition, this thesis analyses the main technological issues in bistatic SAR, such as the azimuth-variant characteristic of bistatic data and the effect of imperfect synchronisation between the non-cooperative transmitter and the receiver.
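Under a linear measurement model y = A x + n with Gaussian noise and a zero-mean Gaussian prior on the scene reflectivity x, the Maximum A Posteriori estimate reduces to Tikhonov-regularised least squares. The dense-matrix sketch below is illustrative only; a practical SAR solver would work with implicit operators and iterative methods:

```python
import numpy as np

def map_estimate(A, y, noise_var, prior_var):
    """MAP reconstruction sketch for a linear SAR measurement model.

    Model: y = A x + n, with n ~ CN(0, noise_var * I) and a zero-mean
    Gaussian prior x ~ CN(0, prior_var * I). The MAP estimate is then
    the Tikhonov-regularised least-squares solution
        x = (A^H A + (noise_var / prior_var) I)^{-1} A^H y.
    """
    n = A.shape[1]
    reg = noise_var / prior_var
    return np.linalg.solve(A.conj().T @ A + reg * np.eye(n), A.conj().T @ y)
```

The regularisation strength is the noise-to-prior variance ratio: a strong prior (large prior_var) pulls the estimate toward ordinary least squares, while heavy noise shrinks it toward zero.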

    Signal Processing for Synthetic Aperture Sonar Image Enhancement

    This thesis contains a description of SAS processing algorithms, offering improvements in Fourier-based reconstruction, motion compensation, and autofocus. Fourier-based image reconstruction is reviewed, and improvements are shown to result from improved system modelling. A number of new algorithms based on the wavenumber algorithm for correcting second-order effects are proposed. In addition, a new framework for describing multiple-receiver reconstruction in terms of the bistatic geometry is presented and is a useful aid to understanding. Motion-compensation techniques for allowing Fourier-based reconstruction in wide-beam geometries suffering large motion errors are discussed. A motion-compensation algorithm exploiting multiple-receiver geometries is suggested and shown to provide substantial improvement in image quality. New motion-compensation techniques for yaw correction using the wavenumber algorithm are discussed. A common framework for describing phase estimation is presented, and techniques from a number of fields are reviewed within this framework. In addition, a new proof is provided outlining the relationship between eigenvector-based autofocus phase-estimation kernels and the phase-closure techniques used in astronomical imaging. Micronavigation techniques are reviewed, and extensions to the shear-average single-receiver micronavigation technique result in a 3-4-fold performance improvement when operating on high-contrast images. The stripmap phase gradient autofocus (SPGA) algorithm is developed, extending spotlight SAR PGA to the wide-beam, wide-band stripmap geometries common in SAS imaging. SPGA supersedes traditional PGA-based stripmap autofocus algorithms such as mPGA and PCA; the relationships between SPGA and these algorithms are discussed. SPGA's operation is verified on simulated and field-collected data, where it provides significant image improvement.
SPGA with phase-curvature-based estimation is shown and found to perform poorly compared with phase-gradient techniques. The operation of SPGA on data collected from Sydney Harbour is shown, with SPGA able to improve resolution to near the diffraction limit. Additional analysis of practical stripmap autofocus operation in the presence of undersampling and space-invariant blurring is presented, with significant comment on the difficulties inherent in autofocusing field-collected data. Field-collected data from trials in Sydney Harbour are presented along with the associated autofocus results from a number of algorithms.
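A minimal example of the eigenvector-based phase-estimation kernels mentioned above (illustrative, not the SPGA implementation): when several range bins observe the same aperture phase error, that error can be read off the dominant eigenvector of the aperture-domain sample covariance.

```python
import numpy as np

def eigenvector_phase_estimate(s):
    """Eigenvector-based aperture phase estimation sketch.

    s: 2-D complex array, rows are range bins (independent looks),
       columns are aperture (slow-time) positions.
    Returns the phase error across the aperture, referenced to the
    first aperture position.
    """
    # Aperture-by-aperture sample covariance, averaged over range bins.
    cov = s.conj().T @ s / s.shape[0]
    _, v = np.linalg.eigh(cov)
    dom = v[:, -1]                     # eigenvector of the largest eigenvalue
    # For data s[i, m] = a_i * exp(1j * phi[m]), dom is proportional to
    # exp(-1j * phi); conjugating and referencing position 0 recovers phi.
    return np.angle(np.conj(dom) * dom[0])
```

The estimate is inherently relative (a global phase offset and linear trend do not affect focus), which is why it is referenced to the first position here.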

    Coherent Change Detection Under a Forest Canopy

    Coherent change detection (CCD) is an established technique for remotely monitoring landscapes with minimal vegetation or buildings. By evaluating the local complex correlation between a pair of synthetic aperture radar (SAR) images acquired on repeat passes of an airborne or spaceborne imaging radar system, a map of the scene coherence is obtained. Subtle disturbances of the ground are detected as areas of low coherence in the surface clutter. This thesis investigates extending CCD to monitor the ground in a forest. It is formulated as a multichannel dual-layer coherence estimation problem, where the coherence of scattering from the ground is estimated after suppressing interference from the canopy by vertically beamforming multiple image channels acquired at slightly different grazing angles on each pass. This 3D SAR beamforming must preserve the phase of the ground response. The choice of operating wavelength is considered in terms of the trade-off between foliage penetration and change sensitivity. A framework for comparing the performance of different radar designs and beamforming algorithms, as well as assessing the sensitivity to error, is built around the random-volume-over-ground (RVOG) model of forest scattering. If the ground and volume scattering contributions in the received echo are of similar strength, it is shown that an L-band array of just three channels can provide enough volume attenuation to permit reasonable estimation of the ground coherence. The proposed method is demonstrated using an RVOG clutter simulation and a modified version of the physics-based SAR image simulator PolSARproSim. Receiver operating characteristics show that whilst ordinary single-channel CCD is unusable when a canopy is present, 3D SAR CCD permits reasonable detection performance. 
A novel polarimetric filtering algorithm is also proposed to remove contributions from the ground-trunk double-bounce scattering mechanism, which may mask changes on the ground near trees. To enable this kind of polarimetric processing, fully polarimetric data must be acquired and calibrated. Motivated by an interim version of the Ingara airborne imaging radar, which used a pair of helical antennas to acquire circularly polarised data, techniques for the estimation of polarimetric distortion in the circular basis are investigated. It is shown that the standard approach to estimating cross-talk in the linear basis, whereby expressions for the distortion of reflection-symmetric clutter are linearised and solved, cannot be adapted to the circular basis, because the first-order effects of individual cross-talk parameters cannot be distinguished. An alternative approach is proposed that uses ordinary and gridded trihedral corner reflectors, and optionally dihedrals, to iteratively estimate the channel imbalance and cross-talk parameters. Monte Carlo simulations show that the method reliably converges to the true parameter values. Ingara data is calibrated using the method, with broadly consistent parameter estimates obtained across flights. Genuine scene changes may be masked by coherence loss that arises when the bands of spatial frequencies supported by the two passes do not match. Trimming the spatial-frequency bands to their common area of support would remove these uncorrelated contributions, but the bands, and therefore the required trim, depend on the effective collection geometry at each pixel position. The precise dependence on local slope and collection geometry is derived in this thesis. Standard methods of SAR image formation use a flat focal plane and allow only a single global trim, which leads to spatially varying coherence loss when the terrain is undulating. 
An image-formation algorithm is detailed that exploits the flexibility offered by back-projection not only to focus the image onto a surface matched to the scene topography but also to allow spatially adaptive trimming. Improved coherence is demonstrated in simulation and using data from two airborne radar systems.
Thesis (Ph.D.) -- University of Adelaide, School of Electrical & Electronic Engineering, 202
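The local complex correlation underlying CCD can be sketched as follows; the box window and the zero padding at the image edges are illustrative choices:

```python
import numpy as np

def _box_sum(a, win):
    """Sum of a over a sliding win x win window (same size, zero-padded)."""
    pad = win // 2
    ap = np.pad(a, pad)
    c = np.cumsum(np.cumsum(ap, axis=0), axis=1)   # integral image
    c = np.pad(c, ((1, 0), (1, 0)))                # prepend zero row/column
    return (c[win:, win:] - c[:-win, win:]
            - c[win:, :-win] + c[:-win, :-win])

def coherence_map(img1, img2, win=5):
    """Sample coherence between two co-registered complex SAR images.

    For each pixel, the complex correlation is estimated over a local
    win x win window:
        gamma = |sum f g*| / sqrt(sum |f|^2 * sum |g|^2)
    Low gamma flags change (or decorrelation) between the two passes.
    """
    cross = _box_sum(img1 * np.conj(img2), win)
    p1 = _box_sum(np.abs(img1) ** 2, win)
    p2 = _box_sum(np.abs(img2) ** 2, win)
    return np.abs(cross) / np.sqrt(np.maximum(p1 * p2, 1e-30))
```

Identical images yield a coherence of 1 everywhere; disturbed ground decorrelates the clutter and drives the local estimate toward 0, which is the detection statistic CCD thresholds.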

    Revealing the Invisible: On the Extraction of Latent Information from Generalized Image Data

    The desire to reveal the invisible in order to explain the world around us has been a source of impetus for technological and scientific progress throughout human history. Many of the phenomena that directly affect us cannot be sufficiently explained based on observations made with our primary senses alone. Often this is because their originating cause is either too small, too far away, or otherwise obstructed. To put it in other words: it is invisible to us. Without careful observation and experimentation, our models of the world remain inaccurate, and research has to be conducted in order to improve our understanding of even the most basic effects. In this thesis, we are going to present our solutions to three challenging problems in visual computing, where a surprising amount of information is hidden in generalized image data and cannot easily be extracted by human observation or existing methods. We extract the latent information using non-linear and discrete optimization methods based on physically motivated models and computer graphics methodology, such as ray tracing, real-time transient rendering, and image-based rendering.

    Super-resolution: A comprehensive survey

    Get PDF