
    Non-cooperative bistatic SAR clock drift compensation for tomographic acquisitions

    In recent years, a substantial amount of research has been directed towards measuring above-ground forest biomass with polarimetric Synthetic Aperture Radar (SAR) tomography techniques. This has motivated proposals for future bistatic SAR missions, such as the non-cooperative SAOCOM-CS and PARSIFAL concepts from CONAE and ESA. It is well known that the quality of SAR tomography depends directly on the phase accuracy of the interferometer, which, in non-cooperative systems, can be particularly affected by the relative drift between onboard clocks. In this letter, we provide insight into the impact of clock drift error on bistatic interferometry and propose a correction algorithm to compensate for its effect. The accuracy of the compensation is tested on simulated acquisitions over volumetric targets, estimating the final impact on tomographic profiles.
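The nuisance described in this abstract can be illustrated generically: a relative drift between the onboard clocks adds a slowly varying phase ramp along slow time to the bistatic interferogram. The sketch below is a generic stand-in, not the algorithm proposed in the letter; the polynomial trend model and the function name are assumptions.

```python
import numpy as np

def remove_clock_drift_phase(ifg_phase, slow_time, order=2):
    """Fit and subtract a low-order polynomial phase trend along slow time.

    A generic illustration of drift compensation: clock drift appears as a
    smooth phase ramp, which a polynomial fit can capture and remove.
    """
    coeffs = np.polyfit(slow_time, np.unwrap(ifg_phase), order)
    trend = np.polyval(coeffs, slow_time)
    return ifg_phase - trend
```

A real compensator must also avoid absorbing genuine topographic or volumetric phase into the fitted trend, which is where the letter's algorithm and its validation on volumetric targets come in.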

    Radar Imaging in Challenging Scenarios from Smart and Flexible Platforms


    Revealing the Invisible: On the Extraction of Latent Information from Generalized Image Data

    The desire to reveal the invisible in order to explain the world around us has been a source of impetus for technological and scientific progress throughout human history. Many of the phenomena that directly affect us cannot be sufficiently explained based on observations using our primary senses alone. Often this is because their originating cause is either too small, too far away, or otherwise obstructed: in other words, it is invisible to us. Without careful observation and experimentation, our models of the world remain inaccurate, and research must be conducted to improve our understanding of even the most basic effects. In this thesis, we present our solutions to three challenging problems in visual computing, where a surprising amount of information is hidden in generalized image data and cannot easily be extracted by human observation or existing methods. We extract the latent information using non-linear and discrete optimization methods based on physically motivated models and computer graphics methodology, such as ray tracing, real-time transient rendering, and image-based rendering.

    A Study in GPS-Denied Navigation Using Synthetic Aperture Radar

    In modern navigation systems, GPS is vital to accurately piloting a vehicle. This is especially true for autonomous vehicles, such as UAVs, which have no pilot. Unfortunately, GPS signals can be easily jammed or spoofed. For example, canyons and dense urban areas create environments where the sky is obstructed and GPS signals are unreliable. Additionally, hostile actors can transmit their own signals intended to block or spoof GPS. In these situations, it is important to find a means of navigation that does not rely on GPS. Navigating without GPS means that other sensors or instruments must be used to replace the lost information; examples include cameras, altimeters, magnetometers, and radar. The work presented in this thesis shows how radar can be used to navigate without GPS. Specifically, synthetic aperture radar (SAR) is used, a method of processing radar data to form images of a landscape similar to those captured with a camera. SAR presents its own unique set of benefits and challenges. One major benefit is that it can produce images of an area even at night or through cloud cover. Additionally, SAR can image a wide swath of land at an angle that would be difficult for a camera to achieve. However, SAR is more computationally complex than other imaging sensors, and image quality is highly dependent on the quality of the available navigation information: in general, good navigation data is required to form SAR images. The research here explores the reverse problem, where SAR images are formed without good navigation data and good navigation data is then inferred from the images. This thesis presents feasibility studies and real-data implementations that show how SAR can be used for navigation without GPS. Derivations and background materials are provided, along with validation methods and additional discussion of the results of each portion of the research.
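The image-formation step referred to above can be sketched with time-domain backprojection on range-compressed data. The signal model, geometry, and names below are illustrative assumptions for a point-target simulation, not the thesis implementation.

```python
import numpy as np

def backproject(pulses, platform_pos, fast_time, grid, fc, c=3e8):
    """Focus range-compressed pulses onto a pixel grid (time-domain backprojection).

    pulses:       (n_pulses, n_samples) complex baseband, range-compressed
    platform_pos: (n_pulses, 3) antenna position per pulse, metres
    fast_time:    (n_samples,) two-way delay axis, seconds
    grid:         (n_pixels, 3) pixel positions, metres
    """
    img = np.zeros(grid.shape[0], dtype=complex)
    for s, p in zip(pulses, platform_pos):
        tau = 2.0 * np.linalg.norm(grid - p, axis=1) / c   # two-way delay per pixel
        samp = (np.interp(tau, fast_time, s.real)
                + 1j * np.interp(tau, fast_time, s.imag))  # sample echo at each delay
        img += samp * np.exp(2j * np.pi * fc * tau)        # undo carrier phase, sum coherently
    return img
```

With perfect platform positions a simulated point target focuses at its true location; perturbing `platform_pos` defocuses and shifts the image, which is exactly the coupling between navigation error and image quality that the thesis exploits in reverse.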

    Low-THz Automotive 3D Imaging Radar

    This thesis lays out initial investigations into the 3D imaging capabilities of low-THz radar for automotive applications. This includes a discussion of the state of the art of automotive sensors and the need for a robust, high-resolution imaging system to complement these sensors and address their shortcomings. The unique capabilities of low-THz radar may prove well-suited to meeting these needs, but they require 3D imaging algorithms that can exploit them effectively. One such unique feature is the extremely wide signal bandwidth, which yields a fine range resolution. This is a feature of low-THz radar that has not been discussed or properly investigated before, particularly in the context of generating the 3D position of an object from range information. The progress and experimental verification of these algorithms with a prototype multi-receiver 300 GHz radar throughout this project are described, progressing from simple position estimation to highly detailed 3D radar imaging. The system is tested in a variety of scenarios that a vehicle must be able to navigate, and the 3D imaging radar is compared experimentally with current automotive demonstrators.
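Recovering a 3D position purely from range measurements at multiple receivers, as described above, reduces to multilateration. The sketch below is a generic least-squares solution (the receiver layout and names are illustrative, not the prototype's geometry): differencing the squared-range equations against a reference receiver cancels the unknown |x|² term and leaves a linear system.

```python
import numpy as np

def position_from_ranges(rx_pos, ranges):
    """Solve |x - p_i| = r_i for a 3-D point x by differencing squared ranges.

    rx_pos: (n, 3) receiver positions, n >= 4 and not coplanar
    ranges: (n,)   measured ranges from each receiver to the target
    """
    p0, r0 = rx_pos[0], ranges[0]
    # |x - p_i|^2 - |x - p0|^2 = r_i^2 - r0^2  ->  linear in x:
    A = 2.0 * (rx_pos[1:] - p0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(rx_pos[1:]**2, axis=1) - np.sum(p0**2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

With noisy ranges the least-squares solution degrades gracefully, and the achievable accuracy scales with the receiver baseline, which is why the fine range resolution of a wideband low-THz system is attractive here.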

    Computational Imaging and Artificial Intelligence: The Next Revolution of Mobile Vision

    Signal capture stands at the forefront of perceiving and understanding the environment, and thus imaging plays a pivotal role in mobile vision. Recent explosive progress in Artificial Intelligence (AI) has shown great potential for developing advanced mobile platforms with new imaging devices. Traditional imaging systems based on the "capture images first, process afterwards" mechanism cannot meet this unprecedented demand. In contrast, Computational Imaging (CI) systems are designed to capture high-dimensional data in an encoded manner to provide more information for mobile vision systems. Thanks to AI, CI can now be used in real systems by integrating deep learning algorithms into the mobile vision platform to achieve a closed loop of intelligent acquisition, processing, and decision making, thus leading to the next revolution of mobile vision. Starting from the history of mobile vision using digital cameras, this work first introduces the advances of CI in diverse applications and then conducts a comprehensive review of current research topics combining CI and AI. Motivated by the fact that most existing studies only loosely connect CI and AI (usually using AI to improve the performance of CI, with only limited works deeply connecting them), in this work we propose a framework to deeply integrate CI and AI using the example of self-driving vehicles with high-speed communication, edge computing, and traffic planning. Finally, we look ahead to the future of CI plus AI by investigating new materials, brain science, and new computing techniques to shed light on new directions for mobile vision systems.

    Development and Characterization of a Chromotomosynthetic Hyperspectral Imaging System

    A chromotomosynthetic imaging (CTI) methodology based upon mathematical reconstruction of a set of 2-D spectral projections into a high-speed (100 Hz) 3-D hyperspectral data cube has been proposed. The CTI system can simultaneously provide usable 3-D spatial and spectral information, provide high-frame-rate slitless 1-D spectra, and generate 2-D imagery equivalent to that collected with no prism in the optical system. The wavelength region where prism dispersion is highest (around 500 nm) is most sensitive to loss of spectral resolution in the presence of systematic error, while wavelengths above 600 nm suffer mostly from a shift of the spectral peaks. The quality of the spectral resolution in the reconstructed hyperspectral imagery was degraded by as much as a factor of two in the blue spectral region with less than 1° total angular error in mount alignment in the two axes of freedom. Even with no systematic error, spatial artifacts from the reconstruction limit the ability to provide adequate spectral imagery without specialized image reconstruction techniques as targets become more spatially and spectrally uniform.

    Synthetic Aperture Radar (SAR) Meets Deep Learning

    This reprint focuses on applications that combine synthetic aperture radar (SAR) with deep learning technology, aiming to further promote the development of intelligent SAR image interpretation. A synthetic aperture radar is an important active microwave imaging sensor whose all-day, all-weather operating capability gives it an important place in the remote sensing community. Since the United States launched the first SAR satellite, SAR has received much attention in the remote sensing community, e.g., in geological exploration, topographic mapping, disaster forecasting, and traffic monitoring. It is therefore valuable and meaningful to study SAR-based remote sensing applications. In recent years, deep learning, represented by convolutional neural networks, has driven significant progress in the computer vision community, e.g., in face recognition, driverless vehicles, and the Internet of Things (IoT). Deep learning enables computational models with multiple processing layers to learn data representations with multiple levels of abstraction, which can greatly improve the performance of various applications. This reprint provides a platform for researchers to address these significant challenges and present their innovative and cutting-edge research results when applying deep learning to SAR, in various manuscript types, e.g., articles, letters, reviews, and technical reports.

    Novel Hybrid-Learning Algorithms for Improved Millimeter-Wave Imaging Systems

    Increasing attention is being paid to millimeter-wave (mmWave), 30 GHz to 300 GHz, and terahertz (THz), 300 GHz to 10 THz, sensing applications including security sensing, industrial packaging, medical imaging, and non-destructive testing. Traditional methods for perception and imaging are challenged by novel data-driven algorithms that offer improved resolution, localization, and detection rates. Over the past decade, deep learning technology has garnered substantial popularity, particularly in perception and computer vision applications. Whereas conventional signal processing techniques are more easily generalized to various applications, hybrid approaches, in which signal processing and learning-based algorithms are interleaved, offer a promising compromise between performance and generalizability. Furthermore, such hybrid algorithms improve model training by leveraging the known characteristics of radio frequency (RF) waveforms, thus yielding more efficiently trained deep learning algorithms and offering higher performance than conventional methods. This dissertation introduces novel hybrid-learning algorithms for improved mmWave imaging systems applicable to a host of problems in perception and sensing. Various problem spaces are explored, including static and dynamic gesture classification; precise hand localization for human-computer interaction; high-resolution near-field mmWave imaging using forward synthetic aperture radar (SAR); SAR under irregular scanning geometries; mmWave image super-resolution using deep neural network (DNN) and Vision Transformer (ViT) architectures; and data-level multiband radar fusion using a novel hybrid-learning architecture. Furthermore, we introduce several novel approaches for deep learning model training and dataset synthesis. Comment: PhD Dissertation Submitted to UTD ECE Department

    Coherent Change Detection Under a Forest Canopy

    Coherent change detection (CCD) is an established technique for remotely monitoring landscapes with minimal vegetation or buildings. By evaluating the local complex correlation between a pair of synthetic aperture radar (SAR) images acquired on repeat passes of an airborne or spaceborne imaging radar system, a map of the scene coherence is obtained. Subtle disturbances of the ground are detected as areas of low coherence in the surface clutter. This thesis investigates extending CCD to monitor the ground in a forest. It is formulated as a multichannel dual-layer coherence estimation problem, where the coherence of scattering from the ground is estimated after suppressing interference from the canopy by vertically beamforming multiple image channels acquired at slightly different grazing angles on each pass. This 3D SAR beamforming must preserve the phase of the ground response. The choice of operating wavelength is considered in terms of the trade-off between foliage penetration and change sensitivity. A framework for comparing the performance of different radar designs and beamforming algorithms, as well as assessing the sensitivity to error, is built around the random-volume-over-ground (RVOG) model of forest scattering. If the ground and volume scattering contributions in the received echo are of similar strength, it is shown that an L-band array of just three channels can provide enough volume attenuation to permit reasonable estimation of the ground coherence. The proposed method is demonstrated using an RVOG clutter simulation and a modified version of the physics-based SAR image simulator PolSARproSim. Receiver operating characteristics show that whilst ordinary single-channel CCD is unusable when a canopy is present, 3D SAR CCD permits reasonable detection performance. 
A novel polarimetric filtering algorithm is also proposed to remove contributions from the ground-trunk double-bounce scattering mechanism, which may mask changes on the ground near trees. To enable this kind of polarimetric processing, fully polarimetric data must be acquired and calibrated. Motivated by an interim version of the Ingara airborne imaging radar, which used a pair of helical antennas to acquire circularly polarised data, techniques for the estimation of polarimetric distortion in the circular basis are investigated. It is shown that the standard approach to estimating cross-talk in the linear basis, whereby expressions for the distortion of reflection-symmetric clutter are linearised and solved, cannot be adapted to the circular basis, because the first-order effects of individual cross-talk parameters cannot be distinguished. An alternative approach is proposed that uses ordinary and gridded trihedral corner reflectors, and optionally dihedrals, to iteratively estimate the channel imbalance and cross-talk parameters. Monte Carlo simulations show that the method reliably converges to the true parameter values. Ingara data is calibrated using the method, with broadly consistent parameter estimates obtained across flights. Genuine scene changes may be masked by coherence loss that arises when the bands of spatial frequencies supported by the two passes do not match. Trimming the spatial-frequency bands to their common area of support would remove these uncorrelated contributions, but the bands, and therefore the required trim, depend on the effective collection geometry at each pixel position. The precise dependence on local slope and collection geometry is derived in this thesis. Standard methods of SAR image formation use a flat focal plane and allow only a single global trim, which leads to spatially varying coherence loss when the terrain is undulating. 
An image-formation algorithm is detailed that exploits the flexibility offered by back-projection not only to focus the image onto a surface matched to the scene topography but also to allow spatially adaptive trimming. Improved coherence is demonstrated in simulation and using data from two airborne radar systems. Thesis (Ph.D.) -- University of Adelaide, School of Electrical & Electronic Engineering, 202
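The coherence map at the heart of CCD is the windowed sample estimate of the complex correlation coefficient between two co-registered SAR images, γ = E[s₁s₂*]/√(E[|s₁|²]E[|s₂|²]). A minimal NumPy sketch, using a boxcar estimation window (window size and names are illustrative choices, not those of the thesis):

```python
import numpy as np

def local_coherence(img1, img2, win=5):
    """Sample coherence |gamma| of two co-registered complex SAR images.

    Estimated over a win x win boxcar window; low |gamma| flags
    disturbance of the scene between the two acquisitions.
    """
    kern = np.ones(win)
    def boxsum(a):
        # separable 2-D moving sum via two 1-D convolutions
        a = np.apply_along_axis(np.convolve, 0, a, kern, 'same')
        return np.apply_along_axis(np.convolve, 1, a, kern, 'same')
    num = boxsum(img1 * np.conj(img2))                       # cross-correlation term
    den = np.sqrt(boxsum(np.abs(img1) ** 2) * boxsum(np.abs(img2) ** 2))
    return np.abs(num) / np.maximum(den, 1e-12)              # guard against empty cells
```

In the forest scenario of this thesis, the same estimator is applied after 3D SAR beamforming has suppressed the canopy return, so that the coherence being mapped is that of the ground layer rather than of the total two-layer echo.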