7 research outputs found
An Adversarial Super-Resolution Remedy for Radar Design Trade-offs
Radar is of vital importance in many fields, such as autonomous driving,
safety, and surveillance applications. However, it suffers from stringent
constraints on its design parametrization, leading to multiple trade-offs. For
example, the bandwidth in FMCW radars is inversely proportional to both the
maximum unambiguous range and the range resolution. In this work, we introduce a
new method for circumventing radar design trade-offs. We propose the use of
recent advances in computer vision, more specifically generative adversarial
networks (GANs), to enhance low-resolution radar acquisitions into higher
resolution counterparts while maintaining the advantages of the low-resolution
parametrization. The capability of the proposed method was evaluated on the
velocity resolution and range-azimuth trade-offs in micro-Doppler signatures
and FMCW uniform linear array (ULA) radars, respectively.
Comment: Accepted in EUSIPCO 2019, 5 pages
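The bandwidth trade-off mentioned in the abstract can be made concrete with the standard FMCW relations: range resolution improves with bandwidth (dR = c/2B), while for a fixed chirp duration and ADC sample rate the maximum unambiguous range shrinks as bandwidth grows. The sketch below uses illustrative parameter values, not ones from the paper:

```python
# Hedged sketch of the FMCW bandwidth trade-off; parameter values are
# illustrative assumptions, not taken from the paper.
C = 3e8  # speed of light, m/s

def range_resolution(bandwidth_hz):
    """Range resolution dR = c / (2B): finer as bandwidth grows."""
    return C / (2 * bandwidth_hz)

def max_unambiguous_range(bandwidth_hz, chirp_time_s, sample_rate_hz):
    """For a sawtooth FMCW chirp, R_max = f_s * c * T_c / (2B):
    with fixed chirp time and sample rate, R_max shrinks as B grows."""
    return sample_rate_hz * C * chirp_time_s / (2 * bandwidth_hz)

for B in (150e6, 300e6, 600e6):  # Hz
    dr = range_resolution(B)
    rmax = max_unambiguous_range(B, chirp_time_s=60e-6, sample_rate_hz=10e6)
    print(f"B={B/1e6:.0f} MHz: dR={dr:.2f} m, R_max={rmax:.0f} m")
```

Doubling the bandwidth halves both the resolution cell and the maximum range, which is exactly the coupling the proposed GAN-based super-resolution aims to sidestep.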
AeGAN: Time-Frequency Speech Denoising via Generative Adversarial Networks
Automatic speech recognition (ASR) systems are of vital importance nowadays
in commonplace tasks such as speech-to-text processing and language
translation. This has created the need for ASR systems that can operate in
realistic, crowded environments. Thus, speech enhancement is a valuable building
block in ASR systems and other applications such as hearing aids, smartphones
and teleconferencing systems. In this paper, a generative adversarial network
(GAN) based framework is investigated for the task of speech enhancement, more
specifically speech denoising of audio tracks. A new architecture based on
a CasNet generator and an additional feature-based loss are incorporated to
obtain realistically denoised speech phonetics. Finally, the proposed framework is
shown to outperform other learning and traditional model-based speech
enhancement approaches.
Comment: 5 pages, 4 figures and 2 tables. Accepted in EUSIPCO 202
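The feature-based loss mentioned above can be sketched generically: an L1 penalty between feature representations of the denoised and clean spectrograms, added alongside the adversarial term. The `extract_features` function here is a hypothetical stand-in (a real system would use an intermediate layer of a trained network), so treat this as an illustration of the idea rather than the paper's actual loss:

```python
import numpy as np

def extract_features(spectrogram):
    # Placeholder "feature extractor": log-magnitude compression.
    # A real feature-based loss would use activations of a trained network.
    return np.log1p(np.abs(spectrogram))

def feature_loss(denoised, clean):
    """Mean absolute difference between the two signals in feature space."""
    return np.mean(np.abs(extract_features(denoised) - extract_features(clean)))

rng = np.random.default_rng(0)
clean = rng.standard_normal((128, 64))          # stand-in clean spectrogram
noisy = clean + 0.1 * rng.standard_normal((128, 64))
print(feature_loss(noisy, clean))  # nonzero; zero only for a perfect match
```

Penalizing differences in a feature space, rather than raw samples, is what pushes the generator toward perceptually plausible phonetics instead of merely low sample-wise error.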
Ground Weather RADAR Signal Characterization through Application of Convolutional Neural Networks
The 45th Weather Squadron supports the space launch efforts out of the Kennedy Space Center and Cape Canaveral Air Force Station for the Department of Defense, NASA, and commercial customers through weather assessments. Their assessment of the Lightning Launch Commit Criteria (LLCC) for avoidance of natural and rocket-triggered lightning to launch vehicles is critical in approving space shuttle and rocket launches. The LLCC includes standards for cloud formations, which require proper cloud identification and characterization methods. Accurate reflectivity measurements from ground weather radar are important to meet the LLCC for rocket-triggered lightning. Current linear interpolation methods for filling ground weather radar gaps over-smooth the vertical gradient and over-estimate the risk of rocket-triggered lightning, potentially resulting in costly, unnecessarily delayed launches. This research extends existing convolutional-neural-network methods for two-dimensional image interpolation, known as inpainting, to the three-dimensional weather radar scan domain. Results demonstrate that convolutional neural networks can improve the accuracy of cloud characterization over current interpolation methods, potentially resulting in fewer launch delays and substantial associated cost savings due to an increased capability to meet the LLCC.
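The over-smoothing problem the abstract attributes to linear interpolation can be seen on a tiny synthetic example: interpolating linearly across a gap in a vertical reflectivity profile flattens any sharp gradient inside the gap. The values below are invented for illustration, not real radar data:

```python
import numpy as np

# Synthetic vertical reflectivity profile (dBZ) with a sharp core
# at gates 2-3; pretend those two gates fall inside a radar gap.
profile = np.array([10.0, 12.0, 40.0, 42.0, 8.0, 7.0])
gap_gates = [2, 3]

known_idx = np.array([0, 1, 4, 5])
interp = np.interp(gap_gates, known_idx, profile[known_idx])
print("true values:", profile[gap_gates], "linear interpolation:", interp)
# Linear interpolation replaces the sharp 40+ dBZ core with smooth
# intermediate values, flattening the vertical gradient entirely.
```

An inpainting network, by contrast, can learn from surrounding scans that such cores exist and reconstruct the gradient instead of averaging it away, which is the improvement in cloud characterization the research reports.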
Coherent, super-resolved radar beamforming using self-supervised learning
High-resolution automotive radar sensors are required to meet the demanding
needs and regulations of autonomous vehicles. However, current radar
systems are limited in their angular resolution, causing a technological gap. The
industry and academic trend of improving angular resolution by increasing the
number of physical channels also increases system complexity, requires
sensitive calibration processes, lowers robustness to hardware malfunctions, and
drives higher costs. We offer an alternative approach, named Radar signal
Reconstruction using Self Supervision (R2-S2), which significantly improves the
angular resolution of a given radar array without increasing the number of
physical channels. R2-S2 is a family of algorithms that use a deep neural
network (DNN) with complex range-Doppler radar data as input, trained in a
self-supervised manner with a loss function that operates in multiple
data-representation spaces. A 4x improvement in angular resolution was demonstrated
on a real-world dataset collected in urban and highway environments during
clear and rainy weather conditions.
Comment: 28 pages, 10 figures
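The cost of the conventional route (more physical channels) follows from the standard uniform-linear-array relation: angular resolution scales roughly as lambda / (N * d) at broadside, so a 4x resolution gain classically requires about 4x the aperture. A minimal sketch, assuming half-wavelength element spacing and an illustrative 77 GHz wavelength:

```python
import math

# Hedged sketch: broadside ULA angular resolution ~ lambda / (N * d).
# Wavelength and spacing are illustrative assumptions (77 GHz automotive band).
def angular_resolution_deg(n_channels, wavelength=3.9e-3):
    d = wavelength / 2  # half-wavelength element spacing
    return math.degrees(wavelength / (n_channels * d))

for n in (8, 32):
    print(f"{n} channels: ~{angular_resolution_deg(n):.2f} deg resolution")
```

Quadrupling the channel count quadruples the resolution, which is the hardware cost that R2-S2 claims to avoid by recovering the same 4x gain in software from a fixed array.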
Novel Hybrid-Learning Algorithms for Improved Millimeter-Wave Imaging Systems
Increasing attention is being paid to millimeter-wave (mmWave, 30 GHz to 300
GHz) and terahertz (THz, 300 GHz to 10 THz) sensing applications, including
security sensing, industrial packaging, medical imaging, and non-destructive
testing. Traditional methods for perception and imaging are challenged by novel
data-driven algorithms that offer improved resolution, localization, and
detection rates. Over the past decade, deep learning technology has garnered
substantial popularity, particularly in perception and computer vision
applications. Whereas conventional signal processing techniques are more easily
generalized to various applications, hybrid approaches that interleave signal
processing and learning-based algorithms offer a promising compromise
between performance and generalizability. Furthermore, such hybrid algorithms
improve model training by leveraging the known characteristics of radio
frequency (RF) waveforms, thus yielding more efficiently trained deep learning
algorithms and offering higher performance than conventional methods. This
dissertation introduces novel hybrid-learning algorithms for improved mmWave
imaging systems applicable to a host of problems in perception and sensing.
Various problem spaces are explored, including static and dynamic gesture
classification; precise hand localization for human-computer interaction;
high-resolution near-field mmWave imaging using forward synthetic aperture
radar (SAR); SAR under irregular scanning geometries; mmWave image
super-resolution using deep neural network (DNN) and Vision Transformer (ViT)
architectures; and data-level multiband radar fusion using a novel
hybrid-learning architecture. Furthermore, we introduce several novel
approaches for deep learning model training and dataset synthesis.
Comment: PhD Dissertation Submitted to UTD ECE Department