Synthetic Radar Dataset Generator for Macro-Gesture Recognition
Recent developments in mmWave technology allow the detection and classification of dynamic arm gestures. However, achieving high accuracy and good generalization requires many samples for training a machine learning model. Moreover, capturing the variability within each gesture class requires many subjects performing many gestures at different arm speeds. For macro-gestures, the subject's position within the device's field of view must also vary. Collecting such data demands significant time and effort, which must be repeated whenever the sensor hardware or the modulation parameters change. To reduce this manual effort, we developed a synthetic data generator capable of simulating seven arm gestures using Blender, an open-source 3D creation suite. We used it to generate 600 artificial samples with varying execution speed and relative position of the simulated subject, and trained a machine learning model on them. We tested the model on a real dataset recorded from ten subjects using an experimental sensor. The test set yielded 84.2% accuracy, indicating that synthetic data generation can contribute significantly to the pre-training of a model.
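The per-sample randomization described above (gesture class, execution speed, and subject position inside the field of view) can be sketched as follows. The gesture names, parameter ranges, and field-of-view bounds here are illustrative assumptions, not the generator's actual configuration:

```python
import random

# Seven macro-gesture classes (names are hypothetical placeholders)
GESTURES = ["swipe_left", "swipe_right", "push", "pull",
            "raise_arm", "lower_arm", "circle"]

def sample_config(rng: random.Random) -> dict:
    """Draw one randomized configuration for a synthetic gesture sample:
    the gesture class, a speed multiplier for the arm animation, and the
    simulated subject's position relative to the sensor."""
    return {
        "gesture": rng.choice(GESTURES),
        "speed_factor": rng.uniform(0.6, 1.4),   # slower / faster execution
        "position_m": (rng.uniform(-1.5, 1.5),   # lateral offset (m), assumed FoV
                       rng.uniform(1.0, 4.0)),   # range from sensor (m), assumed
    }

def make_dataset(n_samples: int, seed: int = 0) -> list:
    """Generate n_samples randomized configurations, reproducibly."""
    rng = random.Random(seed)
    return [sample_config(rng) for _ in range(n_samples)]

dataset = make_dataset(600)   # matches the 600 artificial samples in the abstract
print(len(dataset))
```

In a real pipeline, each configuration would drive a Blender animation whose rendered motion is then converted into a radar-domain sample; only the parameter-randomization step is shown here.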
Multi-User Gesture Recognition with Radar Technology
The aim of this work is the development of a radar system for consumer applications. It can track multiple people in a room and offers a touchless human-machine interface for purposes ranging from entertainment to hygiene.
Emerging Approaches for THz Array Imaging: A Tutorial Review and Software Tool
Accelerated by the increasing attention drawn by 5G, 6G, and Internet of Things applications, communication and sensing technologies have rapidly evolved from millimeter-wave (mmWave) to terahertz (THz) in recent years. Enabled by significant advancements in electromagnetic (EM) hardware, the mmWave and THz frequency regimes, spanning 30 GHz to 300 GHz and 300 GHz to 3000 GHz, respectively, can be employed for a host of applications. The main feature of THz systems is high-bandwidth transmission, enabling ultra-high-resolution imaging and high-throughput communications; however, challenges in both the hardware and algorithmic arenas remain for the ubiquitous adoption of THz technology. Spectra comprising mmWave and THz frequencies are well suited for synthetic aperture radar (SAR) imaging at sub-millimeter resolutions across a wide spectrum of tasks such as material characterization and nondestructive testing (NDT). This article provides a tutorial review of systems and algorithms for near-field THz SAR, with an emphasis on emerging algorithms that combine signal processing and machine learning techniques. As part of this study, an overview of classical and data-driven THz SAR algorithms is provided, focusing on object detection for security applications and SAR image super-resolution. We also discuss relevant issues, challenges, and future research directions for emerging algorithms and THz SAR, including standardization of system and algorithm benchmarking, adoption of state-of-the-art deep learning techniques, signal-processing-optimized machine learning, and hybrid data-driven signal processing algorithms.
Comment: Submitted to Proceedings of IEEE
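As a concrete illustration of the classical side of this pipeline, below is a minimal NumPy sketch of near-field SAR image formation by backprojection (matched filtering) over a 1-D scanned aperture with stepped frequencies. The array geometry, frequency band, and depths are assumed for the example and do not correspond to any particular system in the review:

```python
import numpy as np

C = 3e8  # speed of light (m/s)

def backprojection(echo, ant_xy, freqs, grid_xy, z0):
    """Classical near-field SAR backprojection: for each image pixel,
    coherently sum the measurements after removing the round-trip phase
    2*k*d to every antenna position at every stepped frequency.
    echo    : (n_ant, n_freq) complex monostatic measurements
    ant_xy  : (n_ant, 2) antenna positions on the z = 0 scan plane
    freqs   : (n_freq,) stepped frequencies (Hz)
    grid_xy : (n_pix, 2) pixel positions in the image plane at depth z0
    """
    k = 2 * np.pi * freqs / C                     # wavenumbers
    image = np.zeros(len(grid_xy), dtype=complex)
    for i, p in enumerate(grid_xy):
        # one-way range from each antenna to this pixel
        d = np.sqrt(np.sum((ant_xy - p) ** 2, axis=1) + z0 ** 2)
        image[i] = np.sum(echo * np.exp(2j * d[:, None] * k[None, :]))
    return np.abs(image)

# quick self-check: the echo of a single point target should focus
# back onto the target's own pixel
n_ant, n_freq = 32, 64
ant = np.stack([np.linspace(-0.1, 0.1, n_ant), np.zeros(n_ant)], axis=1)
f = np.linspace(77e9, 81e9, n_freq)               # assumed mmWave band
grid = np.stack([np.linspace(-0.05, 0.05, 21), np.zeros(21)], axis=1)
z0, target_idx = 0.3, 15
d_true = np.sqrt(np.sum((ant - grid[target_idx]) ** 2, axis=1) + z0 ** 2)
echo = np.exp(-2j * d_true[:, None] * (2 * np.pi * f / C)[None, :])
img = backprojection(echo, ant, f, grid, z0)
print(int(np.argmax(img)))
```

The data-driven methods surveyed in the article augment or replace this matched-filter step, e.g. by applying learned super-resolution to the backprojected image.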
Novel Hybrid-Learning Algorithms for Improved Millimeter-Wave Imaging Systems
Increasing attention is being paid to millimeter-wave (mmWave), 30 GHz to 300 GHz, and terahertz (THz), 300 GHz to 10 THz, sensing applications including security sensing, industrial packaging, medical imaging, and non-destructive testing. Traditional methods for perception and imaging are challenged by novel data-driven algorithms that offer improved resolution, localization, and detection rates. Over the past decade, deep learning technology has garnered substantial popularity, particularly in perception and computer vision applications. Whereas conventional signal processing techniques are more easily generalized to various applications, hybrid approaches, where signal processing and learning-based algorithms are interleaved, pose a promising compromise between performance and generalizability. Furthermore, such hybrid algorithms improve model training by leveraging the known characteristics of radio frequency (RF) waveforms, thus yielding more efficiently trained deep learning algorithms and offering higher performance than conventional methods. This dissertation introduces novel hybrid-learning algorithms for improved mmWave imaging systems applicable to a host of problems in perception and sensing. Various problem spaces are explored, including static and dynamic gesture classification; precise hand localization for human-computer interaction; high-resolution near-field mmWave imaging using forward synthetic aperture radar (SAR); SAR under irregular scanning geometries; mmWave image super-resolution using deep neural network (DNN) and Vision Transformer (ViT) architectures; and data-level multiband radar fusion using a novel hybrid-learning architecture. Furthermore, we introduce several novel approaches for deep learning model training and dataset synthesis.
Comment: PhD Dissertation Submitted to UTD ECE Department
Computational Imaging and Artificial Intelligence: The Next Revolution of Mobile Vision
Signal capture stands at the forefront of perceiving and understanding the environment, and thus imaging plays a pivotal role in mobile vision. Recent explosive progress in Artificial Intelligence (AI) has shown great potential for developing advanced mobile platforms with new imaging devices. Traditional imaging systems based on the "capturing images first and processing afterwards" mechanism cannot meet this unprecedented demand. In contrast, Computational Imaging (CI) systems are designed to capture high-dimensional data in an encoded manner to provide more information for mobile vision systems. Thanks to AI, CI can now be used in real systems by integrating deep learning algorithms into the mobile vision platform to achieve a closed loop of intelligent acquisition, processing, and decision making, thus leading to the next revolution of mobile vision. Starting from the history of mobile vision using digital cameras, this work first introduces the advances of CI in diverse applications and then conducts a comprehensive review of current research topics combining CI and AI. Motivated by the fact that most existing studies only loosely connect CI and AI (usually using AI to improve the performance of CI, with only limited works deeply connecting the two), we propose a framework to deeply integrate CI and AI, using the example of self-driving vehicles with high-speed communication, edge computing, and traffic planning. Finally, we envision the future of CI plus AI by investigating new materials, brain science, and new computing techniques to shed light on new directions for mobile vision systems.
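The "capture encoded, then decode computationally" idea behind CI can be illustrated with a toy single-pixel-camera-style measurement model; this sketch is not from the paper, and the scene size, mask count, and least-squares decoder are assumptions made purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny "scene" of 16 pixels is measured through coded (random binary)
# masks: each measurement is one detector reading y_i = <mask_i, scene>.
# This is the encoded capture step of a computational imaging system.
n, m = 16, 48                    # overdetermined: more masks than pixels
scene = rng.random(n)
masks = rng.integers(0, 2, (m, n)).astype(float)
y = masks @ scene                # encoded measurements, not a direct image

# Decoding step: recover the scene from the coded measurements.
# A least-squares solve stands in for the learned decoders used in practice.
recovered, *_ = np.linalg.lstsq(masks, y, rcond=None)
print(np.allclose(recovered, scene))
```

In the CI systems the paper surveys, the linear decoder above is typically replaced by a deep network trained jointly with the encoding, which is what enables the closed loop of acquisition, processing, and decision making.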
Jump Particle Filtering Framework for Joint Target Tracking and Intent Recognition
This paper presents a Bayesian framework for inferring the posterior of the extended state of a target, incorporating its underlying goal or intent, such as any intermediate waypoints and/or final destination. The methodology is thus for joint tracking and intent recognition. Several novel latent intent models are proposed here within a virtual leader formulation. They capture the influence of the target's hidden goal on its instantaneous behaviour. In this context, various motion models, including for highly maneuvering objects, are also considered. The a priori unknown target intent (e.g. destination) can dynamically change over time and take any value within the state space (e.g. a location or spatial region). A sequential Monte Carlo (particle filtering) approach is introduced for the simultaneous estimation of the target's (kinematic) state and its intent. Rao-Blackwellisation is employed to enhance the statistical performance of the inference routine. Simulated data and real radar measurements are used to demonstrate the efficacy of the proposed techniques.
Comment: Submitted to IEEE Transactions on Aerospace and Electronic Systems (T-AES)
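For intuition only, here is a heavily simplified 1-D bootstrap particle filter that jointly estimates a target's kinematic state and a discrete destination intent, with occasional intent jumps. The paper's virtual-leader models, continuous intent space, and Rao-Blackwellised routine are not reproduced; all destinations, dynamics constants, and noise levels below are invented for the illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

DESTS = np.array([-10.0, 0.0, 10.0])   # candidate destinations (illustrative)
N, T, dt = 500, 40, 0.5                # particles, time steps, step size
q, r, p_jump = 0.2, 0.5, 0.02          # process noise, meas. noise, jump prob.

# simulate a truth track: a damped pull toward the goal DESTS[2]
x_true, v_true, goal = -8.0, 0.0, DESTS[2]
zs = []
for _ in range(T):
    v_true += 0.3 * (goal - x_true) * dt - 0.5 * v_true * dt
    x_true += v_true * dt
    zs.append(x_true + rng.normal(0, r))

# particles carry position, velocity, and a discrete intent index
px = rng.normal(-8.0, 1.0, N)
pv = np.zeros(N)
pi_ = rng.integers(0, len(DESTS), N)

for z in zs:
    # the latent intent may jump to a new destination with small probability
    jump = rng.random(N) < p_jump
    pi_[jump] = rng.integers(0, len(DESTS), jump.sum())
    # damped pull toward each particle's own hypothesized destination
    pv += 0.3 * (DESTS[pi_] - px) * dt - 0.5 * pv * dt + rng.normal(0, q, N)
    px += pv * dt
    # weight by the Gaussian measurement likelihood, then resample
    w = np.exp(-0.5 * ((z - px) / r) ** 2)
    w /= w.sum()
    idx = rng.choice(N, N, p=w)
    px, pv, pi_ = px[idx], pv[idx], pi_[idx]

# posterior over destinations: mass should concentrate on the true goal
post = np.bincount(pi_, minlength=len(DESTS)) / N
print("P(destination):", dict(zip(DESTS, post.round(2))))
```

Because particles with the wrong destination hypothesis are pulled away from the measurements and resampled out, the intent posterior concentrates on the true goal while the same particle cloud tracks the kinematic state.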
Integrated Sensing and Communications: Towards Dual-functional Wireless Networks for 6G and Beyond
As the standardization of 5G solidifies, researchers are speculating what 6G will be. The integration of sensing functionality is emerging as a key feature of the 6G Radio Access Network (RAN), allowing for the exploitation of dense cell infrastructures to construct a perceptive network. In this IEEE Journal on Selected Areas in Communications (JSAC) Special Issue overview, we provide a comprehensive review of the background, range of key applications, and state-of-the-art approaches of Integrated Sensing and Communications (ISAC). We commence by discussing the interplay between sensing and communications (S&C) from a historical point of view, and then consider the multiple facets of ISAC and the resulting performance gains. By introducing both ongoing and potential use cases, we shed light on the industrial progress and standardization activities related to ISAC. We analyze a number of performance tradeoffs between S&C, spanning from information theoretical limits to physical layer performance tradeoffs and cross-layer design tradeoffs. Next, we discuss the signal processing aspects of ISAC, namely ISAC waveform design and receive signal processing. As a step further, we provide our vision on the deeper integration between S&C within the framework of perceptive networks, where the two functionalities are expected to mutually assist each other, i.e., via communication-assisted sensing and sensing-assisted communications. Finally, we identify the potential integration of ISAC with other emerging communication technologies, and their positive impacts on the future of wireless networks.
A Review of Indoor Millimeter Wave Device-based Localization and Device-free Sensing Technologies and Applications
The commercial availability of low-cost millimeter wave (mmWave) communication and radar devices is starting to improve the penetration of such technologies in consumer markets, paving the way for large-scale and dense deployments in fifth-generation (5G)-and-beyond as well as 6G networks. At the same time, pervasive mmWave access will enable device localization and device-free sensing with unprecedented accuracy, especially with respect to sub-6 GHz commercial-grade devices. This paper surveys the state of the art in device-based localization and device-free sensing using mmWave communication and radar devices, with a focus on indoor deployments. We first overview key concepts of mmWave signal propagation and system design. Then, we provide a detailed account of approaches and algorithms for localization and sensing enabled by mmWaves. We consider several dimensions in our analysis, including the main objectives, techniques, and performance of each work, whether each study reached some degree of implementation, and which hardware platforms were used for this purpose. We conclude by arguing that better algorithms for consumer-grade devices, data fusion methods for dense deployments, and an educated application of machine learning methods are promising, relevant, and timely research directions.
Comment: 43 pages, 13 figures. Accepted in IEEE Communications Surveys & Tutorials (IEEE COMST)