85 research outputs found
Towards Large-scale Single-shot Millimeter-wave Imaging for Low-cost Security Inspection
Millimeter-wave (MMW) imaging is emerging as a promising technique for safe
security inspection. It achieves a delicate balance between imaging resolution,
penetrability, and human safety, offering higher resolution than low-frequency
microwave, stronger penetration than visible light, and greater safety than
X-ray. Despite recent advances over the last decades, the high cost of the
requisite large-scale antenna array hinders
widespread adoption of MMW imaging in practice. To tackle this challenge, we
report a large-scale single-shot MMW imaging framework using sparse antenna
array, achieving low-cost but high-fidelity security inspection under an
interpretable learning scheme. We first collected extensive full-sampled MMW
echoes to study the statistical ranking of each element in the large-scale
array. These elements are then sampled based on the ranking, yielding an
experimentally optimal sparse sampling strategy that reduces the cost of the
antenna array by up to an order of magnitude. Additionally, we derived an
untrained interpretable learning scheme, which realizes robust and accurate
image reconstruction from sparsely sampled echoes. Lastly, we developed a neural
network for automatic object detection, and experimentally demonstrated
successful detection of concealed centimeter-sized targets using 10% sparse
array, whereas all other contemporary approaches failed at the same sampling
ratio. The reported technique outperforms existing MMW imaging schemes by more
than 50% on various metrics, including precision, recall, and mAP50. With such
strong detection ability and
order-of-magnitude cost reduction, we anticipate that this technique provides a
practical route to large-scale single-shot MMW imaging, and could promote its
further practical application.
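The element-ranking and sparse-sampling step described above can be sketched as follows; the energy-based ranking statistic and the 10% budget are illustrative assumptions for this sketch, not the authors' exact criterion.

```python
import numpy as np

def rank_based_sparse_mask(echoes, keep_ratio=0.1):
    """Select array elements by a statistical ranking of their echoes.

    echoes: (n_snapshots, n_elements) complex array of full-sampled MMW echoes.
    Returns a boolean mask keeping the top `keep_ratio` fraction of elements.
    Ranking by mean echo energy is an illustrative choice, not the paper's.
    """
    energy = np.mean(np.abs(echoes) ** 2, axis=0)   # per-element statistic
    n_keep = max(1, int(round(keep_ratio * echoes.shape[1])))
    top = np.argsort(energy)[::-1][:n_keep]          # highest-ranked elements
    mask = np.zeros(echoes.shape[1], dtype=bool)
    mask[top] = True
    return mask

# Toy example: 1000-element array sampled down to 10%
rng = np.random.default_rng(0)
echoes = rng.standard_normal((200, 1000)) + 1j * rng.standard_normal((200, 1000))
mask = rank_based_sparse_mask(echoes, keep_ratio=0.1)
print(mask.sum())  # 100 elements retained, roughly a 10x cost reduction
```

The retained mask would then drive both the hardware layout and the sparse measurement model used by the reconstruction scheme.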
Emerging Approaches for THz Array Imaging: A Tutorial Review and Software Tool
Accelerated by the increasing attention drawn by 5G, 6G, and Internet of
Things applications, communication and sensing technologies have rapidly
evolved from millimeter-wave (mmWave) to terahertz (THz) in recent years.
Enabled by significant advancements in electromagnetic (EM) hardware, mmWave
and THz frequency regimes spanning 30 GHz to 300 GHz and 300 GHz to 3000 GHz,
respectively, can be employed for a host of applications. The main feature of
THz systems is high-bandwidth transmission, enabling ultra-high-resolution
imaging and high-throughput communications; however, challenges in both the
hardware and algorithmic arenas remain for the ubiquitous adoption of THz
technology. Spectra comprising mmWave and THz frequencies are well-suited for
synthetic aperture radar (SAR) imaging at sub-millimeter resolutions for a wide
spectrum of tasks like material characterization and nondestructive testing
(NDT). This article provides a tutorial review of systems and algorithms for
THz SAR in the near-field with an emphasis on emerging algorithms that combine
signal processing and machine learning techniques. As part of this study, an
overview of classical and data-driven THz SAR algorithms is provided, focusing
on object detection for security applications and SAR image super-resolution.
We also discuss relevant issues, challenges, and future research directions for
emerging algorithms and THz SAR, including standardization of system and
algorithm benchmarking, adoption of state-of-the-art deep learning techniques,
signal processing-optimized machine learning, and hybrid data-driven signal
processing algorithms. Comment: Submitted to Proceedings of IEE
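The link claimed above between high bandwidth and ultra-high-resolution imaging follows from the standard radar range-resolution relation, delta_r = c / (2B). A quick numerical check, with illustrative bandwidth values rather than any specific system from the review:

```python
# Theoretical radar range resolution delta_r = c / (2 * B).
C = 299_792_458.0  # speed of light, m/s

def range_resolution_mm(bandwidth_hz):
    """Range resolution in millimeters for a given signal bandwidth."""
    return C / (2.0 * bandwidth_hz) * 1e3

# Illustrative bandwidths: a few GHz (typical mmWave FMCW radar)
# versus the tens of GHz available in the THz regime.
for b in (4e9, 40e9, 100e9):
    print(f"B = {b/1e9:5.0f} GHz -> delta_r = {range_resolution_mm(b):.2f} mm")
```

Widening the bandwidth by an order of magnitude tightens the resolution by the same factor, which is why the THz regime supports the sub-millimeter SAR imaging discussed in the article.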
Sensor Fusion for Identification of Freezing of Gait Episodes Using Wi-Fi and Radar Imaging
Parkinson’s disease (PD) is a progressive neurodegenerative condition causing motor impairments. One of the major motor-related impairments, and the one presenting the biggest challenge in Parkinson’s patients, is freezing of gait (FOG). During a FOG episode, the patient is unable to initiate, control, or sustain gait, which affects the Activities of Daily Living (ADLs) and increases the occurrence of critical events such as falls. This paper presents continuous monitoring of ADLs and classification of freezing-of-gait episodes using Wi-Fi and radar imaging. The idea is to exploit the multi-resolution scalograms generated from the channel state information (CSI) imprint and the micro-Doppler signatures produced by the reflected radar signal. A total of 120 volunteers took part in the experimental campaign and were asked to perform different activities, including walking fast, walking slowly, stopping voluntarily, sitting down and standing up, and freezing of gait. Two neural networks, an Autoencoder and a proposed enhanced Autoencoder, were used to classify ADLs and FOG episodes through a data fusion process that combines the images acquired from both sensing techniques. The Autoencoder provided an overall classification accuracy of ~87% on the combined datasets. The proposed algorithm provided significantly better results, with an overall accuracy of ~98% using data fusion
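The image-level fusion step described above can be sketched as follows; the short-time FFT stand-in for the multi-resolution scalograms, the toy signals, and the simple channel-stacking fusion are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def spectrogram(sig, win=64, hop=16):
    """Magnitude short-time FFT: a simple stand-in for the scalograms /
    micro-Doppler signatures described in the paper."""
    frames = [sig[i:i + win] * np.hanning(win)
              for i in range(0, len(sig) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1)).T  # (freq, time)

def fuse_images(csi_img, radar_img):
    """Early fusion: normalise each modality and stack as channels."""
    def norm(x):
        return (x - x.min()) / (np.ptp(x) + 1e-9)
    return np.stack([norm(csi_img), norm(radar_img)], axis=-1)

# Toy signals standing in for a Wi-Fi CSI stream and a radar return
t = np.linspace(0, 1, 1024)
csi = np.sin(2 * np.pi * 30 * t) + 0.1 * np.random.default_rng(1).standard_normal(1024)
radar = np.sin(2 * np.pi * 80 * t * (1 + 0.3 * t))  # chirp-like micro-Doppler

fused = fuse_images(spectrogram(csi), spectrogram(radar))
print(fused.shape)  # (freq_bins, time_frames, 2), the input to the classifier
```

The fused two-channel image is what a classifier such as the Autoencoder would consume, so both modalities contribute to every decision.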
Forum Bildverarbeitung 2022
Image processing links camera sensing technology (imaging sensors) with the processing of the sensor data (the images). This combination gives the discipline its particular appeal. This proceedings volume of the "Forum Bildverarbeitung", held on 24-25 November 2022 in Karlsruhe as a joint event of the Karlsruhe Institute of Technology and the Fraunhofer Institute of Optronics, System Technologies and Image Exploitation, contains the papers of the submitted contributions
Low-power neuromorphic sensor fusion for elderly care
Smart wearable systems have become a necessary part of our daily life, with applications ranging from entertainment to healthcare. In the wearable healthcare domain, the development of wearable fall-recognition bracelets based on embedded systems is attracting considerable attention in the market. However, in embedded low-power scenarios, the sensor's signal processing poses additional challenges for machine learning algorithms. Traditional machine learning methods require a huge number of calculations for data classification, making real-time signal processing difficult to implement on low-power embedded systems. Ensuring low-power, real-time data classification while fusing a variety of sensor signals on an embedded system is a huge challenge. This calls for neuromorphic computing with a hardware/software co-design concept. This thesis aims to review various neuromorphic computing algorithms, investigate the feasibility of hardware circuits, and then integrate captured sensor data to realise data classification applications. In addition, it explores a human-activity benchmark dataset, in which activity classification tasks are designed according to different defined levels. In this study, firstly, the data classification algorithm is applied to human movement sensors to validate neuromorphic computing on human activity recognition tasks. Secondly, a data fusion framework is presented that combines multiple sensing signals so that neuromorphic computing achieves sensor fusion and improved classification accuracy. Thirdly, an analog circuit module is proposed to implement a neural network algorithm in low-power, real-time processing hardware, yielding a hardware/software co-design system that combines the above work. 
By adopting multi-sensing signals on the embedded system, the designed software-based feature extraction method helps fuse data from various sensors as input to the neuromorphic computing hardware. Finally, the results show that the classification accuracy of the neuromorphic computing data fusion framework is higher than that of traditional machine learning and deep neural networks, reaching 98.9%. Moreover, the framework can flexibly combine signals from the acquisition hardware: it is not limited to single-sensor data and can use multi-sensing information to improve the algorithm's stability
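A minimal sketch of the neuromorphic building block underlying the thesis: a leaky integrate-and-fire (LIF) neuron driven by a fused multi-sensor feature vector. All constants, the random projection weights, and the simple concatenation fusion are illustrative assumptions, not the thesis design.

```python
import numpy as np

def lif_spike_train(current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Leaky integrate-and-fire neuron: integrates input current, leaks
    toward rest, and emits a spike when the membrane crosses threshold."""
    v, spikes = 0.0, []
    for i_t in current:
        v += dt * (-v / tau + i_t)       # leaky integration
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset                  # reset after each spike
        else:
            spikes.append(0)
    return np.array(spikes)

# Fused input: accelerometer + gyroscope features concatenated per time
# step, then projected to a scalar drive current (illustrative weights).
rng = np.random.default_rng(0)
accel, gyro = rng.random((100, 3)), rng.random((100, 3))
fused = np.concatenate([accel, gyro], axis=1)       # simple sensor fusion
w = rng.random(6) * 0.1
spikes = lif_spike_train(fused @ w)
print(spikes.sum(), "spikes from 100 fused time steps")
```

The spike rate encodes the fused input intensity, which is what makes such neurons attractive for the low-power, event-driven classification the thesis targets.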
Using Radio Frequency and Motion Sensing to Improve Camera Sensor Systems
Camera-based sensor systems have advanced significantly in recent years. This advancement combines improvements in camera CMOS (complementary metal-oxide-semiconductor) hardware with new computer vision (CV) algorithms that better process the rich information captured. As the world becomes more connected and digitized through the increased deployment of various sensors, cameras have become a cost-effective solution, with the advantages of small sensor size, intuitive sensing results, rich visual information, and neural-network-friendly data. The increased deployment and advantages of camera-based sensor systems have fueled applications such as surveillance, object detection, person re-identification, scene reconstruction, visual tracking, pose estimation, and localization. However, camera-based sensor systems have fundamental limitations, such as high power consumption, privacy intrusion, and the inability to see through obstacles and other non-ideal visual conditions such as darkness, smoke, and fog. In this dissertation, we aim to improve the capability and performance of camera-based sensor systems by utilizing additional sensing modalities, such as commodity WiFi and mmWave (millimeter-wave) radios, and ultra-low-power, low-cost sensors such as inertial measurement units (IMUs). In particular, we set out to study three problems: (1) power and storage consumption of continuous-vision wearable cameras, (2) human presence detection, localization, and re-identification in both indoor and outdoor spaces, and (3) augmenting the sensing capability of camera-based systems in non-ideal situations. We propose to use an ultra-low-power, low-cost IMU sensor, along with readily available camera information, to solve the first problem. WiFi devices are utilized in the second problem, where our goal is to reduce the hardware deployment cost and leverage existing WiFi infrastructure as much as possible. 
Finally, we will use a low-cost, off-the-shelf mmWave radar to extend the sensing capability of a camera in non-ideal visual sensing situations
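The first problem, cutting the power and storage cost of a continuous-vision camera with a cheap IMU, can be illustrated with a simple motion-gating rule; the threshold and the toy acceleration trace below are assumptions for the sketch, not the dissertation's actual design.

```python
import numpy as np

def motion_gate(accel_mag, threshold=0.3, gravity=9.81):
    """Per-sample boolean gate: capture camera frames only when the IMU
    shows motion, i.e. |acceleration magnitude - g| exceeds a threshold."""
    return np.abs(accel_mag - gravity) > threshold

# Toy 100 Hz trace: 2 s of stillness, 1 s of motion, 2 s of stillness
rng = np.random.default_rng(0)
still = 9.81 + 0.05 * rng.standard_normal(200)
moving = 9.81 + 2.0 * rng.standard_normal(100)
trace = np.concatenate([still, moving, still])

gate = motion_gate(trace)
print(f"frames captured: {gate.sum()} / {gate.size}")
```

Gating the camera on the IMU this way lets the vision pipeline sleep during the still intervals, which is where the power and storage savings come from.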
Multi-Modal 3D Object Detection in Autonomous Driving: a Survey
In the past few years, we have witnessed rapid development of autonomous
driving. However, achieving full autonomy remains a daunting task due to the
complex and dynamic driving environment. As a result, self-driving cars are
equipped with a suite of sensors to conduct robust and accurate environment
perception. As the number and type of sensors keep increasing, combining them
for better perception is becoming a natural trend. So far, there has been no
in-depth review that focuses on multi-sensor-fusion-based perception. To bridge
this gap and motivate future research, this survey is devoted to reviewing recent
fusion-based 3D detection deep learning models that leverage multiple sensor
data sources, especially cameras and LiDARs. In this survey, we first introduce
the background of popular sensors for autonomous cars, including their common
data representations as well as object detection networks developed for each
type of sensor data. Next, we discuss some popular datasets for multi-modal 3D
object detection, with a special focus on the sensor data included in each
dataset. Then we present in-depth reviews of recent multi-modal 3D detection
networks by considering the following three aspects of the fusion: fusion
location, fusion data representation, and fusion granularity. After a detailed
review, we discuss open challenges and point out possible solutions. We hope
that our detailed review can help researchers embark on investigations in the
area of multi-modal 3D object detection
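A recurring primitive in the camera-LiDAR fusion architectures such surveys cover, regardless of fusion location or granularity, is projecting LiDAR points into the camera image using the calibration matrices. A minimal sketch with made-up intrinsics and identity extrinsics:

```python
import numpy as np

def project_lidar_to_image(points, K, T_cam_lidar):
    """Project Nx3 LiDAR points into pixel coordinates.

    K: 3x3 camera intrinsics; T_cam_lidar: 4x4 extrinsic transform
    (LiDAR frame -> camera frame). Returns pixel coordinates and a
    mask of points in front of the camera.
    """
    n = points.shape[0]
    homog = np.hstack([points, np.ones((n, 1))])        # Nx4 homogeneous
    cam = (T_cam_lidar @ homog.T).T[:, :3]              # into camera frame
    in_front = cam[:, 2] > 0                            # keep z > 0 only
    uvw = (K @ cam.T).T
    pixels = uvw[:, :2] / uvw[:, 2:3]                   # perspective divide
    return pixels, in_front

# Illustrative calibration: identity extrinsics, simple pinhole intrinsics
K = np.array([[700.0, 0.0, 640.0],
              [0.0, 700.0, 360.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4)
pts = np.array([[0.0, 0.0, 10.0],    # straight ahead -> image center
                [1.0, 0.0, 10.0]])   # 1 m to the right at 10 m depth
px, valid = project_lidar_to_image(pts, K, T)
print(px[0], px[1])  # [640. 360.] and [710. 360.]
```

Early-fusion models use this mapping to paint image features onto points, while late-fusion models use it to associate 3D and 2D detections; either way the calibration step is the same.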
Sensor Signal and Information Processing II
In the current age of information explosion, newly invented technological sensors and software are tightly integrated with our everyday lives. Many sensor processing algorithms have incorporated some form of computational intelligence as part of their core problem-solving framework. These algorithms have the capacity to generalize, discover knowledge for themselves, and learn new information whenever unseen data are captured. The primary aim of sensor processing is to develop techniques to interpret, understand, and act on the information contained in the data. The interest of this book is in developing intelligent signal processing in order to pave the way for smart sensors. This involves the mathematical advancement of nonlinear signal processing theory and its applications, extending far beyond traditional techniques. It bridges the boundary between theory and application, developing novel, theoretically inspired methodologies targeting both longstanding and emergent signal processing applications. The topics range from phishing detection to the integration of terrestrial laser scanning, and from fault diagnosis to bio-inspired filtering. The book will appeal to established practitioners, along with researchers and students in the emerging field of smart sensor processing