117 research outputs found
Automotive radar target detection using ambiguity function
The risk of collision increases as the number of cars on the road increases. Automotive radar is an important technology for improving road traffic safety and providing driver assistance. Adaptive cruise control, parking aid, and pre-crash warning are some of the automotive radar applications already in use in many luxury cars today.
In automotive radar, a commonly used modulation waveform is the linear frequency-modulated continuous wave (FMCW); the return signal contains range and velocity information about the target, related through the beat-frequency equation. Existing techniques retrieve target information by applying a threshold to the Fourier power spectrum of the returned signal to eliminate weak responses. This method risks missing a target in a multi-target situation if its response falls below the threshold. It is also common to use multiple wide-angle radar sensors to cover a wider field of observation, which results in a large number of detected targets. The ranges and velocities of targets in automotive applications create ambiguity, which is heightened by the large number of responses received from a wide-angle set of sensors.
This thesis reports a novel strategy to resolve the range-velocity ambiguity in the interpretation of FMCW radar returns that is suitable for use in automotive radar. The radar ambiguity function is used in a novel way with the beat frequency equation relating range and velocity to interpret radar responses. This strategy avoids applying a threshold to the amplitude of the Fourier spectrum of the radar return.
This novel radar interpretation strategy is assessed by a simulation, which demonstrates that targets can be detected and their range and velocity estimated without ambiguity using the combined information from the radar returns and the radar ambiguity function.
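The beat-frequency relation at the heart of this interpretation problem can be illustrated with a small numeric sketch. The parameter values below (77 GHz carrier, 150 MHz sweep) are illustrative assumptions, not taken from the thesis, and the Doppler sign convention varies with geometry:

```python
# Sketch (not the thesis's method): resolving range and velocity from the
# beat frequencies of an FMCW up-chirp and down-chirp pair.
import numpy as np

c = 3e8          # speed of light (m/s)
f_c = 77e9       # carrier frequency (Hz), typical automotive band (assumed)
B = 150e6        # sweep bandwidth (Hz), assumed
T = 1e-3         # sweep duration (s), assumed

k_r = 2 * B / (c * T)   # beat-frequency contribution per metre of range
k_v = 2 * f_c / c       # Doppler contribution per m/s of radial velocity

def beat_frequencies(R, v):
    """Beat frequencies on the up- and down-chirp for range R, velocity v."""
    f_up = k_r * R - k_v * v      # closing target shifts the up-chirp beat down
    f_down = k_r * R + k_v * v    # and the down-chirp beat up
    return f_up, f_down

def resolve(f_up, f_down):
    """Invert the two beat-frequency equations for range and velocity."""
    R = (f_up + f_down) / (2 * k_r)
    v = (f_down - f_up) / (2 * k_v)
    return R, v

f_up, f_down = beat_frequencies(R=50.0, v=10.0)
R_est, v_est = resolve(f_up, f_down)
```

With a single chirp, one beat frequency constrains only a line in the range-velocity plane; the ambiguity the thesis addresses arises when many such lines from many targets must be associated correctly.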
Long-Term Localization for Self-Driving Cars
Long-term localization is hard due to changing conditions, while relative localization within time sequences is much easier. To achieve long-term localization in a sequential setting, such as for self-driving cars, relative localization should be used to the fullest extent whenever possible. This thesis presents solutions and insights both for long-term sequential visual localization and for localization using global navigation satellite systems (GNSS), pushing us closer to the goal of accurate and reliable localization for self-driving cars. It addresses the question: how to achieve accurate and robust, yet cost-effective, long-term localization for self-driving cars? Starting from this question, the thesis explores how existing sensor suites for advanced driver-assistance systems (ADAS) can be used most efficiently, and how landmarks in maps can be recognized and used for localization even after severe changes in appearance. The findings show that:
* State-of-the-art ADAS sensors are insufficient to meet the requirements for localization of a self-driving car in less than ideal conditions; GNSS and visual localization are identified as areas to improve.
* Highly accurate relative localization with no convergence delay is possible by using time-relative GNSS observations with a single-band receiver and no base stations.
* Sequential semantic localization is identified as a promising focus for further research, based on a benchmark study comparing state-of-the-art visual localization methods in challenging autonomous driving scenarios, including day-to-night and seasonal changes.
* A novel sequential semantic localization algorithm improves accuracy while significantly reducing map size compared to traditional methods based on matching of local image features.
* Improvements to semantic segmentation in challenging conditions can be made efficiently by automatically generating pixel correspondences between images from a multitude of conditions and enforcing a consistency constraint during training.
* A segmentation algorithm with automatically defined and more fine-grained classes improves localization performance.
* The performance advantage of modern local image features over traditional ones in single-image localization is all but erased when considering sequential data with odometry, encouraging future research to focus more on sequential localization rather than pure single-image localization.
Improving Accuracy in Ultra-Wideband Indoor Position Tracking through Noise Modeling and Augmentation
The goal of this research is to improve the precision of tracking in an ultra-wideband (UWB) based Local Positioning System (LPS). This work is motivated by the approach taken to improve accuracy in the Global Positioning System (GPS) through noise modeling and augmentation. Since UWB indoor position tracking is accomplished using methods similar to those of GPS, the same two general approaches can be used to improve accuracy. Trilateration calculations are affected by errors in distance measurements from the set of fixed points to the object of interest. When these errors are systemic, each distinct set of fixed points can be said to exhibit a unique set noise. For UWB indoor position tracking, the set of fixed points is a set of sensors measuring the distance to a tracked tag. In this work we develop a noise model for this sensor set noise, along with a particle filter that uses our set-noise model. To the author's knowledge, this noise has not previously been identified and modeled for an LPS. We test our methods on a commercially available UWB system in a real-world setting. From the results we observe approximately a 15% improvement in accuracy over raw UWB measurements. The UWB system is an example of an aided sensor, since it requires a person to carry a device which continuously broadcasts its identity to determine its location. Therefore the location of each user is uniquely known even when multiple users are present. However, it suffers from limited precision compared to some unaided sensors, such as a camera, which are typically placed line of sight (LOS). An unaided system does not require active participation from people; therefore it has more difficulty uniquely identifying the location of each person when a large number of people are present in the tracking area.
Therefore we develop a generalized fusion framework to combine measurements from aided and unaided systems, improving the tracking precision of the aided system and solving data association issues in the unaided system. The framework uses a Kalman filter to fuse measurements from multiple sensors. We test our approach on two unaided sensor systems: Light Detection and Ranging (LADAR) and a camera system. Our study investigates the impact of increasing the number of people in an indoor environment on the accuracy of the proposed fusion framework. From the results we observed that, depending on the type of unaided sensor system used for augmentation, the improvement in precision ranged from 6-25% for up to 3 people.
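The Kalman-filter fusion step described above can be sketched in miniature. This is a generic constant-velocity filter with assumed noise levels, not the thesis's exact framework: a noisier "aided" position fix (UWB-like) and a more precise "unaided" one (camera-like) are fused in turn:

```python
# Minimal sketch of Kalman-filter fusion of two position sensors.
# The motion model and all noise values are illustrative assumptions.
import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0],      # constant-velocity state transition
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0],       # both sensors observe position only
              [0, 1, 0, 0]], float)
Q = 0.01 * np.eye(4)              # process noise (assumed)
R_uwb = 0.09 * np.eye(2)          # aided sensor: ~30 cm std (assumed)
R_cam = 0.01 * np.eye(2)          # unaided sensor: ~10 cm std (assumed)

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, R):
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

x = np.zeros(4); P = np.eye(4)
x, P = predict(x, P)
x, P = update(x, P, np.array([1.0, 2.0]), R_uwb)   # aided measurement
x, P = update(x, P, np.array([1.1, 1.9]), R_cam)   # unaided measurement
```

Because the camera-like measurement carries lower noise, the fused position lands closer to it, which mirrors how the unaided sensors augment the aided system's precision.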
Target Tracking in UWB Multistatic Radars
Detection, localization, and tracking of non-collaborative objects moving inside an area is of great interest to many surveillance applications. An ultra-wideband (UWB) multistatic radar is considered a good infrastructure for such anti-intruder systems, due to the high range resolution provided by the UWB impulse radio and the spatial diversity achieved with a multistatic configuration.
Detection of targets, which are typically human beings, is a challenging task due to reflections from unwanted objects in the area, shadowing, antenna cross-talk, low transmit power, and the blind zones arising from intrinsic peculiarities of UWB multistatic radars.
Hence, we propose more effective detection, localization, and clutter removal techniques for these systems. However, the majority of the thesis effort is devoted to the tracking phase, which is essential for improving localization accuracy, predicting the target position, and filling in missed detections.
Since UWB radars are not linear Gaussian systems, the widely used tracking filters, such as the Kalman filter, are not expected to provide satisfactory performance. Thus, we propose the Bayesian filter as an appropriate candidate for UWB radars. In particular, we develop tracking algorithms based on particle filtering, the most common approximation of Bayesian filtering, for both single- and multiple-target scenarios. We also propose some effective detection and tracking algorithms based on image processing tools.
We evaluate the performance of our proposed approaches by numerical simulations. Moreover, we provide experimental results from channel measurements for tracking a person walking in an indoor area in the presence of significant clutter. We discuss the existing practical issues and address them by proposing more robust algorithms.
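As a flavor of the clutter-removal problem, one widely used technique for UWB impulse radar is exponential-average background subtraction, sketched below; the thesis's own clutter-removal methods may differ in detail:

```python
# Exponential-average background subtraction over successive range profiles.
# Static clutter (walls, furniture, antenna cross-talk) accumulates in the
# background estimate; moving targets survive the subtraction.
import numpy as np

def remove_clutter(scans, alpha=0.9):
    """scans: (n_scans, n_range_bins) array of received UWB impulse responses."""
    background = np.zeros(scans.shape[1])
    cleaned = np.empty_like(scans)
    for i, r in enumerate(scans):
        background = alpha * background + (1 - alpha) * r  # slow background update
        cleaned[i] = r - background                        # moving echoes remain
    return cleaned

# Toy example: a static reflector in bin 10, a target moving one bin per scan.
scans = np.zeros((20, 64))
scans[:, 10] = 5.0                         # static clutter
for i in range(20):
    scans[i, 20 + i] = 1.0                 # moving target
cleaned = remove_clutter(scans)
```

After a few scans the strong static return is largely cancelled while the weaker moving target stands out, which is exactly the preprocessing a detector or tracker wants.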
Smart Sensor Technologies for IoT
The recent development in wireless networks and devices has led to novel services that will utilize wireless communication on a new level. Much effort and many resources have been dedicated to establishing new communication networks that will support machine-to-machine communication and the Internet of Things (IoT). In these systems, various smart and sensory devices are deployed and connected, enabling large amounts of data to be streamed. Smart services represent new trends in mobile services, i.e., a completely new spectrum of context-aware, personalized, and intelligent services and applications. A variety of existing services utilize information about the position of the user or mobile device. The position of mobile devices is often obtained using the Global Navigation Satellite System (GNSS) chips that are integrated into all modern mobile devices (smartphones). However, GNSS is not always a reliable source of position estimates due to multipath propagation and signal blockage. Moreover, integrating GNSS chips into all devices might have a negative impact on the battery life of future IoT applications. Therefore, alternative solutions to position estimation should be investigated and implemented in IoT applications. This Special Issue, “Smart Sensor Technologies for IoT”, aims to report on some of the recent research efforts on this increasingly important topic. The twelve accepted papers in this issue cover various aspects of smart sensor technologies for IoT.
Cognitive radar network design and applications
PhD Thesis
In recent years, several emerging technologies in modern radar system design are attracting the attention of radar researchers and practitioners alike, noteworthy among which are multiple-input multiple-output (MIMO), ultra-wideband (UWB), and joint communication-radar technologies. This thesis focuses in particular on a cognitive approach to designing these modern radars. In the existing literature, these technologies have been implemented on a traditional platform in which the transmitter and receiver subsystems are discrete and do not exchange vital radar scene information. Although such radar architectures benefit from the mentioned technological advances, their performance remains sub-optimal due to the lack of exchange of dynamic radar scene information between the subsystems. Consequently, such systems are not capable of adapting their operational parameters “on the fly” in accordance with the dynamic radar environment. This thesis explores the research gap of evaluating cognitive mechanisms that could enable modern radars to adapt operational parameters such as waveform, power, and spectrum by continually learning about the radar scene through constant interaction with the environment and exchanging this information between the radar transmitter and receiver. The cognitive feedback between the receiver and transmitter subsystems is the facilitator of intelligence for this type of architecture.
In this thesis, the cognitive architecture is fused with modern radar systems such as MIMO, UWB, and joint communication-radar designs to achieve significant performance improvement in target parameter extraction. Specifically, in the context of MIMO radar, a novel cognitive waveform optimization approach has been developed which facilitates enhanced target signature extraction. In terms of UWB radar system design, a novel cognitive illumination and target tracking algorithm for target parameter extraction in indoor scenarios has been developed. A cognitive system architecture and waveform design algorithm has been proposed for joint communication-radar systems. This thesis also explores the development of cognitive dynamic systems that allow the fusion of the cognitive radar and cognitive radio paradigms for optimal resource allocation in wireless networks. In summary, the thesis provides a theoretical framework for implementing cognitive mechanisms in modern radar system design. Through such a novel approach, intelligent illumination strategies can be devised that enable the adaptation of radar operational modes in accordance with target scene variations in real time. This leads to radar systems that are better aware of their surroundings and able to adapt quickly to target scene variations in real time.
Newcastle University, Newcastle upon Tyne; University of Greenwich
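One classic instance of cognitive waveform optimization, included here only as a hedged illustration of the feedback principle and not as the thesis's specific algorithm, is water-filling transmit energy over frequency bins using a target spectral response estimated at the receiver and fed back to the transmitter:

```python
# Water-filling energy allocation: put transmit energy where the estimated
# target gain is high relative to noise, maximizing sum log(1 + g_k E_k / n_k).
# Target gains and noise levels below are illustrative assumptions.
import numpy as np

def waterfill(target_gain, noise_psd, total_energy):
    inv = noise_psd / target_gain            # "floor height" of each bin
    order = np.argsort(inv)
    # Find the water level by trying successively smaller active sets.
    for m in range(len(inv), 0, -1):
        active = order[:m]
        level = (total_energy + inv[active].sum()) / m
        if level > inv[active].max():        # every active bin sits below water
            break
    E = np.clip(level - inv, 0.0, None)      # energy above each bin's floor
    E[inv >= level] = 0.0                    # bins above the water line get nothing
    return E

g = np.array([1.0, 0.5, 0.1, 0.01])   # estimated target spectral gains (assumed)
n = np.array([0.1, 0.1, 0.1, 0.1])    # noise PSD per bin (assumed)
E = waterfill(g, n, total_energy=1.0)
```

In a cognitive loop, the receiver would re-estimate `g` each dwell and the transmitter would re-run the allocation, which is the "on the fly" adaptation the thesis argues traditional discrete transmitter/receiver platforms cannot perform.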
Three-D multilateration: A precision geodetic measurement system
A technique of satellite geodesy for determining the relative three-dimensional coordinates of ground stations to within one centimeter over baselines of 20 to 10,000 kilometers is discussed. The system is referred to as 3-D Multilateration and has applications in earthquake hazard assessment, precision surveying, plate tectonics, and orbital mechanics. The accuracy is obtained by using pulsed lasers to measure simultaneous slant ranges between several ground stations and a moving retroreflector, whose known trajectory is used for aiming the lasers.
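The core range-residual computation behind multilateration can be sketched as a Gauss-Newton least-squares solve. The toy below recovers a single point from slant ranges to known reference positions; the geodetic problem of estimating the station coordinates themselves is richer, but rests on the same machinery:

```python
# Gauss-Newton multilateration: iteratively linearize the slant-range
# equations and solve for the position correction in a least-squares sense.
import numpy as np

def multilaterate(anchors, ranges, x0, iters=20):
    x = np.asarray(x0, float)
    for _ in range(iters):
        d = np.linalg.norm(anchors - x, axis=1)     # predicted slant ranges
        J = (x - anchors) / d[:, None]              # Jacobian of each range w.r.t. x
        dx, *_ = np.linalg.lstsq(J, ranges - d, rcond=None)
        x = x + dx                                  # Gauss-Newton update
    return x

# Illustrative geometry (not from the article): four stations, one reflector.
anchors = np.array([[0, 0, 0], [100, 0, 0], [0, 100, 0], [0, 0, 100]], float)
truth = np.array([20.0, 30.0, 40.0])
ranges = np.linalg.norm(anchors - truth, axis=1)    # noiseless slant ranges
est = multilaterate(anchors, ranges, x0=[1.0, 1.0, 1.0])
```

With noiseless ranges and well-spread stations the iteration converges to the true point; centimeter-level accuracy in practice then hinges on the precision of the pulsed-laser range measurements themselves.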
Loosely Coupled Odometry, UWB Ranging, and Cooperative Spatial Detection for Relative Monte-Carlo Multi-Robot Localization
As mobile robots become more ubiquitous, their deployments grow across use cases where GNSS positioning is either unavailable or unreliable. This has led to increased interest in multi-modal relative localization methods. Complementing onboard odometry, ranging allows for relative state estimation, with ultra-wideband (UWB) ranging having gained widespread recognition due to its low cost and centimeter-level out-of-the-box accuracy. Infrastructure-free localization methods allow for more dynamic, ad-hoc, and flexible deployments, yet they have received less attention from the research community. In this work, we propose a cooperative relative multi-robot localization approach in which we leverage inter-robot ranging and simultaneous spatial detections of objects in the environment. To achieve this, we equip robots with a single UWB transceiver and a stereo camera. We propose a novel Monte-Carlo approach to estimate relative states by either employing only UWB ranges or dynamically integrating simultaneous spatial detections from the stereo cameras. We also address the challenges of UWB ranging error mitigation, especially in non-line-of-sight conditions, with a study on different LSTM networks for estimating the ranging error. The proposed approach has multiple benefits. First, we show that a single range is enough to estimate accurate relative states of two robots when fusing odometry measurements. Second, our experiments demonstrate that our approach surpasses traditional methods such as multilateration in terms of accuracy. Third, to increase accuracy even further, we allow for the integration of cooperative spatial detections. Finally, we show how ROS 2 and Zenoh can be integrated to build a scalable wireless communication solution for multi-robot systems. The experimental validation includes real-time deployment and autonomous navigation based on the relative positioning method.
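A minimal, hypothetical sketch of the range-plus-odometry Monte-Carlo idea: a particle filter over robot B's position relative to robot A, propagated with B's odometry and weighted by a single UWB range per step. All motion and noise parameters below are illustrative assumptions, not the paper's values:

```python
# Range-only relative-localization particle filter. A curving trajectory
# provides the motion diversity needed to break the circular range ambiguity.
import numpy as np

rng = np.random.default_rng(0)
N = 3000
particles = rng.uniform(-10, 10, size=(N, 2))   # hypotheses for B relative to A
weights = np.ones(N) / N
sigma_r = 0.1                                   # UWB ranging noise std (assumed)

def step(particles, weights, odom, z_range):
    # Propagate hypotheses with B's odometry plus diffusion noise.
    particles = particles + odom + rng.normal(0.0, 0.05, particles.shape)
    # Reweight by the likelihood of the measured inter-robot range.
    d = np.linalg.norm(particles, axis=1)
    weights = weights * np.exp(-0.5 * ((z_range - d) / sigma_r) ** 2)
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < N / 2:
        idx = rng.choice(N, size=N, p=weights)
        particles, weights = particles[idx], np.ones(N) / N
    return particles, weights

true_rel = np.array([3.0, 4.0])                 # B starts 5 m from A; A is static
for t in range(60):
    odom = 0.3 * np.array([np.cos(0.1 * t), np.sin(0.1 * t)])  # B drives an arc
    true_rel = true_rel + odom
    z = np.linalg.norm(true_rel) + rng.normal(0.0, sigma_r)
    particles, weights = step(particles, weights, odom, z)
est = weights @ particles                       # posterior-mean relative position
```

Each range constrains the relative position only to a circle; fusing known odometry across a curving trajectory intersects those circles, which is why a single UWB range per step can suffice, as the abstract claims.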