110 research outputs found

    Probably Unknown: Deep Inverse Sensor Modelling In Radar

    Full text link
    Radar presents a promising alternative to lidar and vision in autonomous vehicle applications, able to detect objects at long range under a variety of weather conditions. However, distinguishing between occupied and free space from raw radar power returns is challenging due to complex interactions between sensor noise and occlusion. To counter this, we propose to learn an Inverse Sensor Model (ISM) that converts a raw radar scan to a grid map of occupancy probabilities using a deep neural network. Our network is self-supervised using partial occupancy labels generated by lidar, allowing a robot to learn about world occupancy from past experience without human supervision. We evaluate our approach on five hours of data recorded in a dynamic urban environment. By accounting for the scene context of each grid cell, our model is able to successfully segment the world into occupied and free space, outperforming standard CFAR filtering approaches. Additionally, by incorporating heteroscedastic uncertainty into our model formulation, we are able to quantify the variance in the uncertainty throughout the sensor observation. Through this mechanism we are able to successfully identify regions of space that are likely to be occluded.
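The standard CFAR filtering that the learned model is benchmarked against thresholds each radar cell relative to a locally estimated noise floor. Below is a minimal cell-averaging CFAR (CA-CFAR) sketch over a 1-D power profile -- a generic textbook formulation, not the paper's exact baseline; the window sizes and false-alarm rate are illustrative:

```python
import numpy as np

def ca_cfar(power, num_train=8, num_guard=2, pfa=1e-3):
    """Cell-averaging CFAR over a 1-D power return.

    For each cell under test (CUT), the noise level is estimated from
    `num_train` training cells on each side, skipping `num_guard` guard
    cells adjacent to the CUT. The threshold multiplier `alpha` follows
    the standard CA-CFAR formula for a desired false-alarm probability.
    """
    n = 2 * num_train                      # total training cells
    alpha = n * (pfa ** (-1.0 / n) - 1.0)  # threshold multiplier
    half = num_train + num_guard
    detections = np.zeros_like(power, dtype=bool)
    for i in range(half, len(power) - half):
        lead = power[i - half : i - num_guard]          # leading training cells
        lag = power[i + num_guard + 1 : i + half + 1]   # lagging training cells
        noise = (lead.sum() + lag.sum()) / n
        detections[i] = power[i] > alpha * noise
    return detections
```

Cells flagged `True` would be treated as occupied; such a filter has no notion of scene context or occlusion, which is exactly what the learned ISM adds.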

    Radar-based localization and mapping for large-scale environments and adverse weather conditions

    Get PDF
    In mobile robotics, localization and mapping is one of the fundamental capabilities on the road to autonomy. Navigating autonomously in large-scale, unstructured, extreme and dynamic environments is particularly challenging due to the high variation in the scene. To deliver a robotic system that can operate 24/7 in outdoor environments, we need to design a state estimation system that is robust in all weather conditions. In this thesis, we propose, implement and validate three systems that tackle the problem of long-term localization and mapping. We focus on a radar-only platform to realize the SLAM and localization systems in a probabilistic manner. We first introduce a radar-based SLAM system that can operate in city-scale environments. Second, we present an improved version of the radar-based SLAM system with enhanced odometry estimation and extensive experiments in extreme weather conditions, showing that our proposed radar SLAM solution is viable in all weather conditions. We also demonstrate the superiority of the radar-based SLAM system over LiDAR- and vision-based systems in snowy and low-light conditions, respectively. Finally, we show how to combine online public maps with the radar sensor to achieve accurate localization even when we do not have a prior sensor map. We show that our proposed localization system generalizes to different scenarios, and we validate it across three datasets collected on three different continents.

    Lidar-level localization with radar? The CFEAR approach to accurate, fast and robust large-scale radar odometry in diverse environments

    Full text link
    This paper presents an accurate, highly efficient, and learning-free method for large-scale odometry estimation using spinning radar, empirically found to generalize well across very diverse environments -- outdoors, from urban to woodland, and indoors in warehouses and mines -- without changing parameters. Our method integrates motion compensation within a sweep with one-to-many scan registration that minimizes distances between nearby oriented surface points and mitigates outliers with a robust loss function. Extending our previous approach, CFEAR, we present an in-depth investigation on a wider range of data sets, quantifying the importance of filtering, resolution, registration cost and loss functions, keyframe history, and motion compensation. We present a new solving strategy and configuration that overcomes previous issues with sparsity and bias, and improves on our previous state of the art by 38%, thus, surprisingly, outperforming radar SLAM and approaching lidar SLAM. The most accurate configuration achieves 1.09% error at 5 Hz on the Oxford benchmark, and the fastest achieves 1.79% error at 160 Hz. Comment: Accepted for publication in Transactions on Robotics.
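The registration step above minimizes point-to-plane distances between matched oriented surface points under a robust loss. A generic sketch of those two ingredients follows -- this is not the CFEAR implementation, and the Huber loss with its `delta` parameter merely stands in for whatever robust loss and scale the paper selects:

```python
import numpy as np

def point_to_plane_residuals(src, dst, normals):
    """Signed point-to-plane distance for each matched pair: the offset
    from the destination point projected onto its surface normal."""
    return np.einsum("ij,ij->i", src - dst, normals)

def huber_weights(r, delta=0.5):
    """IRLS weights for the Huber loss: quadratic near zero, linear in
    the tails, so large (outlier) residuals are down-weighted instead of
    dominating the least-squares solve."""
    a = np.abs(r)
    w = np.ones_like(a)
    mask = a > delta
    w[mask] = delta / a[mask]
    return w
```

In an iteratively reweighted scheme, these weights would multiply each residual's contribution before re-solving for the pose.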

    Deep probabilistic methods for improved radar sensor modelling and pose estimation

    Get PDF
    Radar’s ability to sense under adverse conditions and at far range makes it a valuable alternative to vision and lidar for mobile robotic applications. However, its complex, scene-dependent sensing process and significant noise artefacts make working with radar challenging. Moving past the classical rule-based approaches that have dominated the literature to date, this thesis investigates deep, data-driven solutions across a range of tasks in robotics. Firstly, a deep approach is developed for mapping raw sensor measurements to a grid map of occupancy probabilities, outperforming classical filtering approaches by a significant margin. A distribution over the occupancy state is captured, additionally allowing uncertainty in predictions to be identified and managed. The approach is trained entirely on partial labels generated automatically from lidar, without requiring manual labelling. Next, a deep model is proposed for generating stochastic radar measurements from simulated elevation maps. The model is trained by learning the forward and backward processes side by side, using a combination of adversarial and cyclic-consistency constraints together with a partial alignment loss, with labels generated from lidar. By faithfully replicating the radar sensing process, new models can be trained for downstream tasks using labels that are readily available in simulation. In this case, segmentation models trained on simulated radar measurements are shown, when deployed in the real world, to approach the performance of a model trained entirely on real-world measurements. Finally, the potential of deep approaches applied to the radar odometry task is explored. A learnt feature space is combined with a classical correlative scan-matching procedure and optimised for pose prediction, allowing the proposed method to outperform the previous state of the art by a significant margin. Through a probabilistic formulation, the uncertainty in the pose is also successfully characterised. Building upon this success, properties of the Fourier transform are then utilised to separate the search for translation and angle. It is shown that this decoupled search yields a significant boost to run-time performance, allowing the approach to run in real time on CPUs and embedded devices whilst remaining competitive with other radar odometry methods proposed in the literature.
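The Fourier-transform property used to decouple translation from rotation is the shift theorem: translating a signal only changes the phase of its spectrum, so translation can be recovered by phase correlation independently of the angular search. A minimal sketch under the assumption of circular shifts (the thesis's actual correlative pipeline differs):

```python
import numpy as np

def estimate_shift(a, b):
    """Recover the circular shift s such that b = np.roll(a, s) by phase
    correlation: the normalised cross-power spectrum conj(FFT(a)) * FFT(b)
    has phase exp(-2j*pi*k.s/N), whose inverse FFT peaks exactly at s."""
    fa, fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = np.conj(fa) * fb
    cross /= np.maximum(np.abs(cross), 1e-12)  # keep phase, discard magnitude
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return tuple(int(p) for p in peak)
```

Because only the phase is correlated, the search over translation never has to enumerate candidate offsets, which is the source of the run-time gain described above.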

    Developing a person guidance module for hospital robots

    Get PDF
    This dissertation describes the design and implementation of the Person Guidance Module (PGM) that enables the IWARD (Intelligent Robot Swarm for Attendance, Recognition, Cleaning and Delivery) base robot to offer a route guidance service to patients and visitors inside the hospital arena. One of the common problems encountered in large hospital buildings today is visitors being unable to find their way around. Although a variety of guide robots currently exist on the market, offering a wide range of guidance and related activities, they do not fit into the modular concept of the IWARD project. The PGM features a robust and foolproof non-hierarchical sensor-fusion approach combining active RFID, stereovision and a Cricket mote sensor for guiding a patient to the X-ray room, or a visitor to a patient’s ward, in every possible scenario in a complex, dynamic and crowded hospital environment. Moreover, with this system the speed of the robot can be adjusted automatically according to the pace of the follower for physical comfort. Furthermore, the module performs these tasks in any unstructured environment solely from the robot’s onboard perceptual resources, in order to limit hardware installation costs and the need for indoor infrastructure support. A similarly comprehensive solution on a single platform has remained elusive in the existing literature. The finished module can be connected to any IWARD base robot using quick-change mechanical connections and standard electrical connections. The PGM module box is equipped with a Gumstix embedded computer for all module computing, which is powered up automatically once the module box is inserted into the robot. In line with the general software architecture of the IWARD project, all software modules are developed as Orca2 components and cross-compiled for the Gumstix’s XScale processor. To support standardized communication between the different software components, the Internet Communications Engine (Ice) is used as middleware. Additionally, plug-and-play capabilities have been developed and incorporated so that the swarm system is aware at all times of which robot is equipped with the PGM. Finally, in several field trials in hospital environments, the person guidance module has shown its suitability for a challenging real-world application, as well as the necessary user acceptance.

    Wireless location: from theory to practice

    Get PDF

    Localisation of ground range sensors using overhead imagery

    Get PDF
    This thesis is about outdoor localisation using range sensors as the active sensor and cheap, publicly available satellite or 'overhead' imagery as a prior map. Range sensors such as lidars and spinning FMCW radars are ideal for large-scale outdoor autonomous navigation due to their long sensing range, invariance to lighting conditions, and robustness against weather changes. Nevertheless, existing methods for range-sensor localisation typically rely on prior maps collected during a previous mapping phase. On the other hand, off-the-shelf overhead imagery, such as public satellite images, is readily available almost anywhere in the world and can be acquired easily from the internet with little cost or effort. Public overhead imagery can capture geometric cues of the scene that are also observable by ground lidars and radars, and therefore has the capability to act as a map for range-sensor localisation. In particular, in corner-case scenarios where the prior sensory map is unusable or unavailable, for example when the robot travels to a place it has not visited before, public overhead imagery can act as an alternative map source for range-sensor localisation as a fall-back choice. Under normal operating conditions, the localisation result obtained by comparing range-sensor data against overhead imagery can act as an additional information source for redundancy. In this thesis, we present various methods to solve the localisation of a ground range sensor in overhead imagery by learning from data, enabling them to adapt to different environments. This surpasses the methods in the literature, which employ hand-crafted features designed for only specific types of scenery. Specifically, we address both topological localisation, also known as place recognition, and metric localisation in overhead imagery maps. Furthermore, we investigate self-supervised strategies that allow these tasks to be learned without accurate ground-truth data.

    Edge Artificial Intelligence for Real-Time Target Monitoring

    Get PDF
    The key enabling technology for the exponentially growing cellular communications sector is location-based services. The need for location-aware services has increased along with the number of wireless and mobile devices. Estimation problems, and parameter estimation in particular, have drawn a lot of interest because of their relevance and engineers' ongoing need for higher performance. As applications expanded, the accurate estimation of temporal and spatial properties attracted considerable interest. In this thesis, two different approaches to subject monitoring are thoroughly addressed. This kind of activity is crucial for military applications, medical tracking, industrial workers, and providing location-based services to the ever-growing mobile user community. In-depth consideration is given to the viability of applying the Angle of Arrival (AoA) and Received Signal Strength Indication (RSSI) localization algorithms in real-world situations. We present two prospective systems, discuss them, and report specific assessments and tests. These systems were put to the test in diverse contexts (e.g., indoor, outdoor, in water...). The findings demonstrated the localization capability, but because of the low-cost antenna we employed, this method is only practical up to a distance of roughly 150 meters. Consequently, depending on the use case, this method may or may not be advantageous. An estimation algorithm that enhances the performance of the AoA technique was implemented on an edge device. Another approach was also considered. Radar sensors have proven durable in inclement weather and poor lighting conditions. Frequency Modulated Continuous Wave (FMCW) radars are the most frequently employed among the several types of radar technologies for these kinds of applications, because they are low-cost and can simultaneously provide range and Doppler data. Compared to pulsed and Ultra-Wideband (UWB) radar sensors, they also require a lower sampling rate and a lower peak-to-average ratio. The system employs a cutting-edge surveillance method based on widely available FMCW radar technology. The data processing approach is built on an ad-hoc chain of blocks that transforms data, extracts features, and makes a classification decision: it cancels clutter and leakage using a frame-subtraction technique, applies DL algorithms to Range-Doppler (RD) maps, and adds a peak-to-cluster assignment step before tracking targets. In conclusion, the FMCW radar and the DL technique on RD maps performed well together for indoor use cases. The aforementioned tests used an edge device and Infineon Technologies' Position2Go FMCW radar toolset.
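The Range-Doppler (RD) maps that the DL algorithms consume are produced from an FMCW data cube by two FFTs: a range FFT along fast time within each chirp, followed by a Doppler FFT across chirps. A minimal sketch of that processing step (the Position2Go toolset's own chain, windowing, and calibration may differ):

```python
import numpy as np

def range_doppler_map(frame, window=True):
    """Compute a Range-Doppler map from an FMCW frame of shape
    (num_chirps, samples_per_chirp): range FFT along fast time, then
    Doppler FFT along slow time, with the Doppler axis centred so that
    zero velocity sits in the middle of the map."""
    if window:
        frame = frame * np.hanning(frame.shape[1])[None, :]  # reduce range sidelobes
    rng_fft = np.fft.fft(frame, axis=1)   # fast time  -> range bins
    dop_fft = np.fft.fft(rng_fft, axis=0)  # slow time -> Doppler bins
    return np.fft.fftshift(np.abs(dop_fft), axes=0)
```

Each target appears as a peak at its (Doppler, range) bin; frame subtraction for clutter removal, as described above, would be applied to consecutive such maps before classification.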

    Wi-Fi based people tracking in challenging environments

    Get PDF
    People tracking is a key building block in many applications, such as abnormal-activity detection, gesture recognition, and monitoring of elderly persons. Video-based systems have many limitations that make them ineffective in many situations. Wi-Fi provides an easily accessible signal of opportunity for people tracking that does not share the limitations of video-based systems. The system detects, localises, and tracks people based on the available Wi-Fi signals reflected from their bodies. Wi-Fi based systems still need to address several challenges in order to operate in challenging environments, including the detection of weak signals, the handling of abrupt people motion, and the presence of multipath propagation. In this thesis, these three main challenges are addressed. Firstly, a weak-signal detection method is proposed that uses the changes in the signals reflected from static objects to improve the detection probability of weak signals reflected from the person’s body. Then, a deep-learning-based Wi-Fi localisation technique is proposed that significantly improves runtime and accuracy in comparison with existing techniques. After that, a quantum-mechanics-inspired tracking method is proposed to address the abrupt-motion problem. The proposed method exploits an interesting phenomenon of the quantum world, in which the person is allowed to exist at multiple positions simultaneously. The results show a significant improvement in reducing both the tracking error and the tracking delay.
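The thesis's quantum-inspired tracker is described only at a high level here; the closest classical analogue for letting a person "exist at multiple positions simultaneously" is a particle filter, which maintains many weighted position hypotheses at once. The sketch below is that analogy only, not the proposed method, and all parameters are illustrative:

```python
import numpy as np

def particle_filter_step(particles, weights, measurement, rng,
                         motion_std=0.5, meas_std=1.0):
    """One predict/update/resample step of a bootstrap particle filter.
    Each particle is a simultaneous position hypothesis; abrupt motion is
    absorbed by the spread of the particle cloud rather than by a single
    Gaussian state as in a Kalman filter."""
    # Predict: random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    # Update: Gaussian likelihood of the measured position.
    d2 = np.sum((particles - measurement) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / meas_std**2)
    weights /= weights.sum()
    # Resample when the effective sample size degenerates.
    if 1.0 / np.sum(weights**2) < len(particles) / 2:
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

def estimate(particles, weights):
    """Point estimate of the position: the weighted mean of all hypotheses."""
    return np.average(particles, axis=0, weights=weights)
```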