Investigation of 3D-Mapping-Aided GNSS Navigation in Urban Canyons
In urban environments, the propagation of satellite signals is often affected by buildings, leading to challenges such as multipath interference and non-line-of-sight (NLOS) reception. These challenges compromise the performance of conventional GNSS positioning in cities. Several studies have shown that 3D mapping data significantly improves GNSS positioning by predicting which signals are line-of-sight (LOS) and which are NLOS.
This research builds on a new MATLAB implementation of UCL's 3D-mapping-aided (3DMA) GNSS algorithms, encompassing satellite visibility prediction, shadow matching, likelihood-based ranging, and solution integration, which demonstrates improved GNSS positioning performance in cities.
Building on these core algorithms, this research introduces several enhancements, including advanced satellite visibility prediction for overhanging structures, the incorporation of untracked satellites in shadow matching, the integration of Bayesian inference with shadow matching to adapt to varying urban densities, a new NLOS model tailored for likelihood-based ranging, and a clustering algorithm based on region growing to handle ambiguity. These enhancements collectively improve 3DMA GNSS performance by about 20%. Additionally, an outlier detection method mitigates the effects of mapping inaccuracies, reducing the positioning error by approximately 15% in both single- and multi-epoch cases, compared to methods without such detection.
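As a rough illustration of the shadow-matching principle referenced above (not UCL's implementation), the following minimal Python sketch scores candidate positions on a grid by comparing predicted satellite visibility, from a simplified building-boundary model, with measured signal availability. The building-boundary function, candidate grid, satellite geometry, and C/N0 threshold are all hypothetical placeholders.

```python
import numpy as np

# Hypothetical building-boundary model: for a candidate position and satellite
# azimuth, return the elevation angle (deg) of the building edge blocking the sky.
def building_boundary_elevation(candidate_xy, azimuth_deg):
    # Placeholder geometry: apparent building height varies with azimuth and easting.
    return 40.0 + 10.0 * np.cos(np.radians(azimuth_deg)) - 0.5 * candidate_xy[0]

# Hypothetical satellite geometry and measurements (azimuth, elevation, tracked C/N0).
satellites = [
    {"az": 45.0,  "el": 30.0, "cn0": 43.0},   # strong signal, likely LOS if unobstructed
    {"az": 120.0, "el": 20.0, "cn0": 28.0},   # weak signal, probably NLOS
    {"az": 250.0, "el": 55.0, "cn0": 47.0},
    {"az": 310.0, "el": 15.0, "cn0": None},   # not tracked at all
]

CN0_LOS_THRESHOLD = 35.0  # dB-Hz; crude LOS/NLOS classifier for this sketch only

def shadow_matching_score(candidate_xy):
    """Count satellites whose predicted visibility matches the observed one."""
    score = 0
    for sat in satellites:
        predicted_los = sat["el"] > building_boundary_elevation(candidate_xy, sat["az"])
        observed_los = sat["cn0"] is not None and sat["cn0"] > CN0_LOS_THRESHOLD
        score += int(predicted_los == observed_los)
    return score

# Score a small grid of candidate positions (metres, local frame) and pick the best.
candidates = [np.array([x, y]) for x in range(-20, 21, 5) for y in range(-20, 21, 5)]
scores = [shadow_matching_score(c) for c in candidates]
best = candidates[int(np.argmax(scores))]
print("Best-matching candidate:", best, "score:", max(scores))
```

In the actual algorithms, such a matching score would be turned into a likelihood surface over candidate positions and integrated with ranging-based solutions, as described above.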
3DMA GNSS is combined with particle and grid filters to enable multi-epoch positioning. These 3DMA GNSS filters are most beneficial for mobile positioning in dense urban areas, exhibiting a performance improvement of about 55% over conventional continuous positioning algorithms.
This research also introduces an efficient NLOS mitigation algorithm that empowers conventional GNSS algorithms to handle asymmetric distributions of NLOS measurement errors, without requiring supplementary information or hardware. This algorithm demonstrates improvements of 12% and 34% in single- and multi-epoch cases, respectively, compared to conventional positioning methods.
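The asymmetric-error idea can be pictured with a skewed pseudorange-error likelihood: NLOS reception only delays the signal, so errors have a heavy positive tail. The sketch below uses an exponentially modified Gaussian from SciPy as an assumed error model, not the model proposed in this research, and all numbers are illustrative.

```python
import numpy as np
from scipy.stats import norm, exponnorm

# Illustrative pseudorange residuals (metres): most near zero, plus large positive
# values typical of NLOS delays.
residuals = np.array([0.5, -1.2, 0.8, 25.0, 60.0, -0.3])

sigma = 3.0        # assumed receiver noise standard deviation (m)
nlos_scale = 30.0  # assumed mean extra NLOS delay (m)

# Symmetric model: zero-mean Gaussian.
gauss_loglik = norm.logpdf(residuals, scale=sigma).sum()

# Asymmetric model: Gaussian noise plus an exponential (positive-only) NLOS delay,
# i.e. an exponentially modified Gaussian with shape K = nlos_scale / sigma.
K = nlos_scale / sigma
emg_loglik = exponnorm.logpdf(residuals, K, scale=sigma).sum()

print(f"Gaussian log-likelihood:   {gauss_loglik:.1f}")
print(f"Asymmetric log-likelihood: {emg_loglik:.1f}")

# In a positioning algorithm, per-satellite weights could be taken from the
# asymmetric pdf so that large positive residuals are downweighted rather than
# treated as impossibly unlikely under a symmetric Gaussian.
weights = exponnorm.pdf(residuals, K, scale=sigma)
print("Relative weights:", weights / weights.max())
```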
In summary, this research offers a comprehensive suite of advancements in 3DMA GNSS positioning, particularly in challenging urban landscapes, with promising implications for real-world applications.
Outlier Detection for 3D-Mapping-Aided GNSS Positioning
This paper takes 3D-mapping-aided (3DMA) GNSS as an example and investigates outlier detection for pattern-matching-based positioning. Three different test statistics, two in the measurement domain and one in the position domain, are presented. Two 3D city maps with different levels of detail were used, one of which contained two obvious errors, to demonstrate the performance of 3DMA GNSS positioning in the presence of errors in the mapping data. The experiments were conducted alongside busy roads in the London Borough of Camden, where a total of 8 sets of 2-minute static pedestrian navigation data were collected with a u-blox EVK M8T GNSS receiver. The results confirm that both 3D mapping errors and temporary environmental changes (such as passing vehicles) can have a significant negative impact on the performance of 3DMA GNSS positioning. After applying outlier detection, the single-epoch 3DMA GNSS algorithm reduces the horizontal RMS position error by approximately 15% compared to the same algorithm without outlier detection. The filtering algorithm attenuates the effects of temporary environmental changes, providing an improvement of about 15% over single-epoch positioning, while the outlier detection algorithm further reduces the RMS error to a level comparable to that obtained with high-accuracy maps, about 4.7 m.
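As a hedged sketch of a measurement-domain test statistic (not necessarily one of the three presented in the paper), the snippet below flags a satellite whose least-squares pseudorange residual, normalized by an assumed measurement standard deviation, drives a chi-squared global test over its threshold. The geometry matrix and residual vector are hypothetical.

```python
import numpy as np
from scipy.stats import chi2

# Hypothetical linearized measurement model: H (geometry matrix), z (pseudorange
# residual vector, metres), sigma (assumed measurement standard deviation, metres).
H = np.array([[ 0.3,  0.8, 0.5, 1.0],
              [-0.6,  0.4, 0.7, 1.0],
              [ 0.2, -0.7, 0.7, 1.0],
              [ 0.8,  0.1, 0.6, 1.0],
              [-0.4, -0.5, 0.8, 1.0],
              [ 0.5, -0.2, 0.8, 1.0]])
z = np.array([1.2, -0.8, 0.5, 35.0, -1.1, 0.9])   # one gross error (35 m)
sigma = 3.0

# Least-squares estimate and post-fit residuals.
x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
v = z - H @ x_hat

# Global test: sum of squared normalized residuals against a chi-squared threshold.
dof = H.shape[0] - H.shape[1]
T = float(v @ v) / sigma**2
threshold = chi2.ppf(0.99, dof)
print(f"Global test statistic {T:.1f} vs threshold {threshold:.1f}")

# Local test (w-test style): normalize each residual by its own standard deviation
# and flag the largest if the global test fails.
if T > threshold:
    S = np.eye(len(z)) - H @ np.linalg.inv(H.T @ H) @ H.T   # residual projection matrix
    w = np.abs(v) / (sigma * np.sqrt(np.diag(S)))
    print("Suspected outlier: satellite index", int(np.argmax(w)))
```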
Multi-Epoch 3D-Mapping-Aided Positioning using Bayesian Filtering Techniques
The performance of different filtering algorithms combined with 3D-mapping-aided (3DMA) techniques is investigated in this paper. Several single- and multi-epoch filtering algorithms were implemented and then tested on static pedestrian navigation data collected in the City of London using a u-blox EVK M8T GNSS receiver, and on vehicle navigation data collected in Canary Wharf, London, by a trial van equipped with a Racelogic Labsat 3 GNSS front-end. The results show that filtering has a greater impact on mobile positioning than on static positioning, while 3DMA GNSS brings more significant improvements to positioning accuracy in denser environments than in more open areas. Thus, multi-epoch 3DMA GNSS filtering should bring the greatest benefit to mobile positioning in dense environments. In the vehicle tests at Canary Wharf, 3DMA GNSS filtering reduced the RMS horizontal position error by approximately 68% and 57% compared to single-epoch 3DMA GNSS and filtered conventional GNSS, respectively.
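A minimal sketch of the grid-filter idea behind multi-epoch 3DMA positioning: a probability mass over a grid of candidate positions is propagated with a simple motion model (prediction) and multiplied by a per-epoch 3DMA likelihood (update). The one-dimensional grid, Gaussian transition kernel, and stand-in likelihoods below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

GRID = np.arange(-50, 51, 2.0)           # 1-D grid of east coordinates (m), for brevity
prior = np.ones_like(GRID) / GRID.size   # start from a uniform belief

def predict(belief, step_std=3.0):
    """Convolve the belief with a Gaussian motion kernel (random-walk model)."""
    kernel = np.exp(-0.5 * (GRID - GRID[:, None])**2 / step_std**2)
    kernel /= kernel.sum(axis=1, keepdims=True)
    return kernel.T @ belief

def update(belief, likelihood):
    """Multiply by the per-epoch 3DMA likelihood and renormalize."""
    posterior = belief * likelihood
    return posterior / posterior.sum()

# Stand-in 3DMA likelihoods for three epochs (e.g. from shadow matching and ranging),
# here just Gaussians drifting eastwards to mimic a moving receiver.
truth = [0.0, 4.0, 8.0]
belief = prior
for t, mu in enumerate(truth):
    likelihood = np.exp(-0.5 * (GRID - mu)**2 / 8.0**2)
    belief = update(predict(belief), likelihood)
    print(f"Epoch {t}: estimate = {GRID[np.argmax(belief)]:+.1f} m (truth {mu:+.1f} m)")
```

A particle filter follows the same predict-update pattern with weighted samples instead of a fixed grid.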
Earth-Rock Dams’ Breach Modelling
Simulation of the dam breach process has a significant influence on the evaluation of the consequences of dam-breach floods. In this study, research progress on the numerical modeling of the breach process of earth-rock dams is summarized, with emphasis on the latest results of the author's research team in recent years. However, there is still a considerable gap in the versatility of computer software and in visualization technology for the dam breaching process. It is suggested that more effort should be devoted in the future to detailed, physically based numerical models for core dams and concrete-face rockfill dams; furthermore, more attention should be paid to the application of visualization technology in dam breach process simulation. Finally, universal and user-friendly visualization software that can accurately simulate the dam failure process and flood routing for earth-rock dams is sorely needed.
Efficient Actor-Critic Algorithm with Hierarchical Model Learning and Planning
To improve the convergence rate and sample efficiency, two efficient learning methods, AC-HMLP and RAC-HMLP (AC-HMLP with l2-regularization), are proposed by combining the actor-critic algorithm with hierarchical model learning and planning. The hierarchical models, consisting of a local model and a global model, are learned simultaneously with the value function and the policy; they are approximated by local linear regression (LLR) and linear function approximation (LFA), respectively. Both models are used to generate samples for planning: the local model is used only if the state-prediction error does not exceed a threshold at each time step, while the global model is applied at the end of each episode. The purpose of using both models is to improve sample efficiency and accelerate convergence of the whole algorithm by fully exploiting local and global information. Experimentally, AC-HMLP and RAC-HMLP are compared with three representative algorithms on two reinforcement learning (RL) benchmark problems. The results demonstrate that they perform best in terms of convergence rate and sample efficiency.
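A much-simplified sketch of this structure on a toy 1-D task is given below: a softmax actor and linear critic are updated from real transitions, from a nearest-neighbour local model when its one-step prediction error stays below a threshold, and from episode-end planning over stored transitions standing in for the LFA global model. The environment, features, hyperparameters, and the replay-based stand-in for the global model are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def feats(s):                       # simple polynomial features for LFA
    return np.array([1.0, s, s * s])

def env_step(s, a):                 # toy 1-D environment: reward for staying near zero
    s2 = float(np.clip(s + 0.1 * a + rng.normal(0, 0.01), -1, 1))
    return s2, -s2 * s2

ACTIONS = [-1.0, 1.0]
theta = np.zeros((2, 3))            # actor weights (softmax policy over two actions)
w = np.zeros(3)                     # critic weights (linear value function)
memory = []                         # stored (s, a, s', r) transitions for the models
GAMMA, ALPHA_V, ALPHA_PI, ERR_TOL = 0.95, 0.1, 0.05, 0.02

def policy(s):
    prefs = theta @ feats(s)
    p = np.exp(prefs - prefs.max()); p /= p.sum()
    return rng.choice(2, p=p), p

def ac_update(s, a_idx, r, s2):
    """One actor-critic update from a real or imagined transition."""
    global w, theta
    delta = r + GAMMA * w @ feats(s2) - w @ feats(s)   # TD error
    w += ALPHA_V * delta * feats(s)
    _, p = policy(s)
    grad = -p[:, None] * feats(s); grad[a_idx] += feats(s)
    theta += ALPHA_PI * delta * grad

def local_model(s, a, k=5):
    """LLR-style local model: average the k nearest stored transitions with action a."""
    near = sorted((t for t in memory if t[1] == a), key=lambda t: abs(t[0] - s))[:k]
    if not near:
        return None
    ds = np.mean([t[2] - t[0] for t in near]); r = np.mean([t[3] for t in near])
    return s + ds, r

for episode in range(30):
    s = rng.uniform(-1, 1)
    for step in range(50):
        a_idx, _ = policy(s)
        s2, r = env_step(s, ACTIONS[a_idx])
        memory.append((s, ACTIONS[a_idx], s2, r))
        ac_update(s, a_idx, r, s2)
        # Local-model planning only if its one-step prediction error is small enough.
        pred = local_model(s, ACTIONS[a_idx])
        if pred and abs(pred[0] - s2) < ERR_TOL:
            ac_update(s, a_idx, pred[1], pred[0])
        s = s2
    # Planning at episode end over the whole memory (stand-in for the LFA global model).
    for idx in rng.choice(len(memory), size=min(20, len(memory)), replace=False):
        ps, pa, ps2, pr = memory[idx]
        ac_update(ps, ACTIONS.index(pa), pr, ps2)

print("Learned value at s = 0:", w @ feats(0.0))
```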
S3E: A Large-scale Multimodal Dataset for Collaborative SLAM
With the growing demand for teams of robots to perform tasks collaboratively, the research community has become increasingly interested in collaborative simultaneous localization and mapping. Unfortunately, existing datasets are limited in the scale and variation of their collaborative trajectories, even though generalization across the trajectories of different agents is crucial to the overall viability of collaborative tasks. To help align the research community's contributions with realistic multi-agent coordinated SLAM problems, we propose S3E, a large-scale multimodal dataset captured by a fleet of unmanned ground vehicles along four designed collaborative trajectory paradigms. S3E consists of 7 outdoor and 5 indoor sequences, each exceeding 200 seconds, comprising well temporally synchronized and spatially calibrated high-frequency IMU, high-quality stereo camera, and 360-degree LiDAR data. Crucially, our effort exceeds previous attempts in dataset size, scene variability, and complexity, with 4x the average recording time of the pioneering EuRoC dataset. We also provide careful dataset analysis as well as baselines for collaborative SLAM and its single-robot counterparts. Data and more up-to-date details can be found at https://github.com/PengYu-Team/S3E.
A Systematic Evaluation of Large Language Models on Out-of-Distribution Logical Reasoning Tasks
Large language models (LLMs), such as GPT-3.5 and GPT-4, have greatly advanced the performance of artificial systems on various natural language processing tasks to human-like levels. However, their generalisation and robustness in logical reasoning remain under-evaluated. To probe this ability, we propose three new logical reasoning datasets named "ReClor-plus", "LogiQA-plus" and "LogiQAv2-plus", each featuring three subsets: the first with randomly shuffled options, the second with the correct choices replaced by "none of the other options are correct", and the third combining the previous two perturbations. We carry out experiments on these datasets with both discriminative and generative LLMs and show that these simple perturbations greatly hinder the performance of the language models. Despite their superior performance on the original publicly available datasets, we find that all models struggle to answer our newly constructed datasets. We show that introducing task variations by perturbing a sizable training set can markedly improve a model's generalisation and robustness in logical reasoning tasks. Moreover, applying logic-driven data augmentation for fine-tuning, combined with prompting, can enhance the generalisation performance of both discriminative and generative large language models. These results offer insights into assessing and improving the generalisation and robustness of large language models for logical reasoning tasks. We make our source code and data publicly available at \url{https://github.com/Strong-AI-Lab/Logical-and-abstract-reasoning}.
Comment: Accepted for oral presentation at the LLM@IJCAI 2023 non-archival symposium.
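The three perturbations are simple to reproduce. The sketch below builds shuffled-option and "none of the other options are correct" variants of a generic multiple-choice item; the field names are assumptions for illustration, not the released datasets' exact schema.

```python
import copy
import random

NONE_OPTION = "none of the other options are correct"

def shuffle_options(item, rng):
    """Variant 1: randomly permute the options and remap the answer index."""
    new = copy.deepcopy(item)
    order = list(range(len(item["options"])))
    rng.shuffle(order)
    new["options"] = [item["options"][i] for i in order]
    new["answer"] = order.index(item["answer"])
    return new

def replace_correct_with_none(item):
    """Variant 2: replace the correct choice with a 'none of the others' option."""
    new = copy.deepcopy(item)
    new["options"][item["answer"]] = NONE_OPTION
    return new

def combined(item, rng):
    """Variant 3: apply both perturbations."""
    return replace_correct_with_none(shuffle_options(item, rng))

# Hypothetical ReClor/LogiQA-style item.
item = {
    "context": "All swans observed so far are white.",
    "question": "Which option is best supported?",
    "options": ["All swans are white.", "Some swans are white.",
                "No swans are white.", "Swans cannot be observed."],
    "answer": 1,
}
rng = random.Random(0)
for variant in (shuffle_options(item, rng), replace_correct_with_none(item), combined(item, rng)):
    print(variant["options"], "->", variant["answer"])
```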
Exploring Self-Reinforcement for Improving Learnersourced Multiple-Choice Question Explanations with Large Language Models
Learnersourcing involves students generating and sharing learning resources with their peers. When learnersourcing multiple-choice questions, creating explanations for the generated questions is a crucial step, as it facilitates a deeper understanding of the related concepts. However, it is often difficult for students to craft effective explanations, due to limited subject understanding and a tendency to merely restate the question stem, distractors, and correct answer. To help scaffold this task, in this work we propose a self-reinforcement large-language-model framework with the goal of generating and evaluating explanations automatically. Comprising three modules, the framework generates student-aligned explanations, evaluates these explanations to ensure their quality, and iteratively enhances them: if an explanation's evaluation score falls below a defined threshold, the framework refines and reassesses the explanation. Importantly, our framework emulates the manner in which students compose explanations at the relevant grade level. For evaluation, we had a human subject-matter expert compare the explanations generated by students with explanations created by the open-source large language model Vicuna-13B, by a version of Vicuna-13B fine-tuned using our method, and by GPT-4. We observed that, compared to the other large language models, GPT-4 exhibited a higher level of creativity in generating explanations. We also found that explanations generated by GPT-4 were ranked higher by the human expert than both those created by the other models and the original student-created explanations. Our findings represent a significant advancement in enriching the learnersourcing experience for students and in enhancing the capabilities of large language models in educational applications.
Comment: Preprint. Under review.
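A schematic of the generate-evaluate-refine loop described above is sketched below; `generate_explanation`, `score_explanation`, and `refine_explanation` are placeholders for calls to the underlying language model, and the threshold and retry budget are assumptions rather than the framework's actual settings.

```python
from dataclasses import dataclass

QUALITY_THRESHOLD = 0.7   # assumed acceptance score in [0, 1]
MAX_ROUNDS = 3            # assumed refinement budget

@dataclass
class Question:
    stem: str
    options: list
    answer: str

def generate_explanation(q: Question) -> str:
    # Placeholder for an LLM call prompted to write a student-level explanation.
    return f"The answer is {q.answer} because it follows from the stem."

def score_explanation(q: Question, explanation: str) -> float:
    # Placeholder for an LLM- or rubric-based evaluator returning a quality score.
    return 0.4 if len(explanation) < 80 else 0.9

def refine_explanation(q: Question, explanation: str, score: float) -> str:
    # Placeholder for an LLM call that rewrites the explanation given feedback.
    return explanation + " In particular, each distractor fails for a specific reason."

def self_reinforce(q: Question) -> tuple[str, float]:
    """Generate, evaluate, and iteratively refine until the score clears the threshold."""
    explanation = generate_explanation(q)
    score = score_explanation(q, explanation)
    for _ in range(MAX_ROUNDS):
        if score >= QUALITY_THRESHOLD:
            break
        explanation = refine_explanation(q, explanation, score)
        score = score_explanation(q, explanation)
    return explanation, score

q = Question("Which gas do plants absorb during photosynthesis?",
             ["Oxygen", "Carbon dioxide", "Nitrogen", "Helium"], "Carbon dioxide")
print(self_reinforce(q))
```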
Correlation Analysis of 3D Printability and Rheological Properties of Sodium Alginate Hydrogels
In this study, a Ca2+-induced sodium alginate (SA) hydrogel was used as a model. The rheological properties were measured via steady-state shear, oscillatory strain sweep, and yield stress tests. The network of the sodium alginate hydrogels was analyzed using water distribution and rheological parameters. After a comprehensive analysis of the morphology and micro-CT structure of the 3D-printed products, the mathematical relationship between rheological parameters and 3D printing performance was established using Spearman's correlation analysis. The results showed that the highest-scoring 3D-printed product was obtained at an SA-to-Ca2+ mass ratio of 24:1 and an SA concentration of 4.5%. At the same time, the filament structure of this product was fine and its porosity was 12.21%. The rheological parameters K, η1, G', G", τ0, and τy were 255.1 Pa·s^n, 2740 Pa·s, 3509 Pa, 673.2 Pa, 261.4 Pa, and 51.62 Pa, respectively. Capillary water (about 99.20%) was dominant in the gel network, indicating the strong water-holding capacity of the hydrogel. The correlation analysis showed that the viscosity-related parameters (K, η1, and G") were negatively correlated with extrudability, with a correlation coefficient of -0.577, while the self-supporting capacity of the 3D-printed product was positively correlated with the elastic modulus and stresses (G', τ0, and τy) (P<0.05).
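The correlation step can be reproduced with SciPy's `spearmanr`; the arrays below are made-up stand-ins for the measured rheological parameters and printability scores, not the study's data.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical measurements for six hydrogel formulations.
consistency_K = np.array([180.0, 210.0, 230.0, 255.0, 280.0, 310.0])  # Pa·s^n
print_score   = np.array([6.5, 7.8, 8.9, 9.4, 8.1, 7.0])              # expert rating
extrudability = np.array([9.1, 8.6, 8.0, 7.4, 6.9, 6.1])              # higher = easier

rho, p = spearmanr(consistency_K, extrudability)
print(f"K vs extrudability: rho = {rho:.3f}, p = {p:.3f}")

rho, p = spearmanr(consistency_K, print_score)
print(f"K vs print score:   rho = {rho:.3f}, p = {p:.3f}")
```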