59 research outputs found

    Outlier Detection for 3D-Mapping-Aided GNSS Positioning

    Get PDF
    This paper takes 3D-mapping-aided (3DMA) GNSS as an example and investigates outlier detection for pattern-matching-based positioning. Three different test statistics are presented, two in the measurement domain and one in the position domain. Two 3D city maps with different levels of detail were used, one of which contained two obvious errors, to demonstrate the performance of 3DMA GNSS positioning in the presence of mapping errors. The experiments were conducted alongside busy roads in the London Borough of Camden, where a total of 8 sets of 2-minute static pedestrian navigation data were collected with a u-blox EVK M8T GNSS receiver. The results confirm that both 3D mapping errors and temporary environmental changes (such as passing vehicles) can significantly degrade the performance of 3DMA GNSS positioning. After applying outlier detection, the single-epoch 3DMA GNSS algorithm reduces the horizontal RMS position error by approximately 15% compared with the same algorithm without outlier detection. The filtering algorithm attenuates the effects of temporary environmental changes, providing an improvement of about 15% over single-epoch positioning, while the outlier-detection algorithm further reduces the RMS error to a level comparable to that achieved with high-accuracy maps, about 4.7 m.
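    The measurement-domain screening described above can be illustrated with a simplified residual test. The sketch below is a generic MAD-based outlier flag under an assumed Gaussian noise model; it is not one of the paper's three test statistics, and the threshold `k` is an illustrative choice:

```python
from statistics import median

def flag_outliers(residuals, k=3.0):
    """Flag measurements whose residual deviates from the median by more
    than k robust standard deviations (MAD-based scale estimate)."""
    med = median(residuals)
    mad = median(abs(r - med) for r in residuals)
    sigma = 1.4826 * mad  # scales MAD to a std-dev for Gaussian noise
    if sigma == 0:
        return [False] * len(residuals)
    return [abs(r - med) / sigma > k for r in residuals]
```

A flagged measurement would then be excluded (or down-weighted) before the position solution is recomputed.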

    Multi-Epoch 3D-Mapping-Aided Positioning using Bayesian Filtering Techniques

    Get PDF
    The performance of different filtering algorithms combined with 3D-mapping-aided (3DMA) techniques is investigated in this paper. Several single- and multi-epoch filtering algorithms were implemented and then tested on static pedestrian navigation data collected in the City of London using a u-blox EVK M8T GNSS receiver, and on vehicle navigation data collected in Canary Wharf, London, by a trial van with a Racelogic Labsat 3 GNSS front-end. The results show that filtering has a greater impact on mobile positioning than on static positioning, while 3DMA GNSS brings more significant improvements to positioning accuracy in denser environments than in more open areas. Thus, multi-epoch 3DMA GNSS filtering should bring the maximum benefit to mobile positioning in dense environments. In vehicle tests at Canary Wharf, 3DMA GNSS filtering reduced the RMS horizontal position error by approximately 68% and 57% compared to single-epoch 3DMA GNSS and filtered conventional GNSS, respectively.
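    The multi-epoch idea can be sketched as a minimal grid-based Bayes filter over candidate positions: each epoch's 3DMA likelihood reweights the belief, and a prediction step diffuses it between epochs. The simple neighbour-mixing prediction below is an illustrative assumption, not one of the paper's filter designs:

```python
def bayes_update(prior, likelihood):
    """Multiply the prior belief over grid cells by the per-cell
    measurement likelihood and renormalize."""
    post = [p * l for p, l in zip(prior, likelihood)]
    s = sum(post)
    return [p / s for p in post]

def predict(belief, spread=0.2):
    """Crude motion/diffusion step: mix each cell with its neighbours
    to account for uncertainty between epochs."""
    n = len(belief)
    out = []
    for i in range(n):
        left = belief[i - 1] if i > 0 else belief[i]
        right = belief[i + 1] if i < n - 1 else belief[i]
        out.append((1 - spread) * belief[i] + spread / 2 * (left + right))
    s = sum(out)
    return [p / s for p in out]
```

Iterating `predict` then `bayes_update` over successive epochs is what lets temporary outliers (e.g. a passing vehicle) be averaged out rather than corrupting a single fix.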

    Earth-Rock Dams’ Breach Modelling

    Get PDF
    Simulation of the dam breach process has a significant influence on evaluating the consequences of a dam-breach flood. In this study, research progress on the numerical modelling of the breach process of earth-rock dams is summarized, with emphasis on the latest results of the author's research team in recent years. However, a considerable gap remains in the versatility of computer software and in visualization technology for the dam breaching process. It is suggested that more effort should be devoted in the future to detailed, physically based numerical models for core dams and concrete-face rockfill dams; further, more attention should be paid to applying visualization technology to dam breach simulation. Finally, universal and user-friendly visualization software that can accurately simulate the dam failure process and flood routing for earth-rock dams is sorely needed.

    Efficient Actor-Critic Algorithm with Hierarchical Model Learning and Planning

    Get PDF
    To improve the convergence rate and sample efficiency, two efficient learning methods, AC-HMLP and RAC-HMLP (AC-HMLP with l2-regularization), are proposed by combining an actor-critic algorithm with hierarchical model learning and planning. The hierarchical models consist of a local model and a global model, learned simultaneously with the value function and the policy, and approximated by local linear regression (LLR) and linear function approximation (LFA), respectively. Both models are used to generate samples for planning: the local model is used only if the state-prediction error does not exceed a threshold at each time step, while the global model is applied at the end of each episode. Using both models improves sample efficiency and accelerates the convergence of the whole algorithm by fully exploiting local and global information. Experimentally, AC-HMLP and RAC-HMLP are compared with three representative algorithms on two Reinforcement Learning (RL) benchmark problems. The results demonstrate that they perform best in terms of convergence rate and sample efficiency.
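    The model-based planning step can be illustrated with a generic Dyna-style tabular sketch: transitions replayed from a learned model update the value estimates between real interactions. This is a simplified stand-in for the paper's LLR/LFA hierarchical models; the function and parameter names are assumptions:

```python
import random

def dyna_planning(q, model, actions, n_steps, alpha=0.1, gamma=0.95):
    """Replay transitions from a learned model to update a tabular
    action-value function between real interactions (Dyna-style).
    `model` maps (state, action) -> (reward, next_state)."""
    for _ in range(n_steps):
        # sample a previously modelled transition and do a TD update
        (s, a), (r, s2) = random.choice(list(model.items()))
        best_next = max(q.get((s2, b), 0.0) for b in actions)
        td_target = r + gamma * best_next
        q[(s, a)] = q.get((s, a), 0.0) + alpha * (td_target - q.get((s, a), 0.0))
    return q
```

In the paper's scheme, the analogue of `model` would be the local model when its prediction error is below the threshold, and the global model at episode boundaries.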

    S3E: A Large-scale Multimodal Dataset for Collaborative SLAM

    Full text link
    With the growing demand to employ teams of robots to perform tasks collaboratively, the research community has become increasingly interested in collaborative simultaneous localization and mapping. Unfortunately, existing datasets are limited in the scale and variation of their collaborative trajectories, even though generalization across the trajectories of different agents is crucial to the overall viability of collaborative tasks. To help align the research community's contributions with realistic multi-agent coordinated SLAM problems, we propose S3E, a large-scale multimodal dataset captured by a fleet of unmanned ground vehicles along four designed collaborative trajectory paradigms. S3E consists of 7 outdoor and 5 indoor sequences, each exceeding 200 seconds, consisting of well temporally synchronized and spatially calibrated high-frequency IMU, high-quality stereo camera, and 360-degree LiDAR data. Crucially, our effort exceeds previous attempts in dataset size, scene variability, and complexity, with 4x the average recording time of the pioneering EuRoC dataset. We also provide careful dataset analysis as well as baselines for collaborative SLAM and its single-robot counterparts. Data and more up-to-date details can be found at https://github.com/PengYu-Team/S3E.

    A Systematic Evaluation of Large Language Models on Out-of-Distribution Logical Reasoning Tasks

    Full text link
    Large language models (LLMs), such as GPT-3.5 and GPT-4, have greatly advanced the performance of artificial systems on various natural language processing tasks to human-like levels. However, their generalisation and robustness in logical reasoning remain under-evaluated. To probe this ability, we propose three new logical reasoning datasets named "ReClor-plus", "LogiQA-plus" and "LogiQAv2-plus", each featuring three subsets: the first with randomly shuffled options, the second with the correct choice replaced by "none of the other options are correct", and the third combining the previous two perturbations. We carry out experiments on these datasets with both discriminative and generative LLMs and show that these simple tricks greatly hinder model performance. Despite their strong performance on the original, publicly available datasets, all models struggle on our newly constructed datasets. We show that introducing task variations by perturbing a sizable training set can markedly improve generalisation and robustness in logical reasoning tasks. Moreover, applying logic-driven data augmentation for fine-tuning, combined with prompting, can enhance the generalisation performance of both discriminative and generative large language models. These results offer insights into assessing and improving the generalisation and robustness of large language models for logical reasoning tasks. We make our source code and data publicly available at \url{https://github.com/Strong-AI-Lab/Logical-and-abstract-reasoning}.
    Comment: Accepted for oral presentation at the LLM@IJCAI 2023 non-archival symposium.
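    The two single perturbations described above are mechanically simple to construct. The sketch below shows one plausible implementation; the record layout (`options` list, integer `label`) is an assumption for illustration, not the released datasets' actual schema:

```python
import random

def shuffle_options(question, rng=None):
    """Perturbation 1: permute the answer choices and remap the gold
    label so the correct text stays correct."""
    rng = rng or random.Random(0)
    opts = list(question["options"])
    order = list(range(len(opts)))
    rng.shuffle(order)
    return {"options": [opts[i] for i in order],
            "label": order.index(question["label"])}

def replace_gold_with_none(question):
    """Perturbation 2: replace the correct choice with
    'none of the other options are correct', which then becomes
    the gold answer."""
    opts = list(question["options"])
    opts[question["label"]] = "none of the other options are correct"
    return {"options": opts, "label": question["label"]}
```

The third subset simply applies both transformations in sequence.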

    Exploring Self-Reinforcement for Improving Learnersourced Multiple-Choice Question Explanations with Large Language Models

    Full text link
    Learnersourcing involves students generating and sharing learning resources with their peers. When learnersourcing multiple-choice questions, creating explanations for the generated questions is a crucial step, as it facilitates a deeper understanding of the related concepts. However, it is often difficult for students to craft effective explanations, due to limited subject understanding and a tendency to merely restate the question stem, distractors, and correct answer. To help scaffold this task, in this work we propose a self-reinforcement large-language-model framework, with the goal of generating and evaluating explanations automatically. Comprising three modules, the framework generates student-aligned explanations, evaluates these explanations to ensure their quality, and iteratively enhances them. If an explanation's evaluation score falls below a defined threshold, the framework iteratively refines and reassesses the explanation. Importantly, our framework emulates the manner in which students compose explanations at the relevant grade level. For evaluation, we had a human subject-matter expert compare the explanations generated by students with those created by the open-source large language model Vicuna-13B, by a version of Vicuna-13B fine-tuned using our method, and by GPT-4. We observed that, compared to other large language models, GPT-4 exhibited a higher level of creativity in generating explanations. We also found that explanations generated by GPT-4 were ranked higher by the human expert than both those created by the other models and the original student-created explanations. Our findings represent a significant advancement in enriching the learnersourcing experience for students and enhancing the capabilities of large language models in educational applications.
    Comment: Preprint. Under review.
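    The threshold-driven generate-evaluate-refine loop described above can be sketched abstractly. The callables here stand in for the framework's three modules (in practice, LLM calls); the names, the threshold value, and the iteration cap are all illustrative assumptions:

```python
def self_reinforce(generate, evaluate, refine, threshold=0.8, max_iters=5):
    """Generate an explanation, score it, and keep refining until the
    score clears the threshold or the iteration budget runs out."""
    explanation = generate()
    for _ in range(max_iters):
        score = evaluate(explanation)
        if score >= threshold:
            break
        explanation = refine(explanation, score)
    return explanation
```

The iteration cap guards against an evaluator that never awards a passing score.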

    Correlation Analysis of 3D Printability and Rheological Properties of Sodium Alginate Hydrogels

    Get PDF
    In this study, a Ca2+-induced sodium alginate (SA) hydrogel was used as a model. The rheological properties were measured via steady-state shear, oscillatory strain sweep, and yield-stress tests. The network of the sodium alginate hydrogels was analyzed using water distribution and rheological parameters. After a comprehensive analysis of the morphology and Micro-CT structure of the 3D-printed products, the mathematical relationship between rheological parameters and 3D printing performance was established using Spearman's correlation analysis. The results showed that the highest-scoring 3D-printed product was prepared at a mass ratio of SA to Ca2+ of 24:1 and an SA concentration of 4.5%. At the same time, the filament structure of this product was fine and the porosity was 12.21%. The rheological parameters K, η1, G', G", τ0, and τy were 255.1 Pa·sⁿ, 2740 Pa·s, 3509 Pa, 673.2 Pa, 261.4 Pa, and 51.62 Pa, respectively. Capillary water (about 99.20%) was dominant in the gel network, indicating the strong water-holding capacity of the hydrogel. Correlation analysis showed that the viscosity-related parameters (K, η1, and G") were negatively correlated with extrudability, with a correlation coefficient of -0.577, while the self-supporting capacity of the 3D-printed product was positively correlated with the elastic modulus and stress parameters (G', τ0, and τy) (P<0.05).
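    Spearman's correlation, used above to relate rheological parameters to printing scores, is simply the Pearson correlation computed on ranks. A minimal dependency-free sketch (with average ranks for ties):

```python
def rank(values):
    """Assign 1-based ranks, with tied values sharing the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1  # extend over the run of tied values
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

A coefficient of -0.577, as reported for viscosity versus extrudability, indicates a moderately strong inverse monotonic relationship.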

    Enhancing Logical Reasoning of Large Language Models through Logic-Driven Data Augmentation

    Full text link
    Combining large language models with logical reasoning enhances their capacity to address problems in a robust and reliable manner. Nevertheless, the intricate nature of logical reasoning poses challenges to gathering reliable data from the web for building comprehensive training datasets, which subsequently affects performance on downstream tasks. To address this, we introduce a novel logic-driven data augmentation approach, AMR-LDA. AMR-LDA converts the original text into an Abstract Meaning Representation (AMR) graph, a structured semantic representation that encapsulates the logical structure of the sentence, upon which operations are performed to generate logically modified AMR graphs. The modified AMR graphs are subsequently converted back into text to create augmented data. Notably, our methodology is architecture-agnostic: it enhances generative large language models, such as GPT-3.5 and GPT-4, through prompt augmentation, and fine-tunes discriminative large language models through contrastive learning with logic-driven data augmentation. Empirical evidence underscores the efficacy of the proposed method, with performance improvements across seven downstream tasks, such as logical reasoning reading comprehension, textual entailment, and natural language inference. Furthermore, our method ranked first on the ReClor leaderboard \url{https://eval.ai/web/challenges/challenge-page/503/leaderboard/1347}. The source code and data are publicly available at \url{https://github.com/Strong-AI-Lab/Logical-Equivalence-driven-AMR-Data-Augmentation-for-Representation-Learning}.
    Comment: Accepted for oral presentation at the LLM@IJCAI 2023 non-archival symposium.
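    One family of logic-preserving operations the abstract alludes to is logical equivalence, e.g. contraposition. The sketch below illustrates the transformation at the text level with a crude regex; the actual method operates on the AMR graph before converting back to text, so this is only a conceptual stand-in:

```python
import re

def contrapose(sentence):
    """Rewrite 'If A, then B' as its logically equivalent
    contrapositive 'If not B, then not A'. Returns None when the
    sentence does not match the simple conditional pattern."""
    m = re.match(r"[Ii]f (.+?), then (.+?)\.?$", sentence)
    if not m:
        return None
    a, b = m.group(1), m.group(2)
    return (f"If it is not the case that {b}, "
            f"then it is not the case that {a}.")
```

In contrastive fine-tuning, such equivalent rewrites would serve as positive pairs, while logically non-equivalent variants (e.g. the converse) would serve as negatives.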