
    Cooperative Relative Positioning of Mobile Users by Fusing IMU Inertial and UWB Ranging Information

    Relative positioning between multiple mobile users is essential for many applications, such as search and rescue in disaster areas or human social interaction. An inertial measurement unit (IMU) is well suited to determining the change of position over short periods of time, but it is very sensitive to error accumulation over longer runs. By equipping the mobile users with a ranging unit, e.g. ultra-wideband (UWB), it is possible to achieve accurate relative positioning with trilateration-based approaches. Compared to vision- or laser-based sensors, UWB does not require line of sight and provides accurate distance estimates. However, UWB does not provide any bearing information and its communication range is limited, so UWB alone cannot determine user locations without ambiguity. In this paper, we propose an approach that combines IMU inertial and UWB ranging measurements for relative positioning between multiple mobile users without knowledge of the infrastructure. We incorporate the UWB and IMU measurements into a probabilistic framework that allows a group of mobile users to be positioned cooperatively and to recover from positioning failures. We have conducted extensive experiments to demonstrate the benefits of incorporating IMU inertial and UWB ranging measurements. Comment: accepted by ICRA 201
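
    A rough sketch of the kind of probabilistic fusion described above, assuming a simple 2D particle filter in which IMU-derived displacements drive the motion model and UWB ranges to a peer reweight the particles. This is not the authors' framework; the function names, noise levels, and single-peer setup are illustrative assumptions.

    import numpy as np

    def predict(particles, imu_displacement, noise_std=0.05):
        """Propagate each particle by the IMU-derived displacement plus process noise."""
        return particles + imu_displacement + np.random.normal(0.0, noise_std, particles.shape)

    def update(particles, weights, uwb_range, peer_position, range_std=0.1):
        """Reweight particles by how well they explain a UWB range to a peer user."""
        predicted_range = np.linalg.norm(particles - peer_position, axis=1)
        likelihood = np.exp(-0.5 * ((uwb_range - predicted_range) / range_std) ** 2)
        weights = weights * likelihood
        return weights / (np.sum(weights) + 1e-12)

    def resample(particles, weights):
        """Resample to recover from weight degeneracy (e.g. after positioning failures)."""
        idx = np.random.choice(len(particles), size=len(particles), p=weights)
        return particles[idx], np.full(len(particles), 1.0 / len(particles))

    # One prediction/update cycle for a single user relative to one peer (hypothetical values).
    particles = np.zeros((500, 2))
    weights = np.full(500, 1.0 / 500)
    particles = predict(particles, imu_displacement=np.array([0.10, 0.02]))
    weights = update(particles, weights, uwb_range=3.2, peer_position=np.array([3.0, 1.0]))
    particles, weights = resample(particles, weights)
    relative_position_estimate = np.average(particles, axis=0, weights=weights)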

    Occlusion-Robust MVO: Multimotion Estimation Through Occlusion Via Motion Closure

    Visual motion estimation is an integral and well-studied challenge in autonomous navigation. Recent work has focused on addressing multimotion estimation, which is especially challenging in highly dynamic environments. Such environments not only comprise multiple, complex motions but also tend to exhibit significant occlusion. Previous work in object tracking focuses on maintaining the integrity of object tracks but usually relies on specific appearance-based descriptors or constrained motion models. These approaches are very effective in specific applications but do not generalize to the full multimotion estimation problem. This paper presents a pipeline for estimating multiple motions, including the camera egomotion, in the presence of occlusions. This approach uses an expressive motion prior to estimate the SE(3) trajectory of every motion in the scene, even during temporary occlusions, and identify the reappearance of motions through motion closure. The performance of this occlusion-robust multimotion visual odometry (MVO) pipeline is evaluated on real-world data and the Oxford Multimotion Dataset. Comment: To appear at the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). An earlier version of this work first appeared at the Long-term Human Motion Planning Workshop (ICRA 2019). 8 pages, 5 figures. Video available at https://www.youtube.com/watch?v=o_N71AA6FR
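
    A minimal sketch of the two ingredients named above, assuming a constant-velocity SE(3) motion prior: the last observed body-frame twist is replayed through an occlusion, and a reappearing motion is associated with the extrapolated track when their poses agree (motion closure). The functions and the translation-only error score are illustrative assumptions, not the MVO pipeline itself.

    import numpy as np
    from scipy.linalg import expm, logm

    def relative_twist(T_prev, T_curr):
        """Body-frame twist (4x4 Lie-algebra element) taking pose T_prev to pose T_curr."""
        return logm(np.linalg.inv(T_prev) @ T_curr).real

    def extrapolate(T_curr, twist, n_steps):
        """Constant-velocity motion prior: keep applying the last observed twist through an occlusion."""
        poses, T = [], T_curr
        for _ in range(n_steps):
            T = T @ expm(twist)
            poses.append(T)
        return poses

    def motion_closure_score(extrapolated, reobserved):
        """Small translation error suggests the reappearing motion matches the occluded track."""
        return np.linalg.norm(extrapolated[:3, 3] - reobserved[:3, 3])

    # Illustrative use: a motion translating 0.1 m per frame along x, predicted 5 frames ahead.
    T0, T1 = np.eye(4), np.eye(4)
    T1[0, 3] = 0.1
    predicted = extrapolate(T1, relative_twist(T0, T1), n_steps=5)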

    MEG and fMRI Fusion for Non-Linear Estimation of Neural and BOLD Signal Changes

    The combined analysis of magnetoencephalography (MEG)/electroencephalography and functional magnetic resonance imaging (fMRI) measurements can lead to an improved description of the dynamical and spatial properties of brain activity. In this paper we empirically demonstrate this improvement using simulated and recorded task-related MEG and fMRI activity. Neural activity estimates were derived using a dynamic Bayesian network with continuous real-valued parameters by means of a sequential Monte Carlo technique. In synthetic data, we show that MEG and fMRI fusion improves estimation of the indirectly observed neural activity and smooths tracking of the blood oxygenation level dependent (BOLD) response. In recordings of task-related neural activity, the combination of MEG and fMRI produces a result with a greater signal-to-noise ratio, which confirms the expectation arising from the nature of the experiment. The highly non-linear model of the BOLD response poses a difficult inference problem for neural activity estimation; computational requirements are also high due to the time and space complexity. We show that joint analysis of the data improves the system's behavior by stabilizing the system of differential equations and by requiring fewer computational resources.
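
    A toy sketch of the sequential Monte Carlo idea described above, assuming a scalar latent neural state observed through fast, direct MEG samples and through a slow, saturating nonlinearity standing in for the BOLD response. The dynamics, noise levels, and tanh observation model are illustrative assumptions, not the paper's model; meg and bold are assumed to be equal-length arrays on a common time grid.

    import numpy as np

    def smc_fusion(meg, bold, n_particles=1000, meg_std=0.5, bold_std=0.1):
        """Particle-filter fusion of a fast modality (MEG) and a slow, nonlinear one (BOLD)."""
        u = np.random.normal(0.0, 1.0, n_particles)   # latent neural activity
        h = np.zeros(n_particles)                     # slow haemodynamic state (toy)
        estimates = np.zeros(len(meg))
        for t in range(len(meg)):
            u = 0.9 * u + np.random.normal(0.0, 0.3, n_particles)          # AR(1) neural dynamics
            h = 0.95 * h + 0.05 * u                                         # low-pass haemodynamic coupling
            w = np.exp(-0.5 * ((meg[t] - u) / meg_std) ** 2)                # MEG likelihood (fast, direct)
            w *= np.exp(-0.5 * ((bold[t] - np.tanh(h)) / bold_std) ** 2)    # nonlinear BOLD likelihood
            w /= w.sum() + 1e-12
            estimates[t] = np.dot(w, u)                                     # posterior-mean neural estimate
            idx = np.random.choice(n_particles, n_particles, p=w)           # resample
            u, h = u[idx], h[idx]
        return estimates

    # Illustrative call on synthetic, equal-length recordings.
    neural_estimate = smc_fusion(np.random.randn(200), 0.1 * np.random.randn(200))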

    Towards Collaborative Simultaneous Localization and Mapping: a Survey of the Current Research Landscape

    Motivated by the tremendous progress we have witnessed in recent years, this paper presents a survey of the scientific literature on the topic of Collaborative Simultaneous Localization and Mapping (C-SLAM), also known as multi-robot SLAM. With fleets of self-driving cars on the horizon and the rise of multi-robot systems in industrial applications, we believe that Collaborative SLAM will soon become a cornerstone of future robotic applications. In this survey, we introduce the basic concepts of C-SLAM and present a thorough literature review. We also outline the major challenges and limitations of C-SLAM in terms of robustness, communication, and resource management. We conclude by exploring the area's current trends and promising research avenues. Comment: 44 pages, 3 figures

    Bedload transport analysis using image processing techniques

    Bedload transport is an important factor in describing the hydromorphological processes of fluvial systems. However, conventional bedload sampling methods have large uncertainty, making it harder to understand this notoriously complex phenomenon. In this study, a novel, image-based approach, the Video-based Bedload Tracker (VBT), is implemented to quantify gravel bedload transport by combining two different techniques: Statistical Background Model and Large-Scale Particle Image Velocimetry. For testing purposes, we use underwater videos captured in a laboratory flume, with future field adaptation as an overall goal. VBT provides full statistics of the individual velocity and grain-size data for the moving particles. The paper introduces the testing of the method, which requires only minimal preprocessing (a simple and fast 2D Gaussian filter) to retrieve and calculate the bedload transport rate. A detailed sensitivity analysis is also carried out to introduce the parameters of the method; it shows that the parameters can be set to appropriate values simply by relying on the literature and on visual evaluation of the resulting segmented videos. Practical aspects of the applicability of VBT in the field are also discussed, and a statistical filter accounting for suspended sediment and air bubbles is provided.
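
    A rough sketch of the two ingredients named above, assuming OpenCV: a Gaussian pre-filter plus a statistical background model (MOG2) isolates the moving grains, and a PIV-style phase correlation gives a bulk displacement per frame pair. This is a generic illustration, not the VBT implementation; the video path, MOG2 settings, and the use of phase correlation are assumptions.

    import cv2
    import numpy as np

    def moving_particle_mask(frame, bg_subtractor, blur_sigma=1.5):
        """Gaussian pre-filter, then statistical background subtraction to keep only moving grains."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        smoothed = cv2.GaussianBlur(gray, (0, 0), blur_sigma)
        return bg_subtractor.apply(smoothed)

    def bulk_displacement(prev_mask, curr_mask):
        """PIV-style displacement of the moving-particle field via phase correlation (pixels/frame)."""
        (dx, dy), _ = cv2.phaseCorrelate(prev_mask.astype(np.float32), curr_mask.astype(np.float32))
        return dx, dy

    # Illustrative loop over a flume video; the file name and parameters are placeholders.
    cap = cv2.VideoCapture("flume_video.mp4")
    bg = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16, detectShadows=False)
    prev = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = moving_particle_mask(frame, bg)
        if prev is not None:
            dx, dy = bulk_displacement(prev, mask)   # per-frame bulk transport proxy
        prev = mask
    cap.release()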

    FlightGoggles: A Modular Framework for Photorealistic Camera, Exteroceptive Sensor, and Dynamics Simulation

    FlightGoggles is a photorealistic sensor simulator for perception-driven robotic vehicles. The key contributions of FlightGoggles are twofold. First, FlightGoggles provides photorealistic exteroceptive sensor simulation using graphics assets generated with photogrammetry. Second, it provides the ability to combine (i) synthetic exteroceptive measurements generated in silico in real time and (ii) vehicle dynamics and proprioceptive measurements generated in motio by vehicle(s) in a motion-capture facility. FlightGoggles is capable of simulating a virtual-reality environment around autonomous vehicle(s). While a vehicle is in flight in the FlightGoggles virtual-reality environment, exteroceptive sensors are rendered synthetically in real time while all complex extrinsic dynamics are generated organically through the natural interactions of the vehicle. The FlightGoggles framework allows researchers to accelerate development by circumventing the need to estimate complex and hard-to-model interactions such as aerodynamics, motor mechanics, battery electrochemistry, and the behavior of other agents. The ability to perform vehicle-in-the-loop experiments with photorealistic exteroceptive sensor simulation facilitates novel research directions involving, e.g., fast and agile autonomous flight in obstacle-rich environments, safe human interaction, and flexible sensor selection. FlightGoggles has been utilized as the main test for selecting nine teams that will advance in the AlphaPilot autonomous drone racing challenge. We survey approaches and results from the top AlphaPilot teams, which may be of independent interest. Comment: Initial version appeared at IROS 2019. Supplementary material can be found at https://flightgoggles.mit.edu. Revision includes description of new FlightGoggles features, such as a photogrammetric model of the MIT Stata Center, new rendering settings, and a Python API
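
    A schematic sketch of the vehicle-in-the-loop cycle described above: the real vehicle's pose comes from motion capture, exteroceptive imagery is rendered synthetically for that pose, and perception/control runs on the synthetic image. Every interface here (Pose, latest_mocap_pose, render_camera, autopilot_update) is a hypothetical placeholder, not the FlightGoggles API.

    from dataclasses import dataclass

    @dataclass
    class Pose:
        x: float
        y: float
        z: float
        yaw: float

    def latest_mocap_pose() -> Pose:
        """Placeholder motion-capture query: real vehicle dynamics are measured, not modeled."""
        return Pose(0.0, 0.0, 1.0, 0.0)

    def render_camera(pose: Pose):
        """Placeholder photorealistic renderer: returns a synthetic image for the given pose."""
        return [[0] * 640 for _ in range(480)]   # stand-in image buffer

    def autopilot_update(image, pose: Pose):
        """Placeholder perception + control step running on synthetic imagery."""
        return {"thrust": 0.5, "yaw_rate": 0.0}

    def vehicle_in_the_loop_step():
        pose = latest_mocap_pose()     # (ii) proprioception/dynamics from the real vehicle
        image = render_camera(pose)    # (i) exteroceptive measurements generated in silico
        return autopilot_update(image, pose)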