2,664 research outputs found

    Cooperative localization by dual foot-mounted inertial sensors and inter-agent ranging

    The implementation challenges of cooperative localization by dual foot-mounted inertial sensors and inter-agent ranging are discussed and work on the subject is reviewed. System architecture and sensor fusion are identified as key challenges. A partially decentralized system architecture based on step-wise inertial navigation and step-wise dead reckoning is presented. This architecture is argued to reduce the computational cost and the required communication bandwidth by around two orders of magnitude while incurring only negligible information loss in comparison with a naive centralized implementation. This makes joint global state estimation feasible for up to a platoon-sized group of agents. Furthermore, robust and low-cost sensor fusion for the considered setup, based on state-space transformation and marginalization, is presented. The transformation and marginalization provide the necessary flexibility for the presented sampling-based updates for inter-agent ranging and for ranging-free fusion of the two feet of an individual agent. Finally, characteristics of the suggested implementation are demonstrated with simulations and a real-time system implementation.
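    The step-wise dead-reckoning idea above reduces each detected step to a displacement and a heading change, so only low-rate quantities need to be communicated and fused. A minimal 2-D sketch of accumulating such per-step quantities into a track (function names and the planar model are illustrative assumptions, not the paper's implementation):

    ```python
    import numpy as np

    def step_wise_dead_reckoning(displacements, heading_changes, p0=np.zeros(2)):
        """Accumulate per-step displacements and heading changes into a 2-D track.

        Each step is reduced to a scalar displacement along the current
        heading plus a heading increment; only these low-rate quantities
        would need to be transmitted by each agent.
        """
        p = p0.astype(float).copy()
        theta = 0.0
        track = [p.copy()]
        for d, dtheta in zip(displacements, heading_changes):
            theta += dtheta
            # Rotate the step displacement into the global frame.
            R = np.array([[np.cos(theta), -np.sin(theta)],
                          [np.sin(theta),  np.cos(theta)]])
            p = p + R @ np.array([d, 0.0])
            track.append(p.copy())
        return np.array(track)
    ```

    Two straight steps of 1 m each yield a track ending at (2, 0); a single step after a 90° turn ends at (0, 1).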

    Infrastructure Wi-Fi for connected autonomous vehicle positioning: a review of the state-of-the-art

    In order to realize intelligent vehicular transport networks and self-driving cars, connected autonomous vehicles (CAVs) must be able to estimate their position to the nearest centimeter. Traditional positioning in CAVs is realized by using a global navigation satellite system (GNSS) such as the global positioning system (GPS), or by fusing weighted location parameters from a GNSS with an inertial navigation system (INS). In urban environments, where Wi-Fi coverage is ubiquitous and GNSS signals experience signal blockage, multipath, or non-line-of-sight (NLOS) propagation, enterprise or carrier-grade Wi-Fi networks can be used opportunistically for localization or “fused” with GNSS to improve localization accuracy and precision. While GNSS-free localization systems exist in the literature, surveys of vehicle localization from the perspective of a Wi-Fi anchor/infrastructure are limited. Consequently, this review investigates recent technological advances in positioning techniques between an ego vehicle and a vehicular network infrastructure. Also discussed is an analysis of the location accuracy, complexity, and applicability of the surveyed literature with respect to intelligent transportation system requirements for CAVs. It is envisaged that hybrid vehicular localization systems will enable pervasive localization services for CAVs as they travel through urban canyons, dense foliage, or multi-story car parks.
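    A common building block in Wi-Fi-anchor positioning of the kind this review surveys is converting received signal strength (RSSI) into a range via the log-distance path-loss model. A minimal sketch, where the reference power at 1 m and the path-loss exponent are illustrative assumptions rather than values from the review:

    ```python
    def rssi_to_distance(rssi_dbm, rssi_at_1m=-40.0, path_loss_exp=3.0):
        """Estimate range (m) from Wi-Fi RSSI via the log-distance model.

        d = 10 ** ((P_1m - RSSI) / (10 * n)), where P_1m is the received
        power at 1 m and n the path-loss exponent. Both parameters are
        environment-dependent and would need calibration in practice.
        """
        return 10.0 ** ((rssi_at_1m - rssi_dbm) / (10.0 * path_loss_exp))
    ```

    With these assumed parameters, an RSSI of -70 dBm maps to 10 m; ranges from several anchors can then be trilaterated or fused with GNSS.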

    Fusing information from two navigation systems using an upper bound on their maximum spatial separation

    A method is proposed to fuse the information from two navigation systems whose relative position is unknown, but for which there exists an upper limit on how far apart the two systems can be. The proposed information fusion method is applied to a scenario in which a pedestrian is equipped with two foot-mounted zero-velocity-aided inertial navigation systems, one on each foot. The performance of the method is studied using experimental data. The results show that the method can significantly improve navigation performance compared with using two uncoupled foot-mounted systems.
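    The core idea above is a constraint: the two foot positions can never be farther apart than some bound. A minimal geometric sketch of enforcing such a bound on two point estimates (the paper's method operates on the full state and covariance; this symmetric projection is an illustrative simplification):

    ```python
    import numpy as np

    def enforce_separation(p1, p2, gamma):
        """Project two position estimates so their distance is at most gamma.

        If the estimated separation exceeds the bound, pull both points
        symmetrically toward their midpoint until the constraint holds;
        otherwise return them unchanged.
        """
        d = np.linalg.norm(p1 - p2)
        if d <= gamma or d == 0.0:
            return p1, p2                 # constraint already satisfied
        mid = 0.5 * (p1 + p2)
        u = (p1 - p2) / d                 # unit vector from p2 to p1
        return mid + 0.5 * gamma * u, mid - 0.5 * gamma * u
    ```

    For example, estimates at (2, 0) and (-2, 0) with a 2 m bound are pulled in to (1, 0) and (-1, 0).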

    An Acoustic Network Navigation System

    This work describes a system for acoustic-based navigation that relies on the addition of localization services to underwater networks. The localization capability has been added on top of an existing network, without imposing constraints on its structure or operation. The approach is based on the inclusion of timing information within acoustic messages, through which it is possible to relate the time of an acoustic transmission to its reception. Exploiting such information at the network application level makes it possible to create an interrogation scheme similar to that of a long baseline. The advantage is that the nodes/autonomous underwater vehicles (AUVs) themselves become the transponders of a network baseline, and hence there is no need for dedicated instrumentation. The paper reports at-sea results obtained from the COLLAB-NGAS14 experimental campaign. During the sea trial, the approach was implemented within an operational network in different configurations to support the navigation of the two Centre for Maritime Research and Experimentation Ocean Explorer (CMRE OEX) vehicles. The obtained results demonstrate that it is possible to support AUV navigation without constraining the network design and with minimal communication overhead. Alternative solutions (e.g., synchronized clocks or two-way travel-time interrogations) might provide higher precision or accuracy, but they come at the cost of impacting the network design and/or the interrogation strategies. Results are discussed, and the performance achieved at sea demonstrates the viability of using the system in real, large-scale operations involving multiple AUVs. These results represent a step toward location-aware underwater networks that are able to provide node localization as a service.
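    The basic measurement behind the scheme above is converting the transmit timestamp embedded in a message, together with its reception time, into a range. A minimal sketch (the function name and nominal sound speed are assumptions; in practice the sound-speed profile and clock handling are far more involved):

    ```python
    SOUND_SPEED = 1500.0  # nominal speed of sound in seawater, m/s

    def one_way_range(t_tx, t_rx, sound_speed=SOUND_SPEED):
        """Range (m) from a timestamped acoustic message.

        Assumes t_tx is embedded in the message payload and that both
        timestamps are expressed in a common time base, as enabled by
        the timing information carried in the network messages.
        """
        travel_time = t_rx - t_tx
        return sound_speed * travel_time
    ```

    A message received 1 s after transmission thus corresponds to a range of about 1500 m; ranges to several network nodes then play the role of a long-baseline fix.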

    Image quality assessment for iris biometric

    Iris recognition, the ability to recognize and distinguish individuals by their iris pattern, is the most reliable biometric in terms of recognition and identification performance. However, the performance of these systems is affected by poor-quality imaging. In this work, we extend previous research efforts on iris quality assessment by analyzing the effect of seven quality factors: defocus blur, motion blur, off-angle, occlusion, specular reflection, lighting, and pixel count on the performance of a traditional iris recognition system. We conclude that defocus blur, motion blur, and off-angle are the factors that affect recognition performance the most. We further designed a fully automated iris image quality evaluation block that operates in two steps: first, each factor is estimated individually; second, the estimated factors are fused using a Dempster-Shafer theory approach to evidential reasoning. The designed block is tested on two datasets, CASIA 1.0 and a dataset collected at WVU. (Abstract shortened by UMI.)
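    The fusion step above combines per-factor quality evidence via Dempster's rule of combination. A minimal sketch of that rule over an arbitrary frame of discernment (the frame and mass assignments below are illustrative, not those used in the paper):

    ```python
    def dempster_combine(m1, m2):
        """Combine two mass functions over the same frame via Dempster's rule.

        Masses are dicts mapping frozenset hypotheses to belief mass.
        Products of masses on intersecting hypotheses are accumulated
        on the intersection; mass assigned to conflicting (disjoint)
        pairs is discarded and the result renormalized.
        """
        combined = {}
        conflict = 0.0
        for a, ma in m1.items():
            for b, mb in m2.items():
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + ma * mb
                else:
                    conflict += ma * mb
        norm = 1.0 - conflict
        return {h: mass / norm for h, mass in combined.items()}
    ```

    For instance, two quality estimators each placing partial mass on "good" and the rest on ignorance ({good, bad}) combine into a sharper belief in "good".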

    Realtime Color Stereovision Processing

    Recent developments in aviation have made micro air vehicles (MAVs) a reality. These featherweight, palm-sized, radio-controlled flying saucers embody the future of air-to-ground combat. No one has yet successfully implemented an autonomous control system for MAVs. Because MAVs are physically small with limited energy supplies, video signals offer superiority over radar for navigational applications. This research takes a step forward in real-time machine vision processing. It investigates techniques for implementing a real-time stereovision processing system using two miniature color cameras. The effects of poor-quality optics are overcome by a robust algorithm that operates in real time and achieves frame rates up to 10 fps under ideal conditions. The vision system implements innovative work in the following five areas of vision processing: fast image-registration preprocessing, object detection, feature correspondence, distortion-compensated ranging, and multiscale nominal-frequency-based object recognition. Results indicate that the system can provide adequate obstacle-avoidance feedback for autonomous vehicle control. However, typical relative position errors are about 10%, too high for surveillance applications. The range of operation is also limited to between 6 and 30 m. The root of this limitation is imprecise feature correspondence: with perfect feature correspondence the range would extend to between 0.5 and 30 m. Stereo camera separation limits the near range, while optical resolution limits the far range. Image frame sizes are 160x120 pixels. Increasing this size will improve far-range characteristics but will also decrease the frame rate. Image preprocessing proved less appropriate than precision camera alignment in this application. A proof of concept for object recognition shows promise for applications with more precise object detection. Future recommendations are offered in all five areas of vision processing.
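    The range limits described above follow from the ideal stereo ranging relation: depth is focal length times baseline divided by disparity, so a wide camera separation pushes nearby objects off-image (limiting near range) while finite pixel resolution bounds the smallest measurable disparity (limiting far range). A minimal sketch of the distortion-free case (parameter values are hypothetical, not the paper's calibration):

    ```python
    def stereo_range(disparity_px, focal_px, baseline_m):
        """Depth (m) from stereo disparity: Z = f * B / d.

        disparity_px: horizontal pixel offset of a feature between views
        focal_px:     focal length expressed in pixels
        baseline_m:   separation between the two cameras in meters
        """
        if disparity_px <= 0:
            raise ValueError("disparity must be positive")
        return focal_px * baseline_m / disparity_px
    ```

    With a 200 px focal length and a 0.15 m baseline, a 10 px disparity corresponds to a 3 m range; a one-pixel disparity error at that distance already shifts the estimate by tens of centimeters, which is why imprecise feature correspondence dominates the error budget.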