Cooperative localization by dual foot-mounted inertial sensors and inter-agent ranging
The implementation challenges of cooperative localization by dual
foot-mounted inertial sensors and inter-agent ranging are discussed and work on
the subject is reviewed. System architecture and sensor fusion are identified
as key challenges. A partially decentralized system architecture based on
step-wise inertial navigation and step-wise dead reckoning is presented. This
architecture is argued to reduce the computational cost and required
communication bandwidth by around two orders of magnitude while only giving
negligible information loss in comparison with a naive centralized
implementation. This makes joint global state estimation feasible for up to a platoon-sized group of agents. Furthermore, a robust and low-cost sensor fusion scheme for the considered setup, based on state-space transformation and marginalization, is presented. The transformation and marginalization provide the flexibility needed for the presented sampling-based updates: inter-agent ranging updates and ranging-free fusion of an individual agent's two feet. Finally, characteristics of the suggested implementation are demonstrated with simulations and a real-time system implementation.
Infrastructure Wi-Fi for connected autonomous vehicle positioning: a review of the state-of-the-art
In order to realize intelligent vehicular transport networks and self-driving cars, connected autonomous vehicles (CAVs) must be able to estimate their position to the nearest centimeter. Traditional positioning in CAVs relies on a global navigation satellite system (GNSS), such as the Global Positioning System (GPS), or on fusing weighted location parameters from a GNSS with an inertial navigation system (INS). In urban environments, where Wi-Fi coverage is ubiquitous and GNSS signals experience blockage, multipath, or non-line-of-sight (NLOS) propagation, enterprise or carrier-grade Wi-Fi networks can be used opportunistically for localization or fused with GNSS to improve localization accuracy and precision. While GNSS-free localization systems appear in the literature, surveys of vehicle localization from the perspective of a Wi-Fi anchor/infrastructure are scarce. Consequently, this review investigates recent technological advances in positioning techniques between an ego vehicle and a vehicular network infrastructure. The paper also analyzes the location accuracy, complexity, and applicability of the surveyed literature with respect to intelligent transportation system requirements for CAVs. It is envisaged that hybrid vehicular localization systems will enable pervasive localization services for CAVs as they travel through urban canyons, dense foliage, or multi-story car parks.
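A common ranging primitive behind the Wi-Fi localization techniques the review surveys is the log-distance path-loss model, which maps a received signal strength (RSSI) reading to a distance from the access point. A minimal sketch follows; the transmit power, path-loss exponent, and RSSI values are illustrative assumptions, not parameters from the paper:

```python
import math

def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exponent=2.7, d0=1.0):
    """Estimate distance (m) from one RSSI reading via the log-distance
    path-loss model: RSSI = P(d0) - 10 * n * log10(d / d0).
    Parameter values here are illustrative, not calibrated."""
    return d0 * 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exponent))

# An access point heard at -67 dBm with the assumed parameters:
d = rssi_to_distance(-67.0)
```

In practice such per-anchor ranges feed a trilateration or fingerprinting stage, and the path-loss exponent must be calibrated per environment.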
Fusing information from two navigation systems using an upper bound on their maximum spatial separation
A method is proposed to fuse the information from two navigation systems whose relative position is unknown, but where there exists an upper limit on how far apart the two systems can be. The proposed information fusion method is applied to a scenario in which a pedestrian is equipped with two foot-mounted, zero-velocity-aided inertial navigation systems, one on each foot. The performance of the method is studied using experimental data. The results show that the method can significantly improve navigation performance compared with using two uncoupled foot-mounted systems.
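The core idea, treating the known maximum separation as a constraint on the two position estimates, can be sketched as a simple projection: when the estimated separation exceeds the bound, both estimates are pulled toward each other, with the less certain system moving more. This is an illustrative constraint projection under scalar-variance assumptions, not the paper's exact measurement update:

```python
import math

def apply_separation_bound(p1, var1, p2, var2, gamma):
    """If the estimated separation of the two systems exceeds the bound
    gamma, pull both 2-D position estimates toward each other along the
    connecting line, weighting the correction by each system's (scalar)
    variance. Illustrative sketch, not the paper's exact filter update."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    dist = math.hypot(dx, dy)
    if dist <= gamma:
        return p1, p2                      # bound satisfied, no change
    ux, uy = dx / dist, dy / dist          # unit vector from p1 to p2
    excess = dist - gamma                  # violation magnitude
    w1 = var1 / (var1 + var2)              # less certain system moves more
    w2 = var2 / (var1 + var2)
    return ((p1[0] + w1 * excess * ux, p1[1] + w1 * excess * uy),
            (p2[0] - w2 * excess * ux, p2[1] - w2 * excess * uy))
```

For two equally uncertain feet estimated 3 m apart with a 1 m bound, each estimate moves 1 m inward so the constraint holds exactly.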
An Acoustic Network Navigation System
This work describes a system for acoustic-based navigation that relies on the addition of localization services to underwater networks. The localization capability has been added on top of an existing network, without imposing constraints on its structure or operation. The approach is based on the inclusion of timing information within acoustic messages, through which it is possible to relate the time of an acoustic transmission to its reception. Exploiting such information at the network application level makes it possible to create an interrogation scheme similar to that of a long baseline. The advantage is that the nodes/autonomous underwater vehicles (AUVs) themselves become the transponders of a network baseline, and hence there is no need for dedicated instrumentation. The paper reports at-sea results obtained from the COLLAB-NGAS14 experimental campaign. During the sea trial, the approach was implemented within an operational network in different configurations to support the navigation of the two Centre for Maritime Research and Experimentation Ocean Explorer (CMRE OEX) vehicles. The obtained results demonstrate that it is possible to support AUV navigation without constraining the network design and with a minimum communication overhead. Alternative solutions (e.g., synchronized clocks or two-way travel-time interrogations) might provide higher precision or accuracy, but they come at the cost of impacting the network design and/or the interrogation strategies. Results are discussed, and the performance achieved at sea demonstrates the viability of using the system in real, large-scale operations involving multiple AUVs. These results represent a step toward location-aware underwater networks that are able to provide node localization as a service.
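The ranging principle the abstract describes, embedding the transmit timestamp in the acoustic message so the receiver can convert travel time to range, reduces to a one-line computation. A minimal sketch, assuming aligned clocks and a nominal sound speed (a real deployment must handle clock offset and drift, which is why the paper compares against synchronized-clock and two-way schemes):

```python
SOUND_SPEED = 1500.0  # nominal speed of sound in seawater, m/s (assumed)

def one_way_range(t_transmit, t_receive, sound_speed=SOUND_SPEED):
    """One-way travel-time ranging: the transmit timestamp travels
    inside the acoustic message, so the receiver converts the measured
    travel time directly into a range. Assumes aligned clocks."""
    return sound_speed * (t_receive - t_transmit)

# A message stamped at t = 10.000 s and heard at t = 10.800 s:
r = one_way_range(t_transmit=10.000, t_receive=10.800)
```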
Image quality assessment for iris biometric
Iris recognition, the ability to recognize and distinguish individuals by their iris pattern, is the most reliable biometric in terms of recognition and identification performance. However, the performance of these systems is degraded by poor-quality imaging. In this work, we extend previous research on iris quality assessment by analyzing the effect of seven quality factors, defocus blur, motion blur, off-angle, occlusion, specular reflection, lighting, and pixel count, on the performance of a traditional iris recognition system. We conclude that defocus blur, motion blur, and off-angle are the factors that affect recognition performance the most. We further designed a fully automated iris image quality evaluation block that operates in two steps: first, each factor is estimated individually; then, the estimated factors are fused using a Dempster-Shafer evidential-reasoning approach. The designed block is tested on two datasets, CASIA 1.0 and a dataset collected at WVU. (Abstract shortened by UMI.)
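The fusion step rests on Dempster's rule of combination, which merges per-factor mass functions and renormalizes away conflicting mass. A minimal sketch over a two-element quality frame; the mass values and factor names are illustrative assumptions, not the thesis's estimates:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for mass functions whose focal
    elements are frozensets. Mass assigned to conflicting (disjoint)
    pairs is discarded and the remainder renormalized."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}

G, B = frozenset(['good']), frozenset(['bad'])
theta = G | B  # the full frame, i.e. ignorance
# Two hypothetical quality factors, each as evidence the image is usable:
m_focus = {G: 0.7, B: 0.2, theta: 0.1}
m_blur  = {G: 0.6, B: 0.3, theta: 0.1}
fused = dempster_combine(m_focus, m_blur)
```

Because both factors lean toward "good", the combined belief in "good" exceeds either input's, which is the behavior that makes the rule attractive for aggregating independent quality estimates.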
Image Understanding and Robotics Research at Columbia University
Over the past year, the research investigations of the Vision/Robotics Laboratory at Columbia University have reflected the interests of its four faculty members, two staff programmers, and 16 Ph.D. students. Several of the projects involve other faculty members in the department or the university, or researchers at AT&T, IBM, or Philips. We list below a summary of our interests and results, together with the principal researchers associated with them. Since it is difficult to separate those aspects of robotic research that are purely visual from those that are vision-like (for example, tactile sensing) or vision-related (for example, integrated vision-robotic systems), we have listed all robotic research that is not purely manipulative. The majority of our current investigations are deepenings of work reported last year; this was the second year of both our basic Image Understanding contract and our Strategic Computing contract. Therefore, the form of this year's report closely resembles last year's. Although there are a few new initiatives, mainly we report the new results we have obtained in the same five basic research areas. Much of this work is summarized on a video tape that is available on request. We also note two service contributions this past year. The Special Issue on Computer Vision of the Proceedings of the IEEE, August, 1988, was co-edited by one of us (John Kender [27]). And the upcoming IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June, 1989, is co-program chaired by one of us (John Kender [23]).
Realtime Color Stereovision Processing
Recent developments in aviation have made micro air vehicles (MAVs) a reality. These featherweight, palm-sized, radio-controlled flying saucers embody the future of air-to-ground combat. No one has yet successfully implemented an autonomous control system for MAVs. Because MAVs are physically small with limited energy supplies, video signals offer superiority over radar for navigational applications. This research takes a step forward in real-time machine vision processing. It investigates techniques for implementing a real-time stereovision processing system using two miniature color cameras. The effects of poor-quality optics are overcome by a robust algorithm, which operates in real time and achieves frame rates up to 10 fps in ideal conditions. The vision system implements innovative work in the following five areas of vision processing: fast image registration preprocessing, object detection, feature correspondence, distortion-compensated ranging, and multiscale nominal frequency-based object recognition. Results indicate that the system can provide adequate obstacle-avoidance feedback for autonomous vehicle control. However, typical relative position errors are about 10%, too high for surveillance applications. The range of operation is also limited to between 6 and 30 m. The root of this limitation is imprecise feature correspondence: with perfect feature correspondence, the range would extend to between 0.5 and 30 m. Stereo camera separation limits the near range, while optical resolution limits the far range. Image frame sizes are 160x120 pixels. Increasing this size will improve far-range characteristics but will also decrease frame rate. Image preprocessing proved to be less appropriate than precision camera alignment in this application. A proof of concept for object recognition shows promise for applications with more precise object detection. Future recommendations are offered in all five areas of vision processing.
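The abstract's link between correspondence precision and range limits follows from the standard stereo triangulation relation Z = f * B / d and its first-order error. A brief sketch with illustrative values (the focal length, baseline, and disparity error below are assumptions, not the thesis's camera parameters):

```python
def stereo_range(focal_px, baseline_m, disparity_px):
    """Range from stereo disparity: Z = f * B / d, with focal length in
    pixels, baseline in meters, and disparity in pixels."""
    return focal_px * baseline_m / disparity_px

def range_error(focal_px, baseline_m, disparity_px, disparity_err_px=1.0):
    """First-order range uncertainty |dZ| = Z^2 * dd / (f * B): error
    grows with the square of range, which is why imprecise feature
    correspondence (large dd) caps the usable far range, while the
    baseline caps the near range."""
    z = stereo_range(focal_px, baseline_m, disparity_px)
    return z * z * disparity_err_px / (focal_px * baseline_m)
```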