5 research outputs found

    Multihop Rendezvous Algorithm for Frequency Hopping Cognitive Radio Networks

    Cognitive radios offer the possibility of increasing utilization of the wireless spectrum, but because of their dynamic access nature they require new techniques, known as rendezvous, for establishing and joining networks. Existing rendezvous algorithms assume that rendezvous can be completed in a single round, or hop, of time. However, cognitive radio networks whose frequency hopping is too fast for synchronization packets to be exchanged in a single hop require a rendezvous algorithm that supports rendezvous over multiple hops. We propose the Multiple Hop (MH) rendezvous algorithm, which relies on a pre-shared sequence of random numbers, bounded timing differences, and similar channel lists to successfully match a percentage of hops. It is tested in simulation against other well-known rendezvous algorithms and implemented in GNU Radio for the HackRF One. Our simulation results show that at 100 hops per second the MH algorithm is faster than the other tested algorithms at 50 or more channels with timing within ±50 milliseconds, at 250 or more channels with timing within ±500 milliseconds, and at 2000 channels with timing within ±5000 milliseconds. In an asymmetric environment with 100 hops per second, a 500 millisecond timing difference, and 1000 channels, the MH algorithm was faster than the other tested algorithms as long as the channel overlap was 35% or higher, given a 50% required packet success rate to complete rendezvous. We recommend the Multihop algorithm for use cases with a fast frequency hop rate and a slow data transmission rate that require multiple hops to rendezvous, or where the channel count is 250 or more, provided that timing data is available and all radios to be connected to the network can be pre-loaded with a shared seed.
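The pre-shared-seed idea in the abstract above can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function names and the uniform channel choice are assumptions, and the actual MH algorithm additionally exploits bounded timing data and partial packet matching.

```python
import random

def hop_sequence(seed, channels, n_hops):
    """Derive a frequency-hop sequence from a pre-shared seed.
    Two radios seeding identical PRNGs compute identical sequences,
    so they can meet on the same channel without negotiation."""
    rng = random.Random(seed)
    return [channels[rng.randrange(len(channels))] for _ in range(n_hops)]

def matched_fraction(seq, offset):
    """Fraction of slots in which two radios using the same seeded
    sequence, but misaligned by `offset` slots, share a channel."""
    n = len(seq) - offset
    return sum(seq[i] == seq[i + offset] for i in range(n)) / n
```

With zero offset the sequences align on every hop; with an uncorrected clock offset the match rate collapses toward 1/(number of channels), which is why the algorithm depends on bounded timing differences between the radios.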

    Inertial Navigation Aided by Monocular Camera Observations of Unknown Features

    No full text

    Vision Aided Inertial Navigation System Augmented with a Coded Aperture

    Navigation through an indoor environment is a formidable challenge for an autonomous micro air vehicle. One solution is a vision aided inertial navigation system that uses depth-from-defocus to determine heading and depth to features in the scene. Depth-from-defocus estimates depth from the focal blur pattern. As depth increases, the observable change in the focal blur is generally reduced; consequently, as the depth of a feature to be measured increases, the measurement performance decreases. The Fresnel zone plate, used as an aperture, introduces multiple focal planes. Interference between these focal planes produces changes in the blur pattern that extend the depth at which changes in the focal blur are observable. This improved depth measurement performance results in improved performance of the vision aided navigation system as well. This research provides an in-depth study of the Fresnel zone plate used as a coded aperture and of the performance improvement obtained by augmenting a single-camera vision aided inertial navigation system.
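The multiple focal planes mentioned above follow from the standard diffractive optics of a binary Fresnel zone plate: zone boundaries at r_n = sqrt(n·λ·f) and focal planes at f, f/3, f/5, … (odd diffraction orders). The sketch below computes these textbook quantities; it is not code from the thesis, and the parameter values in the test are arbitrary.

```python
import math

def zone_radii(focal_length_m, wavelength_m, n_zones):
    """Outer radius of each Fresnel zone for a plate with primary
    focal length f: r_n = sqrt(n * lambda * f)."""
    return [math.sqrt(n * wavelength_m * focal_length_m)
            for n in range(1, n_zones + 1)]

def focal_planes(focal_length_m, n_orders):
    """A zone plate focuses at f, f/3, f/5, ... (odd diffraction
    orders) -- the source of the multiple interfering focal planes."""
    return [focal_length_m / (2 * m + 1) for m in range(n_orders)]
```

Because the odd-order foci coexist, their point-spread functions interfere, and it is this structured blur that remains depth-sensitive further from the camera than a conventional circular-aperture blur.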

    Development and Flight of a Robust Optical-Inertial Navigation System Using Low-Cost Sensors

    This research develops and tests a precision navigation algorithm fusing optical and inertial measurements of unknown objects at unknown locations. It provides an alternative to the Global Positioning System (GPS) as a precision navigation source, enabling passive and low-cost navigation in situations where GPS is denied or unavailable. This paper describes two new contributions. First, a rigorous study of the fundamental nature of optical/inertial navigation is accomplished by examining the observability Gramian of the underlying measurement equations. This analysis yields a set of design principles guiding the development of optical/inertial navigation algorithms. The second contribution is the development and flight test of an optical-inertial navigation system using low-cost and passive sensors (including an inexpensive commercial-grade inertial sensor, which is unsuitable for navigation by itself). This prototype system was built and flight tested at the U.S. Air Force Test Pilot School. The implemented algorithm leveraged the design principles described above and used images from a single camera. It was shown (and explained by the observability analysis) that the system gained significant performance by aiding it with a barometric altimeter and magnetic compass, and by using a digital terrain elevation database (DTED). The still low-cost and passive system demonstrated performance comparable to high-quality navigation-grade inertial navigation systems, which cost an order of magnitude more than this optical-inertial prototype. The resultant performance of the system tested provides a robust and practical navigation solution for Air Force aircraft.
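The observability analysis referred to above rests on the rank of an observability Gramian: full rank means every state component can be recovered from the outputs. A minimal sketch for a linear time-invariant discrete-time model (the paper's actual analysis concerns the nonlinear optical/inertial measurement equations; this is only the underlying idea):

```python
import numpy as np

def observability_gramian(A, C, horizon):
    """Discrete-time observability Gramian W = sum_k (A^k)^T C^T C A^k.
    rank(W) == n  <=>  the n-dimensional state is observable over
    the given horizon."""
    n = A.shape[0]
    W = np.zeros((n, n))
    Ak = np.eye(n)
    for _ in range(horizon):
        W += Ak.T @ C.T @ C @ Ak
        Ak = A @ Ak
    return W
```

For a 1-D constant-velocity state [position, velocity], measuring position yields a full-rank Gramian (velocity becomes observable through the dynamics), whereas measuring only velocity leaves position unobservable, and the Gramian is rank-deficient. The same style of rank argument is what yields design principles such as the benefit of adding a barometric altimeter and magnetic compass.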

    Airborne vision-based attitude estimation and localisation

    Vision plays an integral part in a pilot's ability to navigate and control an aircraft. Visual Flight Rules have therefore been developed around the pilot's ability to see the environment outside the cockpit in order to control the attitude of the aircraft, to navigate, and to avoid obstacles. Automating these processes with a vision system could greatly increase the reliability and autonomy of unmanned aircraft and flight automation systems. This thesis investigates the development and implementation of a robust vision system that fuses inertial information with visual information in a probabilistic framework, with the aim of aircraft navigation. The appearance of the horizon is a strong visual indicator of the attitude of the aircraft, which leads to the first research area of this thesis: visual horizon attitude determination. An image processing method was developed to provide high-performance horizon detection and extraction from camera imagery, and a number of horizon models were developed to link the detected horizon to the attitude of the aircraft with varying degrees of accuracy. The second area investigated was visual localisation of the aircraft. A terrain-aided horizon model was developed to estimate the position and altitude as well as the attitude of the aircraft, giving rough position estimates with highly accurate attitude information. The localisation accuracy was improved by incorporating ground-feature-based map-aided navigation: road intersections were detected using a purpose-built image processing algorithm and matched to a database to provide positional information. The developed vision system shows performance comparable to other non-vision-based systems while removing the dependence on external systems for navigation. The vision system and techniques developed in this thesis help to increase the autonomy of unmanned aircraft and flight automation systems for manned flight.
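The link between a detected horizon line and aircraft attitude can be illustrated with a minimal flat-earth pinhole-camera model. This is an assumption made for illustration only (the thesis develops more elaborate horizon models, including a terrain-aided one), and the function name and sign conventions are hypothetical.

```python
import math

def attitude_from_horizon(slope, intercept_px, image_height_px, focal_px):
    """Flat-earth sketch: for a horizon line y = slope*x + intercept
    in the image, roll follows from the line's slope and pitch from
    its vertical offset from the principal point (assumed here to be
    the image centre), scaled by the focal length in pixels."""
    roll = math.atan(slope)
    offset = intercept_px - image_height_px / 2  # pixels below centre
    pitch = math.atan2(offset, focal_px)
    return roll, pitch
```

A level horizon through the image centre gives zero roll and pitch; a horizon tilted at 45 degrees gives a roll of pi/4. At long range the flat-earth assumption breaks down, which is one motivation for the terrain-aided model described in the abstract.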