    An Observer for Estimating Translational Velocity from Optic Flow and Radar

    This thesis presents the development of a discrete-time observer for estimating state information from optic flow and radar measurements. It is shown that estimates of translational and rotational speed can be extracted using a least-squares inversion for wide fields of view or, with the addition of a Kalman filter, for small fields of view. The approach is demonstrated in a simulated three-dimensional urban environment on an autonomous quadrotor micro air vehicle (MAV). A state feedback control scheme is designed, whereby the gains are found via static H∞ synthesis, and implemented to allow trajectory following. The proposed state estimation scheme and feedback method are shown to be sufficient for enabling autonomous navigation of an MAV. The resulting methodology has the advantages of computational speed and simplicity, both of which are imperative for implementation on MAVs due to stringent size, weight, and power requirements.
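
    The least-squares inversion described above can be illustrated with a minimal sketch. The flow model below (a flat planar ring at unit distance, with flow linear in two translational speeds and one rotational rate) is an assumption for illustration, not the thesis's actual measurement model:

```python
import numpy as np

# Hypothetical planar-ring model: optic flow sampled at N azimuthal
# viewing angles gamma_i.  With distances normalized to 1, flow is
# linear in the state (u, v, w):  Q(gamma) = u*sin(gamma) - v*cos(gamma) - w.
# Stacking the samples gives an overdetermined linear system, and the
# state is recovered by a least-squares (pseudoinverse) inversion.
np.random.seed(0)
gammas = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
H = np.column_stack([np.sin(gammas), -np.cos(gammas),
                     -np.ones_like(gammas)])

true_state = np.array([1.5, -0.3, 0.2])            # (u, v, w)
flow = H @ true_state + 0.01 * np.random.randn(len(gammas))

est, *_ = np.linalg.lstsq(H, flow, rcond=None)     # pseudoinverse fit
print(est)   # ≈ [1.5, -0.3, 0.2]
```

    With a wide field of view the columns of H are well conditioned and the inversion alone suffices; for narrow fields of view the system becomes poorly conditioned, which is where the abstract's Kalman filter would take over.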

    Vision systems for autonomous aircraft guidance


    Bio-Inspired Information Extraction in 3-D Environments Using Wide-Field Integration of Optic Flow

    A control-theoretic framework is introduced to analyze an information extraction approach from patterns of optic flow based on analogues to wide-field motion-sensitive interneurons in the insect visuomotor system. An algebraic model of optic flow is developed, based on a parameterization of simple 3-D environments. It is shown that estimates of proximity and speed, relative to these environments, can be extracted using weighted summations of the instantaneous patterns of optic flow. Small-perturbation techniques are utilized to link weighting patterns to outputs, which are applied as feedback to facilitate stability augmentation and perform local obstacle avoidance and terrain following. Weighting patterns that provide direct linear mappings between the sensor array and actuator commands can be derived by casting the problem as a combined static state estimation and linear feedback control problem. Additive noise and environment uncertainties are incorporated into an offline procedure for determination of optimal weighting patterns. Several applications of the method are provided, with differing spatial measurement domains. Non-linear stability analysis and experimental demonstration are presented for a wheeled robot measuring optic flow in a planar ring. Local stability analysis and simulation are used to show robustness over a range of urban-like environments for a fixed-wing UAV measuring in orthogonal rings and a micro helicopter measuring over the full spherical viewing arena. Finally, the framework is used to analyze insect tangential cells with respect to the information they encode and to demonstrate how cell outputs can be appropriately amplified and combined to generate motor commands to achieve reflexive navigation behavior.
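
    The "weighted summation" at the heart of wide-field integration is simply an inner product between the instantaneous flow pattern and a stored weighting pattern. The sketch below discretizes a 1-D viewing ring and uses Fourier-mode weighting patterns; these particular patterns are illustrative assumptions, not the thesis's optimized weights:

```python
import numpy as np

# Wide-field integration on a 1-D ring: each "interneuron" output is
# the inner product of the measured flow pattern with a weighting
# pattern, approximated here by a Riemann sum over the samples.
gammas = np.linspace(0.0, 2.0 * np.pi, 90, endpoint=False)
d_gamma = 2.0 * np.pi / len(gammas)

def wfi_output(flow, weight):
    """Approximate the continuous inner product <flow, weight>."""
    return np.sum(flow * weight) * d_gamma

# Illustrative weighting patterns (low-order Fourier modes):
w_rotation = -np.ones_like(gammas)   # sensitive to uniform rotation
w_lateral  = np.cos(gammas)          # sensitive to lateral drift

# A flow field with a forward-motion term and a yaw component:
flow = 1.0 * np.sin(gammas) - 0.2
print(wfi_output(flow, w_rotation))  # ≈ 1.257 (picks out the yaw term)
print(wfi_output(flow, w_lateral))   # ≈ 0 (no lateral drift present)
```

    Because each output is a fixed linear functional of the flow field, a bank of such patterns acts as a static linear estimator, which is why the abstract can pose weight selection as a combined static estimation and linear feedback design problem.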

    Vision-based control of near-obstacle flight

    This paper presents a novel control strategy, which we call optiPilot, for autonomous flight in the vicinity of obstacles. Most existing autopilots rely on a complete 6-degree-of-freedom state estimation using a GPS and an Inertial Measurement Unit (IMU) and are unable to detect and avoid obstacles. This is a limitation for missions such as surveillance and environment monitoring that may require near-obstacle flight in urban areas or mountainous environments. OptiPilot instead uses optic flow to estimate proximity of obstacles and avoid them. Our approach takes advantage of the fact that, for most platforms in translational flight (as opposed to near-hover flight), the translatory motion is essentially aligned with the aircraft's main axis. This property allows us to directly interpret optic flow measurements as proximity indications. We take inspiration from neural and behavioural strategies of flying insects to propose a simple mapping of optic flow measurements into control signals that requires only a lightweight and power-efficient sensor suite and minimal processing power. In this paper, we first describe results obtained in simulation before presenting the implementation of optiPilot on a real flying platform equipped only with lightweight and inexpensive optical computer-mouse sensors, MEMS rate gyroscopes and a pressure-based airspeed sensor. We show that the proposed control strategy not only allows collision-free flight in the vicinity of obstacles, but is also able to stabilise both attitude and altitude over flat terrain. These results shed new light on flight control by suggesting that the complex sensors and processing required for 6-degree-of-freedom state estimation may not be necessary for autonomous flight, and pave the way toward the integration of autonomy into current and upcoming gram-scale flying platforms.
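
    The geometric property the abstract exploits can be stated in one line. This is a hedged sketch of the idea, not the authors' code: if the velocity vector is aligned with the aircraft's main axis, the derotated translational flow magnitude at eccentricity θ from that axis is (speed/distance)·sin θ, so a single flow measurement plus airspeed reads out directly as a proximity:

```python
import math

# Assumed geometry: translational flight with the velocity vector
# along the body axis.  Flow magnitude at eccentricity theta is
#   p = (airspeed / distance) * sin(theta),
# so proximity (1/distance) follows from one derotated flow sample.
def proximity(flow_mag, airspeed, theta):
    """Proximity (1/m) from derotated optic-flow magnitude (rad/s)."""
    return flow_mag / (airspeed * math.sin(theta))

# An obstacle 10 m away, seen 45 degrees off-axis at 12 m/s airspeed:
flow = (12.0 / 10.0) * math.sin(math.radians(45.0))
print(proximity(flow, 12.0, math.radians(45.0)))  # 0.1 (= 1/10 m)
```

    Scaling by airspeed (from the pressure sensor) and derotating with the rate gyros is what lets the raw mouse-sensor flow be mapped to control signals without any 6-DOF state estimate.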

    Bio-Inspired Disturbance Rejection with Ocellar and Distributed Acceleration Sensing for Small Unmanned Aircraft Systems

    Rapid sensing of body motions is critical to stabilizing a flight vehicle in the presence of exogenous disturbances as well as providing high-performance tracking of desired control commands. This bandwidth requirement becomes more stringent as vehicle scale decreases. In many flying insects, three simple eyes, known as the ocelli, operate as low-latency visual egomotion sensors. Furthermore, many flying insects employ distributed networks of acceleration-sensitive sensors to provide information about body egomotion and to rapidly detect external forces and torques. In this work, simulation modeling of the ocellar visual system common to flying insects was performed based on physiological and behavioral data. Linear state estimation matrices were derived from the measurement models to form estimates of egomotion states. A fully analog ocellar sensor was designed and constructed based on these models, producing state estimation outputs. These analog state estimate outputs were characterized in the presence of egomotion stimuli. Feedback from the ocellar sensor, with and without complementary input from optic flow sensors, was implemented on a quadrotor to perform stabilization and disturbance rejection. The performance of the closed-loop sensor feedback was compared to baseline inertial feedback. A distributed array of digital accelerometers was constructed to rapidly sense forces and torques. The response of the array to induced motion stimuli was characterized, and an automated calibration algorithm was formulated to estimate sensor position and orientation. A linear state estimation matrix was derived from the calibration to directly estimate forces and torques. The force and torque estimates provided by the sensor network were used to augment the quadrotor inner-loop controller to improve tracking of desired commands in the presence of exogenous force and torque disturbances via force-adaptive feedback control.
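
    The "linear state estimation matrix" for a distributed accelerometer array can be sketched from rigid-body kinematics. Neglecting the centripetal term (an assumption made here for brevity), an accelerometer at body-frame position r reads a = a_cm + α × r, which is linear in the six unknowns (a_cm, α); the sensor positions below are illustrative, not the thesis's array:

```python
import numpy as np

def skew(r):
    """Cross-product matrix: skew(r) @ x == np.cross(r, x)."""
    return np.array([[0.0, -r[2], r[1]],
                     [r[2], 0.0, -r[0]],
                     [-r[1], r[0], 0.0]])

# Rigid-body model (centripetal term neglected):
#   a_i = a_cm + alpha x r_i = a_cm - skew(r_i) @ alpha
# Stacking N triaxial sensors gives a 3N x 6 linear system whose
# least-squares solution acts as the state-estimation matrix.
positions = [np.array([0.1, 0.0, 0.0]), np.array([-0.1, 0.0, 0.0]),
             np.array([0.0, 0.1, 0.0]), np.array([0.0, -0.1, 0.0])]
H = np.vstack([np.hstack([np.eye(3), -skew(r)]) for r in positions])

a_cm = np.array([0.0, 0.0, 9.81])      # linear acceleration
alpha = np.array([0.0, 0.0, 2.0])      # angular acceleration
readings = H @ np.concatenate([a_cm, alpha])

est, *_ = np.linalg.lstsq(H, readings, rcond=None)
print(est[:3], est[3:])   # recovers a_cm and alpha
```

    Scaling a_cm by mass and α by inertia then yields the force and torque estimates used to augment the inner-loop controller.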

    Bio-Inspired Motion Perception: From Ganglion Cells to Autonomous Vehicles

    Animals are remarkable at navigation, even in extreme situations. Through motion perception, animals compute their own movements (egomotion) and find other objects (prey, predators, obstacles) and their motions in the environment. Analogous to animals, artificial systems such as robots also need to know where they are relative to structure and to segment obstacles to avoid collisions. Even though substantial progress has been made in the development of artificial visual systems, they still struggle to achieve robust and generalizable solutions. To this end, I propose a bio-inspired framework that narrows the gap between natural and artificial systems. The standard approaches in robot motion perception seek to reconstruct a three-dimensional model of the scene and then use this model to estimate egomotion and object segmentation. However, the scene reconstruction process is data-heavy and computationally expensive and fails to deal with high-speed and dynamic scenarios. In contrast, biological visual systems excel in these difficult situations by extracting only the minimal information sufficient for motion perception tasks. I derive minimalist/purposive ideas from biological processes throughout this thesis and develop mathematical solutions for robot motion perception problems. In this thesis, I develop a full range of solutions that utilize bio-inspired motion representation and learning approaches for motion perception tasks. In particular, I focus on egomotion estimation and motion segmentation. I have four main contributions:
    1. I introduce NFlowNet, a neural network to estimate normal flow (bio-inspired motion filters). Normal flow estimation presents a new avenue for solving egomotion in a robust and qualitative framework.
    2. Utilizing normal flow, I propose the DiffPoseNet framework to estimate egomotion by formulating the qualitative constraint in a differentiable optimization layer, which allows for end-to-end learning.
    3. Utilizing a neuromorphic event camera, a retina-inspired vision sensor, I develop 0-MMS, a model-based optimization approach that employs event spikes to segment the scene into multiple moving parts in high-speed and dynamic lighting scenarios.
    4. To improve the precision of event-based motion perception across time, I develop SpikeMS, a novel bio-inspired learning approach that fully capitalizes on the rich temporal information in event spikes.
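
    "Normal flow" itself is worth a one-line illustration. This sketch reflects the standard definition (the flow component along the image gradient, fixed by brightness constancy), not any particular network from the thesis:

```python
import numpy as np

# Brightness constancy,  Ix*u + Iy*v + It = 0,  constrains only the
# flow component along the image gradient (the aperture problem).
# That component -- the normal flow -- is directly measurable:
#   n = -(It / |grad I|^2) * (Ix, Iy)
def normal_flow(Ix, Iy, It, eps=1e-8):
    """Per-pixel normal-flow vector from image derivatives."""
    g2 = Ix**2 + Iy**2 + eps
    return np.stack([-It * Ix / g2, -It * Iy / g2], axis=-1)

# A pattern translating at (u, v) = (1, 0) px/frame with a purely
# horizontal gradient: normal flow coincides with the true flow.
Ix, Iy = np.ones((4, 4)), np.zeros((4, 4))
It = -(Ix * 1.0 + Iy * 0.0)
print(normal_flow(Ix, Iy, It)[0, 0])   # ≈ [1., 0.]
```

    Because normal flow needs no correspondence search or full-flow regularization, it is the kind of minimal, directly measurable quantity the purposive framework above builds on.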

    Fusion of Imaging and Inertial Sensors for Navigation

    The motivation of this research is to address the limitations of satellite-based navigation by fusing imaging and inertial systems. The research begins by rigorously describing the imaging and navigation problem and developing practical models of the sensors, then presenting a transformation technique to detect features within an image. Given a set of features, a statistical feature projection technique is developed which utilizes inertial measurements to predict vectors in the feature space between images. This coupling of the imaging and inertial sensors at a deep level is then used to aid the statistical feature matching function. The feature matches and inertial measurements are then used to estimate the navigation trajectory using an extended Kalman filter. After accomplishing a proper calibration, the image-aided inertial navigation algorithm is then tested using a combination of simulation and ground tests with both tactical- and consumer-grade inertial sensors. While limitations of the Kalman filter are identified, the experimental results demonstrate a navigation performance improvement of at least two orders of magnitude over the respective inertial-only solutions.
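
    The fusion pattern described above, inertial measurements driving the filter's prediction and image features supplying the update that bounds drift, can be sketched with a toy linear Kalman filter. This is a 1-D illustration of the structure, not the thesis's extended Kalman filter or its error-state model:

```python
import numpy as np

# Toy 1-D position/velocity filter: the accelerometer acts as the
# process input (prediction), and a vision-derived position fix acts
# as the measurement (update), bounding the inertial drift.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])    # inertial propagation
B = np.array([[0.5 * dt**2], [dt]])      # accelerometer input map
H = np.array([[1.0, 0.0]])               # vision observes position
Q = 1e-4 * np.eye(2)                     # process noise
R = np.array([[0.01]])                   # vision measurement noise

x, P = np.zeros((2, 1)), np.eye(2)
for k in range(50):
    accel = np.array([[1.0]])                        # measured accel
    x, P = F @ x + B @ accel, F @ P @ F.T + Q        # predict
    z = np.array([[0.5 * (dt * (k + 1))**2]])        # vision "fix"
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
    x = x + K @ (z - H @ x)                          # update state
    P = (np.eye(2) - K @ H) @ P                      # update covariance
print(x.ravel())   # ≈ [12.5, 5.0] after 5 s of constant 1 m/s² accel
```

    Without the update step, accelerometer bias and noise would integrate into unbounded position error; the periodic feature-based fixes are what produce the orders-of-magnitude improvement over the inertial-only solution.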