
    Insect inspired visual motion sensing and flying robots

    Flying insects are masters of visual motion sensing: they rely on dedicated motion-processing circuits that run at low energy and computational cost. Drawing on observations of insect visual guidance, we developed visual motion sensors and bio-inspired autopilots for flying robots. Optic-flow-based visuomotor control systems have been implemented on a growing number of sighted autonomous robots. In this chapter, we present how we designed and constructed local motion sensors and how we implemented bio-inspired visual guidance schemes on board several micro-aerial vehicles. A hyperacute sensor, in which retinal micro-scanning movements are performed by a small piezo-bender actuator, was mounted on a miniature aerial robot. The OSCAR II robot can track a moving target accurately by exploiting the micro-scanning movement imposed on its eye's retina. We also present two interdependent control schemes, one driving the eye's angular position relative to the robot and one driving the robot's body angular position relative to a visual target, without any knowledge of the robot's orientation in the global frame. This "steering-by-gazing" control strategy, implemented on a lightweight (100 g) miniature sighted aerial robot, demonstrates the effectiveness of this biomimetic visual/inertial heading control strategy.
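    The two interdependent loops can be sketched as a pair of coupled servos: a fast gaze loop that nulls the retinal error, and a slower heading loop that turns the body to re-centre the eye. The structure and gains below are illustrative assumptions, not the OSCAR II implementation; note the controller only ever uses the retinal error and the body-relative eye angle, never the global heading.

```python
import math

def gaze_step(psi, eye, target, dt, k_eye=6.0, k_body=2.0):
    """One Euler step of a simplified 'steering-by-gazing' controller.

    psi    -- body heading (rad, global frame; used only by the simulation)
    eye    -- eye angle relative to the body (rad)
    target -- target bearing (rad, global frame)
    """
    gaze_error = (target - psi) - eye   # retinal error seen by the eye
    eye_rate = k_eye * gaze_error       # fast gaze loop locks onto the target
    body_rate = k_body * eye            # slow loop: body turns to re-centre the eye
    eye += (eye_rate - body_rate) * dt  # eye angle is body-relative
    psi += body_rate * dt
    return psi, eye

# Simulate 10 s at 200 Hz: the body ends up heading at the target
# while the eye returns to centre, with no global-frame knowledge used.
psi, eye = 0.0, 0.0
target = math.radians(30)
for _ in range(2000):
    psi, eye = gaze_step(psi, eye, target, 0.005)
```

    With both gains positive the linearised dynamics are stable (continuous-time eigenvalues -2 and -6 for these gains), so the heading converges to the target bearing.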

    Expanding Navigation Systems by Integrating It with Advanced Technologies

    Navigation systems provide an optimized route from one location to another. They are mainly assisted by external technologies such as the Global Positioning System (GPS) and other satellite-based radio navigation systems. GPS has many advantages: high accuracy, availability almost anywhere, reliability, and self-calibration. However, GPS is limited to outdoor operation. The practice of combining different data sources to improve the overall outcome is common in many domains; GIS, for example, is already integrated with GPS to provide visualization and spatial context for a given location. The Internet of Things (IoT) is a growing domain in which embedded sensors are connected to the Internet, so IoT can improve existing navigation systems and expand their capabilities. This chapter proposes a framework based on the integration of GPS, GIS, IoT, and mobile communications to provide a comprehensive and accurate navigation solution. In the next section, we outline the limitations of GPS, and then we describe the integration of GIS, smartphones, and GPS to enable its use in mobile applications. In the rest of the chapter, we introduce various navigation implementations using alternative technologies integrated with GPS or operated as standalone systems.

    Adaptive Airborne Separation to Enable UAM Autonomy in Mixed Airspace

    The excitement and promise generated by Urban Air Mobility (UAM) concepts have inspired both new entrants and large aerospace companies around the world to invest hundreds of millions of dollars in research and development of air vehicles, both piloted and unpiloted, to fulfill these dreams. The management and separation of all these new aircraft have received much less attention, however; even though NASA's lead is advancing some promising concepts for Unmanned Aircraft Systems (UAS) Traffic Management (UTM), most operations today are limited to line-of-sight with the vehicle, airspace reservation, and geofencing of individual flights. Various schemes have been proposed to control this new traffic, some modeled after conventional air traffic control and some proposing fully automatic management, either from a ground-based entity or carried out on board among the vehicles themselves. Previous work examined vehicle-based traffic management in the very-low-altitude airspace within a metroplex, called UTM airspace, in which piloted traffic is rare; it proposed a management scheme that takes advantage of the homogeneous nature of the traffic operating in UTM airspace. This paper expands that concept to a traffic management plan usable at all altitudes needed for electric Vertical Takeoff and Landing urban and short-distance, inter-city transportation. The interactions with piloted aircraft operating under both visual and instrument flight rules are analyzed, and the role of Air Traffic Control services in the postulated mixed-traffic environment is covered. Separation values that adapt to each type of traffic encounter are proposed, and the relationship between required airborne surveillance range and closure speed is given. Finally, realistic scenarios illustrate how this concept can reliably handle the density and traffic mix that fully implemented and successful UAM operations would entail.
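    The qualitative link between surveillance range and closure speed can be sketched with a simple time-budget model: the intruder must be detected early enough that, after alerting and maneuvering at the encounter's closure speed, the adaptive separation value is still preserved. The time budgets and numbers below are illustrative placeholders, not values from the paper.

```python
def required_surveillance_range(closure_speed_mps, separation_m,
                                alert_time_s=15.0, maneuver_time_s=10.0):
    """Minimum detection range: distance closed during the alert and
    avoidance-maneuver budgets, plus the separation to be preserved."""
    return closure_speed_mps * (alert_time_s + maneuver_time_s) + separation_m

# A head-on encounter closes far faster than an overtake, so it demands
# a longer airborne surveillance range for the same separation value.
head_on = required_surveillance_range(closure_speed_mps=100.0, separation_m=500.0)
overtake = required_surveillance_range(closure_speed_mps=20.0, separation_m=500.0)
```

    This is the sense in which separation values "adapt to each type of traffic encounter": the same airframe needs different look-ahead depending on who it meets and how fast the geometry closes.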

    Flying Animal Inspired Behavior-Based Gap-Aiming Autonomous Flight with a Small Unmanned Rotorcraft in a Restricted Maneuverability Environment

    This dissertation shows that a small unmanned rotorcraft with onboard processing and a vision sensor can produce autonomous, collision-free flight in a restricted-maneuverability environment, with no a priori knowledge, by using a gap-aiming behavior inspired by flying animals. Current approaches to autonomous flight with small unmanned aerial systems (SUAS) concentrate on detecting and explicitly avoiding obstacles. In contrast, biology indicates that birds, bats, and insects do the opposite: they react to open spaces, or gaps, in the environment with a gap-aiming behavior. Using flying animals as inspiration, a behavior-based robotics approach is taken to implement and test this observed gap-aiming behavior in three dimensions. Because biological studies were unclear whether the flying animals were reacting to the largest gap perceived, the closest gap perceived, or all of the gaps, three approaches to the perceptual schema were explored in simulation: detect_closest_gap, detect_largest_gap, and detect_all_gaps. The results of these simulations informed a proof-of-concept implementation on a 3DRobotics Solo quadrotor in an environment designed to represent the navigational difficulties of a restricted-maneuverability environment. The motor schema is implemented with an artificial potential field to produce the action of aiming at the center of the gap. Through two sets of field trials totaling fifteen flights with a small unmanned quadrotor, the gap-aiming behavior observed in flying animals is shown to produce repeatable, autonomous, collision-free flight in a restricted-maneuverability environment.
    Additionally, the distance from the starting location to perceived gaps, the horizontal and vertical distances traveled, and the distance from the center of the gap during traversal show, respectively, that the gap-selection approach performs as intended, that the motor schema produces the intended three-dimensional movement, and that the motor schema is accurate. This gap-aiming behavior provides the robotics community with the first known implementation of autonomous, collision-free flight on a small unmanned quadrotor without the explicit obstacle detection and avoidance seen in current implementations. Additionally, the testing environment, described by quantitative metrics, provides a benchmark for autonomous SUAS flight testing in confined environments. Finally, the success of this autonomous, collision-free flight implementation on a small unmanned rotorcraft, field-tested in a restricted-maneuverability environment, could have important societal impact in both the public and private sectors.
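    The perceptual-schema/motor-schema split can be sketched in a few lines: one function selects a gap (here the detect_closest_gap variant), and another produces an attractive potential-field velocity toward its centre. Gains, saturation, and the point representation are illustrative assumptions, not the dissertation's parameters.

```python
import math

def detect_closest_gap(gaps, position):
    """Perceptual schema: of the perceived gap centres, pick the closest
    (one of the three variants explored in simulation)."""
    return min(gaps, key=lambda g: math.dist(g, position))

def gap_aiming_velocity(position, gap_center, k_att=1.0, v_max=2.0):
    """Motor schema: attractive artificial potential field pulling the
    vehicle toward the gap centre, saturated at v_max.
    3-D points are (x, y, z) tuples in metres."""
    delta = [g - p for p, g in zip(position, gap_center)]
    dist = math.sqrt(sum(d * d for d in delta))
    if dist < 1e-9:
        return (0.0, 0.0, 0.0)          # already at the gap centre
    speed = min(k_att * dist, v_max)    # saturate the attractive pull
    return tuple(d / dist * speed for d in delta)

gaps = [(5.0, 0.0, 1.0), (2.0, 2.0, 1.0)]
goal = detect_closest_gap(gaps, (0.0, 0.0, 0.0))
vel = gap_aiming_velocity((0.0, 0.0, 0.0), goal)
```

    Because the field attracts toward free space rather than repelling from obstacles, no explicit obstacle model is needed, which is the behavior-based point of the approach.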

    Obstacle Avoidance Using Convolutional Neural Network For Drone Navigation In Oil Palm Plantation

    In Malaysia, oil palm plantations are one of the vital sectors contributing to the country's economy. In recent years, drones have been widely applied in precision agriculture owing to their flexibility and capability. However, one of the challenges in a low-altitude flight mission is avoiding obstacles to prevent drone crashes. Most previous work demonstrated obstacle avoidance systems built on active sensors, which are not applicable to small aerial vehicles because of cost, weight, and power-consumption constraints. In this research, we present a novel system that enables autonomous navigation of a small drone in an oil palm plantation using only a single camera. The system is divided into two main stages: vision-based obstacle detection, in which the obstacles in the input images are detected, and motion control, in which avoidance decisions are taken based on the results of the first stage. As monocular vision does not provide depth information, a machine learning model, Faster R-CNN, was trained and adapted for tree-trunk detection. The heights of the predicted bounding boxes were then used to indicate estimated distances from the drone. The detection model's performance was validated on the test images in terms of average precision. In the system, the drone moves forward until the detection model detects a close frontal obstacle. The avoidance direction is then set by commanding a yaw angle corresponding to the x-coordinate in the image that indicates the optimum path direction with the widest obstacle-free space. We demonstrated the performance of the system through flight tests in a real oil palm plantation environment at two different locations, one of which was previously unseen. The results showed that the proposed method was accurate and robust for vision-based autonomous drone navigation in an oil palm plantation.
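    The two geometric steps of the motion-control stage can be sketched as follows: a pinhole-camera approximation turns bounding-box height into a distance proxy, and a column-occupancy scan turns detections into a yaw command toward the widest free span. The trunk height, focal length, image width, and field of view below are illustrative assumptions, not the paper's calibrated values.

```python
def estimate_distance(bbox_height_px, trunk_height_m=0.5, focal_px=600.0):
    """Pinhole approximation: distance ~ f * H / h, so a taller
    bounding box means a closer trunk."""
    return focal_px * trunk_height_m / bbox_height_px

def widest_gap_yaw(bboxes, image_width=640, hfov_deg=90.0):
    """Mark image columns covered by detected trunk boxes, find the
    widest obstacle-free run of columns, and convert its centre
    x-coordinate into a yaw command (degrees, 0 = straight ahead)."""
    free = [True] * image_width
    for x0, x1 in bboxes:               # (left, right) pixel edges
        for x in range(max(0, int(x0)), min(image_width, int(x1))):
            free[x] = False
    best_len = best_start = run = start = 0
    for x in range(image_width + 1):    # extra pass closes the final run
        if x < image_width and free[x]:
            if run == 0:
                start = x
            run += 1
        else:
            if run > best_len:
                best_len, best_start = run, start
            run = 0
    centre = best_start + best_len / 2.0
    return (centre / image_width - 0.5) * hfov_deg
```

    For example, with trunks filling the left and right of the frame, the commanded yaw points at the centre of the remaining free span.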

    Drone Obstacle Avoidance and Navigation Using Artificial Intelligence

    This thesis presents the implementation and integration of a robust obstacle avoidance and navigation module with ArduPilot. It explores the problems with the current obstacle avoidance solution and tries to mitigate them with a new design. Given recent innovations in artificial intelligence, it also explores opportunities to enable and improve obstacle avoidance and navigation functionality using AI techniques. Understanding the different types of sensors used for navigation and obstacle avoidance is required to implement the design, and a study of them is presented as background. Research on autonomous cars is reviewed to better understand autonomy and how such vehicles solve obstacle avoidance and navigation. The implementation part of the thesis focuses on the design of a robust obstacle avoidance module, tested with obstacle avoidance sensors such as a Garmin lidar and a RealSense R200. Image segmentation is used to verify the possibility of using a convolutional neural network to better understand the nature of obstacles. Similarly, end-to-end control from a single camera input using a deep neural network is used to verify the possibility of using AI for navigation. In the end, a robust obstacle avoidance library is developed and tested both in the simulator and on a real drone; image segmentation is implemented, deployed, and tested; and end-to-end control is verified with a proof of concept.

    Fire detection of Unmanned Aerial Vehicle in a Mixed Reality-based System

    This paper proposes the use of a low-cost micro-electro-mechanical system comprising an inertial measurement unit (IMU), a consumer-grade digital camera, and a fire detection algorithm on a nano unmanned aerial vehicle for inspection applications. The video stream (monocular camera) and navigation data (IMU) feed a state-of-the-art indoor/outdoor navigation system. The system combines the Robot Operating System and computer vision techniques to make the metric scale of monocular vision and gravity observable, providing robust, accurate, and novel inter-frame motion estimates. The collected onboard data are communicated to the ground station and processed with a Simultaneous Localisation and Mapping (SLAM) system. Robust and efficient SLAM re-localisation is performed to recover from tracking failure, motion blur, and frame loss in the received data. The fire detection algorithm is based on colour, movement attributes, the temporal variation of the fire's intensity, and its accumulation around a point. A cumulative time-derivative matrix is used to detect areas exhibiting the high-frequency luminance flicker (a random characteristic) of fire by analysing frame-by-frame changes. Colour, surface coarseness, boundary roughness, and skewness features are also considered while the quadrotor flies autonomously through cluttered and congested areas. A Mixed Reality system is adopted to visualise and test the proposed system in a combined physical/virtual environment. The results show that the UAV can successfully detect fire and flame, fly towards and hover around it, communicate with the ground station, and run the SLAM system.
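    The cumulative time-derivative idea can be sketched per pixel: accumulate absolute frame-to-frame luminance changes with an exponential decay, so that flickering flame pixels keep a high score while steadily bright or static pixels decay to zero. This is a sketch of the idea with illustrative parameters, not the paper's exact formulation.

```python
def flicker_accumulator(frames, decay=0.9):
    """Cumulative time-derivative matrix over a luminance sequence.

    frames -- list of 2-D lists (rows x cols) of luminance values.
    Returns a matrix of accumulated, decayed temporal derivatives;
    high values mark high-frequency flicker, a cue for flames.
    """
    rows, cols = len(frames[0]), len(frames[0][0])
    acc = [[0.0] * cols for _ in range(rows)]
    prev = frames[0]
    for frame in frames[1:]:
        for i in range(rows):
            for j in range(cols):
                acc[i][j] = decay * acc[i][j] + abs(frame[i][j] - prev[i][j])
        prev = frame
    return acc

# Pixel 0 flickers between 0 and 255 (flame-like); pixel 1 is bright
# but static, as a lamp or sunlit surface would be.
frames = [[[k % 2 * 255.0, 255.0]] for k in range(10)]
acc = flicker_accumulator(frames)
```

    Thresholding this matrix, combined with the colour and shape features listed above, separates flame regions from merely bright ones.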

    Managing power amongst a group of networked embedded FPGAs using dynamic reconfiguration and task migration

    Small unpiloted aircraft (UAVs) each have a limited power budget. If a group (swarm) of small UAVs is organized to perform a common task, such as geo-location, then the total power can be shared across the group by introducing task mobility within the group, supported by an ad hoc wireless network (where the communication encoding/decoding is also done on FPGAs). In this presentation I describe research into the construction of a distributed operating system in which partial dynamic reconfiguration and network mobility are combined so that FPGA tasks can be moved to make the best use of the total power available in a swarm of UAVs.
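    The placement decision behind such task migration can be sketched as a greedy balancing heuristic: assign each task, largest energy cost first, to the node with the most remaining energy, so the swarm spends its total power budget rather than exhausting one airframe. Task names, costs, and the policy itself are illustrative assumptions, not the system described in the presentation.

```python
def migrate_tasks(tasks, battery_wh):
    """Greedy power balancing across a UAV swarm.

    tasks      -- dict: task name -> energy cost (Wh)
    battery_wh -- dict: node name -> remaining energy budget (Wh)
    Returns (placement, remaining): where each task runs, and each
    node's budget after hosting its tasks.
    """
    remaining = dict(battery_wh)
    placement = {}
    for task, cost in sorted(tasks.items(), key=lambda kv: -kv[1]):
        node = max(remaining, key=remaining.get)   # richest node hosts it
        placement[task] = node
        remaining[node] -= cost
    return placement, remaining

tasks = {"geo_location": 5.0, "net_decode": 3.0, "net_encode": 2.0}
placement, remaining = migrate_tasks(tasks, {"uav_a": 12.0, "uav_b": 8.0})
```

    In the real system the "move" is a partial dynamic reconfiguration of the destination FPGA followed by state transfer over the ad hoc network; the heuristic above only decides where each task should land.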