
    Urban Air Mobility System Testbed Using CAVE Virtual Reality Environment

    Urban Air Mobility (UAM) refers to a system of air passenger and small-cargo transportation within an urban area. The UAM framework also includes other urban Unmanned Aerial Systems (UAS) services that will be supported by a mix of onboard, ground, piloted, and autonomous operations. Over the past few years, UAM research has gained wide interest from companies and federal agencies as an innovative on-demand transportation option that can help reduce traffic congestion and pollution as well as increase mobility in metropolitan areas. UAM/UAS operation in the National Airspace System (NAS) remains an active area of research to ensure safe and efficient operations. With new developments in smart vehicle design and infrastructure for air traffic management, there is a need for methods to integrate and test various components of the UAM framework. In this work, we report on the development of a virtual reality (VR) testbed using Cave Automatic Virtual Environment (CAVE) technology for human-automation teaming and airspace operation research for UAM. Using a four-wall projection system with motion capture, the CAVE provides an immersive virtual environment with real-time full-body tracking capability. We created a virtual environment consisting of the city of San Francisco and a vertical take-off-and-landing passenger aircraft that can fly between a downtown location and San Francisco International Airport. The aircraft can be operated autonomously or manually by a single pilot who maneuvers the aircraft using a flight control joystick. The interior of the aircraft includes a virtual cockpit display with vehicle heading, location, and speed information. The system can record simulation events and flight data for post-processing. The system parameters are customizable for different flight scenarios; hence, the CAVE VR testbed provides a flexible method for the development and evaluation of the UAM framework.
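    Since the abstract notes that the testbed records simulation events and flight data for post-processing, a minimal logging sketch is shown below. The record fields, file names, and CSV layout are assumptions made for illustration, not the testbed's actual data schema.

```python
import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class FlightSample:
    """One time-stamped flight-state record (field names are illustrative assumptions)."""
    t: float            # simulation time, s
    lat: float          # aircraft latitude, deg
    lon: float          # aircraft longitude, deg
    alt_m: float        # altitude above ground, m
    heading_deg: float  # heading, deg
    speed_mps: float    # ground speed, m/s
    mode: str           # "autonomous" or "manual"

class FlightLogger:
    """Writes flight samples and discrete simulation events to CSV files for post-processing."""
    def __init__(self, flight_path="flight_log.csv", event_path="events.csv"):
        self._flight_file = open(flight_path, "w", newline="")
        self._event_file = open(event_path, "w", newline="")
        self._flight_writer = csv.DictWriter(
            self._flight_file, fieldnames=[f.name for f in fields(FlightSample)])
        self._flight_writer.writeheader()
        self._event_writer = csv.writer(self._event_file)
        self._event_writer.writerow(["t", "event"])

    def log_sample(self, sample: FlightSample):
        self._flight_writer.writerow(asdict(sample))

    def log_event(self, t: float, event: str):
        self._event_writer.writerow([t, event])

    def close(self):
        self._flight_file.close()
        self._event_file.close()

# Example: one sample of the vehicle en route to the airport under autonomous control.
logger = FlightLogger()
logger.log_sample(FlightSample(t=12.5, lat=37.62, lon=-122.38, alt_m=450.0,
                               heading_deg=135.0, speed_mps=67.0, mode="autonomous"))
logger.log_event(12.5, "waypoint_reached")
logger.close()
```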

    Vision-based Real-Time Aerial Object Localization and Tracking for UAV Sensing System

    The paper focuses on the problem of vision-based obstacle detection and tracking for unmanned aerial vehicle navigation. A real-time object localization and tracking strategy from monocular image sequences is developed by effectively integrating object detection and tracking into a dynamic Kalman model. At the detection stage, the object of interest is automatically detected and localized from a saliency map computed via the image background connectivity cue at each frame; at the tracking stage, a Kalman filter is employed to provide a coarse prediction of the object state, which is further refined via a local detector incorporating the saliency map and the temporal information between two consecutive frames. Compared to existing methods, the proposed approach does not require any manual initialization for tracking, runs much faster than state-of-the-art trackers of its kind, and achieves competitive tracking performance on a large number of image sequences. Extensive experiments demonstrate the effectiveness and superior performance of the proposed approach.
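    The predict-then-refine loop described above can be illustrated with a minimal constant-velocity Kalman filter: the filter predicts the object centre each frame, and the prediction is corrected by the measurement returned by a saliency-based local detector. The state layout, noise values, and the `detect_salient_object` callback are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Constant-velocity Kalman filter over the object centre (x, y) and velocity (vx, vy).
# State: [x, y, vx, vy]; measurement: [x, y] from the saliency-based local detector.
dt = 1.0                                      # one frame per step
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)    # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)     # measurement model
Q = np.eye(4) * 1e-2                          # process noise (assumed)
R = np.eye(2) * 1.0                           # measurement noise (assumed)

def kalman_predict(x, P):
    """Coarse prediction of the object state for the next frame."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def kalman_update(x_pred, P_pred, z):
    """Refine the predicted state with the local detector's measurement z."""
    y = z - H @ x_pred                        # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

def track(frames, detect_salient_object):
    """detect_salient_object(frame, prior_xy) -> (x, y) stands in for the saliency-map
    local detector; tracking is initialised automatically from the first detection."""
    x0, y0 = detect_salient_object(frames[0], prior_xy=None)
    x = np.array([x0, y0, 0.0, 0.0])
    P = np.eye(4)
    trajectory = [(x0, y0)]
    for frame in frames[1:]:
        x, P = kalman_predict(x, P)
        z = np.array(detect_salient_object(frame, prior_xy=x[:2]))
        x, P = kalman_update(x, P, z)
        trajectory.append(tuple(x[:2]))
    return trajectory
```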

    SIGS: Synthetic Imagery Generating Software for the development and evaluation of vision-based sense-and-avoid systems

    Unmanned Aerial Systems (UASs) have recently become a versatile platform for many civilian applications, including inspection, surveillance and mapping. Sense-and-Avoid systems are essential for the autonomous safe operation of these systems in non-segregated airspace. Vision-based Sense-and-Avoid systems are preferred to other alternatives because their price, physical dimensions and weight are more suitable for small and medium-sized UASs, but obtaining real flight imagery of potential collision scenarios is hard and dangerous, which complicates the development of vision-based detection and tracking algorithms. For this purpose, user-friendly software for synthetic imagery generation has been developed, allowing users to blend user-defined flight imagery of a simulated aircraft with real flight scenario images to produce realistic images with ground-truth annotations. These are extremely useful for the development and benchmarking of vision-based detection and tracking algorithms at a much lower cost and risk. An image processing algorithm has also been developed for automatic detection of the occlusions caused by certain parts of the UAV which carries the camera. The detected occlusions can later be used by our software to simulate the occlusions due to the UAV that would appear in a real flight with the same camera setup. Additionally, this algorithm could be used to mask out pixels which do not contain relevant information about the scene for visual detection, making the image search process more efficient. Finally, an application example of the imagery obtained with our software for the benchmarking of a state-of-the-art visual tracker is presented.
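    As a rough illustration of the blending step, the sketch below composites a rendered aircraft (with an alpha channel) onto a real background frame, re-applies a binary self-occlusion mask of the camera-carrying UAV so that masked pixels keep the original background, and derives a ground-truth bounding box for the visible part of the synthetic aircraft. The array layout and the simple alpha compositing are assumptions for illustration; they are not necessarily the pipeline used by the software described here.

```python
import numpy as np

def composite_frame(background, aircraft_rgba, occlusion_mask):
    """Blend a rendered aircraft onto a real background image, then restore the
    pixels occluded by the camera-carrying UAV's own structure.

    background:      HxWx3 uint8 real flight image
    aircraft_rgba:   HxWx4 uint8 rendered aircraft with alpha channel
    occlusion_mask:  HxW bool, True where the UAV's own structure occludes the view
    """
    bg = background.astype(np.float32)
    fg = aircraft_rgba[..., :3].astype(np.float32)
    alpha = aircraft_rgba[..., 3:4].astype(np.float32) / 255.0

    blended = alpha * fg + (1.0 - alpha) * bg        # standard alpha compositing
    blended[occlusion_mask] = bg[occlusion_mask]     # occluded pixels stay as in the real frame

    # Ground-truth bounding box of the visible part of the synthetic aircraft.
    visible = (alpha[..., 0] > 0) & ~occlusion_mask
    ys, xs = np.nonzero(visible)
    bbox = (xs.min(), ys.min(), xs.max(), ys.max()) if xs.size else None
    return blended.astype(np.uint8), bbox
```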

    MARIT : the design, implementation and trajectory generation with NTG for small UAVs.

    This dissertation describes the design and implementation of a Multiple Air Robotics Indoor Testbed (MARIT) for developing and validating new methodologies for collaboration and cooperation between heterogeneous Unmanned Air Vehicles (UAVs), with the potential to be extended to air-and-ground vehicle teams. It introduces a mathematical model for simulation and control of quadrotor Small UAVs (SUAVs). The model is subsequently applied to design an autonomous quadrotor control and tracking system. The quadrotor SUAV dynamics model is used in several control designs, each of which is simulated and compared; based on the comparison, the best-performing control design is used for experimental flights. Two methods are used to evaluate the control and collect real-time data. The Nonlinear Trajectory Generation (NTG) software package is used to provide optimal trajectories for the SUAVs in MARIT. The quadrotor dynamics model is programmed in NTG, and various obstacle avoidance scenarios are modeled to establish a platform for optimal trajectory generation for SUAVs. To challenge the capability of NTG for real-time trajectory generation, random obstacles and disturbances are simulated. Various flight simulations validate this trajectory tracking approach.
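    A minimal sketch of the kind of model and controller involved is shown below: a one-axis (altitude-only) quadrotor simulation with a PD controller and gravity feed-forward. The mass, gains, and time step are assumed values chosen for illustration; this is not the dissertation's full dynamics model or its NTG formulation.

```python
import numpy as np

# Simplified vertical-axis quadrotor model: mass m, total thrust u, gravity g.
# Illustrative stand-in for a full six-DOF model, not the MARIT model.
m, g, dt = 0.5, 9.81, 0.01        # kg, m/s^2, s (assumed values)

def altitude_dynamics(state, thrust):
    """state = [z, z_dot]; returns the state one time step later (Euler integration)."""
    z, z_dot = state
    z_ddot = thrust / m - g
    return np.array([z + z_dot * dt, z_dot + z_ddot * dt])

def pd_thrust(state, z_ref, kp=8.0, kd=4.0):
    """PD altitude controller with gravity feed-forward (gains are assumed)."""
    z, z_dot = state
    return m * (g + kp * (z_ref - z) - kd * z_dot)

# Simulate a 1 m step in commanded altitude and check the settling behaviour.
state = np.array([0.0, 0.0])
for _ in range(1000):
    state = altitude_dynamics(state, pd_thrust(state, z_ref=1.0))
print(f"altitude after 10 s: {state[0]:.3f} m")
```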

    Requirement analysis and sensor specifications – First version

    In this first version of the deliverable, we make the following contributions: to design the WEKIT capturing platform and the associated experience capturing API, we use a system engineering methodology that is relevant for different domains, such as aviation, space, and medicine, and different professions, such as technicians, astronauts, and medical staff. Furthermore, within the methodology, we explore the system engineering process and how it can be used in the project to support the different work packages and, more importantly, the deliverables that will follow this one. Next, we provide a mapping of high-level functions or tasks (associated with experience transfer from expert to trainee) to low-level functions such as gaze, voice, video, body posture, hand gestures, bio-signals, fatigue levels, and the location of the user in the environment. In addition, we link the low-level functions to their associated sensors. Moreover, we provide a brief overview of state-of-the-art sensors in terms of their technical specifications, possible limitations, standards, and platforms. We outline a set of recommendations pertaining to the sensors that are most relevant for the WEKIT project, taking into consideration the environmental, technical and human factors described in other deliverables. We recommend the Microsoft HoloLens (augmented reality glasses), the MyndBand with NeuroSky chipset (EEG), the Microsoft Kinect and Lumo Lift (body posture tracking), and the Leap Motion, Intel RealSense and Myo armband (hand gesture tracking). For eye tracking, an existing eye-tracking system can be customised to complement the augmented reality glasses, and the built-in microphone of the augmented reality glasses can capture the expert's voice. We propose a modular approach for the design of the WEKIT experience capturing system, and recommend that the capturing system should have sufficient storage or transmission capabilities. Finally, we highlight common issues associated with the use of different sensors. We consider that this set of recommendations can be useful for the design and integration of the WEKIT capturing platform and the WEKIT experience capturing API, and that it can expedite the selection of the combination of sensors which will be used in the first prototype.
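    The mapping from capture functions to the recommended sensors can be summarised in a simple lookup structure. The sketch below merely restates the recommendations above as data; the key names and the helper function are illustrative and are not part of the WEKIT capturing API.

```python
# Low-level capture functions mapped to the sensors recommended in this deliverable.
# The dictionary layout and key names are illustrative assumptions.
RECOMMENDED_SENSORS = {
    "augmented_reality_display": ["Microsoft HoloLens"],
    "eeg":                       ["MyndBand (NeuroSky chipset)"],
    "body_posture":              ["Microsoft Kinect", "Lumo Lift"],
    "hand_gestures":             ["Leap Motion", "Intel RealSense", "Myo armband"],
    "eye_tracking":              ["customised eye tracker complementing the AR glasses"],
    "voice":                     ["built-in microphone of the AR glasses"],
}

def sensors_for(function_name: str) -> list[str]:
    """Return the recommended sensors for a low-level capture function."""
    return RECOMMENDED_SENSORS.get(function_name, [])

print(sensors_for("hand_gestures"))
```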

    Vision based strategies for implementing Sense and Avoid capabilities onboard Unmanned Aerial Systems

    Current research activities aim to develop fully autonomous unmanned platforms equipped with Sense and Avoid technologies, in order to gain access to the National Airspace System (NAS) and fly alongside manned aircraft. The TECVOL project is set in this framework, aiming to develop an autonomous prototype Unmanned Aerial Vehicle that performs Detect, Sense and Avoid functions by means of an integrated sensor package composed of a pulsed radar and four electro-optical cameras, two visible and two infrared. The project is carried out by the Italian Aerospace Research Center in collaboration with the Department of Aerospace Engineering of the University of Naples “Federico II”, which has been involved in the development of the Obstacle Detection and IDentification system. This thesis therefore concerns the image processing techniques customized for Sense and Avoid applications in the TECVOL project, where the electro-optical (EO) system plays an auxiliary role to the radar, which is the main sensor. In particular, the panchromatic camera aids object detection in order to increase the accuracy and data-rate performance of the radar system. The thesis describes the steps implemented to evaluate the most suitable panchromatic-camera image processing technique for these applications, the test strategies adopted to study its performance, and the analysis conducted to optimize it in terms of false alarms, missed detections, and detection range. Finally, results from the tests are presented; they demonstrate that the electro-optical sensor is beneficial to the overall Detect Sense and Avoid system, improving its object detection and tracking performance.
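    The evaluation criteria mentioned above (false alarms, missed detections, detection range) can be computed against ground truth with a simple per-frame comparison. The sketch below assumes a pixel-distance gate for deciding whether a detection matches the true object position; the gate value, the data layout, and the convention of counting an off-target detection as both a false alarm and a miss are assumptions for illustration only.

```python
import numpy as np

def evaluate_detections(detections, ground_truth, gate_px=20.0):
    """Compare per-frame detections with ground-truth object positions.

    detections:   list of (x, y) pixel positions or None, one entry per frame
    ground_truth: list of (x, y, range_m) tuples, one entry per frame
    gate_px:      maximum pixel distance for a detection to count as correct (assumed)
    Returns counts of correct detections, missed detections, false alarms, and the
    largest range at which the object was correctly detected.
    """
    correct = missed = false_alarms = 0
    max_detection_range = 0.0
    for det, (gx, gy, rng) in zip(detections, ground_truth):
        if det is None:
            missed += 1
            continue
        if np.hypot(det[0] - gx, det[1] - gy) <= gate_px:
            correct += 1
            max_detection_range = max(max_detection_range, rng)
        else:
            # Off-target detection: counted as a false alarm and a missed detection.
            false_alarms += 1
            missed += 1
    return {"correct": correct,
            "missed": missed,
            "false_alarms": false_alarms,
            "max_detection_range_m": max_detection_range}
```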