1,074 research outputs found

    Aerodrome situational awareness of unmanned aircraft: an integrated self-learning approach with Bayesian network semantic segmentation

    It is expected that a significant number of unmanned aerial vehicles (UAVs) will soon operate side-by-side with manned civil aircraft in national airspace systems. To integrate UAVs safely with civil traffic, a number of challenges must first be overcome. This paper investigates situational awareness for UAVs' autonomous taxiing in an aerodrome environment. The work is based on real outdoor experimental data collected at Walney Island Airport, United Kingdom, and aims to further develop and test autonomous taxiing in a challenging outdoor environment. To address practical issues arising in an outdoor aerodrome, such as camera vibration, taxiway feature extraction, and unknown obstacles, we develop an integrated approach that combines Bayesian-network-based semantic segmentation with a self-learning method to enhance the situational awareness of UAVs. Detailed analysis of the outdoor experimental data shows that the integrated method developed in this paper improves the robustness of situational awareness for autonomous taxiing.
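The core idea of fusing noisy per-frame segmentation with a probabilistic prior can be illustrated with a minimal sketch. This is not the paper's Bayesian network; it is a simplified per-pixel Bayesian update in which each frame's class probabilities are fused with the running posterior, which tends to dampen transient errors such as those caused by camera vibration. The array shapes and the two-class setup are illustrative assumptions.

```python
import numpy as np

def bayesian_update(prior, likelihood):
    """Fuse per-pixel class probabilities across frames.

    prior, likelihood: arrays of shape (H, W, C) holding class
    probabilities per pixel. Returns the normalized posterior,
    which serves as the prior for the next frame.
    """
    posterior = prior * likelihood
    return posterior / posterior.sum(axis=-1, keepdims=True)

# Toy example: two classes (taxiway vs. other) on a 1x2 "image".
prior = np.array([[[0.5, 0.5], [0.5, 0.5]]])   # uniform prior
frame = np.array([[[0.9, 0.1], [0.3, 0.7]]])   # noisy per-frame segmentation
posterior = bayesian_update(prior, frame)
```

With a uniform prior the posterior simply equals the frame likelihood; over successive frames, repeated evidence for "taxiway" accumulates while one-off glitches are averaged out.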

    A Neural Theory of Attentive Visual Search: Interactions of Boundary, Surface, Spatial, and Object Representations

    Visual search data are given a unified quantitative explanation by a model of how spatial maps in the parietal cortex and object recognition categories in the inferotemporal cortex deploy attentional resources as they reciprocally interact with visual representations in the prestriate cortex. The model visual representations are organized into multiple boundary and surface representations. Visual search in the model is initiated by organizing multiple items that lie within a given boundary or surface representation into a candidate search grouping. These items are compared with object recognition categories to test for matches or mismatches. Mismatches can trigger deeper searches and recursive selection of new groupings until a target object is identified. This search model is algorithmically specified to quantitatively simulate search data using a single set of parameters, as well as to qualitatively explain a still larger database, including data of Aks and Enns (1992), Bravo and Blake (1990), Chelazzi, Miller, Duncan, and Desimone (1993), Egeth, Virzi, and Garbart (1984), Cohen and Ivry (1991), Enns and Rensink (1990), He and Nakayama (1992), Humphreys, Quinlan, and Riddoch (1989), Mordkoff, Yantis, and Egeth (1990), Nakayama and Silverman (1986), Treisman and Gelade (1980), Treisman and Sato (1990), Wolfe, Cave, and Franzel (1989), and Wolfe and Friedman-Hill (1992).
The model hereby provides an alternative to recent variations on the Feature Integration and Guided Search models, and grounds the analysis of visual search in neural models of preattentive vision, attentive object learning and categorization, and attentive spatial localization and orientation.
    Funding: Air Force Office of Scientific Research (F49620-92-J-0499, 90-0175, F49620-92-J-0334); Advanced Research Projects Agency (AFOSR 90-0083, ONR N00014-92-J-4015); Office of Naval Research (N00014-91-J-4100); Northeast Consortium for Engineering Education (NCEE/A303/21-93 Task 0021); British Petroleum (89-A-1204); National Science Foundation (NSF IRI-90-00530)
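The grouping-then-matching loop the abstract describes can be caricatured in a few lines. This toy sketch is not the neural model: it stands in a candidate grouping with a partition by one feature (colour), then scans within the candidate group for a full match, returning `None` on a mismatch everywhere. All item fields and feature names here are hypothetical.

```python
from collections import defaultdict

def attentive_search(items, target):
    """Toy rendition of search via candidate groupings: group items
    by one shared feature, select the group matching the target's
    feature, then search within that grouping for a full match."""
    # Stand-in for a boundary/surface grouping: partition by colour.
    groups = defaultdict(list)
    for item in items:
        groups[item["colour"]].append(item)
    # Candidate search grouping: items sharing the target's colour.
    candidates = groups.get(target["colour"], [])
    # Deeper search within the candidate grouping.
    for item in candidates:
        if item == target:
            return item
    return None  # mismatch everywhere: no target present

items = [{"colour": "red", "shape": "circle"},
         {"colour": "red", "shape": "square"},
         {"colour": "green", "shape": "circle"}]
target = {"colour": "red", "shape": "square"}
```

In the actual model the groupings are formed over boundary and surface representations and the match test involves learned recognition categories; the sketch only conveys the control flow of grouping, matching, and descending into a grouping on a partial match.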

    Project RISE: Recognizing Industrial Smoke Emissions

    Industrial smoke emissions pose a significant concern to human health. Prior works have shown that using Computer Vision (CV) techniques to identify smoke as visual evidence can influence the attitude of regulators and empower citizens to pursue environmental justice. However, existing datasets are not of sufficient quality or quantity to train the robust CV models needed to support air quality advocacy. We introduce RISE, the first large-scale video dataset for Recognizing Industrial Smoke Emissions. We adopted a citizen science approach, collaborating with local community members to annotate whether a video clip contains smoke emissions. Our dataset contains 12,567 clips from 19 distinct views from cameras that monitored three industrial facilities. These daytime clips span 30 days over two years, covering all four seasons. We ran experiments using deep neural networks to establish a strong performance baseline and reveal smoke recognition challenges. Our survey study discussed community feedback, and our data analysis revealed opportunities for integrating citizen scientists and crowd workers into the application of Artificial Intelligence for social good.
    Comment: Technical report
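Since the annotation unit is the whole clip, any frame-level recognizer must be aggregated to a clip-level decision. A minimal sketch of one common aggregation choice, mean pooling of per-frame scores, is shown below; this is a generic baseline heuristic, not the paper's deep video models, and the threshold value is an assumption.

```python
import numpy as np

def clip_has_smoke(frame_scores, threshold=0.5):
    """Aggregate per-frame smoke probabilities into a single
    clip-level prediction by mean pooling. Transient high scores
    in a few frames can still tip the clip past the threshold."""
    return float(np.mean(frame_scores)) >= threshold

# A clip where smoke appears only in the later frames:
smoky_clip = [0.1, 0.2, 0.8, 0.9, 0.9]
clean_clip = [0.1, 0.1, 0.2, 0.1, 0.1]
```

Max pooling is the usual alternative when smoke is expected to be brief relative to the clip length; mean pooling is more robust to single-frame false positives, which matters for outdoor cameras with steam, clouds, and lighting changes in view.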