
    EXPEDITIONARY LOGISTICS: A LOW-COST, DEPLOYABLE, UNMANNED AERIAL SYSTEM FOR AIRFIELD DAMAGE ASSESSMENT

    Airfield Damage Repair (ADR) is among the most important expeditionary activities for our military. The goal of ADR is to restore a damaged airfield to operational status as quickly as possible. Before ADR can begin, however, the damage to the airfield must be assessed, and Airfield Damage Assessment (ADA) has therefore received considerable attention. A damaged airfield is often expected to contain unexploded ordnance, which makes ADA a slow, difficult, and dangerous process; for this reason, it is best to make ADA completely unmanned and automated. ADA also needs to be executed as quickly as possible so that ADR can begin and the airfield can be restored to a usable condition. Among other modalities, tower-based monitoring and remote sensor systems are often used for ADA. There is now an opportunity to investigate commercial-off-the-shelf, low-cost, automated sensor systems for automatic damage detection. By developing a combination of ground-based and Unmanned Aerial Vehicle sensor systems, we demonstrate that ADA can be completed in a safe, efficient, and cost-effective manner.
    http://archive.org/details/expeditionarylog1094561346
    Outstanding Thesis. Lieutenant, United States Navy. Approved for public release; distribution is unlimited.

    RS5M: A Large Scale Vision-Language Dataset for Remote Sensing Vision-Language Foundation Model

    Pre-trained Vision-Language Foundation Models utilizing extensive image-text paired data have demonstrated unprecedented image-text association capabilities, achieving remarkable results across various downstream tasks. A critical challenge is how to make use of existing large-scale pre-trained VLMs, which are trained on common objects, to perform domain-specific transfer for domain-related downstream tasks. In this paper, we propose a new framework that includes the Domain Foundation Model (DFM), bridging the gap between the General Foundation Model (GFM) and domain-specific downstream tasks. Moreover, we present an image-text paired dataset in the field of remote sensing (RS), RS5M, which contains 5 million RS images with English descriptions. The dataset is obtained by filtering publicly available image-text paired datasets and by captioning label-only RS datasets with a pre-trained VLM. The result is the first large-scale RS image-text paired dataset. Additionally, we tried several Parameter-Efficient Fine-Tuning methods on RS5M to implement the DFM. Experimental results show that our proposed dataset is highly effective for various tasks, improving upon the baseline by 8%∼16% in zero-shot classification tasks and obtaining good results in both Vision-Language Retrieval and Semantic Localization tasks. \url{https://github.com/om-ai-lab/RS5M}
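
    As a rough illustration of the zero-shot classification setting such a dataset targets, the minimal sketch below scores one remote-sensing image against text prompts with a generic CLIP checkpoint; the model name, class list, prompt template, and the optional LoRA wrapping are illustrative assumptions, not the RS5M/DFM setup itself.

        # Zero-shot scene classification with a generic CLIP model (placeholder setup,
        # not the paper's Domain Foundation Model).
        from PIL import Image
        import torch
        from transformers import CLIPModel, CLIPProcessor

        model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
        processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

        classes = ["airport", "harbor", "farmland", "forest", "residential area"]
        prompts = [f"a satellite photo of a {c}" for c in classes]
        image = Image.open("scene.jpg")  # any RS image tile (hypothetical file)

        inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
        with torch.no_grad():
            logits = model(**inputs).logits_per_image      # shape: (1, num_classes)
        probs = logits.softmax(dim=-1).squeeze(0)
        print({c: round(p.item(), 3) for c, p in zip(classes, probs)})

        # A parameter-efficient adapter (e.g., LoRA via the peft library) could then be
        # attached to the attention projections before fine-tuning on domain data:
        # from peft import LoraConfig, get_peft_model
        # model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16,
        #                                          target_modules=["q_proj", "v_proj"]))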

    LARD -- Landing Approach Runway Detection -- Dataset for Vision Based Landing

    As the interest in autonomous systems continues to grow, one of the major challenges is collecting sufficient and representative real-world data. Despite the strong practical and commercial interest in autonomous landing systems in the aerospace field, there is a lack of open-source datasets of aerial images. To address this issue, we present LARD, a dataset of high-quality aerial images for the task of runway detection during approach and landing phases. Most of the dataset is composed of synthetic images, but we also provide manually labelled images from real landing footage to extend the detection task to a more realistic setting. In addition, we offer the generator, which can produce such synthetic front-view images and enables automatic annotation of the runway corners through geometric transformations. This dataset paves the way for further research, such as analysis of dataset quality or the development of models to cope with the detection task. Find data, code and more up-to-date information at https://github.com/deel-ai/LAR
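
    The corner auto-annotation described above boils down to projecting known runway-corner coordinates through the camera model of the synthetic view. The sketch below shows that idea with a plain pinhole projection; the intrinsics, camera pose, and runway dimensions are made-up placeholders, not values from the LARD generator.

        # Project runway corners (world coordinates) into the image of a front-facing camera.
        # All numbers below are illustrative placeholders.
        import numpy as np

        def project(points_w, R, t, K):
            """Project world points (N, 3) to pixel coords (N, 2): x ~ K (R X + t)."""
            pts_cam = R @ points_w.T + t.reshape(3, 1)   # world -> camera frame
            pix = K @ pts_cam                            # camera -> image plane
            return (pix[:2] / pix[2]).T                  # perspective divide

        # Four runway corners in a local frame (metres): X along the runway, Y right, Z up.
        corners_w = np.array([[0.0, -22.5, 0.0], [0.0, 22.5, 0.0],
                              [3000.0, 22.5, 0.0], [3000.0, -22.5, 0.0]])

        # Placeholder intrinsics for a 1920x1080 front camera.
        K = np.array([[1000.0, 0.0, 960.0],
                      [0.0, 1000.0, 540.0],
                      [0.0, 0.0, 1.0]])

        # Aircraft 2 km before the threshold at 150 m altitude, boresight along the runway axis.
        C = np.array([-2000.0, 0.0, 150.0])      # camera centre in world coordinates
        R = np.array([[0.0, 1.0, 0.0],           # camera x = world Y (right)
                      [0.0, 0.0, -1.0],          # camera y = -world Z (down)
                      [1.0, 0.0, 0.0]])          # camera z = world X (forward)
        t = -R @ C

        print(project(corners_w, R, t, K))       # pixel positions of the four corners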

    A DEEP LEARNING APPROACH FOR AIRPORT RUNWAY IDENTIFICATION FROM SATELLITE IMAGERY

    The United States lacks a comprehensive national database of private Prior Permission Required (PPR) airports. The primary reason such a database does not exist is that there are no federal regulatory obligations for these facilities to have their information re-evaluated or updated by the Federal Aviation Administration (FAA) or the local state Department of Transportation (DOT) once the data has been entered into the system. The often outdated and incorrect information about landing sites presents a serious risk factor in aviation safety. In this thesis, we present a machine learning approach for detecting airport landing sites from Google Earth satellite imagery. The approach plays a crucial role in confirming the FAA's current database and improving aviation safety in the United States. Specifically, we designed, implemented, and evaluated object detection and segmentation techniques for identifying and segmenting the regions of interest in image data. The thoroughly annotated in-house dataset includes 400 satellite images with a total of 700 runway instances. The images, acquired via the Google Maps Static API, are 3000x3000 pixels in size. We trained Mask R-CNN models with two distinct backbones, ResNet-101 and ResNeXt-101, and obtained the highest average precision score (AP@0.75) with ResNet-101, at 92%, with a recall of 89%. Finally, we hosted the model on the Streamlit front-end platform, allowing users to enter any location to check and confirm the presence of a runway.
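
    For reference, the snippet below shows how a two-class (background vs. runway) Mask R-CNN can be instantiated and run with torchvision. It uses the library's built-in ResNet-50-FPN variant as a stand-in (the thesis reports ResNet-101 and ResNeXt-101 backbones), and the input tensor and confidence threshold are placeholders rather than details from the thesis.

        # Two-class Mask R-CNN sketch with torchvision (ResNet-50-FPN stand-in backbone).
        import torch
        from torchvision.models.detection import maskrcnn_resnet50_fpn

        model = maskrcnn_resnet50_fpn(weights=None, num_classes=2)  # 0 = background, 1 = runway
        model.eval()

        # A real 3000x3000 satellite tile would normally be resized or chipped first;
        # a random tensor stands in for an actual image here.
        image = torch.rand(3, 800, 800)
        with torch.no_grad():
            pred = model([image])[0]

        keep = pred["scores"] > 0.75                 # placeholder confidence threshold
        print(pred["boxes"][keep].shape, pred["masks"][keep].shape)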

    Remote Sensing Image Scene Classification: Benchmark and State of the Art

    Remote sensing image scene classification plays an important role in a wide range of applications and hence has been receiving remarkable attention. During the past years, significant efforts have been made to develop various datasets and to present a variety of approaches for scene classification from remote sensing images. However, a systematic review of the literature concerning datasets and methods for scene classification is still lacking. In addition, almost all existing datasets have a number of limitations, including the small number of scene classes and images, the lack of image variation and diversity, and the saturation of accuracy. These limitations severely hinder the development of new approaches, especially deep learning-based methods. This paper first provides a comprehensive review of recent progress. Then, we propose a large-scale dataset, termed "NWPU-RESISC45", which is a publicly available benchmark for REmote Sensing Image Scene Classification (RESISC), created by Northwestern Polytechnical University (NWPU). This dataset contains 31,500 images, covering 45 scene classes with 700 images in each class. The proposed NWPU-RESISC45 (i) is large-scale in the number of scene classes and the total number of images, (ii) holds big variations in translation, spatial resolution, viewpoint, object pose, illumination, background, and occlusion, and (iii) has high within-class diversity and between-class similarity. The creation of this dataset will enable the community to develop and evaluate various data-driven algorithms. Finally, several representative methods are evaluated using the proposed dataset, and the results are reported as a useful baseline for future research.
    Comment: This manuscript is the accepted version for Proceedings of the IEEE
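
    A common baseline on benchmarks of this kind is to fine-tune an ImageNet-pretrained CNN on the 45 scene classes. The sketch below does so with torchvision; the directory layout, backbone choice, and hyperparameters are illustrative assumptions, not the methods evaluated in the paper.

        # Fine-tune an ImageNet-pretrained ResNet-18 on a 45-class scene dataset
        # arranged in the usual one-folder-per-class layout (paths are placeholders).
        import torch
        import torch.nn as nn
        from torch.utils.data import DataLoader
        from torchvision import datasets, models, transforms

        tfm = transforms.Compose([
            transforms.Resize((224, 224)),
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
        ])
        train_set = datasets.ImageFolder("NWPU-RESISC45/train", transform=tfm)
        loader = DataLoader(train_set, batch_size=64, shuffle=True, num_workers=4)

        model = models.resnet18(weights="IMAGENET1K_V1")
        model.fc = nn.Linear(model.fc.in_features, 45)   # 45 scene classes

        opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
        loss_fn = nn.CrossEntropyLoss()

        model.train()
        for images, labels in loader:        # one epoch shown; repeat as needed
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()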