28,081 research outputs found

    Mapping heterogeneous buried archaeological features using multisensor data from unmanned aerial vehicles

    Get PDF
    There is a long history of the use of aerial imagery for archaeological research, but the application of multisensor image data has only recently been facilitated by the development of unmanned aerial vehicles (UAVs). Two archaeological sites in the East Midlands, U.K., that differ in age and topography were selected for survey using multisensor imaging from a fixed-wing UAV. The aim of this study was to determine the optimum methodology for using UAVs to examine archaeological sites that have no obvious surface features, and to examine issues of ground control target design, thermal effects, image processing and advanced filtration. The information derived from the range of sensors used in this study enabled interpretation of buried archaeology at both sites. For any archaeological survey using UAVs, the acquisition of visible colour (RGB), multispectral and thermal imagery as a minimum is advised, as no single technique is sufficient to reveal the maximum amount of potential information.
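    A minimal sketch, not taken from the paper, of how co-registered multispectral and thermal layers might be combined once acquired: a vegetation index is paired with a thermal anomaly map to flag pixels worth closer inspection. The synthetic arrays, the NDVI percentile and the thermal z-score cut-off are all illustrative assumptions.

```python
# Hedged sketch: fuse a vegetation index with a thermal anomaly map to flag
# pixels that may indicate buried features. Arrays are synthetic stand-ins for
# co-registered UAV rasters; thresholds are arbitrary illustrations.
import numpy as np

rng = np.random.default_rng(0)
red = rng.uniform(0.05, 0.3, (200, 200))      # red reflectance band
nir = rng.uniform(0.2, 0.6, (200, 200))       # near-infrared band
thermal = rng.normal(22.0, 1.5, (200, 200))   # surface temperature, deg C

ndvi = (nir - red) / (nir + red + 1e-9)        # crop-mark proxy

# Standardise the thermal band so relative warm/cool anomalies stand out.
t_z = (thermal - thermal.mean()) / thermal.std()

# Flag pixels where suppressed vegetation vigour coincides with a thermal anomaly.
candidates = (ndvi < np.percentile(ndvi, 20)) & (np.abs(t_z) > 1.5)
print(f"{candidates.mean():.1%} of pixels flagged for closer inspection")
```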

    Trying to break new ground in aerial archaeology

    Get PDF
    Aerial reconnaissance continues to be a vital tool for landscape-oriented archaeological research. Although a variety of remote sensing platforms operate within the earth’s atmosphere, the majority of aerial archaeological information is still derived from oblique photographs collected during observer-directed reconnaissance flights, a prospection approach which has dominated archaeological aerial survey for the past century. The resulting highly biased imagery is generally catalogued in sub-optimal (spatial) databases, if at all, after which a small selection of images is orthorectified and interpreted. For decades, this has been the standard approach. Although many innovations, including digital cameras, inertial units, photogrammetry and computer vision algorithms, geographic(al) information systems and computing power, have emerged, their potential has not yet been fully exploited to reinvent and substantially optimise this crucial branch of landscape archaeology. The authors argue that a fundamental change is needed to transform the way aerial archaeologists approach data acquisition and image processing. By addressing the very core concepts of geographically biased aerial archaeological photographs and proposing new imaging technologies, data handling methods and processing procedures, this paper gives a personal opinion on how the methodological components of aerial archaeology, and specifically aerial archaeological photography, should evolve during the next decade if developing a more reliable record of our past is to be our central aim. A possible practical solution is illustrated by outlining a turnkey aerial prospection system for total coverage survey together with a semi-automated back-end pipeline that takes care of photograph correction and image enhancement as well as the management and interpretative mapping of the resulting data products. In this way, the proposed system addresses one of many bias issues in archaeological research: the bias we impart to the visual record as a result of selective coverage. While the total coverage approach outlined here may not altogether eliminate survey bias, it can vastly increase the amount of useful information captured during a single reconnaissance flight while mitigating the discriminating effects of observer-based, on-the-fly target selection. Furthermore, the information contained in this paper should make it clear that with current technology it is feasible to do so. This can radically alter the basis for aerial prospection and move landscape archaeology forward, beyond the inherently biased patterns that are currently created by airborne archaeological prospection.
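    A purely illustrative skeleton, not the authors' implementation, of the kind of semi-automated back-end pipeline described above (photograph correction, image enhancement, then hand-off to interpretative mapping). Every class name, function name and file path below is an assumption; the stage bodies are placeholders.

```python
# Illustrative skeleton only: each stage stands in for the semi-automated steps
# the paper outlines (correction, enhancement, management, mapping).
from dataclasses import dataclass
from typing import List

@dataclass
class AerialFrame:
    path: str            # raw image file (hypothetical path)
    corrected: bool = False
    enhanced: bool = False

def correct(frame: AerialFrame) -> AerialFrame:
    # e.g. lens/geometric correction and orthorectification would go here
    frame.corrected = True
    return frame

def enhance(frame: AerialFrame) -> AerialFrame:
    # e.g. contrast stretching or filtering to emphasise soil/crop marks
    frame.enhanced = True
    return frame

def run_pipeline(paths: List[str]) -> List[AerialFrame]:
    frames = [AerialFrame(p) for p in paths]
    return [enhance(correct(f)) for f in frames]

if __name__ == "__main__":
    processed = run_pipeline(["flight_001/img_0001.tif", "flight_001/img_0002.tif"])
    print(f"{len(processed)} frames ready for interpretative mapping")
```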

    Pilot investigation of remote sensing for intertidal oyster mapping in coastal South Carolina: a methods comparison

    Get PDF
    South Carolina’s oyster reefs are a major component of the coastal landscape. Eastern oysters Crassostrea virginica are an important economic resource to the state and serve many essential functions in the environment, including water filtration, creek bank stabilization and habitat for other plants and animals. Effective conservation and management of oyster reefs is dependent on an understanding of their abundance, distribution, condition, and change over time. In South Carolina, over 95% of the state’s oyster habitat is intertidal. The current intertidal oyster reef database for South Carolina was developed by field assessment over several years. This database was completed in the early 1980s and is in need of an update to assess resource/habitat status and trends across the state. Anthropogenic factors such as coastal development and associated waterway usage (e.g., boat wakes) are suspected of significantly altering the extent and health of the state’s oyster resources. In 2002 the NOAA Coastal Services Center’s (Center) Coastal Remote Sensing Program (CRS) worked with the Marine Resources Division of the South Carolina Department of Natural Resources (SCDNR) to develop methods for mapping intertidal oyster reefs along the South Carolina coast using remote sensing technology. The objective of this project was to provide SCDNR with potential methodologies and approaches for assessing oyster resources more efficiently than could be accomplished through field digitizing. The project focused on the utility of high-resolution aerial imagery and on documenting the effectiveness of various analysis techniques for accomplishing the update. (PDF contains 32 pages.)
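    One way to document the effectiveness of an image-based mapping technique is a per-pixel comparison against field-digitized reference data. The sketch below is not from the report; both rasters are synthetic placeholders and the 10% disagreement rate is an arbitrary assumption.

```python
# Hedged sketch: compare a remotely sensed oyster-reef classification against a
# field-digitized reference on a per-pixel basis. Both maps are synthetic here.
import numpy as np

rng = np.random.default_rng(1)
reference = rng.integers(0, 2, (500, 500))             # 1 = reef, 0 = other
predicted = reference.copy()
flip = rng.random((500, 500)) < 0.10                    # simulate 10% disagreement
predicted[flip] = 1 - predicted[flip]

tp = np.sum((predicted == 1) & (reference == 1))
fp = np.sum((predicted == 1) & (reference == 0))
fn = np.sum((predicted == 0) & (reference == 1))

overall = np.mean(predicted == reference)
producers = tp / (tp + fn)     # how much of the reference reef the method recovers
users = tp / (tp + fp)         # how much of the mapped reef is really reef
print(f"overall {overall:.2%}, producer's {producers:.2%}, user's {users:.2%}")
```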

    Deep learning in remote sensing: a review

    Get PDF
    Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. In particular, deep learning, as a major breakthrough in the field, has proven to be an extremely powerful tool in many domains. Should we embrace deep learning as the key to everything, or resist it as a 'black-box' solution? Opinions in the remote sensing community remain divided. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review the recent advances, and provide resources that make it simple to get started with deep learning in remote sensing. More importantly, we encourage remote sensing scientists to bring their expertise into deep learning and to use it as an implicit general model for tackling unprecedented, large-scale, influential challenges such as climate change and urbanization.
    Comment: Accepted for publication in IEEE Geoscience and Remote Sensing Magazine.
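    As a minimal, self-contained starting point in the spirit of "simple to get started with" (not code from the review), the sketch below applies a small convolutional network to multispectral image patches in PyTorch. The 4-band input, 64x64 patch size and six-class output are arbitrary assumptions.

```python
# Minimal sketch of a CNN for multispectral patch classification (illustrative
# only; band count, patch size and number of classes are arbitrary assumptions).
import torch
import torch.nn as nn

class SmallRSNet(nn.Module):
    def __init__(self, in_bands: int = 4, n_classes: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_bands, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

if __name__ == "__main__":
    model = SmallRSNet()
    patches = torch.randn(8, 4, 64, 64)    # batch of 4-band 64x64 patches
    print(model(patches).shape)             # torch.Size([8, 6])
```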

    A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community

    Full text link
    In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to prominence in numerous areas, such as computer vision (CV), speech recognition and natural language processing. While remote sensing (RS) poses a number of unique challenges, primarily related to sensors and applications, it inevitably draws on many of the same theories as CV, e.g., statistics, fusion and machine learning. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey to date of state-of-the-art RS DL research. We also review recent developments in the DL field that can be applied to RS. Specifically, we focus on theories, tools and challenges for the RS community, and on unsolved challenges and opportunities relating to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing DL models.
    Comment: 64 pages, 411 references. To appear in the Journal of Applied Remote Sensing.
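    Two of the challenges listed above, transfer learning and non-traditional data sources, are often handled together by adapting an ImageNet-pretrained backbone to extra spectral bands. The sketch below shows one common recipe, not anything prescribed by the survey; the 5-band input, 10-class head and use of ResNet-18 are assumptions, and the pretrained weights require a download with torchvision >= 0.13.

```python
# Hedged sketch of transfer learning for remote sensing: widen the first
# convolution of an ImageNet ResNet-18 to accept 5-band imagery, then replace
# the classifier head. Band count and class count are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights  # torchvision >= 0.13

IN_BANDS, N_CLASSES = 5, 10

model = resnet18(weights=ResNet18_Weights.DEFAULT)

# New stem: keep pretrained RGB filters, initialise extra bands with their mean.
old_conv = model.conv1
new_conv = nn.Conv2d(IN_BANDS, 64, kernel_size=7, stride=2, padding=3, bias=False)
with torch.no_grad():
    new_conv.weight[:, :3] = old_conv.weight
    new_conv.weight[:, 3:] = old_conv.weight.mean(dim=1, keepdim=True)
model.conv1 = new_conv

# Task-specific head for the (assumed) number of land-cover classes.
model.fc = nn.Linear(model.fc.in_features, N_CLASSES)

x = torch.randn(2, IN_BANDS, 224, 224)
print(model(x).shape)   # torch.Size([2, 10])
```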

    Aerial Vehicle Tracking by Adaptive Fusion of Hyperspectral Likelihood Maps

    Full text link
    Hyperspectral cameras provide unique spectral signatures that can be used to consistently distinguish materials, a capability useful for surveillance tasks. In this paper, we propose a novel real-time hyperspectral likelihood maps-aided tracking method (HLT) inspired by an adaptive hyperspectral sensor. A moving object tracking system generally consists of registration, object detection, and tracking modules. We focus on the target detection stage and remove the need to build offline classifiers or tune large numbers of hyperparameters, instead learning a generative target model online over hyperspectral channels ranging from visible to infrared wavelengths. The key idea is that our adaptive fusion method combines the likelihood maps from multiple bands of hyperspectral imagery into a single, more distinctive representation, increasing the margin between the mean values of foreground and background pixels in the fused map. Experimental results show that HLT not only outperforms all established fusion methods but is on par with current state-of-the-art hyperspectral target tracking frameworks.
    Comment: Accepted at the International Conference on Computer Vision and Pattern Recognition Workshops, 2017.
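    A simplified sketch of the fusion idea described above, not the authors' HLT implementation: each band's likelihood map is weighted by how well it separates foreground from background (the margin between mean likelihoods) before the maps are summed. The likelihood maps, mask and weighting rule here are assumptions for illustration.

```python
# Simplified sketch of likelihood-map fusion (not the paper's HLT method):
# each band's map is weighted by the margin between mean foreground and mean
# background likelihood, then the weighted maps are summed.
import numpy as np

def fuse_likelihood_maps(likelihoods: np.ndarray, fg_mask: np.ndarray) -> np.ndarray:
    """likelihoods: (bands, H, W) per-band target likelihoods in [0, 1].
    fg_mask: (H, W) boolean mask of the current target estimate."""
    bg_mask = ~fg_mask
    margins = np.array([
        band[fg_mask].mean() - band[bg_mask].mean() for band in likelihoods
    ])
    weights = np.clip(margins, 0.0, None)        # ignore bands that do not separate
    if weights.sum() == 0:
        weights = np.ones_like(weights)
    weights = weights / weights.sum()
    return np.tensordot(weights, likelihoods, axes=1)  # (H, W) fused map

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    maps = rng.random((30, 64, 64))              # 30 spectral bands (synthetic)
    mask = np.zeros((64, 64), dtype=bool)
    mask[20:40, 20:40] = True                    # assumed target region
    maps[:, mask] += 0.3                         # make the target brighter in all bands
    fused = fuse_likelihood_maps(np.clip(maps, 0, 1), mask)
    print(fused.shape, float(fused[mask].mean() - fused[~mask].mean()))
```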