
    NeMO-Net The Neural Multi-Modal Observation & Training Network for Global Coral Reef Assessment

    We present NeMO-Net, the first open-source deep convolutional neural network (CNN) and interactive learning and training software aimed at assessing the present and past dynamics of coral reef ecosystems through habitat mapping into 10 biological and physical classes. Shallow marine systems, particularly coral reefs, are under significant stress from climate change, ocean acidification, and other anthropogenic pressures, leading to rapid, often devastating changes in these fragile and diverse ecosystems. Historically, remote sensing of shallow marine habitats has been limited to meter-scale imagery due to the optical effects of ocean wave distortion, refraction, and optical attenuation. NeMO-Net combines 3D cm-scale distortion-free imagery captured using NASA FluidCam and Fluid lensing remote sensing technology with low-resolution airborne and spaceborne datasets of varying spatial resolutions, spectral spaces, calibrations, and temporal cadences in a supercomputer-based machine learning framework. NeMO-Net augments and improves the benthic habitat classification accuracy of low-resolution datasets across large geographic and temporal scales using high-resolution training data from FluidCam. NeMO-Net uses fully convolutional networks based upon ResNet and RefineNet to perform semantic segmentation of remote sensing imagery of shallow marine systems captured by drones, aircraft, and satellites, including WorldView and Sentinel.
Deep Laplacian Pyramid Super-Resolution Networks (LapSRN) alongside Domain Adversarial Neural Networks (DANNs) are used to reconstruct high-resolution information from low-resolution imagery and to recognize domain-invariant features across datasets from multiple platforms, achieving high classification accuracies that overcome inter-sensor spatial, spectral, and temporal variations. Finally, we share our online active learning and citizen science platform, which allows users to provide interactive training data for NeMO-Net in 2D and 3D, integrated within a deep learning framework. We present results from the Pacific Islands, including Fiji, Guam, and Peros Banhos, where 24-class classification accuracy exceeds 91%.
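As a rough illustration of the Laplacian-pyramid idea behind LapSRN, the sketch below progressively upsamples a low-resolution image and adds a high-frequency residual at each pyramid level. This is a minimal NumPy stand-in: `predict_residual` uses a fixed sharpening kernel in place of LapSRN's trained convolutional branch, and is not the network described in the abstract.

```python
import numpy as np

def upsample2x(img):
    """Nearest-neighbour 2x upsampling (stand-in for LapSRN's learned
    transposed convolution)."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def predict_residual(img):
    """Placeholder for the learned residual branch: a simple Laplacian
    sharpening term. In LapSRN this is a stack of trained conv layers;
    here it is a fixed approximation for illustration only."""
    padded = np.pad(img, 1, mode="edge")
    lap = (4 * img - padded[:-2, 1:-1] - padded[2:, 1:-1]
           - padded[1:-1, :-2] - padded[1:-1, 2:])
    return 0.1 * lap

def lapsrn_sketch(low_res, levels=2):
    """Laplacian-pyramid reconstruction: at each level, upsample by 2x and
    add a predicted high-frequency residual, giving 2**levels upscaling."""
    out = low_res
    for _ in range(levels):
        out = upsample2x(out)
        out = out + predict_residual(out)
    return out

lr = np.random.rand(16, 16)
sr = lapsrn_sketch(lr, levels=2)
print(sr.shape)  # (64, 64)
```

In the real network the residual branch is trained end-to-end at each scale, which is what lets it hallucinate plausible high-frequency detail rather than merely sharpening edges.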

    NeMO-Net - The Neural Multi-Modal Observation & Training Network for Global Coral Reef Assessment

    In the past decade, coral reefs worldwide have experienced unprecedented stresses due to climate change, ocean acidification, and anthropogenic pressures, instigating massive bleaching and die-off of these fragile and diverse ecosystems. Furthermore, remote sensing of these shallow marine habitats is hindered by ocean wave distortion, refraction, and optical attenuation, invariably yielding data products with low resolution and low signal-to-noise ratio (SNR). However, recent advances in UAV and Fluid Lensing technology have allowed us to capture multispectral 3D imagery of these systems at sub-cm scales from above the water surface, giving us an unprecedented view of their growth and decay. By combining spatial and spectral information from varying resolutions, we seek to augment and improve the classification accuracy of previously low-resolution datasets at large temporal scales. NeMO-Net, the first open-source deep convolutional neural network (CNN) and interactive learning and training software, currently being developed at NASA Ames, is aimed at assessing the present and past dynamics of coral reef ecosystems through determination of percent living cover and morphology. The latest iteration uses fully convolutional networks to segment and identify coral imagery taken by UAVs and satellites, including WorldView-2 and Sentinel. We present results taken from the Indian Ocean where classification accuracy has exceeded 91% for 24 geomorphological classes given ample training data. In addition, we utilize deep Laplacian Pyramid Super-Resolution Networks (LapSRN) to reconstruct high-resolution information from low-resolution imagery, trained from various UAV and satellite datasets. Finally, in the case of insufficient training data, we have developed an interactive online platform that allows users to easily segment and submit their classifications, which has been integrated with the current NeMO-Net workflow.
Specifically, we present results from the Fiji Islands in which preliminary user data has allowed for the accurate identification of 9 separate classes, despite issues such as cloud shadowing and spectral variation. The project is supported by NASA's Earth Science Technology Office (ESTO) Advanced Information Systems Technology (AIST-16) Program.
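The accuracy figures quoted above (e.g. 91% over 24 classes) are typically computed from a confusion matrix over labeled pixels. A minimal sketch, using tiny illustrative label arrays rather than NeMO-Net data:

```python
import numpy as np

def class_accuracies(y_true, y_pred, n_classes):
    """Overall and per-class accuracy from paired label maps.
    The confusion matrix counts how often true class t was predicted as p;
    its diagonal holds the correctly classified pixels."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true.ravel(), y_pred.ravel()):
        cm[t, p] += 1
    overall = np.trace(cm) / cm.sum()
    # Guard against classes absent from y_true (avoid division by zero).
    per_class = np.diag(cm) / np.maximum(cm.sum(axis=1), 1)
    return overall, per_class

# Toy labels for 3 classes; real usage would pass full segmentation maps.
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 0, 1, 2, 2, 2])
overall, per_class = class_accuracies(y_true, y_pred, 3)
print(overall)       # 5 of 6 pixels correct
print(per_class)     # class 1 only half right
```

Per-class accuracy matters here because benthic classes are strongly imbalanced; a high overall score can hide poor performance on rare classes.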

    Ultra-Stable Spectrometer for Sky-Scanning, Sun-Tracking Atmospheric Research (5STAR)

    The Spectrometer for Sky-Scanning, Sun-Tracking Atmospheric Research (4STAR) combines airborne sun tracking and sky scanning with diffraction spectroscopy to improve knowledge of atmospheric constituents and their links to air pollution and climate. Direct-beam hyperspectral measurement of optical depth improves retrievals of gas constituents and determination of aerosol properties. Sky scanning enhances retrievals of aerosol type and size distribution. Hyperspectral cloud-transmitted radiance measurements enable the retrieval of cloud properties from below clouds. These measurements tighten the closure between satellite and ground-based measurements. 4STAR incorporates a modular sun-tracking sky-scanning optical head with optical fiber signal transmission to rack-mounted spectrometers, permitting miniaturization of the external optical tracking head and future detector evolution. 4STAR has supported a broad range of flight experiments since it was first flown in 2010. This experience provides the basis for a series of improvements directed toward reducing measurement uncertainty and calibration complexity, and expanding future measurement capabilities, to be incorporated into a new 5STAR instrument. A 9-channel photodiode radiometer with AERONET-matched bandpass filters will be incorporated to improve calibration stability. A wide-dynamic-range tracking camera will provide a high-precision solar position tracking signal as well as an image of sky conditions around the solar axis. An ultrasonic window cleaning system design will be tested. A UV spectrometer tailored for formaldehyde and SO2 gas retrievals will be added to the spectrometer enclosure. Finally, expansion capability for a 4-channel polarized radiometer to measure the Stokes polarization vector of sky light will be incorporated. This paper presents initial progress on this next-generation 5STAR instrument.
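The direct-beam optical depth retrieval mentioned above rests on the Beer-Lambert law: the measured signal V = V0 * exp(-m * tau), so tau = ln(V0/V)/m, where V0 is the instrument's calibrated top-of-atmosphere signal and m the optical airmass. A minimal sketch with illustrative numbers (not 4STAR/5STAR calibration values):

```python
import numpy as np

def optical_depth(v, v0, airmass):
    """Total optical depth from a direct-beam sun-photometer signal via
    Beer-Lambert: V = V0 * exp(-m * tau)  =>  tau = ln(V0/V) / m.
    v0 is the calibrated extraterrestrial (top-of-atmosphere) signal,
    usually obtained by Langley calibration."""
    return np.log(v0 / v) / airmass

# Illustrative values only
v0 = 1.00   # calibrated extraterrestrial signal (arbitrary units)
v = 0.70    # measured direct-beam signal at one wavelength
m = 1.5     # optical airmass (solar zenith near 48 degrees)
tau = optical_depth(v, v0, m)
print(round(tau, 3))  # 0.238
```

This is why calibration stability of V0 matters so much: any drift in V0 maps directly into a bias in every retrieved optical depth, which motivates the AERONET-matched photodiode radiometer in the 5STAR design.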

    Spectrometer for Sky-Scanning, Sun-Tracking Atmospheric Research (4STAR) Instrument Improvements

    The Spectrometer for Sky-Scanning, Sun-Tracking Atmospheric Research (4STAR) combines airborne sun tracking and sky scanning with grating spectroscopy to improve knowledge of atmospheric constituents and their links to air pollution and climate. Hyperspectral measurements of direct-beam solar irradiance provide retrievals of gas constituents, aerosol optical depth, and aerosol and thin cloud optical properties. Sky radiance measurements in the principal and almucantar planes enhance retrievals of aerosol absorption, aerosol type, and size mode distribution. Zenith radiance measurements are used to retrieve cloud properties and phase, which in turn are used to quantify the radiative transfer below cloud layers. These airborne measurements tighten the closure between satellite and ground-based measurements. In contrast to the Ames Airborne Tracking Sunphotometer (AATS-14) predecessor instrument, new technologies for each subsystem have been incorporated into 4STAR. In particular, 4STAR utilizes a modular sun-tracking sky-scanning optical head with fiber-optic signal transmission to rack-mounted spectrometers, permitting miniaturization of the external optical head, and spectrometer-detector configurations that may be tailored for specific scientific objectives. This paper discusses technical challenges relating to compact optical collector design, radiometric dynamic range and stability, and broad spectral coverage at high resolution. Test results benchmarking the performance of the instrument against the AATS-14 standard and emerging science requirements are presented.
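Almucantar scans sample sky radiance at a fixed view zenith angle equal to the solar zenith angle, so each azimuth offset from the sun corresponds to a different scattering angle. The standard geometry gives cos(Theta) = cos^2(theta_s) + sin^2(theta_s) * cos(dphi). A small sketch of that relation (generic geometry, not 4STAR-specific code):

```python
import numpy as np

def almucantar_scattering_angle(sza_deg, dphi_deg):
    """Scattering angle (degrees) along the solar almucantar, where the view
    zenith equals the solar zenith angle sza and dphi is the azimuth offset
    from the sun: cos(Theta) = cos^2(sza) + sin^2(sza) * cos(dphi)."""
    sza = np.radians(sza_deg)
    dphi = np.radians(dphi_deg)
    cos_theta = np.cos(sza) ** 2 + np.sin(sza) ** 2 * np.cos(dphi)
    # Clip to guard against floating-point values slightly outside [-1, 1].
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

print(round(almucantar_scattering_angle(60.0, 0.0), 1))    # 0.0 (toward the sun)
print(round(almucantar_scattering_angle(60.0, 180.0), 1))  # 120.0
```

The usefulness of the almucantar for aerosol retrievals follows from this geometry: a single azimuth sweep at high solar zenith covers a wide range of scattering angles, which constrains the aerosol phase function and hence absorption and size distribution.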

    On the differences in the vertical distribution of modeled aerosol optical depth over the southeastern Atlantic

    The southeastern Atlantic is home to an expansive smoke aerosol plume overlying a large cloud deck for approximately a third of the year. The aerosol plume is mainly attributed to the extensive biomass burning activities that occur in southern Africa. Current Earth system models (ESMs) reveal significant differences in their estimates of regional aerosol radiative effects over this region. Such large differences partially stem from uncertainties in the vertical distribution of aerosols in the troposphere. These uncertainties translate into different aerosol optical depths (AODs) in the planetary boundary layer (PBL) and the free troposphere (FT). This study examines differences in the FT AOD fraction and in AOD among ESMs (WRF-CAM5, WRF-FINN, GEOS-Chem, EAM-E3SM, ALADIN, GEOS-FP, and MERRA-2) and against aircraft-based measurements from the NASA ObseRvations of Aerosols above CLouds and their intEractionS (ORACLES) field campaign. Models frequently define the PBL as the well-mixed surface-based layer, but this definition misses the upper parts of decoupled PBLs, in which most low-level clouds occur. To account for the presence of decoupled boundary layers in the models, the height of the maximum vertical gradient of specific humidity profiles from each model is used to define PBL heights. Results indicate that the monthly mean contribution of AOD in the FT to the total-column AOD ranges from 44 % to 74 % in September 2016 and from 54 % to 71 % in August 2017 within the region bounded by 25° S–0° N and 15° W–15° E (excluding land) among the ESMs. ALADIN and GEOS-Chem show similar aerosol plume patterns to a derived above-cloud aerosol product from the Moderate Resolution Imaging Spectroradiometer (MODIS) during September 2016, but none of the models show a similar above-cloud plume pattern to MODIS in August 2017.
Using the second-generation High Spectral Resolution Lidar (HSRL-2) to derive an aircraft-based constraint on the AOD and the fractional AOD, we found that WRF-CAM5 produces 40 % less AOD than the HSRL-2 measurements, but it performs well at separating the AOD fraction between the FT and the PBL. AOD fractions in the FT for GEOS-Chem and EAM-E3SM are, respectively, 10 % and 15 % lower than the AOD fractions from the HSRL-2. Their similar mean AODs reflect a cancellation of high and low AOD biases. Compared with aircraft-based observations, GEOS-FP, MERRA-2, and ALADIN produce 24 %–36 % less AOD and tend to misplace more aerosols in the PBL. The models generally underestimate AODs for measured AODs that are above 0.8, indicating their limitations at reproducing high AODs. The differences in the absolute AOD, the FT AOD, and the vertical apportioning of AOD in different models highlight the need to continue improving the accuracy of modeled AOD distributions. These differences affect the sign and magnitude of the net aerosol radiative forcing, especially when aerosols are in contact with clouds.
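The study's PBL definition (the altitude of the strongest vertical gradient of specific humidity) and the resulting FT/PBL split of column AOD can be sketched with synthetic profiles. The profiles below are illustrative, not ORACLES data:

```python
import numpy as np

def column_integral(z, values):
    """Trapezoidal integral of a profile over altitude."""
    return float(np.sum(0.5 * (values[1:] + values[:-1]) * np.diff(z)))

def pbl_height(z, q):
    """PBL top as the altitude of the strongest (most negative) vertical
    gradient of specific humidity - the sharpest moisture drop-off,
    following the decoupled-PBL definition used in the study."""
    dq_dz = np.gradient(q, z)
    return z[np.argmin(dq_dz)]

def ft_aod_fraction(z, extinction, z_pbl):
    """Fraction of column AOD residing above the PBL top, from an
    aerosol extinction profile."""
    total = column_integral(z, extinction)
    ft = column_integral(z, np.where(z >= z_pbl, extinction, 0.0))
    return ft / total

# Synthetic profiles: moist PBL below ~1.5 km, smoke layer at 2-4 km.
z = np.linspace(0, 6000, 61)                          # altitude, m
q = np.where(z < 1500, 10.0, 2.0)                     # specific humidity, g/kg
ext = np.where((z > 2000) & (z < 4000), 1e-4, 2e-5)   # extinction, 1/m

z_top = pbl_height(z, q)
frac = ft_aod_fraction(z, ext, z_top)
print(z_top, frac)  # PBL top near 1.4 km; most AOD in the FT
```

With an elevated smoke layer like this, most of the column AOD sits in the free troposphere, which is exactly the apportioning the models disagree on.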

    Using Convolutional Neural Networks for Cloud Detection on VENμS Images over Multiple Land-Cover Types

    In most parts of the electromagnetic spectrum, solar radiation cannot penetrate clouds. Therefore, cloud detection and masking are essential in image preprocessing for observing the Earth and analyzing its properties. Because clouds vary in size, shape, and structure, an accurate algorithm is required for removing them from the area of interest. This task is usually more challenging over bright surfaces such as exposed sunny deserts or snow than over water bodies or vegetated surfaces. The overarching goal of the current study is to explore and compare the performance of three Convolutional Neural Network architectures (U-Net, SegNet, and DeepLab) for detecting clouds in VENμS satellite images. To fulfil this goal, three VENμS tiles in Israel were selected. The tiles represent different land-use and cover categories, including vegetated, urban, agricultural, and arid areas, as well as water bodies, with a special focus on bright desert surfaces. Additionally, the study examines the effect of various channel inputs, exploring possibilities of broader usage of these architectures for different data sources. It was found that among the tested architectures, U-Net performs the best in most settings. Its results on a simple RGB-based dataset indicate its potential value for any satellite system screening, at least in the visible spectrum. It is concluded that all of the tested architectures outperform the current VENμS cloud-masking algorithm by lowering the false positive detection ratio by tens of percent, and should be considered an alternative by any user dealing with cloud-corrupted scenes.
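The false-positive comparison above amounts to counting truly clear pixels that a mask flags as cloud. A minimal sketch over toy boolean masks (illustrative arrays, not VENμS data):

```python
import numpy as np

def false_positive_ratio(pred_mask, true_mask):
    """Fraction of truly clear pixels flagged as cloud - the kind of metric
    by which CNN cloud masks are compared against an operational algorithm.
    Both inputs are boolean arrays where True means 'cloud'."""
    clear = ~true_mask                              # reference clear pixels
    fp = np.logical_and(pred_mask, clear).sum()     # clear flagged as cloud
    return fp / clear.sum()

# Toy 2x4 scene: reference has 2 cloudy and 6 clear pixels;
# the prediction adds 2 false alarms over clear ground.
true_mask = np.array([[1, 1, 0, 0],
                      [0, 0, 0, 0]], dtype=bool)
pred_mask = np.array([[1, 1, 1, 0],
                      [0, 1, 0, 0]], dtype=bool)
print(false_positive_ratio(pred_mask, true_mask))  # 2/6
```

Over bright deserts, false positives are the dominant failure mode because bare bright soil resembles cloud spectrally, which is why the study focuses on this ratio rather than overall accuracy.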
