NeMO-Net - The Neural Multi-Modal Observation & Training Network for Global Coral Reef Assessment
We present NeMO-Net, the first open-source deep convolutional neural network (CNN) and interactive learning and training software aimed at assessing the present and past dynamics of coral reef ecosystems through habitat mapping into 10 biological and physical classes. Shallow marine systems, particularly coral reefs, are under significant pressure from climate change, ocean acidification, and other anthropogenic stressors, leading to rapid, often devastating changes in these fragile and diverse ecosystems. Historically, remote sensing of shallow marine habitats has been limited to meter-scale imagery due to the optical effects of ocean wave distortion, refraction, and optical attenuation. NeMO-Net combines 3D cm-scale distortion-free imagery captured using NASA FluidCam and fluid lensing remote sensing technology with airborne and spaceborne datasets of varying spatial resolutions, spectral spaces, calibrations, and temporal cadences in a supercomputer-based machine learning framework. NeMO-Net augments and improves the benthic habitat classification accuracy of low-resolution datasets across large geographic and temporal scales using high-resolution training data from FluidCam. NeMO-Net uses fully convolutional networks based upon ResNet and RefineNet to perform semantic segmentation of remote sensing imagery of shallow marine systems captured by drones, aircraft, and satellites, including WorldView and Sentinel.
Deep Laplacian Pyramid Super-Resolution Networks (LapSRN) alongside Domain Adversarial Neural Networks (DANNs) are used to reconstruct high-resolution information from low-resolution imagery and to recognize domain-invariant features across datasets from multiple platforms, achieving high classification accuracies and overcoming inter-sensor spatial, spectral, and temporal variations. Finally, we share our online active learning and citizen science platform, which allows users to provide interactive training data for NeMO-Net in 2D and 3D, integrated within a deep learning framework. We present results from the Pacific Islands, including Fiji, Guam, and Peros Banhos, where 24-class classification accuracy exceeds 91%.
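The pyramid bookkeeping that LapSRN builds on can be sketched without any deep-learning framework. In the sketch below the learned upsampling filters are replaced by fixed 2x averaging and nearest-neighbour upsampling on a 1-D signal, purely to show how residual levels are stored and recombined; all names are illustrative, not taken from the NeMO-Net codebase.

```python
# Minimal 1-D Laplacian pyramid sketch (pure Python). LapSRN learns the
# upsampling filters; here the down/upsampling is fixed, just to show the
# decomposition-and-reconstruction structure.

def downsample(x):
    """Halve resolution by averaging adjacent pairs."""
    return [(x[i] + x[i + 1]) / 2.0 for i in range(0, len(x) - 1, 2)]

def upsample(x, n):
    """Nearest-neighbour upsample back to length n."""
    return [x[min(i // 2, len(x) - 1)] for i in range(n)]

def build_laplacian_pyramid(signal, levels):
    """Return (residual_levels, coarsest) so that the signal is recoverable."""
    pyramid, current = [], signal
    for _ in range(levels):
        coarse = downsample(current)
        up = upsample(coarse, len(current))
        pyramid.append([c - u for c, u in zip(current, up)])  # high-freq residual
        current = coarse
    return pyramid, current

def reconstruct(pyramid, coarsest):
    """Invert the pyramid: repeatedly upsample and add back the residuals."""
    current = coarsest
    for residual in reversed(pyramid):
        up = upsample(current, len(residual))
        current = [u + r for u, r in zip(up, residual)]
    return current

signal = [1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0, 128.0]
pyr, coarse = build_laplacian_pyramid(signal, levels=2)
assert reconstruct(pyr, coarse) == signal  # lossless round trip
```

In a super-resolution setting the network predicts the residual levels from the coarse input instead of computing them from a known high-resolution signal.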
NeMO-Net - The Neural Multi-Modal Observation & Training Network for Global Coral Reef Assessment
In the past decade, coral reefs worldwide have experienced unprecedented stresses due to climate change, ocean acidification, and anthropogenic pressures, instigating massive bleaching and die-off of these fragile and diverse ecosystems. Furthermore, remote sensing of these shallow marine habitats is hindered by ocean wave distortion, refraction, and optical attenuation, leading invariably to data products that are often of low resolution and signal-to-noise ratio (SNR). However, recent advances in UAV and Fluid Lensing technology have allowed us to capture multispectral 3D imagery of these systems at sub-cm scales from above the water surface, giving us an unprecedented view of their growth and decay. By combining spatial and spectral information from varying resolutions, we seek to augment and improve the classification accuracy of previously low-resolution datasets at large temporal scales. NeMO-Net, the first open-source deep convolutional neural network (CNN) and interactive learning and training software, currently being developed at NASA Ames, is aimed at assessing the present and past dynamics of coral reef ecosystems through determination of percent living cover and morphology. The latest iteration uses fully convolutional networks to segment and identify coral imagery taken by UAVs and satellites, including WorldView-2 and Sentinel. We present results taken from the Indian Ocean where classification accuracy has exceeded 91% for 24 geomorphological classes given ample training data. In addition, we utilize deep Laplacian Pyramid Super-Resolution Networks (LapSRN) to reconstruct high resolution information from low resolution imagery, trained from various UAV and satellite datasets. Finally, in the case of insufficient training data, we have developed an interactive online platform that allows users to easily segment and submit their classifications, which has been integrated with the current NeMO-Net workflow.
Specifically, we present results from the Fiji Islands in which preliminary user data has allowed for the accurate identification of 9 separate classes, despite issues such as cloud shadowing and spectral variation. The project is supported by NASA's Earth Science Technology Office (ESTO) Advanced Information Systems Technology (AIST-16) Program.
Ultra-Stable Spectrometer for Sky-Scanning, Sun-Tracking Atmospheric Research (5STAR)
The Spectrometer for Sky-Scanning, Sun-Tracking Atmospheric Research (4STAR) combines airborne sun tracking and sky scanning with diffraction spectroscopy to improve knowledge of atmospheric constituents and their links to air pollution and climate. Direct-beam hyperspectral measurement of optical depth improves retrievals of gas constituents and determination of aerosol properties. Sky scanning enhances retrievals of aerosol type and size distribution. Hyperspectral cloud-transmitted radiance measurements enable the retrieval of cloud properties from below clouds. These measurements tighten the closure between satellite and ground-based measurements. 4STAR incorporates a modular sun-tracking, sky-scanning optical head with optical fiber signal transmission to rack-mounted spectrometers, permitting miniaturization of the external optical tracking head, and future detector evolution. 4STAR has supported a broad range of flight experiments since it was first flown in 2010. This experience provides the basis for a series of improvements directed toward reducing measurement uncertainty and calibration complexity, and expanding future measurement capabilities, to be incorporated into a new 5STAR instrument. A 9-channel photodiode radiometer with AERONET-matched bandpass filters will be incorporated to improve calibration stability. A wide-dynamic-range tracking camera will provide a high-precision solar position tracking signal as well as an image of sky conditions around the solar axis. An ultrasonic window cleaning system design will be tested. A UV spectrometer tailored for formaldehyde and SO2 gas retrievals will be added to the spectrometer enclosure. Finally, expansion capability for a 4-channel polarized radiometer to measure the Stokes polarization vector of sky light will be incorporated. This paper presents initial progress on this next-generation 5STAR instrument.
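The measurement principle behind the direct-beam retrievals described above is Beer-Lambert attenuation along the slant path. The sketch below is a hedged illustration of that principle only, not of the instrument's actual processing chain; the variable names and the example numbers are made up.

```python
import math

# Direct-beam sun photometry sketch: V = V0 * exp(-m * tau), so
# tau = ln(V0 / V) / m, where V0 is the extraterrestrial calibration
# signal (e.g. from a Langley plot) and m is the airmass.

def total_optical_depth(v, v0, airmass):
    """Total slant-path optical depth from a direct-beam signal."""
    return math.log(v0 / v) / airmass

def aerosol_optical_depth(v, v0, airmass, tau_rayleigh, tau_gas=0.0):
    """Subtract molecular (Rayleigh) and trace-gas optical depths
    to isolate the aerosol contribution."""
    return total_optical_depth(v, v0, airmass) - tau_rayleigh - tau_gas

# Example with made-up numbers: signal attenuated to 60% of V0 at airmass 2.
tau = total_optical_depth(v=0.60, v0=1.0, airmass=2.0)
aod = aerosol_optical_depth(v=0.60, v0=1.0, airmass=2.0, tau_rayleigh=0.05)
print(round(tau, 4), round(aod, 4))
```

The calibration-stability improvements in 5STAR matter precisely because any drift in V0 maps directly into an optical-depth bias through this relation.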
The effect of low-level thin arctic clouds on shortwave irradiance: evaluation of estimates from spaceborne passive imagery with aircraft observations
Cloud optical properties such as optical thickness along with surface albedo are important inputs for deriving the shortwave radiative effects of clouds from spaceborne remote sensing. Owing to insufficient knowledge about the snow or ice surface in the Arctic, cloud detection and the retrieval products derived from passive remote sensing, such as from the Moderate Resolution Imaging Spectroradiometer (MODIS), are difficult to obtain with adequate accuracy – especially for low-level thin clouds, which are ubiquitous in the Arctic. This study aims at evaluating the spectral and broadband irradiance calculated from MODIS-derived cloud properties in the Arctic using aircraft measurements collected during the Arctic Radiation-IceBridge Sea and Ice Experiment (ARISE), specifically using the upwelling and downwelling shortwave spectral and broadband irradiance measured by the Solar Spectral Flux Radiometer (SSFR) and the BroadBand Radiometer system (BBR). This starts with the derivation of surface albedo from SSFR and BBR, accounting for the heterogeneous surface in the marginal ice zone (MIZ) with aircraft camera imagery, followed by subsequent intercomparisons of irradiance measurements and radiative transfer calculations in the presence of thin clouds. It ends with an attribution of any biases we found to causes, based on the spectral dependence and the variations in the measured and calculated irradiance along the flight track.
The spectral surface albedo derived from the airborne radiometers is consistent with prior ground-based and airborne measurements and adequately represents the surface variability for the study region and time period. Somewhat surprisingly, the primary error in MODIS-derived irradiance fields for this study stems from undetected clouds, rather than from the retrieved cloud properties. In our case study, about 27 % of clouds remained undetected, which is attributable to clouds with an optical thickness of less than 0.5.
We conclude that passive imagery has the potential to accurately predict shortwave irradiances in the region if the detection of thin clouds is improved. Of at least equal importance, however, is the need for an operational imagery-based surface albedo product for the polar regions that adequately captures its temporal, spatial, and spectral variability to estimate cloud radiative effects from spaceborne remote sensing.
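The first analysis step described above, deriving surface albedo from paired airborne radiometer measurements, amounts conceptually to a per-wavelength ratio of upwelling to downwelling irradiance. The sketch below shows only that ratio; the real SSFR/BBR processing additionally accounts for the atmosphere between aircraft and surface and for surface heterogeneity in the marginal ice zone, which this sketch deliberately omits. Values and names are illustrative.

```python
# Hedged sketch: spectral surface albedo as F_up / F_down, per wavelength.
# Real processing corrects for the atmospheric layer below the aircraft;
# this assumes that correction has already been applied.

def surface_albedo(upwelling, downwelling):
    """Albedo per spectral channel, guarding against zero downwelling flux."""
    return [up / dn if dn > 0 else float("nan")
            for up, dn in zip(upwelling, downwelling)]

# Illustrative irradiances (W m^-2 nm^-1) for a partly ice-covered scene:
f_up = [0.45, 0.40, 0.30]
f_dn = [0.60, 0.58, 0.55]
albedo = surface_albedo(f_up, f_dn)
print([round(a, 3) for a in albedo])
```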
Spectrometer for Sky-Scanning, Sun-Tracking Atmospheric Research (4STAR) Instrument Improvements
The Spectrometer for Sky-Scanning, Sun-Tracking Atmospheric Research (4STAR) combines airborne sun tracking and sky scanning with grating spectroscopy to improve knowledge of atmospheric constituents and their links to air pollution and climate. Hyperspectral measurements of direct-beam solar irradiance provide retrievals of gas constituents, aerosol optical depth, and aerosol and thin cloud optical properties. Sky radiance measurements in the principal and almucantar planes enhance retrievals of aerosol absorption, aerosol type, and size mode distribution. Zenith radiance measurements are used to retrieve cloud properties and phase, which in turn are used to quantify the radiative transfer below cloud layers. These airborne measurements tighten the closure between satellite and ground-based measurements. In contrast to the Ames Airborne Tracking Sunphotometer (AATS-14) predecessor instrument, new technologies for each subsystem have been incorporated into 4STAR. In particular, 4STAR utilizes a modular sun-tracking, sky-scanning optical head with fiber optic signal transmission to rack-mounted spectrometers, permitting miniaturization of the external optical head, and spectrometer-detector configurations that may be tailored for specific scientific objectives. This paper discusses technical challenges relating to compact optical collector design, radiometric dynamic range and stability, and broad spectral coverage at high resolution. Test results benchmarking the performance of the instrument against the AATS-14 standard and emerging science requirements are presented.
Empirically derived parameterizations of the direct aerosol radiative effect based on ORACLES aircraft observations
In this paper, we use observations from the NASA ORACLES (ObseRvations of Aerosols above CLouds and their intEractionS) aircraft campaign to develop a framework by way of two parameterizations that establishes regionally representative relationships between aerosol-cloud properties and their radiative effects. These relationships rely on new spectral aerosol property retrievals of the single scattering albedo (SSA) and asymmetry parameter (ASY). The retrievals capture the natural variability of the study region as sampled, and both were found to be fairly narrowly constrained (SSA: 0.83 ± 0.03 in the mid-visible, 532 nm; ASY: 0.54 ± 0.06 at 532 nm). The spectral retrievals are well suited for calculating the direct aerosol radiative effect (DARE) since SSA and ASY are tied directly to the irradiance measured in the presence of aerosols – one of the inputs to the spectral DARE.
The framework allows for entire campaigns to be generalized into a set of parameterizations. For a range of solar zenith angles, it links the broadband DARE to the mid-visible aerosol optical depth (AOD) and the albedo (α) of the underlying scene (either clouds or clear sky) by way of the first parameterization: P(AOD, α). For ORACLES, the majority of the case-to-case variability of the broadband DARE is attributable to the dependence on the two driving parameters of P(AOD, α). A second, extended, parameterization PX(AOD, α, SSA) explains even more of the case-to-case variability by introducing the mid-visible SSA as a third parameter. These parameterizations establish a direct link from two or three mid-visible (narrowband) parameters to the broadband DARE, implicitly accounting for the underlying spectral dependencies of its drivers. They circumvent some of the assumptions when calculating DARE from satellite products or in a modeling context. For example, the DARE dependence on aerosol microphysical properties is not explicit in P or PX because the asymmetry parameter varies too little from case to case to translate into appreciable DARE variability. While these particular DARE parameterizations only represent the ORACLES data, they raise the prospect of generalizing the framework to other regions.
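The parameterization idea can be illustrated as a regression of broadband DARE against mid-visible AOD and scene albedo. The functional form below, DARE ~ AOD * (c0 + c1 * albedo), is an illustrative choice rather than the exact form fitted in the paper, and the sample values are synthetic.

```python
# Hedged sketch of a P(AOD, albedo)-style parameterization: least-squares fit
# of DARE = c0*AOD + c1*AOD*albedo via 2x2 normal equations (pure Python).

def fit_dare(aods, albedos, dares):
    """Solve for (c0, c1) in DARE = c0*AOD + c1*AOD*albedo."""
    x1 = aods
    x2 = [a * al for a, al in zip(aods, albedos)]
    s11 = sum(v * v for v in x1)
    s12 = sum(u * v for u, v in zip(x1, x2))
    s22 = sum(v * v for v in x2)
    b1 = sum(u * y for u, y in zip(x1, dares))
    b2 = sum(v * y for v, y in zip(x2, dares))
    det = s11 * s22 - s12 * s12
    c0 = (b1 * s22 - b2 * s12) / det
    c1 = (s11 * b2 - s12 * b1) / det
    return c0, c1

# Synthetic sample: cooling over dark scenes, warming over bright clouds,
# generated from the model itself so the fit recovers the coefficients.
aods    = [0.2, 0.4, 0.6, 0.3, 0.5]
albedos = [0.05, 0.3, 0.6, 0.5, 0.1]
dares   = [a * (-30.0 + 120.0 * al) for a, al in zip(aods, albedos)]
c0, c1 = fit_dare(aods, albedos, dares)
print(round(c0, 2), round(c1, 2))
```

Extending to PX(AOD, albedo, SSA) would add a third regressor in the same way; the point of the sketch is only that a few narrowband parameters can carry the broadband DARE variability.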
Above-cloud aerosol radiative effects based on ORACLES 2016 and ORACLES 2017 aircraft experiments
Determining the direct aerosol radiative effect (DARE) of absorbing aerosols above clouds from satellite observations alone is a challenging task, in part because the radiative signal of the aerosol layer is not easily untangled from that of the clouds below. In this study, we use aircraft measurements from the NASA ObseRvations of Aerosols above CLouds and their intEractionS (ORACLES) project in the southeastern Atlantic to derive it with as few assumptions as possible. This is accomplished by using spectral irradiance measurements (Solar Spectral Flux Radiometer, SSFR) and aerosol optical depth (AOD) retrievals (Spectrometer for Sky-Scanning, Sun-Tracking Atmospheric Research, 4STAR) during vertical profiles (spirals) that minimize the albedo variability of the underlying cloud field – thus isolating aerosol radiative effects from those of the cloud field below. For two representative cases, we retrieve spectral aerosol single scattering albedo (SSA) and the asymmetry parameter (g) from these profile measurements and calculate DARE given the albedo range measured by SSFR on horizontal legs above clouds. For mid-visible wavelengths, we find SSA values from 0.80 to 0.85 and a significant spectral dependence of g. As the cloud albedo increases, the aerosol increasingly warms the column. The transition from a cooling to a warming top-of-aerosol radiative effect occurs at an albedo value (critical albedo) just above 0.2 in the mid-visible wavelength range. In a companion paper, we use the techniques introduced here to generalize our findings to all 2016 and 2017 measurements and parameterize aerosol radiative effects.
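The critical-albedo concept above has a simple reading: if DARE is (approximately) linear in scene albedo, the cooling-to-warming transition is the root of DARE(albedo) = 0. The coefficients below are illustrative, chosen only so the root lands just above 0.2 as reported for the mid-visible; they are not retrieved values from the study.

```python
# Hedged sketch of the critical albedo: the sign change of a linear
# DARE(albedo) = c0 + c1 * albedo model. Coefficients are made up.

def critical_albedo(c0, c1):
    """Albedo at which DARE(albedo) = c0 + c1*albedo crosses zero (c1 != 0)."""
    return -c0 / c1

# Illustrative model: -10 W m^-2 cooling over a black surface, increasing
# by +47 W m^-2 per unit albedo as the bright cloud reflects more light
# up through the absorbing layer.
alpha_crit = critical_albedo(c0=-10.0, c1=47.0)
print(round(alpha_crit, 3))
```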
On the differences in the vertical distribution of modeled aerosol optical depth over the southeastern Atlantic
The southeastern Atlantic is home to an expansive smoke aerosol plume overlying a large cloud deck for approximately a third of the year. The aerosol plume is mainly attributed to the extensive biomass burning activities that occur in southern Africa. Current Earth system models (ESMs) reveal significant differences in their estimates of regional aerosol radiative effects over this region. Such large differences partially stem from uncertainties in the vertical distribution of aerosols in the troposphere. These uncertainties translate into different aerosol optical depths (AODs) in the planetary boundary layer (PBL) and the free troposphere (FT). This study examines differences in the FT AOD fraction and in AOD among ESMs (WRF-CAM5, WRF-FINN, GEOS-Chem, EAM-E3SM, ALADIN, GEOS-FP, and MERRA-2) and aircraft-based measurements from the NASA ObseRvations of Aerosols above CLouds and their intEractionS (ORACLES) field campaign. Models frequently define the PBL as the well-mixed surface-based layer, but this definition misses the upper parts of decoupled PBLs, in which most low-level clouds occur. To account for the presence of decoupled boundary layers in the models, the height of maximum vertical gradient of specific humidity profiles from each model is used to define PBL heights. Results indicate that the monthly mean contribution of AOD in the FT to the total-column AOD ranges from 44 % to 74 % in September 2016 and from 54 % to 71 % in August 2017 within the region bounded by 25° S–0° and 15° W–15° E (excluding land) among the ESMs. ALADIN and GEOS-Chem show similar aerosol plume patterns to a derived above-cloud aerosol product from the Moderate Resolution Imaging Spectroradiometer (MODIS) during September 2016, but none of the models show a similar above-cloud plume pattern to MODIS in August 2017.
Using the second-generation High Spectral Resolution Lidar (HSRL-2) to derive an aircraft-based constraint on the AOD and the fractional AOD, we found that WRF-CAM5 produces 40 % less AOD than those from the HSRL-2 measurements, but it performs well at separating AOD fraction between the FT and the PBL. AOD fractions in the FT for GEOS-Chem and EAM-E3SM are, respectively, 10 % and 15 % lower than the AOD fractions from the HSRL-2. Their similar mean AODs reflect a cancellation of high and low AOD biases. Compared with aircraft-based observations, GEOS-FP, MERRA-2, and ALADIN produce 24 %–36 % less AOD and tend to misplace more aerosols in the PBL. The models generally underestimate AODs for measured AODs that are above 0.8, indicating their limitations at reproducing high AODs. The differences in the absolute AOD, FT AOD, and the vertical apportioning of AOD in different models highlight the need to continue improving the accuracy of modeled AOD distributions. These differences affect the sign and magnitude of the net aerosol radiative forcing, especially when aerosols are in contact with clouds.
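The PBL-height definition used above, the height of the maximum vertical gradient of specific humidity, can be sketched directly from discrete profiles. The profile below is synthetic; real model output would first be interpolated to a common vertical grid, which this sketch skips.

```python
# Hedged sketch: PBL height as the height of the strongest vertical gradient
# |dq/dz| of specific humidity, evaluated on a discrete profile.

def pbl_height(heights_m, q_gkg):
    """Return the layer-midpoint height (m) of the sharpest humidity gradient."""
    best_h, best_grad = None, -1.0
    for i in range(len(heights_m) - 1):
        dz = heights_m[i + 1] - heights_m[i]
        grad = abs(q_gkg[i + 1] - q_gkg[i]) / dz
        if grad > best_grad:
            best_grad = grad
            best_h = 0.5 * (heights_m[i] + heights_m[i + 1])  # layer midpoint
    return best_h

# Moist boundary layer capped by a sharp hydrolapse between 700 and 900 m:
z = [100, 300, 500, 700, 900, 1100, 1300]       # height (m)
q = [9.0, 8.8, 8.6, 8.5, 4.0, 3.8, 3.6]         # specific humidity (g/kg)
print(pbl_height(z, q))
```

Because the hydrolapse sits above the well-mixed layer in decoupled conditions, this gradient-based height captures the cloud-bearing upper PBL that a mixed-layer definition would miss.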
Using Convolutional Neural Networks for Cloud Detection on VENμS Images over Multiple Land-Cover Types
In most parts of the electromagnetic spectrum, solar radiation cannot penetrate clouds. Therefore, cloud detection and masking are essential in image preprocessing for observing the Earth and analyzing its properties. Because clouds vary in size, shape, and structure, an accurate algorithm is required for removing them from the area of interest. This task is usually more challenging over bright surfaces such as exposed sunny deserts or snow than over water bodies or vegetated surfaces. The overarching goal of the current study is to explore and compare the performance of three Convolutional Neural Network architectures (U-Net, SegNet, and DeepLab) for detecting clouds in the VENμS satellite images. To fulfil this goal, three VENμS tiles in Israel were selected. The tiles represent different land-use and cover categories, including vegetated, urban, agricultural, and arid areas, as well as water bodies, with a special focus on bright desert surfaces. Additionally, the study examines the effect of various channel inputs, exploring possibilities of broader usage of these architectures for different data sources. It was found that among the tested architectures, U-Net performs the best in most settings. Its results on a simple RGB-based dataset indicate its potential value for any satellite system screening, at least in the visible spectrum. It is concluded that all of the tested architectures outperform the current VENμS cloud-masking algorithm by lowering the false positive detection ratio by tens of percent, and should be considered as an alternative by any user dealing with cloud-corrupted scenes.
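The headline metric above, the false-positive detection ratio, has a direct pixel-level definition: the fraction of truly clear pixels flagged as cloud, which is exactly how bright desert pixels misflagged as cloud show up. The masks below are tiny hand-made examples, not VENμS data, and the two mask variables are purely illustrative.

```python
# Hedged sketch of the evaluation metric: false-positive ratio of a binary
# cloud mask (1 = cloud, 0 = clear) against a reference mask.

def false_positive_ratio(predicted, reference):
    """FP / (FP + TN): fraction of truly clear pixels flagged as cloud."""
    fp = sum(1 for p, r in zip(predicted, reference) if p == 1 and r == 0)
    tn = sum(1 for p, r in zip(predicted, reference) if p == 0 and r == 0)
    return fp / (fp + tn) if (fp + tn) else 0.0

reference = [0, 0, 0, 0, 1, 1, 0, 0]   # two truly cloudy pixels
baseline  = [1, 0, 1, 0, 1, 1, 1, 0]   # over-flags bright surfaces as cloud
cnn_mask  = [0, 0, 1, 0, 1, 1, 0, 0]   # fewer false alarms
print(false_positive_ratio(baseline, reference),
      false_positive_ratio(cnn_mask, reference))
```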