
    Aerial imagery for yield prediction

    UAVs enable fast, high-resolution image capture of cotton fields. These images are typically assessed manually to identify areas of stress or reduced productivity, but such assessments are not currently linked directly with on-farm management decisions. NCEA has developed software that derives yield predictions and irrigation requirements from: (i) UAV images; (ii) automated image analysis that extracts cotton growth rates; and (iii) a biophysical cotton model. CottonInfo extension officers and agronomists collected imagery in three regions in the 2016/17 and 2017/18 cotton seasons. Yield predictions from the 2016/17 season evaluations were within 5% of the final yield.
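    A minimal sketch of the three-stage pipeline this abstract describes, assuming one vegetation-index image per UAV survey; all function names and parameter values are hypothetical illustrations, not the NCEA software's actual interface:

```python
# Hypothetical sketch: UAV images -> growth rate -> biophysical yield estimate.
import numpy as np

def canopy_cover(image: np.ndarray, threshold: float = 0.2) -> float:
    """Fraction of pixels classified as cotton canopy.

    Assumes `image` holds a per-pixel vegetation index (e.g. NDVI) in [0, 1].
    """
    return float((image > threshold).mean())

def growth_rate(covers: list, days: list) -> float:
    """Average change in canopy cover per day, via a least-squares fit."""
    slope, _intercept = np.polyfit(days, covers, deg=1)
    return slope

def predict_yield(rate: float, base_yield: float = 10.0,
                  sensitivity: float = 400.0) -> float:
    """Toy biophysical response: yield (bales/ha) rises with growth rate.

    `base_yield` and `sensitivity` are illustrative placeholders, not
    calibrated model parameters.
    """
    return base_yield + sensitivity * rate

# Usage: canopy cover measured in three UAV surveys, 10 days apart.
covers = [0.25, 0.40, 0.55]
days = [0.0, 10.0, 20.0]
print(predict_yield(growth_rate(covers, days)))
```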

    Quantifying the effect of aerial imagery resolution in automated hydromorphological river characterisation

    Existing regulatory frameworks aiming to improve the quality of rivers place hydromorphology as a key factor in the assessment of hydrology, morphology and river continuity. The majority of available methods for hydromorphological characterisation rely on the identification of homogeneous areas (i.e., features) of flow, vegetation and substrate. For that purpose, aerial imagery is used to identify existing features through either visual observation or automated classification techniques. There is evidence that success in feature identification depends on the resolution of the imagery used; however, little effort has yet been made to quantify the uncertainty in feature identification associated with that resolution. This paper addresses this gap by contrasting results in automated hydromorphological feature identification from unmanned aerial vehicle (UAV) imagery captured at three resolutions (2.5 cm, 5 cm and 10 cm) along a 1.4 km river reach. The results show that resolution plays a key role in the accuracy and variety of features identified, with larger identification errors observed for riffles and side bars. This in turn has an impact on the ecological characterisation of the river reach. The research shows that UAV technology could be essential for unbiased hydromorphological assessment.
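    One hedged way to quantify the resolution effect the paper describes is to coarsen a fine-resolution feature map and measure per-feature agreement against the original. The sketch below uses synthetic labels, with 2x and 4x coarsening factors standing in for the 5 cm and 10 cm resolutions relative to a 2.5 cm reference; it is not the paper's actual classification workflow:

```python
# Synthetic sketch: per-feature agreement after coarsening a label map.
import numpy as np

FEATURES = {0: "water", 1: "riffle", 2: "side bar", 3: "vegetation"}

def downsample(label_map, factor):
    """Nearest-neighbour coarsening, then upsampling back, so the two
    maps stay comparable pixel-for-pixel."""
    coarse = label_map[::factor, ::factor]
    return np.kron(coarse, np.ones((factor, factor), dtype=label_map.dtype))

def per_class_agreement(reference, degraded):
    """Fraction of each feature's reference pixels kept after coarsening."""
    out = {}
    for cls, name in FEATURES.items():
        mask = reference == cls
        if mask.any():
            out[name] = float((degraded[mask] == cls).mean())
    return out

rng = np.random.default_rng(0)
ref = rng.integers(0, 4, size=(400, 400))   # stand-in for a 2.5 cm feature map
for factor, res in [(2, "5 cm"), (4, "10 cm")]:
    print(res, per_class_agreement(ref, downsample(ref, factor)))
```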

    Bootstrapped CNNs for Building Segmentation on RGB-D Aerial Imagery

    Detection of buildings and other objects from aerial images has various applications in urban planning and map making. Automated building detection from aerial imagery is a challenging task, as it is prone to varying lighting conditions, shadows and occlusions. Convolutional Neural Networks (CNNs) are robust against some of these variations, although they fail to distinguish easy and difficult examples. We train a detection algorithm on RGB-D images to obtain a segmented mask using the CNN architecture DenseNet. First, we improve the performance of the model by applying a statistical re-sampling technique called bootstrapping and demonstrate that more informative examples are retained. Second, the proposed method outperforms the non-bootstrapped version while utilizing only one-sixth of the original training data, and it obtains a precision-recall break-even of 95.10% on our aerial imagery dataset. Published in the ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences.
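    A sketch of loss-weighted bootstrap re-sampling, one common way to realize the idea of retaining more informative (hard) examples between training rounds; the paper's exact re-sampling scheme may differ:

```python
# Generic sketch: draw training examples with replacement, weighted by loss,
# so hard (informative) examples appear more often in the next round.
import numpy as np

def bootstrap_resample(example_ids, losses, rng, sample_size=None):
    """Sample example indices with replacement, with probability
    proportional to each example's current training loss."""
    losses = np.asarray(losses, dtype=float)
    probs = losses / losses.sum()
    n = sample_size or len(example_ids)
    return rng.choice(example_ids, size=n, replace=True, p=probs)

rng = np.random.default_rng(42)
ids = np.arange(6)
losses = [0.05, 0.10, 0.08, 0.90, 0.75, 0.12]   # two hard examples dominate
print(bootstrap_resample(ids, losses, rng))
```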

    Guide to aerial imagery of Michigan

    There are no author-identified significant results in this report.

    Using Moored Arrays and Hyperspectral Aerial Imagery to Develop Nutrient Criteria for New Hampshire's Estuaries

    Increasing nitrogen concentrations and declining eelgrass beds in Great Bay, NH are clear indicators of impending problems for the state's estuaries. A workgroup established in 2005 by the NH Department of Environmental Services and the NH Estuaries Project (NHEP) adopted eelgrass survival as the water quality target for nutrient criteria development for NH's estuaries. In 2007, the NHEP received a grant from the U.S. Environmental Protection Agency to collect water quality information, including data from moored sensors and hyperspectral imagery of the Great Bay Estuary. Data from the Great Bay Coastal Buoy, part of the regional Integrated Ocean Observing System (IOOS), were used to derive a multivariate model of water clarity with phytoplankton, Colored Dissolved Organic Matter (CDOM), and non-algal particles; non-algal particles include both inorganic and organic matter. Most of the temporal variability in the diffuse attenuation coefficient of Photosynthetically Available Radiation (PAR) was associated with non-algal particles. However, on a mean daily basis, non-algal particles and CDOM contributed a similar fraction (~30%) to the attenuation of light, while the contribution of phytoplankton was about a third of the other two optically important constituents. CDOM concentrations varied with salinity and the magnitude of riverine inputs, demonstrating its terrestrial origin. Non-algal particle concentration also varied with river flow, as well as with wind-driven resuspension. Twelve of the NHEP estuarine assessment zones were observed with the hyperspectral aerial imagery on August 29 and October 17. A concurrent in situ effort included buoy measurements, continuous along-track sampling, discrete water grab samples, and vertical profiles of light attenuation. PAR effective attenuation coefficients retrieved from deep water regions in the imagery agreed well with in situ observations. Water clarity was lower, and optically important constituent concentrations were higher, in the tributaries. Eelgrass survival depth, estimated as the depth at which 22% of surface light was available, ranged from less than half a meter to over two meters. The best water clarity was found in the Great Bay (GB), Little Bay (LB), and Lower Piscataqua River (LPR) assessment zones; the absence of eelgrass from these zones would indicate controlling factors other than water clarity.
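    The two quantities at the core of this abstract can be made concrete with a short sketch: total PAR attenuation as the sum of constituent contributions, and eelgrass survival depth derived from the Beer-Lambert law with the 22% surface-light threshold. The coefficient values below are illustrative placeholders, not the study's fitted parameters:

```python
# Sketch: partitioned PAR attenuation and eelgrass survival depth.
# Survival depth follows from exp(-Kd * z) = 0.22, i.e. z = -ln(0.22) / Kd.
import math

def kd_par(kd_water, kd_phyto, kd_cdom, kd_nap):
    """Total diffuse attenuation coefficient of PAR (1/m) as the sum of
    water, phytoplankton, CDOM, and non-algal particle contributions."""
    return kd_water + kd_phyto + kd_cdom + kd_nap

def eelgrass_survival_depth(kd, light_fraction=0.22):
    """Depth (m) at which `light_fraction` of surface PAR remains."""
    return -math.log(light_fraction) / kd

# Illustrative values only (1/m), not fitted model parameters.
kd = kd_par(kd_water=0.1, kd_phyto=0.2, kd_cdom=0.3, kd_nap=0.3)
print(round(eelgrass_survival_depth(kd), 2))  # ~1.68 m for Kd = 0.9 1/m
```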

    Using high resolution optical imagery to detect earthquake-induced liquefaction: the 2011 Christchurch earthquake

    Using automated supervised methods with satellite and aerial imagery for liquefaction mapping is a promising step in providing detailed, region-scale maps of liquefaction extent immediately after an earthquake. The accuracy of these methods depends on the quantity and quality of training samples and the number of available spectral bands. Digitizing a large number of high-quality training samples from an event may not be feasible in the timeframe required for rapid response, as the training pixels for each class should be typical and accurately represent the spectral diversity of that class. To perform automated classification for liquefaction detection, we need to understand how to build an optimal and accurate training dataset. Using multispectral optical imagery from the 22 February 2011 Christchurch earthquake, we investigate the effects of the quantity of high-quality training pixel samples, as well as the number of spectral bands, on the performance of a pixel-based parametric supervised maximum likelihood classifier for liquefaction detection. We find that the liquefaction surface effects are bimodal in terms of spectral signature and should therefore be classified as either wet liquefaction or dry liquefaction; this is due to the difference in water content between the two modes. Using 5-fold cross-validation, we evaluate the performance of the classifier on training datasets of 50, 100, 500, 2000, and 4000 pixels. We also investigate the effect of adding spectral information, first by adding only the near-infrared (NIR) band to the visible red, green, and blue (RGB) bands, and then by using all eight available spectral bands of the WorldView-2 satellite imagery. We find that the classifier achieves high accuracies (75%–95%) with the 2000-pixel dataset that includes the RGB+NIR spectral bands; increasing to the 4000-pixel dataset and/or eight spectral bands may therefore not be worth the required time and cost. We also compare classifier accuracies when using aerial imagery with the same number of training pixels and either RGB or RGB+NIR bands, and find that accuracies are higher with satellite imagery given the same number of training pixels and the same spectral information. The classifier identifies dry liquefaction with higher user accuracy than wet liquefaction across all evaluated scenarios. To improve classification performance for wet liquefaction detection, we also investigate adding geospatial information in the form of building footprints. We find that using a building footprint mask to remove buildings from the classification process increases wet liquefaction user accuracy by roughly 10%.
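    A hedged sketch of the evaluation loop the abstract describes: a parametric Gaussian maximum likelihood classifier (scikit-learn's QuadraticDiscriminantAnalysis is the standard equivalent) scored with 5-fold cross-validation at the stated training-set sizes. The pixel data here are synthetic stand-ins for RGB+NIR samples, not the Christchurch imagery:

```python
# Synthetic sketch: Gaussian maximum likelihood classification with 5-fold
# cross-validation at growing training-set sizes.
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
classes = 3                 # wet liquefaction, dry liquefaction, other
bands = 4                   # R, G, B, NIR

for n_pixels in (50, 100, 500, 2000, 4000):
    # One Gaussian cluster per class in 4-band spectral space (synthetic).
    X = np.vstack([rng.normal(loc=3 * c, scale=1.5,
                              size=(n_pixels // classes, bands))
                   for c in range(classes)])
    y = np.repeat(np.arange(classes), n_pixels // classes)
    scores = cross_val_score(QuadraticDiscriminantAnalysis(), X, y, cv=5)
    print(f"{n_pixels:>5} pixels: mean accuracy {scores.mean():.3f}")
```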