1,630 research outputs found

    Vertical Artifacts in High-Resolution WorldView-2 and WorldView-3 Satellite Imagery of Aquatic Systems

    Satellite image artefacts are features that appear in an image but not in the original imaged object and can negatively impact the interpretation of satellite data. Vertical artefacts are linear features oriented in the along-track direction of the imaging system and can present as either banding or striping: banding consists of features with a consistent width, while striping consists of features with inconsistent widths. Because WorldView-3 was primarily designed as a land sensor, this study investigated the impact of vertical artefacts on both at-sensor radiance and a spectral index for an aquatic target, using high-resolution data from DigitalGlobe's (now Maxar) WorldView-3 satellite collected at Lake Okeechobee, Florida (FL), on 30 August 2017. At-sensor radiance measured by six of WorldView-3's eight spectral bands exhibited banding, more specifically referred to as non-uniformity, at a width corresponding to the multispectral detector sub-arrays that comprise the WorldView-3 focal plane. At-sensor radiance measured by the remaining two spectral bands, red and near-infrared (NIR) #1, exhibited striping. Striping in these spectral bands can be attributed to their time delay integration (TDI) settings at the time of image acquisition, which were optimized for land. The impact of vertical striping on a spectral index leveraging the red, red edge, and NIR spectral bands, referred to here as the NIR maximum chlorophyll index (MCINIR), was investigated. Temporally similar imagery from the European Space Agency's Sentinel-3 and Sentinel-2 satellites was used as a baseline reference of expected chlorophyll values across Lake Okeechobee, as neither Sentinel-3 nor Sentinel-2 imagery showed striping. Striping was highly prominent in the MCINIR product generated using WorldView-3 imagery, as noise in the at-sensor radiance exceeded any signal of chlorophyll in the image.
Adjusting the image acquisition parameters for future tasking of WorldView-3 or the functionally similar WorldView-2 satellite may alleviate these artefacts. To test this, an additional WorldView-3 image was acquired at Lake Okeechobee, FL, on 26 May 2021, with the TDI settings and scan line rate adjusted to improve the signal-to-noise ratio. While some evidence of non-uniformity remained, striping was no longer noticeable in the MCINIR product. Future image tasking over aquatic targets should employ these updated image acquisition parameters. Since the red and NIR #1 spectral bands are critical for inland and coastal water applications, archived images not collected using these updated settings may be limited in their potential for analysis of aquatic variables that require these two spectral bands to derive.
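Maximum chlorophyll index (MCI)-style products such as the MCINIR described above are typically band-math operations: the red-edge radiance is compared against a linear baseline interpolated between the red and NIR bands, so per-pixel noise in any one band propagates directly into the index. A minimal sketch of that baseline-subtraction form, with illustrative (not WorldView-3-exact) band-centre wavelengths:

```python
import numpy as np

# Sketch of an MCI-style index: radiance at the red-edge wavelength minus a
# linear baseline interpolated between the red and NIR radiances.
# Wavelengths below are illustrative assumptions, not the sensor's values.
def mci(l_red, l_rededge, l_nir, w_red=660.0, w_rededge=725.0, w_nir=865.0):
    """Radiance above the red-to-NIR baseline at the red-edge wavelength."""
    frac = (w_rededge - w_red) / (w_nir - w_red)
    baseline = l_red + frac * (l_nir - l_red)
    return l_rededge - baseline

# Synthetic per-pixel at-sensor radiances: a chlorophyll-like pixel (red-edge
# peak above the baseline) and a noise-dominated pixel below it.
red = np.array([10.0, 12.0])
red_edge = np.array([15.0, 11.0])
nir = np.array([11.0, 12.0])
print(mci(red, red_edge, nir))
```

Because the index is a small difference between large radiances, striping noise in the red or NIR band can easily exceed the chlorophyll signal, which is consistent with the behaviour the abstract reports.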

    Automatic quantitative morphological analysis of interacting galaxies

    The large number of galaxies imaged by digital sky surveys reinforces the need for computational methods for analyzing galaxy morphology. While the morphology of most galaxies can be associated with a stage on the Hubble sequence, the morphology of galaxy mergers is far more complex due to the combination of two or more galaxies with different morphologies and the interaction between them. Here we propose a computational method based on unsupervised machine learning that can quantitatively analyze the morphologies of galaxy mergers and associate galaxies by their morphology. The method works by first generating multiple synthetic galaxy models for each galaxy merger, and then extracting a large set of numerical image content descriptors for each galaxy model. These numbers are weighted using Fisher discriminant scores, and the similarities between the galaxy mergers are then deduced using a variation of Weighted Nearest Neighbor analysis in which the Fisher scores are used as weights. The similarities between the galaxy mergers are visualized using phylogenies to provide a graph that reflects the morphological similarities between the different galaxy mergers, thus quantitatively profiling the morphology of galaxy mergers. Comment: Astronomy & Computing, accepted.
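The core of the weighted nearest-neighbour step described above can be sketched as a distance in descriptor space where each image content descriptor is scaled by its Fisher discriminant score, so informative descriptors dominate the similarity. This is an illustration of the general technique, not the authors' implementation; the descriptor values and scores below are toy numbers:

```python
import numpy as np

# Fisher-weighted nearest-neighbour sketch: distances are computed in
# descriptor space with each dimension weighted by its Fisher score.
def weighted_distance(a, b, fisher_scores):
    a, b, w = np.asarray(a), np.asarray(b), np.asarray(fisher_scores)
    return float(np.sqrt(np.sum(w * (a - b) ** 2)))

def most_similar(query, gallery, fisher_scores):
    """Index of the gallery descriptor vector closest to the query."""
    dists = [weighted_distance(query, g, fisher_scores) for g in gallery]
    return int(np.argmin(dists))

# Toy descriptor vectors for three galaxy models; the second descriptor is
# assumed (hypothetically) to be far more discriminative than the first.
gallery = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.9]]
scores = [0.1, 2.0]
print(most_similar([0.1, 0.95], gallery, scores))
```

The resulting pairwise distance matrix is what a phylogeny-construction step would consume to build the similarity graph.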

    Compression of Infrared images


    Using Drones to Evaluate Revegetation Success on Natural Gas Pipelines

    The Appalachian region of the United States has seen significant growth in the production of natural gas. Developing the infrastructure required for this resource creates significant disturbances across the landscape, as both well pads and transportation pipelines must be created in this mountainous terrain. Midstream infrastructure, which includes pipeline rights-of-way and associated infrastructure, can cause significant environmental degradation, especially in the form of sedimentation. The introduction of this non-point source pollutant can be detrimental to the freshwater ecosystems found throughout this region. This ecological risk has necessitated the enactment of regulations related to midstream infrastructure development. Each week, inspectors travel on foot along pipeline rights-of-way, monitoring the reestablishment of surface vegetation and identifying failing areas for future management. The topographically challenging terrain of West Virginia makes these inspections difficult and dangerous for the hiking inspectors. We evaluated the accuracy with which unmanned aerial vehicles replicated inspector classifications to assess their use as a complementary tool in the pipeline inspection process. Both RGB and multispectral sensor collections were performed, and a support vector machine classification model predicting vegetation cover was built for each dataset. Using inspector-defined validation plots, our research found comparably high accuracy between the two collection sensors. This technique appears capable of augmenting the current inspection process, though the model can likely be improved to help lower overall costs. The high accuracy obtained suggests that this widely available technology could valuably aid these challenging inspections.
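The classification step described above can be sketched with a standard support vector machine: per-pixel band values serve as features and the model predicts a vegetated/bare label. This is a simplified stand-in for the study's workflow; the feature layout and training data below are synthetic, not the study's:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
# Synthetic training pixels as (red, NIR) reflectance pairs:
# "vegetated" pixels have low red and high NIR; "bare" pixels the reverse.
veg = np.column_stack([rng.normal(0.1, 0.02, n), rng.normal(0.5, 0.05, n)])
bare = np.column_stack([rng.normal(0.3, 0.02, n), rng.normal(0.2, 0.05, n)])
X = np.vstack([veg, bare])
y = np.array([1] * n + [0] * n)  # 1 = vegetated, 0 = bare

# Train an RBF-kernel SVM and classify a new low-red, high-NIR pixel.
model = SVC(kernel="rbf").fit(X, y)
print(model.predict([[0.1, 0.5]]))
```

In practice the predicted per-pixel labels would be aggregated within inspector-defined plots and compared against the inspector's cover classes to compute accuracy.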

    A Compact Blimp Remote Sensing System for Analysis of Environmental Change

    The goal of this research is to demonstrate that an inexpensive, compact blimp remote sensing system allows for analysis and change detection of environmental characteristics over a small area. This system has scale and temporal advantages over existing aircraft and space platforms. The blimp remote sensing platform is capable of temporal resolutions of hours or days while generating low-cost, high-spatial-resolution images. This contrasts with satellite and fixed-wing aircraft systems, which are costly to purchase and operate, have low temporal resolutions, and may not provide adequate spatial resolution to investigate small-scale phenomena. As designed, the system consists of a 15 ft long tethered blimp with an analog camera package attached to the keel of the blimp. The blimp holds 300 cu ft of helium and has a lift capacity of 9 lbs at sea level. A blimp design was chosen over a round balloon design because of the blimp's increased stability in windy conditions. Because of the constraints of infrared film, the spectral response is limited to the green, red, and near-infrared parts of the electromagnetic spectrum. As a proof of concept, images were gathered during three flights over a restored prairie site. The blimp system was able to gather low-cost, high-resolution images while loitering over the site. Operational costs were much lower than for fixed-wing aircraft or satellite systems, and mission tasking times were on the order of hours rather than days, weeks, or months. The most significant feature of the blimp system was its ability to gather imaging data in heavy overcast conditions that would have precluded fixed-wing aircraft missions and obscured satellite imaging.

    Multispectral Method for Apple Defect Detection using Hyperspectral Imaging System

    Hyperspectral imaging is a non-destructive detection technology and a powerful analytical tool that integrates conventional imaging and spectroscopy to obtain both spatial and spectral information from objects for food safety and quality analysis. A recently developed hyperspectral imaging system was used to investigate wavelengths between 530 nm and 835 nm to detect defects on Red Delicious apples. A combination of the band ratio method and the relative intensity method was developed in this paper, using multispectral wavebands selected from the hyperspectral images. The results showed that the hyperspectral imaging system with the properly developed multispectral method could accurately identify 95% of the defects on the apple surface. The developed algorithms could help enhance food safety and protect public health while reducing human error and labor costs for the food industry.
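The band ratio idea the abstract names can be sketched in a few lines: dividing one waveband by another largely normalizes out illumination and curvature effects, so defect pixels stand out and a simple threshold flags them. The wavebands and threshold below are illustrative assumptions, not the paper's calibrated values:

```python
import numpy as np

# Band-ratio defect sketch: pixels whose ratio of two selected wavebands
# falls below a threshold are flagged as defects.
def band_ratio_mask(band_a, band_b, threshold=0.8):
    """Boolean defect mask where band_a / band_b < threshold."""
    ratio = band_a / np.maximum(band_b, 1e-6)  # guard against divide-by-zero
    return ratio < threshold

band_a = np.array([[0.9, 0.5], [0.85, 0.3]])  # e.g. reflectance near 700 nm
band_b = np.array([[1.0, 1.0], [1.0, 1.0]])   # e.g. reflectance near 830 nm
print(band_ratio_mask(band_a, band_b))
```

A relative intensity step would complement this by comparing each pixel against a reference intensity for sound tissue, catching defects the ratio alone misses.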

    Self-supervised learning methods and applications in medical imaging analysis: A survey

    The scarcity of high-quality annotated medical imaging datasets is a major problem that confronts machine learning applications in the field of medical imaging analysis and impedes their advancement. Self-supervised learning is a recent training paradigm that enables learning robust representations without the need for human annotation, which makes it an effective solution to the scarcity of annotated medical data. This article reviews the state-of-the-art research directions in self-supervised learning approaches for image data, with a concentration on their applications in the field of medical imaging analysis. The article covers a set of the most recent self-supervised learning methods from the computer vision field as they are applicable to medical imaging analysis, and categorizes them as predictive, generative, and contrastive approaches. Moreover, the article covers 40 of the most recent research papers in the field of self-supervised learning in medical imaging analysis, aiming to shed light on recent innovations in the field. Finally, the article concludes with possible future research directions in the field.
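The contrastive category mentioned above rests on one idea: embeddings of two augmented views of the same image should be pulled together while views of different images are pushed apart. A toy NT-Xent-style loss for a single positive pair against in-batch negatives, illustrative only and not any specific surveyed method:

```python
import numpy as np

# Toy contrastive (NT-Xent-style) loss for one anchor: the positive is a
# second view of the same image; negatives are embeddings of other images.
def nt_xent_pair(z_i, z_j, negatives, temperature=0.5):
    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    pos = np.exp(cos(z_i, z_j) / temperature)
    neg = sum(np.exp(cos(z_i, n) / temperature) for n in negatives)
    return -np.log(pos / (pos + neg))

anchor = np.array([1.0, 0.0])
positive = np.array([0.9, 0.1])       # another view of the same image
negatives = [np.array([0.0, 1.0])]    # embedding of a different image
loss_close = nt_xent_pair(anchor, positive, negatives)
loss_far = nt_xent_pair(anchor, negatives[0], [positive])
print(loss_close, loss_far)  # aligned pair yields the smaller loss
```

Minimizing this loss over many images trains an encoder whose representations can then be fine-tuned on the small annotated medical dataset.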

    Just-in-time Pastureland Trait Estimation for Silage Optimization, under Limited Data Constraints

    To ensure that pasture-based farming meets production and environmental targets for a growing population under increasing resource constraints, producers need to know pastureland traits. Current proximal pastureland trait prediction methods largely rely on vegetation indices to determine biomass and moisture content. The development of new techniques relies on the challenging task of collecting labelled pastureland data, leading to small datasets. Classical computer vision has already been applied to weed identification and recognition of fruit blemishes using morphological features, but machine learning algorithms can parameterise models without the provision of explicit features, and deep learning can extract even more abstract knowledge, although this is typically assumed to require very large datasets. This work hypothesises that, through the advantages of state-of-the-art deep learning systems, pastureland crop traits can be accurately assessed in a just-in-time fashion, based on data retrieved from an inexpensive sensor platform, under the constraint of limited amounts of labelled data. However, the challenges to achieving this overall goal are great, and for applications such as just-in-time yield and moisture estimation for farm machinery, this work must bring together systems development, knowledge of good pastureland practice, and techniques for handling low-volume datasets in a machine learning context. Given these challenges, this thesis makes a number of contributions. The first of these is a comprehensive literature review, relating pastureland traits to ruminant nutrient requirements and exploring trait estimation methods, from contact to remote sensing methods, including details of vegetation indices and the sensors and techniques required to use them. The second major contribution is a high-level specification of a platform for collecting and labelling pastureland data.
This includes the collection of four-channel Blue, Green, Red, and NIR (VISNIR) images, narrowband data, height, and temperature differential, using inexpensive proximal sensors, and provides a basis for holistic data analysis. Physical data platforms built around this specification were created to collect and label pastureland data, involving computer scientists; agricultural, mechanical, and electronic engineers; and biologists from academia and industry, working with farmers. Using the developed platform and a set of protocols for data collection, a further contribution of this work was the collection of a multi-sensor multimodal dataset of pastureland properties. This was made up of four-channel image data, height data, thermal data, Global Positioning System (GPS) data, and hyperspectral data, and is available labelled with biomass (kg/ha) and percentage dry matter, ready for use in deep learning. However, the most notable contribution of this work was a systematic investigation of various machine learning methods applied to the collected data in order to maximise model performance under the constraints indicated above. The initial set of models focused on the collected hyperspectral datasets; however, due to their relative complexity in real-time deployment, the focus shifted to models that could best leverage image data. The main body of these models centred on image processing methods and, in particular, the use of the so-called Inception ResNet and MobileNet models to predict fresh biomass and percentage dry matter, enhancing performance using data fusion, transfer learning, and multi-task learning. Images were subdivided to augment the dataset, using two different patch sizes, resulting in around 10,000 small patches of 156 x 156 pixels and around 5,000 large patches of 240 x 240 pixels. Five-fold cross-validation was used in all analyses.
Prediction accuracy was compared to older mechanisms, albeit using the collected hyperspectral data, with no provision made for lighting, humidity, or temperature. Hyperspectral labelled data did not produce accurate results when used to calculate the Normalized Difference Vegetation Index (NDVI), or to train a neural network (NN), a 1D Convolutional Neural Network (CNN), or Long Short-Term Memory (LSTM) models. Potential reasons for this are discussed, including issues around the use of highly sensitive devices in uncontrolled environments. The most accurate prediction came from a multi-modal hybrid model that concatenated output from an Inception ResNet based model run on RGB data with ImageNet pre-trained RGB weights, output from a residual network trained on NIR data, and LiDAR height data, before fully connected layers, using the small-patch dataset with a minimum validation MAPE of 28.23% for fresh biomass and 11.43% for dryness. However, a very similar prediction accuracy resulted from a model that omitted NIR data, thus requiring fewer sensors and training resources and making it more sustainable. Although NIR and temperature differential data were collected and used for analysis, neither improved prediction accuracy, with the Inception ResNet model's minimum validation MAPE rising to 39.42% when NIR data was added. When both NIR data and temperature differential were added to a multi-task learning Inception ResNet model, it yielded a minimum validation MAPE of 33.32%. As more labelled data are collected, the models can be further trained, enabling sensors on mowers to collect data and give timely trait information to farmers. This technology is also transferable to other crops. Overall, this work should provide a valuable contribution to the smart agriculture research space.
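The evaluation metric quoted throughout the abstract, mean absolute percentage error (MAPE), has a simple definition worth making explicit: the mean of per-sample absolute errors relative to the measured value, expressed as a percentage. The values below are synthetic, not the thesis's data:

```python
import numpy as np

# Mean absolute percentage error between measured and predicted values.
def mape(y_true, y_pred):
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

measured = [2000.0, 2500.0, 1800.0]   # e.g. fresh biomass in kg/ha
predicted = [2200.0, 2400.0, 1500.0]
print(round(mape(measured, predicted), 2))
```

Because the error is relative, a validation MAPE of 28.23% for fresh biomass means predictions were, on average, within roughly a quarter to a third of the measured value, which is why the lower 11.43% figure for dryness represents markedly better performance.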