
    Remote sensing educational opportunities available through WyomingView

    WyomingView is one of twenty-seven members of the AmericaView program (www.americaview.org), funded by the USGS (americaview.usgs.gov). One of the goals of WyomingView is to promote remote sensing educational opportunities in Wyoming for its current and future workforce. Since its inception in 2002, WyomingView has worked with federal, state, and local government agencies to identify remote sensing training needs. These activities have resulted in several short courses designed to benefit managers and remote sensing specialists in the Bureau of Land Management, Wyoming Game & Fish, and the Wyoming State Engineer's Office. These courses are offered in Laramie and, through our mobile teaching lab, in other cities in Wyoming. We are currently developing internet-based courses to reduce travel time and cost, while the mobile teaching laboratory continues to offer flexibility to agency personnel. Through seminars and other mechanisms, WyomingView has increased the breadth of remote sensing courses offered at the University of Wyoming, and it has provided data, teaching materials, and software to faculty members in various academic departments to promote remote sensing research and education.

    THE EFFECT OF CONTRAST ENHANCEMENT ON EPIPHYTE SEGMENTATION USING GENERATIVE NETWORK

    The performance of deep learning-based image segmentation depends on two major factors: 1) the organization and structure of the architecture used to train the model, and 2) the quality of the input data used to train it. The input image quality and the variety of training samples strongly influence the features that the deep learning filters derive for segmentation. This study focuses on the effect of image quality in a natural dataset of epiphytes captured with Unmanned Aerial Vehicles (UAVs), with the goal of segmenting the epiphytes from background vegetation. The dataset is highly challenging in terms of pixel overlap between the target and the background, the occupancy of the target within the image, and shadows cast by nearby vegetation. The study applied four different contrast enhancement techniques to improve the quality of low-contrast images in the epiphyte dataset, and the enhanced datasets were used to train five segmentation models. Segmentation performance is reported using the structural similarity index (SSIM) and the intersection over union (IoU) score. The results show that epiphyte segmentation performance is strongly influenced by input image quality, and recommendations based on the four techniques are given for experts working on segmentation of natural datasets such as epiphytes. The study also found that the occupancy of the target epiphyte and of surrounding vegetation strongly influences segmentation performance.
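    The abstract does not name the four enhancement techniques, so the sketch below uses CLAHE (contrast-limited adaptive histogram equalization), one widely used remedy for low-contrast imagery, purely as a stand-in; the input file name is hypothetical.

```python
# Illustrative sketch only: CLAHE applied to the luminance channel, a
# common choice for enhancing low-contrast UAV frames. The study's
# actual four techniques are not named in the abstract.
import cv2

def enhance_contrast_clahe(bgr_image):
    """Equalize local contrast on the L channel of a BGR image."""
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

frame = cv2.imread("epiphyte_uav_frame.png")  # hypothetical input file
enhanced = enhance_contrast_clahe(frame)
```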

    MULTIPLE OIL PAD DETECTION USING DEEP LEARNING

    Deep learning (DL) algorithms are widely used to detect objects such as roads, vehicles, and buildings in aerial images. Object detection remains challenging for complex structures, however, and oil pads are one such example because of their shape, orientation, and background reflection. A recent study used a Faster Region-based Convolutional Neural Network (FR-CNN) to detect a single oil pad centered in a 256 × 256 image. For real-time applications, however, it is necessary to detect multiple oil pads in aerial images irrespective of their orientation. In this study, FR-CNN was trained to detect multiple oil pads. We cropped images containing multiple oil pads from high spatial resolution imagery to train the model. The network was trained for 100 epochs on 164 training images and tested on 50 images spanning three categories: images containing a single oil pad, multiple oil pads, or no oil pad. Model performance was evaluated using standard metrics: precision, recall, and F1-score. The final model achieved, as a weighted average over the 50 test images, a precision of 0.67, a recall of 0.80, and an F1-score of 0.73. The recall of 0.80 indicates that 80% of the oil pads in the test set were identified. Instances in the test images such as cleared areas, rock structures, and sand patterns with high visual similarity to the target resulted in a low precision score.
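    As a minimal sketch of the evaluation step, the function below computes the three reported metrics from per-image counts of true positives, false positives, and false negatives; the IoU-based matching of predicted and ground-truth boxes that produces those counts is assumed and not shown.

```python
# Minimal sketch: precision, recall, and F1 from detection counts.
# tp/fp/fn are assumed to come from IoU matching of predicted and
# ground-truth oil pad boxes (matching code not shown).
def detection_metrics(tp: int, fp: int, fn: int):
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

print(detection_metrics(tp=40, fp=20, fn=10))  # illustrative counts only
```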

    EFFECT OF DATA QUALITY ON WATER BODY SEGMENTATION WITH DEEPLABV3+ ALGORITHM

    Training deep learning (DL) algorithms to segment features requires hundreds to thousands of input images and corresponding labels. Generating thousands of input images and labels takes considerable resources and time, so it is common practice to use open-source imagery and labels available online. Most of these open-source datasets have little or no metadata describing their quality or suitability, which makes them problematic for training or evaluating DL models. This study evaluated the effect of data quality on training DeepLabV3+, using Sentinel-2 A/B RGB images and labels obtained from Kaggle. We generated subsets of 256 × 256 pixels and set aside 10% of these images (802) for testing. First, we trained and validated a DeepLabV3+ model with the remaining images. Second, we removed images with incorrect labels and trained another DeepLabV3+ network. Finally, we trained a third DeepLabV3+ network after additionally removing images with turbid water or floating vegetation. All three trained models were evaluated on the test images, and accuracy metrics were calculated. As the quality of the input images improved, the accuracy of the predicted masks increased from 92.8% for the first model to 94.3% for the second. The third model's accuracy was 96.4%, demonstrating the network's ability to learn and predict water bodies better when the input data contain fewer class variations. Based on these results, we recommend assessing open-source data for incorrect labels and for variation in the target class before training DeepLabV3+ or any other DL network.
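    A minimal sketch of the three-round cleaning workflow is given below; the directory layout, file names, and lists of flagged tiles are all hypothetical, since the abstract does not describe how the flagged images were recorded.

```python
# Sketch of the dataset-cleaning rounds described above. All paths and
# flag files are hypothetical; flagged tile IDs are assumed to come
# from manual review of the Kaggle labels.
import random
from pathlib import Path

tiles = sorted(Path("kaggle_water/tiles").glob("*.png"))  # hypothetical path
random.seed(0)
random.shuffle(tiles)
n_test = int(0.10 * len(tiles))                 # 10% held out for testing
test_tiles, train_tiles = tiles[:n_test], tiles[n_test:]

bad_labels = set(Path("bad_labels.txt").read_text().split())     # round 2
turbid_or_veg = set(Path("turbid_veg.txt").read_text().split())  # round 3

round2 = [t for t in train_tiles if t.stem not in bad_labels]
round3 = [t for t in round2 if t.stem not in turbid_or_veg]
```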

    MAPPING NEWLY INUNDATED AREAS IN POST-FLOOD LANDSAT IMAGES USING THRESHOLDING TECHNIQUES

    Identifying newly inundated areas following flood events is essential for planning rescue missions. These maps must be generated quickly, as the spatial extent of the inundated areas can change during a single flood event. Several methods exist for generating such maps, and several rely on one or more geospatial datasets to exclude existing waterbodies in an affected area. In this study, we tested a rapid flood mapping method that uses a pair of pre- and post-flood satellite images at seven sites across the US. We derived Normalized Difference Water Index (NDWI) and Modified NDWI (MNDWI) images from pre- and post-flood Landsat images and identified the optimal threshold values that highlighted newly inundated areas at these sites. The accuracy of the inundation maps was determined using manually interpreted verification data from the image pairs. Image analysts identified the optimal threshold values within 25 to 40 minutes. Maps of newly inundated areas derived by differencing MNDWI and NDWI images achieved overall accuracies above 93%. These results confirm the utility of this rapid flood mapping technique for identifying inundated areas from pre- and post-flood satellite images.
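    Both indices named above have standard definitions (NDWI from the green and NIR bands, MNDWI from the green and SWIR bands), so a minimal sketch of the differencing-and-thresholding idea follows; the band arrays are assumed to be pre-loaded as floats, and the 0.3 threshold is only a placeholder for the site-specific values the analysts chose.

```python
# Sketch of change detection by index differencing. Band arrays are
# assumed pre-loaded NumPy floats; 0.3 is a placeholder threshold,
# not a value from the study.
import numpy as np

def ndwi(green, nir):
    """McFeeters NDWI: (Green - NIR) / (Green + NIR)."""
    return (green - nir) / (green + nir + 1e-9)

def mndwi(green, swir1):
    """Xu MNDWI: (Green - SWIR1) / (Green + SWIR1)."""
    return (green - swir1) / (green + swir1 + 1e-9)

def newly_inundated(pre_green, pre_nir, post_green, post_nir, threshold=0.3):
    """Flag pixels whose NDWI rose past the threshold between dates."""
    delta = ndwi(post_green, post_nir) - ndwi(pre_green, pre_nir)
    return delta > threshold
```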

    EFFECT OF HYPERPARAMETERS ON DEEPLABV3+ PERFORMANCE TO SEGMENT WATER BODIES IN RGB IMAGES

    Deep learning (DL) networks used for image segmentation must be trained with input images and corresponding masks that identify the target features in them. DL networks learn by iteratively adjusting the weights of interconnected layers using backpropagation, a process that involves calculating gradients and minimizing a loss function. This allows the network to learn patterns and relationships in the data, enabling it to make predictions or classifications on new, unseen data. Training any DL network requires specifying hyperparameter values such as input image size, batch size, and number of epochs, among others. Failure to specify optimal values will increase training time or result in incomplete learning. The rationale of this study was to evaluate the effect of input image size and batch size on the performance of DeepLabV3+, using Sentinel-2 A/B RGB images and labels obtained from Kaggle. We trained the DeepLabV3+ network six times, with input images of 128 × 128 and 256 × 256 pixels and batch sizes of 4, 8, and 16. Each model was trained for 100 epochs to ensure that the loss plot reached saturation and the model converged to a stable solution. The masks predicted by each model were compared with the corresponding test masks using accuracy, precision, recall, and F1-scores. The results demonstrate that an image size of 256 × 256 with a batch size of 4 achieved the highest performance, and suggest that a larger input image size improves DeepLabV3+ performance.
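    A sketch of the six-run grid is shown below; torchvision's DeepLabV3 (ResNet-50 backbone) stands in for the study's DeepLabV3+ variant, which torchvision does not ship, and the data loaders and training loop are stubbed out.

```python
# Sketch of the hyperparameter grid: 2 image sizes x 3 batch sizes.
# torchvision's DeepLabV3 stands in for DeepLabV3+; loaders and the
# per-epoch training loop are assumed and not shown.
import itertools
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

IMAGE_SIZES = (128, 256)
BATCH_SIZES = (4, 8, 16)
EPOCHS = 100  # long enough for the loss to saturate, per the abstract

for size, batch in itertools.product(IMAGE_SIZES, BATCH_SIZES):
    model = deeplabv3_resnet50(weights=None, num_classes=2)  # water / not water
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    # A loader yielding (batch, 3, size, size) tensors and binary masks
    # would drive train-one-epoch calls here, EPOCHS times per run.
    print(f"run: image={size}x{size}, batch={batch}, epochs={EPOCHS}")
```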

    INFLUENCE OF ADDITIONAL SPECTRAL BANDS FOR EPIPHYTE SEGMENTATION ON DRU-NET

    Dense Residual U-Net (DRU-Net) is a neural network used for image segmentation. It is based on the U-Net architecture and combines a modified ResNet as the encoder with modified DenseNet blocks as the decoder, capturing both local and contextual information. Previous studies on DRU-Net have not tested the influence of the spectral resolution of the input images. In an earlier study, DRU-Net was trained with grayscale images for epiphyte segmentation; that network underperformed when the illumination and the occupancy of the target in the frame varied. In this study, the same network was trained and tested with RGB images to assess the increase in overall learning. The performance of the network in segmenting epiphytes under conditions such as good/poor illumination and high/low target occupancy was analyzed, using Dice and Jaccard scores as evaluation metrics. The DRU-Net model trained with RGB images improved on the grayscale model by 20% in both the average Dice and average Jaccard scores of the target class. Based on these higher scores, adding spectral information improves DRU-Net learning, and the increased computation time required to train DRU-Net with RGB images is repaid by better output. This model could be further used to identify multiple epiphytes in images with poor illumination and different occupancy conditions.
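    Both evaluation metrics have standard definitions on binary masks, so a minimal sketch is given below; it assumes the predicted and ground-truth masks are boolean NumPy arrays of the same shape.

```python
# Minimal sketch of the two reported metrics for boolean masks.
import numpy as np

def dice(pred, truth):
    """Dice score: 2|P ∩ T| / (|P| + |T|)."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + 1e-9)

def jaccard(pred, truth):
    """Jaccard (IoU) score: |P ∩ T| / |P ∪ T|."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / (union + 1e-9)
```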