
    The Detection of Forest Structures in the Monongahela National Forest Using LiDAR

    The mapping of structural elements of a forest is important for forestry management, providing a baseline for old- and new-growth trees as well as height strata for a stand. These activities are important for the overall monitoring process, which aids in the understanding of anthropogenic and natural disturbances. Height information recorded for each discrete point is key to the creation of canopy height, canopy surface, and canopy cover models. The aim of this study is to assess whether LiDAR can be used to determine forest structures. Small-footprint, leaf-off LiDAR data were obtained for the Monongahela National Forest, West Virginia. This dataset was compared to Landsat imagery acquired for the same area. Each dataset underwent supervised classification and object-oriented segmentation with random forest classification. These approaches took into account derived variables such as percentages of canopy height, canopy cover, stem density, and the normalized difference vegetation index, which were computed from the original datasets. Evaluation of the study showed that the classification of the Landsat data produced accuracies ranging between 31.3 and 50.2%, whilst the LiDAR dataset produced accuracies ranging from 54.7 to 80.1%. The results of this study increase the potential of LiDAR to be used regularly as a forestry management technique and warrant future research.
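    The canopy models mentioned above follow a standard construction: a canopy height model is the difference between a surface model (first returns) and a terrain model (ground returns), and canopy cover is the fraction of cells exceeding a height threshold. A minimal sketch of that idea, with invented elevation values and a hypothetical 2 m cover threshold (the study's actual parameters are not stated):

```python
import numpy as np

def canopy_height_model(dsm, dtm):
    """Per-cell canopy height = surface elevation - ground elevation, clamped at zero."""
    chm = np.asarray(dsm, dtype=float) - np.asarray(dtm, dtype=float)
    return np.clip(chm, 0.0, None)

def canopy_cover(chm, height_threshold=2.0):
    """Fraction of cells whose canopy height exceeds the threshold."""
    return float((np.asarray(chm) > height_threshold).mean())

# Toy 2x2 rasters: first-return and ground elevations in meters.
dsm = np.array([[310.0, 295.5], [288.0, 301.2]])
dtm = np.array([[280.0, 279.5], [287.5, 280.0]])
chm = canopy_height_model(dsm, dtm)
cover = canopy_cover(chm)  # one of four cells falls below the 2 m threshold
```

Derived rasters like `chm` and per-stand summaries like `cover` are the kinds of variables that can then feed a random forest classifier.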

    Toward Global Localization of Unmanned Aircraft Systems using Overhead Image Registration with Deep Learning Convolutional Neural Networks

    Global localization, in which an unmanned aircraft system (UAS) estimates its unknown current location without access to its take-off location or other locational data from its flight path, is a challenging problem. This research brings together aspects from the remote sensing, geoinformatics, and machine learning disciplines by framing the global localization problem as a geospatial image registration problem in which overhead aerial and satellite imagery serve as a proxy for UAS imagery. A literature review is conducted covering the use of deep learning convolutional neural networks (DLCNN) with global localization and other related geospatial imagery applications. Differences between geospatial imagery taken from the overhead perspective and terrestrial imagery are discussed, as well as difficulties in using geospatial overhead imagery for image registration due to a lack of suitable machine learning datasets. Geospatial analysis is conducted to identify suitable areas for future UAS imagery collection. One of these areas, Jerusalem northeast (JNE), is selected as the area of interest (AOI) for this research. Multi-modal, multi-temporal, and multi-resolution geospatial overhead imagery is aggregated from a variety of publicly available sources and processed to create a controlled image dataset called Jerusalem northeast rural controlled imagery (JNE RCI). JNE RCI is tested with the handcrafted feature-based methods SURF and SIFT and a non-handcrafted feature-based pre-trained fine-tuned VGG-16 DLCNN on coarse-grained image registration. Both handcrafted and non-handcrafted feature-based methods had difficulty with the coarse-grained registration process. The format of JNE RCI is determined to be unsuitable for the coarse-grained registration process with DLCNNs, and the process to create a new supervised machine learning dataset, Jerusalem northeast machine learning (JNE ML), is covered in detail.
A multi-resolution grid-based approach is used, where each grid cell ID is treated as the supervised training label for that respective resolution. Pre-trained fine-tuned VGG-16 DLCNNs, two custom architecture two-channel DLCNNs, and a custom chain DLCNN are trained on JNE ML for each spatial resolution of subimages in the dataset. All DLCNNs used could more accurately coarsely register the JNE ML subimages compared to the pre-trained fine-tuned VGG-16 DLCNN on JNE RCI. This shows that the process for creating JNE ML is valid and that the dataset is suitable for applying machine learning to the coarse-grained registration problem. All custom architecture two-channel DLCNNs and the custom chain DLCNN were able to more accurately coarsely register the JNE ML subimages compared to the pre-trained fine-tuned VGG-16 approach. Both the two-channel custom DLCNNs and the chain DLCNN were able to generalize well to new imagery that these networks had not previously trained on. Through the contributions of this research, a foundation is laid for future work to be conducted on the UAS global localization problem within the rural forested JNE AOI.
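    The grid-labeling step described above turns registration into classification: a subimage is labeled with the ID of the grid cell its center falls in, and a coarser resolution simply uses a larger cell over the same AOI. A hedged sketch with invented AOI bounds and cell sizes (the dataset's actual grid parameters are not given here):

```python
def grid_cell_id(x, y, x_min, y_min, cell_size, n_cols):
    """Map a coordinate inside the AOI to a row-major grid cell ID,
    which then serves as the supervised training label."""
    col = int((x - x_min) // cell_size)
    row = int((y - y_min) // cell_size)
    return row * n_cols + col

# The same subimage center gets one label per resolution level.
label_fine = grid_cell_id(350.0, 520.0, 0.0, 0.0, cell_size=100.0, n_cols=10)
label_coarse = grid_cell_id(350.0, 520.0, 0.0, 0.0, cell_size=500.0, n_cols=2)
```

A DLCNN trained on these labels predicts which cell a query subimage belongs to, giving a coarse position estimate whose precision is set by the cell size.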

    Camera-independent learning and image quality assessment for super-resolution

    An increasing number of applications require high-resolution images in situations where the access to the sensor and the knowledge of its specifications are limited. In this thesis, the problem of blind super-resolution is addressed, here defined as the estimation of a high-resolution image from one or more low-resolution inputs, under the condition that the degradation model parameters are unknown. The assessment of super-resolved results, using objective measures of image quality, is also addressed. Learning-based methods have been successfully applied to the single frame super-resolution problem in the past. However, sensor characteristics such as the Point Spread Function (PSF) must often be known. In this thesis, a learning-based approach is adapted to work without knowledge of the PSF, thus making the framework camera-independent. However, the goal is not only to super-resolve an image under this limitation, but also to provide an estimation of the best PSF, consisting of a theoretical model with one unknown parameter. In particular, two extensions of a method performing belief propagation on a Markov Random Field are presented. The first method finds the best PSF parameter by performing a search for the minimum mean distance between training examples and patches from the input image. In the second method, the best PSF parameter and the super-resolution result are found simultaneously by providing a range of possible PSF parameters from which the super-resolution algorithm will choose. For both methods, a first estimate is obtained through blind deconvolution and an uncertainty is calculated in order to restrict the search. Both camera-independent adaptations are compared and analyzed in various experiments, and a set of key parameters are varied to determine their effect on both the super-resolution and the PSF parameter recovery results. The use of quality measures is thus essential to quantify the improvements obtained from the algorithms.
A set of measures is chosen that represents different aspects of image quality: the signal fidelity, the perceptual quality, and the localization and scale of the edges. Results indicate that both methods improve similarity to the ground truth and can in general refine the initial PSF parameter estimate towards the true value. Furthermore, the similarity measure results show that the chosen learning-based framework consistently improves a measure designed for perceptual quality.
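    The first method's search can be illustrated in one dimension: for each candidate PSF width, degrade the training examples with that Gaussian PSF and keep the width that minimizes the mean distance to the observed patches. This is a toy sketch, not the thesis's actual algorithm: the signals, the candidate grid, and the pairing of examples to patches are all invented, and the real method operates on image patches within a belief-propagation framework.

```python
import numpy as np

def gaussian_kernel(sigma, radius=4):
    """Truncated, normalized 1-D Gaussian PSF with one unknown parameter."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def degrade(signal, sigma):
    """Blur a 1-D training example with a Gaussian PSF of width sigma."""
    return np.convolve(signal, gaussian_kernel(sigma), mode="same")

def best_psf_sigma(training_examples, observed_patches, candidate_sigmas):
    """Return the candidate sigma with the smallest mean distance between
    degraded training examples and the corresponding observed patches."""
    scores = []
    for sigma in candidate_sigmas:
        dists = [np.linalg.norm(degrade(t, sigma) - p)
                 for t, p in zip(training_examples, observed_patches)]
        scores.append(np.mean(dists))
    return candidate_sigmas[int(np.argmin(scores))]

rng = np.random.default_rng(0)
true_sigma = 1.5
examples = [rng.standard_normal(64) for _ in range(3)]
patches = [degrade(t, true_sigma) for t in examples]  # simulated observations
estimate = best_psf_sigma(examples, patches, [0.5, 1.0, 1.5, 2.0])
```

In the thesis, the candidate range fed to such a search is further restricted by a blind-deconvolution estimate and its associated uncertainty.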

    Gaze-Based Human-Robot Interaction by the Brunswick Model

    We present a new paradigm for human-robot interaction based on social signal processing, and in particular on the Brunswick model. Originally, the Brunswick model copes with face-to-face dyadic interaction, assuming that the interactants communicate through a continuous exchange of non-verbal social signals in addition to the spoken messages. Social signals have to be interpreted through a proper recognition phase that considers visual and audio information. The Brunswick model makes it possible to quantitatively evaluate the quality of the interaction using statistical tools that measure how effective the recognition phase is. In this paper we cast this theory in a setting where one of the interactants is a robot; in this case, the recognition phases performed by the robot and by the human have to be revised with respect to the original model. The model is applied to Berrick, a recent open-source, low-cost robotic head platform, where gazing is the social signal to be considered.
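    The statistical flavor of such an evaluation can be hinted at with a small sketch: one simple way to quantify how effective a recognition phase is, is to correlate the signal a sender actually expresses with the signal the receiver recognizes. The function and data below are purely illustrative assumptions, not the paper's actual measures:

```python
import numpy as np

def recognition_effectiveness(expressed, recognized):
    """Pearson correlation between the expressed and the recognized signal:
    1.0 means the receiver tracks the sender's cue perfectly."""
    e = np.asarray(expressed, dtype=float)
    r = np.asarray(recognized, dtype=float)
    return float(np.corrcoef(e, r)[0, 1])

# Invented per-interaction values, e.g., the intensity of a gaze cue
# as expressed by the human and as estimated by the robot.
expressed = [0.9, 0.1, 0.8, 0.2, 0.7]
recognized = [0.8, 0.2, 0.9, 0.1, 0.6]
score = recognition_effectiveness(expressed, recognized)
```

A high score indicates that the recognition phase preserves the communicated signal well, which is the kind of quantity such a framework lets one measure.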

    Image and Video Forensics

    Nowadays, images and videos have become the main modalities of information being exchanged in everyday life, and their pervasiveness has led the image forensics community to question their reliability, integrity, confidentiality, and security. Multimedia contents are generated in many different ways through the use of consumer electronics and high-quality digital imaging devices, such as smartphones, digital cameras, tablets, and wearable and IoT devices. The ever-increasing convenience of image acquisition has facilitated instant distribution and sharing of digital images on digital social platforms, generating a great amount of exchanged data. Moreover, the pervasiveness of powerful image editing tools has allowed the manipulation of digital images for malicious or criminal ends, up to the creation of synthesized images and videos with the use of deep learning techniques. In response to these threats, the multimedia forensics community has produced major research efforts regarding source identification and manipulation detection. In all cases (e.g., forensic investigations, fake news debunking, information warfare, and cyberattacks) where images and videos serve as critical evidence, forensic technologies that help to determine the origin, authenticity, and integrity of multimedia content can become essential tools. This book aims to collect a diverse and complementary set of articles that demonstrate new developments and applications in image and video forensics to tackle new and serious challenges to ensure media authenticity.