16 research outputs found
Exploiting Deep Features for Remote Sensing Image Retrieval: A Systematic Investigation
Remote sensing (RS) image retrieval is of great significance for geological
information mining. Over the past two decades, a large amount of research on
this task has been carried out, focusing mainly on three core issues: feature
extraction, similarity metrics, and relevance feedback. Due to the complexity
and diversity of ground objects in high-resolution remote sensing (HRRS)
images, there is still room for improvement in current retrieval approaches.
In this paper, we analyze the three core issues of RS image retrieval and
provide a comprehensive review of existing methods. Furthermore, with the goal
of advancing the state of the art in HRRS image retrieval, we focus on the
feature extraction issue and delve into how powerful deep representations can
be used to address this task. We conduct a systematic investigation of the
factors that may affect the performance of deep features. By optimizing each
factor, we obtain remarkable retrieval results on publicly available HRRS
datasets. Finally, we explain the experimental findings in detail and draw
conclusions from our analysis. Our work can serve as a guide for research on
content-based RS image retrieval.
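The similarity-metric step discussed above typically reduces to nearest-neighbor ranking over descriptor vectors. Below is a minimal illustrative sketch, not any specific method from the paper; the random descriptors are hypothetical stand-ins for deep features.

```python
import numpy as np

def retrieve(query, database, top_k=3):
    """Rank database images by cosine similarity to the query descriptor."""
    # L2-normalize so the dot product equals cosine similarity.
    q = query / np.linalg.norm(query)
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    scores = db @ q
    order = np.argsort(-scores)[:top_k]  # highest similarity first
    return order, scores[order]

# Toy example: 4 database descriptors, one identical to the query.
rng = np.random.default_rng(0)
database = rng.normal(size=(4, 8))
query = database[2].copy()
ranks, scores = retrieve(query, database)
```

With an exact duplicate in the database, the top-ranked item is that duplicate with similarity ~1.0; in practice the same ranking runs over features extracted from real HRRS images.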
Aggregated Deep Local Features for Remote Sensing Image Retrieval
Remote sensing image retrieval remains a challenging topic due to the special
nature of remote sensing imagery. Such images contain many different semantic
objects, which clearly complicates the retrieval task. In this paper,
we present an image retrieval pipeline that uses attentive, local convolutional
features and aggregates them using the Vector of Locally Aggregated Descriptors
(VLAD) to produce a global descriptor. We study various system parameters such
as the multiplicative and additive attention mechanisms and descriptor
dimensionality. We propose a query expansion method that requires no external
inputs. Experiments demonstrate that even without training, the local
convolutional features and global representation outperform other systems.
After system tuning, we can achieve state-of-the-art or competitive results.
Furthermore, we observe that our query expansion method increases overall
system performance by about 3%, using only the top-three retrieved images.
Finally, we show how dimensionality reduction produces compact descriptors with
increased retrieval performance and fast retrieval computation times, e.g.,
50% faster than current systems.
Comment: Published in Remote Sensing. The first two authors contributed equally.
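The VLAD aggregation described above can be sketched compactly. The following is an illustrative implementation under simplifying assumptions (random stand-in local descriptors and vocabulary centers, hard assignment, signed-square-root and L2 normalization), not the authors' exact pipeline:

```python
import numpy as np

def vlad(descriptors, centers):
    """Aggregate local descriptors into a VLAD global descriptor.

    descriptors: (N, D) local features; centers: (K, D) visual words.
    Returns a (K*D,) L2-normalized vector of summed residuals.
    """
    # Hard-assign each descriptor to its nearest center.
    d2 = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    assign = d2.argmin(axis=1)
    K, D = centers.shape
    v = np.zeros((K, D))
    for k in range(K):
        members = descriptors[assign == k]
        if len(members):
            v[k] = (members - centers[k]).sum(axis=0)  # residual sum
    v = np.sign(v) * np.sqrt(np.abs(v))  # power normalization
    flat = v.ravel()
    n = np.linalg.norm(flat)
    return flat / n if n > 0 else flat

rng = np.random.default_rng(1)
desc = rng.normal(size=(50, 16))    # stand-in for attentive local CNN features
centers = rng.normal(size=(8, 16))  # visual vocabulary (e.g. k-means centers)
g = vlad(desc, centers)             # (8 * 16,) global descriptor
```

In a real pipeline the descriptors would come from the attentive convolutional layers and the vocabulary from k-means over a training set; the resulting global descriptor is then compared with cosine or Euclidean distance.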
Deep learning in remote sensing: a review
Standing at the paradigm shift towards data-intensive science, machine
learning techniques are becoming increasingly important. In particular, as a
major breakthrough in the field, deep learning has proven to be an extremely
powerful tool in many fields. Shall we embrace deep learning as the key to all?
Or, should we resist a 'black-box' solution? There are controversial opinions
in the remote sensing community. In this article, we analyze the challenges of
using deep learning for remote sensing data analysis, review the recent
advances, and provide resources to make deep learning in remote sensing
ridiculously simple to start with. More importantly, we advocate that remote
sensing scientists bring their expertise into deep learning and use it as an
implicit general model to tackle unprecedented, large-scale, influential
challenges such as climate change and urbanization.
Comment: Accepted for publication in IEEE Geoscience and Remote Sensing Magazine.
Photo Retrieval through the Search Function in Google Photos: Focusing on User Learning
Thesis (Master's)--Seoul National University, Graduate School of Convergence Science and Technology, Department of Transdisciplinary Studies (Digital Information Convergence major), August 2019. Advisor: Joongseek Lee.
The practice of retrieving photos on personal smartphones has expanded. Pictures are no longer only browsed by scrolling up and down; a particular picture can easily be brought up by typing a keyword. Object recognition technology has changed how people view and browse personal photos: it not only groups similar photos but also assigns labels that describe what each group depicts. For instance, Google Photos applies object recognition and search to help users manage their personal photos. This novel way of searching a personal album is expected to change how people retrieve a particular photo from the thousands accumulated in their cloud storage.
However, this technology is at an early stage and often fails to leave a positive impression on users. There is a gap between object recognition as executed by the device and as understood by the user: when a query is typed into the search bar, the result is often either nothing at all or a countless number of images.
The purpose of this study is to identify the points of inconvenience in smartphone photo albums that use object recognition and to deliver a better photo-search user experience. Previous studies and preliminary research were reviewed to understand the inner workings of object recognition and to build a general frame of how people use personal photo search. In the main study, 16 participants in their 20s and 30s performed six photo-search tasks per day for one week. Search-strategy tips were given to the participants in order to observe which strategies they applied when searching for photos. After a total of 672 search tasks and the strategies used were collected, a post-questionnaire was administered.
The analysis showed that users learned the search system over time, and the study was able to identify how they learned the photo-search functions through the strategies. Over the 42 retrieval tasks, the average retrieval time of the 16 users gradually decreased: compared with the first day, the average retrieval time on the last day fell by 31%, from 51 seconds to 35 seconds. The average search success rate also rose by about 11% over the 42 tasks performed during the week, and the average number of search attempts decreased by 28%. This confirms that, as photo-retrieval experience accumulates with the provided strategies, both learning and retrieval performance improve in photo retrieval based on object recognition.
Regarding individual learning patterns, 12 of the 16 participants showed signs of learning, 3 showed no learning, and the remaining 1 appeared unaffected by learning. Where learning did not occur or had no effect, the difference can be attributed to which strategies were used in the initial searches and how the participant adapted to the search function.
Finally, the most commonly used of the provided strategies, accounting for 44.35% of all searches, was 'using the correct name with both a high-level word (abstract concept) and a low-level word (concrete concept)'. This was followed by 'using a comma', 'using search terms that appear on the screen, such as colors', and 'using person terms (woman, man)'. Strategies that users formed individually accounted for 12.20% of the total, and their use did not change over time. Among the user-made strategies, 'use of administrative area names' accounted for 39.47%, followed by 'automatic classification of people' and 'use of buildings'. One formed strategy, 'wording tailored to Google Photos', showed that users came to recognize the characteristics of object recognition through experience and to form search terms by predicting its behavior. In other words, as experience with object-recognition-based photo retrieval accumulates, users develop an understanding of its characteristics.
Through the above analysis, the study examined where object recognition technology causes difficulty for users when applied to search in the smartphone photo album, and added brief suggestions on how to remedy it.
This study approached the difficulties users face with object recognition technology in the smartphone photo album from the user's perspective. From the HCI (Human-Computer Interaction) side, it focused on how strategies framed from the device's viewpoint are accepted and transformed by users. It is also meaningful in that it observed the interaction that occurs when object-recognition research, previously concentrated only on improving recognition accuracy, is delivered to actual users. Finally, it is meaningful in that it discussed ways to utilize object recognition for the usability and sustainable use of photography as a medium.
Learning Low Dimensional Convolutional Neural Networks for High-Resolution Remote Sensing Image Retrieval
Learning powerful feature representations for image retrieval has always been a challenging task in the field of remote sensing. Traditional methods focus on extracting low-level hand-crafted features which are not only time-consuming but also tend to achieve unsatisfactory performance due to the complexity of remote sensing images. In this paper, we investigate how to extract deep feature representations based on convolutional neural networks (CNNs) for high-resolution remote sensing image retrieval (HRRSIR). To this end, several effective schemes are proposed to generate powerful feature representations for HRRSIR. In the first scheme, a CNN pre-trained on a different problem is treated as a feature extractor since there are no sufficiently-sized remote sensing datasets to train a CNN from scratch. In the second scheme, we investigate learning features that are specific to our problem by first fine-tuning the pre-trained CNN on a remote sensing dataset and then proposing a novel CNN architecture based on convolutional layers and a three-layer perceptron. The novel CNN has fewer parameters than the pre-trained and fine-tuned CNNs and can learn low dimensional features from limited labelled images. The schemes are evaluated on several challenging, publicly available datasets. The results indicate that the proposed schemes, particularly the novel CNN, achieve state-of-the-art performance.
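The first scheme, treating a pre-trained CNN as a feature extractor, commonly ends by pooling a convolutional activation tensor into a compact global descriptor. The sketch below illustrates that pooling step only; the random activation tensor is a hypothetical stand-in for real CNN output, and this is not the paper's specific architecture.

```python
import numpy as np

def gap_descriptor(feature_maps):
    """Global-average-pool a (C, H, W) conv activation into a C-dim descriptor."""
    desc = feature_maps.mean(axis=(1, 2))  # one value per channel
    n = np.linalg.norm(desc)
    return desc / n if n > 0 else desc     # L2-normalize for cosine retrieval

# Stand-in activation tensor: 256 channels over a 7x7 spatial grid.
rng = np.random.default_rng(2)
act = rng.normal(size=(256, 7, 7))
d = gap_descriptor(act)
```

The appeal of such pooled descriptors is that they are low dimensional (one value per channel) and can be compared directly with cosine similarity, without any retrieval-specific training.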
Toward Global Localization of Unmanned Aircraft Systems using Overhead Image Registration with Deep Learning Convolutional Neural Networks
Global localization, in which an unmanned aircraft system (UAS) estimates its unknown current location without access to its take-off location or other locational data from its flight path, is a challenging problem. This research brings together aspects from the remote sensing, geoinformatics, and machine learning disciplines by framing the global localization problem as a geospatial image registration problem in which overhead aerial and satellite imagery serve as a proxy for UAS imagery. A literature review is conducted covering the use of deep learning convolutional neural networks (DLCNN) with global localization and other related geospatial imagery applications. Differences between geospatial imagery taken from the overhead perspective and terrestrial imagery are discussed, as well as difficulties in using geospatial overhead imagery for image registration due to a lack of suitable machine learning datasets. Geospatial analysis is conducted to identify suitable areas for future UAS imagery collection. One of these areas, Jerusalem northeast (JNE) is selected as the area of interest (AOI) for this research. Multi-modal, multi-temporal, and multi-resolution geospatial overhead imagery is aggregated from a variety of publicly available sources and processed to create a controlled image dataset called Jerusalem northeast rural controlled imagery (JNE RCI). JNE RCI is tested with handcrafted feature-based methods SURF and SIFT and a non-handcrafted feature-based pre-trained fine-tuned VGG-16 DLCNN on coarse-grained image registration. Both handcrafted and non-handcrafted feature based methods had difficulty with the coarse-grained registration process. The format of JNE RCI is determined to be unsuitable for the coarse-grained registration process with DLCNNs and the process to create a new supervised machine learning dataset, Jerusalem northeast machine learning (JNE ML) is covered in detail. 
A multi-resolution grid based approach is used, where each grid cell ID is treated as the supervised training label for that respective resolution. Pre-trained fine-tuned VGG-16 DLCNNs, two custom architecture two-channel DLCNNs, and a custom chain DLCNN are trained on JNE ML for each spatial resolution of subimages in the dataset. All DLCNNs used could more accurately coarsely register the JNE ML subimages compared to the pre-trained fine-tuned VGG-16 DLCNN on JNE RCI. This shows the process for creating JNE ML is valid and is suitable for using machine learning with the coarse-grained registration problem. All custom architecture two-channel DLCNNs and the custom chain DLCNN were able to more accurately coarsely register the JNE ML subimages compared to the fine-tuned pre-trained VGG-16 approach. Both the two-channel custom DLCNNs and the chain DLCNN were able to generalize well to new imagery that these networks had not previously trained on. Through the contributions of this research, a foundation is laid for future work to be conducted on the UAS global localization problem within the rural forested JNE AOI.
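The multi-resolution grid labeling described above, where a cell ID at each resolution serves as the supervised label, can be sketched as follows. The AOI bounds and grid sizes here are hypothetical illustrations, not those of the actual JNE dataset.

```python
def grid_cell_id(x, y, bounds, n):
    """Map a point to a cell ID on an n x n grid over the AOI bounds.

    bounds = (xmin, ymin, xmax, ymax); cell IDs are row-major 0 .. n*n - 1.
    """
    xmin, ymin, xmax, ymax = bounds
    # Clamp so points on the max edge fall in the last row/column.
    col = min(int((x - xmin) / (xmax - xmin) * n), n - 1)
    row = min(int((y - ymin) / (ymax - ymin) * n), n - 1)
    return row * n + col

bounds = (0.0, 0.0, 100.0, 100.0)  # hypothetical AOI extent
# The same point receives one label per resolution of the grid hierarchy,
# giving the classifier a coarse-to-fine supervised target.
labels = [grid_cell_id(62.5, 30.0, bounds, n) for n in (2, 4, 8)]
```

Each subimage in such a dataset would be tagged with the cell ID of its center at every resolution, so a classifier trained per resolution performs progressively finer coarse registration.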