Application Of Polarimetric SAR For Surface Parameter Inversion And Land Cover Mapping Over Agricultural Areas
In this thesis, a novel methodology is developed to extract surface parameters under vegetation cover and to map crop types from polarimetric Synthetic Aperture Radar (PolSAR) images over agricultural areas. The extracted surface parameters provide crucial information for monitoring crop growth, nutrient release efficiency, water capacity, and crop production. To estimate surface parameters, the volume scattering caused by the crop canopy must first be removed, which makes an efficient volume scattering model critical.
In this thesis, a simplified adaptive volume scattering model (SAVSM) is developed to describe vegetation scattering as crops change over time, by considering the probability density function of the crop orientation. The SAVSM achieved the best performance in wheat, soybean, and corn fields at various growth stages, in concert with crop phenological development, compared with current models that are mostly suited to forest canopies.
To remove the volume scattering component, an adaptive two-component model-based decomposition (ATCD) was developed, in which the surface scattering is an X-Bragg scattering and the volume scattering is the SAVSM. The volumetric soil moisture derived from the ATCD is more consistent with measured ground conditions than that from other model-based decomposition methods, with the RMSE decreasing significantly from 19 [vol.%] to 7 [vol.%].
However, the ATCD estimate is biased when the measured soil moisture exceeds 30 [vol.%]. To overcome this issue, an integrated surface parameter inversion scheme (ISPIS) is proposed, in which a calibrated Integral Equation Model is employed together with the SAVSM. The derived soil moisture and surface roughness are more consistent with ground observations, with overall RMSEs of 6.12 [vol.%] and 0.48, respectively.
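The volume-removal step behind a two-component decomposition can be sketched in a few lines of numpy. This is a minimal illustration only: it uses the classic random-dipole volume model as a stand-in for the SAVSM (whose exact form is not given in the abstract), and the coherency matrix values are hypothetical.

```python
import numpy as np

# Hypothetical 3x3 coherency matrix T of one PolSAR pixel (illustrative values).
T = np.array([[0.60, 0.10, 0.0],
              [0.10, 0.25, 0.0],
              [0.0,  0.0,  0.15]], dtype=complex)

# Assumed volume coherency model: the classic random-dipole cloud, standing in
# for the SAVSM, which adapts this shape to the crop orientation distribution.
T_vol = 0.25 * np.array([[2, 0, 0],
                         [0, 1, 0],
                         [0, 0, 1]], dtype=complex)

# In this simple sketch the cross-pol term T[2, 2] is attributed entirely to
# volume scattering, which fixes the volume intensity f_v; subtracting
# f_v * T_vol leaves the surface contribution used for soil-moisture inversion.
f_v = (T[2, 2] / T_vol[2, 2]).real
T_surf = T - f_v * T_vol
```

After the subtraction, the remaining surface term would be matched against a surface scattering model (X-Bragg in the ATCD) to invert for soil moisture and roughness.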
Attentive Tensor Product Learning
This paper proposes a new architecture, Attentive Tensor Product Learning
(ATPL), to represent grammatical structures in deep learning models. ATPL
bridges the gap between deep learning and explicit linguistic structure by
exploiting Tensor Product Representations (TPR), a structured neural-symbolic
model developed in cognitive science that aims to integrate deep learning with
explicit language structures and rules. The key ideas of ATPL are: 1)
unsupervised learning of the role-unbinding vectors of words via a TPR-based
deep neural network; 2) attention modules that compute the TPR; and 3)
integration of the TPR with typical deep learning architectures, including
Long Short-Term Memory (LSTM) and Feedforward Neural Networks (FFNN). The
novelty of our approach lies in its ability to extract the grammatical
structure of a sentence using role-unbinding vectors obtained in an
unsupervised manner. ATPL is applied to 1) image captioning, 2)
part-of-speech (POS) tagging, and 3) constituency parsing. Experimental
results demonstrate the effectiveness of the proposed approach.
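The TPR binding/unbinding machinery that ATPL builds on can be sketched with plain numpy. The filler and role vectors below are random stand-ins, not the learned role-unbinding vectors of the paper; with orthonormal roles, unbinding recovers each filler exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical filler (word) vectors and role vectors for a 3-word sentence.
d_f, d_r, n = 4, 3, 3
fillers = rng.standard_normal((n, d_f))               # one filler per word
roles = np.linalg.qr(rng.standard_normal((d_r, d_r)))[0]  # orthonormal roles

# Binding: the TPR is the sum of outer products filler_i (x) role_i.
T = sum(np.outer(f, r) for f, r in zip(fillers, roles))   # shape (d_f, d_r)

# Unbinding: with orthonormal roles the unbinding vector is the role itself,
# so T @ role_i recovers filler_i.
recovered = T @ roles[0]
assert np.allclose(recovered, fillers[0])
```

In ATPL itself, the unbinding vectors are produced by a neural network rather than assumed orthonormal, which is what lets the model extract grammatical roles without supervision.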
AttnGAN: Fine-Grained Text to Image Generation with Attentional Generative Adversarial Networks
In this paper, we propose an Attentional Generative Adversarial Network
(AttnGAN) that allows attention-driven, multi-stage refinement for fine-grained
text-to-image generation. With a novel attentional generative network, the
AttnGAN can synthesize fine-grained details at different subregions of the
image by paying attention to the relevant words in the natural language
description. In addition, a deep attentional multimodal similarity model is
proposed to compute a fine-grained image-text matching loss for training the
generator. The proposed AttnGAN significantly outperforms the previous state of
the art, boosting the best reported inception score by 14.14% on the CUB
dataset and 170.25% on the more challenging COCO dataset. A detailed analysis
is also performed by visualizing the attention layers of the AttnGAN. For the
first time, it shows that the layered attentional GAN is able to automatically
select the condition at the word level for generating different parts of the
image.
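The word-level attention at the core of the attentional generative network can be sketched as follows. The shapes and random features are illustrative stand-ins (5 words, 16 image subregions, a shared 8-dimensional feature space), not the paper's actual dimensions: each subregion attends over all words and forms a word-context vector as the attention-weighted sum.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)

words = rng.standard_normal((5, 8))      # word features (one row per word)
regions = rng.standard_normal((16, 8))   # image subregion features

# Similarity between every subregion and every word, then a softmax over
# words gives each subregion its attention distribution.
scores = regions @ words.T               # (16, 5)
alpha = softmax(scores, axis=1)          # attention weights per subregion

# Word-context vectors: attention-weighted sums of word features, one per
# subregion; these condition the generator's fine-grained refinement.
context = alpha @ words                  # (16, 8)
```

In AttnGAN these context vectors are concatenated with the image features of the previous stage to synthesize finer details, and a similar word-region matching underlies the deep attentional multimodal similarity loss.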