Using Deep Ensemble Forest for High Resolution Mapping of PM2.5 from MODIS MAIAC AOD in Tehran, Iran
High resolution mapping of the PM2.5 concentration over Tehran is
challenging because of the complicated behavior of numerous sources of
pollution and the insufficient number of ground air quality monitoring
stations. Alternatively, high resolution satellite Aerosol Optical Depth (AOD)
data can be employed for high resolution mapping of PM2.5. For this purpose,
different data-driven methods have been used in the literature. Recently, deep
learning methods have demonstrated their ability to estimate PM2.5 from AOD
data. However, these methods have several weaknesses in solving the problem of
estimating PM2.5 from satellite AOD data. In this paper, the potential of the
deep ensemble forest method for estimating the PM2.5 concentration from AOD
data was evaluated. The results showed that the deep ensemble forest method
(R2 = 0.74) estimates PM2.5 more accurately than deep learning methods (R2 =
0.67) and classic data-driven methods such as random forest (R2 = 0.68).
Additionally, the PM2.5 values estimated by the deep
ensemble forest algorithm were used along with ground data to generate a high
resolution map of PM2.5. Evaluation of the produced PM2.5 map revealed the good
performance of the deep ensemble forest for modeling the variation of PM2.5 in
the city of Tehran.
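As a concrete illustration of the cascade idea behind deep ensemble forests (a gcForest-style design), the sketch below stacks layers of random forests and feeds each layer's predictions back in as extra features. The predictor set (AOD plus meteorological covariates), the synthetic data, and the layer count are illustrative assumptions, not the paper's actual configuration.

```python
# Hypothetical sketch of a deep-forest-style cascade regressor for
# estimating PM2.5 from AOD plus auxiliary predictors. Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, ExtraTreesRegressor

rng = np.random.default_rng(0)
# Synthetic stand-in predictors: AOD, temperature, humidity, wind speed.
X = rng.normal(size=(500, 4))
y = 30 + 25 * X[:, 0] - 3 * X[:, 1] + rng.normal(scale=2.0, size=500)

def fit_cascade(X, y, n_layers=3):
    """Each layer fits two forests; their predictions are appended to the
    original features and fed to the next layer (gcForest-style cascade)."""
    layers, feats = [], X
    for _ in range(n_layers):
        rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(feats, y)
        et = ExtraTreesRegressor(n_estimators=100, random_state=0).fit(feats, y)
        layers.append((rf, et))
        feats = np.column_stack([X, rf.predict(feats), et.predict(feats)])
    return layers

def predict_cascade(layers, X):
    feats = X
    for rf, et in layers:
        p1, p2 = rf.predict(feats), et.predict(feats)
        feats = np.column_stack([X, p1, p2])
    return 0.5 * (p1 + p2)  # average the final layer's two forests

model = fit_cascade(X, y)
pm25_hat = predict_cascade(model, X)
print(pm25_hat.shape)  # (500,)
```

The cascade lets later layers correct residual errors of earlier ones while keeping the interpretability and low tuning burden of tree ensembles.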
Fusion of Urban TanDEM-X raw DEMs using variational models
Recently, a new global Digital Elevation Model (DEM) with pixel spacing of
0.4 arcseconds and relative height accuracy finer than 2m for flat areas
(slopes below 20%) was created
through the TanDEM-X mission. One important step of the chain of global DEM
generation is to mosaic and fuse multiple raw DEM tiles to reach the target
height accuracy. Currently, Weighted Averaging (WA) is applied as a fast and
simple method for TanDEM-X raw DEM fusion in which the weights are computed
from height error maps delivered from the Interferometric TanDEM-X Processor
(ITP). However, evaluations show that WA is not well suited to DEM fusion in
urban areas, especially at height discontinuities such as building outlines.
The main focus of this paper is to investigate more advanced
variational approaches such as TV-L1 and Huber models. Furthermore, we also
assess the performance of variational models for fusing raw DEMs produced from
data takes with different baseline configurations and heights of ambiguity.
The results illustrate the high efficiency of variational models for TanDEM-X
raw DEM fusion in comparison to WA. Using variational models could improve the
DEM quality by up to 2m, particularly in inner-city subsets.

Comment: This is the pre-acceptance version; to read the final version,
please go to IEEE Journal of Selected Topics in Applied Earth Observations
and Remote Sensing on IEEE Xplore.
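For reference, the Weighted Averaging (WA) baseline that the variational models are compared against can be sketched as an inverse-variance mean over raw DEM tiles, with per-pixel weights derived from the ITP height error maps. The tile values and error maps below are toy stand-ins:

```python
# Minimal sketch of the WA baseline: each raw DEM pixel is weighted by the
# inverse variance taken from its height error map. Values are illustrative.
import numpy as np

def wa_fuse(dems, height_errors):
    """dems: list of HxW raw DEM tiles; height_errors: matching 1-sigma maps."""
    dems = np.stack(dems)
    w = 1.0 / np.square(np.stack(height_errors))  # inverse-variance weights
    return (w * dems).sum(axis=0) / w.sum(axis=0)

dem_a = np.full((4, 4), 100.0)
dem_b = np.full((4, 4), 104.0)
err_a = np.full((4, 4), 1.0)   # more reliable acquisition
err_b = np.full((4, 4), 2.0)
fused = wa_fuse([dem_a, dem_b], [err_a, err_b])
print(fused[0, 0])  # weights 1 and 0.25 -> (100*1 + 104*0.25)/1.25 = 100.8
```

A variational model such as TV-L1 replaces this purely per-pixel average with a spatial regularizer, which is what preserves sharp building outlines.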
The Approximate Capacity Region of the Gaussian Z-Interference Channel with Conferencing Encoders
A two-user Gaussian Z-Interference Channel (GZIC) is considered, in which
encoders are connected through noiseless links with finite capacities. In this
setting, prior to each transmission block the encoders communicate with each
other over the cooperative links. The capacity region and the sum-capacity of
the channel are characterized within 1.71 bits per user and 2 bits in total,
respectively. It is also established that properly sharing the total limited
cooperation capacity between the cooperative links may enhance the achievable
region, even when compared to the case of unidirectional transmitter
cooperation with infinite cooperation capacity. To obtain the results,
genie-aided upper bounds on the sum-capacity and cut-set bounds on the
individual rates are compared with the achievable rate region. In the
interference-limited regime, the achievable scheme uses a simple form of
Han-Kobayashi signaling together with zero-forcing and basic relaying
techniques. In the noise-limited regime, it is shown that treating
interference as noise achieves the capacity region to within a single bit
per user.

Comment: 25 pages, 6 figures; submitted to IEEE Transactions on Information
Theory.
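To make the noise-limited claim concrete, a minimal sketch of the treating-interference-as-noise (TIN) rates in a GZIC is shown below, assuming the convention that receiver 1 is interference-free while receiver 2 observes user 1's signal through a cross gain a. The power and gain values are illustrative, and this is the textbook TIN formula rather than the paper's full achievable scheme.

```python
# Hedged illustration of TIN rates in a Gaussian Z-interference channel:
# receiver 1 is interference-free; receiver 2 treats user 1's scaled signal
# as additional noise. Unit-variance noise is assumed at both receivers.
import math

def tin_rates(P1, P2, a):
    R1 = 0.5 * math.log2(1 + P1)                 # interference-free link
    R2 = 0.5 * math.log2(1 + P2 / (1 + a * P1))  # interference as noise
    return R1, R2

R1, R2 = tin_rates(P1=10.0, P2=10.0, a=0.1)  # weak-interference example
print(round(R1, 3), round(R2, 3))
```

With a small cross gain a, the rate loss at receiver 2 relative to the interference-free rate is modest, which is the intuition behind TIN being near-optimal in the noise-limited regime.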
A Framework for SAR-Optical Stereogrammetry over Urban Areas
Currently, numerous remote sensing satellites provide a huge volume of
diverse earth observation data. As these data show different features regarding
resolution, accuracy, coverage, and spectral imaging ability, fusion techniques
are required to integrate the different properties of each sensor and produce
useful information. For example, synthetic aperture radar (SAR) data can be
fused with optical imagery to produce 3D information using stereogrammetric
methods. The main focus of this study is to investigate the possibility of
applying a stereogrammetry pipeline to very-high-resolution (VHR) SAR-optical
image pairs. For this purpose, the applicability of semi-global matching is
investigated in this unconventional multi-sensor setting. To support the image
matching by reducing the search space and accelerating the identification of
correct, reliable matches, the possibility of establishing an epipolarity
constraint for VHR SAR-optical image pairs is investigated as well. In
addition, it is shown that the absolute geolocation accuracy of VHR optical
imagery with respect to VHR SAR imagery such as provided by TerraSAR-X can be
improved by a multi-sensor block adjustment formulation based on rational
polynomial coefficients. Finally, the feasibility of generating point clouds
with a median accuracy of about 2m is demonstrated, confirming the potential
of 3D reconstruction from SAR-optical image pairs over urban areas.

Comment: This is the pre-acceptance version; to read the final version,
please go to ISPRS Journal of Photogrammetry and Remote Sensing on
ScienceDirect.
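The core of the semi-global matching referred to above is a dynamic-programming aggregation of matching costs along (approximately epipolar) scanline paths. A minimal single-path sketch, with illustrative penalties and a random toy cost volume, might look like:

```python
# Minimal sketch of SGM's per-path cost aggregation (one left-to-right
# scanline direction only). Penalties P1/P2 and the cost volume are toy
# values; a full SGM sums aggregated costs over 8 or 16 path directions.
import numpy as np

def aggregate_path(cost, P1=1.0, P2=4.0):
    """cost: (W, D) matching costs along one scanline. Returns the
    SGM-aggregated costs L(x, d) for the left-to-right path."""
    W, D = cost.shape
    L = np.zeros_like(cost)
    L[0] = cost[0]
    for x in range(1, W):
        prev = L[x - 1]
        best_prev = prev.min()
        for d in range(D):
            candidates = [
                prev[d],                                      # same disparity
                (prev[d - 1] + P1) if d > 0 else np.inf,      # shift by -1
                (prev[d + 1] + P1) if d < D - 1 else np.inf,  # shift by +1
                best_prev + P2,                               # larger jump
            ]
            L[x, d] = cost[x, d] + min(candidates) - best_prev
    return L

cost = np.random.default_rng(1).random((8, 5))  # 8 pixels, 5 disparities
L = aggregate_path(cost)
disparity = L.argmin(axis=1)  # winner-take-all per pixel
print(disparity.shape)  # (8,)
```

Restricting the search to the epipolar-like curves established in the paper is what keeps the disparity range D small enough for this aggregation to be tractable in the multi-sensor case.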
Nighttime Driver Behavior Prediction Using Taillight Signal Recognition via CNN-SVM Classifier
This paper aims to enhance the ability to predict nighttime driving behavior
by identifying taillights of both human-driven and autonomous vehicles. The
proposed model incorporates a customized detector designed to accurately detect
front-vehicle taillights on the road. At the beginning of the detector, a
learnable pre-processing block is implemented, which extracts deep features
from input images and calculates the data rarity for each feature. In the next
step, drawing inspiration from soft attention, a weighted binary mask is
designed that guides the model to focus more on predetermined regions. This
research utilizes Convolutional Neural Networks (CNNs) to extract
distinguishing characteristics from these areas, then reduces dimensions using
Principal Component Analysis (PCA). Finally, the Support Vector Machine (SVM)
is used to predict the behavior of the vehicles. To train and evaluate the
model, a large-scale dataset is collected using two types of dash-cams and
Insta360 cameras mounted on the rear of Ford Motor Company vehicles. This
dataset includes over 12k frames captured during both daytime and nighttime
hours. To address the limited nighttime data, a unique pixel-wise image
processing technique is implemented to convert daytime images into realistic
night images. The findings from the experiments demonstrate that the proposed
methodology can accurately categorize vehicle behavior with 92.14% accuracy,
97.38% specificity, 92.09% sensitivity, 92.10% F1-measure, and 0.895 Cohen's
Kappa Statistic. Further details are available at
https://github.com/DeepCar/Taillight_Recognition.

Comment: 12 pages, 10 figures.
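The classification tail of the pipeline (CNN features, then PCA, then SVM) can be sketched with off-the-shelf components; the random feature vectors and labels below are synthetic stand-ins for the real CNN embeddings and taillight behavior classes.

```python
# Hedged sketch of the feature-reduction and classification stages only:
# CNN-extracted taillight features -> PCA -> SVM. Features and labels are
# synthetic; the real model uses embeddings from the custom detector.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
features = rng.normal(size=(300, 512))   # stand-in CNN feature vectors
labels = rng.integers(0, 3, size=300)    # e.g. brake / turn / none

clf = make_pipeline(PCA(n_components=32), SVC(kernel="rbf", C=1.0))
clf.fit(features[:250], labels[:250])
pred = clf.predict(features[250:])
print(pred.shape)  # (50,)
```

Reducing the 512-dimensional features to a few dozen principal components before the SVM is the standard way to keep the kernel machine fast and less prone to overfitting on a 12k-frame dataset.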