Depth from Monocular Images using a Semi-Parallel Deep Neural Network (SPDNN) Hybrid Architecture
Deep neural networks have been applied to a wide range of problems in recent
years. In this work, a Convolutional Neural Network (CNN) is applied to the
problem of determining depth from a single camera image (monocular depth).
Eight different networks are designed to perform depth estimation, each suited
to a particular feature level, where the feature level is determined by the
network's pooling size. After designing the set of networks, these models may
be combined into a single network topology using graph optimization techniques.
This "Semi Parallel Deep Neural Network (SPDNN)" eliminates duplicated common
network layers, and can be further optimized by retraining to achieve an
improved model compared to the individual topologies. In this study, four SPDNN
models are trained and evaluated in two stages on the KITTI dataset.
The ground truth images in the first part of the experiment are provided by the
benchmark, and for the second part, the ground truth images are the depth map
results from applying a state-of-the-art stereo matching method. The results of
this evaluation demonstrate that using post-processing techniques to refine the
target of the network increases the accuracy of depth estimation on individual
mono images. The second evaluation shows that using segmentation data alongside
the original data as the input can improve the depth estimation results to a
point where performance is comparable with stereo depth estimation. The
computational time is also discussed in this study.

Comment: 44 pages, 25 figures
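The core SPDNN idea, merging several network topologies so that duplicated common layers are kept only once, can be sketched as a prefix-tree merge over layer sequences. This is a minimal illustrative sketch, not the paper's implementation; the layer names and pooling sizes below are hypothetical stand-ins for the eight designed networks.

```python
def merge_topologies(topologies):
    """Merge layer sequences into a prefix tree; shared layers are stored once.

    Each topology is a tuple of hashable layer descriptors.
    Returns the number of distinct layer nodes after merging.
    """
    root = {}
    count = 0
    for layers in topologies:
        node = root
        for layer in layers:
            if layer not in node:   # layer not yet shared at this position
                node[layer] = {}
                count += 1
            node = node[layer]
    return count

# Hypothetical depth networks that share the same first two layers but
# differ in pooling size, loosely mirroring the paper's setup.
nets = [("conv3x3", "conv3x3", f"pool{k}") for k in (2, 3, 4, 5)]
separate = sum(len(n) for n in nets)   # 12 layers if kept separately
merged = merge_topologies(nets)        # 2 shared + 4 pooling = 6 layers
```

After such a merge, the combined topology is retrained, which is what the abstract refers to as further optimization of the merged model.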
Multiscale Latent-Guided Entropy Model for LiDAR Point Cloud Compression
The non-uniform distribution and extremely sparse nature of the LiDAR point
cloud (LPC) pose significant challenges to its highly efficient compression.
This paper proposes a novel end-to-end, fully-factorized deep framework that
encodes the original LPC into an octree structure and hierarchically decomposes
the octree entropy model in layers. The proposed framework utilizes a
hierarchical latent variable as side information to encapsulate the sibling and
ancestor dependence, which provides sufficient context information for the
modelling of point cloud distribution while enabling the parallel encoding and
decoding of octree nodes in the same layer. Besides, we propose a residual
coding framework for the compression of the latent variable, which explores the
spatial correlation of each layer by progressive downsampling, and models the
corresponding residual with a fully-factorized entropy model. Furthermore, we
propose soft addition and subtraction for residual coding to improve network
flexibility. Comprehensive experimental results on the LiDAR benchmark
SemanticKITTI and the MPEG-specified Ford dataset demonstrate that our proposed
framework achieves state-of-the-art performance among previous LPC
frameworks. Moreover, our end-to-end, fully-factorized framework is shown
experimentally to be highly parallelized and time-efficient, saving more than
99.8% of decoding time compared with previous state-of-the-art LPC compression
methods.
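The octree structure the abstract builds on can be illustrated with plain occupancy coding: each node is serialized as an 8-bit mask, one bit per child octant, layer by layer. This is a hedged sketch of the underlying octree serialization only, not the paper's latent-guided entropy model; the function name and layout are assumptions.

```python
def octree_occupancy(points, depth, size=1.0):
    """Return per-layer occupancy bytes for points inside [0, size)^3.

    Each layer is a list of 8-bit masks, one per occupied node, where bit i
    marks that child octant i contains at least one point.
    """
    layers = []
    nodes = {(0, 0, 0): list(points)}    # voxel index -> points inside it
    cell = size
    for _ in range(depth):
        cell /= 2.0
        children = {}
        layer = []
        for _idx, pts in sorted(nodes.items()):
            mask = 0
            for (x, y, z) in pts:
                cx, cy, cz = int(x // cell), int(y // cell), int(z // cell)
                # Low bit of each axis index selects the octant within the parent.
                bit = (cx & 1) | ((cy & 1) << 1) | ((cz & 1) << 2)
                mask |= 1 << bit
                children.setdefault((cx, cy, cz), []).append((x, y, z))
            layer.append(mask)
        layers.append(layer)
        nodes = children
    return layers

# Two opposite corner points occupy octants 0 and 7 of the root node.
codes = octree_occupancy([(0.1, 0.1, 0.1), (0.9, 0.9, 0.9)], depth=2)
```

All masks within one layer are independent once their parents are known, which is the property the paper exploits to encode and decode sibling nodes of a layer in parallel.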
Deep Learning for Image Restoration and Robotic Vision
Traditional model-based approaches require the formulation of a mathematical model, and such models often have limited performance. The quality of an image may degrade for a variety of reasons: the content of the scene may be affected by weather conditions such as haze, rain, and snow, or noise may be introduced during image processing and transmission (e.g., artifacts generated during compression). The goal of image restoration is to restore the image to a desirable quality, both subjectively and objectively. Agricultural robotics is gaining interest these days since most agricultural work is lengthy and repetitive. Computer vision is crucial to robots, especially autonomous ones. However, it is challenging to devise a precise mathematical model that describes the aforementioned problems. Compared with traditional approaches, learning-based approaches have an edge since they do not require any model to describe the problem. Moreover, learning-based approaches now deliver best-in-class performance on most vision problems, such as image dehazing, super-resolution, and image recognition.
In this dissertation, we address the problems of image restoration and robotic vision with deep learning. These two problems are highly related to each other from a network architecture perspective: it is essential to select appropriate networks when dealing with different problems. Specifically, we solve the problems of single-image dehazing, High Efficiency Video Coding (HEVC) loop filtering and super-resolution, and computer vision for an autonomous robot. Our technical contributions are threefold. First, we propose to reformulate haze as a signal-dependent noise, which allows us to uncover it by learning a structural residual. Based on this novel reformulation, we solve dehazing with a recursive deep residual network and a generative adversarial network, which emphasize objective and perceptual quality, respectively. Second, we replace traditional filters in HEVC with a Convolutional Neural Network (CNN) filter. We show that our CNN filter achieves a 7% BD-rate saving compared with traditional filters such as the bilateral and deblocking filters. We also propose to incorporate a multi-scale CNN super-resolution module into HEVC; such a post-processing module improves visual quality under extremely low bandwidth. Third, a transfer learning technique is implemented to support vision and autonomous decision making for a precision pollination robot. Good experimental results are reported with real-world data.
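The "haze as signal-dependent noise" reformulation follows from the standard atmospheric scattering model, I = J·t + A·(1 − t), where J is the clean scene radiance, t the transmission, and A the atmospheric light. Rearranging gives I − J = (A − J)(1 − t): the haze residual depends on the clean signal J itself. A minimal scalar sketch, with illustrative values rather than the dissertation's data:

```python
def hazy(J, t, A=1.0):
    """Atmospheric scattering model for one pixel intensity: I = J*t + A*(1-t)."""
    return J * t + A * (1.0 - t)

def residual(J, t, A=1.0):
    """Haze residual I - J = (A - J)*(1 - t); note it depends on J itself."""
    return (A - J) * (1.0 - t)

J, t, A = 0.4, 0.7, 1.0
I = hazy(J, t, A)            # 0.4*0.7 + 1.0*0.3 = 0.58
assert abs((I - J) - residual(J, t, A)) < 1e-12
```

Because the residual is structured by (A − J)(1 − t) rather than being additive white noise, learning that structural residual (as the dissertation proposes) is a natural fit for a residual network.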
A Robust Approach Towards Distinguishing Natural and Computer Generated Images using Multi-Colorspace fused and Enriched Vision Transformer
Works in the literature that classify natural and computer-generated images are
mostly designed as binary tasks, considering either natural images versus
computer graphics images or natural images versus GAN-generated images, but not
natural images versus both classes of generated images. Moreover, even though
this forensic task of distinguishing natural and computer-generated images is
supported by modern convolutional neural network and transformer-based
architectures that achieve remarkable classification accuracy, these models are
seen to fail on images that have undergone post-processing operations commonly
performed to deceive forensic algorithms, such as JPEG compression and Gaussian
noise. This work proposes a robust approach to distinguishing natural and
computer-generated images, including both computer graphics and GAN-generated
images, using a fusion of two vision transformers, where each transformer
network operates in a different color space: one in RGB and the other in YCbCr.
The proposed approach achieves a high performance gain compared with a set of
baselines, along with higher robustness and generalizability. When visualized,
the features of the proposed model show higher class separability than the
input image features and the baseline features. This work also studies the
attention map visualizations of the networks in the fused model and observes
that the proposed methodology can capture more image information relevant to
the forensic task of classifying natural and generated images.
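The fused model feeds one transformer RGB input and the other YCbCr input. As a point of reference, a minimal sketch of the standard ITU-R BT.601 full-range RGB-to-YCbCr conversion (the abstract does not specify which conversion variant the authors use, so this is an assumption):

```python
def rgb_to_ycbcr(r, g, b):
    """Convert one 8-bit RGB pixel to full-range YCbCr (ITU-R BT.601)."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128.0
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128.0
    return y, cb, cr

# Pure gray carries no chroma information: Cb = Cr = 128.
y, cb, cr = rgb_to_ycbcr(128, 128, 128)
```

Separating luminance from chrominance in this way exposes statistics (e.g., chroma artifacts of generators and of JPEG compression) that are entangled in the RGB channels, which is one motivation for operating the two transformer branches in different color spaces.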