Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update
Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. The exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Building on it, we improve the ELDA tracking algorithm with deep convolutional neural network (CNN) features and adaptive model updates. Deep CNN features have been used successfully in various computer vision tasks, but extracting CNN features on all candidate windows is time consuming. To address this problem, we propose a two-step CNN feature extraction method that computes the convolutional layers and the fully-connected layers separately. Exploiting the strong discriminative ability of CNN features and the exemplar-based model, we update both the object and background models to improve their adaptivity and to balance the tradeoff between discriminative ability and adaptivity. An object-model updating method is proposed to select the “good” models (detectors), which are highly discriminative and uncorrelated with the other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes; it is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm.
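The “good”-model selection step described in the abstract, choosing detectors that are both discriminative and uncorrelated with each other, can be sketched as a greedy filter. All names and thresholds below are illustrative assumptions, not the paper's actual values:

```python
import numpy as np

def select_models(scores, weights, max_models=5, corr_thresh=0.8):
    """Greedy sketch of 'good' detector selection: take the most
    discriminative detectors first, and skip any whose weight vector is
    highly correlated with an already-selected one (illustrative only)."""
    order = np.argsort(scores)[::-1]  # most discriminative first
    selected = []
    for i in order:
        wi = weights[i] / (np.linalg.norm(weights[i]) + 1e-12)
        # keep detector i only if it is weakly correlated with all kept ones
        if all(abs(wi @ (weights[j] / (np.linalg.norm(weights[j]) + 1e-12)))
               < corr_thresh for j in selected):
            selected.append(int(i))
        if len(selected) == max_models:
            break
    return selected
```

The first selected detector is always the highest-scoring one; subsequent picks trade a lower score for decorrelation with the kept set.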
DoPAMINE: Double-sided Masked CNN for Pixel Adaptive Multiplicative Noise Despeckling
We propose DoPAMINE, a new neural network based multiplicative noise
despeckling algorithm. Our algorithm is inspired by Neural AIDE (N-AIDE), which
is a recently proposed neural adaptive image denoiser. While the original
N-AIDE was designed for the additive noise case, we show that the same
framework, i.e., adaptively learning a network for pixel-wise affine denoisers
by minimizing an unbiased estimate of MSE, can be applied to the multiplicative
noise case as well. Moreover, we derive a double-sided masked CNN architecture
which can control the variance of the activation values in each layer and
converge fast to high denoising performance during supervised training. In our
experiments, we show that DoPAMINE attains high adaptivity by fine-tuning the
network parameters on the given noisy image, and achieves significantly better
despeckling results than SAR-DRN, a state-of-the-art CNN-based algorithm.
Comment: AAAI 2019 Camera Ready Version
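The masking idea, letting each output pixel see its two-sided spatial context but never its own noisy value, so that the unbiased MSE estimate stays valid, can be illustrated with a single center-masked convolution. This is a deliberate simplification of the paper's double-sided masked architecture, and all names here are ours:

```python
import numpy as np

def center_masked_conv2d(img, kernel):
    """2D correlation with the kernel's center tap zeroed, so each output
    pixel depends only on its neighbors, never on its own noisy value
    (simplified sketch of the masking principle)."""
    k = kernel.astype(float).copy()
    kh, kw = k.shape
    k[kh // 2, kw // 2] = 0.0  # mask the center: context only
    pad_h, pad_w = kh // 2, kw // 2
    padded = np.pad(img, ((pad_h, pad_h), (pad_w, pad_w)), mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * k)
    return out
```

On a constant image with a kernel whose masked taps sum to one, the output reproduces the constant exactly, even though no pixel ever reads its own value.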
Learning and Management for Internet-of-Things: Accounting for Adaptivity and Scalability
Internet-of-Things (IoT) envisions an intelligent infrastructure of networked
smart devices offering task-specific monitoring and control services. The
unique features of IoT include extreme heterogeneity, a massive number of
devices, and unpredictable dynamics, partly due to human interaction. These
features call for foundational innovations in network design and management.
Ideally, the design should allow efficient adaptation to changing environments,
and low-cost implementation scalable to a massive number of devices, subject to stringent
latency constraints. To this end, the overarching goal of this paper is to
outline a unified framework for online learning and management policies in IoT
through joint advances in communication, networking, learning, and
optimization. From the network architecture vantage point, the unified
framework leverages a promising fog architecture that enables smart devices to
have proximity access to cloud functionalities at the network edge, along the
cloud-to-things continuum. From the algorithmic perspective, key innovations
target online approaches adaptive to different degrees of nonstationarity in
IoT dynamics, and their scalable model-free implementation under limited
feedback that motivates blind or bandit approaches. The proposed framework
aspires to offer a stepping stone that leads to systematic designs and analysis
of task-specific learning and management schemes for IoT, along with a host of
new research directions to build on.
Comment: Submitted on June 15 to Proceedings of the IEEE Special Issue on Adaptive and Scalable Communication Networks
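The limited feedback that motivates the blind or bandit approaches mentioned above can be made concrete with a minimal epsilon-greedy bandit, in which a controller observes the reward only of the action it actually takes. This is a toy sketch under our own assumptions, not one of the paper's algorithms:

```python
import random

def epsilon_greedy_bandit(pull, n_arms, rounds=1000, eps=0.1, seed=0):
    """Minimal epsilon-greedy bandit: each round, either explore a random
    arm or exploit the best empirical mean; only the chosen arm's reward
    is observed (bandit feedback). Illustrative sketch only."""
    rng = random.Random(seed)
    counts = [0] * n_arms
    means = [0.0] * n_arms
    for _ in range(rounds):
        if rng.random() < eps:
            a = rng.randrange(n_arms)  # explore
        else:
            a = max(range(n_arms), key=lambda i: means[i])  # exploit
        r = pull(a)  # only this arm's reward is revealed
        counts[a] += 1
        means[a] += (r - means[a]) / counts[a]  # running-average update
    return means
```

Even with feedback on a single arm per round, the empirical means concentrate on the best arm, which is the scalability appeal of bandit methods for IoT management.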
Efficient cosmological parameter sampling using sparse grids
We present a novel method to significantly speed up cosmological parameter
sampling. The method relies on constructing an interpolation of the
CMB log-likelihood based on sparse grids, which is used as a shortcut for the
likelihood evaluation. We obtain excellent results over a large region in
parameter space, spanning about 25 log-likelihood units around the peak, and we
reproduce the one-dimensional projections of the likelihood almost perfectly.
In speed and accuracy, our technique is competitive with existing approaches
that accelerate parameter estimation based on polynomial interpolation or
neural networks, while having some advantages over them. In our method, there
is no danger of creating unphysical wiggles, as can be the case for high-degree
polynomial fits. Furthermore, we do not require the long training time of
neural networks, but the construction of the interpolation is determined by the
time it takes to evaluate the likelihood at the sampling points, which can be
parallelised to an arbitrary degree. Our approach is completely general, and it
can adaptively exploit the properties of the underlying function. We can thus
apply it to any problem where an accurate interpolation of a function is
needed.
Comment: Submitted to MNRAS, 13 pages, 13 figures
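The surrogate idea, precomputing the expensive likelihood at sampling points (which parallelises trivially) and then interpolating during parameter sampling, can be sketched in one dimension. The real method uses adaptive sparse grids in higher-dimensional parameter spaces; this stand-in with simple linear interpolation only illustrates the workflow:

```python
import numpy as np

def build_surrogate(loglike, lo, hi, n=65):
    """Precompute the expensive log-likelihood at grid points (the only
    costly, embarrassingly parallel step), then return a cheap interpolating
    surrogate to use as a shortcut during sampling. 1D illustrative sketch
    of the sparse-grid idea, not the paper's construction."""
    xs = np.linspace(lo, hi, n)
    ys = np.array([loglike(x) for x in xs])  # expensive evaluations, done once
    return lambda x: float(np.interp(x, xs, ys))  # cheap lookup per sample
```

A sampler then calls the returned surrogate instead of the true likelihood, so the cost per MCMC step no longer depends on the expense of the original function.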