A Discriminatively Learned CNN Embedding for Person Re-identification
We revisit two popular convolutional neural networks (CNN) in person
re-identification (re-ID), i.e., verification and classification models. The two
models have their respective advantages and limitations due to different loss
functions. In this paper, we shed light on how to combine the two models to
learn more discriminative pedestrian descriptors. Specifically, we propose a
new siamese network that simultaneously computes identification loss and
verification loss. Given a pair of training images, the network predicts the
identities of the two images and whether they belong to the same identity. Our
network learns a discriminative embedding and a similarity measurement at the
same time, thus making full use of the annotations. Albeit simple, the
learned embedding improves the state-of-the-art performance on two public
person re-ID benchmarks. Further, we show that our architecture can also be
applied to image retrieval.
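The joint objective described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the squared-difference verification branch follows the abstract's pairing idea, while the weight matrices `W_id` and `W_ver` and all dimensions are hypothetical.

```python
import numpy as np

def softmax_cross_entropy(logits, label):
    """Cross-entropy of one logit vector against an integer class label."""
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def joint_loss(f1, f2, W_id, id1, id2, W_ver, same):
    """Identification loss for each embedding of the pair, plus a
    verification loss on the squared difference of the two embeddings
    (hypothetical parameterisation)."""
    loss_id = (softmax_cross_entropy(W_id @ f1, id1) +
               softmax_cross_entropy(W_id @ f2, id2))
    # Verification branch: classify (f1 - f2)^2 as same/different (2 classes).
    loss_ver = softmax_cross_entropy(W_ver @ (f1 - f2) ** 2, int(same))
    return loss_id + loss_ver

rng = np.random.default_rng(0)
f1, f2 = rng.normal(size=128), rng.normal(size=128)
W_id = rng.normal(size=(751, 128)) * 0.01   # e.g. 751 identities, as in Market-1501
W_ver = rng.normal(size=(2, 128)) * 0.01
total = joint_loss(f1, f2, W_id, 3, 3, W_ver, same=True)
print(total > 0)
```

Training both branches against the same embedding is what lets a single forward pass supervise identity prediction and pair similarity simultaneously.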
Occlusion Aware Unsupervised Learning of Optical Flow
It has been recently shown that a convolutional neural network can learn
optical flow estimation with unsupervised learning. However, the performance of
unsupervised methods still lags considerably behind that of their supervised
counterparts. Occlusion and large motion are among the major factors that limit
current unsupervised optical flow methods. In this work we introduce a new
method that models occlusion explicitly and a new warping scheme that
facilitates the learning of large motion. Our method shows
promising results on Flying Chairs, MPI-Sintel and KITTI benchmark datasets.
Especially on KITTI dataset where abundant unlabeled samples exist, our
unsupervised method outperforms its counterpart trained with supervised
learning.
Comment: CVPR 2018 Camera-ready
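The photometric losses used in unsupervised flow learning rest on backward warping: the second frame is sampled at positions displaced by the estimated flow and compared to the first frame. The sketch below is a generic bilinear backward warp in NumPy, not the paper's warping scheme, and it omits the occlusion masking the paper adds.

```python
import numpy as np

def backward_warp(img, flow):
    """Sample img at x + flow(x) with bilinear interpolation (sketch).
    img: (H, W) array; flow: (H, W, 2) array of (dx, dy) displacements."""
    H, W = img.shape
    ys, xs = np.mgrid[0:H, 0:W]
    x = np.clip(xs + flow[..., 0], 0, W - 1)
    y = np.clip(ys + flow[..., 1], 0, H - 1)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.minimum(x0 + 1, W - 1), np.minimum(y0 + 1, H - 1)
    wx, wy = x - x0, y - y0
    # Bilinear blend of the four neighbouring pixels.
    return ((1 - wy) * ((1 - wx) * img[y0, x0] + wx * img[y0, x1]) +
            wy * ((1 - wx) * img[y1, x0] + wx * img[y1, x1]))

# A uniform one-pixel shift to the right samples each pixel's right neighbour.
img = np.arange(16.0).reshape(4, 4)
flow = np.zeros((4, 4, 2)); flow[..., 0] = 1.0
warped = backward_warp(img, flow)
print(warped[0, 0])  # samples img[0, 1] -> 1.0
```

Pixels that are occluded in the second frame have no valid sample, which is why an explicit occlusion model matters for the photometric loss.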
Lattice calculation of hadronic tensor of the nucleon
We report an attempt to calculate the deep inelastic scattering structure
functions from the hadronic tensor calculated on the lattice. We used the
Backus-Gilbert reconstruction method to address the inverse Laplace
transformation for the analytic continuation from the Euclidean to the
Minkowski space.
Comment: 8 pages, 5 figures; Proceedings of the 35th International Symposium
on Lattice Field Theory, 18-24 June 2017, Granada, Spain
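The Backus-Gilbert method builds a linear estimator of the spectral function at a target point from the data, choosing coefficients that minimise the spread of the resolution function subject to unit normalisation. The following is an illustrative discretisation with a Laplace kernel, assuming our own grid, time slices, and a small Tikhonov-style stabiliser; it is not the authors' code.

```python
import numpy as np

def backus_gilbert_coeffs(omegas, taus, omega_bar, lam=1e-6):
    """Backus-Gilbert coefficients q_i for estimating f(omega_bar) from
    data C(tau_i) = sum_omega K(omega, tau_i) f(omega) domega, with the
    Laplace kernel K = exp(-omega * tau) (illustrative discretisation)."""
    domega = omegas[1] - omegas[0]
    K = np.exp(-np.outer(taus, omegas))                  # (N_tau, N_omega)
    # Spread matrix W_ij = sum_omega K_i (omega - omega_bar)^2 K_j domega.
    W = (K * (omegas - omega_bar) ** 2 * domega) @ K.T
    W += lam * np.eye(len(taus))                         # stabilise the inversion
    R = K.sum(axis=1) * domega                           # normalisation vector
    Winv_R = np.linalg.solve(W, R)
    return Winv_R / (R @ Winv_R)                         # enforces q . R = 1

omegas = np.linspace(0.0, 4.0, 400)
taus = np.arange(1.0, 9.0)
q = backus_gilbert_coeffs(omegas, taus, omega_bar=1.0)
K = np.exp(-np.outer(taus, omegas))
R = K.sum(axis=1) * (omegas[1] - omegas[0])
print(abs(q @ R - 1.0) < 1e-8)
```

The normalisation q . R = 1 guarantees the estimator reproduces a constant spectral function exactly; the regulariser trades resolution for stability of the ill-posed inverse.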
Dynkin Game of Convertible Bonds and Their Optimal Strategy
This paper studies the valuation and optimal strategy of convertible bonds as
a Dynkin game by using the reflected backward stochastic differential equation
method and the variational inequality method. We first reduce such a Dynkin
game to an optimal stopping time problem with state constraint, and then in a
Markovian setting, we investigate the optimal strategy by analyzing the
properties of the corresponding free boundary, including its position,
asymptotics, monotonicity and regularity. We identify situations when call
precedes conversion, and vice versa. Moreover, we show that the irregular
payoff can result in a non-monotonic conversion boundary. Surprisingly,
the price of the convertible bond is not necessarily monotonic in time: it may
even increase as time approaches maturity.
Comment: 28 pages, 9 figures; in Journal of Mathematical Analysis and
Applications, 201
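In such a game the issuer (minimizer, stopping by call) and the holder (maximizer, stopping by conversion) each choose a stopping time. A standard formulation of the value process, in our own notation rather than the paper's, is

```latex
V_t \;=\; \operatorname*{ess\,inf}_{\sigma \ge t}\;\operatorname*{ess\,sup}_{\tau \ge t}\;
\mathbb{E}\!\left[\, L_\tau \mathbf{1}_{\{\tau \le \sigma\}}
 + U_\sigma \mathbf{1}_{\{\sigma < \tau\}} \,\middle|\, \mathcal{F}_t \right],
```

where $L$ is the conversion (lower-obstacle) payoff, $U$ is the call (upper-obstacle) payoff, and $L \le U$. Under this ordering the value can be characterised by a reflected BSDE with two obstacles, which is the route the paper's first method takes.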
Variance Reduction and Cluster Decomposition
It is a common problem in lattice QCD calculations of the mass of a hadron
with an annihilation channel that the signal falls off in time while the noise
remains constant. In addition, the disconnected insertion calculation of the
three-point function and the calculation of the neutron electric dipole moment
with the $\theta$ term suffer from a noise problem due to the
fluctuation. We identify these problems as having the same origin and show
that they can be overcome by utilizing the cluster decomposition
principle. We demonstrate this by considering the calculations of the glueball
mass, the strangeness content in the nucleon, and the CP violation angle in the
nucleon due to the $\theta$ term. It is found that for lattices with physical
sizes of 4.5 - 5.5 fm, the statistical errors of these quantities can be
reduced by a factor of 3 to 4. The systematic errors can be estimated from the
Akaike information criterion. For the strangeness content, we find that the
systematic error is of the same size as that of the statistical one when the
cluster decomposition principle is utilized. This results in a 2 to 3 times
reduction in the overall error.
Comment: 7 pages, 5 figures, appendix added to address the systematic error
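The mechanism behind the variance reduction can be illustrated on a toy 1-D periodic lattice: since the connected correlation of two operators falls off with their separation, restricting the double sum over positions to pairs within a cutoff distance R keeps the signal while discarding far-apart pairs that contribute only noise. This sketch (our own construction, not the authors' lattice code) implements just the truncated sum.

```python
import numpy as np

def cutoff_correlation(o1, o2, R):
    """Sum of o1(x) * o2(y) over pairs with periodic distance |x - y| <= R.
    With R >= L/2 this reproduces the full volume sum (1-D toy sketch)."""
    L = len(o1)
    total = 0.0
    for x in range(L):
        for y in range(L):
            d = min(abs(x - y), L - abs(x - y))   # periodic distance
            if d <= R:
                total += o1[x] * o2[y]
    return total

rng = np.random.default_rng(1)
L = 16
o1, o2 = rng.normal(size=L), rng.normal(size=L)
full = cutoff_correlation(o1, o2, R=L // 2)       # no truncation on a periodic lattice
print(np.isclose(full, o1.sum() * o2.sum()))      # untruncated double sum factorises
```

Choosing R large enough to capture the exponentially decaying signal, but small compared to the lattice size, is what yields the factor of 3 to 4 error reduction reported above.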