A Distributed Method for Optimal Capacity Reservation
We consider the problem of reserving link capacity in a network in such a way
that any of a given set of flow scenarios can be supported. In the optimal
capacity reservation problem, we choose the reserved link capacities to
minimize the reservation cost. This problem reduces to a large linear program,
with the number of variables and constraints on the order of the number of
links times the number of scenarios. Small and medium-size problems are within
the capabilities of generic linear program solvers. We develop a more scalable,
distributed algorithm for the problem that alternates between solving (in
parallel) one flow problem per scenario, and coordination steps, which connect
the individual flows and the reservation capacities.
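The reduction to a linear program can be made concrete on a toy instance. The sketch below (an illustration, not the authors' distributed algorithm) encodes two parallel source-to-sink links and two demand scenarios as a single LP in `scipy.optimize.linprog`; the prices and demands are made-up numbers:

```python
# Toy instance of the optimal capacity reservation LP described above.
# Two parallel links from source to sink, two demand scenarios.
# Variables: x = [r1, r2, f11, f12, f21, f22]
from scipy.optimize import linprog

prices = [1.0, 2.0]    # per-unit reservation cost of each link
demands = [3.0, 5.0]   # total demand in each scenario

c = prices + [0.0] * 4  # only the reservations appear in the objective

# Each scenario's flows must sum to its demand: f_s1 + f_s2 = d_s
A_eq = [[0, 0, 1, 1, 0, 0],
        [0, 0, 0, 0, 1, 1]]
b_eq = demands

# Flows may not exceed the reservation: f_sj - r_j <= 0
A_ub = [[-1, 0, 1, 0, 0, 0],
        [0, -1, 0, 1, 0, 0],
        [-1, 0, 0, 0, 1, 0],
        [0, -1, 0, 0, 0, 1]]
b_ub = [0.0] * 4

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 6)
print(res.fun)  # minimum reservation cost
```

Note how the variable count grows as (number of links) × (number of scenarios); the paper's distributed method would instead solve one flow problem per scenario in parallel and reconcile the per-scenario flows with the shared reservations in a coordination step.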
Deep joint rain and haze removal from single images
Rain removal from a single image is a challenge which has been studied for a
long time. In this paper, a novel convolutional neural network based on wavelet
and dark channel is proposed. On one hand, we observe that rain streaks
correspond to the high-frequency components of the image. Therefore, the Haar
wavelet transform is a good choice to separate the rain streaks from the
background to some extent. More specifically, the LL subband of a rain image is
more inclined to express the background information, while the LH, HL, and HH
subbands tend to represent
the rain streaks and the edges. On the other hand, the accumulation of rain
streaks from long distances makes the rain image look like a haze veil. We
extract the dark channel of the rain image as a feature map in the network. By
incorporating this mapping between the dark channels of the input and output
images, we achieve haze removal in an indirect way. All of the parameters are
optimized by back-propagation. Experiments on both synthetic and real-world
datasets reveal that our method outperforms other state-of-the-art methods from
a qualitative and quantitative perspective.
Comment: 6 pages
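The two hand-crafted ingredients described above are easy to sketch in isolation. The snippet below (a minimal NumPy illustration, independent of the paper's network) computes the one-level Haar subbands LL/LH/HL/HH and the dark channel of a toy image:

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar transform of a grayscale image (H, W even)."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    LL = (a + b + c + d) / 4.0  # low frequency: mostly background
    LH = (a + b - c - d) / 4.0  # horizontal detail
    HL = (a - b + c - d) / 4.0  # vertical detail
    HH = (a - b - c + d) / 4.0  # diagonal detail: streaks and edges
    return LL, LH, HL, HH

def dark_channel(img, patch=3):
    """Dark channel of an RGB image (H, W, 3): local min over a patch
    of the per-pixel minimum across color channels."""
    mins = img.min(axis=2)
    H, W = mins.shape
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    out = np.empty_like(mins)
    for i in range(H):
        for j in range(W):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

rgb = np.random.rand(8, 8, 3)
LL, LH, HL, HH = haar2d(rgb.mean(axis=2))
dc = dark_channel(rgb)
```

In the paper these quantities are fed to a CNN as auxiliary inputs; here they are only computed, to show what the network sees.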
RPC: A Large-Scale Retail Product Checkout Dataset
Over recent years, growing interest has emerged in integrating computer
vision technology into the retail industry. Automatic checkout (ACO) is one of
the critical problems in this area which aims to automatically generate the
shopping list from the images of the products to purchase. The main challenge
of this problem comes from the large scale and the fine-grained nature of the
product categories, as well as the difficulty of collecting training images
that reflect realistic checkout scenarios due to the continuous update of the
products. Despite its significant practical and research value, this problem is
not extensively studied in the computer vision community, largely due to the
lack of a high-quality dataset. To fill this gap, in this work we propose a new
dataset to facilitate relevant research. Our dataset enjoys the following
characteristics: (1) It is by far the largest dataset in terms of both product
image quantity and product categories. (2) It includes single-product images
taken in a controlled environment and multi-product images taken by the
checkout system. (3) It provides different levels of annotations for the
checkout images. Compared with existing datasets, ours is closer to the
realistic setting and gives rise to a variety of research problems. Besides the
dataset, we also benchmark the performance on this dataset with various
approaches. The dataset and related resources can be found at
\url{https://rpc-dataset.github.io/}.
Comment: Project page: https://rpc-dataset.github.io
The Higgs-Boson Decay to Order under the mMOM-Scheme
We study the decay width of the Higgs boson up to order
under the minimal momentum space subtraction scheme (mMOM-scheme).
To improve the accuracy of perturbative QCD prediction, we adopt the principle
of maximum conformality (PMC) to set its renormalization scales. A detailed
comparison of the total decay width and the separate decay widths at each
perturbative order before and after the PMC scale setting is presented. The PMC
adopts the renormalization group equation to fix the optimal scales of the
process. After the PMC scale setting, the scale dependence of both the total
and the separate decay widths is greatly suppressed, and the convergence of the
perturbative QCD series is improved. By taking the Higgs mass GeV, as recently given by the ATLAS and CMS collaborations, we
predict keV,
where the first error is for Higgs mass and the second error is the residual
scale dependence by varying the initial scale .
Comment: 9 pages, 3 figures. Revised version to be published in J.Phys.
Recurrent MVSNet for High-resolution Multi-view Stereo Depth Inference
Deep learning has recently demonstrated its excellent performance for
multi-view stereo (MVS). However, one major limitation of current learned MVS
approaches is scalability: the memory-consuming cost volume regularization
makes learned MVS hard to apply to high-resolution scenes. In this
paper, we introduce a scalable multi-view stereo framework based on the
recurrent neural network. Instead of regularizing the entire 3D cost volume in
one go, the proposed Recurrent Multi-view Stereo Network (R-MVSNet)
sequentially regularizes the 2D cost maps along the depth direction via the
gated recurrent unit (GRU). This dramatically reduces the memory consumption
and makes high-resolution reconstruction feasible. We first show the
state-of-the-art performance achieved by the proposed R-MVSNet on the recent
MVS benchmarks. Then, we further demonstrate the scalability of the proposed
method on several large-scale scenarios, where previous learned approaches
often fail due to the memory constraint. Code is available at
https://github.com/YoYo000/MVSNet.
Comment: Accepted by CVPR201
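The core memory trick can be sketched without any deep-learning framework: only one 2D hidden state is kept while sweeping the depth dimension, instead of holding the full 3D cost volume for regularization. Below is a toy NumPy illustration with fixed scalar gate weights (the actual R-MVSNet uses convolutional GRU gates with learned parameters):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(h, x, Wz, Wr, Wh):
    """One elementwise GRU update; h and x are (H, W) maps.
    R-MVSNet uses convolutional gates; scalar weights keep the sketch tiny."""
    z = sigmoid(Wz[0] * x + Wz[1] * h)           # update gate
    r = sigmoid(Wr[0] * x + Wr[1] * h)           # reset gate
    h_tilde = np.tanh(Wh[0] * x + Wh[1] * (r * h))
    return (1 - z) * h + z * h_tilde

rng = np.random.default_rng(0)
H, W, D = 4, 5, 16                  # image size, number of depth hypotheses
cost_maps = rng.random((D, H, W))   # one matching-cost map per depth plane

h = np.zeros((H, W))                # recurrent state: O(H*W), not O(D*H*W)
regularized = []
for d in range(D):                  # sweep the cost volume along depth
    h = gru_step(h, cost_maps[d], (0.5, 0.5), (0.5, 0.5), (1.0, 1.0))
    regularized.append(h)

depth_index = np.stack(regularized).argmin(axis=0)  # winner-take-all depth map
```

The loop touches one depth slice at a time, which is why memory no longer scales with the number of depth hypotheses during the recurrent pass.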
QCD corrections to the to charmonia semi-leptonic decays
We present a detailed analysis of the meson semi-leptonic decays, , up to next-to-leading order (NLO) QCD
corrections. We adopt the principle of maximum conformality (PMC) to set the
renormalization scales for those decays. After applying the PMC scale setting,
we determine the optimal renormalization scale for the
transition form factors (TFFs). Because of the same -terms, the
optimal PMC scales at the NLO level are the same for all those TFFs, i.e.
. We adopt a strong coupling model from
the massive perturbation theory (MPT) to achieve a reliable pQCD estimation in
this low-energy region. Furthermore, we adopt a monopole form to extrapolate
the TFFs over their entire allowed region. Then, we predict , , , , where the
uncertainties are squared averages of all the
mentioned error sources. We show that the present prediction of the production
cross section times branching ratio for relative to
that for , i.e. , is in better
agreement with the CDF measurements than previous predictions.
Comment: 11 pages, 5 figures
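A monopole extrapolation of a transition form factor usually refers to the generic shape below (standard notation, not necessarily the paper's):

```latex
F(q^2) \;=\; \frac{F(0)}{1 - q^2/m_{\mathrm{pole}}^2}
```

where F(0) is fixed by the calculation in the perturbatively reliable region and the effective pole mass m_pole is fitted there, so that the single expression covers the whole physical momentum-transfer range.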
Baryonium
In the framework of the heavy baryon perturbation theory, in which the
two-pion exchange is considered, the physical properties of
heavy-baryon-anti-heavy-baryon systems are revisited. The potentials between
heavy-baryon and anti-heavy-baryon are extracted in a holonomic form. Based on
the extracted potentials, the s-wave scattering phase shifts and scattering
lengths of - and - are
calculated. From these scattering features, it is found that the
- system can be bound only when the value of the
coupling constant is larger than that from the decay data of the
process. The binding condition for the
- system is also examined. The binding possibilities
of these systems deduced from the scattering calculations are also checked by
the bound-state calculation, and the binding energies are obtained if the
system can really be bound. The binding possibility of the
- system is investigated as well.
Comment: 23 pages, 18 figures
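The s-wave scattering lengths and phase shifts mentioned above are conventionally related through the effective-range expansion (a standard relation, not a formula specific to this paper):

```latex
k \cot \delta_0(k) \;=\; -\frac{1}{a_0} + \frac{1}{2} r_0 k^2 + \mathcal{O}(k^4)
```

Here k is the relative momentum, a_0 the s-wave scattering length, and r_0 the effective range; in the usual sign convention an interaction just strong enough to support a shallow bound state shows up as an anomalously large scattering length, which is the kind of diagnostic used in the scattering calculations above.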
MSR-net: Low-light Image Enhancement Using Deep Convolutional Network
Images captured in low-light conditions usually suffer from very low
contrast, which increases the difficulty of subsequent computer vision tasks
to a great extent. In this paper, a low-light image enhancement model based on
convolutional neural network and Retinex theory is proposed. Firstly, we show
that multi-scale Retinex is equivalent to a feedforward convolutional neural
network with different Gaussian convolution kernels. Motivated by this fact, we
consider a convolutional neural network (MSR-net) that directly learns an
end-to-end mapping between dark and bright images. Fundamentally different from
existing approaches, low-light image enhancement in this paper is regarded as a
machine learning problem. In this model, most of the parameters are optimized
by back-propagation, while the parameters of traditional models depend on the
artificial setting. Experiments on a number of challenging images reveal the
advantages of our method in comparison with other state-of-the-art methods from
the qualitative and quantitative perspective.
Comment: 9 pages
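The claimed equivalence is easy to see in code: each Retinex scale is exactly a convolution with a Gaussian kernel followed by a pointwise log-difference, and the multi-scale version averages the scales. A minimal NumPy/SciPy sketch (illustrative scales, not the paper's settings):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multi_scale_retinex(img, sigmas=(15, 80, 250), eps=1e-6):
    """Multi-scale Retinex: average over scales of
    log(image) - log(Gaussian-blurred image).
    Each scale is a convolution with a Gaussian kernel, which is why
    MSR can be rewritten as a feedforward convolutional network."""
    logI = np.log(img + eps)
    out = np.zeros_like(img)
    for s in sigmas:
        out += logI - np.log(gaussian_filter(img, sigma=s) + eps)
    return out / len(sigmas)

dark = np.random.rand(16, 16) * 0.1   # toy low-light image
enhanced = multi_scale_retinex(dark, sigmas=(1, 2, 4))
```

MSR-net replaces the fixed Gaussian kernels and weights of this classical pipeline with parameters learned end-to-end by back-propagation.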
Exclusive charmonium production from e+e- annihilation around the Z peak
We make a comparative and comprehensive study of exclusive charmonium
production at the e+e- collider with the collision energy either around the
Z-boson mass, for a super Z factory, or equal to 10.6 GeV, for the B
factories such as Belle and BABAR. We study the total cross sections for
charmonium production via the exclusive processes and , where and
represent the dominant color-singlet S-wave and P-wave charmonium
states, respectively. Total cross sections versus the collision energy
, together with their uncertainties, are presented, which clearly
show the relative importance of these channels. At the B factories, the
production channels via the virtual photon propagator dominate over the
channels via the Z propagator by about four orders of magnitude. At the
super Z factory, by contrast, due to the Z-boson resonance effect, the
Z-boson channels shall provide sizable or even dominant contributions in
comparison to the channels via the photon propagator. Sizable numbers of
exclusive charmonium events can be produced at the super Z factory with a
high luminosity up to , especially for the channel of , e.g. by taking GeV,
we shall have
,
, ,
,
, and events in one
year of operation. Thus, in addition to the B factories such as BABAR and
Belle, such a super Z factory shall provide another useful platform for
studying heavy-quarkonium properties and for testing QCD theories.
Comment: 19 pages, 9 figures. References and discussions updated. To be
published in Phys.Rev.
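The resonance enhancement invoked above is the familiar Breit-Wigner factor; schematically, for a resonance of mass M and total width Γ, the cross section near the peak behaves as (a generic illustration, not the paper's full formula):

```latex
\sigma(s) \;\propto\; \frac{s}{(s - M^2)^2 + M^2 \Gamma^2}
```

This factor is maximal when the collision energy sits on the resonance, which is why the boson-mediated channels, negligible at 10.6 GeV, can become comparable to or larger than the photon-mediated ones at a collider running on the peak.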
Towards a Mathematical Foundation of Immunology and Amino Acid Chains
We attempt to lay a mathematical foundation for immunology and amino acid
chains. To measure the similarities of these chains, a kernel on strings is
defined using only the sequence of the chains and a good amino acid
substitution matrix (e.g. BLOSUM62). The kernel is used in learning machines to
predict binding affinities of peptides to human leukocyte antigens DR (HLA-DR)
molecules. On both fixed allele (Nielsen and Lund 2009) and pan-allele (Nielsen
et al. 2010) benchmark databases, our algorithm achieves state-of-the-art
performance. The kernel is also used to define a distance on an HLA-DR allele
set based on which a clustering analysis precisely recovers the serotype
classifications assigned by the WHO (Nielsen and Lund 2009; Marsh et al. 2010).
These results suggest that our kernel successfully relates the chain structure
of both peptides and HLA-DR molecules to their biological functions, and that
it offers a simple, powerful, and promising methodology for immunology and
amino acid chain studies.
Comment: updated on June 25, 201
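A substitution-matrix string kernel of the general kind described above can be sketched in a few lines. The toy below uses a made-up symmetric score table over a 3-letter alphabet in place of BLOSUM62, and a simple sum over aligned k-mer pairs; it illustrates the idea, not the paper's exact kernel:

```python
import math

# Toy symmetric substitution scores over a 3-letter alphabet; the paper
# uses a full amino-acid matrix such as BLOSUM62.
S = {('A', 'A'): 4, ('A', 'C'): 0, ('A', 'G'): 1,
     ('C', 'C'): 5, ('C', 'G'): -1, ('G', 'G'): 4}

def score(a, b):
    """Symmetric lookup into the substitution table."""
    return S.get((a, b), S.get((b, a)))

def kmer_kernel(x, y, k=2, beta=0.5):
    """Sum over all k-mer pairs of exp(beta * substitution score),
    so similar substrings anywhere in the two chains contribute."""
    total = 0.0
    for i in range(len(x) - k + 1):
        for j in range(len(y) - k + 1):
            s = sum(score(x[i + t], y[j + t]) for t in range(k))
            total += math.exp(beta * s)
    return total

print(kmer_kernel("ACG", "ACG"))
```

Such a kernel value can then be plugged into any kernel machine (e.g. an SVM) to predict binding affinities, or turned into a distance for clustering alleles, which is how it is used in the study above.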