3,777 research outputs found

    A Distributed Method for Optimal Capacity Reservation

    We consider the problem of reserving link capacity in a network in such a way that any of a given set of flow scenarios can be supported. In the optimal capacity reservation problem, we choose the reserved link capacities to minimize the reservation cost. This problem reduces to a large linear program, with the number of variables and constraints on the order of the number of links times the number of scenarios. Small and medium-size problems are within the capabilities of generic linear program solvers. We develop a more scalable, distributed algorithm for the problem that alternates between solving (in parallel) one flow problem per scenario, and coordination steps, which connect the individual flows and the reservation capacities.
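
The coupling structure can be seen in a toy instance of the underlying linear program. The sketch below (network, prices, and demands are invented for illustration) writes the joint problem directly, one flow vector per scenario plus a shared reservation vector tied together by f ≤ c, and hands it to a generic LP solver, which the abstract notes suffices for small problems:

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance (hypothetical numbers): 3 nodes, links 0->1, 1->2, 0->2,
# and two demand scenarios that must both fit under one reservation c.
p = np.array([1.0, 1.0, 3.0])   # per-unit reservation price of each link
demands = [1.0, 2.0]            # units to ship 0 -> 2 in each scenario
S, E = len(demands), len(p)

# Variable layout: [f^0 (E entries), f^1 (E entries), c (E entries)]
n = (S + 1) * E
cost = np.concatenate([np.zeros(S * E), p])  # pay only for reserved capacity

# Flow conservation per scenario: node 0 ships d_s, node 1 is a pure relay.
A_eq, b_eq = [], []
for s, d in enumerate(demands):
    row = np.zeros(n); row[s*E + 0] = 1; row[s*E + 2] = 1   # f01 + f02 = d
    A_eq.append(row); b_eq.append(d)
    row = np.zeros(n); row[s*E + 0] = 1; row[s*E + 1] = -1  # f01 - f12 = 0
    A_eq.append(row); b_eq.append(0.0)

# Coupling: every scenario flow must fit under the reservation, f_e^s <= c_e.
A_ub, b_ub = [], []
for s in range(S):
    for e in range(E):
        row = np.zeros(n); row[s*E + e] = 1; row[S*E + e] = -1
        A_ub.append(row); b_ub.append(0.0)

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
c_opt = res.x[S*E:]
print(res.fun, c_opt)  # cheapest reservation routes both scenarios via 0->1->2
```

At an optimum the reservation equals the edgewise maximum of the scenario flows, since any slack capacity could be reduced at no cost to feasibility; that coupling is exactly what the coordination steps have to resolve.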

    Deep joint rain and haze removal from single images

    Rain removal from a single image is a challenge that has been studied for a long time. In this paper, a novel convolutional neural network based on the wavelet transform and the dark channel is proposed. On one hand, rain streaks correspond to the high-frequency components of the image, so the Haar wavelet transform is a good choice to separate the rain streaks from the background to some extent. More specifically, the LL subband of a rain image mostly expresses the background information, while the LH, HL, and HH subbands tend to represent the rain streaks and the edges. On the other hand, the accumulation of rain streaks over long distances makes a rain image look like a haze veil. We extract the dark channel of the rain image as a feature map in the network. By learning this mapping between the dark channels of the input and output images, we achieve haze removal in an indirect way. All of the parameters are optimized by back-propagation. Experiments on both synthetic and real-world datasets reveal that our method outperforms other state-of-the-art methods from both a qualitative and a quantitative perspective. Comment: 6 pages
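
The two hand-crafted cues are simple to sketch in plain NumPy (this is an illustration of the cues, not the authors' network): a one-level Haar decomposition yielding the LL approximation and the LH/HL/HH detail subbands, and a dark channel computed as a channel-wise minimum followed by a local minimum filter.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform: returns LL, LH, HL, HH subbands.
    Rows and columns are paired, so both image sides must be even."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # vertical average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # vertical difference
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0      # smooth background content
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return LL, LH, HL, HH

def dark_channel(rgb, patch=3):
    """Minimum over color channels followed by a local minimum filter."""
    mins = rgb.min(axis=2)
    H, W = mins.shape
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    out = np.empty_like(mins)
    for i in range(H):
        for j in range(W):
            out[i, j] = padded[i:i+patch, j:j+patch].min()
    return out

rng = np.random.default_rng(0)
img = rng.random((8, 8))
LL, LH, HL, HH = haar2d(img)
dc = dark_channel(rng.random((8, 8, 3)))
```

This Haar variant uses averages rather than the orthonormal 1/√2 scaling, which keeps LL on the same intensity scale as the input; any uniform region lands entirely in LL with zero detail coefficients.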

    RPC: A Large-Scale Retail Product Checkout Dataset

    Over recent years, growing interest has emerged in integrating computer vision technology into the retail industry. Automatic checkout (ACO) is one of the critical problems in this area, which aims to automatically generate the shopping list from images of the products to purchase. The main challenge of this problem comes from the large scale and the fine-grained nature of the product categories, as well as the difficulty of collecting training images that reflect realistic checkout scenarios, due to the continuous update of the products. Despite its significant practical and research value, this problem is not extensively studied in the computer vision community, largely due to the lack of a high-quality dataset. To fill this gap, in this work we propose a new dataset to facilitate relevant research. Our dataset enjoys the following characteristics: (1) It is by far the largest dataset in terms of both product image quantity and product categories. (2) It includes single-product images taken in a controlled environment and multi-product images taken by the checkout system. (3) It provides different levels of annotations for the checkout images. Compared with existing datasets, ours is closer to the realistic setting and can support a variety of research problems. Besides the dataset, we also benchmark the performance of various approaches on this dataset. The dataset and related resources can be found at \url{https://rpc-dataset.github.io/}. Comment: Project page: https://rpc-dataset.github.io

    The Higgs-Boson Decay $H\to gg$ to Order $\alpha_s^5$ under the mMOM-Scheme

    We study the decay width of the Higgs-boson decay $H\to gg$ up to order $\alpha_s^5$ under the minimal momentum space subtraction scheme (mMOM scheme). To improve the accuracy of the perturbative QCD prediction, we adopt the principle of maximum conformality (PMC) to set its renormalization scales. A detailed comparison of the total decay width and the separate decay widths at each perturbative order, before and after the PMC scale setting, is presented. The PMC adopts the renormalization group equation to fix the optimal scales of the process. After the PMC scale setting, the scale dependence of both the total and the separate decay widths is greatly suppressed, and the convergence of the perturbative QCD series is improved. By taking the Higgs mass $M_H = 125.09 \pm 0.21 \pm 0.11$ GeV, as recently given by the ATLAS and CMS collaborations, we predict $\Gamma(H\to gg)|_{\rm mMOM,PMC} = 339.1 \pm 1.7^{+4.0}_{-2.4}$ keV, where the first error is due to the Higgs mass and the second is the residual scale dependence obtained by varying the initial scale $\mu_r \in [M_H/2, 4M_H]$. Comment: 9 pages, 3 figures. Revised version to be published in J. Phys.
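
A rough numerical aside on why the choice of $\mu_r$ matters (one-loop running only; this is not the paper's mMOM/PMC machinery, and the inputs $\alpha_s(M_Z)=0.1181$, $M_Z=91.1876$ GeV are standard reference values):

```python
import math

def alpha_s(mu, nf=5, alpha_mz=0.1181, mz=91.1876):
    """One-loop running strong coupling, alpha_s(mu) in GeV units."""
    b0 = (33 - 2 * nf) / (12 * math.pi)
    return alpha_mz / (1 + alpha_mz * b0 * math.log(mu**2 / mz**2))

MH = 125.09
for mu in (MH / 2, MH, 4 * MH):
    # the leading-order H -> gg width scales like alpha_s^2, so the
    # spread in alpha_s over this mu range is amplified in the width
    print(mu, alpha_s(mu), alpha_s(mu) ** 2)
```

Even this modest spread of $\alpha_s$ over $\mu_r \in [M_H/2, 4M_H]$ translates into a visible spread of an $\alpha_s^2$-leading quantity, which is the ambiguity the PMC scale setting is designed to remove.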

    Recurrent MVSNet for High-resolution Multi-view Stereo Depth Inference

    Deep learning has recently demonstrated excellent performance for multi-view stereo (MVS). However, one major limitation of current learned MVS approaches is scalability: the memory-consuming cost volume regularization makes learned MVS hard to apply to high-resolution scenes. In this paper, we introduce a scalable multi-view stereo framework based on the recurrent neural network. Instead of regularizing the entire 3D cost volume in one go, the proposed Recurrent Multi-view Stereo Network (R-MVSNet) sequentially regularizes the 2D cost maps along the depth direction via the gated recurrent unit (GRU). This dramatically reduces the memory consumption and makes high-resolution reconstruction feasible. We first show the state-of-the-art performance achieved by the proposed R-MVSNet on the recent MVS benchmarks. Then, we further demonstrate the scalability of the proposed method on several large-scale scenarios, where previous learned approaches often fail due to the memory constraint. Code is available at https://github.com/YoYo000/MVSNet. Comment: Accepted by CVPR 2019
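
The memory argument can be sketched with a toy recurrence (a scalar-gate GRU in NumPy standing in for the paper's convolutional GRU; all sizes and weights below are invented): only one H×W state map is held while sweeping the D depth-wise cost maps, rather than a full H×W×D volume.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(h, x, Wz, Wr, Wh):
    """One per-pixel GRU update: gated blend of old state and candidate."""
    z = sigmoid(Wz[0] * x + Wz[1] * h)            # update gate
    r = sigmoid(Wr[0] * x + Wr[1] * h)            # reset gate
    h_tilde = np.tanh(Wh[0] * x + Wh[1] * (r * h))  # candidate state
    return (1 - z) * h + z * h_tilde

rng = np.random.default_rng(0)
H, W, D = 64, 80, 192                    # toy sizes, not the paper's
Wz, Wr, Wh = rng.normal(size=(3, 2))     # shared scalar gate weights

h = np.zeros((H, W))                     # only one 2-D map lives in memory
for d in range(D):                       # sweep cost maps along depth
    cost_map = rng.normal(size=(H, W))   # stand-in for the d-th cost map
    h = gru_step(h, cost_map, Wz, Wr, Wh)

print(h.shape)  # regularized state stays H x W, independent of D
```

Peak memory is O(H·W) regardless of the number of depth hypotheses D, which is precisely why the recurrent formulation scales to high resolutions where 3D cost-volume regularization runs out of memory.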

    QCD corrections to the $B_c$ to charmonia semi-leptonic decays

    We present a detailed analysis of the $B_c$ meson semi-leptonic decays, $B_c \to \eta_c (J/\psi) \ell \nu$, up to next-to-leading order (NLO) QCD corrections. We adopt the principle of maximum conformality (PMC) to set the renormalization scales for those decays. After applying the PMC scale setting, we determine the optimal renormalization scale for the $B_c \to \eta_c (J/\psi)$ transition form factors (TFFs). Because of the same $\{\beta_0\}$-terms, the optimal PMC scales at the NLO level are the same for all those TFFs, i.e. $\mu_r^{\rm PMC} \approx 0.8$ GeV. We adopt a strong coupling model from the massive perturbation theory (MPT) to achieve a reliable pQCD estimation in this low-energy region. Furthermore, we adopt a monopole form as an extrapolation of the $B_c \to \eta_c (J/\psi)$ TFFs to their whole allowable $q^2$ region. Then, we predict $\Gamma_{B_c \to \eta_c \ell \nu}(\ell=e,\mu) = (71.53^{+11.27}_{-8.90})\times 10^{-15}$ GeV, $\Gamma_{B_c \to \eta_c \tau \nu} = (27.14^{+5.93}_{-4.33})\times 10^{-15}$ GeV, $\Gamma_{B_c \to J/\psi \ell \nu}(\ell=e,\mu) = (106.31^{+18.59}_{-14.01})\times 10^{-15}$ GeV, and $\Gamma_{B_c \to J/\psi \tau \nu} = (28.25^{+6.02}_{-4.35})\times 10^{-15}$ GeV, where the uncertainties are squared averages of all the mentioned error sources. We show that the present prediction of the production cross section times branching ratio for $B^+_c \to J/\psi \ell^+ \nu$ relative to that for $B^+ \to J/\psi K^+$, i.e. $\Re(J/\psi \ell^+ \nu)$, is in better agreement with the CDF measurements than previous predictions. Comment: 11 pages, 5 figures
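
A monopole extrapolation of the kind mentioned above is simple enough to state in a few lines (the $F(0)$ and pole-mass values below are hypothetical placeholders for illustration, not the paper's fitted TFFs):

```python
def monopole_ff(q2, f0, m_pole):
    """Monopole parametrization F(q^2) = F(0) / (1 - q^2 / m_pole^2),
    used to extend a form factor known at low q^2 to the full range."""
    return f0 / (1.0 - q2 / m_pole**2)

# Hypothetical inputs for illustration only (not the paper's fit values):
f0, m_pole = 0.6, 6.3   # F(0), pole mass in GeV
for q2 in (0.0, 2.0, 4.0, 8.0):   # q^2 in GeV^2
    print(q2, monopole_ff(q2, f0, m_pole))
```

The single pole makes the form factor grow monotonically with $q^2$ below the pole, so one low-scale normalization plus one pole mass fixes the whole extrapolation.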

    Baryonium

    In the framework of heavy baryon perturbation theory, in which the two-pion exchange is considered, the physical properties of heavy-baryon and anti-heavy-baryon systems are revisited. The potentials between a heavy baryon and an anti-heavy baryon are extracted in a holonomic form. Based on the extracted potentials, the s-wave scattering phase shifts and scattering lengths of $\Lambda_c$-$\bar{\Lambda}_c$ and $\Sigma_c$-$\bar{\Sigma}_c$ are calculated. From these scattering features, it is found that the $\Lambda_c$-$\bar{\Lambda}_c$ system can be bound only when the value of the coupling constant $g_2$ is larger than that extracted from the decay data of the $\Sigma_c(\Sigma_c^*) \to \Lambda_c \pi$ process. The binding condition for the $\Sigma_c$-$\bar{\Sigma}_c$ system is also examined. The binding possibilities of these systems deduced from the scattering calculations are also checked by a bound-state calculation, and the binding energies are obtained when the system can indeed be bound. The binding possibility of the $\Lambda_b$-$\bar{\Lambda}_b$ system is investigated as well. Comment: 23 pages, 18 figures
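
The qualitative statement that binding occurs only above a critical coupling can be illustrated with a standard textbook toy model, unrelated to the paper's two-pion-exchange potentials: a 3-D attractive square well (units $\hbar = 1$) supports an s-wave bound state iff $\sqrt{2 m V_0}\, R \ge \pi/2$.

```python
import math

def has_s_wave_bound_state(V0, m, R):
    """3-D attractive square well (hbar = 1): an s-wave bound state
    exists iff sqrt(2*m*V0) * R >= pi/2, i.e. only above a critical
    well depth -- a toy stand-in for 'bound only above a critical
    coupling constant'."""
    return math.sqrt(2 * m * V0) * R >= math.pi / 2

m, R = 1.0, 1.0
V_crit = math.pi**2 / (8 * m * R**2)   # analytic critical depth
for V0 in (0.5 * V_crit, 1.5 * V_crit):
    print(V0, has_s_wave_bound_state(V0, m, R))
```

Below the critical depth the well only produces scattering states with a negative scattering length trend; crossing the threshold is what the paper's bound-state calculation probes for the extracted potentials.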

    MSR-net: Low-light Image Enhancement Using Deep Convolutional Network

    Images captured in low-light conditions usually suffer from very low contrast, which increases the difficulty of subsequent computer vision tasks to a great extent. In this paper, a low-light image enhancement model based on a convolutional neural network and Retinex theory is proposed. First, we show that multi-scale Retinex is equivalent to a feedforward convolutional neural network with different Gaussian convolution kernels. Motivated by this fact, we consider a convolutional neural network (MSR-net) that directly learns an end-to-end mapping between dark and bright images. Fundamentally different from existing approaches, low-light image enhancement in this paper is regarded as a machine learning problem. In this model, most of the parameters are optimized by back-propagation, whereas the parameters of traditional models depend on manual settings. Experiments on a number of challenging images reveal the advantages of our method in comparison with other state-of-the-art methods from a qualitative and quantitative perspective. Comment: 9 pages
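
The equivalence the paper starts from is easy to sketch: classic multi-scale Retinex is, per scale, a fixed Gaussian convolution followed by pointwise logs and a weighted sum, i.e. a feedforward network with frozen kernels. A minimal NumPy/SciPy version (the scale choices below are conventional MSR values, not the paper's learned ones):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multi_scale_retinex(img, sigmas=(15, 80, 250), eps=1e-6):
    """MSR output: average over scales of log(I) - log(G_sigma * I).
    Each term is a fixed Gaussian convolution plus pointwise logs,
    i.e. exactly one 'layer' of a frozen feedforward network."""
    img = img.astype(float) + eps        # avoid log(0)
    out = np.zeros_like(img)
    for s in sigmas:
        out += np.log(img) - np.log(gaussian_filter(img, s) + eps)
    return out / len(sigmas)

rng = np.random.default_rng(0)
dark = 0.05 * rng.random((32, 32))       # synthetic low-light patch
enhanced = multi_scale_retinex(dark)
```

Replacing the frozen Gaussian kernels and fixed log nonlinearities with learned convolutions is precisely the step that turns this pipeline into a trainable end-to-end model.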

    Exclusive charmonium production from $e^+e^-$ annihilation around the $Z^0$ peak

    We present a comparative and comprehensive study of exclusive charmonium production at an $e^+e^-$ collider with the collision energy either around the $Z^0$-boson mass, for a super $Z$ factory, or equal to 10.6 GeV, for $B$ factories such as Belle and BABAR. We study the total cross sections for charmonium production via the exclusive processes $e^+e^- \to \gamma^*/Z^0 \to H_1 + H_2$ and $e^+e^- \to \gamma^*/Z^0 \to H_1 + \gamma$, where $H_1$ and $H_2$ represent the dominant color-singlet $S$-wave and $P$-wave charmonium states, respectively. Total cross sections versus the $e^+e^-$ collision energy $\sqrt{s}$, together with their uncertainties, are presented, which clearly show the relative importance of these channels. At the $B$ factories, the production channels via the virtual $\gamma^*$ propagator dominate over the channels via the $Z^0$ propagator by about four orders of magnitude. At the super $Z$ factory, by contrast, due to the $Z^0$-boson resonance effect, the $Z^0$-boson channels provide sizable or even dominant contributions in comparison with the channels via the $\gamma^*$ propagator. Sizable numbers of exclusive charmonium events can be produced at a super $Z$ factory with a high luminosity of up to $10^{36}\,{\rm cm}^{-2}{\rm s}^{-1}$, especially for the channel $e^+e^- \to Z^0 \to H_1 + \gamma$; e.g., by taking $m_c = 1.50 \pm 0.20$ GeV, we would have $(5.0^{+0.8}_{-0.6})\times10^4$ $J/\psi$, $(7.5^{+1.1}_{-0.9})\times10^3$ $\eta_c$, $(6.2^{+3.3}_{-1.9})\times10^3$ $h_c$, $(3.1^{+1.7}_{-0.9})\times10^2$ $\chi_{c0}$, $(2.2^{+1.0}_{-0.4})\times10^3$ $\chi_{c1}$, and $(7.7^{+4.1}_{-2.4})\times10^2$ $\chi_{c2}$ events per operation year. Thus, in addition to $B$ factories such as BABAR and Belle, such a super $Z$ factory would provide another useful platform for studying heavy quarkonium properties and for testing QCD theories. Comment: 19 pages, 9 figures. References and discussions updated. To be published in Phys. Rev.
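
The quoted event counts follow from elementary rate arithmetic, N = σ · L · T. The sketch below assumes the common $10^7$ s "operation year" convention and a hypothetical cross section chosen only to reproduce the order of magnitude of the $J/\psi$ number:

```python
# Event-rate arithmetic behind "events per operation year": N = sigma * L * T.
# The 1e7 s operation year is a common convention (an assumption here),
# as is the example cross section below.
L_inst = 1e36          # instantaneous luminosity, cm^-2 s^-1
T = 1e7                # one operation year, s
L_int = L_inst * T     # integrated luminosity, cm^-2

sigma_fb = 5.0                   # hypothetical cross section, fb
sigma_cm2 = sigma_fb * 1e-39     # 1 fb = 1e-39 cm^2
N = sigma_cm2 * L_int            # expected event count
print(N)
```

At this luminosity one operation year integrates to $10^{43}\,{\rm cm}^{-2}$, so even femtobarn-level exclusive cross sections yield tens of thousands of events.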

    Towards a Mathematical Foundation of Immunology and Amino Acid Chains

    We attempt to set a mathematical foundation of immunology and amino acid chains. To measure the similarities of these chains, a kernel on strings is defined using only the sequence of the chains and a good amino acid substitution matrix (e.g., BLOSUM62). The kernel is used in learning machines to predict the binding affinities of peptides to human leukocyte antigen DR (HLA-DR) molecules. On both the fixed-allele (Nielsen and Lund 2009) and pan-allele (Nielsen et al. 2010) benchmark databases, our algorithm achieves state-of-the-art performance. The kernel is also used to define a distance on an HLA-DR allele set, based on which a clustering analysis precisely recovers the serotype classifications assigned by the WHO (Nielsen and Lund 2009, and Marsh et al. 2010). These results suggest that our kernel relates the chain structure of both peptides and HLA-DR molecules well to their biological functions, and that it offers a simple, powerful and promising methodology for immunology and amino acid chain studies. Comment: updated on June 25, 201
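
One simple way to turn a substitution matrix into a string kernel, in the spirit described above (the 3-letter matrix below is an invented placeholder, not BLOSUM62, and this is not the authors' exact kernel):

```python
import math

# Toy 3-letter substitution matrix (hypothetical scores, NOT BLOSUM62),
# symmetric, with the largest scores on the diagonal.
SUB = {
    ("A", "A"): 4, ("L", "L"): 4, ("K", "K"): 5,
    ("A", "L"): -1, ("A", "K"): -1, ("L", "K"): -2,
}

def score(a, b):
    return SUB.get((a, b), SUB.get((b, a)))

def kmer_kernel(s, t, k=2, beta=0.5):
    """Sum over all k-mer pairs of exp(beta * summed substitution score):
    one simple way to build a string kernel from a substitution matrix."""
    total = 0.0
    for i in range(len(s) - k + 1):
        for j in range(len(t) - k + 1):
            sc = sum(score(s[i + a], t[j + a]) for a in range(k))
            total += math.exp(beta * sc)
    return total

def normalized(s, t, **kw):
    """Cosine-normalized similarity, so normalized(s, s) == 1."""
    return kmer_kernel(s, t, **kw) / math.sqrt(
        kmer_kernel(s, s, **kw) * kmer_kernel(t, t, **kw))

print(normalized("ALKA", "ALKA"), normalized("ALKA", "KKLL"))
```

The normalization makes self-similarity exactly 1, and a distance such as $d(s,t) = \sqrt{2 - 2\,\mathrm{normalized}(s,t)}$ can then drive the kind of clustering analysis over alleles described above.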