
    Generalized Second Price Auction with Probabilistic Broad Match

    Generalized Second Price (GSP) auctions are widely used by search engines today to sell their ad slots. Most search engines support broad match between queries and bid keywords when executing GSP auctions; however, it has been revealed that the GSP auction with the standard broad-match mechanism currently in use (denoted SBM-GSP) has several theoretical drawbacks (e.g., its theoretical properties are known only for the single-slot case and the full-information setting, and even in this simple setting, the corresponding worst-case social welfare can be rather bad). To address this issue, we propose a novel broad-match mechanism, which we call the Probabilistic Broad-Match (PBM) mechanism. Different from SBM, which pools the ads bidding on all the keywords matched to a given query into one GSP auction, GSP with PBM (denoted PBM-GSP) randomly samples a keyword according to a predefined probability distribution and runs the GSP auction only for the ads bidding on this sampled keyword. We perform a comprehensive study of the theoretical properties of PBM-GSP. Specifically, we study its social welfare in the worst equilibrium, in both the full-information and Bayesian settings. The results show that PBM-GSP can generate larger welfare than SBM-GSP under mild conditions. Furthermore, we also study the revenue guarantee for PBM-GSP in the Bayesian setting. To the best of our knowledge, this is the first work on broad-match mechanisms for GSP that goes beyond the single-slot case and the full-information setting.
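
    The sampling-then-auction structure is easy to sketch. The toy Python below is our own illustration, not the paper's specification: the function names, the rank-by-bid/pay-next-bid GSP payment rule without quality scores, and all numbers are assumptions.

```python
import random

def pbm_gsp(keyword_dist, bids, slot_ctrs):
    """Toy sketch of PBM-GSP (illustrative only).

    keyword_dist: {keyword: probability} -- the predefined sampling distribution
                  over keywords broad-matched to the query.
    bids:         {keyword: {advertiser: bid per click}}.
    slot_ctrs:    click-through rates of the ad slots, highest first.
    """
    # 1. PBM step: sample a single matched keyword instead of pooling all of them (SBM).
    keywords = list(keyword_dist)
    probs = [keyword_dist[k] for k in keywords]
    kw = random.choices(keywords, weights=probs, k=1)[0]

    # 2. Run a plain GSP auction among the ads bidding on the sampled keyword:
    #    rank by bid; each winner pays the next-highest bid per click.
    ranked = sorted(bids.get(kw, {}).items(), key=lambda kv: kv[1], reverse=True)
    allocation = []
    for slot, (advertiser, _) in enumerate(ranked[:len(slot_ctrs)]):
        price = ranked[slot + 1][1] if slot + 1 < len(ranked) else 0.0
        allocation.append((advertiser, slot_ctrs[slot], price))
    return kw, allocation

# Toy usage with made-up keywords, bids, and CTRs.
kw, winners = pbm_gsp({"shoes": 0.7, "running shoes": 0.3},
                      {"shoes": {"A": 2.0, "B": 1.5}, "running shoes": {"C": 1.8}},
                      slot_ctrs=[0.3, 0.1])
```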

    Non-Autoregressive Neural Machine Translation with Enhanced Decoder Input

    Non-autoregressive translation (NAT) models, which remove the dependence on previous target tokens from the inputs of the decoder, achieve significant inference speedup but at the cost of inferior accuracy compared to autoregressive translation (AT) models. Previous work shows that the quality of the decoder inputs is important and largely impacts model accuracy. In this paper, we propose two methods to enhance the decoder inputs so as to improve NAT models. The first directly leverages a phrase table generated by conventional SMT approaches to translate source tokens into target tokens, which are then fed into the decoder as inputs. The second transforms source-side word embeddings into target-side word embeddings through sentence-level alignment and word-level adversarial learning, and then feeds the transformed word embeddings into the decoder as inputs. Experimental results show that our method largely outperforms the NAT baseline~\citep{gu2017non} by 5.11 BLEU scores on the WMT14 English-German task and 4.72 BLEU scores on the WMT16 English-Romanian task. Comment: AAAI 201
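
    A minimal sketch of the first method, assuming a word-level lookup table in place of a real phrase table (function names, the <unk> fallback, and the toy ids below are ours, not the paper's):

```python
import torch
import torch.nn as nn

def phrase_table_decoder_inputs(src_tokens, word_table, tgt_embedding, unk_id=0):
    """Map source token ids to target token ids with a word-level translation
    table and embed the result as the NAT decoder's input.

    src_tokens:    LongTensor [batch, src_len] on CPU (Tensor.apply_ is CPU-only).
    word_table:    dict {source id -> target id}; a simplification of the SMT
                   phrase table described in the paper.
    tgt_embedding: nn.Embedding over the target vocabulary.
    """
    # Translate token by token, falling back to <unk> when the table has no entry.
    mapped = src_tokens.clone().apply_(lambda t: word_table.get(int(t), unk_id))
    # The embedded pseudo-translation replaces copied source embeddings as decoder input.
    return tgt_embedding(mapped)

# Toy usage with made-up ids.
emb = nn.Embedding(num_embeddings=16, embedding_dim=8)
src = torch.tensor([[3, 5, 7]])
dec_in = phrase_table_decoder_inputs(src, {3: 1, 5: 2}, emb)   # shape [1, 3, 8]
```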

    Non-Autoregressive Machine Translation with Auxiliary Regularization

    As a new neural machine translation approach, Non-Autoregressive machine Translation (NAT) has attracted attention recently due to its high inference efficiency. However, the high efficiency has come at the cost of not capturing the sequential dependency on the target side of translation, which causes NAT to suffer from two kinds of translation errors: 1) repeated translations (due to indistinguishable adjacent decoder hidden states), and 2) incomplete translations (due to incomplete transfer of source-side information via the decoder hidden states). In this paper, we propose to address these two problems by improving the quality of decoder hidden representations via two auxiliary regularization terms in the training process of an NAT model. First, to make the hidden states more distinguishable, we regularize the similarity between consecutive hidden states based on the corresponding target tokens. Second, to force the hidden states to contain all the information in the source sentence, we leverage the dual nature of translation tasks (e.g., English to German and German to English) and minimize a backward reconstruction error to ensure that the hidden states of the NAT decoder are able to recover the source-side sentence. Extensive experiments conducted on several benchmark datasets show that both regularization strategies are effective and can alleviate the issues of repeated translations and incomplete translations in NAT models. The accuracy of NAT models is therefore improved significantly over state-of-the-art NAT models, with even better inference efficiency. Comment: AAAI 201
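
    A hedged sketch of what such auxiliary terms could look like (our own simplification; the paper's exact formulations, weightings, and handling of unequal source/target lengths differ):

```python
import torch
import torch.nn.functional as F

def similarity_regularization(hidden, tgt_tokens):
    """First term (sketch): make adjacent decoder states similar when the gold
    target tokens repeat, and dissimilar otherwise.

    hidden:     [batch, len, dim] decoder hidden states.
    tgt_tokens: [batch, len] gold target token ids.
    """
    cos = F.cosine_similarity(hidden[:, :-1], hidden[:, 1:], dim=-1)   # [batch, len-1]
    same = (tgt_tokens[:, :-1] == tgt_tokens[:, 1:]).float()
    # Penalize low similarity for repeated tokens and high similarity otherwise.
    loss = same * (1.0 - cos) + (1.0 - same) * torch.clamp(cos, min=0.0)
    return loss.mean()

def reconstruction_regularization(backward_decoder, hidden, src_tokens):
    """Second term (sketch): a backward decoder reconstructs the source sentence
    from the NAT decoder states, assuming (for simplicity) equal source and
    target lengths. `backward_decoder` is a hypothetical module mapping
    [batch, len, dim] states to [batch, len, src_vocab] logits.
    """
    logits = backward_decoder(hidden)
    return F.cross_entropy(logits.transpose(1, 2), src_tokens)
```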

    R3Det: Refined Single-Stage Detector with Feature Refinement for Rotating Object

    Rotation detection is a challenging task due to the difficulty of locating multi-angle objects and separating them effectively from the background. Though considerable progress has been made, in practical settings there remain challenges for rotated objects with large aspect ratios, dense distributions, and extreme category imbalance. In this paper, we propose an end-to-end refined single-stage rotation detector for fast and accurate object detection, using a progressive regression approach from coarse to fine granularity. Considering the shortcoming of feature misalignment in existing refined single-stage detectors, we design a feature refinement module to improve detection performance by obtaining more accurate features. The key idea of the feature refinement module is to re-encode the position information of the current refined bounding box to the corresponding feature points through pixel-wise feature interpolation, realizing feature reconstruction and alignment. For more accurate rotation estimation, an approximate SkewIoU loss is proposed to solve the problem that the calculation of SkewIoU is not differentiable. Experiments on three popular public remote sensing datasets (DOTA, HRSC2016, UCAS-AOD) as well as one scene text dataset (ICDAR2015) show the effectiveness of our approach. TensorFlow and PyTorch implementations are available at https://github.com/Thinklab-SJTU/R3Det_Tensorflow and https://github.com/SJTU-Thinklab-Det/r3det-on-mmdetection, and R3Det is also integrated into our open-source rotation detection benchmark: https://github.com/yangxue0827/RotationDetection. Comment: 13 pages, 12 figures, 9 tables
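
    The pixel-wise feature interpolation idea can be pictured with a short sketch (a generic bilinear sampler under our own assumptions, not the authors' released implementation):

```python
import math
import torch

def bilinear_sample(feature_map, x, y):
    """Fetch the feature vector at a fractional location (x, y) by bilinear
    interpolation; a generic illustration of pixel-wise feature interpolation.

    feature_map: [C, H, W] tensor. (x, y) is assumed to lie strictly inside the
    map, e.g. a corner or the centre of the refined bounding box projected onto
    the feature grid.
    """
    x0, y0 = math.floor(x), math.floor(y)
    x1, y1 = x0 + 1, y0 + 1
    wx, wy = x - x0, y - y0
    # Weighted sum of the four surrounding feature vectors; the interpolated
    # feature re-encodes the refined box position at that grid point.
    return ((1 - wx) * (1 - wy) * feature_map[:, y0, x0]
            + wx * (1 - wy) * feature_map[:, y0, x1]
            + (1 - wx) * wy * feature_map[:, y1, x0]
            + wx * wy * feature_map[:, y1, x1])

# Toy usage.
fmap = torch.randn(256, 32, 32)
feat = bilinear_sample(fmap, x=10.3, y=7.8)   # shape [256]
```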

    Systematic investigation of the rotational bands in nuclei with Z ≈ 100 using a particle-number conserving method based on a cranked shell model

    The rotational bands in nuclei with Z ≈ 100 are investigated systematically by using a cranked shell model (CSM) with the pairing correlations treated by a particle-number conserving (PNC) method, in which the blocking effects are taken into account exactly. By fitting the experimental single-particle spectra in these nuclei, a new set of Nilsson parameters (κ and μ) and deformation parameters (ε₂ and ε₄) is proposed. The experimental kinematic moments of inertia for the rotational bands in even-even, odd-A and odd-odd nuclei, and the bandhead energies of the 1-quasiparticle bands in odd-A nuclei, are reproduced quite well by the PNC-CSM calculations. By analyzing the ω-dependence of the occupation probability of each cranked Nilsson orbital near the Fermi surface and the contributions of valence orbitals in each major shell to the angular momentum alignment, the upbending mechanism in this region is understood clearly. Comment: 21 pages, 24 figures, extended version of arXiv:1101.3607 (Phys. Rev. C 83, 011304(R)); added refs.; added Fig. 4 and discussions; Phys. Rev. C, in press
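
    For orientation, the quantities the abstract compares with experiment follow the standard cranked-shell-model relations (a sketch of textbook definitions, not the paper's full PNC formalism):

```latex
% Cranked shell model Hamiltonian: Nilsson single-particle term (parameters kappa, mu
% and deformations epsilon_2, epsilon_4), Coriolis (cranking) term, and pairing H_P,
% with the pairing treated here by the particle-number conserving method.
H_{\mathrm{CSM}} = H_{\mathrm{Nil}} - \omega J_x + H_{\mathrm{P}},
% Kinematic moment of inertia extracted band by band and compared with experiment:
J^{(1)}(\omega) = \frac{\langle \Psi \mid J_x \mid \Psi \rangle}{\omega}.
```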