
    Estimation of photon number distribution and derivative characteristics of photon-pair sources

    The evaluation of a photon-pair source relies on metrics such as the photon-pair generation rate, heralding efficiency, and second-order correlation function, all of which are determined by the photon number distribution of the source. These metrics, however, can be altered by spectral or spatial filtering and by optical losses. In this paper, we theoretically describe the resulting changes in the photon number distribution as well as the effect of noise counts. We also review previous methods for estimating these characteristics and the photon number distribution. Moreover, we introduce an improved methodology for estimating the photon number distribution of photon-pair sources, and we discuss, through simulations and experiments, the accuracy of the characteristics calculated from the estimated (or reconstructed) photon number distribution.
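
    As a toy illustration of how such characteristics follow from a photon number distribution (not the estimation method the paper develops), the sketch below assumes a single-mode thermal pair-number distribution and independent binomial loss on the signal and idler arms; the mean pair number mu, the transmissions eta_s and eta_i, and the truncation N are illustrative parameters.

```python
import numpy as np
from scipy.stats import binom

def thermal_pn(mu, N):
    """Pair-number distribution p(n) of a single-mode two-mode
    squeezed vacuum with mean pair number mu (thermal statistics)."""
    n = np.arange(N)
    return mu**n / (1.0 + mu)**(n + 1)

def lossy_joint(p_n, eta_s, eta_i):
    """Joint distribution P(k_s, k_i) after independent binomial
    loss (transmissions eta_s, eta_i) on the signal and idler arms."""
    N = len(p_n)
    P = np.zeros((N, N))
    for n, pn in enumerate(p_n):
        bs = binom.pmf(np.arange(N), n, eta_s)   # signal-arm loss
        bi = binom.pmf(np.arange(N), n, eta_i)   # idler-arm loss
        P += pn * np.outer(bs, bi)
    return P

mu, eta_s, eta_i, N = 0.1, 0.5, 0.4, 30          # illustrative values
P = lossy_joint(thermal_pn(mu, N), eta_s, eta_i)

p_idler_click = 1.0 - P[:, 0].sum()                       # P(k_i >= 1)
p_coinc = 1.0 - P[0, :].sum() - P[:, 0].sum() + P[0, 0]   # P(k_s>=1, k_i>=1)
heralding_eff = p_coinc / p_idler_click

# Unheralded second-order correlation g2(0) of the signal arm,
# g2 = <k(k-1)> / <k>^2 over the marginal signal distribution
# (for a thermal marginal this stays at 2 under loss).
ks = np.arange(N)
p_s = P.sum(axis=1)
g2 = (p_s * ks * (ks - 1)).sum() / (p_s * ks).sum()**2

print(f"heralding efficiency ~ {heralding_eff:.3f}, signal g2(0) ~ {g2:.3f}")
```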

    Data Augmentation for Spoken Language Understanding via Joint Variational Generation

    Data scarcity is one of the main obstacles to domain adaptation in spoken language understanding (SLU), owing to the high cost of creating manually tagged SLU datasets. Recent work on neural text generative models, particularly latent variable models such as the variational autoencoder (VAE), has shown promising results in generating plausible and natural sentences. In this paper, we propose a novel generative architecture that leverages the generative power of latent variable models to jointly synthesize fully annotated utterances. Our experiments show that existing SLU models trained on the additional synthetic examples achieve performance gains. Our approach not only helps alleviate the data scarcity issue in the SLU task for many datasets but also consistently improves language understanding performance across various SLU models, as supported by extensive experiments and rigorous statistical testing. Comment: 8 pages, 3 figures, 4 tables. Accepted at AAAI 2019.
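
    As a rough sketch of the joint-generation idea (a generic VAE skeleton, not the paper's architecture), the model below encodes an utterance into a latent z via the reparameterization trick and decodes both the tokens and their slot tags from that same latent; the vocabulary and tag sizes, dimensions, and layer choices are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointVAE(nn.Module):
    """Toy VAE that jointly reconstructs an utterance and its slot tags."""
    def __init__(self, vocab=1000, n_tags=20, emb=64, hid=128, z_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.encoder = nn.GRU(emb, hid, batch_first=True)
        self.to_mu = nn.Linear(hid, z_dim)
        self.to_logvar = nn.Linear(hid, z_dim)
        self.z_to_h = nn.Linear(z_dim, hid)
        self.decoder = nn.GRU(emb, hid, batch_first=True)
        self.word_head = nn.Linear(hid, vocab)   # reconstruct tokens
        self.tag_head = nn.Linear(hid, n_tags)   # reconstruct slot tags

    def forward(self, tokens):
        x = self.embed(tokens)                    # (B, T, emb)
        _, h = self.encoder(x)                    # h: (1, B, hid)
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        h0 = torch.tanh(self.z_to_h(z)).unsqueeze(0)
        out, _ = self.decoder(x, h0)              # teacher forcing
        return self.word_head(out), self.tag_head(out), mu, logvar

def loss_fn(word_logits, tag_logits, tokens, tags, mu, logvar):
    # ELBO: joint reconstruction of tokens and tags plus KL regularizer.
    rec_w = F.cross_entropy(word_logits.transpose(1, 2), tokens)
    rec_t = F.cross_entropy(tag_logits.transpose(1, 2), tags)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec_w + rec_t + kl
```

    Sampling z from the standard normal prior and decoding from it would then yield new synthetic utterance-annotation pairs to add to the training set.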

    Learning to Compose Task-Specific Tree Structures

    For years, recursive neural networks (RvNNs) have been shown to be suitable for encoding text into fixed-length vectors, achieving good performance on several natural language processing tasks. However, the main drawback of RvNNs is that they require structured input, which makes data preparation and model implementation difficult. In this paper, we propose Gumbel Tree-LSTM, a novel tree-structured long short-term memory architecture that efficiently learns how to compose task-specific tree structures from plain text alone. Our model uses the Straight-Through Gumbel-Softmax estimator to dynamically decide the parent node among candidates and to compute gradients of that discrete decision. We evaluate the proposed model on natural language inference and sentiment analysis, and show that it outperforms previous models or is at least comparable to them. We also find that our model converges significantly faster than other models. Comment: AAAI 2018.
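
    The Straight-Through Gumbel-Softmax estimator mentioned above is a standard trick for backpropagating through a discrete choice; a minimal generic sketch follows (not this paper's parent-selection code). The forward pass makes a hard one-hot choice, while the backward pass uses the gradient of the relaxed softmax sample.

```python
import torch
import torch.nn.functional as F

def st_gumbel_softmax(logits, tau=1.0):
    """Straight-Through Gumbel-Softmax: one-hot forward, soft backward."""
    # Sample Gumbel(0, 1) noise; the small constants avoid log(0).
    gumbel = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)
    y_soft = F.softmax((logits + gumbel) / tau, dim=-1)   # relaxed sample
    index = y_soft.argmax(dim=-1, keepdim=True)
    y_hard = torch.zeros_like(y_soft).scatter_(-1, index, 1.0)
    # Forward pass returns the discrete one-hot choice; gradients flow
    # through y_soft only (the straight-through trick).
    return y_hard + (y_soft - y_soft.detach())

# e.g. selecting one of 5 candidate parent nodes:
logits = torch.randn(1, 5, requires_grad=True)
choice = st_gumbel_softmax(logits, tau=0.5)   # one-hot in the forward pass
```

    PyTorch also ships an equivalent built-in, F.gumbel_softmax(logits, tau=tau, hard=True).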