
    D2CSE: Difference-aware Deep continuous prompts for Contrastive Sentence Embeddings

    This paper describes Difference-aware Deep continuous prompts for Contrastive Sentence Embeddings (D2CSE), a method that learns sentence embeddings. Compared to state-of-the-art approaches, D2CSE computes sentence vectors that are exceptional at distinguishing subtle differences between similar sentences, using a simple neural architecture for continuous prompts. Unlike existing architectures that require multiple pretrained language models (PLMs) to process a pair of original and corrupted (subtly modified) sentences, D2CSE avoids the cumbersome fine-tuning of multiple PLMs by optimizing only the continuous prompts while performing multiple tasks -- i.e., contrastive learning and conditional replaced token detection, all done in a self-guided manner. D2CSE overloads a single PLM with continuous prompts and, as a result, greatly reduces memory consumption. The number of training parameters in D2CSE is reduced to about 1% of that in existing approaches, while the quality of sentence embeddings is substantially improved. We evaluate D2CSE on seven Semantic Textual Similarity (STS) benchmarks using three different metrics, namely Spearman's rank correlation, recall@K for a retrieval task, and the anisotropy of the embedding space measured by alignment and uniformity. Our empirical results suggest that shallow (not too meticulously devised) continuous prompts can be honed effectively for multiple NLP tasks and lead to improvements over existing state-of-the-art approaches.
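
    The abstract combines contrastive learning with trainable continuous prompts attached to a frozen PLM. Below is a minimal sketch of that general recipe, assuming a HuggingFace-style encoder that accepts inputs_embeds; the names (PromptEncoder, n_prompt, info_nce) are illustrative, and the conditional replaced-token-detection task is omitted, so this is not the paper's implementation.

        import torch
        import torch.nn.functional as F

        class PromptEncoder(torch.nn.Module):
            def __init__(self, plm, n_prompt=16, d_model=768):
                super().__init__()
                self.plm = plm
                # freeze the PLM; dropout stays active to create two views
                for p in self.plm.parameters():
                    p.requires_grad = False
                # the only trainable parameters: continuous prompt embeddings
                self.prompt = torch.nn.Parameter(0.02 * torch.randn(n_prompt, d_model))

            def forward(self, token_embeds, attn_mask):
                # prepend the shared prompt to every sequence in the batch
                b = token_embeds.size(0)
                prompt = self.prompt.unsqueeze(0).expand(b, -1, -1)
                x = torch.cat([prompt, token_embeds], dim=1)
                ones = torch.ones(b, prompt.size(1), dtype=attn_mask.dtype,
                                  device=x.device)
                mask = torch.cat([ones, attn_mask], dim=1)
                h = self.plm(inputs_embeds=x, attention_mask=mask).last_hidden_state
                return h[:, 0]          # first position as the sentence vector

        def info_nce(z1, z2, tau=0.05):
            # two dropout-augmented views of one batch; positives on the diagonal
            z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
            logits = z1 @ z2.t() / tau
            return F.cross_entropy(logits, torch.arange(z1.size(0), device=z1.device))

    Because only self.prompt receives gradients, the optimizer state and updates cover a tiny fraction of the PLM's parameters, which is the memory argument made in the abstract.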

    Look at the First Sentence: Position Bias in Question Answering

    Many extractive question answering (QA) models are trained to predict the start and end positions of answers. The choice of predicting answers as positions is mainly due to its simplicity and effectiveness. In this study, we hypothesize that when the distribution of answer positions is highly skewed in the training set (e.g., answers lie only in the k-th sentence of each passage), QA models predicting answers as positions can learn spurious positional cues and fail to give answers in different positions. We first illustrate this position bias in popular extractive QA models such as BiDAF and BERT and thoroughly examine how position bias propagates through each layer of BERT. To safely deliver position information without position bias, we train models with various de-biasing methods, including entropy regularization and bias ensembling. Among them, we find that using the prior distribution of answer positions as a bias model is very effective at reducing position bias, recovering the performance of BERT from 37.48% to 81.64% when trained on a biased SQuAD dataset. Comment: 13 pages, EMNLP 202
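
    The debiasing result above is in the spirit of ensembling with a fixed bias model. The sketch below shows one standard way to realize it, a bias product over start-position logits, assuming a precomputed empirical prior over answer start positions; the function name and exact combination are assumptions, not necessarily the authors' formulation.

        import torch
        import torch.nn.functional as F

        def debiased_start_loss(start_logits, start_prior, start_targets):
            """start_logits:  (B, L) from the QA model
               start_prior:   (L,)  empirical P(answer starts at i) on the train set
               start_targets: (B,)  gold start indices"""
            log_prior = torch.log(start_prior + 1e-12).unsqueeze(0)   # (1, L)
            # product of experts: the main model only has to explain what the
            # position prior cannot, so it is discouraged from learning the bias
            combined = F.log_softmax(start_logits, dim=-1) + log_prior
            return F.cross_entropy(combined, start_targets)

        # At inference time, predict from start_logits alone, without the prior,
        # so the positional shortcut never contributes to the answer.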

    Near-Complete Teleportation of a Superposed Coherent State

    The four Bell-type entangled coherent states, $|\alpha\rangle|{-\alpha}\rangle \pm |{-\alpha}\rangle|\alpha\rangle$ and $|\alpha\rangle|\alpha\rangle \pm |{-\alpha}\rangle|{-\alpha}\rangle$, can be discriminated with high probability using only linear optical means, as long as $|\alpha|$ is not too small. Based on this observation, we propose a simple scheme to almost completely teleport a superposed coherent state. The nonunitary transformation required to complete the teleportation can be achieved by embedding the receiver's field state in a larger Hilbert space consisting of the field and a single atom and performing a unitary transformation on this Hilbert space. Comment: 4 pages, 3 figures, two columns, LaTeX2
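
    The caveat that $|\alpha|$ must not be too small follows from the quasi-orthogonality of coherent states; a short LaTeX note making that step explicit (the state labels $\Psi^{\pm}, \Phi^{\pm}$ are assumed notation, not from the abstract):

        \[
          \langle \alpha \mid -\alpha \rangle = e^{-2|\alpha|^{2}},
          \qquad
          |\Psi^{\pm}\rangle \propto |\alpha\rangle|{-\alpha}\rangle \pm |{-\alpha}\rangle|\alpha\rangle,
          \quad
          |\Phi^{\pm}\rangle \propto |\alpha\rangle|\alpha\rangle \pm |{-\alpha}\rangle|{-\alpha}\rangle,
        \]
        % so the four states become reliably distinguishable once
        % e^{-2|\alpha|^{2}} \ll 1, i.e. once |\alpha| is not too small.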

    PhaseAug: A Differentiable Augmentation for Speech Synthesis to Simulate One-to-Many Mapping

    Previous generative adversarial network (GAN)-based neural vocoders are trained to reconstruct the exact ground-truth waveform from the paired mel-spectrogram and do not consider the one-to-many relationship of speech synthesis. This conventional training causes overfitting for both the discriminators and the generator, leading to periodicity artifacts in the generated audio signal. In this work, we present PhaseAug, the first differentiable augmentation for speech synthesis, which rotates the phase of each frequency bin to simulate one-to-many mapping. With our proposed method, we outperform baselines without any architecture modification. Code and audio samples will be available at https://github.com/mindslab-ai/phaseaug. Comment: Submitted to ICASSP 202
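
    A minimal sketch of the core operation described above, rotating the phase of each STFT frequency bin differentiably in PyTorch; the rotation scheme and names here are illustrative and simpler than the actual PhaseAug implementation.

        import torch

        def phase_rotate(wav, n_fft=1024, hop=256):
            # wav: (B, T) real waveform batch
            window = torch.hann_window(n_fft, device=wav.device)
            spec = torch.stft(wav, n_fft, hop_length=hop, window=window,
                              return_complex=True)            # (B, F, frames)
            # one random rotation angle per frequency bin, shared across frames
            phi = (torch.rand(spec.size(1), device=wav.device) * 2 - 1) * torch.pi
            rot = torch.exp(1j * phi).view(1, -1, 1)
            spec = spec * rot                                  # differentiable
            return torch.istft(spec, n_fft, hop_length=hop, window=window,
                               length=wav.size(-1))

    Since every step (STFT, complex multiply, inverse STFT) is differentiable, gradients pass through the augmentation to both the generator and the discriminators.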

    Is Cross-modal Information Retrieval Possible without Training?

    Encoded representations from a pretrained deep learning model (e.g., BERT text embeddings, penultimate CNN layer activations of an image) convey a rich set of features beneficial for information retrieval. Embeddings for a particular modality of data occupy a high-dimensional space of their own, but they can be semantically aligned to another modality by a simple mapping without training a deep neural net. In this paper, we take a simple mapping computed from least squares and the singular value decomposition (SVD), as a solution to the Procrustes problem, to serve as a means of cross-modal information retrieval. That is, given information in one modality such as text, the mapping helps us locate a semantically equivalent data item in another modality such as image. Using off-the-shelf pretrained deep learning models, we have experimented with these simple cross-modal mappings in text-to-image and image-to-text retrieval tasks. Despite their simplicity, our mappings perform reasonably well, reaching a highest accuracy of 77% on recall@10, which is comparable to approaches that require costly neural net training and fine-tuning. We have improved the simple mappings by applying contrastive learning to the pretrained models. Contrastive learning can be thought of as properly biasing the pretrained encoders to enhance the cross-modal mapping quality. We have further improved the performance by using a multilayer perceptron with gating (gMLP), a simple neural architecture.
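
    The mapping described above is the classical orthogonal Procrustes solution, which has a closed form via the SVD. A minimal sketch, assuming paired, equal-dimensional embeddings; the names procrustes_map and retrieve are illustrative.

        import numpy as np

        def procrustes_map(X, Y):
            """X: (n, d) text embeddings, Y: (n, d) paired image embeddings.
            Returns the orthogonal W minimizing ||X W - Y||_F."""
            U, _, Vt = np.linalg.svd(X.T @ Y)
            return U @ Vt

        def retrieve(query_text, gallery_img, W, k=10):
            # map text queries into image space, then rank by cosine similarity
            q = query_text @ W
            q /= np.linalg.norm(q, axis=1, keepdims=True)
            g = gallery_img / np.linalg.norm(gallery_img, axis=1, keepdims=True)
            return np.argsort(-(q @ g.T), axis=1)[:, :k]   # top-k indices per query

    No gradient-based training is involved: the mapping costs one d-by-d SVD, which is the sense in which retrieval is possible "without training".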

    Simple Questions Generate Named Entity Recognition Datasets

    Recent named entity recognition (NER) models often rely on human-annotated datasets, which require extensive professional knowledge of the target domain and entities. This work introduces an ask-to-generate approach, which automatically generates NER datasets by asking simple natural language questions to an open-domain question answering system (e.g., "Which disease?"). Despite using fewer training resources, our models trained solely on the generated datasets largely outperform strong low-resource models by 20.8 F1 score on average across six popular NER benchmarks. Our models also show competitive performance with rich-resource models that additionally leverage in-domain dictionaries provided by domain experts. In few-shot NER, we outperform the previous best model by 5.2 F1 score on three benchmarks and achieve new state-of-the-art performance. Comment: Code available at https://github.com/dmis-lab/GeNE
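
    A minimal sketch of the ask-to-generate idea, using an off-the-shelf extractive QA pipeline as a stand-in for the open-domain QA system in the paper; the confidence threshold, label, and helper name are assumptions.

        from transformers import pipeline

        qa = pipeline("question-answering")   # any extractive QA checkpoint

        def generate_ner_example(passage, question="Which disease?",
                                 label="DISEASE", min_score=0.5):
            # ask a simple type-specific question and turn the extracted
            # answer span into a pseudo NER annotation
            pred = qa(question=question, context=passage)
            if pred["score"] < min_score:
                return []                     # skip low-confidence pseudo-labels
            return [(pred["start"], pred["end"], label)]

        spans = generate_ner_example(
            "Aspirin is used to reduce fever and relieve pain from influenza.")
        # e.g. [(start, end, "DISEASE")] covering the span "influenza"

    Repeating this with one question per entity type over a large corpus yields a synthetic NER dataset with no human span annotation.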

    Fast frequency discrimination and phoneme recognition using a biomimetic membrane coupled to a neural network

    In the human ear, the basilar membrane plays a central role in sound recognition. When excited by sound, this membrane responds with a frequency-dependent displacement pattern that is detected and identified by the auditory hair cells combined with the human neural system. Inspired by this structure, we designed and fabricated an artificial membrane that produces a spatial displacement pattern in response to an audible signal, which we used to train a convolutional neural network (CNN). When trained with single-frequency tones, this system can unambiguously distinguish tones closely spaced in frequency. When instead trained to recognize spoken vowels, this system outperforms existing methods for phoneme recognition, including the discrete Fourier transform (DFT), zoom FFT, and chirp z-transform, especially when tested in short time windows. This sound recognition scheme therefore promises significant benefits in fast and accurate sound identification compared to existing methods. Comment: 7 pages, 4 figure
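
    The short-window advantage claimed above can be made concrete with a small numerical check of the DFT's resolution limit; the tone frequencies and window length below are illustrative, not from the paper.

        import numpy as np

        # Two tones 20 Hz apart inside a 25 ms window fall within one DFT bin
        # (resolution ~ 1 / 0.025 s = 40 Hz), so their spectral peaks merge.
        fs, T = 16_000, 0.025                  # sample rate, window length
        t = np.arange(int(fs * T)) / fs
        x = np.sin(2 * np.pi * 1000 * t) + np.sin(2 * np.pi * 1020 * t)

        spectrum = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(len(x), 1 / fs)
        peak = freqs[np.argmax(spectrum)]
        print(f"bin spacing = {freqs[1]:.1f} Hz, merged peak near {peak:.0f} Hz")

    A detector that reads the membrane's spatial displacement pattern is not bound by this time-frequency trade-off in the same way, which is the motivation for the comparison in the abstract.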