
    Speeding up Context-based Sentence Representation Learning with Non-autoregressive Convolutional Decoding

    Context plays an important role in human language understanding, thus it may also be useful for machines that learn vector representations of language. In this paper, we explore an asymmetric encoder-decoder structure for unsupervised context-based sentence representation learning. We carefully designed experiments to show that neither an autoregressive decoder nor an RNN decoder is required. We then designed a model that keeps an RNN as the encoder while using a non-autoregressive convolutional decoder. We further combined a suite of effective design choices to significantly improve model efficiency while also achieving better performance. Our model is trained on two different large unlabelled corpora, and in both cases the transferability is evaluated on a set of downstream NLP tasks. We empirically show that our model is simple and fast while producing rich sentence representations that excel in downstream tasks.
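The architecture described above, an RNN encoder paired with a non-autoregressive convolutional decoder, can be sketched in a few lines. This is a minimal shape-level NumPy illustration under my own assumptions, not the paper's actual model: all dimensions, weight names, and the plain tanh RNN are placeholders, and a real decoder would add position information so target positions differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_encode(x, W, U, b):
    """Simple tanh RNN encoder: sequence x of shape (T, d_in) -> sentence vector (d_h,)."""
    h = np.zeros(W.shape[0])
    for x_t in x:
        h = np.tanh(W @ x_t + U @ h + b)
    return h

def conv_decode(z, K, T_out):
    """Non-autoregressive convolutional decoder: every target position is
    predicted in parallel from the sentence vector z -- there is no
    word-by-word feedback loop. Kernel K has shape (k, d_h, vocab).
    (Here each position sees the same input; the real model would mix in
    position embeddings so outputs differ across positions.)"""
    k = K.shape[0]
    seq = np.tile(z, (T_out + k - 1, 1))        # broadcast z to all positions
    out = np.empty((T_out, K.shape[2]))
    for t in range(T_out):
        out[t] = np.einsum('kh,khv->v', seq[t:t + k], K)
    return out

d_in, d_h, vocab, k, T = 8, 16, 10, 3, 5        # illustrative sizes
W = rng.normal(size=(d_h, d_in))
U = rng.normal(size=(d_h, d_h))
b = np.zeros(d_h)
K = rng.normal(size=(k, d_h, vocab))

z = rnn_encode(rng.normal(size=(7, d_in)), W, U, b)  # encode a 7-word sentence
logits = conv_decode(z, K, T)                        # decode 5 positions at once
print(logits.shape)  # (5, 10): one word distribution per target position
```

Because decoding has no sequential dependency, all target positions can be computed in one pass, which is the source of the speed-up the abstract claims.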

    Rethinking Skip-thought: A Neighborhood based Approach

    We study the skip-thought model with neighborhood information as weak supervision. More specifically, we propose a skip-thought neighbor model that treats the adjacent sentences as a neighborhood. We train our skip-thought neighbor model on a large corpus with continuous sentences, and then evaluate the trained model on 7 tasks, which include semantic relatedness, paraphrase detection, and classification benchmarks. Both quantitative comparison and qualitative investigation are conducted. We empirically show that our skip-thought neighbor model performs as well as the skip-thought model on the evaluation tasks. In addition, we found that incorporating an autoencoder path did not help our model perform better, while it hurt the performance of the skip-thought model.
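The neighborhood-as-weak-supervision idea above amounts to pairing each sentence with the sentences adjacent to it in the corpus and training on those pairs. A minimal sketch of that pairing step, under my own assumptions (the helper name `neighbor_pairs` and the `window` parameter are hypothetical, not from the paper):

```python
def neighbor_pairs(sentences, window=1):
    """Pair each sentence with every sentence within `window` positions of it.
    These (sentence, neighbor) pairs are the weak supervision signal: the
    model is trained to represent a sentence by its neighborhood."""
    pairs = []
    for i, s in enumerate(sentences):
        for j in range(max(0, i - window), min(len(sentences), i + window + 1)):
            if j != i:
                pairs.append((s, sentences[j]))
    return pairs

corpus = ["s0", "s1", "s2", "s3"]   # stand-ins for continuous sentences
print(neighbor_pairs(corpus))
# [('s0', 's1'), ('s1', 's0'), ('s1', 's2'), ('s2', 's1'), ('s2', 's3'), ('s3', 's2')]
```

With `window=1` this reduces to the classic skip-thought setup of predicting the immediately preceding and following sentences; a larger `window` widens the neighborhood.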

    Task-phase-specific dynamics of basal forebrain neuronal ensembles.

    Cortically projecting basal forebrain neurons play a critical role in learning and attention, and their degeneration accompanies age-related impairments in cognition. Despite the impressive anatomical and cell-type complexity of this system, currently available data suggest that basal forebrain neurons lack complexity in their response fields, with activity primarily reflecting only macro-level brain states such as sleep and wake, the onset of relevant stimuli, and/or reward obtainment. The current study examined the spiking activity of basal forebrain neuron populations across multiple phases of a selective attention task, addressing, in particular, the issue of complexity in ensemble firing patterns across time. Clustering techniques applied to the full population revealed a large number of distinct categories of task-phase-specific activity patterns. Unique population firing-rate vectors defined each task phase, and most categories of task-phase-specific firing had counterparts with opposing firing patterns. An analogous set of task-phase-specific firing patterns was also observed in a population of posterior parietal cortex neurons. Thus, consistent with the known anatomical complexity, basal forebrain population dynamics are capable of differentially modulating their cortical targets according to the unique sets of environmental stimuli, motor requirements, and cognitive processes associated with different task phases.
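The clustering analysis described above, grouping neurons by their firing-rate vectors across task phases and finding opposing pairs of patterns, can be illustrated with a toy example. This is a hedged sketch on synthetic data, not the paper's pipeline: the plain k-means routine, the two-motif synthetic "neurons", and all sizes are my own illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans(X, centers, iters=20):
    """Plain k-means: cluster the rows of X, starting from the given centers."""
    centers = centers.astype(float).copy()
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(1)
        for c in range(len(centers)):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(0)
    return labels

# Synthetic "neurons": firing rates across 4 task phases, in two opposing
# motifs (one group fires in phase 2, the other is suppressed there).
up = rng.normal([1, 5, 1, 1], 0.2, size=(20, 4))
down = rng.normal([5, 1, 5, 5], 0.2, size=(20, 4))
X = np.vstack([up, down])

# z-score each neuron so clustering sees the firing *pattern* across phases,
# not the neuron's overall rate
X = (X - X.mean(1, keepdims=True)) / X.std(1, keepdims=True)

labels = kmeans(X, centers=X[[0, 20]])  # one seed neuron from each motif
# the two opposing motifs land in different clusters
print(sorted(set(labels[:20])), sorted(set(labels[20:])))
```

After z-scoring, the two motifs become sign-flipped versions of the same vector, which is exactly the "opposing firing patterns" structure the abstract reports at population scale.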