
    Islands in the grammar? Standards of evidence

    When considering how a complex system operates, the observable behavior depends upon both the architectural properties of the system and the principles governing its operation. As a simple example, the behavior of computer chess programs depends upon both the processing speed and resources of the computer and the programmed rules that determine how the computer selects its next move. Despite using very similar search techniques, a computer from the 1990s might make a move that its 1970s forerunner would overlook, simply because it has more raw computational power. From the naïve observer's perspective, however, it is not superficially evident whether a particular move is dispreferred or overlooked because of computational limitations or because of the search strategy and decision algorithm. In the case of computers, evidence for the source of any particular behavior can ultimately be found by inspecting the code and tracking the decision process of the computer. But with the human mind, such options are not yet available. The preference for certain behaviors and the dispreference for others may in principle follow from cognitive limitations, from task-related principles that preclude certain kinds of cognitive operations, or from some combination of the two. This uncertainty gives rise to the fundamental problem of finding evidence for one explanation over the other. Such a problem arises in the analysis of syntactic island effects – the focus of this paper.
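
    To make the chess analogy concrete, the sketch below (entirely hypothetical, not from the paper; the game tree and evaluation values are invented) runs the same depth-limited minimax search at two different depths. With identical rules, the extra search depth alone changes which move is chosen, mirroring the point that behavior underdetermines its source.

```python
# Hypothetical illustration: identical search rules, different compute budget.
# A "sacrifice" move that looks bad one ply deep is chosen once the search
# can afford to look two plies deep.

def minimax(state, depth, maximizing, children, value):
    """Plain depth-limited minimax over a game tree given as callbacks."""
    kids = children(state)
    if depth == 0 or not kids:
        return value(state)
    scores = [minimax(k, depth - 1, not maximizing, children, value) for k in kids]
    return max(scores) if maximizing else min(scores)

def best_move(state, depth, children, value):
    """Pick the child of `state` with the best minimax score."""
    return max(children(state),
               key=lambda k: minimax(k, depth - 1, False, children, value))

# Toy game tree and static evaluations (all values invented for the demo).
TREE = {"root": ["a", "b"], "a": [], "b": ["b1", "b2"], "b1": [], "b2": []}
HEURISTIC = {"a": 1, "b": 0, "b1": 5, "b2": 4}  # "b" looks bad now, pays off later

print(best_move("root", 1, TREE.get, HEURISTIC.get))  # shallow search -> 'a'
print(best_move("root", 2, TREE.get, HEURISTIC.get))  # deeper search  -> 'b'
```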

    Crossings as a side effect of dependency lengths

    The syntactic structure of sentences exhibits a striking regularity: dependencies tend not to cross when drawn above the sentence. We investigate two competing explanations. The traditional hypothesis is that this trend arises from an independent principle of syntax that reduces crossings practically to zero. The alternative hypothesis is that crossings are a side effect of dependency lengths, i.e. sentences with shorter dependency lengths should tend to have fewer crossings. We are able to reject the traditional view in the majority of languages considered. The alternative hypothesis can lead to a more parsimonious theory of language.
    Comment: the discussion section has been expanded significantly; in press in Complexity (Wiley).
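
    As a minimal sketch of the two quantities involved (my own illustration; the paper's actual counting procedure may differ), the snippet below computes a sentence's total dependency length and its number of arc crossings when dependencies are drawn above the sentence. Under the side-effect hypothesis, sentences that score low on the first quantity should tend to score low on the second.

```python
# Illustrative sketch (not the paper's code): total dependency length and
# number of crossings for arcs drawn above the sentence. Two arcs cross
# iff exactly one endpoint of one arc lies strictly inside the other's span.

from itertools import combinations

def dependency_length(edges):
    """Sum of linear distances between heads and dependents (1-indexed positions)."""
    return sum(abs(head - dep) for head, dep in edges)

def crossings(edges):
    """Count crossing arc pairs; arcs sharing a word never cross."""
    arcs = [tuple(sorted(e)) for e in edges]
    count = 0
    for (i, j), (k, l) in combinations(arcs, 2):
        if len({i, j, k, l}) == 4 and (i < k < j) != (i < l < j):
            count += 1
    return count

tree = [(1, 3), (2, 4), (4, 5)]   # toy parse: arcs (1,3) and (2,4) cross
print(dependency_length(tree))    # 2 + 2 + 1 = 5
print(crossings(tree))            # 1
```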

    Deep learning for extracting protein-protein interactions from biomedical literature

    State-of-the-art methods for protein-protein interaction (PPI) extraction are primarily feature-based or kernel-based, leveraging lexical and syntactic information. But how to incorporate such knowledge into recent deep learning methods remains an open question. In this paper, we propose a multichannel dependency-based convolutional neural network model (McDepCNN). It applies one channel to the embedding vector of each word in the sentence, and another channel to the embedding vector of the head of the corresponding word. Therefore, the model can use richer information obtained from different channels. Experiments on two public benchmarking datasets, AIMed and BioInfer, demonstrate that McDepCNN compares favorably to the state-of-the-art rich-feature and single-kernel based methods. In addition, McDepCNN achieves a 24.4% relative improvement in F1-score over the state-of-the-art methods on cross-corpus evaluation and a 12% improvement in F1-score over kernel-based methods on "difficult" instances. These results suggest that McDepCNN generalizes more easily over different corpora, and is capable of capturing long-distance features in the sentences.
    Comment: Accepted for publication in Proceedings of the 2017 Workshop on Biomedical Natural Language Processing; 10 pages, 2 figures, 6 tables.
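
    The following PyTorch sketch is a reconstruction of the two-channel idea from the abstract alone; layer sizes, pooling, and all hyperparameters are assumptions, not the authors' configuration. One input channel carries each word's embedding, the other carries the embedding of that word's syntactic head, and a single 2-D convolution reads both channels jointly.

```python
# Hedged sketch of a multichannel dependency-based CNN, assuming a
# binary interaction/no-interaction classification task.

import torch
import torch.nn as nn

class TwoChannelDepCNN(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, n_filters=64, win=3, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        # One 2-D convolution over (channels=2, seq_len, emb_dim); each filter
        # spans `win` consecutive words across the full embedding width.
        self.conv = nn.Conv2d(2, n_filters, kernel_size=(win, emb_dim))
        self.fc = nn.Linear(n_filters, n_classes)

    def forward(self, word_ids, head_ids):
        # word_ids[b, i] is word i; head_ids[b, i] is the vocabulary id of
        # word i's syntactic head (heads assumed precomputed by a parser).
        words = self.emb(word_ids)               # (batch, seq, emb)
        heads = self.emb(head_ids)               # (batch, seq, emb)
        x = torch.stack([words, heads], dim=1)   # (batch, 2, seq, emb)
        h = torch.relu(self.conv(x)).squeeze(3)  # (batch, filters, seq - win + 1)
        pooled = h.max(dim=2).values             # max-over-time pooling
        return self.fc(pooled)                   # classification logits

model = TwoChannelDepCNN(vocab_size=5000)
word_ids = torch.randint(0, 5000, (4, 20))
logits = model(word_ids, head_ids=word_ids)      # dummy heads, just to show shapes
print(logits.shape)                              # torch.Size([4, 2])
```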

    Tracking Typological Traits of Uralic Languages in Distributed Language Representations

    Although linguistic typology has a long history, computational approaches have only recently gained popularity. The use of distributed representations in computational linguistics has also become increasingly popular. A recent development is to learn distributed representations of language such that typologically similar languages are spatially close to one another. Although empirical successes have been shown for such language representations, they have not been subjected to much typological probing. In this paper, we first examine whether this type of language representation is empirically useful for model transfer between Uralic languages in deep neural networks. We then investigate which typological features are encoded in these representations by attempting to predict features in the World Atlas of Language Structures at various stages of fine-tuning of the representations. We focus on Uralic languages, and find that some typological traits can be automatically inferred with accuracies well above a strong baseline.
    Comment: Finnish abstract included in the paper.
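
    A hedged sketch of this kind of probing setup follows. The language codes are real ISO 639-3 codes for Uralic languages, but the vectors and word-order labels are fabricated stand-ins, not actual learned representations or WALS data: fit a simple classifier on per-language vectors and test whether a typological feature of a held-out language can be predicted.

```python
# Toy probe: can a linear classifier recover a typological feature
# from per-language vectors? All data here is invented for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
langs = ["fin", "est", "hun", "sme", "krl", "mdf", "udm", "kpv"]
X = rng.normal(size=(len(langs), 32))   # stand-in: one 32-dim vector per language
y = np.array(["SVO", "SVO", "SOV", "SVO", "SVO", "SOV", "SOV", "SVO"])  # invented labels

correct = 0
for train, test in LeaveOneOut().split(X):   # probe each held-out language
    clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
    correct += int(clf.predict(X[test])[0] == y[test][0])
print(f"leave-one-out accuracy: {correct / len(langs):.2f}")
```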