
    Dependency Parsing with Dilated Iterated Graph CNNs

    Dependency parses are an effective way to inject linguistic knowledge into many downstream tasks, and many practitioners wish to efficiently parse sentences at scale. Recent advances in GPU hardware have enabled neural networks to achieve significant gains over the previous best models, but these models still fail to exploit GPUs' capacity for massive parallelism because they process the sentence sequentially. In response, we propose Dilated Iterated Graph Convolutional Neural Networks (DIG-CNNs) for graph-based dependency parsing, a graph convolutional architecture that allows for efficient end-to-end GPU parsing. In experiments on the English Penn TreeBank benchmark, we show that DIG-CNNs perform on par with some of the best neural network parsers. Comment: 2nd Workshop on Structured Prediction for Natural Language Processing (at EMNLP '17)
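The core idea of the abstract, stacking dilated convolutions so each token's receptive field grows exponentially with depth while every position is computed in parallel, can be sketched as follows. This is a minimal NumPy toy, not the paper's implementation; the layer shapes, ReLU nonlinearity, and doubling dilation schedule are illustrative assumptions.

```python
import numpy as np

def dilated_conv1d(tokens, weights, dilation):
    """One dilated width-3 convolution over token vectors.

    tokens:   (n, d) matrix of token representations
    weights:  (3, d, d) kernel, one d-by-d map per tap (left/center/right)
    dilation: gap between the three tap positions
    """
    n, d = tokens.shape
    out = np.zeros_like(tokens)
    for i in range(n):
        for tap, offset in enumerate((-dilation, 0, dilation)):
            j = i + offset
            if 0 <= j < n:          # zero-pad outside the sentence
                out[i] += tokens[j] @ weights[tap]
    return np.maximum(out, 0.0)     # ReLU (an assumed nonlinearity)

def dig_cnn_block(tokens, weight_stack):
    """Iterate layers with dilations 1, 2, 4, ... so depth k covers a
    receptive field of width O(2**k) without any sequential scan."""
    h = tokens
    for layer, w in enumerate(weight_stack):
        h = dilated_conv1d(h, w, dilation=2 ** layer)
    return h
```

Because every output position depends only on the previous layer, each layer is a batched matrix multiply on a GPU, which is the parallelism advantage the abstract contrasts with sequential (e.g. recurrent or transition-based) parsers.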

    Modeling the Spread of Biologically-Inspired Internet Worms

    Infections by malicious software, such as Internet worms, spreading on computer networks can have devastating consequences, resulting in loss of information, time, and money. To better understand how these worms spread, and thus how to more effectively limit future infections, we apply the household model from epidemiology to simulate the proliferation of adaptive and non-adaptive preference-scanning worms, which take advantage of biologically-inspired strategies. From scans of the actual distribution of Web servers on the Internet, we find that vulnerable machines seem to be highly clustered in Internet Protocol version 4 (IPv4) address space, and our simulations suggest that this organization fosters the quick and comprehensive proliferation of preference-scanning Internet worms.
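The mechanism the abstract describes, a worm that preferentially scans addresses near hosts it has already infected and therefore benefits from clustering of vulnerable machines, can be illustrated with a toy simulation. This is a sketch under simplifying assumptions (1-D address space, a fixed +/-16 "neighborhood", one scan per infected host per step), not the paper's household model.

```python
import random

def simulate_spread(vulnerable, n_addresses, local_bias, steps, seed=0):
    """Toy preference-scanning worm on a linear address space.

    vulnerable: set of vulnerable addresses
    local_bias: probability a scan targets the scanning host's own
                neighborhood (+/-16 addresses) rather than a uniformly
                random address
    Returns the infected count after each step.
    """
    rng = random.Random(seed)
    infected = {next(iter(vulnerable))}   # patient zero
    history = [len(infected)]
    for _ in range(steps):
        new = set()
        for host in infected:
            if rng.random() < local_bias:
                target = host + rng.randint(-16, 16)  # local scan
            else:
                target = rng.randrange(n_addresses)   # random scan
            if target in vulnerable:
                new.add(target)
        infected |= new
        history.append(len(infected))
    return history
```

With vulnerable hosts packed into a small cluster, raising `local_bias` makes each local scan far more likely to hit a vulnerable address than a uniform scan of the full space, which mirrors the abstract's finding that IPv4 clustering fosters fast proliferation.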

    Learning Dynamic Feature Selection for Fast Sequential Prediction

    We present paired learning and inference algorithms for significantly reducing computation and increasing speed of the vector dot products in the classifiers that are at the heart of many NLP components. This is accomplished by partitioning the features into a sequence of templates which are ordered such that high confidence can often be reached using only a small fraction of all features. Parameter estimation is arranged to maximize accuracy and early confidence in this sequence. Our approach is simpler and better suited to NLP than other related cascade methods. We present experiments in left-to-right part-of-speech tagging, named entity recognition, and transition-based dependency parsing. On the typical benchmarking datasets we can preserve POS tagging accuracy above 97% and parsing LAS above 88.5%, both with over a five-fold reduction in run-time, and NER F1 above 88 with more than a 2x increase in speed. Comment: Appears in The 53rd Annual Meeting of the Association for Computational Linguistics, Beijing, China, July 2015
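The inference side of this idea, evaluating feature templates one at a time and stopping once the running scores are confidently separated, can be sketched as below. The margin-based stopping rule and the `(n_templates, n_labels)` layout of precomputed partial dot products are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def staged_score(template_dots, margin):
    """Accumulate per-template dot products, stopping early once the
    gap between the best and second-best label scores exceeds `margin`.

    template_dots: (n_templates, n_labels) array of partial dot
                   products, with cheap/informative templates first
    Returns (predicted_label, templates_used).
    """
    n_templates, n_labels = template_dots.shape
    scores = np.zeros(n_labels)
    for t in range(n_templates):
        scores += template_dots[t]
        top2 = np.partition(scores, -2)[-2:]   # two largest scores
        if top2[1] - top2[0] > margin:
            return int(np.argmax(scores)), t + 1   # confident: stop early
    return int(np.argmax(scores)), n_templates     # fell through: use all
```

When the first template alone separates the labels, only a fraction of the feature dot products is ever computed, which is the source of the speedups the abstract reports.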

    Psicologia social

    Overview of current research in Catalan sociolinguistics from a sociopsychological perspective.