2,112 research outputs found

    Correlating neural and symbolic representations of language

    Analysis methods which enable us to better understand the representations and functioning of neural models of language are increasingly needed as deep learning becomes the dominant approach in NLP. Here we present two methods based on Representational Similarity Analysis (RSA) and Tree Kernels (TK) which allow us to directly quantify how strongly the information encoded in neural activation patterns corresponds to information represented by symbolic structures such as syntax trees. We first validate our methods on the case of a simple synthetic language for arithmetic expressions with clearly defined syntax and semantics, and show that they exhibit the expected pattern of results. We then apply our methods to correlate neural representations of English sentences with their constituency parse trees. Comment: ACL 201
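The RSA half of this comparison can be sketched numerically: build one pairwise similarity matrix from activation vectors (cosine) and one from a symbolic measure (standing in for tree-kernel scores), then correlate their condensed upper-triangular entries. Everything below is a synthetic illustration of the general RSA recipe, with made-up data, not the authors' setup.

```python
import numpy as np

def condensed(sim):
    """Upper-triangular (off-diagonal) entries of a square similarity matrix."""
    i, j = np.triu_indices(sim.shape[0], k=1)
    return sim[i, j]

def rsa(acts, symbolic_sims):
    """Pearson correlation between neural pairwise cosine similarities
    and a symbolic similarity matrix (e.g. tree-kernel scores)."""
    a = acts / np.linalg.norm(acts, axis=1, keepdims=True)
    x, y = condensed(a @ a.T), condensed(symbolic_sims)
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(np.mean(x * y))

# Synthetic check: activations that cluster by a hypothetical "syntactic
# class" should correlate strongly with class-based symbolic similarities.
rng = np.random.default_rng(0)
labels = np.array([0, 0, 1, 1, 2, 2])
sym = (labels[:, None] == labels[None, :]).astype(float)
acts = np.eye(3)[labels] * 5 + rng.normal(scale=0.1, size=(6, 3))
score = rsa(acts, sym)
```

When the two spaces encode the same structure, the score approaches 1; unrelated spaces hover near 0.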

    Analyzing and Interpreting Neural Networks for NLP: A Report on the First BlackboxNLP Workshop

    The EMNLP 2018 workshop BlackboxNLP was dedicated to resources and techniques specifically developed for analyzing and understanding the inner workings and representations acquired by neural models of language. Approaches included: systematic manipulation of input to neural networks and investigating the impact on their performance, testing whether interpretable knowledge can be decoded from intermediate representations acquired by neural networks, proposing modifications to neural network architectures to make their knowledge state or generated output more explainable, and examining the performance of networks on simplified or formal languages. Here we review a number of representative studies in each category.
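The first category above, systematic input manipulation, can be illustrated with a toy occlusion probe: remove each input token in turn and record how the model's output changes. The "model" here is a hypothetical bag-of-embeddings scorer invented purely for illustration, not any system from the workshop.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = {"the": 0, "movie": 1, "was": 2, "great": 3}
emb = rng.normal(size=(len(vocab), 8))   # toy word embeddings
readout = rng.normal(size=8)             # toy scalar "sentiment" head

def score(tokens):
    """Mean-pooled bag-of-embeddings score."""
    vecs = emb[[vocab[t] for t in tokens]]
    return float(vecs.mean(axis=0) @ readout)

def occlusion_importance(tokens):
    """Drop each token and measure how much the output score changes."""
    base = score(tokens)
    return {t: base - score([u for u in tokens if u != t]) for t in tokens}

importance = occlusion_importance(["the", "movie", "was", "great"])
```

Large absolute importance values flag tokens the model's output depends on; the same probe applies unchanged to a real network's scoring function.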

    Learning language through pictures

    We propose Imaginet, a model of learning visually grounded representations of language from coupled textual and visual input. The model consists of two Gated Recurrent Unit networks with shared word embeddings, and uses a multi-task objective by receiving a textual description of a scene and trying to concurrently predict its visual representation and the next word in the sentence. Mimicking an important aspect of human language learning, it acquires meaning representations for individual words from descriptions of visual scenes. Moreover, it learns to effectively use sequential structure in semantic interpretation of multi-word phrases. Comment: To appear at ACL 201
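The architecture described can be sketched as two recurrent pathways over shared word embeddings, one head predicting an image feature vector and the other next-word logits. The sketch below is a forward pass only, with made-up dimensions and random weights; it is an illustration of the multi-task layout, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell (forward pass only)."""
    def __init__(self, d_in, d_h, rng):
        s = 1.0 / np.sqrt(d_h)
        self.Wz = rng.uniform(-s, s, (d_h, d_in + d_h))
        self.Wr = rng.uniform(-s, s, (d_h, d_in + d_h))
        self.Wh = rng.uniform(-s, s, (d_h, d_in + d_h))

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)                          # update gate
        r = sigmoid(self.Wr @ xh)                          # reset gate
        h_cand = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        return (1 - z) * h + z * h_cand

rng = np.random.default_rng(0)
vocab_size, d_emb, d_h, d_img = 10, 16, 32, 64
emb = rng.normal(scale=0.1, size=(vocab_size, d_emb))   # shared embeddings
gru_vis = GRUCell(d_emb, d_h, rng)                      # visual pathway
gru_txt = GRUCell(d_emb, d_h, rng)                      # textual pathway
W_img = rng.normal(scale=0.1, size=(d_img, d_h))        # image-vector head
W_next = rng.normal(scale=0.1, size=(vocab_size, d_h))  # next-word head

def forward(token_ids):
    h_v = np.zeros(d_h)
    h_t = np.zeros(d_h)
    for t in token_ids:
        x = emb[t]                       # one embedding feeds both pathways
        h_v = gru_vis.step(x, h_v)
        h_t = gru_txt.step(x, h_t)
    return W_img @ h_v, W_next @ h_t     # predicted image vec, next-word logits

img_pred, next_logits = forward([1, 4, 7, 2])
```

In training, a loss on each head would push gradients through the shared embedding table, which is what couples the two tasks.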

    Computing Multidimensional Persistence

    The theory of multidimensional persistence captures the topology of a multifiltration -- a multiparameter family of increasing spaces. Multifiltrations arise naturally in the topological analysis of scientific data. In this paper, we give a polynomial time algorithm for computing multidimensional persistence. We recast this computation as a problem within computational algebraic geometry and utilize algorithms from this area to solve it. While the resulting problem is EXPSPACE-complete and the standard algorithms take doubly-exponential time, we exploit the structure inherent within multifiltrations to yield practical algorithms. We implement all algorithms in the paper and provide statistical experiments to demonstrate their feasibility. Comment: This paper has been withdrawn by the authors. Journal of Computational Geometry, 1(1) 2010, pages 72-100. http://jocg.org/index.php/jocg/article/view/1
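For contrast with the multiparameter setting, single-parameter persistence reduces to a simple column reduction of the boundary matrix over Z/2; the multifiltration case is precisely where this no longer suffices and the algebraic machinery above comes in. A minimal sketch of the classical one-parameter reduction (not the paper's algorithm):

```python
def reduce_boundary(boundary):
    """Standard persistence algorithm: reduce boundary columns over Z/2.

    boundary[j] is the set of row indices with a 1 in column j, with
    simplices ordered by filtration.  Returns birth-death index pairs.
    """
    reduced = []   # reduced columns, as sets of row indices
    low_of = {}    # lowest remaining row index -> column that owns it
    pairs = []
    for j, col in enumerate(boundary):
        col = set(col)
        while col and max(col) in low_of:
            col ^= reduced[low_of[max(col)]]   # Z/2 column addition
        reduced.append(col)
        if col:
            low_of[max(col)] = j
            pairs.append((max(col), j))        # feature born at max(col) dies at j
    return pairs

# Filtered triangle: vertices 0,1,2; edges 3=(0,1), 4=(1,2), 5=(0,2);
# the 2-simplex 6 fills the cycle created when edge 5 appears.
boundary = [set(), set(), set(), {0, 1}, {1, 2}, {0, 2}, {3, 4, 5}]
pairs = reduce_boundary(boundary)
```

Here the loop born with edge 5 is killed by triangle 6, giving the pair (5, 6); the difficulty the paper addresses is that no such single total ordering of simplices exists in a multifiltration.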

    Encoding of phonology in a recurrent neural model of grounded speech

    We study the representation and encoding of phonemes in a recurrent neural network model of grounded speech. We use a model which processes images and their spoken descriptions, and projects the visual and auditory representations into the same semantic space. We perform a number of analyses on how information about individual phonemes is encoded in the MFCC features extracted from the speech signal, and the activations of the layers of the model. Via experiments with phoneme decoding and phoneme discrimination we show that phoneme representations are most salient in the lower layers of the model, where low-level signals are processed at a fine-grained level, although a large amount of phonological information is retained at the top recurrent layer. We further find that the attention mechanism following the top recurrent layer significantly attenuates encoding of phonology and makes the utterance embeddings much more invariant to synonymy. Moreover, a hierarchical clustering of phoneme representations learned by the network shows an organizational structure of phonemes similar to that proposed in linguistics. Comment: Accepted at CoNLL 201
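Phoneme decoding of the kind described can be sketched as a per-layer diagnostic probe: fit a simple classifier on activations from each layer and compare held-out accuracy. Everything below is synthetic stand-in data (a "lower layer" with strong phoneme signal versus an "attenuated top layer"), invented to show the probing recipe, not the model's actual activations.

```python
import numpy as np

def nearest_centroid_acc(train_x, train_y, test_x, test_y):
    """Accuracy of a nearest-centroid probe, a minimal diagnostic classifier."""
    classes = np.unique(train_y)
    cents = np.stack([train_x[train_y == c].mean(axis=0) for c in classes])
    d2 = ((test_x[:, None, :] - cents[None, :, :]) ** 2).sum(axis=2)
    return float((classes[d2.argmin(axis=1)] == test_y).mean())

rng = np.random.default_rng(0)
n, d, k = 120, 8, 3                                 # frames, feature dim, classes
y = rng.integers(0, k, n)                           # synthetic phoneme labels
signal = np.eye(k)[y] @ (np.eye(k, d) * 5.0)        # class-dependent signal
low = signal + rng.normal(scale=0.5, size=(n, d))          # "lower layer"
top = 0.1 * signal + rng.normal(scale=1.0, size=(n, d))    # "attenuated top layer"

split = n // 2
acc_low = nearest_centroid_acc(low[:split], y[:split], low[split:], y[split:])
acc_top = nearest_centroid_acc(top[:split], y[:split], top[split:], y[split:])
```

Comparing `acc_low` against `acc_top` layer by layer is how a salience profile like the one reported (strong phoneme encoding low, attenuated after attention) is typically measured.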

    Learning to Understand Child-directed and Adult-directed Speech

    Speech directed to children differs from adult-directed speech in linguistic aspects such as repetition, word choice, and sentence length, as well as in aspects of the speech signal itself, such as prosodic and phonemic variation. Human language acquisition research indicates that child-directed speech helps language learners. This study explores the effect of child-directed speech when learning to extract semantic information from speech directly. We compare the task performance of models trained on adult-directed speech (ADS) and child-directed speech (CDS). We find indications that CDS helps in the initial stages of learning, but eventually models trained on ADS reach comparable task performance and generalize better. The results suggest that this is at least partially due to linguistic rather than acoustic properties of the two registers, as we see the same pattern when looking at models trained on acoustically comparable synthetic speech. Comment: The authors found an error in the preprocessing of transcriptions before they were fed to SBERT. After correction, the experiments were rerun, and the updated results appear in this version. Most scores were affected to a small degree (performance was slightly worse), and the effect was consistent across conditions; the general patterns remain the same.

    Analyzing analytical methods: The case of phonology in neural models of spoken language

    Despite the fast development of analysis techniques for NLP and speech processing systems, few systematic studies have compared the strengths and weaknesses of these methods. As a step in this direction we study the case of representations of phonology in neural network models of spoken language. We use two commonly applied analytical techniques, diagnostic classifiers and representational similarity analysis, to quantify to what extent neural activation patterns encode phonemes and phoneme sequences. We manipulate two factors that can affect the outcome of analysis. First, we investigate the role of learning by comparing neural activations extracted from trained versus randomly initialized models. Second, we examine the temporal scope of the activations by probing both local activations corresponding to a few milliseconds of the speech signal, and global activations pooled over the whole utterance. We conclude that reporting analysis results with randomly initialized models is crucial, and that global-scope methods tend to yield more consistent results; we recommend their use as a complement to local-scope diagnostic methods. Comment: ACL 202
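The reason random-initialization baselines matter can be shown in miniature: even an untrained random projection of the input is often highly decodable by a probe, so probe accuracy alone does not demonstrate that a property was learned. The sketch below is a hypothetical illustration of that effect with synthetic data, not the paper's experiment.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d_in, d_h = 200, 12, 32
idx = rng.integers(0, d_in, n)                    # synthetic input identities
y = (idx < d_in // 2).astype(int)                 # input property to decode
x = np.eye(d_in)[idx] + rng.normal(scale=0.05, size=(n, d_in))
W = rng.normal(size=(d_h, d_in))                  # untrained random "layer"
feats = np.maximum(0.0, x @ W.T)                  # random ReLU features

def probe_acc(train_x, train_y, test_x, test_y):
    """Nearest-centroid diagnostic probe."""
    cents = np.stack([train_x[train_y == c].mean(axis=0) for c in (0, 1)])
    d2 = ((test_x[:, None, :] - cents[None, :, :]) ** 2).sum(axis=2)
    return float((d2.argmin(axis=1) == test_y).mean())

split = n // 2
acc_random = probe_acc(feats[:split], y[:split], feats[split:], y[split:])
```

Because the random projection preserves input information, the probe decodes the property well above chance; a trained model's probe score is only meaningful relative to such a baseline.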

    REPORT ON THE FIELD EXPERIENCE PRACTICUM (PPL) AT SMA NEGERI 1 WONOSARI

    The Field Experience Practicum (PPL) is a program organized by Universitas Negeri Yogyakarta, designed as part of the university's implementation of its service, responsibility, and commitment to the community. The aims of PPL at this school include equipping students to become familiar with a working environment before entering the actual workplace, and teaching them about the mechanics of instruction and the learning process at school. Through PPL, student teachers have the opportunity to face real conditions in the teaching and learning process. The program is also highly useful for mastering subject-matter knowledge and skills, professional development skills, and the personal competencies of a professional educator. The PPL program ran from 10 August to 12 September 2015. During this period, the student teachers carried out various work programs aimed at facilitating instruction and optimizing students' potential. In practice, the activities proceeded according to the planned targets. PPL activities were carried out while the learning process was in progress. The programs organized during PPL were designed to improve instruction and student learning, and to train the student teachers before they later enter the workforce. In this way, student teachers acquire classroom and school management skills so that teaching and learning can run well and produce reliable inputs and outputs. The programs carried out included preparing teaching materials, a question bank, and instructional media.
