
    Latent Dirichlet Allocation Model of Student Behaviour Using Online Learning Media

    Abstract: Indonesia is currently in an upheaval over online schooling. School should be a place where teachers and students pass on knowledge, from academic subjects to behaviour, face to face, but unavoidable circumstances now force learning to be carried out online through intermediary tools. The problem stems from the many students who already own a communication device, namely a mobile phone, and have mastered various social media platforms such as Instagram. The aim is to analyse students, especially in Indonesia, and the attitudes they take when using Instagram while online learning is in progress. Crawling the text and captions posted under online-school hashtags yielded 120 posts after filtering out posts that did not come from students. The collected data were analysed with the Latent Dirichlet Allocation (LDA) model to find the dominant topics among the hashtags used, with stopword removal applied to discard unneeded words. The final analysis found 4 dominant topics, the majority of them from students who had received school assignments, such as biology lessons.
    Keywords: E-Learning, Instagram, Latent Dirichlet Allocation (LDA)
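    A rough sketch of the pipeline this abstract describes — crawled captions, stopword removal, then LDA to surface dominant topics — is given below. The captions, the stopword list, the choice of gensim, and the 4-topic setting are illustrative assumptions, not the study's actual data or code.

        # Hypothetical captions standing in for the 120 crawled Instagram posts.
        from gensim import corpora
        from gensim.models import LdaModel

        captions = [
            "tugas biologi sekolah daring hari ini",
            "belajar online lagi banyak tugas dari guru",
        ]
        stopwords = {"ini", "lagi", "hari", "dari"}  # assumed stopword list

        # Tokenise and drop stopwords (the "unneeded words" step).
        texts = [[w for w in c.lower().split() if w not in stopwords] for c in captions]

        dictionary = corpora.Dictionary(texts)
        corpus = [dictionary.doc2bow(t) for t in texts]

        # Fit LDA with 4 topics, as in the study, and print the top words per topic.
        lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=4,
                       passes=10, random_state=0)
        for topic_id, words in lda.print_topics(num_words=5):
            print(topic_id, words)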

    Deep Belief Nets for Topic Modeling

    Applying traditional collaborative filtering to digital publishing is challenging because user data is very sparse due to the high volume of documents relative to the number of users. Content-based approaches, on the other hand, are attractive because textual content is often very informative. In this paper we describe large-scale content-based collaborative filtering for digital publishing. To solve the digital publishing recommender problem we compare two approaches: latent Dirichlet allocation (LDA) and deep belief nets (DBN), both of which find low-dimensional latent representations for documents. Efficient retrieval can be carried out in the latent representation. We work on both public benchmarks and digital media content provided by Issuu, an online publishing platform. This article also comes with a newly developed deep belief nets toolbox for topic modeling tailored towards performance evaluation of the DBN model and comparisons to the LDA model. Comment: Accepted to the ICML-2014 Workshop on Knowledge-Powered Deep Learning for Text Mining.
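    The retrieval idea mentioned in the abstract — representing each document by a low-dimensional latent vector and recommending by similarity in that space — might look roughly as follows with LDA topic vectors and cosine similarity. The toy corpus and the similarity measure are assumptions; this is neither Issuu's system nor the paper's DBN toolbox.

        import numpy as np
        from gensim import corpora
        from gensim.models import LdaModel

        # Toy documents standing in for digital publications.
        docs = [["digital", "publishing", "magazine"],
                ["deep", "belief", "network", "topics"],
                ["online", "magazine", "reader"]]

        dictionary = corpora.Dictionary(docs)
        corpus = [dictionary.doc2bow(d) for d in docs]
        lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2, random_state=0)

        def topic_vector(bow, num_topics=2):
            # Dense topic-proportion vector: the document's latent representation.
            vec = np.zeros(num_topics)
            for topic_id, prob in lda.get_document_topics(bow, minimum_probability=0.0):
                vec[topic_id] = prob
            return vec

        vectors = np.array([topic_vector(b) for b in corpus])
        query = vectors[0]
        # Cosine similarity in the latent space ranks candidate recommendations.
        scores = vectors @ query / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(query))
        print(np.argsort(-scores))  # documents ranked by similarity to doc 0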

    Inferring Concept Prerequisite Relations from Online Educational Resources

    The Internet has rich and rapidly increasing sources of high-quality educational content. Inferring prerequisite relations between educational concepts is required for modern large-scale online educational technology applications such as personalized recommendations and automatic curriculum creation. We present PREREQ, a new supervised learning method for inferring concept prerequisite relations. PREREQ is designed using latent representations of concepts obtained from the Pairwise Latent Dirichlet Allocation model, and a neural network based on the Siamese network architecture. PREREQ can learn unknown concept prerequisites from course prerequisites and labeled concept prerequisite data. It outperforms state-of-the-art approaches on benchmark datasets and can learn effectively from very little training data. PREREQ can also use unlabeled video playlists, a steadily growing source of training data, to learn concept prerequisites, thus obviating the need for manual annotation of course prerequisites. Comment: Accepted at the AAAI Conference on Innovative Applications of Artificial Intelligence (IAAI-19).
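    A Siamese-style classifier over pre-computed concept vectors, in the spirit of PREREQ, could be sketched as below. The layer sizes, the concatenation head, and the random vectors standing in for Pairwise LDA representations are assumptions for illustration, not the paper's architecture.

        import torch
        import torch.nn as nn

        class SiamesePrereq(nn.Module):
            def __init__(self, dim=64, hidden=32):
                super().__init__()
                # Shared encoder applied to both concepts of a candidate pair.
                self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
                # Order-sensitive head: "A is a prerequisite of B" is directional.
                self.classifier = nn.Linear(2 * hidden, 1)

            def forward(self, a, b):
                ha, hb = self.encoder(a), self.encoder(b)
                return torch.sigmoid(self.classifier(torch.cat([ha, hb], dim=-1)))

        model = SiamesePrereq()
        a = torch.randn(8, 64)                # hypothetical latent vectors for concept A
        b = torch.randn(8, 64)                # hypothetical latent vectors for concept B
        prob = model(a, b).squeeze(-1)        # P(A is a prerequisite of B)
        loss = nn.BCELoss()(prob, torch.ones(8))
        loss.backward()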

    Latent Dirichlet Markov Allocation for Sentiment Analysis

    In recent years probabilistic topic models have gained tremendous attention in data mining and natural language processing research areas. In the field of information retrieval for text mining, a variety of probabilistic topic models have been used to analyse the content of documents. A topic model is a generative model for documents: it specifies a probabilistic procedure by which documents can be generated. All topic models share the idea that documents are mixtures of topics, where a topic is a probability distribution over words. In this paper we describe the Latent Dirichlet Markov Allocation model (LDMA), a new generative probabilistic topic model, based on Latent Dirichlet Allocation (LDA) and the Hidden Markov Model (HMM), which emphasizes extracting multi-word topics from text data. LDMA is a four-level hierarchical Bayesian model where topics are associated with documents, words are associated with topics, and topics in the model can be presented with single- or multi-word terms. To evaluate the performance of LDMA, we report results in the field of aspect detection in sentiment analysis, comparing to the basic LDA model.
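    For context, the standard LDA generative process that LDMA extends (before adding HMM-style dependencies for multi-word terms) can be written as follows; this is the base model only, not the four-level LDMA model itself.

        % Standard LDA generative process with K topics, D documents, N_d words each.
        \begin{align*}
        \phi_k   &\sim \mathrm{Dirichlet}(\beta),       && k = 1,\dots,K \\
        \theta_d &\sim \mathrm{Dirichlet}(\alpha),      && d = 1,\dots,D \\
        z_{d,n}  &\sim \mathrm{Multinomial}(\theta_d),  && n = 1,\dots,N_d \\
        w_{d,n}  &\sim \mathrm{Multinomial}(\phi_{z_{d,n}})
        \end{align*}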

    Sparse Stochastic Inference for Latent Dirichlet Allocation

    We present a hybrid algorithm for Bayesian topic models that combines the efficiency of sparse Gibbs sampling with the scalability of online stochastic inference. We used our algorithm to analyze a corpus of 1.2 million books (33 billion words) with thousands of topics. Our approach reduces the bias of variational inference and generalizes to many Bayesian hidden-variable models. Comment: Appears in Proceedings of the 29th International Conference on Machine Learning (ICML 2012).
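    The online stochastic side of such an algorithm can be illustrated with gensim's streamed LDA updates; this sketch covers only the mini-batch (stochastic) updates and is not the authors' hybrid sparse Gibbs / stochastic inference method. The toy corpus and parameters are placeholders.

        from gensim import corpora
        from gensim.models import LdaModel

        stream = [["topic", "model", "book"], ["library", "book", "words"],
                  ["sparse", "inference", "words"], ["gibbs", "sampling", "topic"]]

        dictionary = corpora.Dictionary(stream)
        corpus = [dictionary.doc2bow(doc) for doc in stream]

        # chunksize/update_every control how many documents are processed per
        # online variational update, which is what lets LDA scale to huge corpora.
        lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2,
                       update_every=1, chunksize=2, passes=1, random_state=0)

        # New documents can be folded in incrementally as they arrive.
        lda.update([dictionary.doc2bow(["book", "topic"])])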