
    PaperRobot: Incremental Draft Generation of Scientific Ideas

    We present PaperRobot, an automatic research assistant that (1) performs deep understanding of a large collection of human-written papers in a target domain and constructs comprehensive background knowledge graphs (KGs); (2) creates new ideas by predicting links from the background KGs, combining graph attention and contextual text attention; and (3) incrementally writes key elements of a new paper using memory-attention networks: from the input title and predicted related entities it generates a paper abstract, from the abstract it generates the conclusion and future work, and from the future work it generates a title for a follow-on paper. In Turing Tests, where a biomedical domain expert is asked to compare a system output with a human-authored string, PaperRobot-generated abstracts, conclusion and future work sections, and new titles are chosen over human-written ones up to 30%, 24%, and 12% of the time, respectively. Comment: 12 pages. Accepted by ACL 2019. Code and resources are available at https://github.com/EagleW/PaperRobo
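The idea-creation step above, predicting links from the background KG while mixing graph attention with contextual text attention, can be illustrated with a minimal sketch. All names, the dot-product attention, and the fixed gating weight here are simplifying assumptions for illustration, not PaperRobot's exact architecture.

```python
# Hypothetical sketch: score a candidate KG link (head, tail) by blending a
# graph-attention aggregate over the head entity's neighbors with a
# contextual text score. Vectors are toy 2-d embeddings.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def graph_attention(head_vec, neighbor_vecs):
    """Aggregate the head entity's KG neighbors, weighted by attention."""
    weights = softmax([dot(head_vec, n) for n in neighbor_vecs])
    dim = len(head_vec)
    return [sum(w * n[i] for w, n in zip(weights, neighbor_vecs))
            for i in range(dim)]

def link_score(head_vec, tail_vec, neighbor_vecs, context_vec, gate=0.5):
    """Blend graph-based and text-based evidence for the (head, tail) link."""
    graph_repr = graph_attention(head_vec, neighbor_vecs)
    graph_score = dot(graph_repr, tail_vec)      # KG structure evidence
    text_score = dot(context_vec, tail_vec)      # contextual text evidence
    return gate * graph_score + (1.0 - gate) * text_score

head = [1.0, 0.0]
tail = [0.8, 0.2]
neighbors = [[0.9, 0.1], [0.1, 0.9]]
context = [0.5, 0.5]
print(round(link_score(head, tail, neighbors, context), 3))
```

Candidate tails would then be ranked by this score, with the top-ranked links treated as new-idea entities for the drafting stage.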

    Kampanye Antikorupsi Kaum Muda melalui Media Sosial Twitter

    Young people are still regarded as an apathetic group in many countries, including Indonesia, largely because of a lack of participatory spaces that accommodate their interests. The arrival of social media has gradually opened new spaces for creative participation by youth. This study analyzes the use of the social media platform Twitter as a medium of creative participation and political expression by young Indonesians against corruption. It uses a quantitative approach, with data drawn from document study and from Twitter; data were collected with the NCapture for NVivo feature, and the analysis comprised data coding, content analysis, and data visualization using NVivo 12 Plus analytics software. The results show that Twitter influences young people's collective interest in political discourse, particularly the issue of corruption. Youth expression on Twitter takes creative forms such as memes, screen captures, captions, quotes, and hashtags. This creativity is a form of political expression that can also influence and mobilize other social media users to join a shared collective interest in fighting corruption. The study contributes a recommended new concept for campaigning on anti-corruption issues in Indonesia by maximizing the use of Twitter.
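The coding-and-tallying step of the content analysis can be sketched in a few lines. The study performed this in NVivo 12 Plus, not in code; the keyword rules, category names, and sample tweets below are hypothetical stand-ins for the study's manual coding scheme.

```python
# Illustrative sketch of coding tweets into creative-expression categories
# (meme, caption, quote, hashtag) and tallying them, mimicking the kind of
# data coding the study carried out in NVivo.
from collections import Counter

# Hypothetical marker rules; real coding was done by human analysts.
CATEGORY_MARKERS = {
    "hashtag": lambda t: "#" in t,
    "quote":   lambda t: '"' in t,
    "meme":    lambda t: "meme" in t.lower(),
}

def code_tweet(text):
    """Return every category whose marker fires; fall back to 'caption'."""
    cats = [c for c, match in CATEGORY_MARKERS.items() if match(text)]
    return cats or ["caption"]

def tally(tweets):
    counts = Counter()
    for t in tweets:
        counts.update(code_tweet(t))
    return counts

sample = [
    'Lawan korupsi! #antikorupsi',
    'New meme about the corruption case',
    '"Power tends to corrupt" - a reminder',
]
print(tally(sample))
```

The resulting category counts are the kind of input the study visualizes to show which creative forms dominate the anti-corruption discourse.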

    Beyond Generic: Enhancing Image Captioning with Real-World Knowledge using Vision-Language Pre-Training Model

    Current captioning approaches tend to generate correct but "generic" descriptions that lack real-world knowledge, e.g., named entities and contextual information. Since Vision-Language Pre-Training (VLP) models acquire a massive amount of such knowledge from large-scale web-harvested data, it is promising to exploit the generalizability of VLP models to incorporate knowledge into image descriptions. However, using VLP models faces challenges: zero-shot inference suffers from knowledge hallucination, which leads to low-quality descriptions, while the generic bias introduced by downstream fine-tuning hinders the VLP model from expressing knowledge. To address these concerns, we propose a simple yet effective method called Knowledge-guided Replay (K-Replay), which retains pre-training knowledge during fine-tuning. Our approach has two parts: (1) a knowledge prediction task on automatically collected replay exemplars that continuously awakens the VLP model's memory of knowledge, preventing the model from collapsing into the generic pattern; and (2) a knowledge distillation constraint that improves the faithfulness of generated descriptions, thereby alleviating knowledge hallucination. To evaluate knowledge-enhanced descriptions, we construct a novel captioning benchmark, KnowCap, containing knowledge of landmarks, famous brands, special foods, and movie characters. Experimental results show that our approach effectively incorporates knowledge into descriptions, outperforming a strong VLP baseline by 20.9 points (78.7->99.6) in CIDEr score and 20.5 percentage points (34.0%->54.5%) in knowledge recognition accuracy. Our code and data are available at https://github.com/njucckevin/KnowCap. Comment: Accepted at ACM Multimedia (ACMMM) 202
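The two-part K-Replay objective described above, a knowledge-prediction loss on replay exemplars plus a distillation constraint toward the pre-trained model, can be sketched as a combined training loss. The loss weights, toy distributions, and exact combination below are assumptions for illustration, not the authors' precise formulation.

```python
# Minimal sketch of a K-Replay-style fine-tuning objective: downstream
# caption loss + knowledge-prediction loss on replay exemplars + a
# distillation penalty toward the frozen pre-trained model's outputs.
import math

def cross_entropy(true_index, predicted_dist):
    """Negative log-likelihood of the correct label under the prediction."""
    return -math.log(predicted_dist[true_index])

def kl_divergence(p, q):
    """KL(p || q) between two discrete probability distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def k_replay_loss(caption_nll, replay_label, replay_dist,
                  teacher_dist, student_dist,
                  alpha=1.0, beta=0.5):
    # (1) knowledge prediction on a replay exemplar keeps pre-training
    #     knowledge "awake" during fine-tuning
    knowledge_loss = cross_entropy(replay_label, replay_dist)
    # (2) distillation toward the frozen pre-trained (teacher) model
    #     discourages hallucinated, unfaithful outputs
    distill_loss = kl_divergence(teacher_dist, student_dist)
    return caption_nll + alpha * knowledge_loss + beta * distill_loss

loss = k_replay_loss(
    caption_nll=2.0,
    replay_label=0, replay_dist=[0.7, 0.2, 0.1],
    teacher_dist=[0.6, 0.3, 0.1], student_dist=[0.5, 0.3, 0.2],
)
print(round(loss, 3))
```

In a real setup the three terms would be computed per batch over token-level distributions from the fine-tuned and frozen VLP models, with alpha and beta tuned on validation data.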