
    Knowledge-based Biomedical Data Science 2019

    Knowledge-based biomedical data science (KBDS) involves the design and implementation of computer systems that act as if they knew about biomedicine. Such systems depend on formally represented knowledge in computer systems, often in the form of knowledge graphs. Here we survey the progress in the last year in systems that use formally represented knowledge to address data science problems in both clinical and biological domains, as well as in approaches for creating knowledge graphs. Major themes include the relationships between knowledge graphs and machine learning, the use of natural language processing, and the expansion of knowledge-based approaches to novel domains, such as Traditional Chinese Medicine and biodiversity. Comment: Manuscript 43 pages with 3 tables; supplemental material 43 pages with 3 tables.

    Lost at home: Jia Zhangke’s journey toward modernity

    In this essay, I take a close look at three of Jia’s films that have prominently engaged the topic of home in relation to place, identity, and nation: Still Life 三峽好人 (2006), 24 City 24城記 (2008), and A Touch of Sin 天註定 (2013). Set at the turn of the twenty-first century, these films employ various modes of representation concerning the reality of space. Still Life, a quiet and contemplative cinematic essay on change and obsolescence, tracks two strangers’ separate journeys to the Three Gorges city of Fengjie as they look for their missing spouses in the disappearing land. 24 City combines real and fictional interviews with three generations of factory workers to offer a sweeping oral history of post-reform China. A Touch of Sin tells four seemingly isolated stories of crime that all culminate in sudden, brutal acts of violence.

    Addressing Informality in Processing Chinese Microtext

    Ph.D. (Doctor of Philosophy)

    Acta Cybernetica: Volume 18, Number 3.


    Summary of ChatGPT/GPT-4 Research and Perspective Towards the Future of Large Language Models

    This paper presents a comprehensive survey of ChatGPT and GPT-4, state-of-the-art large language models (LLMs) from the GPT series, and their prospective applications across diverse domains. Key innovations such as large-scale pre-training that captures knowledge across the entire World Wide Web, instruction fine-tuning, and Reinforcement Learning from Human Feedback (RLHF) have played significant roles in enhancing LLMs' adaptability and performance. We performed an in-depth analysis of 194 relevant papers on arXiv, encompassing trend analysis, word-cloud representation, and distribution analysis across various application domains. The findings reveal a significant and increasing interest in ChatGPT/GPT-4 research, predominantly centered on direct natural language processing applications, while also demonstrating considerable potential in areas ranging from education and history to mathematics, medicine, and physics. This study endeavors to furnish insights into ChatGPT's capabilities, potential implications, and ethical concerns, and to offer direction for future advancements in this field. Comment: 35 pages, 3 figures.
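    The listing above gives only the abstract; as a rough, hypothetical illustration of the kind of trend analysis and word-cloud representation it describes, a minimal Python sketch follows. The placeholder paper titles and submission months, and the third-party wordcloud package, are assumptions for illustration only, not the survey's actual data or code.

```python
# Illustrative sketch only: the corpus below is hypothetical placeholder data,
# not the 194-paper arXiv collection analyzed in the survey.
from collections import Counter

import matplotlib.pyplot as plt
from wordcloud import WordCloud  # assumed dependency: pip install wordcloud

papers = [
    {"title": "ChatGPT for clinical text summarization", "month": "2023-03"},
    {"title": "GPT-4 reasoning in mathematics education", "month": "2023-04"},
    {"title": "Evaluating ChatGPT on medical licensing exams", "month": "2023-04"},
]

# Trend analysis: count papers per submission month.
trend = Counter(p["month"] for p in papers)
for month, count in sorted(trend.items()):
    print(month, count)

# Word-cloud representation of the pooled titles.
text = " ".join(p["title"] for p in papers)
cloud = WordCloud(width=800, height=400, background_color="white").generate(text)
plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.savefig("wordcloud.png", dpi=150)
```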