
    Twenty-first century corpus workbench: Updating a query architecture for the new millennium.

    Abstract: Corpus Workbench (CWB) is a widely-used architecture for corpus analysis, originally designed at the IMS, University of Stuttgart. This paper details recent work to update CWB for the new century. Perhaps the most significant development is that CWB version 3 is now an open-source project, licensed under the GNU General Public Licence. This change has substantially enlarged the community of developers and users and has enabled us to leverage existing open-source libraries in extending CWB's capabilities. As a result, several key improvements were made to the CWB core: (i) support for multiple character sets, most especially Unicode (in the form of UTF-8), allowing all the world's writing systems to be utilised within a CWB-indexed corpus; (ii) support for powerful Perl-style regular expressions in CQP queries, based on the open-source PCRE library; (iii) support for a wider range of OS platforms, including Mac OS X, Linux, and Windows; and (iv) support for larger corpus sizes of up to 2 billion words on 64-bit platforms. Outside the CWB core, a key concern is the user-friendliness of the interface. CQP itself can be daunting for beginners. However, it is common for access to CQP queries to be provided via a web interface, supported in CWB version 3 by several Perl modules that give easy access to different facets of CWB/CQP functionality. The CQPweb front-end (Hardie forthcoming) has now been adopted as an integral component of CWB. CQPweb provides analysis options beyond concordancing (such as collocations, frequency lists, and keywords) by using a MySQL database alongside CQP. Available in both the Perl interface and CQPweb is the Common Elementary Query Language (CEQL), a simple-syntax set of search patterns and wildcards which puts much of the power of CQP in a form accessible to beginning students and non-corpus-linguists.
The paper concludes with a roadmap for future development of CWB (version 4 and above), with a focus on even larger corpora, full support for XML and dependency annotation, new types of query languages, and improved efficiency of complex CQP queries. All interested users are invited to help us shape the future of CWB by discussing requirements and contributing to the implementation of these features.
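The abstract describes matching Perl-style regular expressions against token attributes in a CQP query. As a rough illustration of the underlying idea only (not CWB's actual implementation; the toy corpus, attribute names, and `concordance` function are invented for this sketch), a regex can be matched against a positional attribute of each indexed token to produce concordance lines:

```python
import re

# Toy indexed corpus: each token carries positional attributes (word, pos),
# loosely mirroring how CWB stores per-token annotations.
corpus = [
    {"word": "The", "pos": "DT"},
    {"word": "corpus", "pos": "NN"},
    {"word": "workbench", "pos": "NN"},
    {"word": "runs", "pos": "VBZ"},
    {"word": "queries", "pos": "NNS"},
]

def concordance(corpus, pattern, attr="word", context=1):
    """Return (left, match, right) triples for tokens whose attribute
    matches a Perl-style regular expression (Python's re shares most
    of PCRE's syntax)."""
    rx = re.compile(pattern)
    hits = []
    for i, tok in enumerate(corpus):
        if rx.fullmatch(tok[attr]):
            left = " ".join(t["word"] for t in corpus[max(0, i - context):i])
            right = " ".join(t["word"] for t in corpus[i + 1:i + 1 + context])
            hits.append((left, tok["word"], right))
    return hits

print(concordance(corpus, r"quer(y|ies)"))  # [('runs', 'queries', '')]
```

A real CQP query such as `[word = "quer(y|ies)"]` expresses the same match declaratively, and CEQL reduces it further to simple wildcard patterns for non-expert users.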

    ์•ฝ๋ฌผ ๊ฐ์‹œ๋ฅผ ์œ„ํ•œ ๋น„์ •ํ˜• ํ…์ŠคํŠธ ๋‚ด ์ž„์ƒ ์ •๋ณด ์ถ”์ถœ ์—ฐ๊ตฌ

    Thesis (Ph.D.) -- Graduate School of Convergence Science and Technology, Department of Applied Bioengineering, Seoul National University, February 2023. Advisor: ์ดํ˜•๊ธฐ.
    Pharmacovigilance is a scientific activity to detect, evaluate, and understand the occurrence of adverse drug events or other problems related to drug safety. However, concerns have been raised over the quality of drug safety information for pharmacovigilance, and there is also a need to secure new data sources from which to acquire drug safety information. Meanwhile, the rise of pre-trained language models based on the transformer architecture has accelerated the application of natural language processing (NLP) techniques in diverse domains. In this context, I defined two problems in pharmacovigilance as NLP tasks and provide baseline models for the defined tasks: 1) extracting comprehensive drug safety information from adverse drug event narratives reported through a spontaneous reporting system (SRS), and 2) extracting drug-food interaction information from abstracts of biomedical articles. I developed annotation guidelines and performed manual annotation, demonstrating that strong NLP models can be trained to extract clinical information from unstructured free-texts by fine-tuning transformer-based language models on a high-quality annotated corpus. Finally, I discuss issues to consider when developing annotation guidelines for extracting clinical information related to pharmacovigilance. The annotated corpora and the NLP models in this dissertation can streamline pharmacovigilance activities by enhancing the data quality of reported drug safety information and expanding the available data sources.
    Table of contents:
    Chapter 1
        1.1 Contributions of this dissertation
        1.2 Overview of this dissertation
        1.3 Other works
    Chapter 2
        2.1 Pharmacovigilance
        2.2 Biomedical NLP for pharmacovigilance
            2.2.1 Pre-trained language models
            2.2.2 Corpora to extract clinical information for pharmacovigilance
    Chapter 3
        3.1 Motivation
        3.2 Proposed Methods
            3.2.1 Data source and text corpus
            3.2.2 Annotation of ADE narratives
            3.2.3 Quality control of annotation
            3.2.4 Pretraining KAERS-BERT
            3.2.6 Named entity recognition
            3.2.7 Entity label classification and sentence extraction
            3.2.8 Relation extraction
            3.2.9 Model evaluation
            3.2.10 Ablation experiment
        3.3 Results
            3.3.1 Annotated ICSRs
            3.3.2 Corpus statistics
            3.3.3 Performance of NLP models to extract drug safety information
            3.3.4 Ablation experiment
        3.4 Discussion
        3.5 Conclusion
    Chapter 4
        4.1 Motivation
        4.2 Proposed Methods
            4.2.1 Data source
            4.2.2 Annotation
            4.2.3 Quality control of annotation
            4.2.4 Baseline model development
        4.3 Results
            4.3.1 Corpus statistics
            4.3.2 Annotation Quality
            4.3.3 Performance of baseline models
            4.3.4 Qualitative error analysis
        4.4 Discussion
        4.5 Conclusion
    Chapter 5
        5.1 Issues around defining a word entity
        5.2 Issues around defining a relation between word entities
        5.3 Issues around defining entity labels
        5.4 Issues around selecting and preprocessing annotated documents
    Chapter 6
        6.1 Dissertation summary
        6.2 Limitations and future works
            6.2.1 Development of end-to-end information extraction models from free-texts to databases based on existing structured information
            6.2.2 Application of the in-context learning framework in clinical information extraction
    Chapter 7
        7.1 Annotation Guideline for "Extraction of Comprehensive Drug Safety Information from Adverse Event Narratives Reported through Spontaneous Reporting System"
        7.2 Annotation Guideline for "Extraction of Drug-Food Interactions from the Abstracts of Biomedical Articles"
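The dissertation frames drug safety information extraction as named entity recognition over free text. A common final step in such pipelines, sketched here in plain Python under the standard BIO tagging convention (an assumption; the entity labels below are illustrative, not necessarily the thesis's actual tag set), is decoding per-token tags into entity spans:

```python
def bio_to_spans(tags):
    """Decode a BIO tag sequence into (label, start, end) spans,
    with end exclusive. Malformed I- tags simply close the current span."""
    spans, start, label = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            if start is not None:
                spans.append((label, start, i))  # close the previous entity
            start, label = i, tag[2:]
        elif tag.startswith("I-") and label == tag[2:]:
            continue  # still inside the current entity
        else:
            if start is not None:
                spans.append((label, start, i))
            start, label = None, None
    if start is not None:
        spans.append((label, start, len(tags)))  # entity runs to the end
    return spans

tokens = ["Aspirin", "caused", "stomach", "pain"]
tags = ["B-DRUG", "O", "B-ADE", "I-ADE"]
print(bio_to_spans(tags))  # [('DRUG', 0, 1), ('ADE', 2, 4)]
```

Decoded spans like these are what span-level precision/recall evaluation (as in the thesis's model evaluation sections) operates on.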

    Development of linguistic linked open data resources for collaborative data-intensive research in the language sciences

    Making diverse data in linguistics and the language sciences open, distributed, and accessible: perspectives from language/language acquisition researchers and technical LOD (linked open data) researchers. This volume examines the challenges inherent in making diverse data in linguistics and the language sciences open, distributed, integrated, and accessible, thus fostering wide data sharing and collaboration. It is unique in integrating the perspectives of language researchers and technical LOD (linked open data) researchers. Reporting on both active research needs in the field of language acquisition and technical advances in the development of data interoperability, the book demonstrates the advantages of an international infrastructure for scholarship in the field of language sciences. With contributions by researchers who produce complex data content and scholars involved in both the technology and the conceptual foundations of LLOD (linguistic linked open data), the book focuses on the area of language acquisition because it involves complex and diverse data sets, cross-linguistic analyses, and urgent collaborative research. The contributors discuss a variety of research methods, resources, and infrastructures. Contributors: Isabelle Barrière, Nan Bernstein Ratner, Steven Bird, Maria Blume, Ted Caldwell, Christian Chiarcos, Cristina Dye, Suzanne Flynn, Claire Foley, Nancy Ide, Carissa Kang, D. Terence Langendoen, Barbara Lust, Brian MacWhinney, Jonathan Masci, Steven Moran, Antonio Pareja-Lora, Jim Reidy, Oya Y. Rieger, Gary F. Simons, Thorsten Trippel, Kara Warburton, Sue Ellen Wright, Claus Zin

    Integrative high-throughput study of arsenic hyper-accumulation in Pteris vittata

    Arsenic is a natural contaminant in soil and ground water, which raises considerable concerns for food safety and human health worldwide. The fern Pteris vittata (Chinese brake fern) is the first identified arsenic hyperaccumulator [1]. It and its close relatives have an unparalleled ability to tolerate arsenic and feature unique arsenic metabolism. The focus of the research presented in this thesis is to elucidate the fundamentals of arsenic tolerance and hyper-accumulation in Pteris vittata through high-throughput technology and bioinformatics tools. The transcriptome of the P. vittata gametophyte under arsenate stress was obtained using RNA-Seq technology and Trinity de novo assembly. Functional annotation of the transcriptome was performed in terms of BLAST search, Gene Ontology term assignment, Eukaryotic Orthologous Groups (KOG) classification, and pathway analysis. Differentially expressed genes induced by arsenic stress were identified, which revealed several key players in arsenic hyper-accumulation. As part of the effort to annotate differentially expressed genes, the literature on plant arsenic tolerance was collected and built into a searchable database using the Textpresso text-mining tool, which greatly facilitates the retrieval of biological facts involving arsenic-related genes. In addition, an SVM-based named-entity recognition system was constructed to identify new references to genes in the literature. The results provide excellent sequence resources for the study of arsenic tolerance in P. vittata, and establish a platform for integrative study using data of multiple types.
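An SVM-based named-entity recognizer of the kind mentioned above classifies each token from surface features. As a hedged sketch of the general approach (the specific feature set is an illustrative assumption, not taken from the thesis; PvACR3 serves only as an example gene-like token), per-token features might look like:

```python
def token_features(tokens, i):
    """Surface features of the sort commonly fed to an SVM token
    classifier for gene-name recognition (illustrative choices)."""
    w = tokens[i]
    return {
        "word.lower": w.lower(),
        "has_digit": any(c.isdigit() for c in w),          # gene symbols often mix letters and digits
        "mixed_case": w != w.lower() and w != w.upper(),   # e.g. "PvACR3"
        "suffix3": w[-3:],
        "prev": tokens[i - 1].lower() if i > 0 else "<s>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "</s>",
    }

tokens = ["PvACR3", "mediates", "arsenite", "efflux"]
print(token_features(tokens, 0))
```

Each feature dictionary would then be vectorized and passed to the SVM, which labels the token as gene or non-gene.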

    Development of Linguistic Linked Open Data Resources for Collaborative Data-Intensive Research in the Language Sciences

    This book is the product of an international workshop dedicated to addressing data accessibility in the linguistics field. It is therefore vital to the book's mission that its content be open access. Linguistics as a field lags behind many others in data management and accessibility strategies. The problem is particularly acute in the subfield of language acquisition, where international linguistic sound files are needed for reference. Linguists' concerns are very much tied to the amount of information accumulated by individual researchers over the years that remains fragmented and inaccessible to the larger community. These concerns are shared by other fields, but linguistics to date has seen few efforts at addressing them. This collection, undertaken by a range of leading experts in the field, represents a big step forward. Its international scope and interdisciplinary combination of scholars, librarians, and data consultants will provide an important contribution to the field.

    A survey on the development status and application prospects of knowledge graph in smart grids

    With the advent of the electric power big data era, semantic interoperability and interconnection of power data have received extensive attention. Knowledge graph technology is a new method for describing the complex relationships between concepts and entities in the objective world, and it has attracted wide attention because of its robust knowledge-inference ability. Especially with the proliferation of measurement devices and the exponential growth of electric power data, the electric power knowledge graph provides new opportunities to resolve the contradiction between massive power resources and the continuously increasing demand for intelligent applications. In an attempt to fulfil the potential of knowledge graphs, deal with the various challenges faced, and obtain insights towards business applications in smart grids, this work first presents a holistic study of knowledge-driven intelligent application integration. Specifically, a detailed overview of electric power knowledge mining is provided. Then, an overview of the knowledge graph in smart grids is introduced. Moreover, the architecture of the big knowledge graph platform for smart grids and its critical technologies are described. Furthermore, this paper comprehensively elaborates on the application prospects enabled by knowledge graphs oriented to smart grids: power consumer service, decision-making in dispatching, and operation and maintenance of power equipment. Finally, issues and challenges are summarised. Comment: IET Generation, Transmission & Distribution
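At its core, a knowledge graph such as the survey describes stores facts as subject-predicate-object triples, and queries match patterns against them. A minimal sketch (the grid entities and relation names are invented purely for illustration):

```python
# Facts as (subject, predicate, object) triples -- the basic unit of a
# knowledge graph, here describing hypothetical power-grid equipment.
triples = {
    ("transformer_T1", "located_in", "substation_A"),
    ("substation_A", "feeds", "feeder_F3"),
    ("transformer_T1", "manufactured_by", "vendor_X"),
}

def query(triples, s=None, p=None, o=None):
    """Return all triples matching a pattern; None acts as a wildcard,
    mirroring the basic triple-pattern matching of graph query languages."""
    return {
        (ts, tp, to) for ts, tp, to in triples
        if (s is None or ts == s) and (p is None or tp == p) and (o is None or to == o)
    }

# Everything known about transformer_T1:
print(sorted(query(triples, s="transformer_T1")))
```

Production systems layer schema, storage, and inference on top of this model, but the triple-pattern query is the primitive that applications such as dispatch decision support build on.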

    European Language Grid

    This open access book provides an in-depth description of the EU project European Language Grid (ELG). Its motivation lies in the fact that Europe is a multilingual society with 24 official European Union Member State languages and dozens of additional languages including regional and minority languages. The only meaningful way to enable multilingualism and to benefit from this rich linguistic heritage is through Language Technologies (LT) including Natural Language Processing (NLP), Natural Language Understanding (NLU), Speech Technologies and language-centric Artificial Intelligence (AI) applications. The European Language Grid provides a single umbrella platform for the European LT community, including research and industry, effectively functioning as a virtual home, marketplace, showroom, and deployment centre for all services, tools, resources, products and organisations active in the field. Today the ELG cloud platform already offers access to more than 13,000 language processing tools and language resources. It enables all stakeholders to deposit, upload and deploy their technologies and datasets. The platform also supports the long-term objective of establishing digital language equality in Europe by 2030, i.e. a situation in which all European languages enjoy equal technological support. This is the very first book dedicated to Language Technology and NLP platforms. Cloud technology has only recently matured enough to make the development of a platform like ELG feasible on a larger scale. The book comprehensively describes the results of the ELG project. Following an introduction, the content is divided into four main parts: (I) ELG Cloud Platform; (II) ELG Inventory of Technologies and Resources; (III) ELG Community and Initiative; and (IV) ELG Open Calls and Pilot Projects.