
    The Knowledge Graph Construction in the Educational Domain: Take an Australian School Science Course as an Example

    The evolution of Internet technology and artificial intelligence has changed the ways we gain knowledge, reaching into every aspect of our lives. In recent years, knowledge graph technology, as one of the artificial intelligence techniques, has been widely used in the educational domain. However, few studies are dedicated to the construction of knowledge graphs for K-10 education in Australia; most existing studies work only at the theory level, and little research shows the practical pipeline steps needed to complete the complex flow of constructing an educational knowledge graph. In addition, most studies focus on concept entities and their relations but ignore the features of concept entities and the relations between learning knowledge points and required learning outcomes. To overcome these shortcomings and provide a data foundation for downstream research and applications in this educational domain, this research analysed the construction processes for a knowledge graph of Australian K-10 education at the theory level and implemented them in practice. We took the Year 9 science course as a typical data source and fed it to the proposed method, called K10EDU-RCF-KG, to construct this educational knowledge graph and to enrich the features of its entities. The construction pipeline employed a variety of techniques. First, POI and OCR techniques were applied to convert Word and PDF files into text, followed by the development of an educational resources management platform in which the machine-readable text is stored in a relational database management system. Second, we designed an architecture framework to guide the construction pipeline. Following this architecture, the educational ontology was designed, and a backend microservice was developed to perform entity extraction and relation extraction with NLP-NER and probabilistic association rule mining algorithms, respectively. We also adopted NLP-POS techniques to find neighbouring adjectives related to entities in order to enrich the features of these concept entities. In addition, a subject dictionary was introduced during the refinement of the knowledge graph, which reduced the noise rate of the knowledge graph entities. Furthermore, learning outcome entities and topic knowledge point entities were directly connected, which provides a clear and efficient way to identify which learning objectives relate to a given learning unit. Finally, a set of REST APIs for querying this educational knowledge graph was developed.
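
    As a hedged illustration of the NLP-POS enrichment step described above, the sketch below uses spaCy to collect adjectives that modify known concept entities; spaCy itself, the example concepts and the sample sentence are assumptions for illustration, since the abstract does not name a specific NLP library.

        # Illustrative sketch of POS-based feature enrichment for concept
        # entities. spaCy and the example concepts are assumptions; the
        # paper does not name a specific NLP library.
        # Requires: pip install spacy && python -m spacy download en_core_web_sm
        import spacy

        nlp = spacy.load("en_core_web_sm")

        concepts = {"reaction", "energy"}  # hypothetical concept entities from NER

        def adjective_features(text, concepts):
            """Collect adjectives that grammatically modify known concept entities."""
            features = {c: set() for c in concepts}
            for token in nlp(text):
                # amod = adjectival modifier; token.head is the noun it modifies
                if token.dep_ == "amod" and token.head.lemma_.lower() in concepts:
                    features[token.head.lemma_.lower()].add(token.lemma_.lower())
            return features

        print(adjective_features(
            "An exothermic reaction releases thermal energy into the surroundings.",
            concepts,
        ))
        # expected: {'reaction': {'exothermic'}, 'energy': {'thermal'}}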

    The Potential of Big Data Research in HealthCare for Medical Doctors’ Learning

    The main goal of this article is to identify the main dimensions of a proposed model for increasing the potential of big data research in healthcare for medical doctors' (MDs') learning, a major issue in continuous medical education. The paper employs a systematic literature review of the main scientific databases (PubMed and Google Scholar), using the VOSviewer software tool, which enables the visualization of scientific landscapes. The analysis includes a co-authorship analysis as well as the co-occurrence of terms and keywords. The results lead to the construction of the proposed learning model, which comprises four key areas of health big data for MDs' learning: 1) data transformation, related to the learning that occurs through medical systems; 2) health intelligence, which includes learning about health innovation based on prediction and forecasting processes; 3) data leveraging, which concerns learning about patient information; and 4) the learning process, related to clinical decision-making focused on disease diagnosis and methods to improve treatments. Practical models gathered from the scientific databases can boost the learning process and revolutionise the medical industry, as they store the most recent knowledge and innovative research.
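
    The term co-occurrence analysis that VOSviewer performs can be approximated in a few lines; the toy sketch below counts how often pairs of keywords appear together across papers. The keyword sets are invented for illustration.

        # Toy keyword co-occurrence count, approximating the kind of term map
        # VOSviewer builds; the keyword sets below are invented for illustration.
        from collections import Counter
        from itertools import combinations

        papers = [
            {"big data", "machine learning", "clinical decision-making"},
            {"big data", "health intelligence", "forecasting"},
            {"clinical decision-making", "machine learning", "diagnosis"},
        ]

        cooccurrence = Counter()
        for keywords in papers:
            for pair in combinations(sorted(keywords), 2):
                cooccurrence[pair] += 1

        # the strongest links would become the edges of the term map
        for pair, count in cooccurrence.most_common(3):
            print(pair, count)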

    The Web as a Corpus and for Building corpora in the Teaching of Specialised Translation: The Example of Texts in Healthcare

    One of the key issues faced by translators and translation students of specialised texts is finding the equivalents of terms in the L2 of the field in question. A greater challenge, however, is forming the textual environment with the appropriate collocations (adjectives, nouns, verbs) for those terms in the language for special purposes (LSP). The web offers the most convenient and immediate solution, providing access to up-to-date language data that presents terms in original contexts and helps overcome the shortcomings of hard-copy lexicographic resources. Taking into account the importance of documentation skills in the training of translators of specialised texts, this paper examines the use of the Web as a Mega Corpus that can be read directly with Google, and as a means of constructing corpora automatically with the help of the WebBootCat software. The texts dealt with in this paper come from the healthcare field, an important sector of the public service.
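
    As a hedged sketch of the collocation-hunting task described above, the snippet below pulls adjective, noun and verb collocates of a term from a small corpus using NLTK; NLTK and the two sample sentences are stand-ins for a corpus actually built with Google or WebBootCat.

        # Minimal collocate finder for a specialised term; NLTK and the sample
        # corpus are illustrative stand-ins for a WebBootCat-built corpus.
        from collections import Counter

        import nltk

        nltk.download("punkt", quiet=True)
        nltk.download("averaged_perceptron_tagger", quiet=True)

        corpus = (
            "The attending physician ordered a routine blood test. "
            "An abnormal blood test may require a follow-up examination."
        )

        term = "test"
        window = 2  # words inspected on each side of the term
        tagged = nltk.pos_tag(nltk.word_tokenize(corpus.lower()))

        collocates = Counter()
        for i, (word, tag) in enumerate(tagged):
            if word == term:
                lo, hi = max(0, i - window), min(len(tagged), i + window + 1)
                for j in range(lo, hi):
                    w, t = tagged[j]
                    # keep adjectives (JJ*), nouns (NN*) and verbs (VB*)
                    if j != i and t[:2] in ("JJ", "NN", "VB"):
                        collocates[w] += 1

        print(collocates.most_common(5))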

    The Artificial Intelligence in Digital Pathology and Digital Radiology: Where Are We?

    This book is a reprint of the Special Issue entitled "The Artificial Intelligence in Digital Pathology and Digital Radiology: Where Are We?". Artificial intelligence is extending into the world of both digital radiology and digital pathology, and involves many scholars in the areas of biomedicine, technology, and bioethics. There is a particular need for scholars to focus both on the innovations in this field and on the problems hampering integration into a robust and effective process within stable health care models. Many professionals involved in these fields of digital health were encouraged to contribute their experiences, and this book contains contributions from experts across different fields. Aspects of integration in the health domain are addressed, with particular space dedicated to an overview of the challenges, opportunities, and problems in both radiology and pathology. In-depth clinical studies are provided in cardiology, the histopathology of breast cancer, and colonoscopy. Dedicated studies were based on surveys that investigated the opinions, attitudes, and self-perceptions of students and insiders regarding the integration of artificial intelligence in this field.

    Text Mining of Patient Demographics and Diagnoses from Psychiatric Assessments

    Automatic extraction of patient demographics and psychiatric diagnoses from clinical notes allows for the collection of patient data on a large scale. These data could be used for a variety of research purposes, including outcomes studies and the development of clinical trials. However, current research has not yet discussed the automatic extraction of demographics and psychiatric diagnoses in detail. The aim of this study is to apply text mining to extract patient demographics (age, gender, marital status, education level) and admission diagnoses from psychiatric assessments at a mental health hospital, and to assign codes to each category. Gender is coded as Male or Female; marital status as Single, Married, Divorced, or Widowed; and education level from Some High School through Graduate Degree (PhD/JD/MD, etc.). Classifications for diagnoses are based on the DSM-IV. For each category, a rule-based approach was developed using keyword-based regular expressions as well as constituency trees and typed dependencies. We employ a two-step approach that first maximizes recall through keyword-based patterns and, where necessary, maximizes precision with NLP-based rules that handle ambiguity. To develop and evaluate the method, we annotated a corpus of 200 assessments, using a portion of the corpus for development and the rest as a test set. F-scores were satisfactory for each category (Age: 0.997; Gender: 0.989; Primary Diagnosis: 0.983; Marital Status: 0.875; Education Level: 0.851), as was coding accuracy (Age: 1.0; Gender: 0.989; Primary Diagnosis: 0.922; Marital Status: 0.889; Education Level: 0.778). These results indicate that a rule-based approach can be considered for extracting these types of information in the psychiatric field. At the same time, performance dropped from the development set to the test set, partly because the rules developed need more generality.
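
    A hedged sketch of the first, recall-oriented step is shown below: keyword-based regular expressions over the note text. The patterns and the sample note are invented examples, not the study's actual rule set.

        # Illustrative keyword-based patterns for demographic extraction; these
        # regexes are invented examples, not the study's actual rules.
        import re

        PATTERNS = {
            "age": re.compile(r"\b(\d{1,3})[- ]year[- ]old\b", re.I),
            "gender": re.compile(r"\b(male|female)\b", re.I),
            "marital_status": re.compile(r"\b(single|married|divorced|widowed)\b", re.I),
        }

        def extract_demographics(note):
            """First pass: maximize recall with simple keyword patterns."""
            found = {}
            for field, pattern in PATTERNS.items():
                match = pattern.search(note)
                if match:
                    found[field] = match.group(1).lower()
            return found

        note = "The patient is a 42-year-old married female admitted for evaluation."
        print(extract_demographics(note))
        # {'age': '42', 'gender': 'female', 'marital_status': 'married'}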

    Language modelling for clinical natural language understanding and generation

    One of the long-standing objectives of Artificial Intelligence (AI) is to design and develop algorithms for social good, including tackling public health challenges. In the era of digitisation, with an unprecedented amount of healthcare data being captured in digital form, the analysis of healthcare data at scale can lead to better research of diseases, better monitoring of patient conditions and, more importantly, improved patient outcomes. However, many AI-based analytic algorithms rely solely on structured healthcare data such as bedside measurements and test results, which account for only 20% of all healthcare data; the remaining 80% is unstructured, including textual data such as clinical notes and discharge summaries, and is still underexplored. Conventional Natural Language Processing (NLP) algorithms designed for clinical applications rely on shallow matching, templates and non-contextualised word embeddings, which leads to a limited understanding of contextual semantics. Although recent advances in NLP have demonstrated promising performance on a variety of tasks in the general domain with contextualised language models, most of these generic NLP algorithms struggle at specific clinical NLP tasks that require biomedical knowledge and reasoning. Moreover, there is limited research on generative NLP algorithms that automatically generate clinical reports and summaries by considering salient clinical information. This thesis aims to design and develop novel NLP algorithms, especially clinically driven contextualised language models, to understand textual healthcare data and generate clinical narratives that can potentially support clinicians, medical scientists and patients.

    The first contribution of this thesis focuses on capturing phenotypic information of patients from clinical notes, which is important for profiling a patient's situation and improving patient outcomes. The thesis proposes a novel self-supervised language model, named Phenotypic Intelligence Extraction (PIE), to annotate phenotypes from clinical notes with the detection of contextual synonyms and an enhancement to reason with numerical values. The second contribution is to demonstrate the utility and benefits of using patients' phenotypic features in clinical use cases by predicting patient outcomes in Intensive Care Units (ICU) and identifying patients at risk of specific diseases with better accuracy and model interpretability. The third contribution is to propose generative models that produce clinical narratives to automate and accelerate report writing and summarisation by clinicians. The thesis first proposes a novel summarisation language model named PEGASUS, which surpasses or is on par with the state of the art on 12 downstream datasets, including biomedical literature from PubMed. PEGASUS is further extended to generate medical scientific documents from input tabular data.
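
    As a hedged illustration of the summarisation component, the sketch below runs a publicly released PEGASUS checkpoint fine-tuned on PubMed via the Hugging Face transformers library; the checkpoint name, the library and the placeholder input are assumptions, not the thesis's own extended model.

        # Hedged sketch: abstractive summarisation with a public PEGASUS
        # checkpoint; "google/pegasus-pubmed" and the `transformers` library
        # are assumptions, not the thesis's extended model.
        from transformers import PegasusForConditionalGeneration, PegasusTokenizer

        model_name = "google/pegasus-pubmed"
        tokenizer = PegasusTokenizer.from_pretrained(model_name)
        model = PegasusForConditionalGeneration.from_pretrained(model_name)

        article = (
            "Patients admitted to intensive care units with suspected sepsis "
            "were screened for early markers of organ dysfunction ..."
        )  # placeholder biomedical text

        inputs = tokenizer(article, truncation=True, return_tensors="pt")
        summary_ids = model.generate(**inputs, num_beams=4, max_length=128)
        print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))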

    Doctor of Philosophy

    Public health surveillance systems are crucial for the timely detection of and response to public health threats. Since the terrorist attacks of September 11, 2001, and the release of anthrax in the following month, there has been heightened interest in public health surveillance. The years immediately following these attacks saw increased awareness and funding from the federal government, which significantly strengthened the United States' surveillance capabilities; however, despite these improvements, today's public health surveillance systems face substantial challenges. Problems with current surveillance systems include: a) a failure to leverage unstructured public health data for surveillance purposes; and b) a lack of information integration and of the ability to leverage resources, applications or other surveillance efforts, because systems are built on a centralized model. This research addresses these problems by focusing on the development and evaluation of new informatics methods to improve public health surveillance. We first identified a current public health surveillance workflow that is affected by the problems described and could be enhanced through current informatics techniques; the 122 Mortality Surveillance for Pneumonia and Influenza was chosen as the primary use case for this dissertation work. The second step involved demonstrating the feasibility of using unstructured public health data, in this case death certificates. For this we created and evaluated a pipeline, composed of a detection rule and a natural language processor, for the coding of death certificates and the identification of pneumonia and influenza cases. The second problem was addressed by presenting the rationale for a federated model that leverages grid technology concepts and tools for the sharing and epidemiological analysis of public health data. As a case study of this approach, a secured virtual organization was created in which users can access two grid data services, using death certificates from the Utah Department of Health, and two analytical grid services, MetaMap and R. A scientific workflow was created using the published services to replicate the mortality surveillance workflow. To validate these approaches and provide proofs of concept, a series of real-world scenarios was conducted.
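
    A toy version of the detection-rule half of that pipeline is sketched below; the keyword list is illustrative only, since the dissertation's actual pipeline paired a detection rule with NLP output from MetaMap.

        # Toy detection rule for pneumonia-and-influenza (P&I) cases in death
        # certificate text; the keyword list is illustrative, not the
        # dissertation's actual rule.
        import re

        PI_PATTERN = re.compile(r"\b(pneumonia|influenza|flu)\b", re.I)

        def is_pi_case(cause_of_death_text):
            """Flag a death certificate as a possible P&I case."""
            return bool(PI_PATTERN.search(cause_of_death_text))

        certificates = [
            "Acute respiratory failure due to influenza A",
            "Congestive heart failure; diabetes mellitus",
        ]
        for text in certificates:
            print(is_pi_case(text), "-", text)
        # True - Acute respiratory failure due to influenza A
        # False - Congestive heart failure; diabetes mellitus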

    Artificial intelligence applications in disease diagnosis and treatment: recent progress and outlook

    The use of computers and other technologies to replicate human-like intelligent behaviour and critical thinking is known as artificial intelligence (AI). The development of AI-assisted applications and big data research has accelerated as a result of the rapid advances in computing power, sensor technology, and platform accessibility. This review examined AI models and algorithms for planning and diagnosing endodontic procedures. The search evaluated information on AI and its function in the field of endodontics, drawing on databases such as Google Scholar, PubMed, and Science Direct, with the search criterion of original research articles published in English. Online appointment scheduling, online check-in at medical facilities, digitisation of medical records, reminder calls for follow-up appointments and immunisation dates for children and pregnant women, as well as drug dosage algorithms and adverse-effect warnings when prescribing multidrug combinations, are just a few of the tasks that already use artificial intelligence. Data from the review supported the conclusion that AI can play a significant role in endodontics, including the identification of apical lesions; the classification and numbering of teeth; the detection of dental caries, periodontitis, and periapical disease; the diagnosis of various dental problems; aiding dentists in making referrals; and helping them develop more precise treatment plans for dental disorders. Although AI has the potential to drastically alter how medicine is practised in ways that were previously unthinkable, many of its practical applications are still in their infancy and need additional research and development. Over the past ten years, artificial intelligence in ophthalmology has also grown significantly and will continue to do so as imaging techniques and data-processing algorithms improve.