Identifying Outcomes of Care from Medical Records to Improve Doctor-Patient Communication
Between appointments, healthcare providers have limited interaction with their
patients, but patients have similar patterns of care. Medications have common side
effects; injuries have an expected healing time; and so on. By modeling patient
interventions with outcomes, healthcare systems can equip providers with better
feedback. In this work, we present a pipeline for analyzing medical records according
to an ontology directed at allowing closed-loop feedback between medical encounters.
Working with medical data from multiple domains, we use a combination of data
processing, machine learning, and clinical expertise to extract knowledge from patient
records. While our current focus is on technique, the ultimate goal of this research is
to inform development of a system using these models to provide knowledge-driven
clinical decision-making.
An Ontology-based Semantic Tagger for IE system
In this paper, we present a method for the semantic tagging of word chunks extracted from a written transcription of conversations. This work is part of an ongoing project for an information extraction system in the field of maritime Search And Rescue (SAR). Our purpose is to automatically annotate parts of texts with concepts from a SAR ontology. Our approach combines two knowledge sources, a SAR ontology and the Wordsmyth dictionary-thesaurus, and it uses a similarity measure for the classification. Evaluation is carried out by comparing the output of the system with key answers of predefined extraction templates.
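The abstract does not specify the similarity measure used for classification. As a hedged illustration only, the general idea of tagging a chunk with the ontology concept whose gloss is most similar to the chunk can be sketched with a simple Jaccard overlap between content words; the concept names, glosses, and stopword list below are invented for this sketch and are not taken from the paper:

```python
# Hypothetical sketch of ontology-based semantic tagging via gloss overlap.
# The SAR concepts and glosses here are invented for illustration; the
# paper's actual ontology, dictionary entries, and similarity measure differ.

STOPWORDS = {"a", "an", "the", "of", "in", "to", "for", "and", "or", "on", "as"}

def tokens(text):
    """Lowercase, split on whitespace, and drop stopwords."""
    return {w for w in text.lower().split() if w not in STOPWORDS}

def overlap_similarity(text_a, text_b):
    """Jaccard overlap between the content-word sets of two texts."""
    a, b = tokens(text_a), tokens(text_b)
    return len(a & b) / max(len(a | b), 1)

def tag_chunk(chunk, ontology):
    """Return the concept whose gloss is most similar to the chunk."""
    return max(ontology, key=lambda c: overlap_similarity(chunk, ontology[c]))

sar_ontology = {
    "Vessel":   "ship boat craft used for travel on water",
    "Distress": "state of danger or urgent need of assistance",
    "Position": "geographic location expressed as coordinates",
}

print(tag_chunk("a small fishing boat adrift on the water", sar_ontology))
# The chunk shares "boat" and "water" with the Vessel gloss, so it is
# tagged "Vessel".
```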
Meaning construction in popular science: an investigation into cognitive, digital, and empirical approaches to discourse reification
This thesis uses cognitive linguistics and digital humanities techniques to analyse abstract conceptualization in a corpus of popular science texts. Combining techniques from Conceptual Integration Theory, corpus linguistics, data-mining, cognitive pragmatics and computational linguistics, it presents a unified approach to understanding cross-domain mappings in this area, and through case studies of key extracts, describes how concept integration in these texts operates.
In more detail, Part I of the thesis describes and implements a comprehensive procedure for semantically analysing large bodies of text using the recently-completed database of the Historical Thesaurus of English. Using log-likelihood statistical measures and semantic annotation techniques on a 600,000 word corpus of abstract popular science, this part establishes both the existence and the extent of significant analogical content in the corpus. Part II then identifies samples which are particularly high in analogical content from the corpus, and proposes an adaptation of empirical and corpus methods to support and enhance conceptual integration (sometimes called conceptual blending) analyses, informed by Part I's methodologies for the study of analogy on a wider scale. Finally, the thesis closes with a detailed analysis, using this methodology, of examples taken from the example corpus. This analysis illustrates those conclusions which can be drawn from such work, completing the methodological chain of reasoning from wide-scale corpora to narrow-focus semantics, and providing data about the nature of highly-abstract popular science as a genre.
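The log-likelihood measure commonly used in corpus linguistics for this kind of keyness comparison is Dunning's G2 statistic; the sketch below shows how it is typically computed for one word's frequency in a study corpus against a reference corpus. The token counts in the usage example are invented for illustration (only the 600,000-token corpus size comes from the abstract), and this is a generic sketch of the standard formula, not the thesis's own implementation:

```python
import math

def log_likelihood(a, b, c, d):
    """Dunning's log-likelihood (G2) keyness score for a word occurring
    `a` times in a study corpus of `c` tokens and `b` times in a
    reference corpus of `d` tokens."""
    e1 = c * (a + b) / (c + d)   # expected frequency in the study corpus
    e2 = d * (a + b) / (c + d)   # expected frequency in the reference corpus
    g2 = 0.0
    if a > 0:
        g2 += a * math.log(a / e1)
    if b > 0:
        g2 += b * math.log(b / e2)
    return 2 * g2

# Hypothetical counts: a word seen 120 times in a 600,000-token study
# corpus versus 40 times in a 1,000,000-token reference corpus.
score = log_likelihood(120, 40, 600_000, 1_000_000)
print(round(score, 2))
# A score above 3.84 (chi-squared, 1 d.f.) is conventionally taken as
# significant at p < 0.05.
```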
The thesis' original contribution to knowledge is therefore twofold: while contributing to the understanding of the reification of abstractions in discourse, it also focuses on methodological enhancements to existing tools and approaches, aiming to contribute to the established tradition of both analytic and procedural work advancing the digital humanities in the area of language and discourse.