
    An Event-Ontology-Based Approach to Constructing Episodic Knowledge from Unstructured Text Documents

    Document summarization is an important function for knowledge management as a digital library of text documents grows. It allows documents to be presented concisely for easy reading and understanding. Traditionally, document summarization adopts sentence-based mechanisms that identify and extract key sentences from long documents and assemble them together. Although that approach is useful for providing an abstract of a document, it cannot extract the relationships or sequence of a set of related events (also called episodes). This paper proposes an event-oriented ontology approach to constructing episodic knowledge to facilitate the understanding of documents. We also empirically evaluated the proposed approach using instruments developed based on Bloom’s Taxonomy. The results reveal that the proposed event-oriented ontology approach outperformed the traditional text summarization approach in capturing conceptual and procedural knowledge, but the latter was still better at delivering factual knowledge.

    Big data analytics for preventive medicine

    © 2019, Springer-Verlag London Ltd., part of Springer Nature. Medical data is among the most rewarding and yet most complicated data to analyze. How can healthcare providers use modern data analytics tools and technologies to analyze and create value from complex data? Data analytics promises to efficiently discover valuable patterns in large amounts of unstructured, heterogeneous, non-standard and incomplete healthcare data. It not only forecasts but also supports decision making, and is increasingly seen as a breakthrough in the ongoing effort to improve the quality of patient care and reduce healthcare costs. The aim of this study is to provide a comprehensive and structured overview of the extensive research on data analytics methods for disease prevention. This review first introduces disease prevention and its challenges, followed by traditional prevention methodologies. We summarize state-of-the-art data analytics algorithms used for disease classification, clustering (e.g., detecting an unusually high incidence of a particular disease), anomaly detection and association mining, together with their respective advantages, drawbacks and guidelines for selecting a specific model, followed by a discussion of recent developments and successful applications of disease prevention methods. The article concludes with open research challenges and recommendations.

    Extraction of patterns in selected network traffic for a precise and efficient intrusion detection approach

    This thesis investigates a precise and efficient pattern-based intrusion detection approach that extracts patterns from sequences of adversarial commands. As organisations place more assets within the cyber domain, mitigating the potential exposure of these assets becomes increasingly imperative. Machine learning is the application of learning algorithms that extract knowledge from data, determine patterns between data points and make predictions. In this work, machine learning algorithms were used to extract patterns from sequences of commands in order to precisely and efficiently detect adversaries using the Secure Shell (SSH) protocol. Since SSH is one of the most predominant methods of accessing systems, it is also a prime target for cyber-criminal activity. For this study, deep packet inspection was applied to data acquired from three medium-interaction honeypots emulating the SSH service. Feature selection was used to enhance the performance of the selected machine learning algorithms. A pre-processing procedure was developed to organise the acquired datasets so that they present the sequence of adversary commands per unique SSH session. The pre-processing phase also generated a reduced version of each dataset that evenly and coherently represents the respective full dataset. This study examined whether the machine learning algorithms can extract more precise patterns, more efficiently, from the reduced sequence-of-commands datasets than from their respective full datasets; a reduced dataset also requires less storage space than the corresponding full dataset. The algorithms selected for this study were the Naïve Bayes, Markov chain, Apriori and Eclat algorithms. The results show that the algorithms applied to the reduced datasets could extract additional, more precise patterns compared to their respective full datasets. It was also determined that the Naïve Bayes and Markov chain algorithms are more efficient at processing the reduced datasets than the full datasets. The best-performing algorithm at extracting more precise patterns efficiently from the reduced datasets was the Markov chain algorithm; the greatest improvement in processing a reduced dataset was 97.711%. This study contributes to the domain of pattern-based intrusion detection by providing an approach that can precisely and efficiently detect adversaries utilising SSH communications to gain unauthorised access to a system.
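    The Markov chain idea described above can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the command strings and sessions are hypothetical, and a first-order chain over per-session command sequences is assumed.

    ```python
    from collections import defaultdict

    def train_markov_chain(sessions):
        """Estimate first-order transition probabilities between commands."""
        counts = defaultdict(lambda: defaultdict(int))
        for session in sessions:
            for a, b in zip(session, session[1:]):
                counts[a][b] += 1
        return {
            cmd: {nxt: n / sum(nxts.values()) for nxt, n in nxts.items()}
            for cmd, nxts in counts.items()
        }

    def sequence_likelihood(chain, session, floor=1e-6):
        """Score how well a session matches the learned transition model."""
        p = 1.0
        for a, b in zip(session, session[1:]):
            p *= chain.get(a, {}).get(b, floor)  # unseen transitions get a small floor
        return p

    # Hypothetical adversary sessions (one command sequence per SSH session)
    sessions = [
        ["uname -a", "wget", "chmod", "./run"],
        ["uname -a", "wget", "chmod", "./run"],
        ["uname -a", "cat /etc/passwd", "wget", "chmod"],
    ]
    chain = train_markov_chain(sessions)
    ```

    A session resembling the training sequences then scores a much higher likelihood than an unfamiliar one, which is the basis for flagging known adversarial behaviour.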

    New Fundamental Technologies in Data Mining

    The progress of data mining technology and its broad public popularity establish a need for a comprehensive text on the subject. The series of books entitled "Data Mining" addresses this need by presenting in-depth descriptions of novel mining algorithms and many useful applications. Beyond helping readers understand each section deeply, the two books present useful hints and strategies for solving the problems discussed in the chapters. The contributing authors have highlighted many future research directions that will foster multi-disciplinary collaborations and hence lead to significant development in the field of data mining.

    A fuzzy-based medical system for pattern mining in a distributed environment: Application to diagnostic and co-morbidity

    In this paper we address the extraction of hidden knowledge from medical records using data mining techniques, such as association rules in conjunction with fuzzy logic, in a distributed environment. A significant challenge in this domain is that, although many studies are devoted to analysing health data, very few focus on the understanding and interpretability of the data and of the hidden patterns present within it. Many health data analysis studies have focused on classification, prediction or knowledge extraction, leaving end users with little interpretability or understanding of the results. This is due to the use of black-box algorithms, or because the nature of the data is not represented correctly. It is therefore necessary to focus the analysis not only on knowledge extraction but also on the transformation and processing of the data, to better model its nature. Techniques such as association rule mining and fuzzy logic help to improve the interpretability of the data and to treat the uncertainty inherent in real-world data. To this end, we propose a system that automatically: a) pre-processes the database by transforming and adapting the data for the data mining process and enriching it to generate more interesting patterns; b) performs the fuzzification of the medical database to represent and analyse real-world medical data with its inherent uncertainty; c) discovers interrelations and patterns amongst different features (diagnosis, hospital discharge, etc.); and d) visualizes the obtained results efficiently to facilitate the analysis and improve the interpretability of the extracted information. Our proposed system yields a significant increase in the comprehension and interpretability of medical data for end users, allowing them to analyse the data correctly and make the right decisions. We present one practical case using two health-related datasets to demonstrate the feasibility of our proposal on real data. Funding: Junta de Andalucía P18-RT-1765; Ministry of Universities through the E
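    The fuzzification step (point b above) maps crisp numeric attributes onto linguistic labels so association rules can handle the uncertainty of real-world values. A minimal sketch, assuming triangular membership functions and a hypothetical "age" attribute; the label names and breakpoints are illustrative, not the paper's actual fuzzy sets.

    ```python
    def triangular(x, a, b, c):
        """Triangular membership function rising on [a, b] and falling on [b, c]."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    # Hypothetical fuzzy sets for a numeric attribute such as patient age
    age_sets = {
        "young":  (0, 20, 40),
        "middle": (30, 45, 60),
        "old":    (50, 70, 120),
    }

    def fuzzify(value, sets):
        """Return the degree of membership of `value` in each fuzzy set."""
        return {label: round(triangular(value, *abc), 3) for label, abc in sets.items()}
    ```

    A 35-year-old patient, for instance, is partly "young" and partly "middle", and a rule miner can then work with these graded labels instead of brittle crisp cut-offs.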

    Exploiting semantics for improving clinical information retrieval

    Clinical information retrieval (IR) presents several challenges, including terminology mismatch and granularity mismatch. One of the main objectives in clinical IR is to fill the semantic gap between queries and documents by going beyond keyword matching. To address these issues, we attempt to use semantic information to improve the performance of clinical IR systems by representing queries in an expressive and meaningful context. We propose query context modeling and present two novel approaches to modeling medical query contexts. The first approach models medical query contexts by mining semantic-based association rules for improving clinical text retrieval: the query context is derived from the rules that cover the query, weighted according to their semantic relatedness to the query concepts. In the second approach we model a representative query context by developing a query domain ontology: we extract all the concepts that have a semantic relationship with the query concept(s) in the UMLS ontologies, and the query context consists of concepts extracted from this ontology, weighted according to their semantic relatedness to the query concept(s). The query context is then exploited in query expansion and re-ranking over patient records to improve clinical retrieval performance. We evaluate this approach on the TREC Medical Records dataset. Results show that our proposed approach significantly improves retrieval performance compared to a classic keyword-based IR model.
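    The weighted query expansion described above can be sketched as follows. This is an illustrative simplification, not the paper's system: the relatedness scores are hypothetical stand-ins for the semantic-relatedness measures computed over UMLS concepts.

    ```python
    def expand_query(query_terms, related, top_k=3):
        """Expand a query with its top-k context concepts, weighted by relatedness.

        Original query terms keep full weight (1.0); expansion concepts are
        down-weighted by their semantic-relatedness score in [0, 1].
        """
        weighted = {t: 1.0 for t in query_terms}
        candidates = sorted(related.items(), key=lambda kv: kv[1], reverse=True)
        for concept, score in candidates[:top_k]:
            if concept not in weighted:
                weighted[concept] = score
        return weighted

    # Hypothetical relatedness scores for the query "myocardial infarction"
    related = {"heart attack": 0.95, "troponin": 0.7, "chest pain": 0.6, "aspirin": 0.3}
    expanded = expand_query(["myocardial infarction"], related)
    ```

    The expanded, weighted term set is then handed to the retrieval model, so documents mentioning closely related concepts still match the query even without an exact keyword overlap.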

    Ontology of Big Data analysis

    The object of this research is the process of Big Data (BD) analysis. One of the most problematic issues is the lack of a clear classification of BD analysis methods; such a classification would greatly facilitate the selection of an optimal and efficient algorithm for analyzing given data depending on its structure. In the course of the study, Data Mining methods, Tech Mining technologies, MapReduce technology, data visualization, and other analysis technologies and techniques were examined. This allows their main characteristics and features to be determined in order to construct a formal analysis model for Big Data. Rules for analyzing Big Data are developed in the form of an ontological knowledge base, with the aim of using it to process and analyze any data. A classifier for forming a set of Big Data analysis rules has been obtained. Each BD instance has a set of parameters and criteria that determine the applicable methods and technologies of analysis; the purpose of the BD, its structure and its content determine the techniques and technologies for further analysis. Thanks to the developed ontological knowledge base of BD analysis, built with Protégé 3.4.7, and the set of RABD rules encoded in it, the process of selecting methodologies and technologies for further analysis is shortened and the analysis of the selected BD is automated. The proposed approach to Big Data analysis has a number of features, in particular an ontological knowledge base built on modern methods of artificial intelligence. Thanks to this, a complete set of Big Data analysis rules can be obtained, provided the parameters and criteria of the specific Big Data are clearly analyzed. The processes of Big Data analysis are investigated. Using the developed formal model and a critical analysis of Big Data analysis methods and technologies, an ontology of Big Data analysis is constructed. Methods, models and tools are investigated for improving the Big Data analytics ontology and for more effective support of the development of structural elements of a model of a decision support system for Big Data management.

    Doctor of Philosophy

    With the growing national dissemination of the electronic health record (EHR), there are expectations that the public will benefit from biomedical research and discovery enabled by electronic health data. Clinical data are needed for many diseases and conditions to meet the demands of rapidly advancing genomic and proteomic research. Many biomedical research advancements require rapid access to clinical data as well as broad population coverage. A fundamental issue in the secondary use of clinical data for scientific research is the identification of study cohorts of individuals with a disease or medical condition of interest. The problem addressed in this work is the need for generalized, efficient methods to identify cohorts in the EHR for use in biomedical research. To approach this problem, an associative classification framework was designed with the goal of accurate and rapid identification of cases for biomedical research: (1) a set of exemplars for a given medical condition is presented to the framework, (2) a predictive rule set comprised of EHR attributes is generated by the framework, and (3) the rule set is applied to the EHR to identify additional patients that may have the specified condition. Based on this functionality, the approach was termed the ‘cohort amplification’ framework. The development and evaluation of the cohort amplification framework are the subject of this dissertation. An overview of the framework design is presented. Improvements to some standard associative classification methods are described and validated. A qualitative evaluation of predictive rules to identify diabetes cases and a study of the accuracy of identification of asthma cases in the EHR using framework-generated prediction rules are reported. The framework produced accurate and reliable rules to identify diabetes and asthma cases in the EHR and contributed to methods for the identification of biomedical research cohorts.
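    The three-step workflow above (exemplars in, rule set out, rules applied to the EHR) can be sketched with a toy associative-classification miner. Everything here is hypothetical: the attribute codes, patients and thresholds are illustrative, not the dissertation's actual method or data.

    ```python
    from itertools import combinations

    def mine_rules(exemplars, all_patients, min_support=0.6, min_conf=0.8, max_len=2):
        """Find attribute sets frequent among exemplar cases and predictive of case status."""
        case_ids = set(exemplars)
        rules = []
        attrs = sorted({a for p in exemplars for a in all_patients[p]})
        for r in range(1, max_len + 1):
            for combo in combinations(attrs, r):
                combo = frozenset(combo)
                cases = sum(combo <= all_patients[p] for p in case_ids)
                total = sum(combo <= feats for feats in all_patients.values())
                if total and cases / len(case_ids) >= min_support and cases / total >= min_conf:
                    rules.append(combo)
        return rules

    def amplify(rules, all_patients, known_cases):
        """Apply the rule set to flag additional patients who may have the condition."""
        return {p for p, feats in all_patients.items()
                if p not in known_cases and any(rule <= feats for rule in rules)}

    # Hypothetical EHR: each patient is a set of attribute codes
    patients = {
        "p1": {"dx:250.0", "rx:metformin", "lab:a1c_high"},
        "p2": {"dx:250.0", "lab:a1c_high"},
        "p3": {"dx:250.0", "rx:metformin"},  # not an exemplar; candidate for amplification
        "p4": {"dx:401.1"},
    }
    rules = mine_rules(["p1", "p2"], patients, min_support=0.9, min_conf=0.6)
    ```

    Applying `amplify(rules, patients, {"p1", "p2"})` flags p3 as a probable additional case, which is the "amplification" step of the workflow.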

    Using a combination of methodologies for improving medical information retrieval performance

    This thesis presents three approaches to improving the current state of medical information retrieval. At the time of this writing, the health industry is experiencing a massive change in terms of introducing technology into all aspects of health delivery. The work in this thesis adapts established concepts from the field of information retrieval to the field of medical information retrieval. In particular, we apply subtype filtering, ICD-9 codes, query expansion, and re-ranking methods to improve retrieval on medical texts. The first method applies association rule mining and cosine similarity measures. The second method applies subtype filtering and the Apriori algorithm. The third method uses ICD-9 codes to improve retrieval accuracy. Overall, we show that the current state of medical information retrieval has substantial room for improvement: our first two methods do not show significant improvements, while our third approach shows an improvement of up to 20%.
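    The cosine similarity measure used in the first method can be sketched over simple term-frequency vectors. This is a generic illustration of the measure, assuming whitespace tokenization; the thesis's actual weighting scheme may differ.

    ```python
    import math
    from collections import Counter

    def cosine_similarity(text_a, text_b):
        """Cosine similarity between two texts using raw term-frequency vectors."""
        a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
        dot = sum(a[t] * b[t] for t in set(a) & set(b))
        norm = (math.sqrt(sum(v * v for v in a.values()))
                * math.sqrt(sum(v * v for v in b.values())))
        return dot / norm if norm else 0.0
    ```

    Identical texts score 1.0 and texts with no shared terms score 0.0; ranking documents by this score against the query is the core retrieval step.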

    Discovering frequent patterns for in-flight incidents

    Objectives: To get a clearer picture of how in-flight medical emergencies are managed, data mining tools can be applied to facilitate knowledge discovery from data collected by existing studies. The objective of this work is to conceptualize the construction of a Clinical Decision Support System (CDSS) in three stages corresponding to the representation levels necessary to extract knowledge from information and raw data. Method: The method can be summarized in three parts: (1) searching for in-flight medical incident data, (2) validating this data using data mining tools, and (3) constructing the CDSS in three steps corresponding to the levels of knowledge representation. These steps are carried out using tools such as EORCA (Event Oriented Representation for Collaborative Activities), which includes action codification against an ontology and event representation. Result: Data processing services provide a good structuring of information about in-flight medical incidents, from which useful knowledge can be generated; this knowledge could improve the handling of future incidents, for example by adapting the on-board medical emergency equipment. This structuring can be facilitated by the use of a CDSS to fill in gaps, increase coherency, and provide decision makers with a more complete picture of the options available in a critical situation. Conclusion: We propose an evolving framework facilitating the description of in-flight medical emergencies, with the adequate data collection and appropriate information required for producing interesting rules and better decisions. The data collected feeds the organization of information, which can be improved over time by continuous integration of evidence gained from the incidents treated. Finally, we propose strengthening requirements concerning the medical equipment available on board, particularly in light of knowledge resulting from the selection and approval of interesting rules.