
    The development of an ontology model for early identification of children with specific learning disabilities

    Ontology-based knowledge representation is explored in the special education environment, as little attention has been given to specific learning disabilities such as dyslexia, dysgraphia, and dyscalculia. This paper therefore aims to capture knowledge in the special education domain, represent it using an ontology-based approach, and make it effective for early identification of children who might have specific learning disabilities. The step-by-step development of the ontology is presented following the five phases of the ontological engineering approach: specification, conceptualization, formalization, implementation, and maintenance. The content and structure of the ontological model are described, and the applicability of the ontology for early identification and recommendation is demonstrated.
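    As a rough illustration of what such an ontology-based representation can look like, here is a minimal sketch using the owlready2 Python library; the class and property names (Child, Indicator, showsIndicator) are hypothetical stand-ins, not the paper's actual model.

```python
# Minimal sketch of an ontology for learning-disability screening.
# owlready2 is an assumption; all names here are illustrative.
from owlready2 import Thing, ObjectProperty, get_ontology

onto = get_ontology("http://example.org/sld.owl")

with onto:
    class Child(Thing): pass
    class LearningDisability(Thing): pass
    class Dyslexia(LearningDisability): pass
    class Dysgraphia(LearningDisability): pass
    class Dyscalculia(LearningDisability): pass
    class Indicator(Thing): pass  # an observable difficulty, e.g. letter reversal

    class showsIndicator(ObjectProperty):   # child -> observed indicator
        domain = [Child]
        range = [Indicator]

    class indicatorOf(ObjectProperty):      # indicator -> disability it suggests
        domain = [Indicator]
        range = [LearningDisability]

# Instantiate and query: which disabilities do a child's indicators point to?
dyslexia = onto.Dyslexia("dyslexia_profile")
letter_reversal = onto.Indicator("letter_reversal")
letter_reversal.indicatorOf = [dyslexia]
child = onto.Child("child_01", showsIndicator=[letter_reversal])
suspected = {d for i in child.showsIndicator for d in i.indicatorOf}
print(suspected)  # e.g. {sld.dyslexia_profile}
```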

    Five-year trajectories of multimorbidity patterns in an elderly Mediterranean population using Hidden Markov Models

    This study aimed to analyse the trajectories and mortality of multimorbidity patterns in patients aged 65 to 99 years in Catalonia (Spain). Five-year (2012–2016) data from 916,619 participants in a primary care, population-based electronic health record database (Information System for Research in Primary Care, SIDIAP) were included in this retrospective cohort study. Individual longitudinal trajectories were modelled with a Hidden Markov Model across multimorbidity patterns. We computed the mortality hazard using Cox regression models to estimate survival within multimorbidity patterns. Ten multimorbidity patterns were originally identified, and two more states (death and drop-out) were subsequently added. At baseline, the most frequent cluster was the Non-Specific Pattern (42%) and the least frequent the Multisystem Pattern (1.6%). Most participants stayed in the same cluster over the 5-year follow-up period, from 92.1% in the Nervous, Musculoskeletal pattern to 59.2% in the Cardio-Circulatory and Renal pattern. The highest mortality rates were observed for patterns that included cardio-circulatory diseases: Cardio-Circulatory and Renal (37.1%); Nervous, Digestive and Circulatory (31.8%); and Cardio-Circulatory, Mental, Respiratory and Genitourinary (28.8%). This study demonstrates the feasibility of characterizing multimorbidity patterns over time. Multimorbidity trajectories were generally stable, although changes in specific multimorbidity patterns were observed. The Hidden Markov Model is useful for modelling transitions across multimorbidity patterns and mortality risk. Our findings suggest that health interventions targeting specific multimorbidity patterns may reduce mortality in patients with multimorbidity. Funding: Carlos III Institute of Health; Ministry of Economy and Competitiveness (Spain); European Regional Development Fund; Department of Health of the Catalan Government; Catalan Government.
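    For readers curious how such a model is fitted in practice, the following is a minimal sketch using the hmmlearn Python library on synthetic yearly pattern codes; the study does not name its software, and the real analysis uses far richer clinical inputs, so this only shows the shape of the computation.

```python
# Minimal HMM sketch over yearly multimorbidity-pattern codes.
# hmmlearn is an assumption; the data below are synthetic stand-ins.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
n_patterns = 12                   # 10 clusters + death + drop-out, as in the study
n_patients, n_years = 500, 5

# One pattern code per patient per year (stand-in for SIDIAP data).
sequences = rng.integers(0, n_patterns, size=(n_patients, n_years))
X = sequences.reshape(-1, 1)      # hmmlearn expects one stacked column
lengths = [n_years] * n_patients  # marks where each patient's sequence ends

model = hmm.CategoricalHMM(n_components=n_patterns, n_iter=100, random_state=0)
model.fit(X, lengths)

# Rows of transmat_ give year-to-year transition probabilities between
# hidden states; a dominant diagonal would mirror the stability reported above.
print(np.round(model.transmat_.diagonal(), 2))
```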

    Knowledge-based incremental induction of clinical algorithms

    Current approaches to the induction of medical procedural knowledge suffer from several drawbacks: the structures produced may not be explicit medical structures; they are based only on statistical measures that do not necessarily respect the medical criteria that can be essential to guarantee medically correct structures; or they are not prepared to deal with the incremental arrival of new data. In this thesis we propose a methodology to automatically induce medically correct clinical algorithms (CAs) from hospital databases. These CAs are represented according to the SDA knowledge model. The methodology takes relevant background knowledge into account and is able to work incrementally. It has been tested in the domains of hypertension, diabetes mellitus, and the comorbidity of both diseases. As a result, we propose a repository of background knowledge for these pathologies and provide the SDA diagrams obtained. Later analyses show that the results are medically correct and comprehensible when validated with healthcare professionals.
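    As a loose illustration only (the thesis's actual induction algorithm and SDA formalism are far richer), here is a sketch of the incremental idea: absorb patient episodes one at a time, filtering each transition through background knowledge so that only medically admissible structures accumulate evidence. All names and data are hypothetical.

```python
# Illustrative sketch, not the thesis's algorithm: incremental induction
# of a state-action graph, with background knowledge filtering transitions.
from collections import defaultdict

class Inducer:
    def __init__(self, valid_transitions):
        # Background knowledge: (state, action, next_state) triples
        # that are medically admissible.
        self.valid = valid_transitions
        self.counts = defaultdict(int)

    def update(self, episode):
        """Incrementally absorb one patient episode, a list of steps."""
        for step in episode:
            if step in self.valid:       # respect medical criteria
                self.counts[step] += 1

    def diagram(self, min_support=2):
        """Return induced edges with enough accumulated evidence."""
        return {s: c for s, c in self.counts.items() if c >= min_support}

# Hypothetical hypertension episodes: states are control statuses,
# actions are treatment adjustments.
bk = {("uncontrolled", "start_ACE_inhibitor", "controlled"),
      ("uncontrolled", "increase_dose", "controlled")}
inducer = Inducer(bk)
for ep in ([("uncontrolled", "start_ACE_inhibitor", "controlled")],
           [("uncontrolled", "start_ACE_inhibitor", "controlled")],
           [("uncontrolled", "bloodletting", "controlled")]):  # filtered out
    inducer.update(ep)
print(inducer.diagram())
```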

    Machine learning of structured and unstructured healthcare data

    The widespread adoption of Electronic Health Record (EHR) systems in healthcare institutions in the United States makes machine learning on large-scale, real-world clinical data feasible and affordable. Machine learning of healthcare data, or healthcare data analytics, has achieved numerous successes in various applications. However, many challenges remain for machine learning on both structured and unstructured healthcare data. Longitudinal structured clinical data (e.g., lab test results, diagnoses, and medications) span an enormous variety of categories, are collected at irregularly spaced visits, and are sparsely distributed. Studies analyzing longitudinal structured EHR data for tasks such as disease prediction and visualization are still limited. For unstructured clinical notes, existing studies mostly focus on disease prediction or cohort selection; studies mining clinical notes with the direct purpose of reducing costs for healthcare providers or institutions are limited. To fill these gaps, this dissertation addresses three research topics. The first is developing state-of-the-art predictive models to detect diabetic retinopathy from longitudinal structured EHR data: major deep-learning-based temporal models for disease prediction are studied, implemented, and evaluated, and experimental results on a large-scale dataset show that temporal deep learning models outperform non-temporal random forest models in terms of AUPRC and recall. The second is clustering temporal disease networks to visualize comorbidity progression: we propose a clustering technique to outline comorbidity progression phases as well as a new disease clustering method to simplify the visualization, and two case studies on Clostridioides difficile and stroke show the methods are effective. The third is clinical information extraction for medical billing: we propose a framework consisting of two methods, one rule-based and one deep-learning-based, to extract patient history information directly from clinical notes to facilitate Evaluation and Management Services (E/M) billing. Initial results of the two prototype systems on an annotated dataset are promising and point to potential improvements.
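    As an illustration of the kind of temporal model the first topic evaluates, here is a minimal PyTorch sketch of a GRU over visit sequences; the architecture, feature space, and the name VisitGRU are hypothetical stand-ins, not the dissertation's models.

```python
# Minimal sketch of a temporal model for disease prediction from
# longitudinal EHR visits; PyTorch and all dimensions are assumptions.
import torch
import torch.nn as nn

class VisitGRU(nn.Module):
    def __init__(self, n_codes=2000, emb_dim=64, hidden=128):
        super().__init__()
        # Each visit is a multi-hot vector over diagnosis/lab/medication codes.
        self.embed = nn.Linear(n_codes, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # risk score for the target disease

    def forward(self, visits):            # visits: (batch, n_visits, n_codes)
        x = torch.relu(self.embed(visits))
        _, h = self.gru(x)                # h: (1, batch, hidden), final state
        return self.head(h[-1]).squeeze(-1)

# Toy forward pass on synthetic data: 8 patients, 10 visits, 2000 codes.
model = VisitGRU()
visits = torch.bernoulli(torch.full((8, 10, 2000), 0.01))
probs = torch.sigmoid(model(visits))      # predicted risk per patient
print(probs.shape)                        # torch.Size([8])
```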

    Combining clustering, attribute selection, and ontological methods for semantic text classification

    With the exponential increase in the amount of textual data available on the Internet from sources such as social networks, blogs and forums, websites, e-mail, and online libraries, the use of artificial intelligence on digital platforms, including deep learning and pattern recognition methods, has become necessary so that this information can be exploited by all kinds of business models, market research, marketing plans, political campaigns, and strategic decision-making, in order to face competition and respond efficiently. The aim of this doctoral thesis was to develop a model combining clustering, attribute selection, and ontological methods for the semantic classification of text, structuring a methodology applicable to textual data sets that improves automatic text classification.
The model proposed in this thesis was developed following these specific objectives: review the state of the art on the topic; assemble a textual data set large enough for the application of the different data analysis techniques; develop a methodology for the semantic classification of textual data; and evaluate the results obtained. The methodology consists of nine stages: the first five are preprocessing, clustering, attribute selection, classification, and statistical testing, followed by four stages of ontological analysis (cluster validation, semantic analysis, interpretation, and representation of relations). We found that using SToWVector together with attribute selection via the MOES wrapper (search strategy) and NaiveBayesMultinomial (evaluator) with ACC (metric) yields better results with the NaiveBayesMultinomial classifier than with the other classification methods evaluated. In addition, the ENORA search method was applied and evaluated, proving to be an effective method for attribute selection in textual data. It was also possible to give meaning to the two clusters obtained, identifying a concept for each: cluster 1, EU-G20-G77-MEC, and cluster 2, the rest of the world. This made it possible to establish a direct relationship between the clusters.
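    To make the pipeline's shape concrete, here is a minimal scikit-learn sketch with stand-ins for the thesis's Weka-based tooling: TF-IDF in place of SToWVector, chi-squared filtering in place of the MOES/ENORA wrappers, and multinomial naive Bayes as the classifier. The corpus and cluster labels are toy data.

```python
# Sketch of the cluster-then-classify pipeline using scikit-learn
# substitutes for the thesis's Weka tooling; all data are toy.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

docs = ["eu trade summit policy", "g20 economic agreement",
        "local festival music", "regional sports league"]

# 1) Cluster the corpus to derive candidate class labels.
tfidf = TfidfVectorizer()
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    tfidf.fit_transform(docs))

# 2) Select attributes and classify, using the cluster ids as targets.
clf = make_pipeline(TfidfVectorizer(), SelectKBest(chi2, k=4), MultinomialNB())
clf.fit(docs, labels)
print(clf.predict(["g77 economic policy"]))
```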

    Recent Advances in Forensic Anthropological Methods and Research

    Forensic anthropology, while still young compared to other forensic science disciplines, adopts a wide array of methods from many disciplines for human skeletal identification in medico-legal and humanitarian contexts. The human skeleton is a dynamic tissue that can withstand the ravages of time given the right environment, and it may be the only evidence remaining in a forensic case, whether a week or decades old. An improved understanding of the intrinsic and extrinsic factors that modulate skeletal tissues allows researchers and practitioners to improve the accuracy and precision of identification methods, from establishing a biological profile (estimating age-at-death and population affinity), estimating time-since-death, using isotopes to geolocate unidentified decedents, using radiology for personal identification, and using histology to assess a live birth, to assessing traumatic injuries and much more.