    DEXTER: An end-to-end system to extract table contents from electronic medical health documents

    In this paper, we propose DEXTER, an end-to-end system to extract information from tables present in medical health documents, such as electronic health records (EHR) and explanations of benefits (EOB). DEXTER consists of four sub-system stages: i) table detection; ii) table type classification; iii) cell detection; and iv) cell content extraction. We propose a two-stage transfer-learning-based approach using the CDeC-Net architecture along with non-maximal suppression for table detection. We design a conventional computer-vision-based approach for table type classification and cell detection, using kernels parameterized by image size to detect rows and columns. Finally, we extract the text from the detected cells using the pre-existing OCR engine Tesseract. To evaluate our system, we manually annotated a sample of a real-world medical dataset (referred to as Meddata) consisting of documents with wide variations in appearance, covering different table structures such as bordered, partially bordered, borderless, and coloured tables. We experimentally show that DEXTER outperforms the commercially available Amazon Textract and Microsoft Azure Form Recognizer systems on the annotated real-world medical dataset.
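
    As an illustration of the kind of image-size-parameterized kernel detection and off-the-shelf OCR step described above, here is a minimal Python sketch using OpenCV and Tesseract. The kernel ratios, threshold settings, and helper names are assumptions for illustration, not DEXTER's actual implementation.

```python
# Minimal sketch of parameterized-kernel row/column line detection plus
# Tesseract OCR on a cell crop. Kernel length ratios, threshold parameters,
# and function names are assumptions, not DEXTER's implementation.
import cv2
import pytesseract

def detect_table_lines(gray_table_image):
    """Find horizontal and vertical rule masks with kernels scaled to image size."""
    h, w = gray_table_image.shape
    binary = cv2.adaptiveThreshold(
        ~gray_table_image, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
        cv2.THRESH_BINARY, 15, -2)

    # Kernel lengths are parameterized by the image dimensions (assumed ratios).
    horizontal_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (max(1, w // 30), 1))
    vertical_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, max(1, h // 30)))

    horizontal_lines = cv2.morphologyEx(binary, cv2.MORPH_OPEN, horizontal_kernel)
    vertical_lines = cv2.morphologyEx(binary, cv2.MORPH_OPEN, vertical_kernel)
    return horizontal_lines, vertical_lines

def ocr_cell(cell_image):
    """Extract text from a single detected cell image with Tesseract."""
    return pytesseract.image_to_string(cell_image, config="--psm 6").strip()
```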

    Using Medical Objects for Clinical Records Classification

    In this paper, medical objects are used as features to classify clinical records. Medical objects such as disease names, drug names, symptoms, and examination indicators are extracted using an Unstructured Information Management Architecture (UIMA) based system. The extracted medical objects are then compared against the "bag-of-words" representation as features of the clinical record in several classification algorithms. The results show that the precision of the classification using medical objects is better for all algorithms, suggesting that medical objects contribute a significant part of the semantics of a clinical record. Keywords: information extraction, healthcare informatics
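
    A minimal sketch of the comparison described above, using scikit-learn: bag-of-words features versus extracted medical objects as features for classifying clinical records. The toy records, labels, and dictionary-lookup extractor are illustrative stand-ins; the paper's UIMA-based extraction pipeline is not reproduced here.

```python
# Compare bag-of-words features with medical-object features on toy records.
# Records, labels, and the term dictionary are hypothetical placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

records = [
    "patient reports chest pain and shortness of breath, troponin elevated",
    "follow-up for type 2 diabetes, metformin continued, hba1c improved",
    "persistent cough and fever, chest x-ray ordered, started azithromycin",
    "routine diabetes check, insulin dose adjusted, glucose within range",
]
labels = ["cardiology", "endocrinology", "pulmonology", "endocrinology"]

MEDICAL_TERMS = {"chest pain", "shortness of breath", "troponin", "diabetes",
                 "metformin", "hba1c", "cough", "fever", "azithromycin",
                 "insulin", "glucose"}

def extract_medical_objects(text):
    # Stand-in for the UIMA-based extractor: keep only known disease/drug/symptom terms.
    return [term for term in MEDICAL_TERMS if term in text]

bow = CountVectorizer().fit_transform(records)                 # bag-of-words baseline
obj = CountVectorizer().fit_transform(
    [" ".join(extract_medical_objects(r)) for r in records])   # medical-object features

for name, X in [("bag-of-words", bow), ("medical objects", obj)]:
    clf = MultinomialNB().fit(X, labels)
    print(name, clf.score(X, labels))  # illustrative fit only, not an evaluation
```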

    Sensor-Based Human Activity Recognition Using Smartphones

    Providing precise information about the activity performed by a human and finding patterns in their behavior is a significant technical and computational task. Advancements in human activity recognition (HAR) systems can enable countless applications and solve problems in domains such as virtual reality, health and medicine, entertainment, and security. HAR has been an active field of research for more than a decade, but certain aspects need to be addressed to improve such systems and revolutionize the way humans interact with smartphones. This research provides a holistic view of human activity recognition system architecture and discusses various problems associated with its design. It further attempts to showcase the reduction in computational cost and the significant gains in accuracy achieved through feature selection. It also introduces the use of recurrent neural networks to learn features from long sequences of time-series data, which can contribute towards improving accuracy and reducing dependency on domain knowledge for feature extraction and engineering.
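
    A minimal sketch of the recurrent-network idea mentioned above, assuming windowed tri-axial accelerometer data and six activity classes, as is typical for smartphone HAR datasets; the layer sizes and hyperparameters are assumptions, not the configuration evaluated in this work.

```python
# Toy LSTM classifier over raw sensor windows; shapes and sizes are assumptions.
import numpy as np
import tensorflow as tf

WINDOW_LEN, CHANNELS, NUM_CLASSES = 128, 3, 6

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW_LEN, CHANNELS)),
    # The recurrent layer learns features directly from the raw time series,
    # reducing the need for hand-engineered features.
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Placeholder batch: windowed sensor data (num_windows, 128, 3) and integer labels.
X = np.random.randn(32, WINDOW_LEN, CHANNELS).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=32)
model.fit(X, y, epochs=1, batch_size=16, verbose=0)
```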

    Named Entity Recognition in Electronic Health Records Using Transfer Learning Bootstrapped Neural Networks

    Neural networks (NNs) have become the state of the art in many machine learning applications, especially in image and sound processing [1]. The same, although to a lesser extent [2,3], could be said of natural language processing (NLP) tasks, such as named entity recognition. However, the success of NNs remains dependent on the availability of large labelled datasets, which is a significant hurdle in many important applications. One such case is electronic health records (EHRs), which are arguably the largest source of medical data, most of which lies hidden in natural text [4,5]. Data access is difficult due to privacy concerns, and therefore annotated datasets are scarce. With scarce data, NNs will likely not be able to extract this hidden information with practical accuracy. In our study, we develop an approach that solves these problems for named entity recognition, obtaining a 94.6 F1 score on the I2B2 2009 Medical Extraction Challenge [6], 4.3 points above the architecture that won the competition. Beyond the official I2B2 challenge, we further achieve 82.4 F1 on extracting relationships between medical terms. To reach this state-of-the-art accuracy, our approach applies transfer learning to leverage datasets annotated for other I2B2 tasks, and designs and trains embeddings that especially benefit from such transfer. Comment: 11 pages, 4 figures, 8 tables
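
    A minimal sketch of the transfer-learning idea, not the authors' actual architecture or embeddings: pretrain a token tagger on a source annotation task, then reuse its embedding and encoder weights as initialization for the target NER model with a fresh output layer. Vocabulary size, dimensions, and label counts are placeholder assumptions.

```python
# Bootstrap a target NER tagger from a model pretrained on another annotated task.
# All sizes below are placeholders; this is not the paper's architecture.
import torch.nn as nn

class TaggerNN(nn.Module):
    def __init__(self, vocab_size=5000, emb_dim=100, hidden=128, num_labels=9):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, num_labels)

    def forward(self, token_ids):
        out, _ = self.encoder(self.embedding(token_ids))
        return self.classifier(out)  # per-token label scores

# 1) Pretrain on a source task (e.g. another annotated corpus); training loop omitted.
source_model = TaggerNN(num_labels=13)

# 2) Initialize the target NER model from the pretrained embeddings and encoder.
target_model = TaggerNN(num_labels=9)
target_model.embedding.load_state_dict(source_model.embedding.state_dict())
target_model.encoder.load_state_dict(source_model.encoder.state_dict())
# The classifier layer stays freshly initialized; the whole model is then
# fine-tuned on the scarce target annotations.
```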