24,711 research outputs found

    Big data analytics in healthcare: promise and potential

    Objective To describe the promise and potential of big data analytics in healthcare. Methods The paper describes the nascent field of big data analytics in healthcare, discusses its benefits, outlines an architectural framework and methodology, describes examples reported in the literature, briefly discusses the challenges, and offers conclusions. Results The paper provides a broad overview of big data analytics for healthcare researchers and practitioners. Conclusions Big data analytics in healthcare is evolving into a promising field for providing insight from very large data sets and improving outcomes while reducing costs. Its potential is great; however, challenges remain to be overcome.

    Cognitive Computing Creates Value In Healthcare and Shows Potential for Business Value

    This research paper examines cognitive computing and how businesses in healthcare may use cognitive systems to analyze big data to create a competitive advantage. It explains the underlying technologies, such as machine learning and natural language processing, and gives an overview of the technology driving the world's most popular cognitive computing system, IBM Watson. It examines case studies of businesses applying cognitive systems to derive value from big data, discusses how this may be used to develop business value and support strategic analysis, and touches on the challenges of cognitive computing. The paper concludes with lessons learned and directions for future research.

    Adoption of precision medicine: limitations and considerations

    Research is ongoing all over the world to identify the barriers and find effective solutions that accelerate the adoption of Precision Medicine (PM) in the healthcare industry. Yet there has been no valid and practical model to tackle the several challenges that have slowed the widespread uptake of this clinical practice. This study aimed to highlight the major limitations and considerations for implementing Precision Medicine. Two theories, Diffusion of Innovation and Socio-Technical theory, are employed to discuss the success indicators of PM adoption. Through this theoretical assessment, two key theoretical gaps are identified and related findings are discussed. Funding: FCT – Fundação para a Ciência e Tecnologia, within the project scope DSAIPA/DS/0084/201

    Health systems data interoperability and implementation

    Objective The objective of this study was to use machine learning and health standards to address the problem of clinical data interoperability across healthcare institutions. Addressing this problem has the potential to make clinical data comparable, searchable and exchangeable between healthcare providers. Data sources Structured and unstructured data were used to conduct the experiments in this study. The data were collected from two disparate sources, namely MIMIC-III and NHANES. The MIMIC-III database stores data from two electronic health record systems, CareVue and MetaVision. The data in these systems were not recorded with the same standards and were therefore not comparable: some values conflicted, one system stored an abbreviation of a clinical concept while the other stored the full concept name, and some attributes contained missing information. These issues make this data a good candidate for this study. From the identified sources, laboratory, physical examination, vital signs, and behavioural data were used. Methods This research employed the CRISP-DM framework as a guideline for all stages of data mining. Two sets of classification experiments were conducted, one for structured data and the other for unstructured data. In the first experiment, edit distance, TF-IDF and Jaro-Winkler were used to calculate similarity weights between two datasets, one coded with the LOINC terminology standard and one uncoded. Similar sets of data were classified as matches while dissimilar sets were classified as non-matches. The Soundex indexing method was then used to reduce the number of potential comparisons. Thereafter, three classification algorithms were trained and tested, and the performance of each was evaluated through the ROC curve.
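A minimal sketch of this matching step, assuming plain string-valued concept names: Soundex codes block the candidate pairs, and an edit-distance threshold classifies matches. The `edit_distance`, `soundex` and `match` helpers are illustrative stand-ins, not the study's implementation; Jaro-Winkler and TF-IDF weighting would slot in analogously.

```python
# Illustrative sketch: match concept names from a LOINC-coded dataset
# against uncoded names, using Soundex blocking and edit-distance similarity.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def soundex(word: str) -> str:
    """Standard four-character Soundex code, used here as a blocking key."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    word = word.lower()
    out = word[0].upper()
    last = codes.get(word[0], "")
    for ch in word[1:]:
        code = codes.get(ch, "")
        if code and code != last:
            out += code
        if ch not in "hw":          # h and w do not separate duplicate codes
            last = code
    return (out + "000")[:4]

def match(coded, uncoded, max_dist=3):
    """Compare only pairs sharing a Soundex block; classify by distance."""
    pairs = []
    for c in coded:
        for u in uncoded:
            if soundex(c) == soundex(u) and edit_distance(c, u) <= max_dist:
                pairs.append((c, u))
    return pairs
```

Blocking matters here because naive pairwise comparison is quadratic in the dataset sizes; restricting comparisons to shared phonetic blocks is what keeps the linkage tractable.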
The second experiment aimed to extract patients' smoking-status information from a clinical corpus. A sequence-oriented classification algorithm, CRF, was used to learn the related concepts from the corpus, with word embedding, random indexing, and word-shape features used to capture its meaning. Results Having optimized all the model's parameters through v-fold cross-validation on a sampled training set of structured data, only 8 of the 24 features were selected for the classification task. RapidMiner was used to train and test all the classification algorithms. In the final run of the classification process, the last contenders were SVM and the decision tree classifier; SVM yielded an accuracy of 92.5% with tuned parameters. These results were obtained after more relevant features were identified, having observed that the classifiers were biased on the initial data. The unstructured data, in turn, was annotated via the UIMA Ruta scripting language, then trained through CRFSuite, which comes with the CLAMP toolkit. The CRF classifier obtained an F-measure of 94.8% for the "nonsmoker" class, 83.0% for "currentsmoker", and 65.7% for "pastsmoker". It was observed that as more relevant data was added, the performance of the classifier improved. The results point to FHIR resources as a means of exchanging clinical data between healthcare institutions: FHIR is free, uses profiles to extend coding standards, a RESTful API to exchange messages, and JSON, XML and Turtle for representing messages. Data could be stored in JSON format on a NoSQL database such as CouchDB, which makes it available for further post-extraction exploration.
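The JSON interchange idea can be sketched as a FHIR Observation resource serialized for a document store. The field names follow the FHIR Observation resource, but the patient reference, LOINC code and measurement below are invented for illustration, not taken from the study's data.

```python
import json

# Illustrative sketch: a laboratory result expressed as a FHIR Observation
# resource in JSON, ready for a document database such as CouchDB.
# The subject reference, code and values are invented for illustration.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",       # profile-extensible coding
            "code": "2345-7",
            "display": "Glucose [Mass/volume] in Serum or Plasma",
        }]
    },
    "subject": {"reference": "Patient/example"},
    "valueQuantity": {"value": 113, "unit": "mg/dL"},
}

# CouchDB stores each document as JSON, so the serialized resource can be
# PUT to the database as-is and queried later for post-extraction analysis.
doc = json.dumps(observation, indent=2)
```

Because the coding `system` travels inside the message, a receiving institution can resolve the concept against the same terminology standard instead of guessing from a local abbreviation, which is exactly the comparability gap the study describes.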
Conclusion This study has provided a method for learning a clinical coding standard with a computer algorithm and then applying that learned standard to unstandardized data, so that the data become exchangeable, comparable and searchable, ultimately achieving data interoperability. Although this study was applied on a limited scale, future work would explore the standardization of patients' long-lived data from multiple sources using the SHARPn open-source tools and data-scaling platforms.