136 research outputs found

    Med-e-Tel 2016


    Holistic System Design for Distributed National eHealth Services


    eHealth in Chronic Diseases

    This book provides a review of the management of chronic diseases (evaluation and treatment) through eHealth. Studies that examine how eHealth can help to prevent, evaluate, or treat chronic diseases and their outcomes are included.

    Validation of design artefacts for blockchain-enabled precision healthcare as a service

    Healthcare systems around the globe are currently experiencing a rapid wave of digital disruption. Current research applies emerging technologies such as Big Data (BD), Artificial Intelligence (AI), Machine Learning (ML), Deep Learning (DL), Augmented Reality (AR), Virtual Reality (VR), Digital Twin (DT), Wearable Sensor (WS), Blockchain (BC) and Smart Contracts (SC) to contact tracing, tracking, drug discovery, care support and delivery, and vaccine distribution, management, and delivery. These disruptive innovations have made it feasible for the healthcare industry to provide personalised digital health solutions and services and to ensure sustainability in healthcare. Precision Healthcare (PHC) is a recent addition to digital healthcare that can support personalised needs; it focuses on supporting and providing precise healthcare delivery. Despite such potential, recent studies show that PHC is ineffectual due to low patient adoption, and anecdotal evidence suggests that people refrain from adopting PHC out of distrust. This thesis presents a BC-enabled PHC ecosystem that addresses these ongoing issues and challenges regarding low opt-in. The designed ecosystem also incorporates emerging information technologies with the potential to address the need for user-centricity, data privacy and security, accountability, transparency, interoperability, and scalability in a sustainable PHC ecosystem. The research adopts Soft Systems Methodology (SSM) to construct and validate the design artefact and sub-artefacts of the proposed PHC ecosystem that address the low opt-in problem. Following a comprehensive review of the scholarly literature, which resulted in a draft set of design principles and rules, eighteen design-refinement interviews were conducted to develop the artefact and sub-artefacts into design specifications.
The artefact and sub-artefacts were validated through a design-validation workshop, where the designed ecosystem was presented to a Delphi panel of twenty-two health-industry actors. The key research finding was the need for data-driven, secure, transparent, scalable, individualised healthcare services to achieve sustainability in healthcare. This includes explainable AI, data standards for biosensor devices, affordable BC solutions for storage, privacy and security policy, interoperability, and user-centricity, which prompt further research and industry application. The proposed ecosystem is potentially effective in building trust, encouraging patients' active engagement with real-world implementation, and contributing to sustainability in healthcare.

    Preface


    Augmented Reality and Health Informatics: A Study based on Bibliometric and Content Analysis of Scholarly Communication and Social Media

    Healthcare outcomes have been shown to improve when technology is used as part of patient care. Health Informatics (HI) is a multidisciplinary study of the design, development, adoption, and application of IT-based innovations in healthcare service delivery, management, and planning. Augmented Reality (AR) is an emerging technology that enhances the user’s perception of, and interaction with, the real world. This study aims to illuminate the intersection of the fields of AR and HI. The domains of AR and HI are by themselves areas of significant research; however, there is a scarcity of research on augmented reality as it applies to health informatics. Since both scholarly research and social media communication have contributed to the domains of AR and HI, bibliometric and content analysis were employed on both to investigate the salient features and research fronts of the field. The study used Scopus data (7360 scholarly publications) to identify bibliometric features and to perform content analysis of the identified research. The Altmetric database (an aggregator of data sources) was used to determine the social media communication for this field. The findings from this study included publication volumes, top authors, affiliations, subject areas and geographical locations from scholarly publications as well as from a social media perspective. The 200 most highly cited documents were used to determine the research fronts in scholarly publications, and content-analysis techniques were employed on the publication abstracts as a secondary means of determining the field’s research themes. The study found that the research frontiers in scholarly communication included emerging AR technologies such as tracking and computer vision, along with surgical and learning applications. There was a commonality between social media and scholarly communication themes from an applications perspective.
In addition, social media themes included applications of AR in healthcare delivery, clinical studies and mental disorders. Europe as a geographic region dominates the research field with 50% of the articles, while North America and Asia tie for second with 20% each. Publication volumes show a steep upward slope, indicating continued research. Social media communication is still in its infancy in terms of data extraction; however, aggregators like Altmetric are helping to enhance the outcomes. The findings revealed that frontier research in AR has made an impact on the surgical and learning applications of HI and has the potential for other applications as new technologies are adopted.
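The content-analysis step described in the abstract above, deriving research themes from publication abstracts, is at heart term counting over a corpus. A minimal sketch, with a hypothetical stop-word list standing in for whatever preprocessing the study actually used:

```python
from collections import Counter
import re

# Illustrative stop-word list; the study's real preprocessing is not specified.
STOP = {"the", "of", "and", "in", "a", "to", "for", "on", "with", "is"}

def top_terms(abstracts, n=5):
    """Count non-stop-word terms across a list of abstract strings and
    return the n most frequent (term, count) pairs."""
    counts = Counter()
    for text in abstracts:
        counts.update(w for w in re.findall(r"[a-z]+", text.lower())
                      if w not in STOP)
    return counts.most_common(n)

themes = top_terms(["Augmented reality in surgical training",
                    "Tracking and computer vision for augmented reality"])
```

Real theme extraction would go further (stemming, n-grams, topic models), but frequency counts like these are the usual first pass.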

    The Application of Computer Techniques to ECG Interpretation

    This book presents some of the latest available information on automated ECG analysis, written by many of the leading researchers in the field. It contains a historical introduction, an outline of the latest international standards for signal processing and communications, and then an exciting variety of studies on electrophysiological modelling, ECG imaging, artificial intelligence applied to resting and ambulatory ECGs, body surface mapping, big data in ECG-based prediction, enhanced reliability of patient monitoring, and atrial abnormalities on the ECG. It provides an extremely valuable contribution to the field.

    A novel framework for predicting patients at risk of readmission

    Uncertainty in decision-making about patients’ risk of re-admission arises from non-uniform data and a lack of knowledge of health-system variables. Knowledge of the impact of risk factors will support clinicians in decision-making and in reducing the number of patients re-admitted to hospital. Traditional approaches cannot account for the uncertain nature of the risk of hospital re-admission, and further problems arise from the large amount of uncertain information. Patients can be at high, medium or low risk of re-admission, and these strata have ill-defined boundaries. We believe that our model, which adapts the fuzzy regression method, offers a novel approach to handling uncertain data and uncertain relationships between health-system variables and the risk of re-admission. Because the boundaries between risk bands are ill-defined, this approach allows clinicians to target individuals at those boundaries; targeting such individuals and providing them with proper care may move patients from the high-risk to the low-risk band. In developing this algorithm, we aimed to help potential users assess patients at various risk-score thresholds and avoid readmission of high-risk patients through proper interventions. A model for predicting patients at high risk of re-admission will enable interventions to be targeted before costs have been incurred and health status has deteriorated. A risk-score cut-off level would flag patients and result in net savings where per-patient intervention costs are much higher. Preventing hospital re-admissions is important for patients, and our algorithm may also impact hospital income.
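The ill-defined boundaries between risk bands described above are exactly what fuzzy membership functions model: a score near a boundary belongs partly to two bands at once. A minimal sketch using triangular memberships; the band names, score scale and cut-offs are purely illustrative, not the thesis's actual fuzzy regression model:

```python
def tri(x, a, b, c):
    """Triangular membership: rises from a, peaks at b, falls to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def risk_memberships(score):
    """Degree of membership in each re-admission risk band for a 0-100
    risk score. All cut-offs here are hypothetical."""
    return {
        "low":    tri(score, -1, 0, 45),
        "medium": tri(score, 30, 50, 70),
        "high":   tri(score, 55, 100, 101),
    }

# A patient at score 60 belongs mostly to "medium" and partly to "high":
# such boundary cases are the ones the abstract suggests targeting.
memberships = risk_memberships(60)
```

A crisp threshold would put this patient in exactly one band; the fuzzy view keeps the boundary cases visible for targeted intervention.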

    Health systems data interoperability and implementation

    Objective The objective of this study was to use machine learning and health standards to address the problem of clinical data interoperability across healthcare institutions. Addressing this problem has the potential to make clinical data comparable, searchable and exchangeable between healthcare providers. Data sources Structured and unstructured data have been used to conduct the experiments in this study. The data were collected from two disparate sources, namely MIMIC-III and NHANES. The MIMIC-III database stores data from two electronic health record systems, CareVue and MetaVision. The data stored in these systems were not recorded with the same standards and were therefore not comparable: some values conflicted, one system would store an abbreviation of a clinical concept while the other stored the full concept name, and some attributes contained missing information. These issues make this data a good candidate for the study. From the identified sources, laboratory, physical examination, vital signs, and behavioural data were used. Methods This research employed the CRISP-DM framework as a guideline for all stages of data mining. Two sets of classification experiments were conducted, one for the classification of structured data and the other for unstructured data. For the first experiment, Edit distance, TF-IDF and Jaro-Winkler were used to calculate similarity weights between two datasets, one coded with the LOINC terminology standard and one not coded. Similar sets of data were classified as matches, while dissimilar sets were classified as non-matching. The Soundex indexing method was then used to reduce the number of potential comparisons. Thereafter, three classification algorithms were trained and tested, and the performance of each was evaluated through the ROC curve.
The second experiment was aimed at extracting patients’ smoking-status information from a clinical corpus. A sequence-oriented classification algorithm, CRF, was used to learn related concepts from the given corpus, with word embedding, random indexing, and word-shape features used to capture meaning. Results Having optimized the model’s parameters through v-fold cross-validation on a sampled training set of structured data, only 8 of 24 features were selected for the classification task. RapidMiner was used to train and test all the classification algorithms. On the final run of the classification process, the last contenders were SVM and the decision-tree classifier; SVM yielded an accuracy of 92.5% once its parameters were tuned. These results were obtained after more relevant features were identified, it having been observed that the classifiers were biased on the initial data. The unstructured data were annotated via the UIMA Ruta scripting language, then trained through CRFSuite, which comes with the CLAMP toolkit. The CRF classifier obtained an F-measure of 94.8% for the “nonsmoker” class, 83.0% for “currentsmoker”, and 65.7% for “pastsmoker”. It was observed that as more relevant data were added, the performance of the classifier improved. The results also point to the need for FHIR resources for exchanging clinical data between healthcare institutions. FHIR is free; it uses profiles to extend coding standards, a RESTful API to exchange messages, and JSON, XML and Turtle to represent messages. Data could be stored in JSON format in a NoSQL database such as CouchDB, making it available for further post-extraction exploration.
Conclusion This study has provided a method for a computer algorithm to learn a clinical coding standard and then apply it to unstandardized data, so that such data become easily exchangeable, comparable and searchable, ultimately achieving data interoperability. Although this study was applied on a limited scale, future work would explore the standardization of patients’ long-lived data from multiple sources using the SHARPn open-source tools and data-scaling platforms.
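The structured-data matching pipeline in the abstract above (similarity weighting plus Soundex blocking) could be sketched as follows. This is a toy illustration, not the study's code: `difflib`'s similarity ratio stands in for the Edit-distance/TF-IDF/Jaro-Winkler weights, and the match threshold is arbitrary:

```python
import difflib

def soundex(word):
    """Classic Soundex code: first letter plus up to three digits."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    word = word.lower()
    out, prev = word[0].upper(), codes.get(word[0], "")
    for ch in word[1:]:
        code = codes.get(ch, "")
        if code and code != prev:
            out += code
        if ch not in "hw":        # h and w do not reset the previous code
            prev = code
    return (out + "000")[:4]

def match_concepts(coded, uncoded, threshold=0.85):
    """Block candidate pairs by Soundex code, then score each pair with a
    string-similarity ratio; pairs at or above threshold are matches."""
    blocks = {}
    for name in coded:
        blocks.setdefault(soundex(name), []).append(name)
    matches = []
    for name in uncoded:
        for cand in blocks.get(soundex(name), []):  # blocking cuts comparisons
            w = difflib.SequenceMatcher(None, name.lower(),
                                        cand.lower()).ratio()
            if w >= threshold:
                matches.append((name, cand, round(w, 2)))
    return matches

pairs = match_concepts(["hemoglobin"], ["haemoglobin"])
```

Blocking means only names sharing a Soundex code are compared, which is the "reduce the number of potential comparisons" step; the real study additionally fed the similarity weights to trained classifiers rather than a fixed threshold.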
