
    Knowledge management and Semantic Technology in the Health Care Revolution: Health 3.0 Model

    Currently, the exploration, improvement, and application of knowledge management and semantic technologies in health care are undergoing a revolution from Health 2.0 to Health 3.0. But what exactly are knowledge management and semantic technologies, and how can they improve a healthcare system? This study reviews what constitutes a Health 3.0 system and identifies key factors in the health care system. First, the study analyzes the Semantic Web, the definitions of Health 2.0 and Health 3.0, and new models for linked data, covering: (1) the Semantic Web and linked data graphs; (2) the Semantic Web and healthcare information challenges; OWL and linked knowledge; the path from linked data to linked knowledge; consistent knowledge representation; and the Health 3.0 system. Second, the research analyzes two case studies of Health 3.0 and summarizes six key factors that constitute a Health 3.0 system. Finally, the study recommends that applying knowledge management and semantic technologies to the Health 3.0 health care model requires cooperation among emergency care, insurance companies, hospitals, pharmacies, government, specialists, academic researchers, and customers (patients).
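
    The linked-data idea in this abstract is easiest to see in miniature. Below is a minimal sketch, in Python with the rdflib library, of how patient, prescription, pharmacy, and insurer records could be linked through shared URIs in an RDF graph; the http://example.org/health/ vocabulary and all resource names are illustrative assumptions, not terms from the paper.

        # Minimal linked-data sketch (Python + rdflib). The URIs, classes, and
        # properties under http://example.org/health/ are illustrative
        # assumptions, not vocabulary defined in the paper.
        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import RDF

        EX = Namespace("http://example.org/health/")
        g = Graph()
        g.bind("ex", EX)

        patient = EX["patient/123"]
        prescription = EX["prescription/456"]

        g.add((patient, RDF.type, EX.Patient))
        g.add((patient, EX.hasName, Literal("Jane Doe")))
        g.add((prescription, RDF.type, EX.Prescription))
        g.add((prescription, EX.prescribedTo, patient))            # hospital record
        g.add((prescription, EX.dispensedBy, EX["pharmacy/789"]))  # pharmacy record
        g.add((prescription, EX.coveredBy, EX["insurer/001"]))     # insurer record

        print(g.serialize(format="turtle"))

    Because every party refers to the same patient URI, each stakeholder can attach its own triples without a central schema change; OWL axioms over classes such as EX.Patient would then lift this linked data toward the linked knowledge the paper discusses.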

    Organizing knowledge to enable personalization of medicine in cancer

    Interpretation of the clinical significance of genomic alterations remains the most severe bottleneck preventing the realization of personalized medicine in cancer. We propose a knowledge commons to facilitate collaborative contributions and open discussion of clinical decision-making based on genomic events in cancer.
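
    The abstract proposes a knowledge commons without fixing a schema; the following is a hypothetical Python sketch of what a single variant-interpretation record in such a commons might hold. All field names are assumptions for illustration only.

        # Hypothetical record for a variant-interpretation knowledge commons.
        # Field names are illustrative; the paper does not define this schema.
        from dataclasses import dataclass, field

        @dataclass
        class VariantInterpretation:
            gene: str                   # e.g., "BRAF"
            alteration: str             # e.g., "V600E"
            disease: str                # tumor type the assertion applies to
            clinical_significance: str  # the claimed clinical meaning
            evidence_level: str         # e.g., "approved therapy", "case report"
            citations: list[str] = field(default_factory=list)  # supporting PubMed IDs

        entry = VariantInterpretation(
            gene="BRAF", alteration="V600E", disease="melanoma",
            clinical_significance="predicts response to BRAF inhibitors",
            evidence_level="approved therapy",
        )
        print(entry)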

    Collect, measure, repeat: Reliability factors for responsible AI data collection

    The rapid entry of machine learning approaches into our daily activities and high-stakes domains demands transparency and scrutiny of their fairness and reliability. To help gauge machine learning models' robustness, research typically focuses on the massive datasets used for their deployment, e.g., creating and maintaining documentation to understand their origin, process of development, and ethical considerations. However, data collection for AI is still typically a one-off practice, and oftentimes datasets collected for a certain purpose or application are reused for a different problem. Additionally, dataset annotations may not be representative over time, contain ambiguous or erroneous annotations, or be unable to generalize across domains. Recent research has shown these practices might lead to unfair, biased, or inaccurate outcomes. We argue that data collection for AI should be performed in a responsible manner, where the quality of the data is thoroughly scrutinized and measured through a systematic set of appropriate metrics. In this paper, we propose a Responsible AI (RAI) methodology designed to guide the data collection with a set of metrics for an iterative in-depth analysis of the factors influencing the quality and reliability of the generated data. We propose a granular set of measurements to inform on the internal reliability of a dataset and its external stability over time. We validate our approach across nine existing datasets and annotation tasks and four input modalities. This approach impacts the assessment of data robustness used in real-world AI applications, where diversity of users and content is eminent. Furthermore, it deals with fairness and accountability aspects in data collection by providing systematic and transparent quality analysis for data collections.
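
    The abstract names internal reliability and external stability but does not spell out the metrics. As one plausible instance, the sketch below (plain Python) computes Cohen's kappa between two annotators and compares it across two hypothetical annotation rounds as a crude stability check; the data and the choice of kappa are assumptions, not the paper's actual measurement set.

        # One plausible internal-reliability metric: Cohen's kappa between two
        # annotators, compared across two annotation rounds as a crude check
        # of stability over time.
        from collections import Counter

        def cohens_kappa(labels_a, labels_b):
            """Chance-corrected agreement between two annotators on the same items."""
            n = len(labels_a)
            observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
            freq_a, freq_b = Counter(labels_a), Counter(labels_b)
            expected = sum(freq_a[l] * freq_b[l] for l in freq_a) / (n * n)
            return (observed - expected) / (1 - expected)

        # Hypothetical labels from two collection rounds; a drop in kappa
        # between rounds would flag the annotations as unstable over time.
        round1 = (["pos", "pos", "neg", "neg", "pos", "neg"],
                  ["pos", "neg", "neg", "neg", "pos", "neg"])
        round2 = (["pos", "neg", "neg", "pos", "pos", "neg"],
                  ["neg", "neg", "pos", "pos", "neg", "neg"])
        print("round 1 kappa:", round(cohens_kappa(*round1), 2))  # 0.67
        print("round 2 kappa:", round(cohens_kappa(*round2), 2))  # 0.0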

    Surgical Data Science - from Concepts toward Clinical Translation

    Recent developments in data science in general and machine learning in particular have transformed the way experts envision the future of surgery. Surgical Data Science (SDS) is a new research field that aims to improve the quality of interventional healthcare through the capture, organization, analysis, and modeling of data. While an increasing number of data-driven approaches and clinical applications have been studied in the fields of radiological and clinical data science, translational success stories are still lacking in surgery. In this publication, we shed light on the underlying reasons and provide a roadmap for future advances in the field. Based on an international workshop involving leading researchers in the field of SDS, we review current practice, key achievements and initiatives as well as available standards and tools for a number of topics relevant to the field, namely (1) infrastructure for data acquisition, storage and access in the presence of regulatory constraints, (2) data annotation and sharing and (3) data analytics. We further complement this technical perspective with (4) a review of currently available SDS products and the translational progress from academia and (5) a roadmap for faster clinical translation and exploitation of the full potential of SDS, based on an international multi-round Delphi process.

    Closing Information Gaps with Need-driven Knowledge Sharing

    Systems for asynchronous knowledge sharing – such as intranets, wikis, or file servers – often suffer from a lack of user contributions. A main reason is that information providers are decoupled from information seekers and are therefore only dimly aware of their information needs. Central questions of knowledge management are thus which knowledge is particularly valuable and by what means knowledge holders can be motivated to share it. This work develops the approach of need-driven knowledge sharing (NKS), which consists of three elements. First, indicators of information need – in particular search queries – are collected, and their aggregation yields a continuous forecast of the organizational information need (OIN). By comparing this forecast with the information available in personal and shared information spaces, organizational information gaps (OIG) are identified, which point to missing information. These gaps are made transparent through so-called mediation services and mediation spaces, which help raise awareness of organizational information needs and steer knowledge sharing. The concrete realization of NKS is illustrated by three different applications, all of which build on established knowledge management systems. Inverse Search is a tool that suggests documents from a knowledge holder's personal information space for sharing in order to close organizational information gaps. Woogle extends conventional wiki systems with control instruments for detecting and prioritizing missing information, so that the wiki content can evolve in a demand-oriented way. Similarly, Semantic Need, an extension for Semantic MediaWiki, steers the capture of structured, semantic data based on information needs expressed as structured queries. The implementation and evaluation of the three tools show that need-driven knowledge sharing is technically feasible and can be an important complement to knowledge management. Moreover, the concept of mediation services and mediation spaces provides a framework for analyzing and designing tools according to the NKS principles. Finally, the approach presented here also offers impulses for the further development of Internet services and infrastructures such as Wikipedia or the Semantic Web.
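
    The OIN/OIG pipeline described above can be sketched in a few lines of Python: aggregate search queries into a demand profile, match them against the available documents, and flag demanded-but-unmatched queries as candidate information gaps. The naive keyword-containment test and the min_demand threshold below are stand-ins for the real matching and prioritization logic, which the thesis does not reduce to this form.

        # Toy sketch of need-driven gap detection (NKS terms in comments).
        from collections import Counter

        def information_gaps(search_queries, documents, min_demand=2):
            """Aggregate queries into a demand profile (the OIN) and report
            queries no document covers (candidate OIGs). Keyword containment
            stands in for the real matching logic."""
            demand = Counter(q.lower() for q in search_queries)
            gaps = {}
            for query, count in demand.items():
                if count < min_demand:
                    continue  # too little demand to prioritize
                hit = any(all(term in doc.lower() for term in query.split())
                          for doc in documents)
                if not hit:
                    gaps[query] = count  # surface in a mediation space
            return gaps

        queries = ["vpn setup", "vpn setup", "expense report",
                   "vpn setup", "expense report"]
        docs = ["How to file an expense report", "Office floor plan"]
        print(information_gaps(queries, docs))  # {'vpn setup': 3}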

    On the Readiness of Scientific Data for a Fair and Transparent Use in Machine Learning

    To ensure the fairness and trustworthiness of machine learning (ML) systems, recent legislative initiatives and relevant research in the ML community have pointed out the need to document the data used to train ML models. In addition, data-sharing practices in many scientific domains have evolved in recent years for reproducibility purposes. The adoption of these practices by academic institutions has encouraged researchers to publish their data and technical documentation in peer-reviewed publications such as data papers. In this study, we analyze how well this scientific data documentation meets the needs of the ML community and regulatory bodies for use in ML technologies. We examine a sample of 4041 data papers from different domains, assessing their completeness, their coverage of the requested dimensions, and trends in recent years, with special emphasis on the most and least documented dimensions. Based on the results, we propose a set of recommendation guidelines for data creators and scientific data publishers to increase the readiness of their data for transparent and fair use in ML technologies.
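
    A toy version of the completeness assessment may help make the method concrete: score each data paper by the fraction of required documentation dimensions it covers. The dimension names in this Python sketch are illustrative assumptions, not the study's actual rubric.

        # Hypothetical completeness check for a data paper's documentation.
        # Dimension names are illustrative; the study's rubric is richer.
        DIMENSIONS = {"collection_process", "annotation_process", "provenance",
                      "uses", "distribution", "maintenance", "social_concerns"}

        def completeness(documented: set) -> float:
            """Fraction of required dimensions a data paper covers."""
            return len(documented & DIMENSIONS) / len(DIMENSIONS)

        paper = {"collection_process", "provenance", "distribution"}
        print(f"completeness: {completeness(paper):.0%}")  # completeness: 43%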