
    Conceptual graph-based knowledge representation for supporting reasoning in African traditional medicine

    Although African patients use conventional (modern) and traditional healthcare simultaneously, it is estimated that 80% of people rely on African traditional medicine (ATM). ATM comprises medical activities stemming from practices, customs, and traditions that are integral to distinctive African cultures. It is based mainly on the oral transfer of knowledge, with the attendant risk of losing critical knowledge, and practices differ according to the region and the availability of medicinal plants. It is therefore necessary to compile the tacit, disseminated, and complex knowledge of various Tradi-Practitioners (TP) in order to determine interesting patterns for treating a given disease. Knowledge engineering methods for traditional medicine are useful to suitably model complex information needs, formalize the knowledge of domain experts, and highlight effective practices for their integration into conventional medicine. The work described in this paper presents an approach that addresses two issues. First, it proposes a formal representation model of ATM knowledge and practices to facilitate their sharing and reuse. Second, it provides a visual reasoning mechanism for selecting the best available procedures and medicinal plants to treat diseases. The approach is based on the Delphi method for capturing knowledge from various experts, which requires reaching a consensus among them. Conceptual graph formalism is used to model ATM knowledge with visual reasoning capabilities and processes. Nested conceptual graphs are used to visually express the semantic meaning of Computational Tree Logic (CTL) constructs, which are useful for the formal specification of temporal properties of ATM domain knowledge. Our approach has the advantage of mitigating knowledge loss, with conceptual development assistance to improve both the quality of ATM care (medical diagnosis and therapeutics) and patient safety (drug monitoring).
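As a rough illustration of this kind of representation (not the paper's exact formalism, which uses nested conceptual graphs with CTL-based temporal constructs), the Python sketch below encodes an ATM treatment fact as typed concepts linked by labeled relations and runs a simple projection-style query; all concept types, referents, and relation labels here are hypothetical.

```python
# Hypothetical conceptual-graph encoding of an ATM fact such as
# "a decoction of Artemisia annua treats malaria".
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Concept:
    type: str      # e.g. "Disease", "Plant", "Procedure"
    referent: str  # e.g. "malaria", "Artemisia annua"

@dataclass(frozen=True)
class Relation:
    label: str     # e.g. "treats", "uses"
    args: tuple    # ordered Concept arguments

@dataclass
class ConceptualGraph:
    concepts: set = field(default_factory=set)
    relations: list = field(default_factory=list)

    def add(self, label, *concepts):
        self.concepts.update(concepts)
        self.relations.append(Relation(label, concepts))

g = ConceptualGraph()
plant = Concept("Plant", "Artemisia annua")
disease = Concept("Disease", "malaria")
procedure = Concept("Procedure", "decoction")
g.add("treats", procedure, disease)
g.add("uses", procedure, plant)

# Projection-style query: which procedures are linked to malaria?
for r in g.relations:
    if r.label == "treats" and r.args[1].referent == "malaria":
        print(r.args[0])
```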

    Approximate Data Mining Techniques on Clinical Data

    The past two decades have witnessed an explosion in the number of medical and healthcare datasets available to researchers and healthcare professionals. This wealth of data calls for appropriate data mining techniques and tools that can automatically extract relevant information and thereby provide insights into the clinical behaviors and processes captured by the data. Since these tools should support the decision-making activities of medical experts, the extracted information must be represented in a human-friendly way, that is, in a concise and easy-to-understand form. To this purpose, we propose a new framework that collects several newly proposed mining techniques and tools. These techniques focus on two main aspects: the temporal one and the predictive one. All of them were applied to clinical data, in particular ICU data from the MIMIC-III database, which demonstrated the flexibility of the framework in retrieving different kinds of outcomes from the same overall dataset. The first two techniques rely on the concept of Approximate Temporal Functional Dependencies (ATFDs). ATFDs, with their suitable treatment of temporal information, have been proposed as a methodological tool for mining clinical data. An example of the knowledge derivable through such dependencies is "within 15 days, patients with the same diagnosis and the same therapy usually receive the same daily amount of drug". However, current ATFD models do not analyze the temporal evolution of the data, as in "for most patients with the same diagnosis, the same drug is prescribed after the same symptom". To this end, we propose a new kind of ATFD called Approximate Pure Temporally Evolving Functional Dependencies (APEFDs). Another limitation of such dependencies is that they cannot deal with quantitative data when some tolerance is allowed for numerical values. This limitation arises in particular in clinical data warehouses, where analysis and mining have to consider one or more quantitative measures (such as lab test results and vital signs) with respect to multiple dimensional (alphanumeric) attributes (such as patient, hospital, physician, and diagnosis) and one or more time dimensions (such as the day since hospitalization and the calendar date). For this scenario, we introduce a further kind of ATFD, named Multi-Approximate Temporal Functional Dependency (MATFD), which considers dependencies between dimensions and quantitative measures in temporal clinical data. These new dependencies may provide knowledge such as "within 15 days, patients with the same diagnosis and the same therapy receive a daily amount of drug within a fixed range". The remaining techniques are based on pattern mining, which has also been proposed as a methodological tool for mining clinical data. However, many methods proposed so far focus on mining temporal rules that describe relationships between data sequences or instantaneous events, without considering the presence of more complex temporal patterns in the dataset. Such patterns, for example the trend of a particular vital sign, are often very relevant for clinicians. Moreover, it is of great interest to discover whether some kind of event, such as a drug administration, is capable of changing these trends, and how.
To this end, we propose a new kind of temporal pattern, called Trend-Event Patterns (TEPs), which focuses on events and their influence on trends retrieved from measures such as vital signs. With TEPs we can express concepts such as "the administration of paracetamol to a patient with an increasing temperature leads to a decreasing temperature trend after the administration occurs". We also analyze another interesting pattern mining technique that includes prediction. This technique discovers a compact set of patterns that aim to describe the condition (or class) of interest. Our framework relies on a classification model that considers and combines various predictive pattern candidates and selects only those that are important for improving the overall class prediction performance. We show that our classification approach achieves a significant reduction in the number of extracted patterns, compared to state-of-the-art methods based on minimal predictive pattern mining, while preserving the overall classification accuracy of the model. For each technique described above, we developed a tool to retrieve its kind of rule. All results were obtained by pre-processing and mining clinical data, in particular the ICU data from the MIMIC-III database mentioned above.
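To make the dependency notion concrete, the following Python sketch checks an approximate functional dependency using the classical g3 error, i.e., the smallest fraction of rows that must be removed for the dependency to hold exactly; the column names are hypothetical, and the temporal side (e.g., restricting the rows to a 15-day window) is assumed to have already been applied when selecting the input rows.

```python
# Approximate check of (diagnosis, therapy) -> daily_dose via the g3 error.
from collections import Counter, defaultdict

def g3_error(rows, lhs, rhs):
    """Minimal fraction of rows to drop so that lhs -> rhs holds exactly."""
    groups = defaultdict(Counter)
    for row in rows:
        groups[tuple(row[a] for a in lhs)][row[rhs]] += 1
    # In each lhs-group, only rows carrying the most frequent rhs value survive.
    kept = sum(counts.most_common(1)[0][1] for counts in groups.values())
    return 1 - kept / len(rows)

rows = [
    {"diagnosis": "D1", "therapy": "T1", "daily_dose": 10},
    {"diagnosis": "D1", "therapy": "T1", "daily_dose": 10},
    {"diagnosis": "D1", "therapy": "T1", "daily_dose": 12},  # one violation
    {"diagnosis": "D2", "therapy": "T2", "daily_dose": 5},
]
eps = 0.3  # tolerated fraction of violating rows
err = g3_error(rows, ("diagnosis", "therapy"), "daily_dose")
print(f"g3 error = {err:.2f}; dependency holds approximately: {err <= eps}")
```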

    Computational Advances in Drug Safety: Systematic and Mapping Review of Knowledge Engineering Based Approaches

    Drug Safety (DS) is a domain with significant public health and social impact. Knowledge Engineering (KE) is the Computer Science discipline that elaborates methods and tools for developing “knowledge-intensive” systems, which depend on a conceptual “knowledge” schema and some kind of “reasoning” process. The present systematic and mapping review investigates KE-based approaches employed for DS and highlights the added value they introduce, as well as trends and possible gaps in the domain. Journal articles published between 2006 and 2017 were retrieved from PubMed/MEDLINE and Web of Science® (873 in total) and filtered based on a comprehensive set of inclusion/exclusion criteria. The 80 finally selected articles were reviewed in full text, while the mapping process relied on a set of concrete criteria (concerning specific KE and DS core activities, special DS topics, employed data sources, reference ontologies/terminologies, computational methods, etc.). The analysis results are publicly available as online interactive analytics graphs. The review clearly depicted increased use of KE approaches for DS. The collected data illustrate the use of KE for various DS aspects, such as Adverse Drug Event (ADE) information collection, detection, and assessment. Moreover, the quantified analysis of KE use across the respective DS core activities highlighted room for intensifying research on KE for ADE monitoring, prevention, and reporting. Finally, the assessed use of the various data sources for DS special topics demonstrated extensive use of the dominant data sources for DS surveillance, i.e., Spontaneous Reporting Systems, but also increasing interest in emerging data sources, e.g., observational healthcare databases, biochemical/genetic databases, and social media. Various exemplar applications were identified with promising results, e.g., improvement in Adverse Drug Reaction (ADR) prediction, detection of drug interactions, and novel ADE profiles related to specific mechanisms of action. Nevertheless, since the reviewed studies mostly concerned proof-of-concept implementations, more intense research is required to reach the maturity level necessary for KE approaches to enter routine DS practice. In conclusion, we argue that efficiently addressing DS data analytics and management challenges requires the introduction of high-throughput KE-based methods for effective knowledge discovery and management, resulting, ultimately, in the establishment of a continuous learning DS system.

    Emerging Technologies for Food and Drug Safety

    Emerging technologies are playing a major role in the generation of new approaches to assess the safety of both foods and drugs. However, the integration of emerging technologies into the regulatory decision-making process requires rigorous assessment and consensus among international partners and research communities. To that end, the Global Coalition for Regulatory Science Research (GCRSR), in partnership with the Brazilian Health Surveillance Agency (ANVISA), hosted the seventh Global Summit on Regulatory Science (GSRS17) in Brasilia, Brazil on September 18–20, 2017, to discuss the role of new approaches in regulatory science, with a specific emphasis on applications in food and medical product safety. The global regulatory landscape concerning the application of new technologies was assessed for several countries worldwide. Challenges and issues were discussed in the context of developing an international consensus on objective criteria for the development, application, and review of emerging technologies. The need for advanced approaches that allow faster, less expensive, and more predictive methodologies was elaborated, and the strengths and weaknesses of each new approach were discussed. Finally, the need for standards and reproducible approaches was reviewed as a way to enhance the application of emerging technologies to improve food and drug safety. The overarching goal of GSRS17 was to provide a venue where regulators and researchers meet to develop collaborations addressing the most pressing scientific challenges and to facilitate the adoption of novel technical innovations to advance the field of regulatory science.

    Discovering Patient Phenotypes Using Generalized Low Rank Models

    The practice of medicine is predicated on discovering commonalities or distinguishing characteristics among patients to inform corresponding treatment. Given a patient grouping (hereafter referred to as a phenotype), clinicians can implement a treatment pathway accounting for the underlying cause of disease in that phenotype. Traditionally, phenotypes have been discovered by intuition, experience in practice, and advancements in basic science, but these approaches are often heuristic, labor intensive, and can take decades to produce actionable knowledge. Although our understanding of disease has progressed substantially in the past century, there are still important domains in which our phenotypes are murky, such as in behavioral health or in hospital settings. To accelerate phenotype discovery, researchers have used machine learning to find patterns in electronic health records, but have often been thwarted by missing data, sparsity, and data heterogeneity. In this study, we use a flexible framework called Generalized Low Rank Modeling (GLRM) to overcome these barriers and discover phenotypes in two sources of patient data. First, we analyze data from the 2010 Healthcare Cost and Utilization Project National Inpatient Sample (NIS), which contains upwards of 8 million hospitalization records consisting of administrative codes and demographic information. Second, we analyze a small (N=1746), local dataset documenting the clinical progression of autism spectrum disorder patients using granular features from the electronic health record, including text from physician notes. We demonstrate that low rank modeling successfully captures known and putative phenotypes in these vastly different datasets.
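As a hedged sketch of the underlying idea (the full GLRM framework additionally supports heterogeneous, per-column losses and regularizers, which is what makes it suitable for mixed clinical data), the numpy code below fits a rank-k factorization to a matrix with missing entries by masked alternating least squares; the row factors would play the role of patient representations from which phenotypes can then be clustered.

```python
# Masked alternating least squares: quadratic-loss low-rank fit on observed entries.
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 50, 20, 3
A = rng.normal(size=(m, k)) @ rng.normal(size=(k, n))  # synthetic low-rank data
mask = rng.random((m, n)) > 0.3                        # True where observed

X = rng.normal(size=(m, k))   # row (patient) factors
Y = rng.normal(size=(k, n))   # column (feature) factors
for _ in range(50):
    for i in range(m):        # update each patient factor on its observed entries
        obs = mask[i]
        X[i] = np.linalg.lstsq(Y[:, obs].T, A[i, obs], rcond=None)[0]
    for j in range(n):        # update each feature factor on its observed entries
        obs = mask[:, j]
        Y[:, j] = np.linalg.lstsq(X[obs], A[obs, j], rcond=None)[0]

err = np.linalg.norm((X @ Y - A)[mask]) / np.linalg.norm(A[mask])
print(f"relative error on observed entries: {err:.3f}")
```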

    Business Model Innovation For Potentially Disruptive Technologies: The Case Of Big Pharmaceutical Firms Accommodating Biotechnologies

    Potentially disruptive technologies are challenging to commercialize because they are associated with values new to established firms. Without fitting business model innovation, incumbent firms fail to bring new, potentially disruptive technologies to market. The burgeoning literature on disruptive innovation provides only limited recommendations on specific business model elements that can serve to accommodate potentially disruptive technologies.
To close this research gap, this thesis explores how big pharmaceutical firms accommodated biotechnologies in the design of their business model innovation, in order to discover successful business model design elements. A qualitative research approach consisting of three studies is adopted. First, following a systematic literature review on business model research in the pharmaceutical industry, 45 papers are selected and qualitatively analyzed. Second, qualitative semi-structured interviews are conducted with 16 experts in big pharmaceutical firms; the transcripts are analyzed using the qualitative content analysis method. Finally, a cluster analysis is conducted to identify the value proposed and delivered by the digital offerings of big pharmaceutical firms. This thesis is the first to describe two business model designs of big pharmaceutical firms, from before and since the accommodation of biotechnologies. It argues that the business model design elements recommended for accommodating potentially disruptive technologies are collaboration portfolios and digital servitization. First, established firms should devise a portfolio of collaboration formats by diversifying the breadth of their partners (including competitors) and by covering all activities in their value chain. Second, incumbent firms should innovate in the value they offer, and in how they deliver it to mainstream and new customer segments, by bundling their products with complementary services, especially digitally enabled ones. Digital services feed customers' needs back to the producer. Besides advancing the theory of disruptive innovation, the recommended business model design elements can be directly used by top midsize pharmaceutical firms (e.g., Fresenius or Servier) and by firms from other industries to commercialize other potentially disruptive technologies. This research also supports policy makers in devising strategies to promote the commercialization of potentially disruptive innovations in their specific contexts.

    An Experimental Study on Microarray Expression Data from Plants under Salt Stress by using Clustering Methods

    Recent genome-wide advances in gene chip technology give omics research (genomics, proteomics, and transcriptomics) the opportunity to analyze the expression levels of thousands of genes across multiple experiments. Many machine learning approaches have been proposed to deal with this deluge of information, and clustering methods are one of them. Their process consists of grouping data (gene profiles) into homogeneous clusters using distance measures. Various clustering techniques are applied, but there is no consensus on the best one. In this context, seven clustering algorithms were compared on the gene expression datasets of three model plants under salt stress. The techniques were evaluated with internal and relative validity measures. AGNES turns out to be the best algorithm on the internal validity measures for the three plant datasets, while K-Means shows the strongest trend on the relative validity measures.
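A minimal sketch of such a comparison, assuming scikit-learn and synthetic data in place of the salt-stress expression profiles: agglomerative clustering (the family AGNES belongs to) and K-Means are each scored with the silhouette coefficient, one common internal validity measure.

```python
# Compare two clustering algorithms with an internal validity measure.
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Synthetic stand-in for gene expression profiles.
X, _ = make_blobs(n_samples=300, n_features=10, centers=4, random_state=0)

for name, model in [
    ("AGNES-style agglomerative", AgglomerativeClustering(n_clusters=4)),
    ("K-Means", KMeans(n_clusters=4, n_init=10, random_state=0)),
]:
    labels = model.fit_predict(X)
    print(f"{name}: silhouette = {silhouette_score(X, labels):.3f}")
```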

    Data mining in biomedicine: current applications and further directions for research


    Integrative Approaches for Predicting In Vivo Effects of Chemicals from their Structural Descriptors and the Results of Short-Term Biological Assays

    Cheminformatics approaches such as Quantitative Structure Activity Relationship (QSAR) modeling have traditionally been used for predicting chemical toxicity. In recent years, high-throughput biological assays have been increasingly employed to elucidate mechanisms of chemical toxicity and predict the toxic effects of chemicals in vivo. The data generated in such assays can be considered biological descriptors of chemicals that can be combined with molecular descriptors and employed in QSAR modeling to improve the accuracy of toxicity prediction. In this review, we discuss several approaches for integrating chemical and biological data to predict the biological effects of chemicals in vivo and compare their performance across several data sets. We conclude that while no method consistently shows superior performance, the integrative approaches rank consistently among the best and offer enriched interpretation of models over those built with either chemical or biological data alone. We discuss the outlook for such interdisciplinary methods and offer recommendations to further improve the accuracy and interpretability of computational models that predict chemical toxicity.
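As an illustrative sketch of the integrative idea (not any single method from the review), the code below compares the cross-validated accuracy of a classifier trained on chemical descriptors alone against one trained on chemical and biological descriptors concatenated; the descriptor matrices and toxicity labels are synthetic stand-ins.

```python
# Chemical-only vs. integrated chemical+biological feature sets.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
X_chem = rng.normal(size=(n, 30))                  # stand-in structural descriptors
X_bio = rng.normal(size=(n, 15))                   # stand-in assay readouts
y = (X_chem[:, 0] + X_bio[:, 0] > 0).astype(int)   # stand-in toxic/non-toxic label

clf = RandomForestClassifier(n_estimators=200, random_state=0)
for name, X in [("chemical only", X_chem),
                ("chemical + biological", np.hstack([X_chem, X_bio]))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: 5-fold CV accuracy = {acc:.3f}")
```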