1,250 research outputs found

    11th German Conference on Chemoinformatics (GCC 2015) : Fulda, Germany. 8-10 November 2015.


    FAIR and bias-free network modules for mechanism-based disease redefinitions

    Even though chronic diseases cause 60% of all deaths worldwide, the underlying causes of most of them are not fully understood. Hence, diseases are defined based on organs and symptoms, and therapies largely focus on mitigating symptoms rather than curing the disease. This is also reflected in the most commonly used disease classifications. The complex nature of diseases, however, can be better defined in terms of networks of molecular interactions. This research applies the approaches of network medicine – a field that uses network science for identifying and treating diseases – to multiple diseases with highly unmet medical need, such as stroke and hypertension. The results show the success of this approach in analysing complex disease networks and predicting drug targets for different conditions, which are validated through preclinical experiments and are currently in human clinical trials.
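    As a minimal illustration of the module-based view of disease described above, the sketch below extracts a candidate disease module as the largest connected group of seed genes within a toy protein-interaction network. The gene names, edges, and seed set are hypothetical examples, not data from this work.

```python
from collections import deque

# Toy protein-interaction network (hypothetical genes, for illustration only).
PPI = {
    "NOS3": {"AGT", "ACE"},
    "AGT": {"NOS3", "ACE", "REN"},
    "ACE": {"NOS3", "AGT"},
    "REN": {"AGT"},
    "TP53": {"MDM2"},
    "MDM2": {"TP53"},
}

def disease_module(seeds, network):
    """Largest connected component of the subgraph induced by seed genes."""
    seeds = set(seeds) & set(network)
    best, unvisited = set(), set(seeds)
    while unvisited:
        start = unvisited.pop()
        component, queue = {start}, deque([start])
        while queue:
            gene = queue.popleft()
            # Only walk edges that stay inside the seed set.
            for neighbor in (network[gene] & seeds) - component:
                component.add(neighbor)
                queue.append(neighbor)
        unvisited -= component
        if len(component) > len(best):
            best = component
    return best

module = disease_module({"NOS3", "AGT", "REN", "TP53"}, PPI)
```

Real network-medicine pipelines operate on genome-scale interactomes and use statistical module-detection methods, but the core idea is the same: disease genes cluster in connected neighborhoods of the network.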

    Generation and Applications of Knowledge Graphs in Systems and Networks Biology

    The acceleration in the generation of data in the biomedical domain has necessitated the use of computational approaches to assist in its interpretation. However, these approaches rely on the availability of high-quality, structured, formalized biomedical knowledge. This thesis has two goals: to improve methods for curation and semantic data integration to generate high-granularity biological knowledge graphs, and to develop novel methods for using prior biological knowledge to propose new biological hypotheses. The first two publications describe an ecosystem for handling biological knowledge graphs encoded in the Biological Expression Language throughout the stages of curation, visualization, and analysis. Further, the second two publications describe the reproducible acquisition and integration of high-granularity knowledge with low contextual specificity from structured biological data sources on a massive scale, and support the semi-automated curation of new content at high speed and precision. After building the ecosystem and acquiring content, the last three publications in this thesis demonstrate three different applications of biological knowledge graphs in modeling and simulation. The first demonstrates the use of agent-based modeling for simulation of neurodegenerative disease biomarker trajectories using biological knowledge graphs as priors. The second applies network representation learning to prioritize nodes in biological knowledge graphs based on corresponding experimental measurements to identify novel targets. Finally, the third uses biological knowledge graphs and develops algorithms to deconvolute the mechanism of action of drugs, which could also serve to identify drug repositioning candidates. Ultimately, this thesis lays the groundwork for production-level applications of drug repositioning algorithms and other knowledge-driven approaches to analyzing biomedical experiments.
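    The node-prioritization application mentioned above can be sketched in a deliberately simplified form: score each node of a small, hypothetical BEL-style knowledge graph by the measured signal of its direct neighbors. The actual work uses network representation learning rather than this one-hop heuristic, and all gene names, relations, and values below are invented for illustration.

```python
# Hypothetical BEL-style triples: (subject, relation, object).
edges = [
    ("APP", "increases", "APOE"),
    ("APP", "increases", "MAPT"),
    ("MAPT", "decreases", "MAPK1"),
    ("APOE", "increases", "MAPK1"),
]

# Hypothetical differential-expression scores from an experiment.
measurements = {"APOE": 2.1, "MAPT": 1.4, "MAPK1": 0.3}

def prioritize(edges, measurements):
    """Rank each node by the total measured signal of its direct neighbors."""
    neighbors = {}
    for subj, _, obj in edges:
        neighbors.setdefault(subj, set()).add(obj)
        neighbors.setdefault(obj, set()).add(subj)
    scores = {
        node: sum(measurements.get(n, 0.0) for n in nbrs)
        for node, nbrs in neighbors.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

ranking = prioritize(edges, measurements)
```

Unmeasured nodes that sit next to strongly perturbed neighbors rise to the top of the ranking, which is the intuition behind knowledge-graph-based target identification.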

    Current Challenges in the Application of Algorithms in Multi-institutional Clinical Settings

    The coronavirus disease pandemic has highlighted the importance of artificial intelligence in multi-institutional clinical settings. Particularly in situations where the healthcare system is overloaded and a lot of data is generated, artificial intelligence has great potential to provide automated solutions and to unlock the untapped potential of acquired data. This includes the areas of care, logistics, and diagnosis. For example, automated decision support applications could tremendously help physicians in their daily clinical routine. Especially in radiology and oncology, the exponential growth of imaging data, triggered by a rising number of patients, leads to a permanent overload of the healthcare system, making the use of artificial intelligence inevitable. However, the efficient and advantageous application of artificial intelligence in multi-institutional clinical settings faces several challenges, such as accountability and regulation hurdles, implementation challenges, and fairness considerations. This work focuses on the implementation challenges, which include the following questions: how can well-curated and standardized data be ensured, how do algorithms from other domains perform on multi-institutional medical datasets, and how can more robust and generalizable models be trained? The work also addresses how to interpret results and whether correlations exist between the performance of the models and the characteristics of the underlying data. Therefore, besides presenting a technical solution for manual data annotation and tagging for medical images, a real-world federated learning implementation for image segmentation is introduced. Experiments on a multi-institutional prostate magnetic resonance imaging dataset showcase that models trained by federated learning can achieve similar performance to training on pooled data.
    Furthermore, Natural Language Processing algorithms for the tasks of semantic textual similarity, text classification, and text summarization are applied to multi-institutional, structured and free-text, oncology reports. The results show that performance gains are achieved by customizing state-of-the-art algorithms to the peculiarities of the medical datasets, such as the occurrence of medications, numbers, or dates. In addition, performance influences are observed depending on the characteristics of the data, such as lexical complexity. The generated results, human baselines, and retrospective human evaluations demonstrate that artificial intelligence algorithms have great potential for use in clinical settings. However, due to the difficulty of processing domain-specific data, there still exists a performance gap between the algorithms and the medical experts. In the future, it is therefore essential to improve the interoperability and standardization of data, as well as to continue working on algorithms to perform well on medical, possibly domain-shifted, data from multiple clinical centers.
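    The federated learning setup described above keeps patient data at each institution and pools only model parameters. Its core aggregation step, federated averaging (FedAvg), can be sketched as a weighted mean of per-site parameter vectors. This is a generic sketch of the technique, not the implementation used in the work; the weight vectors and site sizes are made up.

```python
def federated_average(site_weights, site_sizes):
    """FedAvg: average per-institution model parameters, weighted by the
    number of local training samples. Raw patient data never leaves a site."""
    total = sum(site_sizes)
    n_params = len(site_weights[0])
    return [
        sum(w[i] * n for w, n in zip(site_weights, site_sizes)) / total
        for i in range(n_params)
    ]

# Hypothetical 2-parameter models from three hospitals after one local round.
weights = [[0.2, 1.0], [0.4, 1.2], [0.6, 0.8]]
sizes = [100, 300, 100]  # local dataset sizes
global_w = federated_average(weights, sizes)
```

In a full training loop, the aggregated parameters are broadcast back to every site for the next round of local training; real segmentation models simply have millions of parameters instead of two.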

    Methods for explaining biological systems and high-throughput data


    Application of AOPs to assist regulatory assessment of chemical risks – Case studies, needs and recommendations

    While human regulatory risk assessment (RA) still largely relies on animal studies, new approach methodologies (NAMs) based on in vitro, in silico or non-mammalian alternative models are increasingly used to evaluate chemical hazards. Moreover, human epidemiological studies with biomarkers of effect (BoE) also play an invaluable role in identifying health effects associated with chemical exposures. To move towards the next generation risk assessment (NGRA), it is therefore crucial to establish bridges between NAMs and standard approaches, and to establish processes for increasing mechanistically-based biological plausibility in human studies. The Adverse Outcome Pathway (AOP) framework constitutes an important tool to address these needs but, despite a significant increase in knowledge and awareness, the use of AOPs in chemical RA remains limited. The objective of this paper is to address issues related to using AOPs in a regulatory context from various perspectives, as discussed in a workshop organized within the European Union partnerships HBM4EU and PARC in spring 2022. The paper presents examples where the AOP framework has proven useful for the human RA process, particularly in hazard prioritization and characterization, in integrated approaches to testing and assessment (IATA), and in the identification and validation of BoE in epidemiological studies. Nevertheless, several limitations were identified that hinder the optimal usability and acceptance of AOPs by the regulatory community, including the lack of quantitative information on response-response relationships and of efficient ways to map chemical data (exposure and toxicity) onto AOPs.
    The paper summarizes suggestions, ongoing initiatives and third-party tools that may help to overcome these obstacles and thus assure better implementation of AOPs in the NGRA.

    It Matters Who’s Watching: The Impacts of Surveillance on Business Board Games

    Gametools wished to understand the impact of surveillance on participants playing business board games. Our team used video observations, questionnaires, and group interviews to collect, analyze, and compare players’ experience and behavior while playing blended and non-blended board games. The data collected through questionnaires and interviews indicates that the surveillance introduced by blending business board games does not impact player behavior, social interaction, experience, or decisions. Rather, it is who analyzes the collected data that impacts players the most.

    Crowdsourcing for Engineering Design: Objective Evaluations and Subjective Preferences

    Crowdsourcing enables designers to reach large numbers of people who may not have been previously considered when designing a new product and to listen to their input by aggregating their preferences and evaluations over potential designs, aiming to reinforce "good" and catch "bad" design decisions during the early-stage design process. This approach puts human designers, be they industrial designers, engineers, marketers, or executives, at the forefront, with computational crowdsourcing systems on the backend to aggregate subjective preferences (e.g., which next-generation Brand A design best competes stylistically with next-generation Brand B designs?) or objective evaluations (e.g., which military vehicle design has the best situational awareness?). These crowdsourcing aggregation systems are built using probabilistic approaches that account for the irrationality of human behavior (i.e., violations of reflexivity, symmetry, and transitivity), approximated by modern machine learning algorithms and optimization techniques as necessitated by the scale of the data (millions of data points, hundreds of thousands of dimensions). This dissertation presents research findings suggesting that current off-the-shelf crowdsourcing aggregation algorithms are unsuitable for real engineering design tasks due to the sparsity of expertise in the crowd, and methods that mitigate this limitation by incorporating appropriate information for expertise prediction. Next, we introduce and interpret a number of new probabilistic models for crowdsourced design that provide large-scale preference prediction and full design space generation, building on statistical and machine learning techniques such as sampling methods, variational inference, and deep representation learning.
    Finally, we show how these models and algorithms can advance crowdsourcing systems by abstracting the underlying, appropriate yet unwieldy, mathematics into easier-to-use visual interfaces practical for engineering design companies and governmental agencies engaged in complex engineering systems design.
    PhD, Design Science, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/133438/1/aburnap_1.pd
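    A standard probabilistic model for aggregating pairwise subjective preferences of the kind described above is the Bradley-Terry model; the sketch below fits it with simple minorization-maximization updates on invented comparison data. This illustrates the general class of preference-aggregation models the abstract refers to, not the dissertation's specific algorithms.

```python
# Hypothetical pairwise judgments: (winner, loser) design IDs from a crowd.
comparisons = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B"), ("C", "B")]

def bradley_terry(comparisons, iters=200):
    """Fit Bradley-Terry strengths s_i, where P(i beats j) = s_i / (s_i + s_j),
    via minorization-maximization updates; strengths are normalized to sum to 1."""
    items = {x for pair in comparisons for x in pair}
    strength = {x: 1.0 for x in items}
    for _ in range(iters):
        for i in items:
            wins = sum(1 for w, _ in comparisons if w == i)
            # Sum 1/(s_i + s_opponent) over every comparison involving i.
            denom = sum(
                1.0 / (strength[i] + strength[j])
                for w, l in comparisons
                for j in ((l,) if w == i else (w,) if l == i else ())
            )
            if denom > 0:
                strength[i] = wins / denom
        norm = sum(strength.values())
        for x in strength:
            strength[x] /= norm
    return strength

scores = bradley_terry(comparisons)
```

Design "A" wins every comparison it appears in and therefore receives the highest fitted strength; richer crowdsourcing models extend this idea with per-rater expertise terms and design-feature representations.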