969 research outputs found

    Simulating activities: Relating motives, deliberation, and attentive coordination

    Activities are located behaviors, taking time, conceived as socially meaningful, and usually involving interaction with tools and the environment. In modeling human cognition as a form of problem solving (goal-directed search and operator sequencing), cognitive science researchers have not adequately studied “off-task” activities (e.g., waiting), non-intellectual motives (e.g., hunger), sustaining a goal state (e.g., playful interaction), and coupled perceptual-motor dynamics (e.g., following someone). These aspects of human behavior have been considered in bits and pieces in past research, identified as scripts, human factors, behavior settings, ensemble, flow experience, and situated action. More broadly, activity theory provides a comprehensive framework relating motives, goals, and operations. This paper ties these ideas together, using examples from work life in a Canadian High Arctic research station. The emphasis is on simulating human behavior as it naturally occurs, such that “working” is understood as an aspect of living. The result is a synthesis of previously unrelated analytic perspectives and a broader appreciation of the nature of human cognition. Simulating activities in this comprehensive way is useful for understanding work practice, promoting learning, and designing better tools, including human-robot systems.

    Designing Tools for the Invisible Art of Game Feel


    Process Mining Handbook

    This is an open access book. It comprises the courses given as part of the First Summer School on Process Mining, PMSS 2022, which was held in Aachen, Germany, during July 4-8, 2022. The volume contains 17 chapters organized into the following topical sections: introduction; process discovery; conformance checking; data preprocessing; process enhancement and monitoring; assorted process mining topics; industrial perspective and applications; and closing.
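    As a minimal taste of the process discovery material covered in such a course, the directly-follows relation that many discovery algorithms start from can be mined from an event log in a few lines. The toy log below is invented for illustration, not taken from the book:

```python
from collections import Counter

def directly_follows(event_log):
    """Count how often activity a is directly followed by activity b
    across all traces in the log."""
    dfg = Counter()
    for trace in event_log:
        for a, b in zip(trace, trace[1:]):
            dfg[(a, b)] += 1
    return dfg

# Toy event log: each trace is one case's ordered activities.
log = [
    ["register", "check", "decide", "pay"],
    ["register", "check", "check", "decide", "reject"],
    ["register", "decide", "pay"],
]

dfg = directly_follows(log)
for (a, b), n in sorted(dfg.items()):
    print(f"{a} -> {b}: {n}")
```

The resulting counts form the directly-follows graph, the common input to discovery techniques such as the Alpha and inductive miners.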

    Advancing antiviral strategies against emerging RNA viruses by phenotypic drug discovery

    Pathogenic RNA viruses can emerge from unexpected sources at unexpected times and cause severe disease in humans, as exemplified by the ongoing coronavirus disease 2019 (COVID-19) pandemic caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), and by the Ebola virus (EBOV), Crimean-Congo hemorrhagic fever virus (CCHFV) and Zika virus (ZIKV) outbreaks of the past decade. Despite the increasing impact of emerging viruses on health and the economy worldwide, our preparedness against these diseases is hampered by the lack of approved and effective antiviral therapies; the development of novel antivirals is therefore urgently needed. To date, antiviral drug discovery has primarily focused on targeting specific viral proteins, but such treatments often suffer from viral resistance and are limited to one or a few viruses. Phenotypic drug discovery, by contrast, enables the identification of drug candidates that are active in a disease-relevant model and not restricted to previously characterized biological processes. Because RNA viruses are highly dependent on host cell pathways due to their relatively small genomes, targeting virus vulnerabilities within the host cell is a promising yet relatively unexplored strategy for broad-spectrum antivirals, and phenotypic approaches, being unbiased, can additionally identify host-directed antivirals. The focus of this doctoral thesis was to identify novel antiviral compounds with broad-spectrum activity and to investigate their mechanisms of action and target pathways from both the host cell and virus perspectives. To achieve these goals, multiple cutting-edge phenotype-based methodologies were implemented that additionally advanced the antiviral drug discovery landscape.
In Paper I, we developed an image-based phenotypic antiviral assay and screened our in-house chemical library, which targets cellular oxidative stress and nucleotide metabolism pathways, in Hazara virus (HAZV)-infected cells. The activity of the screening hits TH3289 and TH6744 was validated by their therapeutic windows, and both compounds were active beyond HAZV, especially TH3289, which displayed activity against EBOV, CCHFV, SARS-CoV-2 and the common cold coronavirus 229E (CoV-229E). We also excluded the intended target, the 8-oxoguanine DNA glycosylase (OGG1) protein, as being responsible for the antiviral activity of TH6744, and by implementing thermal proteome profiling we characterized the host cell chaperone and co-chaperone network as its target pathway. In Paper II, we transferred our image-based phenotypic assay to ZIKV-infected brain cells in order to screen structural analogs of TH3289 and TH6744 against a pathogenic RNA virus. TH3289 and TH6744 again appeared among the screening hits and presented a promising therapeutic window in various cellular models, further confirming their broad activity. Moreover, TH6744 reduced ZIKV infection and progeny release in a cerebral organoid model and markedly rescued ZIKV-induced cytotoxicity in organoids. Additionally, treatment with TH6744 rapidly diminished ZIKV progeny release during late stages of the replication cycle, elucidating its antiviral mechanism of action. In Paper III, we established an untargeted morphological profiling method to capture in-depth host cell responses during antiviral screening. We combined the Cell Painting protocol with antibody-based virus detection in a single assay, followed by an automated image analysis pipeline providing segmentation and classification of infected cells and extraction of cell morphological features. We demonstrated that our assay reliably distinguished CoV-229E-infected human lung fibroblasts from non-infected controls based on cellular morphological features.
Furthermore, our method can be applied in phenotypic drug screening, as validated with nine host- and virus-targeting antivirals: treatment with the effective antivirals remdesivir and E-64d reversed the infection-specific signatures in host cells. The developed method can thus be implemented for antiviral phenotypic drug discovery through morphological profiling of drug candidates.
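    The segmentation-and-classification step of such an image analysis pipeline can be sketched as below. The synthetic two-channel image, the thresholding rule, and all threshold values are illustrative assumptions, not the assay's actual parameters:

```python
import numpy as np
from scipy import ndimage

def classify_infected_cells(nuclei, virus, nuc_thresh=0.5, virus_thresh=0.3):
    """Segment cells on the nuclei channel and call each cell infected
    if its mean virus-channel intensity exceeds a threshold."""
    mask = nuclei > nuc_thresh              # foreground segmentation
    labels, n_cells = ndimage.label(mask)   # connected components = cells
    infected = []
    for cell_id in range(1, n_cells + 1):
        mean_virus = virus[labels == cell_id].mean()
        infected.append(mean_virus > virus_thresh)
    return labels, infected

# Synthetic two-channel "image": two nuclei, one with high virus signal.
nuclei = np.zeros((10, 10))
nuclei[1:4, 1:4] = 1.0   # cell 1
nuclei[6:9, 6:9] = 1.0   # cell 2
virus = np.zeros((10, 10))
virus[6:9, 6:9] = 0.8    # only cell 2 carries viral signal

labels, infected = classify_infected_cells(nuclei, virus)
print(infected)  # one boolean per segmented cell
```

In the actual assay the virus channel would come from antibody-based staining, and classification would rest on a rich set of Cell Painting morphological features rather than a single intensity threshold.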

    Methods for engineering symbolic human behaviour models for activity recognition

    This work investigates the ability of symbolic models to encode context information that is later used for generating probabilistic models for activity recognition. The contributions of the work are as follows: it shows that it is possible to successfully use symbolic models for activity recognition; it provides a modelling toolkit that contains patterns for reducing model complexity; and it proposes a structured development process for building and evaluating computational causal behaviour models.
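    The idea of deriving recognition models from symbolic behaviour descriptions can be loosely sketched as follows. The kitchen domain, action names, and precondition/effect encoding are invented for illustration and are far simpler than the computational causal behaviour models the work describes:

```python
# Hypothetical symbolic model: each action has preconditions and effects
# over boolean state variables. Valid transitions are derived from the
# model rather than hand-coded; the derived transition structure could
# then seed a probabilistic recognition model (e.g. uniform priors over
# the actions applicable in each state).
ACTIONS = {
    "fill_kettle": ({"kettle_full": False}, {"kettle_full": True}),
    "boil_water":  ({"kettle_full": True, "water_hot": False}, {"water_hot": True}),
    "make_tea":    ({"water_hot": True}, {"tea_ready": True}),
}

def applicable(state, pre):
    return all(state.get(var) == val for var, val in pre.items())

def successors(state):
    """Enumerate the actions the symbolic model allows in this state."""
    for name, (pre, eff) in ACTIONS.items():
        if applicable(state, pre):
            yield name, {**state, **eff}

# Unfold the reachable state space from the initial situation.
state = {"kettle_full": False, "water_hot": False, "tea_ready": False}
seen = {tuple(sorted(state.items()))}
plan, progressed = [], True
while progressed:
    progressed = False
    for name, nxt in successors(state):
        key = tuple(sorted(nxt.items()))
        if key not in seen:       # only follow transitions to new states
            seen.add(key)
            state, plan = nxt, plan + [name]
            progressed = True
            break
print(plan)
```

The point of the sketch is that the legal action sequences fall out of the declarative preconditions and effects, which is what makes symbolic models attractive as generators of recognition models.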

    Computationally Linking Chemical Exposure to Molecular Effects with Complex Data: Comparing Methods to Disentangle Chemical Drivers in Environmental Mixtures and Knowledge-based Deep Learning for Predictions in Environmental Toxicology

    Chemical exposures affect the environment and may lead to adverse outcomes in its organisms. Omics-based approaches, like standardised microarray experiments, have expanded the toolbox to monitor the distribution of chemicals and assess the risk to organisms in the environment. The resulting complex data have extended the scope of toxicological knowledge bases and published literature. A plethora of computational approaches have been applied in environmental toxicology considering systems biology and data integration. Still, the complexity of environmental and biological systems reflected in these data challenges investigations of exposure-related effects. This thesis aimed at computationally linking chemical exposure to biological effects on the molecular level, considering sources of complex environmental data. The first study employed data from an omics-based exposure study considering mixture effects in a freshwater environment. We compared three data-driven analyses in their suitability to link mixture effects of chemical exposures to biological effects, and in their reliability in attributing potentially adverse outcomes to chemical drivers using toxicological databases on the gene and pathway levels. Differential gene expression analysis and a network inference approach yielded toxicologically meaningful outcomes and uncovered individual chemical effects, both stand-alone and in combination. We developed an integrative computational strategy to harvest exposure-related gene associations from environmental samples containing mixtures of lowly concentrated compounds. The applied approaches allowed assessing the hazard of chemicals more systematically with correlation-based compound groups. This dissertation presents a further achievement toward data-driven hypothesis generation for molecular exposure effects. The approach combined text mining and deep learning.
The study was entirely data-driven and involved state-of-the-art computational methods of artificial intelligence. We employed literature-based relational data and curated toxicological knowledge to predict chemical-biomolecule interactions, implementing a word embedding neural network with a subsequent feed-forward network. Data augmentation and recurrent neural networks were beneficial for training with curated toxicological knowledge. The trained models reached accuracies of up to 94% on unseen test data from the employed knowledge base. However, we could not reliably confirm known chemical-gene interactions across the selected data sources. Still, the predictive models might derive unknown information from toxicological knowledge sources such as literature, databases or omics-based exposure studies, and might thus allow predicting hypotheses of exposure-related molecular effects. Both achievements of this dissertation might support the prioritisation of chemicals for testing and an intelligent selection of chemicals for monitoring in future exposure studies.
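    As a rough illustration of the architecture mentioned above, a word embedding layer followed by a feed-forward network, the forward pass can be sketched in NumPy. The vocabulary, dimensions, and random (untrained) weights are invented placeholders, not the thesis's trained models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative vocabulary of chemical and gene tokens.
vocab = {"benzene": 0, "cadmium": 1, "TP53": 2, "CYP1A1": 3}
EMB_DIM, HIDDEN = 8, 16

# Randomly initialised parameters standing in for trained ones.
embeddings = rng.normal(size=(len(vocab), EMB_DIM))
W1 = rng.normal(size=(2 * EMB_DIM, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(size=HIDDEN)
b2 = 0.0

def predict_interaction(chemical, gene):
    """Embed both tokens, concatenate, and run a feed-forward pass
    producing a score between 0 and 1 for the chemical-gene pair."""
    x = np.concatenate([embeddings[vocab[chemical]], embeddings[vocab[gene]]])
    h = np.maximum(0.0, x @ W1 + b1)        # ReLU hidden layer
    logit = h @ W2 + b2
    return float(1.0 / (1.0 + np.exp(-logit)))  # sigmoid output

score = predict_interaction("benzene", "TP53")
print(score)  # untrained network, so the score is arbitrary
```

In the dissertation's setting the embeddings would be learned from literature-based relational data and the output trained against curated interaction labels; only the data flow is shown here.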

    Technological roadmap on AI planning and scheduling

    At the beginning of the new century, Information Technologies have become basic and indispensable constituents of the production and preparation processes for all kinds of goods and services, and thereby largely influence both the working and private life of nearly every citizen. This development will continue and grow further with the continually increasing use of the Internet in production, business, science, education, and everyday societal and private undertakings. Recent years have shown, however, that a dramatic enhancement of software capabilities is required in order to continuously provide advanced and competitive products and services in all these fast-developing sectors. This includes the development of intelligent systems: systems that are more autonomous, flexible, and robust than today's conventional software. Intelligent Planning and Scheduling is a key enabling technology for intelligent systems. It has been developed and matured over the last three decades and has successfully been employed for a variety of applications in commerce, industry, education, medicine, public transport, defense, and government. This document reviews the state of the art in key application and technical areas of Intelligent Planning and Scheduling. It identifies the most important research, development, and technology transfer efforts required in the coming 3 to 10 years and shows the way forward to meet these challenges in the short-, medium- and longer-term future. The roadmap has been developed under the regime of PLANET, the European Network of Excellence in AI Planning. This network, established by the European Commission in 1998, is the co-ordinating framework for research, development, and technology transfer in the field of Intelligent Planning and Scheduling in Europe. A large number of people have contributed to this document, including the members of PLANET, non-European international experts, and a number of independent expert peer reviewers.
All of them are acknowledged in a separate section of this document. Intelligent Planning and Scheduling is a far-reaching technology. Accepting the challenges and progressing along the directions pointed out in this roadmap will enable a new generation of intelligent application systems in a wide variety of industrial, commercial, public, and private sectors.

    High-Performance Modelling and Simulation for Big Data Applications

    This open access book was prepared as the final publication of the COST Action IC1406 "High-Performance Modelling and Simulation for Big Data Applications" (cHiPSet). Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predict and analyse natural and complex systems in science and engineering. As their level of abstraction rises to afford a better discernment of the domain at hand, their representations become increasingly demanding of computational and data resources. High Performance Computing, on the other hand, typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction of High Performance Computing with Modelling and Simulation is therefore arguably required in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.

    Understanding the bi-directional relationship between analytical processes and interactive visualization systems

    Interactive visualizations leverage the human visual and reasoning systems to increase the scale of information with which we can effectively work, thereby improving our ability to explore and analyze large amounts of data. Interactive visualizations are often designed with target domains in mind, such as analyzing unstructured textual information, which is a main thrust of this dissertation. Since each domain has its own existing procedures for analyzing data, a good start to a well-designed interactive visualization system is to understand the domain experts' workflows and analysis processes. This dissertation underscores the importance of understanding domain users' analysis processes and of incorporating such understanding into the design of interactive visualization systems. To meet this aim, I first introduce considerations guiding the gathering of general and domain-specific analysis processes in text analytics. Two interactive visualization systems were designed by following these considerations. The first system is Parallel-Topics, a visual analytics system supporting analysis of large collections of documents by extracting semantically meaningful topics. Based on lessons learned from Parallel-Topics, this dissertation further presents a general visual text analysis framework, I-Si, which presents meaningful topical summaries and temporal patterns and can handle large-scale textual information. Both systems have been evaluated by expert users and deemed successful in addressing domain analysis needs. The second contribution lies in preserving domain users' analysis processes while they use interactive visualizations. Our research suggests such preservation could serve multiple purposes: on the one hand, it could further improve the current system; on the other hand, users often need help recalling and revisiting their complex and sometimes iterative analysis process with an interactive visualization system.
This dissertation introduces multiple types of evidence available for capturing a user's analysis process within an interactive visualization and analyzes the cost/benefit ratios of the capturing methods. It concludes that tracking interaction sequences is the least intrusive and most feasible way to capture part of a user's analysis process. To validate this claim, a user study is presented to theoretically analyze the relationship between interactions and problem-solving processes. The results indicate that constraining the way a user interacts with a mathematical puzzle does have an effect on the problem-solving process. As later evidenced in an evaluative study, a fair amount of high-level analysis can be recovered by merely analyzing interaction logs.
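    The interaction-sequence tracking advocated above can be sketched minimally as follows; the event names and the log itself are hypothetical, not taken from the study:

```python
from collections import Counter

# Hypothetical interaction log captured by a visualization system:
# (timestamp_seconds, event) pairs with invented event names.
log = [
    (0, "search"), (5, "select_topic"), (9, "zoom"),
    (14, "select_doc"), (20, "annotate"), (31, "search"),
    (36, "select_topic"), (40, "select_doc"), (47, "annotate"),
]

def frequent_transitions(interaction_log, top=3):
    """Summarize an interaction sequence by its most frequent
    action-to-action transitions (bigrams)."""
    events = [e for _, e in interaction_log]
    pairs = Counter(zip(events, events[1:]))
    return pairs.most_common(top)

for (a, b), n in frequent_transitions(log):
    print(f"{a} -> {b}: {n}")

top = frequent_transitions(log)
```

Recurring transitions such as repeated search-then-select patterns are the kind of high-level analysis behavior that, per the dissertation, can be partially recovered from interaction logs alone.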