9,391 research outputs found

    UMSL Bulletin 2023-2024

    Get PDF
    The 2023-2024 Bulletin and Course Catalog for the University of Missouri St. Louis.

    The Influence of Neuroendocrine and Genetic Markers of Stress on Cognitive Processing and Intrusive Symptoms

    Full text link
    This body of research investigated the influence of neuroendocrine and genetic elements of arousal on cognitive processes in the development of intrusive memories and flash-forward intrusions as related to Post-Traumatic Stress Disorder (PTSD). Specifically, this thesis investigated various mechanisms that may underlie intrusive symptoms as postulated by prevalent theories of PTSD. Study 1 examined the distinctive relationship between peritraumatic dissociation and subsequent re-experiencing symptoms. Network analyses revealed strong positive edges between peritraumatic dissociation and subsequent amnesia, as well as the re-experiencing symptoms of physical reactivity to reminders, flashbacks, intrusions, and dreams, and to a lesser extent emotional numbness and hypervigilance. The finding that peritraumatic dissociation is related to subsequent re-experiencing symptoms is consistent with cognitive models that emphasize the role of dissociative experiences during a traumatic event in the etiology of PTSD re-experiencing symptoms. Study 2 aimed to determine whether peritraumatic stress, as measured via salivary cortisol and salivary alpha-amylase, as well as pre-existing genetic polymorphisms on the FKBP5 gene, increased dissociation and data-driven processing, and subsequently impacted intrusive memories related to a trauma film. The findings revealed that greater noradrenergic arousal predicted less intrusive memory distress in individuals who scored higher on data-driven processing and trait dissociation, and in FKBP5 low-risk carriers. For individuals who reported less data-driven processing and trait dissociation, and for FKBP5 high-risk carriers, as noradrenergic arousal increased, intrusive memory distress increased. This study also showed no association between data-driven processing and memory fragmentation, or between fragmentation and intrusive memories. Whilst these findings support some aspects of cognitive models of PTSD, as they indicate a role for data-driven processing and dissociation in intrusive symptoms, they highlight a threshold at which these variables stop moderating the relationship between arousal and intrusive memories, and suggest that memory fragmentation is not related to intrusive memories. Study 3 examined the role of cognitive control in flash-forward intrusions in the context of an enduring stressor, the COVID-19 pandemic. In line with expectations, results showed that as cognitive control worsened, FKBP5 high-risk carriers reported more flash-forward distress, and low-risk carriers reported less distress. These findings are considered in the context of hippocampal changes and are consistent with emerging theories of PTSD. Lastly, Study 4 sought to investigate the role of two neurological processes, pattern separation and pattern completion, in intrusive memories in individuals with PTSD compared to trauma-exposed controls. Consistent with existing literature, the data indicate that individuals with PTSD reported more data-driven processing and more intrusive symptoms, and demonstrated better behavioural pattern completion than trauma-exposed controls. These findings are in line with current cognitive models of PTSD, as they again indicate a role for data-driven processing in PTSD. However, Study 4 found no support for the postulate that deficient pattern separation is a feature of PTSD and found an opposite effect for the role of pattern completion. Whilst these findings are inconsistent with theory, they are in line with existing experimental studies. Overall, the findings from this thesis provide insight into cognitive and biological models of PTSD and shed light on the mechanisms underlying the nature and development of intrusive symptoms.
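
    The moderation findings in Study 2 amount to testing an interaction between arousal and a moderator in a regression model. Below is a minimal sketch of such a moderated regression, assuming a hypothetical per-participant dataframe; the variable names do not come from the thesis.

```python
# A minimal sketch of a moderated regression with hypothetical data;
# none of the column names below come from the thesis.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "arousal":     [0.2, 1.1, 0.7, 1.8, 0.4, 1.5, 0.9, 1.3],  # e.g. salivary alpha-amylase
    "data_driven": [2.0, 4.5, 3.1, 4.8, 1.9, 4.2, 3.6, 2.7],  # data-driven processing score
    "distress":    [1.0, 2.5, 1.8, 1.2, 1.1, 1.5, 2.0, 1.7],  # intrusive-memory distress
})

# "arousal * data_driven" expands to both main effects plus their product;
# the product term tests whether data-driven processing moderates the
# arousal -> distress relationship.
model = smf.ols("distress ~ arousal * data_driven", data=df).fit()
print(model.params)
```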

    Transfer Success on the Linda Problem: A Re-Examination Using Dual Process Theory, Learning Material Characteristics, and Individual Differences

    Get PDF
    The Linda problem is an intensely studied task in the judgment literature, in which participants judge the probability of various options and frequently make biased judgements known as conjunction errors. Here, I conceptually replicated and extended the finding by Agnoli and Krantz (1989) that when participants are explicitly trained with Venn diagrams to inhibit their heuristics, successful transfer of learning is observed. I tested whether transfer success was maintained: (1) when the purpose of the training was obscured; (2) after controlling for individual differences; and (3) when learning materials did not include visual images. I successfully replicated their finding, identifying transfer success when the purpose of the training was masked and after controlling for individual differences. Furthermore, the effects of individual differences on transfer success depend on both the kind of learning material used and whether the purpose was masked. Hence, these findings support claims that education can inhibit biases.
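
    The conjunction errors at the heart of the Linda problem violate a simple rule of probability: a conjunction can never be more probable than either of its constituents. A minimal sketch of that check, with hypothetical ratings:

```python
# A minimal sketch of the conjunction rule: for any events A and B,
# P(A and B) <= min(P(A), P(B)). Ratings below are hypothetical.
def is_conjunction_error(p_constituent: float, p_conjunction: float) -> bool:
    """True if the conjunction was judged more probable than its constituent."""
    return p_conjunction > p_constituent

# A participant who rates "bank teller and feminist" above "bank teller"
# commits the classic Linda-problem error.
print(is_conjunction_error(p_constituent=0.3, p_conjunction=0.6))  # True
```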

    TeamSTEPPS and Organizational Culture

    Get PDF
    Patient safety issues persist despite the many strategies developed to prevent them. While many safety initiatives bring about improvement, they are often unsustainable and short-lived. The index hospital’s goal was to build an organizational culture on a framework that improves teamwork and sustains healthcare team engagement. Teamwork influences the efficiency of patient care, patient safety, and clinical outcomes, as it has been identified as an approach for enhancing collaboration, decreasing medical errors, and building a culture of safety in healthcare. The facility implemented Team Strategies and Tools to Enhance Performance and Patient Safety (TeamSTEPPS), an evidence-based framework used for team training to produce valuable and needed changes, facilitating modification of organizational culture, increasing patient safety compliance, and solving particular issues. This study aimed to identify the correlation between TeamSTEPPS implementation and improved organizational culture in the ambulatory care nursing department of a New York City public hospital.

    Machine Learning Approaches for the Prioritisation of Cardiovascular Disease Genes Following Genome-wide Association Study

    Get PDF
    Genome-wide association studies (GWAS) have revealed thousands of genetic loci, establishing the approach as a valuable method for unravelling the complex biology of many diseases. Yet even as GWAS have grown in size and improved in study design, identifying real causal signals and disentangling them from highly correlated markers associated through linkage disequilibrium (LD) remains challenging. This has severely limited the interpretation of GWAS findings and brought the method’s value into question. Although thousands of disease susceptibility loci have been reported, causal variants and genes at these loci remain elusive. Post-GWAS analysis aims to dissect the heterogeneity of variant and gene signals. In recent years, machine learning (ML) models have been developed for post-GWAS prioritisation. These models have ranged from logistic regression to more complex ensemble models such as random forests and gradient boosting, as well as deep learning models (i.e., neural networks). When combined with functional validation, these methods have yielded important translational insights, providing a strong evidence-based approach to direct post-GWAS research. However, ML approaches are in their infancy across biological applications, and as they continue to evolve, an evaluation of their robustness for GWAS prioritisation is needed. Here, I investigate the landscape of ML across selected models, input features, bias risk, and output model performance, with a focus on building a prioritisation framework that is applied to blood pressure GWAS results and tested on re-application to blood lipid traits.
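
    As a minimal sketch of the kind of post-GWAS prioritisation pipeline described above, assuming a hypothetical per-gene annotation matrix and causal/non-causal labels (the thesis's actual features and labels are not specified here):

```python
# A minimal sketch of gene prioritisation with a gradient-boosted classifier.
# The feature matrix and labels are hypothetical stand-ins for per-gene
# annotations (e.g., distance to lead SNP, eQTL evidence) and prior evidence.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))         # hypothetical annotation features
y = rng.integers(0, 2, size=200)      # hypothetical causal/non-causal labels

clf = GradientBoostingClassifier(random_state=0)
# Out-of-sample AUC estimates the model's ability to rank causal genes.
print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())

# Fitted probabilities can then rank candidate genes at each locus.
clf.fit(X, y)
scores = clf.predict_proba(X)[:, 1]
```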

    Reinforcement learning in large state action spaces

    Get PDF
    Reinforcement learning (RL) is a promising framework for training intelligent agents which learn to optimize long-term utility by directly interacting with the environment. Creating RL methods which scale to large state-action spaces is a critical problem for ensuring real-world deployment of RL systems. However, several challenges limit the applicability of RL to large-scale settings. These include difficulties with exploration, low sample efficiency, computational intractability, task constraints like decentralization, and a lack of guarantees about important properties like performance, generalization, and robustness in potentially unseen scenarios. This thesis is motivated towards bridging the aforementioned gaps. We propose several principled algorithms and frameworks for studying and addressing the above challenges in RL. The proposed methods cover a wide range of RL settings (single- and multi-agent systems (MAS) with all the variations in the latter, prediction and control, model-based and model-free methods, value-based and policy-based methods). In this work we propose the first results on several different problems, e.g. tensorization of the Bellman equation, which allows exponential sample efficiency gains (Chapter 4); provable suboptimality arising from structural constraints in MAS (Chapter 3); combinatorial generalization results in cooperative MAS (Chapter 5); generalization results on observation shifts (Chapter 7); and learning deterministic policies in a probabilistic RL framework (Chapter 6). Our algorithms exhibit provably enhanced performance and sample efficiency along with better scalability. Additionally, we shed light on generalization aspects of the agents under different frameworks. These properties have been driven by the use of several advanced tools (e.g. statistical machine learning, state abstraction, variational inference, tensor theory). In summary, the contributions in this thesis significantly advance progress towards making RL agents ready for large-scale, real-world applications.
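
    For readers unfamiliar with the Bellman machinery these contributions build on, here is a minimal sketch of the standard tabular Q-learning backup; the environment, sizes, and rates are hypothetical, and this is not the thesis's tensorized variant:

```python
# A minimal sketch of the tabular Bellman backup underlying Q-learning.
import numpy as np

n_states, n_actions = 10, 4
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.99  # learning rate, discount factor

def q_update(s: int, a: int, r: float, s_next: int) -> None:
    """One step: Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])

q_update(s=0, a=1, r=1.0, s_next=2)
```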

    Using machine learning to predict pathogenicity of genomic variants throughout the human genome

    Get PDF
    More than 6,000 diseases are estimated to be caused by genomic variants. This can happen in many possible ways: a variant may stop the translation of a protein, interfere with gene regulation, or alter splicing of the transcribed mRNA into an unwanted isoform. It is necessary to investigate all of these processes in order to evaluate which variant may be causal for the deleterious phenotype. A great help in this regard are variant effect scores. Implemented as machine learning classifiers, they integrate annotations from different resources to rank genomic variants in terms of pathogenicity. Developing a variant effect score requires multiple steps: annotation of the training data, feature selection, model training, benchmarking, and finally deployment for the model's application. Here, I present a generalized workflow of this process. It makes it simple to configure how information is converted into model features, enabling the rapid exploration of different annotations. The workflow further implements hyperparameter optimization, model validation and ultimately deployment of a selected model via genome-wide scoring of genomic variants. The workflow is applied to train Combined Annotation Dependent Depletion (CADD), a variant effect model that scores SNVs and InDels genome-wide. I show that the workflow can be quickly adapted to novel annotations by porting CADD to the genome reference GRCh38.
Further, I demonstrate the integration of deep neural network scores as features into a new CADD model, improving the annotation of RNA splicing events. Finally, I apply the workflow to train multiple variant effect models from training data that is based on variants selected by allele frequency. In conclusion, the developed workflow presents a flexible and scalable method to train variant effect scores. All software and developed scores are freely available from cadd.gs.washington.edu and cadd.bihealth.org.
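
    As a minimal sketch of the hyperparameter-optimization step of such a workflow, assuming a simple logistic-regression variant scorer over hypothetical per-variant annotations (CADD's actual feature set and model are far more involved):

```python
# A minimal sketch of hyperparameter search for a variant effect scorer.
# Features and labels are hypothetical stand-ins for variant annotations
# and proxy-pathogenic training labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 8))      # hypothetical per-variant annotations
y = rng.integers(0, 2, size=500)   # hypothetical proxy-pathogenic labels

search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},  # regularization strengths
    cv=5,
    scoring="roc_auc",
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```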

    Studies on genetic and epigenetic regulation of gene expression dynamics

    Get PDF
    The information required to build an organism is contained in its genome, and the first biochemical process that activates the genetic information stored in DNA is transcription. Cell-type-specific gene expression shapes cellular functional diversity, and dysregulation of transcription is a central tenet of human disease. Therefore, understanding transcriptional regulation is central to understanding biology in health and disease. Transcription is a dynamic process, occurring in discrete bursts of activity that can be characterized by two kinetic parameters: burst frequency, describing how often genes burst, and burst size, describing how many transcripts are generated in each burst. Genes are under strict regulatory control by distinct sequences in the genome as well as epigenetic modifications. To properly study how genetic and epigenetic factors affect transcription, it needs to be treated as the dynamic cellular process it is. In this thesis, I present the development of methods that allow identification of newly induced gene expression over short timescales, as well as inference of kinetic parameters describing how frequently genes burst and how many transcripts each burst gives rise to. The work is presented through four papers. In Paper I, I describe the development of a novel method for profiling newly transcribed RNA molecules. We use this method to show that therapeutic compounds affecting different epigenetic enzymes elicit distinct, compound-specific responses, mediated by different sets of transcription factors, after only one hour of treatment; these responses can only be detected when measuring newly transcribed RNA. The goal of Paper II is to determine how genetic variation shapes transcriptional bursting. To this end, we infer transcriptome-wide burst kinetics parameters from genetically distinct donors and find variation that selectively affects burst sizes and frequencies. Paper III describes a method for inferring transcriptional kinetics transcriptome-wide using single-cell RNA-sequencing. We use this method to describe how the regulation of transcriptional bursting is encoded in the genome. Our findings show that gene-specific burst sizes are dependent on core promoter architecture and that enhancers affect burst frequencies. Furthermore, cell-type-specific differential gene expression is regulated by cell-type-specific burst frequencies. Lastly, Paper IV shows how transcription shapes cell types. We collect data on cellular morphologies and electrophysiological characteristics, and measure gene expression in the same neurons collected from the mouse motor cortex. Our findings show that cells belonging to the same, distinct transcriptomic families have distinct and non-overlapping morpho-electric characteristics. Within families, there is continuous and correlated variation in all modalities, challenging the notion of cell types as discrete entities.
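
    The two-parameter bursting picture described above can be illustrated with a minimal simulation: bursts fire at a given frequency, each contributes a geometrically distributed number of transcripts, and transcripts decay. All rates here are hypothetical, and this is not the inference method from Paper III:

```python
# A minimal simulation of transcriptional bursting: bursts arrive at rate
# burst_freq, each adds a geometric number of transcripts (mean burst size),
# and transcripts decay at a first-order rate. All rates are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
burst_freq, mean_burst_size, decay = 0.5, 8.0, 0.1  # per unit time
dt, steps, mrna = 0.01, 100_000, 0

for _ in range(steps):
    if rng.random() < burst_freq * dt:                # a burst fires
        mrna += rng.geometric(1.0 / mean_burst_size)  # geometric burst size
    mrna -= rng.binomial(mrna, decay * dt)            # first-order decay

# The steady-state mean approaches burst_freq * mean_burst_size / decay = 40.
print(mrna)
```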

    Fairness Testing: A Comprehensive Survey and Analysis of Trends

    Full text link
    Unfair behaviors of Machine Learning (ML) software have garnered increasing attention and concern among software engineers. To tackle this issue, extensive research has been dedicated to conducting fairness testing of ML software, and this paper offers a comprehensive survey of existing studies in this field. We collect 100 papers and organize them based on the testing workflow (i.e., how to test) and testing components (i.e., what to test). Furthermore, we analyze the research focus, trends, and promising directions in the realm of fairness testing. We also identify widely adopted datasets and open-source tools for fairness testing.
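
    As a minimal sketch of one widely adopted fairness-testing check, demographic parity, which compares positive-prediction rates across groups (the data here is hypothetical):

```python
# A minimal demographic-parity check on hypothetical model predictions:
# compare the rate of positive predictions between two groups.
import numpy as np

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])                  # model outputs
group  = np.array(["a", "a", "a", "b", "b", "b", "b", "b"])  # group labels

rates = {g: y_pred[group == g].mean() for g in np.unique(group)}
parity_gap = abs(rates["a"] - rates["b"])  # 0 means perfect parity
print(rates, parity_gap)
```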

    Generalizable deep learning based medical image segmentation

    Get PDF
    Deep learning is revolutionizing medical image analysis and interpretation. However, its real-world deployment is often hindered by poor generalization to unseen domains (new imaging modalities and protocols). This lack of generalization ability is further exacerbated by the scarcity of labeled datasets for training: data collection and annotation can be prohibitively expensive in terms of labor and costs, because label quality depends heavily on the expertise of radiologists. Additionally, unreliable predictions caused by poor model generalization pose safety risks to clinical downstream applications. To mitigate labeling requirements, we investigate and develop a series of techniques to strengthen the generalization ability and the data efficiency of deep medical image computing models. We further improve model accountability and identify unreliable predictions made on out-of-domain data by designing probability calibration techniques. In the first and second parts of the thesis, we discuss two types of problems for handling unexpected domains: unsupervised domain adaptation and single-source domain generalization. For domain adaptation, we present a data-efficient technique that adapts a segmentation model trained on a labeled source domain (e.g., MRI) to an unlabeled target domain (e.g., CT), using a small number of unlabeled training images from the target domain. For domain generalization, we focus on both image reconstruction and segmentation. For image reconstruction, we design a simple and effective domain generalization technique for cross-domain MRI reconstruction by reusing image representations learned from natural image datasets. For image segmentation, we perform causal analysis of the challenging cross-domain image segmentation problem. Guided by this causal analysis, we propose an effective data-augmentation-based generalization technique for single-source domains. The proposed method outperforms existing approaches on a large variety of cross-domain image segmentation scenarios. In the third part of the thesis, we present a novel self-supervised method for learning generic image representations that can be used to analyze unexpected objects of interest. The proposed method is designed together with a novel few-shot image segmentation framework that can segment unseen objects of interest by taking only a few labeled examples as references. Our few-shot framework demonstrates superior flexibility over conventional fully supervised models: it does not require any fine-tuning on novel objects of interest. We further build a publicly available comprehensive evaluation environment for few-shot medical image segmentation. In the fourth part of the thesis, we present a novel probability calibration model. To ensure safety in clinical settings, a deep model is expected to be able to alert human radiologists when it has low confidence, especially when confronted with out-of-domain data. To this end, we present a plug-and-play model to calibrate prediction probabilities on out-of-domain data. It brings the prediction probability in line with the actual accuracy on the test data. We evaluate our method on both artifact-corrupted images and images from an unforeseen MRI scanning protocol. Our method demonstrates improved calibration accuracy compared with the state-of-the-art method. Finally, we summarize the major contributions and limitations of our work and suggest future research directions that will benefit from the works in this thesis.
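
    As a minimal sketch of post-hoc probability calibration, here is temperature scaling, one common baseline; the thesis proposes its own plug-and-play calibration model, which this does not reproduce. Logits and labels are hypothetical:

```python
# A minimal sketch of temperature scaling: fit a single scalar T on held-out
# data so that softmax(logits / T) matches observed accuracy more closely.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import log_softmax

rng = np.random.default_rng(0)
logits = rng.normal(size=(100, 3))     # hypothetical model outputs
labels = rng.integers(0, 3, size=100)  # hypothetical ground truth

def nll(temperature: float) -> float:
    """Negative log-likelihood of labels under temperature-scaled softmax."""
    logp = log_softmax(logits / temperature, axis=1)
    return -logp[np.arange(len(labels)), labels].mean()

# The fitted T is then reused to rescale logits at test time.
t_opt = minimize_scalar(nll, bounds=(0.05, 10.0), method="bounded").x
print(t_opt)
```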