    Bayes Factors for Mixed Models: A Discussion

    van Doorn et al. (2021) outlined various questions that arise when conducting Bayesian model comparison for mixed effects models. Seven response articles offered their own perspectives on the preferred setup for mixed model comparison, on the most appropriate specification of prior distributions, and on the desirability of default recommendations. This article presents a round-table discussion that aims to clarify outstanding issues, explore common ground, and outline practical considerations for any researcher wishing to conduct a Bayesian mixed effects model comparison.
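
    The quantity at issue in this discussion is a ratio of marginal likelihoods for a mixed model with and without the effect of interest. The sketch below is a minimal illustration and not from the article itself: it assumes a known-variance Gaussian random-intercept design (all names and settings are illustrative) and estimates the Bayes factor for one fixed effect by Monte Carlo integration over the prior.

        # Toy Bayes factor for one fixed effect in a random-intercept model
        # y_ij = mu + b_i + e_ij. Illustrative assumptions throughout: with a
        # balanced design and known variance components, the by-subject means
        # are sufficient for mu, so each marginal likelihood reduces to a
        # one-dimensional integral over mu, estimated here by Monte Carlo.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        n_sub, n_trial, mu_true, tau, sigma = 30, 16, 0.3, 0.5, 1.0

        b = rng.normal(0, tau, n_sub)                          # random intercepts
        y = mu_true + b[:, None] + rng.normal(0, sigma, (n_sub, n_trial))
        ybar = y.mean(axis=1)                                  # sufficient for mu
        sd_ybar = np.sqrt(tau**2 + sigma**2 / n_trial)         # known-variance toy

        def log_marginal(ybar, sd, prior_sd=None, n_draws=100_000):
            """Log marginal likelihood; prior_sd=None means H0 (mu fixed at 0)."""
            if prior_sd is None:
                return stats.norm.logpdf(ybar, 0.0, sd).sum()
            mu = rng.normal(0.0, prior_sd, n_draws)            # draws from the prior
            ll = stats.norm.logpdf(ybar[:, None], mu[None, :], sd).sum(axis=0)
            return np.logaddexp.reduce(ll) - np.log(n_draws)

        log_bf10 = log_marginal(ybar, sd_ybar, prior_sd=1.0) - log_marginal(ybar, sd_ybar)
        print(f"BF10 ~= {np.exp(log_bf10):.2f}")               # evidence for the effect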

    HHEX is a transcriptional regulator of the VEGFC/FLT4/PROX1 signaling axis during vascular development.

    Formation of the lymphatic system requires the coordinated expression of several key regulators: vascular endothelial growth factor C (VEGFC), its receptor FLT4, and a key transcriptional effector, PROX1. Yet, how expression of these signaling components is regulated remains poorly understood. Here, using a combination of genetic and molecular approaches, we identify the transcription factor hematopoietically expressed homeobox (HHEX) as an upstream regulator of VEGFC, FLT4, and PROX1 during angiogenic sprouting and lymphatic formation in vertebrates. By analyzing zebrafish mutants, we found that hhex is necessary for sprouting angiogenesis from the posterior cardinal vein, a process required for lymphangiogenesis. Furthermore, studies of mammalian HHEX using tissue-specific genetic deletions in mouse and knockdowns in cultured human endothelial cells reveal its highly conserved function during vascular and lymphatic development. Our findings that HHEX is essential for the regulation of the VEGFC/FLT4/PROX1 axis provide insights into the molecular regulation of lymphangiogenesis.

    Asymmetric division coordinates collective cell migration in angiogenesis

    The asymmetric division of stem or progenitor cells generates daughters with distinct fates and regulates cell diversity during tissue morphogenesis. However, roles for asymmetric division in other, more dynamic morphogenetic processes, such as cell migration, have not previously been described. Here we combine zebrafish in vivo experimental and computational approaches to reveal that heterogeneity introduced by asymmetric division generates multicellular polarity that drives coordinated collective cell migration in angiogenesis. We find that asymmetric positioning of the mitotic spindle during endothelial tip cell division generates daughters of distinct size with discrete ‘tip’ or ‘stalk’ thresholds of pro-migratory Vegfr signalling. Consequently, post-mitotic Vegfr asymmetry drives Dll4/Notch-independent self-organization of daughters into leading tip or trailing stalk cells, and disruption of asymmetry randomizes daughter tip/stalk selection. Thus, asymmetric division seamlessly integrates cell proliferation with collective migration, and, as such, may facilitate growth of other collectively migrating tissues during development, regeneration and cancer invasion.

    Data aggregation can lead to biased inferences in Bayesian linear mixed models

    Bayesian linear mixed-effects models are increasingly being used in the cognitive sciences to perform null hypothesis tests, where a null hypothesis that an effect is zero is compared with an alternative hypothesis that the effect exists and is different from zero. While software tools for Bayes factor null hypothesis tests are easily accessible, how to specify the data and the model correctly is often not clear. In Bayesian approaches, many authors recommend data aggregation at the by-subject level and running Bayes factors on aggregated data. Here, we use simulation-based calibration for model inference to demonstrate that null hypothesis tests can yield biased Bayes factors when computed from aggregated data. Specifically, when random slope variances differ (i.e., the sphericity assumption is violated), Bayes factors are too conservative for contrasts where the variance is small and too liberal for contrasts where the variance is large. Moreover, Bayes factors for by-subject aggregated data are biased (too liberal) when random item variance is present but ignored in the analysis. We also perform corresponding frequentist analyses (type I and II error probabilities) to illustrate that the same problems exist and are well known from frequentist tools. These problems can be circumvented by running Bayesian linear mixed-effects models on non-aggregated data, such as individual trials, and by explicitly modeling the full random-effects structure. Reproducible code is available from https://osf.io/mjf47/.
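
    The core of the aggregation problem can be seen in a few lines. The sketch below mirrors the paper's frequentist argument under toy settings that are purely illustrative (none of the numbers come from the paper): when random item variance is present but items are averaged away, the same item effects repeat for every subject, and the false-positive rate of a by-subject paired t-test rises far above the nominal .05.

        # Frequentist mirror of the aggregation bias: items nested in
        # conditions, item effects shared across subjects, null effect true.
        # All settings are illustrative assumptions.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        n_sub, n_item, sd_item, sd_resid = 20, 10, 0.8, 1.0
        cond = np.repeat([0, 1], n_item // 2)    # items 0-4: cond A, 5-9: cond B

        n_sims, false_pos = 2000, 0
        for _ in range(n_sims):
            item_eff = rng.normal(0, sd_item, n_item)      # shared across subjects
            y = item_eff[None, :] + rng.normal(0, sd_resid, (n_sub, n_item))
            # aggregate to one mean per subject and condition, ignoring items
            m0 = y[:, cond == 0].mean(axis=1)
            m1 = y[:, cond == 1].mean(axis=1)
            false_pos += stats.ttest_rel(m1, m0).pvalue < 0.05

        print(f"Type I error with by-subject aggregation: {false_pos / n_sims:.3f}")
        # Nominal rate is .05; with item variance ignored it is far higher,
        # because chance differences between item sets look like an effect.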

    Sample size determination for Bayesian hierarchical models commonly used in psycholinguistics

    We discuss an important issue that is not directly related to the main theses of the van Doorn et al. (Computational Brain and Behavior, 2021) paper, but which frequently comes up when using Bayesian linear mixed models: how to determine sample size in advance of running a study when planning a Bayes factor analysis. We adapt a simulation-based method proposed by Wang and Gelfand (2002, Statistical Science, 193–208) for a Bayes factor-based design analysis, and demonstrate how relatively complex hierarchical models can be used to determine approximate sample sizes for planning experiments.
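
    The logic of this kind of design analysis can be sketched compactly: assume a plausible effect size and prior, simulate many data sets at each candidate sample size, and record how often the Bayes factor clears a preset evidence threshold. The known-variance Gaussian model below is an illustrative stand-in, not one of the hierarchical models used in the paper.

        # Simulation-based sample size planning for a Bayes factor analysis,
        # in the spirit of Wang and Gelfand (2002). Illustrative assumptions:
        # known-variance Gaussian data, so the BF based on the sample mean is exact.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        effect, sd, prior_sd, bf_crit = 0.4, 1.0, 1.0, 6.0

        def bf10(ybar, n):
            # H1: mu ~ Normal(0, prior_sd) => ybar ~ Normal(0, prior_sd^2 + sd^2/n)
            # H0: mu = 0                   => ybar ~ Normal(0, sd^2/n)
            m1 = stats.norm.pdf(ybar, 0, np.sqrt(prior_sd**2 + sd**2 / n))
            m0 = stats.norm.pdf(ybar, 0, np.sqrt(sd**2 / n))
            return m1 / m0

        for n in (20, 40, 80, 160):
            # sample means drawn directly from their sampling distribution
            ybars = rng.normal(effect, sd / np.sqrt(n), size=2000)
            power = np.mean(bf10(ybars, n) > bf_crit)
            print(f"n={n:4d}: P(BF10 > {bf_crit:g}) ~= {power:.2f}")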

    Workflow techniques for the robust use of Bayes factors

    Inferences about hypotheses are ubiquitous in the cognitive sciences. Bayes factors provide one general way to compare different hypotheses by their compatibility with the observed data, and those quantifications can then be used to choose between hypotheses. While Bayes factors provide an immediate approach to hypothesis testing, they are highly sensitive to details of the data and model assumptions. Moreover, it is not clear how straightforwardly this approach can be implemented in practice, and in particular how sensitive it is to the details of the computational implementation. Here, we investigate these questions for Bayes factor analyses in the cognitive sciences. We explain the statistics underlying Bayes factors as a tool for Bayesian inference and discuss why utility functions are needed for principled decisions on hypotheses. Next, we study how Bayes factors misbehave under different conditions, including errors in their estimation. Importantly, it is unknown whether Bayes factor estimates based on bridge sampling are unbiased for complex analyses; we are the first to use simulation-based calibration as a tool to test the accuracy of Bayes factor estimates. We further study how stable Bayes factors are across different MCMC draws and how they depend on variation in the data. We also examine the variability of decisions based on Bayes factors and how to optimize decisions using a utility function. We outline a Bayes factor workflow that researchers can use to study whether Bayes factors are robust for their individual analysis, and we illustrate this workflow with an example from the cognitive sciences. We hope that this study will provide a workflow to test the strengths and limitations of Bayes factors as a way to quantify evidence in support of scientific hypotheses. Reproducible code is available from https://osf.io/y354c/.
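
    One step of this workflow, checking estimation error, can be illustrated with a toy model: recompute the same Bayes factor several times from fresh Monte Carlo draws and inspect the spread of the estimates. The Gaussian model and the simple prior-draw estimator below are illustrative assumptions; the paper's own analyses concern bridge sampling on much richer models.

        # Stability check: re-estimate one Bayes factor under different seeds.
        # Toy known-variance Gaussian model; illustrative assumptions only.
        import numpy as np
        from scipy import stats

        y = np.random.default_rng(0).normal(0.2, 1.0, size=40)  # one fixed data set

        def log_bf10_once(y, seed, prior_sd=1.0, n_draws=20_000):
            rng = np.random.default_rng(seed)
            mu = rng.normal(0.0, prior_sd, n_draws)              # prior draws
            ll1 = stats.norm.logpdf(y[:, None], mu[None, :], 1.0).sum(axis=0)
            log_m1 = np.logaddexp.reduce(ll1) - np.log(n_draws)  # MC marginal, H1
            log_m0 = stats.norm.logpdf(y, 0.0, 1.0).sum()        # exact under H0
            return log_m1 - log_m0

        estimates = np.array([log_bf10_once(y, seed) for seed in range(20)])
        print("BF10 estimates:", np.exp(estimates).round(2))
        print(f"SD of log BF10 across reruns: {estimates.std():.4f}")
        # If this spread is non-negligible relative to the decision threshold,
        # increase n_draws (or improve the estimator) before interpreting the BF.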