    Tetris and Word games lead to fewer intrusive memories when applied several days after analogue trauma.

    Background: Intrusive trauma memories are a key symptom of posttraumatic stress disorder (PTSD), so disrupting their recurrence is highly important. Intrusion development was hindered by visuospatial interventions administered up to 24 hours after analogue trauma. It is unknown whether interventions can be applied later, and whether modality or working-memory load is the crucial factor. Objectives: This study tested: (1) whether a visuospatial task would lead to fewer intrusions than a reactivation-only group when applied after memory reactivation four days after analogue trauma exposure (extended replication), (2) whether both tasks (i.e. one intended to be visuospatial, one more verbal) would lead to fewer intrusions than the reactivation-only group (intervention effect), and (3) whether supposed task modality (visuospatial or verbal) is a critical component (modality effect). Method: Fifty-four participants were randomly assigned to reactivation+Tetris (visuospatial), reactivation+Word games (verbal), or reactivation-only (no task). They watched an aversive film (day 0) and recorded intrusive memories of the film in diary A. On day 4, the memory was reactivated, after which participants played Tetris, played Word games, or had no task for 10 minutes. They then kept a second diary (B). Informative hypotheses were evaluated using Bayes factors. Results: Reactivation+Tetris and reactivation+Word games resulted in relatively fewer intrusions from the last day of diary A to the first day of diary B than reactivation-only (objectives 1 and 2). Thus, both tasks were effective even when applied days after analogue trauma; reactivation alone was not. Reactivation+Word games appeared to result in fewer intrusions than reactivation+Tetris (objective 3; modality effect), but this evidence was weak. Explorative analyses showed that Word games were more difficult than Tetris. Conclusions: Applying a task four days after the trauma film (during memory reconsolidation) was effective. The modality versus working-memory load question remains inconclusive.
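
    As a minimal illustration of the kind of Bayes factor used here, the sketch below evaluates an informative hypothesis (mean intrusions lower after reactivation+Tetris than after reactivation-only) with the encompassing approach: posterior support for the constraint divided by prior support. The data, group sizes, and normal approximation are all invented; this is not the study's analysis code.

    ```r
    ## Minimal sketch: Bayes factor for an order constraint via the
    ## encompassing approach. All numbers are hypothetical.
    set.seed(1)
    n <- 18                                    # invented per-group sample size
    y_tetris  <- rpois(n, lambda = 2)          # invented intrusion counts
    y_control <- rpois(n, lambda = 4)

    # Approximate posteriors for the group means (vague prior, normal approx.)
    draws <- 1e5
    post_tetris  <- rnorm(draws, mean(y_tetris),  sd(y_tetris)  / sqrt(n))
    post_control <- rnorm(draws, mean(y_control), sd(y_control) / sqrt(n))

    # H1: mu_tetris < mu_control. Fit = posterior mass agreeing with H1;
    # complexity = prior mass agreeing with H1 (1/2 under a symmetric prior).
    fit        <- mean(post_tetris < post_control)
    complexity <- 0.5
    BF_1u <- fit / complexity                  # BF of H1 vs unconstrained model
    BF_1u
    ```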

    Predictive power of wastewater for nowcasting infectious disease transmission: A retrospective case study of five sewershed areas in Louisville, Kentucky

    Background: Epidemiological nowcasting traditionally relies on count surveillance data. The availability and quality of such count data may vary over time, limiting how well they represent true infections. Wastewater data correlate with traditional surveillance data and may provide additional value for nowcasting disease trends. Methods: We obtained SARS-CoV-2 case, death, wastewater, and serosurvey data for Jefferson County, Kentucky (USA), between August 2020 and March 2021, and parameterized an existing nowcasting model using combinations of these data. We assessed the predictive performance and variability at the sewershed level and compared the effects of adding wastewater data to, or replacing, case and death reports. Findings: Adding wastewater data minimally improved the predictive performance of nowcasts compared to a model fitted to case and death data (Weighted Interval Score (WIS) 0.208 versus 0.223), and reduced the predictive performance compared to a model fitted to deaths data alone (WIS 0.517 versus 0.500). Adding wastewater data to deaths data improved the agreement of nowcasts with estimates from models using cases and deaths data. These findings were consistent across individual sewersheds as well as for models fit to the aggregated total data of five sewersheds. Retrospective reconstructions of epidemiological dynamics created using different combinations of data were in general agreement (coverage >75%). Interpretation: These findings show wastewater data may be valuable for infectious disease nowcasting when clinical surveillance data are absent, such as early in a pandemic or in low-resource settings where systematic collection of epidemiologic data is difficult.
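
    For context, the Weighted Interval Score referenced above is a standard quantile-based scoring rule for probabilistic forecasts (Bracher et al., 2021); lower is better. The sketch below computes it for a single toy nowcast; the observation and intervals are invented, not taken from the paper.

    ```r
    ## Minimal sketch of the Weighted Interval Score (WIS); toy numbers only.
    ## Interval score for a central (1 - alpha) prediction interval [lower, upper]
    interval_score <- function(y, lower, upper, alpha) {
      (upper - lower) +
        (2 / alpha) * (lower - y) * (y < lower) +
        (2 / alpha) * (y - upper) * (y > upper)
    }

    # WIS over K intervals plus the median, with weights 1/2 and alpha_k / 2
    wis <- function(y, med, lower, upper, alphas) {
      K <- length(alphas)
      is_vals <- mapply(interval_score, lower = lower, upper = upper,
                        alpha = alphas, MoreArgs = list(y = y))
      (0.5 * abs(y - med) + sum((alphas / 2) * is_vals)) / (K + 0.5)
    }

    # Toy nowcast: observed 120 cases, median 100, 50% and 90% intervals
    wis(y = 120, med = 100,
        lower = c(85, 60), upper = c(115, 140), alphas = c(0.5, 0.1))
    ```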

    Figuring out what they feel: Exposure to eudaimonic narrative fiction is related to mentalizing ability

    Being exposed to narrative fiction may provide us with practice in dealing with social interactions and thereby enhance our ability to engage in mentalizing (understanding other people’s mental states). The current study uses a confirmatory Bayesian approach to assess the relationship between mentalizing and both the self-reported frequency of exposure to narrative fiction across media (books, films, and TV series) and the particular types of fiction that are consumed (eudaimonic vs. hedonic). This study focuses on this relationship in children and adolescents, because they are still developing their social abilities. Exposure to narrative fiction may thus be particularly important in providing this age group with input on how to interpret other people’s mental states. In our study, we find no evidence for a simple relationship between overall frequency of narrative fiction exposure and mentalizing ability in this age group. However, exposure to eudaimonic narrative fiction is consistently positively related to mentalizing and, for some media types and aspects of mentalizing, more strongly so than exposure to hedonic narrative fiction. No evidence was obtained to suggest that there are any differential effects related to the medium of the narrative fiction exposure (written vs. visual).

    The latest update on Bayesian informative hypothesis testing

    With the increased use of Bayesian informative hypothesis testing, practical, philosophical and methodological questions arise. This dissertation addresses a few of these questions. One step in the research cycle is to collect data for hypothesis testing. The amount of data required to answer a research question depends on the cost of drawing wrong conclusions. The link between sample size, power and error probabilities is well-researched in the NHST framework. In Bayesian statistics research this relationship is less discussed, and the value of power and unconditional error probabilities is debated. Chapter 2 presents four sample size determination methods for informative hypothesis testing by means of Bayes factors. The value of power and (un)conditional error probabilities and their link with sample size for Bayesian hypothesis tests are discussed. Another step in the research cycle is to translate the results from a statistical analysis into a conclusion. The analysis should match the research question to provide a sensible conclusion. Many hypothesis tests concern the presence and direction of *population* effects. However, in practice the conclusions from these hypothesis tests are often drawn at the *individual* level. For example, after analyzing the effectiveness of a medication in the population, it is prescribed to individuals. The average effect does not imply the medicine works for all individuals. In many situations the main interest is in the individual effects rather than the population effects. Chapters 3 and 4 describe how Bayesian hypothesis testing can be used to synthesize the results from multiple individual analyses. Bayesian statistics can be used to continuously add data and sequentially update knowledge about population effects. This process is called updating. Alternatively, data from multiple individuals can be analyzed separately and combined to learn about the homogeneity (similarity) of individual effects. Chapter 3 presents the methodology and Chapter 4 is a hands-on description of how to execute such an analysis. For Chapter 2 an R package has been developed, and for Chapter 3 an R Shiny application has been developed. Both pieces of software are presented in Chapter 6. Chapter 5 discusses the updating cycle in Bayesian statistics and focuses on the starting point of an updating cycle. The information in a Bayes factor is useful to describe how we can update our knowledge. However, knowing the rate at which the relative belief in two hypotheses changes is meaningless if the starting point is unknown. Chapter 5 therefore discusses the importance of prior probabilities and how to specify them for a set of hypotheses. Chapters 7 and 8 present applied research where informative hypotheses are tested with Bayes factors. These are examples of research that is commonly analyzed with NHST, and they illustrate the possibilities of informative hypothesis testing. In Chapter 7 informative hypotheses are formulated to analyze the data from a repeated measures experiment. Chapter 8 evaluates the presence of a mediated effect at the individual level by means of Bayesian informative hypothesis tests.
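
    The updating logic sketched here can be made concrete in a few lines: prior model probabilities set the starting point, and Bayes factors from successive studies multiply into posterior odds. The prior probabilities and per-study Bayes factors below are invented for illustration.

    ```r
    ## Minimal sketch of sequential updating with Bayes factors; the prior
    ## probabilities and per-study Bayes factors are invented.
    priors <- c(H1 = 0.5, H2 = 0.5)       # starting point: prior probabilities

    bf_h1_vs_h2 <- c(3.2, 1.8, 2.5)       # BF(H1 vs H2) from three studies

    # Posterior odds = (product of Bayes factors) * prior odds
    cum_bf    <- cumprod(bf_h1_vs_h2)
    post_odds <- cum_bf * (priors["H1"] / priors["H2"])

    # Convert odds to posterior probabilities for H1 after each study
    post_prob_h1 <- post_odds / (1 + post_odds)
    round(post_prob_h1, 3)                # belief in H1 grows as evidence accrues
    ```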

    Capturing Ordinal Theoretical Constraint in Psychological Science

    Most theories in the social sciences are verbal and provide ordinal-level predictions for data. For example, a theory might predict that performance is better in one condition than another, but not by how much. One way of gaining additional specificity is to posit many ordinal constraints that hold simultaneously. For example, a theory might predict an effect in one condition, a larger effect in another, and none in a third. We show how common theoretical positions naturally lead to multiple ordinal constraints. To assess whether multiple ordinal constraints hold in data, we adopt a Bayesian model comparison approach. The result is an inferential system that is custom-tuned for the way social scientists conceptualize theory, and that is more intuitive and informative than current linear-model approaches.
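
    A hedged sketch of how such a joint ordinal constraint can be assessed by Bayesian model comparison: under a symmetric prior, each ordering of three condition means is equally likely a priori (probability 1/6), so the Bayes factor against the unconstrained model is the posterior probability of the predicted ordering divided by 1/6. The data and normal approximation below are invented.

    ```r
    ## Minimal sketch: Bayes factor for the joint constraint mu1 < mu2 < mu3
    ## via the encompassing approach; all data are invented.
    set.seed(2)
    n <- 30
    y1 <- rnorm(n, 0.0); y2 <- rnorm(n, 0.3); y3 <- rnorm(n, 0.6)

    # Approximate posteriors for the three condition means
    draws <- 1e5
    post <- cbind(rnorm(draws, mean(y1), sd(y1) / sqrt(n)),
                  rnorm(draws, mean(y2), sd(y2) / sqrt(n)),
                  rnorm(draws, mean(y3), sd(y3) / sqrt(n)))

    fit        <- mean(post[, 1] < post[, 2] & post[, 2] < post[, 3])
    complexity <- 1 / 6        # prior mass on this ordering under symmetry
    BF_order_u <- fit / complexity
    BF_order_u                 # > 1 favors the ordinal theory
    ```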

    Bayes factor vs. Posterior-Predictive Model Assessment: Insights from Ordinal Constraints

    A central element of statistical inference is good model specification, where researchers specify models that capture differing theoretical positions. We argue that methods of inference that force researchers to use models that may not be appropriate for their research question are less desirable than methods with no such constraints. We ask how posterior-predictive model assessment methods such as WAIC and LOO-CV perform when theoretical positions correspond to different restrictions on a common parameter space. One of the main theoretical relations is nesting, where the parameter space of one model is a subset of that of another. A good example is a general model that admits any set of preferences; a nested model is one that admits only preferences that obey transitivity. We find that posterior-predictive methods fail in these cases: more constrained models are not favored even when data are compatible with the constraint. Researchers who use posterior-predictive methods are forced to partition the parameter space into non-overlapping subspaces, even if these subspaces have no theoretical interpretation. Fortunately, Bayes factor model comparison accommodates overlapping models without such difficulties. We argue that, because posterior-predictive approaches force specifications that may not be ideal for scientific questions, they are less desirable in many contexts.
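
    The nesting problem can be seen in a small conjugate example: a model with theta restricted to (0, 0.5) is nested in one with theta free on (0, 1). When the data respect the restriction, the two models make nearly identical posterior predictions, so predictive criteria like WAIC or LOO-CV barely separate them, while the Bayes factor rewards the sharper constraint. The counts below are invented.

    ```r
    ## Minimal sketch: marginal likelihoods for nested beta-binomial models.
    ## Invented data: 14 successes in 40 trials (consistent with theta < 0.5).
    k <- 14; n <- 40

    # Marginal likelihood under a uniform prior on theta over (lo, hi)
    marglik <- function(k, n, lo, hi) {
      integrate(function(th) dbinom(k, n, th) / (hi - lo), lo, hi)$value
    }

    m_unconstrained <- marglik(k, n, 0, 1)     # theta anywhere in (0, 1)
    m_constrained   <- marglik(k, n, 0, 0.5)   # nested model: theta < 0.5

    BF_c_u <- m_constrained / m_unconstrained  # close to 2: parsimony rewarded
    BF_c_u
    ```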

    All for one or some for all? Evaluating informative hypotheses using multiple N = 1 studies

    Analyses are mostly executed at the population level, whereas in many applications the interest is in the individual rather than the population. In this paper, multiple N = 1 experiments are considered, where participants perform multiple trials with a dichotomous outcome in various conditions. Expectations with respect to the performance of participants can be translated into so-called informative hypotheses. These hypotheses can be evaluated for each participant separately using Bayes factors. A Bayes factor expresses the relative evidence for two hypotheses based on the data of one individual. This paper proposes to “average” these individual Bayes factors in the gP-BF, the average relative evidence. The gP-BF can be used to determine whether one hypothesis is preferred over another for all individuals under investigation. This measure provides insight into whether the relative preference for a hypothesis from a pre-defined set is homogeneous over individuals. Two additional measures are proposed to support the interpretation of the gP-BF: the evidence rate (ER), the proportion of individual Bayes factors that support the same hypothesis as the gP-BF, and the stability rate (SR), the proportion of individual Bayes factors that express stronger support than the gP-BF. These three statistics can be used to determine the relative support in the data for the informative hypotheses entertained. Software is available to execute the approach proposed in this paper and to determine the sensitivity of the outcomes with respect to the number of participants and within-condition replications.
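
    A small sketch of these three summaries, assuming (per one common reading of the proposal) that the gP-BF is the geometric mean of the individual Bayes factors; the per-participant Bayes factors below are invented.

    ```r
    ## Minimal sketch of gP-BF, ER, and SR; individual BFs are invented,
    ## and gP-BF is implemented here as a geometric mean (an assumption).
    bf <- c(4.2, 3.1, 0.8, 6.5, 2.2, 1.7, 5.0, 0.6)   # BF(H1 vs H2) per person

    gp_bf <- prod(bf)^(1 / length(bf))    # "average" relative evidence

    # Evidence rate: share of individuals whose BF points the same way as gP-BF
    er <- mean((bf > 1) == (gp_bf > 1))

    # Stability rate: share of individual BFs stronger than the gP-BF itself
    sr <- if (gp_bf > 1) mean(bf > gp_bf) else mean(bf < gp_bf)

    c(gP_BF = gp_bf, ER = er, SR = sr)
    ```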
