    Negations in syllogistic reasoning: Evidence for a heuristic–analytic conflict

    An experiment using response time measures was conducted to test dominant processing strategies in syllogistic reasoning with the expanded quantifier set proposed by Roberts (2005). By adding negations to existing quantifiers it is possible to change problem surface features without altering logical validity. Accounts based on surface features, such as atmosphere, matching, and the probability heuristics model (PHM; Chater & Oaksford, 1999; Wetherick & Gilhooly, 1995), predict no variation in response latencies, but responses that are highly sensitive to changes in the surface features of the quantifiers. In contrast, according to analytic accounts such as mental models theory and mental logic (e.g., Johnson-Laird & Byrne, 1991; Rips, 1994), participants should show increased response times for negated premises but should not be strongly influenced by the surface features of the conclusion. The data indicated that the dominant response strategy was based on a matching heuristic, but also provided evidence of a resource-demanding analytic procedure for dealing with double negatives. The authors propose that dual-process theories offer a stronger account of these data, whereby participants employ competing heuristic and analytic strategies and fall back on a heuristic response when analytic processing fails.
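
    A minimal sketch of how a matching heuristic of this kind can be operationalised (the quantifier ordering and function below are illustrative assumptions, not taken from the paper): the predicted conclusion quantifier simply matches the most conservative quantifier in the premises, irrespective of logical validity.

```python
# Hypothetical sketch of a matching heuristic for syllogistic conclusions.
# The conservatism ordering is an assumption for illustration: quantifiers
# that assert less ("No", "Some...not") dominate those that assert more.
CONSERVATISM = {"No": 3, "Some...not": 2, "Some": 2, "All": 1}

def matching_prediction(premise_quantifiers):
    """Predict the conclusion quantifier as the most conservative
    quantifier among the premises, ignoring logical validity."""
    return max(premise_quantifiers, key=CONSERVATISM.get)

print(matching_prediction(["All", "Some"]))  # -> Some
print(matching_prediction(["Some", "No"]))   # -> No
```

    Because such a prediction depends only on the surface quantifiers, a purely heuristic responder of this kind would show no extra response time cost for negated premises, which is what distinguishes it from analytic accounts.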

    Employing a latent variable framework to improve efficiency in composite endpoint analysis.

    Composite endpoints that combine multiple outcomes on different scales are common in clinical trials, particularly in chronic conditions. In many of these cases, patients must cross a predefined responder threshold in each of the outcomes to be classed as a responder overall. One instance occurs in systemic lupus erythematosus, where the responder endpoint combines two continuous, one ordinal and one binary measure. The overall binary responder endpoint is typically analysed using logistic regression, resulting in a substantial loss of information. We propose a latent variable model for the systemic lupus erythematosus endpoint, which assumes that the discrete outcomes are manifestations of latent continuous measures and therefore allows the components of the composite to be modelled jointly. We perform a simulation study and find that the method offers large efficiency gains over the standard analysis, the magnitude of which is highly dependent on the components driving response. Bias is introduced when the joint normality assumptions are not satisfied, which we correct for using a bootstrap procedure. The method is applied to the Phase IIb MUSE trial in patients with moderate to severe systemic lupus erythematosus. We show that it estimates the treatment effect 2.5 times more precisely, offering a 60% reduction in required sample size.
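
    A minimal simulation sketch of the underlying idea, with invented effect sizes, correlations and thresholds (this is not the authors' implementation): each patient has correlated latent continuous traits, the observed components are treated as manifestations of those traits, and the binary responder endpoint is the conjunction of threshold crossings, which discards most of the information in the latent scales.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # patients per arm (hypothetical)

# Hypothetical setup: four correlated latent continuous outcomes.
effects = np.array([0.3, 0.3, 0.2, 0.2])       # assumed treatment effects
cov = 0.4 * np.ones((4, 4)) + 0.6 * np.eye(4)  # assumed correlation structure

control = rng.multivariate_normal(np.zeros(4), cov, size=n)
treated = rng.multivariate_normal(effects, cov, size=n)

# Responder = crossing a predefined threshold on every component.
thresholds = np.zeros(4)                       # assumed responder thresholds
resp_control = (control > thresholds).all(axis=1)
resp_treated = (treated > thresholds).all(axis=1)

# The standard analysis models only these binary indicators (e.g., by
# logistic regression); a latent variable analysis models the four
# underlying outcomes jointly and so retains the discarded information.
print(f"responder rate, control: {resp_control.mean():.2f}")
print(f"responder rate, treated: {resp_treated.mean():.2f}")
```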

    To add or not to add a new treatment arm to a multiarm study: A decision-theoretic framework.

    Multiarm clinical trials, which compare several experimental treatments against control, are frequently recommended because of their efficiency gains. In practice, not all potential treatments may be ready to be tested in a phase II/III trial at the same time, and it has become appealing to allow new treatment arms to be added to ongoing clinical trials using a "platform" trial approach. To the best of our knowledge, many aspects of when to add arms to an existing trial have not been explored in the literature. Most work on adding arms assumes that a new arm is opened whenever a new treatment becomes available. This strategy may prolong the overall duration of a study or reduce the marginal power of each hypothesis test if the adaptation is not well accommodated. Within a two-stage trial setting, we propose a decision-theoretic framework for deciding whether to add a new treatment arm based on the observed stage one treatment responses. To account for the different aims of multiarm studies, we define utility in two ways: one for a trial that aims to maximise the number of rejected hypotheses, and one for a trial that declares success when at least one hypothesis is rejected. Our framework shows that it is not always optimal to add a new treatment arm to an existing trial. We illustrate the framework with a case study based on a completed trial in knee osteoarthritis.
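
    The shape of such a decision rule can be sketched as follows, with all utilities and probabilities invented for illustration (the paper derives these quantities from the observed stage one responses): a new arm is added only when its expected utility, net of the power loss and added duration it imposes, exceeds that of continuing the trial unchanged.

```python
# Stylised sketch of the add-an-arm decision (all numbers hypothetical).

def expected_utility(p_reject_each, duration_penalty):
    """Utility for a trial aiming to maximise the expected number of
    rejected hypotheses, discounted by the cost of a longer study."""
    return sum(p_reject_each) - duration_penalty

# Option 1: continue with the two existing arms after stage one.
u_continue = expected_utility(p_reject_each=[0.80, 0.75], duration_penalty=0.0)

# Option 2: add a new arm, gaining a third chance of rejection but
# lowering marginal power (multiplicity) and prolonging the study.
u_add = expected_utility(p_reject_each=[0.72, 0.68, 0.55], duration_penalty=0.4)

print("add the new arm" if u_add > u_continue else "do not add")
```

    Under the second utility definition (success when at least one hypothesis is rejected), the sum would be replaced by the probability that any test rejects, but the comparison between the two options proceeds in the same way.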

    Explaining Evidence Denial as Motivated Pragmatically Rational Epistemic Irrationality

    This paper introduces a model of evidence denial that explains the behavior as a manifestation of rationality, based on the contention that social values (measurable as utilities) often underwrite such responses. In particular, it is contended that the value associated with group membership can override epistemic reason when the expected utility of a belief or belief system is great. However, it appears to remain possible for such unreasonable believers to reverse this sort of dogmatism and to change their beliefs in an epistemically rational way. The conjecture made here is that we should expect this to happen only when the expected utility of the beliefs in question dips below a threshold at which the utility of continued dogmatism and the associated group membership is no longer sufficient to motivate defusing the counter-evidence that tells against such epistemically irrational beliefs.
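
    As a toy numerical illustration of the threshold conjecture (all values invented, not drawn from the paper), belief revision is predicted only once the utility of group membership falls below the ongoing cost of defusing counter-evidence:

```python
# Toy illustration of the threshold conjecture (all values hypothetical).

def retains_dogmatic_belief(u_group_membership, cost_of_defusing_evidence):
    """Dogmatism persists while the utility of group membership still
    outweighs the ongoing cost of explaining away counter-evidence."""
    return u_group_membership > cost_of_defusing_evidence

print(retains_dogmatic_belief(5.0, 2.0))  # True: defusing remains worthwhile
print(retains_dogmatic_belief(1.5, 2.0))  # False: revision now expected
```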

    Prevention and management of chronic disease in Aboriginal and Islander Community Controlled Health Services in Queensland: a quality improvement study assessing change in selected clinical performance indicators over time in a cohort of services

    OBJECTIVE: To evaluate clinical healthcare performance in Aboriginal Medical Services in Queensland and to consider future directions in supporting improvement through measurement, target setting and standards development.
    DESIGN: Longitudinal study assessing baseline performance and improvements in service delivery, clinical care and selected outcomes against key performance indicators, 2009-2010.
    SETTING: 27 Aboriginal and Islander Community Controlled Health Services (AICCHSs) in Queensland that are members of the Queensland Aboriginal and Islander Health Council (QAIHC).
    PARTICIPANTS: 22 AICCHSs with medical clinics.
    INTERVENTION: Implementation and use of an electronic clinical information system that integrates with electronic health records, supported by the QAIHC quality improvement programme, the Close the Gap Collaborative.
    MAIN OUTCOME MEASURES: Proportion of patients with current recording of key healthcare activities, and the prevalence of risk factors and chronic disease.
    RESULTS: Aggregated performance was high on a number of key risk factors and healthcare activities, including assessment of tobacco use and management of hypertension, but low for others. Performance between services showed the greatest variation for care planning and health check activity.
    CONCLUSIONS: Data collected by the QAIHC health information system highlight the risk factor workload facing the AICCHSs in Queensland, demonstrating the need for ongoing support and workforce planning. Development of targets and weighting models is necessary to enable robust between-service comparisons of performance, which has implications for health reform initiatives in Australia. The limited information available suggests that although performance on key activities in the AICCHS sector has potential for improvement in some areas, it is nonetheless at a higher level than for mainstream providers.
    IMPLICATIONS: The work demonstrates the role that the Community Controlled sector can play in closing the gap in Aboriginal and Torres Strait Islander health outcomes by leading the use of clinical data to record and assess the quality of services and health outcomes.
    Office of Aboriginal and Torres Strait Islander Health, Department of Health and Ageing, Canberra, ACT, Australia.

    Sample size estimation using a latent variable model for mixed outcome co-primary, multiple primary and composite endpoints.

    Mixed outcome endpoints that combine multiple continuous and discrete components are often employed as primary outcome measures in clinical trials. These may take the form of co-primary endpoints, which conclude overall effectiveness only if an effect occurs in all of the components, or multiple primary endpoints, which require an effect in at least one of the components. Alternatively, the components may be combined to form a composite endpoint, which reduces the outcomes to a one-dimensional endpoint. There are many advantages to jointly modeling the individual outcomes; however, to do this in practice we require techniques for sample size estimation. In this article we show how the latent variable model can be used to estimate the joint endpoints, and we propose hypotheses, power calculations and sample size estimation methods for each. We illustrate the techniques using a numerical example based on a four-dimensional endpoint and find that the sample size required for the co-primary endpoint is larger than that required for the individual endpoint with the smallest effect size. Conversely, the sample size required in the multiple primary case is similar to that needed for the outcome with the largest effect size. We show that the empirical power is achieved for each endpoint and that the family-wise error rate (FWER) can be sufficiently controlled using a Bonferroni correction when the correlations between endpoints are less than 0.5; otherwise, less conservative adjustments may be needed. We further illustrate empirically the efficiency gains that may be achieved in the composite endpoint setting.
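
    A minimal power simulation sketch of the comparison, with assumed effect sizes and a common correlation below 0.5 (the article's calculations work through the latent variable model itself; the numbers here are illustrative): co-primary endpoints require every component to be significant, while multiple primary endpoints require at least one, with a Bonferroni adjustment to control the FWER.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, reps, alpha = 100, 2000, 0.05
effects = np.array([0.2, 0.3, 0.4, 0.5])       # hypothetical effect sizes
cov = 0.3 * np.ones((4, 4)) + 0.7 * np.eye(4)  # correlation 0.3 (< 0.5)

co_primary = multiple_primary = 0
for _ in range(reps):
    x = rng.multivariate_normal(np.zeros(4), cov, size=n)  # control arm
    y = rng.multivariate_normal(effects, cov, size=n)      # treatment arm
    p = np.array([stats.ttest_ind(y[:, j], x[:, j]).pvalue for j in range(4)])
    co_primary += (p < alpha).all()            # effect required in all four
    multiple_primary += (p < alpha / 4).any()  # Bonferroni: at least one

print(f"co-primary power:       {co_primary / reps:.2f}")
print(f"multiple-primary power: {multiple_primary / reps:.2f}")
```

    Consistent with the abstract, in such a simulation the co-primary power is governed by the smallest effect size and the multiple primary power by the largest.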

    The Search for Invariance: Repeated Positive Testing Serves the Goals of Causal Learning

    Positive testing is characteristic of exploratory behavior, yet it seems to be at odds with the aim of information seeking. After all, repeated demonstrations of one's current hypothesis often produce the same evidence and fail to distinguish it from potential alternatives. Research on the development of scientific reasoning and on adult rule learning has both documented and attempted to explain this behavior. The current chapter reviews this prior work and introduces a novel theoretical account, the Search for Invariance (SI) hypothesis, which suggests that producing multiple positive examples serves the goals of causal learning. This hypothesis draws on the interventionist framework of causal reasoning, which holds that causal learners are concerned with the invariance of candidate hypotheses. In a probabilistic and interdependent causal world, our primary goal is to determine whether, and in what contexts, our causal hypotheses provide accurate foundations for inference and intervention, not to disconfirm their alternatives. By recognizing the central role of invariance in causal learning, the phenomenon of positive testing may be reinterpreted as a rational information-seeking strategy.
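
    A small simulation sketch of the SI idea, with an invented generative model (not taken from the chapter): repeating the same positive test across different contexts estimates whether a candidate causal relation holds invariantly, information that testing alternatives would not directly provide.

```python
import random
random.seed(0)

# Hypothetical world: the intervention succeeds with a context-dependent
# probability; the learner asks whether the causal relation is invariant.
SUCCESS_PROB = {"home": 0.9, "lab": 0.9, "field": 0.3}

def intervene(context):
    return random.random() < SUCCESS_PROB[context]

# Repeated positive testing: the same intervention, many times, per context.
for context in ("home", "lab", "field"):
    successes = sum(intervene(context) for _ in range(50))
    print(f"{context:>5}: success rate {successes / 50:.2f}")

# A markedly lower rate in one context shows the hypothesis is not
# invariant there, telling the learner where it supports intervention.
```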