
    Model-based cognitive neuroscience

    This special issue explores the growing intersection between mathematical psychology and cognitive neuroscience. Mathematical psychology, and cognitive modeling more generally, has a rich history of formalizing and testing hypotheses about cognitive mechanisms within a mathematical and computational language, making exquisite predictions of how people perceive, learn, remember, and decide. Cognitive neuroscience aims to identify neural mechanisms associated with key aspects of cognition using techniques like neurophysiology, electrophysiology, and structural and functional brain imaging. These two fields come together in a powerful new approach called model-based cognitive neuroscience, which can both inform cognitive modeling and help to interpret neural measures. Cognitive models decompose complex behavior into representations and processes, and these latent model states can be used to explain the modulation of brain states under different experimental conditions. Reciprocally, neural measures provide data that help constrain cognitive models and adjudicate between competing cognitive models that make similar predictions about behavior. As examples, brain measures are related to cognitive model parameters fitted to individual participant data, measures of brain dynamics are related to measures of model dynamics, model parameters are constrained by neural measures, model parameters or model states are used in statistical analyses of neural data, or neural and behavioral data are analyzed jointly within a hierarchical modeling framework. We provide an introduction to the field of model-based cognitive neuroscience and to the articles contained within this special issue.
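
    As a concrete illustration of one of the strategies listed above (relating brain measures to cognitive model parameters fitted to individual participant data), the sketch below simulates choice behaviour from a simple softmax decision model, fits an inverse-temperature parameter per participant by maximum likelihood, and correlates the fitted parameters with a per-participant neural measure. The softmax model, the neural_measure variable, and all numeric settings are illustrative assumptions, not material from the special issue.

```python
# Minimal sketch: fit a cognitive-model parameter per participant, then relate it
# to a hypothetical neural measure. All values are simulated for illustration.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_participants, n_trials = 40, 200
value_diff = rng.normal(0.0, 1.0, size=(n_participants, n_trials))  # option A minus option B
true_beta = rng.uniform(0.5, 3.0, size=n_participants)              # inverse temperature

# Simulate choices from a softmax (logistic) decision rule
p_choose_a = 1.0 / (1.0 + np.exp(-true_beta[:, None] * value_diff))
choices = rng.random((n_participants, n_trials)) < p_choose_a

def neg_log_lik(beta, vd, ch):
    """Negative log-likelihood of one participant's choices under the softmax model."""
    p = np.clip(1.0 / (1.0 + np.exp(-beta * vd)), 1e-9, 1 - 1e-9)
    return -np.sum(np.where(ch, np.log(p), np.log(1 - p)))

# Fit the inverse-temperature parameter separately for each participant
fitted_beta = np.array([
    minimize_scalar(neg_log_lik, bounds=(0.01, 10.0), method="bounded",
                    args=(value_diff[i], choices[i])).x
    for i in range(n_participants)
])

# Hypothetical per-participant neural measure that tracks the latent parameter
neural_measure = 0.6 * true_beta + rng.normal(0.0, 0.5, size=n_participants)

r, p = pearsonr(fitted_beta, neural_measure)
print(f"correlation between fitted parameter and neural measure: r={r:.2f}, p={p:.3f}")
```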

    Adjusting for multiple prognostic factors in the analysis of randomised trials

    Background: When multiple prognostic factors are adjusted for in the analysis of a randomised trial, it is unclear (1) whether it is necessary to account for each of the strata formed by all combinations of the prognostic factors (stratified analysis) when randomisation has been balanced within each stratum (stratified randomisation), or whether adjusting for the main effects alone will suffice, and (2) which method of adjustment is best in terms of type I error rate and power, irrespective of the randomisation method. Methods: We used simulation to (1) determine whether a stratified analysis is necessary after stratified randomisation, and (2) compare different methods of adjustment in terms of power and type I error rate. We considered the following methods of analysis: adjusting for covariates in a regression model, adjusting for each stratum using either fixed or random effects, and Mantel-Haenszel or a stratified Cox model depending on the outcome. Results: A stratified analysis is required after stratified randomisation to maintain correct type I error rates when (a) there are strong interactions between prognostic factors, and (b) there are approximately equal numbers of patients in each stratum. However, simulations based on real trial data found that type I error rates were unaffected by the method of analysis (stratified vs unstratified), indicating these conditions were not met in real datasets. Comparison of different analysis methods found that with small sample sizes and a binary or time-to-event outcome, most analysis methods led to either inflated type I error rates or a reduction in power; the lone exception was a stratified analysis using random effects for strata, which gave nominal type I error rates and adequate power. Conclusions: It is unlikely that a stratified analysis is necessary after stratified randomisation except in extreme scenarios. Therefore, the method of analysis (accounting for the strata, or adjusting only for the covariates) will not generally need to depend on the method of randomisation used. Most methods of analysis work well with large sample sizes; however, treating strata as random effects should be the analysis method of choice with binary or time-to-event outcomes and a small sample size.
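
    A minimal simulation sketch of the comparison described above, under stated assumptions: after stratified randomisation on two binary prognostic factors with an interaction between them, the treatment effect is tested under the null using either a main-effects adjustment or a stratum fixed-effects (stratified) adjustment, and the empirical type I error of each is recorded. The sample size, effect sizes, and interaction strength are illustrative, and logistic regression stands in for the wider set of methods compared in the paper.

```python
# Sketch: empirical type I error of main-effects vs stratified adjustment after
# stratified randomisation, under the null. All settings are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n, n_sims, alpha = 400, 300, 0.05
reject = {"main_effects": 0, "stratum_fixed_effects": 0}

for _ in range(n_sims):
    f1 = rng.integers(0, 2, n)
    f2 = rng.integers(0, 2, n)
    stratum = 2 * f1 + f2
    # Stratified randomisation: 1:1 allocation within each stratum
    treat = np.empty(n, dtype=int)
    for s in range(4):
        idx = rng.permutation(np.where(stratum == s)[0])
        treat[idx[: len(idx) // 2]] = 1
        treat[idx[len(idx) // 2:]] = 0
    # Outcome under the null (no treatment effect), with an f1 x f2 interaction
    logit_p = -0.5 + 1.0 * f1 + 1.0 * f2 - 1.5 * f1 * f2
    y = rng.random(n) < 1 / (1 + np.exp(-logit_p))
    df = pd.DataFrame({"y": y.astype(int), "treat": treat, "f1": f1, "f2": f2,
                       "stratum": stratum})

    main = smf.logit("y ~ treat + f1 + f2", data=df).fit(disp=0)
    strat = smf.logit("y ~ treat + C(stratum)", data=df).fit(disp=0)
    reject["main_effects"] += main.pvalues["treat"] < alpha
    reject["stratum_fixed_effects"] += strat.pvalues["treat"] < alpha

for method, count in reject.items():
    print(f"{method}: empirical type I error = {count / n_sims:.3f}")
```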

    A re-randomisation design for clinical trials

    Background: Recruitment to clinical trials is often problematic, with many trials failing to recruit to their target sample size. As a result, patient care may be based on suboptimal evidence from underpowered trials or non-randomised studies. Methods: For many conditions, patients will require treatment on several occasions, for example, to treat symptoms of an underlying chronic condition (such as migraines, where treatment is required each time a new episode occurs), or until they achieve treatment success (such as fertility, where patients undergo treatment on multiple occasions until they become pregnant). We describe a re-randomisation design for these scenarios, which allows each patient to be independently randomised on multiple occasions. We discuss the circumstances in which this design can be used. Results: The re-randomisation design will give asymptotically unbiased estimates of treatment effect and correct type I error rates under the following conditions: (a) patients are only re-randomised after the follow-up period from their previous randomisation is complete; (b) randomisations for the same patient are performed independently; and (c) the treatment effect is constant across all randomisations. Provided the analysis accounts for correlation between observations from the same patient, this design will typically have higher power than a parallel group trial with an equivalent number of observations. Conclusions: If used appropriately, the re-randomisation design can increase the recruitment rate for clinical trials while still providing an unbiased estimate of treatment effect and correct type I error rates. In many situations it can increase power compared to a parallel group design with an equivalent number of observations.
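
    The sketch below illustrates, on simulated data, the kind of analysis the abstract describes for a re-randomisation design: each patient is independently randomised for every new treatment episode, and the analysis accounts for correlation between observations from the same patient, here via a GEE with an exchangeable working correlation and robust standard errors. The continuous outcome, the episode counts, and all numeric values are illustrative assumptions.

```python
# Sketch: re-randomisation data with within-patient correlation, analysed with a
# GEE clustered on patient. All data are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_patients = 150
rows = []
for pid in range(n_patients):
    n_episodes = rng.integers(1, 4)          # 1-3 treatment episodes per patient
    patient_effect = rng.normal(0, 1.0)      # induces within-patient correlation
    for ep in range(n_episodes):
        treat = rng.integers(0, 2)           # independent randomisation at each episode
        y = 0.5 * treat + patient_effect + rng.normal(0, 1.0)  # constant treatment effect
        rows.append({"patient": pid, "episode": ep, "treat": treat, "y": y})
df = pd.DataFrame(rows)

# GEE with an exchangeable working correlation, standard errors clustered on patient
model = smf.gee("y ~ treat", groups="patient", data=df,
                cov_struct=sm.cov_struct.Exchangeable(),
                family=sm.families.Gaussian())
result = model.fit()
print(result.summary())
```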

    An unusual adrenal cause of hypoglycaemia


    A comparison of methods to adjust for continuous covariates in the analysis of randomised trials

    BACKGROUND: Although covariate adjustment in the analysis of randomised trials can be beneficial, adjustment for continuous covariates is complicated by the fact that the association between covariate and outcome must be specified. Misspecification of this association can lead to reduced power and potentially incorrect conclusions regarding treatment efficacy. METHODS: We compared several methods of adjustment to determine which is best when the association between covariate and outcome is unknown. We assessed (a) dichotomisation or categorisation; (b) assuming a linear association with outcome; (c) using fractional polynomials with one (FP1) or two (FP2) polynomial terms; and (d) using restricted cubic splines with 3 or 5 knots. We evaluated each method using simulation and through a re-analysis of trial datasets. RESULTS: Methods which kept covariates as continuous typically had higher power than methods which used categorisation. Dichotomisation, categorisation, and assuming a linear association all led to large reductions in power when the true association was non-linear. FP2 models and restricted cubic splines with 3 or 5 knots performed best overall. CONCLUSIONS: For the analysis of randomised trials we recommend (1) adjusting for continuous covariates even if their association with outcome is unknown; (2) keeping covariates as continuous; and (3) using fractional polynomials with two polynomial terms or restricted cubic splines with 3 to 5 knots when a linear association is in doubt.
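
    A minimal sketch of the recommendation above, under illustrative assumptions: keep a continuous baseline covariate continuous and model its possibly non-linear association with outcome flexibly. Patsy's cubic regression spline basis, cr(), is used here as a stand-in for restricted cubic splines, with a simple linear adjustment shown alongside for comparison; the quadratic true association and all numbers are invented for illustration.

```python
# Sketch: adjust for a continuous covariate with a flexible spline basis vs a
# linear term when the true association is non-linear. Simulated data only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 500
age = rng.uniform(20, 80, n)                     # continuous baseline covariate
treat = rng.integers(0, 2, n)
# True covariate-outcome association is non-linear (quadratic); treatment effect = 0.4
y = 0.4 * treat + 0.002 * (age - 50) ** 2 + rng.normal(0, 1.0, n)
df = pd.DataFrame({"y": y, "treat": treat, "age": age})

linear = smf.ols("y ~ treat + age", data=df).fit()            # assumes a linear association
spline = smf.ols("y ~ treat + cr(age, df=4)", data=df).fit()  # flexible spline adjustment

for name, res in [("linear adjustment", linear), ("spline adjustment", spline)]:
    print(f"{name}: treatment estimate = {res.params['treat']:.3f} "
          f"(SE {res.bse['treat']:.3f})")
```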

    Assessing potential sources of clustering in individually randomised trials

    Recent reviews have shown that while clustering is extremely common in individually randomised trials (for example, clustering within centre, therapist, or surgeon), it is rarely accounted for in the trial analysis. Our aim is to develop a general framework for assessing whether potential sources of clustering must be accounted for in the trial analysis to obtain valid type I error rates (non-ignorable clustering), with a particular focus on individually randomised trials.
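
    The sketch below simulates one potential source of clustering of the kind this framework addresses: patients are individually randomised but then treated by one of a small number of therapists delivering their allocated arm, so outcomes cluster within therapist. A naive analysis that ignores the therapist level is compared with a mixed model that includes a random therapist intercept. The design, cluster sizes, and variance components are illustrative assumptions, not part of the framework itself.

```python
# Sketch: therapist-level clustering in an individually randomised trial,
# analysed naively vs with a random therapist intercept. Simulated data only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
rows = []
for arm in (0, 1):                        # 0 = control, 1 = intervention
    for t in range(5):                    # five therapists deliver each arm
        therapist_effect = rng.normal(0, 0.7)
        for _ in range(20):               # individually randomised patients seen by this therapist
            y = therapist_effect + rng.normal(0, 1.0)   # null treatment effect
            rows.append({"therapist": f"{arm}_{t}", "treat": arm, "y": y})
df = pd.DataFrame(rows)

naive = smf.ols("y ~ treat", data=df).fit()                              # ignores clustering
mixed = smf.mixedlm("y ~ treat", data=df, groups=df["therapist"]).fit()  # random therapist intercept

print(f"naive OLS:   SE = {naive.bse['treat']:.3f}, p = {naive.pvalues['treat']:.3f}")
print(f"mixed model: SE = {mixed.bse['treat']:.3f}, p = {mixed.pvalues['treat']:.3f}")
```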

    A novel real-world ecotoxicological dataset of pelagic microbial community responses to wastewater.

    Real-world observational datasets that record and quantify pressure-stressor-response linkages between effluent discharges and natural aquatic systems are rare. With global wastewater volumes increasing at unprecedented rates, datasets such as this one are urgently needed to provide information about microbial community structure and functioning. Field studies were performed at two time points in the Austral summer. Single-species and microbial community whole effluent toxicity (WET) testing was performed at a complete range of effluent concentrations and two salinities, with accompanying environmental data, to provide new insights into nutrient and organic matter cycling and to identify ecotoxicological tipping points. The two salinity regimes were chosen to investigate future scenarios based on a predicted salinity increase at the study site, typical of coastal regions with rising sea levels globally. Flow cytometry, amplicon sequencing of 16S and 18S rRNA genes, and microfluidic quantitative polymerase chain reaction (MFQPCR) were used to determine chlorophyll-a and total bacterial cell numbers and size, as well as the taxonomic and functional diversity of pelagic microbial communities. This strong pilot dataset could be replicated in other regions globally and would be of high value to scientists and engineers in supporting the next advances in microbial ecotoxicology, environmental biomonitoring, and estuarine water quality modelling.
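
    As a purely hypothetical illustration of one analysis such a dataset could support, the sketch below fits a four-parameter log-logistic concentration-response curve to a normalised microbial endpoint across an effluent gradient to estimate an EC50-style tipping point. The concentrations, responses, and model choice are invented for illustration and are not values from the dataset.

```python
# Sketch: log-logistic concentration-response fit to a hypothetical WET endpoint.
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical effluent concentrations (% effluent) and endpoint responses
# normalised to the 0% effluent control.
conc = np.array([0.5, 1.0, 3.0, 10.0, 30.0, 60.0, 100.0])
response = np.array([1.00, 0.98, 0.93, 0.75, 0.42, 0.20, 0.10])

def log_logistic(c, bottom, top, ec50, hill):
    """Four-parameter log-logistic concentration-response model."""
    return bottom + (top - bottom) / (1.0 + (c / ec50) ** hill)

params, _ = curve_fit(log_logistic, conc, response,
                      p0=[0.1, 1.0, 20.0, 1.0],
                      bounds=([0.0, 0.0, 0.1, 0.1], [1.0, 2.0, 200.0, 10.0]))
bottom, top, ec50, hill = params
print(f"estimated EC50 = {ec50:.1f}% effluent (Hill slope {hill:.2f})")
```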

    Agreement was moderate between data-based and opinion-based assessments of biases affecting randomised trials within meta-analyses

    BACKGROUND: Randomised trials included in meta-analyses are often affected by bias caused by methodological flaws or limitations, but the degree of bias is unknown. Two proposed methods adjust trial results for bias using: (1) empirical evidence from published meta-epidemiological studies; or (2) expert opinion. METHODS: We investigated agreement between data-based and opinion-based approaches to assessing bias in each of four domains: sequence generation, allocation concealment, blinding, and incomplete outcome data. From each sampled meta-analysis, a pair of trials with the highest and lowest empirical model-based bias estimates was selected. Independent assessors were asked which trial within each pair was more biased, on the basis of detailed trial design summaries. RESULTS: Assessors judged trials to be equally biased in 68% of pairs evaluated. When assessors judged one trial as more biased, the proportion of judgements agreeing with the model-based ranking was highest for allocation concealment (79%) and blinding (79%) and lower for sequence generation (59%) and incomplete outcome data (56%). CONCLUSIONS: Most trial pairs found to be discrepant empirically were judged to be equally biased by assessors. We found moderate agreement between opinion-based and data-based evidence in pairs where assessors ranked one trial as more biased.
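
    A small, hypothetical sketch of the agreement summary reported above: for each sampled trial pair, an empirical (model-based) ranking of which trial is more biased is compared with an assessor's opinion-based judgement ("A", "B", or "equal"), giving the share of "equal" judgements and, among decided pairs, the per-domain agreement with the model-based ranking. The example records are invented for illustration only.

```python
# Sketch: summarising agreement between model-based rankings and assessor
# judgements across trial pairs. Records are invented for illustration.
import pandas as pd

pairs = pd.DataFrame({
    "domain":     ["allocation concealment", "allocation concealment", "blinding",
                   "sequence generation", "incomplete outcome data", "blinding"],
    "model_rank": ["A", "A", "B", "A", "B", "A"],   # trial ranked more biased by the model
    "assessor":   ["A", "equal", "B", "B", "equal", "A"],
})

equal_share = (pairs["assessor"] == "equal").mean()
decided = pairs[pairs["assessor"] != "equal"]
agreement = (decided["assessor"] == decided["model_rank"]).groupby(decided["domain"]).mean()

print(f"assessors judged trials equally biased in {equal_share:.0%} of pairs")
print("agreement with the model-based ranking, by domain (decided pairs only):")
print(agreement)
```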

    Neurospora from natural populations: population genomics insights into the life history of a model microbial eukaryote

    The ascomycete filamentous fungus Neurospora crassa played a historic role in experimental biology and became a model system for genetic research. Stimulated by a systematic effort to collect wild strains initiated by Stanford geneticist David Perkins, the genus Neurospora has also become a basic model for the study of evolutionary processes, speciation, and population biology. In this chapter, we will first trace the history that brought Neurospora into the era of population genomics. We will then cover the major contributions of population genomic investigations using Neurospora to our understanding of microbial biogeography and speciation, and review recent work using population genomics and genome-wide association mapping that illustrates the unique potential of Neurospora as a model for identifying the genetic basis of (potentially adaptive) phenotypes in filamentous fungi. The advent of population genomics has helped to firmly establish Neurospora as a complete model system, and we hope our review will entice biologists to include Neurospora in their research.
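
    As a hypothetical sketch of the genome-wide association mapping idea mentioned above, the code below simulates haploid genotypes and a binary phenotype for a panel of strains, tests each biallelic marker for association with Fisher's exact test, and applies a Bonferroni correction. Strain numbers, marker counts, and effect sizes are invented for illustration only.

```python
# Sketch: single-marker association testing in a haploid panel with a
# Bonferroni threshold. All genotypes and phenotypes are simulated.
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(5)
n_strains, n_markers = 100, 500
genotypes = rng.integers(0, 2, size=(n_strains, n_markers))   # haploid 0/1 alleles
causal = 10                                                    # index of the truly associated marker
prob = 0.2 + 0.6 * genotypes[:, causal]                        # phenotype probability by causal allele
phenotype = (rng.random(n_strains) < prob).astype(int)

p_values = np.empty(n_markers)
for m in range(n_markers):
    # 2x2 table of allele (rows) by phenotype (columns)
    table = [[int(np.sum((genotypes[:, m] == a) & (phenotype == p))) for p in (0, 1)]
             for a in (0, 1)]
    p_values[m] = fisher_exact(table)[1]

threshold = 0.05 / n_markers                                   # Bonferroni correction
hits = np.where(p_values < threshold)[0]
print(f"markers passing the Bonferroni threshold: {hits.tolist()}")
```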