
    Comparison of whole-genome bisulfite sequencing library preparation strategies identifies sources of biases affecting DNA methylation data.

    BACKGROUND: Whole-genome bisulfite sequencing (WGBS) is becoming an increasingly accessible technique, used widely for both fundamental and disease-oriented research. Library preparation methods benefit from a variety of available kits, polymerases and bisulfite conversion protocols. Although some steps in the procedure, such as PCR amplification, are known to introduce biases, a systematic evaluation of biases in WGBS strategies is missing. RESULTS: We perform a comparative analysis of several commonly used pre- and post-bisulfite WGBS library preparation protocols for their performance and quality of sequencing outputs. Our results show that bisulfite conversion per se is the main trigger of pronounced sequencing biases, and PCR amplification builds on these underlying artefacts. The majority of standard library preparation methods yield a significantly biased sequence output and overestimate global methylation. Importantly, both absolute and relative methylation levels at specific genomic regions vary substantially between methods, with clear implications for DNA methylation studies. CONCLUSIONS: We show that amplification-free library preparation is the least biased approach for WGBS. In protocols with amplification, the choice of bisulfite conversion protocol or polymerase can significantly minimize artefacts. To aid with the quality assessment of existing WGBS datasets, we have integrated a bias diagnostic tool in the Bismark package and offer several approaches for consideration during the preparation and analysis of WGBS datasets. This work was supported by the Biotechnology and Biological Sciences Research Council (CASE studentship to N.O., BB/K010867/1 to W.R.), Wellcome Trust (095645/Z/11/Z to W.R.), EU EpiGeneSys (257082 to W.R.) and EU BLUEPRINT (282510 to W.R.); Babraham Institute/Cambridge European Trust scholarship to N.O.; M.R.B. is a Sir Henry Dale Fellow (101225/Z/13/Z), jointly funded by the Wellcome Trust and the Royal Society.
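    The kind of diagnostic the abstract describes can be illustrated with a minimal sketch. This is not the Bismark tool itself; it is a hypothetical M-bias-style computation on synthetic call counts, where the methylation fraction is computed at each read position and spikes at the read ends would hint at library-preparation bias.

    ```python
    # Hedged sketch with synthetic counts (not the paper's tool or data):
    # compute the methylation fraction per read position; a roughly flat
    # profile is expected, while deviations at the read ends suggest bias.

    def mbias_profile(meth_counts, unmeth_counts):
        """Per-read-position methylation fraction from call counts."""
        profile = []
        for m, u in zip(meth_counts, unmeth_counts):
            total = m + u
            profile.append(m / total if total else float("nan"))
        return profile

    # Synthetic example: inflated methylation at the first two positions,
    # flat (~40%) elsewhere.
    meth = [90, 80, 40, 41, 39, 40]
    unmeth = [10, 20, 60, 59, 61, 60]
    profile = mbias_profile(meth, unmeth)
    ```

    In this toy example the first positions would stand out against the flat baseline, which is the visual cue such diagnostics rely on.
    
    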

    Describing the longitudinal course of major depression using Markov models: Data integration across three national surveys

    BACKGROUND: Most epidemiological studies of major depression report period prevalence estimates. These are of limited utility in characterizing the longitudinal epidemiology of this condition. Markov models provide a methodological framework for increasing the utility of epidemiological data. Markov models relating incidence and recovery to major depression prevalence have been described in a series of prior papers. In this paper, the models are extended to describe the longitudinal course of the disorder. METHODS: Data from three national surveys conducted by the Canadian national statistical agency (Statistics Canada) were used in this analysis. These data were integrated using a Markov model. Incidence, recurrence and recovery were represented as weekly transition probabilities. Model parameters were calibrated to the survey estimates. RESULTS: The population was divided into three categories: low, moderate and high recurrence groups. The size of each category was approximated using lifetime data from a study using the WHO Mental Health Composite International Diagnostic Interview (WMH-CIDI). Consistent with previous work, transition probabilities reflecting recovery were high in the initial weeks of the episodes, and declined by a fixed proportion with each passing week. CONCLUSION: Markov models provide a framework for integrating psychiatric epidemiological data. Previous studies have illustrated the utility of Markov models for decomposing prevalence into its various determinants: incidence, recovery and mortality. This study extends the Markov approach by distinguishing several recurrence categories
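    The modelling approach the abstract outlines can be sketched in a few lines. The parameters below are hypothetical placeholders, not the study's calibrated values; the sketch only shows the mechanics of a discrete-time Markov model with weekly transition probabilities and a recovery probability that declines by a fixed proportion with each passing week of an episode.

    ```python
    # Minimal sketch (hypothetical parameters) of a weekly Markov model:
    # incidence moves people from "well" into week 1 of an episode, and the
    # weekly recovery probability falls by a fixed proportion per episode-week.

    def simulate(weeks=520, incidence=0.001, r0=0.20, decline=0.95):
        """Return point prevalence of depression after `weeks` steps."""
        well = 1.0
        episode = []  # episode[w] = fraction in week w of an episode
        for _ in range(weeks):
            new_cases = well * incidence
            recovered = 0.0
            next_episode = [new_cases]
            for w, frac in enumerate(episode):
                r = r0 * decline ** w  # recovery declines each episode-week
                recovered += frac * r
                next_episode.append(frac * (1.0 - r))
            well = well - new_cases + recovered
            episode = next_episode
        return sum(episode)

    prevalence = simulate()
    ```

    Under these toy parameters, prevalence accumulates from zero toward a steady state set jointly by incidence and the declining recovery schedule.
    
    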

    Distinct Functions of Period2 and Period3 in the Mouse Circadian System Revealed by In Vitro Analysis

    The mammalian circadian system, which is composed of a master pacemaker in the suprachiasmatic nuclei (SCN) as well as other oscillators in the brain and peripheral tissues, controls daily rhythms of behavior and physiology. Lesions of the SCN abolish circadian rhythms of locomotor activity and transplants of fetal SCN tissue restore rhythmic behavior with the periodicity of the donor's genotype, suggesting that the SCN determines the period of the circadian behavioral rhythm. According to the model of timekeeping in the SCN, the Period (Per) genes are important elements of the transcriptional/translational feedback loops that generate the endogenous circadian rhythm. Previous studies have investigated the functions of the Per genes by examining locomotor activity in mice lacking functional PERIOD proteins. Variable behavioral phenotypes were observed depending on the line and genetic background of the mice. In the current study we assessed both wheel-running activity and Per1-promoter-driven luciferase expression (Per1-luc) in cultured SCN, pituitary, and lung explants from Per2−/− and Per3−/− mice congenic with the C57BL/6J strain. We found that the Per2−/− phenotype is enhanced in vitro compared to in vivo, such that the period of Per1-luc expression in Per2−/− SCN explants is 1.5 hours shorter than in Per2+/+ SCN, while the free-running period of wheel-running activity is only 11 minutes shorter in Per2−/− compared to Per2+/+ mice. In contrast, circadian rhythms in SCN explants from Per3−/− mice do not differ from Per3+/+ mice. Instead, the period and phase of Per1-luc expression are significantly altered in Per3−/− pituitary and lung explants compared to Per3+/+ mice. Taken together these data suggest that the function of each Per gene may differ between tissues. Per2 appears to be important for period determination in the SCN, while Per3 participates in timekeeping in the pituitary and lung

    How victim age affects the context and timing of child sexual abuse: applying the routine activities approach to the first sexual abuse incident

    The aim of this study was to examine from the routine activities approach how victim age might help to explain the timing, context and nature of offenders’ first known contact sexual abuse incident. One-hundred adult male child sexual abusers (M = 45.8 years, SD = 12.2; range = 20–84) were surveyed about the first time they had sexual contact with a child. Afternoon and early evening (between 3 pm and 9 pm) was the most common time in which sexual contact first occurred. Most incidents occurred in a home. Two-thirds of incidents occurred when another person was in close proximity, usually elsewhere in the home. Older victims were more likely to be sexually abused by someone outside their families and in the later hours of the day compared to younger victims. Proximity of another person (adult and/or child) appeared to have little effect on offenders’ decisions to abuse, although it had some impact on the level of intrusion and duration of these incidents. Overall, the findings lend support to the application of the routine activities approach for considering how contextual risk factors (i.e., the timing and relationship context) change as children age, and raise questions about how to best conceptualize guardianship in the context of child sexual abuse. These factors should be key considerations when devising and implementing sexual abuse prevention strategies and for informing theory development

    Networked buffering: a basic mechanism for distributed robustness in complex adaptive systems

    A generic mechanism - networked buffering - is proposed for the generation of robust traits in complex systems. It requires two basic conditions to be satisfied: 1) agents are versatile enough to perform more than one single functional role within a system and 2) agents are degenerate, i.e. there exists partial overlap in the functional capabilities of agents. Given these prerequisites, degenerate systems can readily produce a distributed systemic response to local perturbations. Reciprocally, excess resources related to a single function can indirectly support multiple unrelated functions within a degenerate system. In models of genome:proteome mappings for which localized decision-making and modularity of genetic functions are assumed, we verify that such distributed compensatory effects cause enhanced robustness of system traits. The conditions needed for networked buffering to occur are neither demanding nor rare, supporting the conjecture that degeneracy may fundamentally underpin distributed robustness within several biotic and abiotic systems. For instance, networked buffering offers new insights into systems engineering and planning activities that occur under high uncertainty. It may also help explain recent developments in understanding the origins of resilience within complex ecosystems.

    Origin of acidic surface waters and the evolution of atmospheric chemistry on early Mars

    Observations from in situ experiments and planetary orbiters have shown that the sedimentary rocks found at Meridiani Planum, Mars were formed in the presence of acidic surface waters. The water was thought to be brought to the surface by groundwater upwelling, and may represent the last vestiges of the widespread occurrence of liquid water on Mars. However, it is unclear why the surface waters were acidic. Here we use geochemical calculations, constrained by chemical and mineralogical data from the Mars Exploration Rover Opportunity, to show that Fe oxidation and the precipitation of oxidized iron (Fe^(3+)) minerals generate excess acid with respect to the amount of base anions available in the rocks present in outcrop. We suggest that subsurface waters of near-neutral pH and rich in Fe^(2+) were rapidly acidified as iron was oxidized on exposure to O_2 or photo-oxidized by ultraviolet radiation at the martian surface. Temporal variation in surface acidity would have been controlled by the availability of liquid water, and as such, low-pH fluids could be a natural consequence of the aridification of the martian surface. Finally, because iron oxidation at Meridiani would have generated large amounts of gaseous H_2, ultimately derived from the reduction of H_2O, we conclude that surface geochemical processes would have affected the redox state of the early martian atmosphere

    Simulation studies of age-specific lifetime major depression prevalence

    BACKGROUND: The lifetime prevalence (LTP) of Major Depressive Disorder (MDD) is the proportion of a population having met criteria for MDD during their life up to the time of assessment. Expectation holds that LTP should increase with age, but this has not usually been observed. Instead, LTP typically increases in the teenage years and twenties, stabilizes in adulthood and then begins to decline in middle age. Proposed explanations for this pattern include: a cohort effect (increasing incidence in more recent birth cohorts), recall failure and/or differential mortality. Declining age-specific incidence may also play a role. METHODS: We used a simulation model to explore patterns of incidence, recall and mortality in relation to the observed pattern of LTP. Lifetime prevalence estimates from the 2002 Canadian Community Health Survey, Mental Health and Wellbeing (CCHS 1.2) were used for model validation and calibration. RESULTS: Incidence rates predicting realistic values for LTP in the 15-24 year age group (where mortality is unlikely to substantially influence prevalence) lead to excessive LTP later in life, given reasonable assumptions about mortality and recall failure. This suggests that (in the absence of cohort effects) incidence rates decline with age. Differential mortality may make a contribution to the prevalence pattern, but only in older age categories. Cohort effects can explain the observed pattern, but only if recent birth cohorts have a much higher (approximately 10-fold greater) risk and if incidence has increased with successive birth cohorts over the past 60-70 years. CONCLUSIONS: The pattern of lifetime prevalence observed in cross-sectional epidemiologic studies seems most plausibly explained by incidence that declines with age and where some respondents fail to recall past episodes. A cohort effect is not a necessary interpretation of the observed pattern of age-specific lifetime prevalence
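    The mechanism proposed in the conclusions can be illustrated with a toy cohort simulation. All parameters below are hypothetical, not the study's calibrated values; the sketch only shows how annual incidence that declines with age, combined with a small yearly recall-failure rate, can make lifetime prevalence (LTP) rise in early adulthood and then fall.

    ```python
    # Hypothetical illustration of the mechanism described: LTP can plateau
    # or decline with age if incidence falls with age and a fraction of past
    # cases are no longer recalled each year.

    def ltp_by_age(max_age=80, base_incidence=0.02,
                   incidence_decline=0.96, recall_loss=0.02):
        never, recalled = 1.0, 0.0
        curve = {}
        for age in range(15, max_age + 1):
            inc = base_incidence * incidence_decline ** (age - 15)
            new = never * inc        # first onsets this year
            never -= new
            # all recalled cases (old and new) are partially forgotten
            recalled = (recalled + new) * (1.0 - recall_loss)
            curve[age] = recalled    # observed (recalled) LTP at this age
        return curve

    curve = ltp_by_age()
    ```

    With these placeholder rates, observed LTP increases through the twenties and declines in later life, reproducing the qualitative cross-sectional pattern without any cohort effect.
    
    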

    Non-fatal disease burden for subtypes of depressive disorder: population-based epidemiological study

    Background: Major depression is the leading cause of non-fatal disease burden. Because major depression is not a homogeneous condition, this study estimated the non-fatal disease burden for mild, moderate and severe depression, in both single-episode and recurrent depression. All estimates were assessed from an individual and a population perspective, and presented both as unadjusted, raw estimates and as estimates adjusted for comorbidity. Methods: We used data from the first wave of the second Netherlands Mental Health Survey and Incidence Study (NEMESIS-2, n = 6646; single-episode Diagnostic and Statistical Manual (DSM)-IV depression, n = 115; recurrent depression, n = 246). Disease burden from an individual perspective was assessed as 'disability weight * time spent in depression' for each person in the dataset. From a population perspective it was assessed as 'disability weight * time spent in depression * number of people affected'. The presence of mental disorders was assessed with the Composite International Diagnostic Interview (CIDI) 3.0. Results: Single depressive episodes emerged as a key driver of disease burden from an individual perspective; from a population perspective, recurrent depression emerged as a key driver. These findings remained unaltered after adjusting for comorbidity. Conclusions: The burden of disease differs between subtypes of depression and depends heavily on the choice of perspective. The distinction between an individual and a population perspective may help to avoid misunderstandings between policy makers and clinicians. © 2016 Biesheuvel-Leliefeld et al.
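    The two formulas quoted in the abstract are simple products, which a short sketch can make concrete. The numbers below are toy values chosen for illustration, not figures from the study.

    ```python
    # Toy numbers (not from the study) illustrating the two burden measures:
    # individual burden = disability weight * time spent in depression;
    # population burden additionally multiplies by the number affected.

    def individual_burden(disability_weight, years_in_episode):
        return disability_weight * years_in_episode

    def population_burden(disability_weight, years_in_episode, n_affected):
        return individual_burden(disability_weight, years_in_episode) * n_affected

    # A severe year-long episode weighs heavily per person...
    single = individual_burden(0.65, 1.0)
    # ...while many shorter recurrent episodes can dominate at population level.
    recurrent = population_burden(0.40, 0.3, 250_000)
    ```

    This product structure is why the ranking of subtypes can flip between the individual and the population perspective: multiplying by the number of people affected favours the more common, recurrent form.
    
    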

    The course of untreated anxiety and depression, and determinants of poor one-year outcome: a one-year cohort study

    BACKGROUND: Little is known about the course and outcome of untreated anxiety and depression in patients with and without a self-perceived need for care. The aim of the present study was to examine the one-year course of untreated anxiety and depression, and to determine predictors of a poor outcome. METHOD: Baseline and one-year follow-up data were used from 594 primary care patients with current anxiety or depressive disorders at baseline (established by the Composite International Diagnostic Interview (CIDI)), from the Netherlands Study of Depression and Anxiety (NESDA). Receipt of and need for care were assessed by the Perceived Need for Care Questionnaire (PNCQ). RESULTS: In depression, treated and untreated patients with a perceived treatment need showed more rapid symptom decline but greater symptom severity at follow-up than untreated patients without a self-perceived mental problem or treatment need. A lower education level, lower income, unemployment, loneliness, less social support, perceived need for care, number of somatic disorders, a comorbid anxiety and depressive disorder and symptom severity at baseline predicted a poorer outcome in both anxiety and depression. When all variables were considered together, only baseline symptom severity predicted a poorer outcome in anxiety. In depression, a poorer outcome was also predicted by greater loneliness and a comorbid anxiety and depressive disorder. CONCLUSION: In clinical practice, special attention should be paid to exploring the need for care among possible risk groups (e.g. low socioeconomic status, low social support) and to supporting them in making an informed decision on whether or not to seek treatment.

    Outcomes for depression and anxiety in primary care and details of treatment: a naturalistic longitudinal study

    BACKGROUND: There is little evidence as to whether guideline-concordant care in general practice results in better clinical outcomes for people with anxiety and depression. This study aims to determine possible associations between guideline-concordant care and clinical outcomes in general practice patients with depression and anxiety, and to identify patient and treatment characteristics associated with clinical improvement. METHODS: This study forms part of the Netherlands Study of Depression and Anxiety (NESDA). Adult patients, recruited in general practice (67 GPs), were interviewed to assess DSM-IV diagnoses during the baseline assessment of NESDA, and also completed questionnaires measuring symptom severity, received care, socio-demographic variables and social support, both at baseline and 12 months later. The definition of guideline adherence was based on an algorithm applied to the care received; information on guideline adherence was obtained from GP medical records. RESULTS: 721 patients with a current (6-month recency) anxiety or depressive disorder participated. Although patients who received guideline-concordant care (N = 281) suffered from more severe symptoms than patients who received non-guideline-concordant care (N = 440), both groups showed equal improvement in their depressive or anxiety symptoms after 12 months. Patients who still had moderate or severe symptoms at follow-up were more often unemployed, and had smaller personal networks and more severe depressive symptoms at baseline, than patients with mild symptoms at follow-up. The particular type of treatment followed made no difference to clinical outcomes. CONCLUSION: The added value of guideline-concordant care could not be demonstrated in this study. Symptom severity, employment status, social support and comorbidity of anxiety and depression all play a role in poor clinical outcomes.