28 research outputs found

    Abundant variation in microsatellites of the parasitic nematode Trichostrongylus tenuis and linkage to a tandem repeat

    An understanding of how genes move between and within populations of parasitic nematodes is important in combating the evolution and spread of anthelmintic resistance. Much has been learned by studying mitochondrial DNA markers, but autosomal markers such as microsatellites have been applied to only a few nematode species, despite their many advantages for studying gene flow in eukaryotes. Here, we describe the isolation of 307 microsatellites from Trichostrongylus tenuis, an intestinal nematode of red grouse. High levels of variation were revealed at sixteen microsatellite loci (including three sex-linked loci) in 111 male T. tenuis nematodes collected from four hosts at a single grouse estate in Scotland (average He = 0.708; mean number of alleles = 12.2). A population genetic analysis detected no deviation from panmixia either between (F_ST = 0.00) or within hosts (F_IS = 0.015). We discuss the feasibility of developing microsatellites in parasitic nematodes and the problem of null alleles. We also describe a novel 146-bp repeat element, TteREP1, which is linked to two-thirds of the microsatellites sequenced and is associated with marker development failure. The sequence of TteREP1 is related to the TcREP class of repeats found in several other trichostrongyloid species, including Trichostrongylus colubriformis and Haemonchus contortus.
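As a toy illustration of the summary statistics quoted above (He and F_IS), the sketch below computes both from a handful of invented diploid genotypes at a single locus. This is not the authors' analysis pipeline, and the allele sizes are made up; real studies use dedicated estimators with sample-size corrections.

```python
from collections import Counter

def expected_heterozygosity(alleles):
    """He = 1 - sum of squared allele frequencies at one locus."""
    n = len(alleles)
    counts = Counter(alleles)
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def f_is(genotypes):
    """F_IS = 1 - Ho/He for a sample of diploid genotypes (a, b) tuples."""
    alleles = [a for g in genotypes for a in g]
    he = expected_heterozygosity(alleles)
    ho = sum(1 for a, b in genotypes if a != b) / len(genotypes)
    return 1.0 - ho / he

# Invented allele sizes (base pairs) for five diploid individuals.
genotypes = [(152, 158), (152, 152), (158, 160), (152, 160), (158, 158)]
he = expected_heterozygosity([a for g in genotypes for a in g])
fis = f_is(genotypes)
```

A value of F_IS near zero, as in the abstract, indicates that observed heterozygosity matches the panmictic expectation within hosts.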

    Vegetation structure and fire weather influence variation in burn severity and fuel consumption during peatland wildfires

    Temperate peatland wildfires are of significant environmental concern, but information on their environmental effects is lacking. We assessed variation in burn severity and fuel consumption within and between wildfires that burnt British moorlands in 2011 and 2012. We adapted the composite burn index for peatlands (pCBI) to provide semi-quantitative estimates of burn severity. Pre- and post-fire surface (shrubs and graminoids) and ground (litter, moss, duff) fuel loads associated with large wildfires were assessed using destructive sampling and analysed using a generalised linear mixed model (GLMM). Consumption during wildfires was compared with published estimates of consumption during prescribed burns. Burn severity and fuel consumption were related to fire weather, assessed using the Canadian Fire Weather Index System (FWI System), and to pre-fire vegetation type. pCBI varied 1.6-fold between, and up to 1.7-fold within, wildfires. pCBI was higher where the moisture codes of the FWI System indicated drier fuels. Spatial variation in pre- and post-fire fuel loads accounted for a substantial proportion of the variance in estimated fuel consumption. Average surface fuel consumption was a linear function of pre-fire fuel load. Average ground fuel combustion completeness could be predicted from the Buildup Index. Carbon release ranged between 0.36 and 1.00 kg C m−2. The flammability of ground fuel layers may explain the higher C release rates seen for wildfires in comparison with prescribed burns. Drier moorland community types appear to be at greater risk of severe burns than blanket-bog communities.
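The shape of the calculation described above can be sketched as follows. The coefficients are invented for illustration, not the fitted GLMM values from the study: combustion completeness is modelled as a linear function of the Buildup Index, and carbon release is derived from the fuel consumed, assuming roughly 50% carbon content of dry fuel.

```python
def combustion_completeness(bui, intercept=0.1, slope=0.005):
    """Fraction of ground fuel consumed, as a linear function of the
    Buildup Index (hypothetical coefficients), clipped to [0, 1]."""
    return max(0.0, min(1.0, intercept + slope * bui))

def carbon_release(pre_fire_fuel_kg_m2, bui, carbon_fraction=0.5):
    """kg C m^-2 released = fuel load x completeness x carbon fraction."""
    return pre_fire_fuel_kg_m2 * combustion_completeness(bui) * carbon_fraction

# e.g. 3 kg m^-2 of ground fuel burning under a Buildup Index of 60
release = carbon_release(3.0, 60)
```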

    Can we quantify harm in general practice records? An assessment of precision and power using computer simulation

    Background: Estimating harm rates for specific patient populations and detecting significant changes in them over time are essential if patient safety in general practice is to be improved. Clinical record review (CRR) is arguably the most suitable method for these purposes, but the optimal values and combinations of its parameters (such as numbers of records and practices) remain unknown. Our aims were to: 1. determine and quantify CRR parameters; 2. assess the precision and power of feasible CRR scenarios; and 3. quantify the minimum requirements for adequate precision and acceptable power.

    Method: We explored the precision and power of CRR scenarios using Monte Carlo simulation. A range of parameter values were combined in 864 different CRR scenarios, 1,000 random data sets were generated for each, and harm rates were estimated and tested for change over time by fitting a generalised linear model with a Poisson response.

    Results: CRR scenarios with ≥100 detected harm incidents had harm rate estimates with acceptable precision. Harm reductions of 20% or ≥50% were detected with adequate power by those CRR scenarios with at least 100 and 500 harm incidents respectively. The number of detected harm incidents depended on the baseline harm rate multiplied by: the period of time reviewed in each record; the number of records reviewed per practice; the number of practices that reviewed records; and the number of times each record was reviewed.

    Conclusion: We developed a simple formula to calculate the minimum values of CRR parameters required to achieve adequate precision and acceptable power when monitoring harm rates. Our findings have practical implications for health care decision-makers, leaders and researchers aiming to measure and reduce harm at regional or national level.
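A minimal sketch of the Monte Carlo idea, under assumptions of my own (the paper fits a Poisson GLM; here a simple variance-stabilising square-root z-test stands in for it, and all parameter values are invented): simulate Poisson harm counts before and after a reduction, and estimate the power to detect the change.

```python
import math
import random

def poisson(rng, lam):
    """Knuth's Poisson sampler; adequate for small per-record means."""
    if lam <= 0:
        return 0
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def simulate_power(baseline_rate, reduction, n_records, n_sims=500, seed=1):
    """Fraction of simulated CRR scenarios in which the harm reduction is
    detected (one-sided z > 1.96 on square-root transformed counts)."""
    rng = random.Random(seed)
    detected = 0
    for _ in range(n_sims):
        before = sum(poisson(rng, baseline_rate) for _ in range(n_records))
        after = sum(poisson(rng, baseline_rate * (1.0 - reduction))
                    for _ in range(n_records))
        # sqrt(count) has variance ~1/4, so the scaled difference is ~N(., 1)
        z = (math.sqrt(before) - math.sqrt(after)) * math.sqrt(2.0)
        if z > 1.96:
            detected += 1
    return detected / n_sims
```

For example, `simulate_power(0.5, 0.5, 200)` estimates the power to detect a 50% harm reduction when 200 records each contribute an average of 0.5 incidents.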

    Virus-virus interactions impact the population dynamics of influenza and the common cold

    The human respiratory tract hosts a diverse community of cocirculating viruses that are responsible for acute respiratory infections. This shared niche provides the opportunity for virus–virus interactions, which have the potential to affect individual infection risks and in turn influence dynamics of infection at population scales. However, quantitative evidence for interactions has been limited by the lack of suitable data and appropriate analytical tools. Here, we expose and quantify interactions among respiratory viruses using bespoke analyses of infection time series at the population scale and coinfections at the individual host scale. We analyzed diagnostic data from 44,230 cases of respiratory illness that were tested for 11 taxonomically broad groups of respiratory viruses over 9 y. Key to our analyses was accounting for alternative drivers of correlated infection frequency, such as age and seasonal dependencies in infection risk, allowing us to obtain strong support for the existence of negative interactions between influenza and noninfluenza viruses and positive interactions among noninfluenza viruses. In mathematical simulations that mimic 2-pathogen dynamics, we show that transient immune-mediated interference can cause a relatively ubiquitous common cold-like virus to diminish during peak activity of a seasonal virus, supporting the potential role of innate immunity in driving the asynchronous circulation of influenza A and rhinovirus. These findings have important implications for understanding the linked epidemiological dynamics of viral respiratory infections, an important step towards improved accuracy of disease forecasting models and evaluation of disease control interventions.
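The transient-interference mechanism can be caricatured in a few lines. This is not the authors' model: it is a hypothetical discrete-time two-pathogen SIR-like sketch in which current infection with one virus scales susceptibility to the other by a factor (1 - sigma * prevalence), with all parameter values invented.

```python
def two_virus_peaks(sigma, beta_a=0.5, beta_b=0.4, gamma=0.15, steps=400):
    """Run a deterministic two-virus epidemic and return the peak
    prevalence of virus B. sigma in [0, 1] is the strength of
    cross-interference; sigma = 0 means the viruses are independent."""
    sa, ia, ra = 0.99, 0.01, 0.0  # susceptible/infected/recovered, virus A
    sb, ib, rb = 0.99, 0.01, 0.0  # same for virus B
    peak_b = ib
    for _ in range(steps):
        # Force of infection on each virus is damped by the other's prevalence.
        inf_a = beta_a * sa * ia * (1.0 - sigma * ib)
        inf_b = beta_b * sb * ib * (1.0 - sigma * ia)
        rec_a, rec_b = gamma * ia, gamma * ib
        sa, ia, ra = sa - inf_a, ia + inf_a - rec_a, ra + rec_a
        sb, ib, rb = sb - inf_b, ib + inf_b - rec_b, rb + rec_b
        peak_b = max(peak_b, ib)
    return peak_b
```

Comparing `two_virus_peaks(0.9)` with `two_virus_peaks(0.0)` shows the slower virus reaching a lower peak when interference is strong, the qualitative pattern the abstract describes for a common cold-like virus during influenza peaks.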

    Analysing livestock network data for infectious disease control: an argument for routine data collection in emerging economies

    Livestock movements are an important mechanism of infectious disease transmission. Where these are well recorded, network analysis tools have been used to successfully identify system properties, highlight vulnerabilities to transmission, and inform targeted surveillance and control. Here we highlight the main uses of network properties in understanding livestock disease epidemiology and discuss statistical approaches to infer network characteristics from biased or fragmented datasets. We use a ‘hurdle model’ approach that predicts (i) the probability of movement and (ii) the number of livestock moved to generate synthetic ‘complete’ networks of movements between administrative wards, exploiting routinely collected government movement permit data from northern Tanzania. We demonstrate that this model captures a significant amount of the observed variation. Combining the cattle movement network with a spatial between-ward contact layer, we create a multiplex network, over which we simulate the spread of ‘fast’ (R0 = 3) and ‘slow’ (R0 = 1.5) pathogens and assess the effects of random versus targeted disease control interventions (vaccination and movement bans). The targeted interventions substantially outperform those randomly implemented for both fast and slow pathogens. Our findings provide motivation to encourage routine collection and centralization of movement data to construct representative networks. This article is part of the theme issue ‘Modelling infectious disease outbreaks in humans, animals and plants: epidemic forecasting and control’. This theme issue is linked with the earlier issue ‘Modelling infectious disease outbreaks in humans, animals and plants: approaches and important themes’.
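The two-part hurdle structure can be sketched generically. The covariate (ward-to-ward distance), the logistic coefficients and the count mean below are all invented placeholders, not the fitted values from the Tanzanian data: part 1 models whether any movement occurs on an edge, and part 2 draws a zero-truncated Poisson count conditional on movement.

```python
import math
import random

def p_move(distance_km, b0=-1.0, b1=-0.05):
    """Part 1: logistic probability that any movement occurs on an edge
    (hypothetical covariate: distance between wards)."""
    eta = b0 + b1 * distance_km
    return 1.0 / (1.0 + math.exp(-eta))

def truncated_poisson(rng, lam):
    """Part 2: zero-truncated Poisson count, sampled by rejection."""
    while True:
        L, k, p = math.exp(-lam), 0, 1.0  # Knuth's Poisson sampler
        while p > L:
            k += 1
            p *= rng.random()
        if k - 1 > 0:
            return k - 1

def simulate_edge(rng, distance_km, lam=8.0):
    """Number of cattle moved on one ward-to-ward edge (0 if no movement)."""
    if rng.random() < p_move(distance_km):
        return truncated_poisson(rng, lam)
    return 0
```

Drawing `simulate_edge` for every ward pair yields a synthetic 'complete' movement network of the kind the abstract describes, in which most edges carry no movement but the moved counts are never zero.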

    Software for quantifying and simulating microsatellite genotyping error

    Microsatellite genetic marker data are exploited in a variety of fields, including forensics, gene mapping, kinship inference and population genetics. In all of these fields, inference can be thwarted by failure to quantify and account for data errors, and kinship inference in particular can benefit from separating errors into two distinct classes: allelic dropout and false alleles. Pedant is MS Windows software for estimating locus-specific maximum likelihood rates of these two classes of error. Estimation is based on comparison of duplicate error-prone genotypes: neither reference genotypes nor pedigree data are required. Other functions include: plotting of error rate estimates and confidence intervals; simulations for performing power analysis and for testing the robustness of error rate estimates to violation of the underlying assumptions; and estimation of expected heterozygosity, which is a required input.

    Maximum-likelihood estimation of allelic dropout and false allele error rates from microsatellite genotypes in the absence of reference data

    The importance of quantifying and accounting for stochastic genotyping errors when analyzing microsatellite data is increasingly being recognized. This awareness is motivating the development of data analysis methods that not only take errors into consideration but also recognize the difference between two distinct classes of error, allelic dropout and false alleles. Currently, methods to estimate rates of allelic dropout and false alleles depend upon the availability of error-free reference genotypes or reliable pedigree data, which are often not available. We have developed a maximum-likelihood-based method for estimating these error rates from a single replication of a sample of genotypes. Simulations show it to be both accurate and robust to modest violations of its underlying assumptions. We have applied the method to estimating error rates in two microsatellite data sets. It is implemented in a computer program, Pedant, which estimates allelic dropout and false allele error rates with 95% confidence regions from microsatellite genotype data and performs power analysis. Pedant is freely available at http://www.stats.gla.ac.uk/~paulj/pedant.html
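The two error classes can be made concrete with a small simulation. This is not Pedant itself (which maximises a likelihood); it is a naive illustration, with invented error rates and allele pools, of how allelic dropout (a heterozygote scored as a homozygote) and false alleles (an allele mis-scored as another) generate disagreements between duplicate genotypes.

```python
import random

def observe(genotype, e1, e2, rng, allele_pool):
    """Apply dropout (rate e1, heterozygotes only), then false alleles
    (rate e2 per allele), to a true genotype (a, b)."""
    a, b = genotype
    if a != b and rng.random() < e1:          # allelic dropout
        a = b = rng.choice((a, b))
    scored = []
    for allele in (a, b):
        if rng.random() < e2:                 # false allele
            allele = rng.choice([x for x in allele_pool if x != allele])
        scored.append(allele)
    return tuple(sorted(scored))

def duplicate_mismatch_rate(truths, e1, e2, allele_pool=(1, 2, 3, 4), seed=0):
    """Fraction of independently re-genotyped duplicates that disagree;
    this is the raw signal from which error rates are inferred."""
    rng = random.Random(seed)
    mismatches = sum(
        observe(g, e1, e2, rng, allele_pool) != observe(g, e1, e2, rng, allele_pool)
        for g in truths
    )
    return mismatches / len(truths)
```

With both rates at zero, duplicates always agree; raising either rate produces the mismatches that a maximum-likelihood method such as Pedant exploits to estimate e1 and e2 separately.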