340 research outputs found

    Modelling the effects of glucagon during glucose tolerance testing.

    Background: Glucose tolerance testing is a tool used to estimate glucose effectiveness and insulin sensitivity in diabetic patients. The importance of such tests has prompted the development and utilisation of mathematical models that describe glucose kinetics as a function of insulin activity. The hormone glucagon also plays a fundamental role in systemic plasma glucose regulation and is secreted reciprocally to insulin, stimulating catabolic glucose utilisation. However, regulation of glucagon secretion by α-cells is impaired in type-1 and type-2 diabetes through pancreatic islet dysfunction. Despite this, the inclusion of glucagon activity when modelling glucose kinetics during glucose tolerance testing is often overlooked. This study presents two mathematical models of a glucose tolerance test that incorporate glucose-insulin-glucagon dynamics. The first model describes a non-linear relationship between glucagon and glucose, whereas the second model assumes a linear relationship. Results: Both models are validated against insulin-modified and glucose-infusion intravenous glucose tolerance test (IVGTT) data, as well as insulin infusion data, and are capable of estimating patient glucose effectiveness (sG) and insulin sensitivity (sI). Inclusion of glucagon dynamics provides a more detailed representation of the metabolic portrait, enabling estimation of two new diagnostic parameters: glucagon effectiveness (sE) and glucagon sensitivity (δ). Conclusions: The models are used to investigate how different degrees of patient glucagon sensitivity and effectiveness affect the concentration of blood glucose and plasma glucagon during IVGTT and insulin infusion tests, providing a platform from which the role of glucagon dynamics during a glucose tolerance test may be investigated and predicted.
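    The abstract does not give the model equations, but the flavour of a glucose-insulin-glucagon system can be sketched with a minimal-model-style set of ODEs extended by a linear glucagon term. The equations, parameter values and insulin profile below are illustrative assumptions for the linear case, not the authors' fitted model.

```python
# Hypothetical sketch: a Bergman-style minimal model extended with a linear
# glucagon term, to illustrate how parameters such as sG, sI (= p3/p2), sE and
# delta could enter a glucose-insulin-glucagon system. All equations and values
# are illustrative assumptions, not the paper's fitted model.
import numpy as np
from scipy.integrate import solve_ivp

Gb, Ib, Eb = 5.0, 10.0, 60.0       # assumed basal glucose (mM), insulin (mU/L), glucagon (ng/L)
sG, p2, p3 = 0.025, 0.02, 1e-5     # glucose effectiveness, insulin-action decay, insulin gain
sE, delta, kE = 0.001, 2.0, 0.05   # glucagon effectiveness, glucagon sensitivity, glucagon clearance

def ivgtt(t, y, insulin):
    G, X, E = y                                   # glucose, remote insulin action, glucagon
    dG = -(sG + X) * (G - Gb) + sE * (E - Eb)     # linear glucagon term raises glucose
    dX = -p2 * X + p3 * (insulin(t) - Ib)         # remote insulin compartment
    dE = -kE * (E - Eb) - delta * (G - Gb)        # glucagon secreted reciprocally to glucose
    return [dG, dX, dE]

# Simple exponentially decaying insulin excursion following the glucose bolus (assumed)
insulin_profile = lambda t: Ib + 80.0 * np.exp(-t / 20.0)

sol = solve_ivp(ivgtt, (0.0, 180.0), [15.0, 0.0, Eb], args=(insulin_profile,), max_step=1.0)
print(sol.y[0, -1])   # glucose should have relaxed back towards its basal value
```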

    LOCAS – A Low Coverage Assembly Tool for Resequencing Projects

    Motivation: Next Generation Sequencing (NGS) is a frequently applied approach to detect sequence variations between highly related genomes. Recent large-scale re-sequencing studies such as the Human 1000 Genomes Project utilize low-coverage NGS data to make the sequencing of hundreds of individuals affordable. Here, SNPs and micro-indels can be detected by applying an alignment-consensus approach. However, computational methods capable of discovering other variations, such as novel insertions or highly diverged sequence, from low-coverage NGS data are still lacking. Results: We present LOCAS, a new NGS assembler particularly designed for low-coverage assembly of eukaryotic genomes using a mismatch-sensitive overlap-layout-consensus approach. LOCAS assembles homologous regions in a homology-guided manner, while performing de novo assemblies of insertions and highly polymorphic target regions subsequent to an alignment-consensus approach. LOCAS has been evaluated in homology-guided assembly scenarios with low sequence coverage of Arabidopsis thaliana strains sequenced as part of the Arabidopsis 1001 Genomes Project. While assembling the same number of long insertions as state-of-the-art NGS assemblers, LOCAS showed the best results regarding contig size, error rate and runtime. Conclusion: LOCAS produces excellent results for homology-guided assembly of eukaryotic genomes with short reads and low sequencing depth, and therefore appears to be the assembly tool of choice for the detection of novel sequence.
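    As a toy illustration of the overlap-layout-consensus idea the abstract refers to, the sketch below finds mismatch-tolerant suffix-prefix overlaps between reads and greedily merges them. It is a simplification for illustration only, not the LOCAS algorithm itself; the reads, thresholds and helper names are made up.

```python
# Toy illustration of the "overlap" step in an overlap-layout-consensus (OLC)
# assembler: find the longest suffix-prefix overlap between two reads while
# tolerating a few mismatches, then merge greedily. Not the LOCAS implementation.
def best_overlap(a: str, b: str, min_len: int = 4, max_mismatches: int = 1) -> int:
    """Return the longest overlap length where a suffix of `a` matches a prefix of `b`."""
    for length in range(min(len(a), len(b)), min_len - 1, -1):
        mismatches = sum(x != y for x, y in zip(a[-length:], b[:length]))
        if mismatches <= max_mismatches:
            return length
    return 0

def merge(a: str, b: str, **kw) -> str:
    """Merge two reads using their best (possibly imperfect) suffix-prefix overlap."""
    ov = best_overlap(a, b, **kw)
    return a + b[ov:]

reads = ["ACGTACGGT", "ACGGTTTCA", "TTTCAGGAC"]   # overlapping toy reads
contig = reads[0]
for r in reads[1:]:
    contig = merge(contig, r)
print(contig)   # expected: ACGTACGGTTTCAGGAC
```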

    Effective Rheology of Bubbles Moving in a Capillary Tube

    We calculate the average volumetric flux versus pressure drop of bubbles moving in a single capillary tube with varying diameter, finding a square-root relation by mapping the flow equations onto those of a driven overdamped pendulum. The calculation is based on a derivation of the equation of motion of a bubble train from the capillary forces and the entropy production associated with the viscous flow. We also calculate the configurational probability of the positions of the bubbles.
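    The abstract only states the result; a standard driven overdamped pendulum argument, written here under the assumption that the bubble train experiences a periodic (here sinusoidal) capillary force, reproduces the square-root form. The notation below is assumed, not quoted from the paper.

```latex
% Assumed form of the mapping: x is the position of the bubble train, \Delta P the
% applied pressure drop, P_c the amplitude of the periodic capillary force, \lambda
% its spatial period, and \mu an effective viscous resistance.
\[
  \mu \dot{x} \;=\; \Delta P \;-\; P_c \sin\!\left(\frac{2\pi x}{\lambda}\right)
\]
% This is the equation of a driven overdamped pendulum; averaging \dot{x} over one
% period of the capillary force gives the square-root law above threshold:
\[
  \langle \dot{x} \rangle \;=\;
  \begin{cases}
    0, & |\Delta P| \le P_c,\\[4pt]
    \dfrac{\operatorname{sgn}(\Delta P)}{\mu}\,\sqrt{\Delta P^{2} - P_c^{2}}, & |\Delta P| > P_c,
  \end{cases}
\]
% so the average volumetric flux, proportional to \langle \dot{x} \rangle, grows as
% the square root of the excess pressure drop just above the capillary threshold.
```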

    Analysis of high-depth sequence data for studying viral diversity: a comparison of next generation sequencing platforms using Segminator II

    Background: Next generation sequencing provides detailed insight into the variation present within viral populations, introducing the possibility of treatment strategies that are both reactive and predictive. Current software tools, however, need to be scaled up to accommodate high-depth viral data sets, which are often temporally or spatially linked. In addition, due to the development of novel sequencing platforms and chemistries, each with its own strengths and weaknesses, it will be helpful for researchers to be able to routinely compare and combine data sets from different platforms/chemistries. In particular, the error associated with a specific sequencing process must be quantified so that true biological variation can be identified. Results: Segminator II was developed to allow the efficient comparison of data sets derived from different sources. We demonstrate its usage by comparing large data sets from 12 influenza H1N1 samples sequenced on both the 454 Life Sciences and Illumina platforms, permitting quantification of platform error. For mismatches, median error rates of 0.10% and 0.12%, respectively, suggested that both platforms performed similarly. For insertions and deletions, median error rates within the 454 data (0.3% and 0.2%, respectively) were significantly higher than those within the Illumina data (0.004% and 0.006%, respectively). In agreement with previous observations, these higher rates were strongly associated with homopolymeric stretches on the 454 platform. Outside of such regions, both platforms had similar indel error profiles. Additionally, we apply our software to the identification of low-frequency variants. Conclusion: We have demonstrated, using Segminator II, that it is possible to distinguish platform-specific error from biological variation using data derived from two different platforms. We have used this approach to quantify the amount of error present within the 454 and Illumina platforms in relation to genomic location as well as location on the read. Given that next generation sequencing data are increasingly important in the analysis of drug resistance and vaccine trials, this software will be useful to the pathogen research community. A zip file containing the source code and jar file is freely available for download from http://www.bioinf.manchester.ac.uk/segminator/
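    As a minimal sketch of the underlying idea (not Segminator II's implementation), once a platform's per-base error rate has been estimated, a putative variant can be tested against the hypothesis that its supporting reads arise from sequencing error alone, for example with a one-sided binomial test. The function name and thresholds below are hypothetical.

```python
# Minimal sketch: test whether the non-reference reads at a site exceed what the
# estimated platform error rate alone would produce. Not Segminator II itself.
from scipy.stats import binomtest

def is_true_variant(alt_reads: int, depth: int, platform_error: float,
                    alpha: float = 1e-3) -> bool:
    """True if the alt-read count is unlikely under the platform error rate alone."""
    test = binomtest(alt_reads, depth, platform_error, alternative="greater")
    return test.pvalue < alpha

# Example using the ~0.1% mismatch error rate reported in the abstract:
print(is_true_variant(alt_reads=3,  depth=1000, platform_error=0.001))   # likely noise
print(is_true_variant(alt_reads=40, depth=1000, platform_error=0.001))   # likely real
```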

    Assembly complexity of prokaryotic genomes using short reads

    Background: De Bruijn graphs are a theoretical framework underlying several modern genome assembly programs, especially those that deal with very short reads. We describe an application of de Bruijn graphs to analyze the global repeat structure of prokaryotic genomes. Results: We provide the first survey of the repeat structure of a large number of genomes. The analysis gives an upper bound on the performance of genome assemblers for de novo reconstruction of genomes across a wide range of read lengths. Further, we demonstrate that the majority of genes in prokaryotic genomes can be reconstructed uniquely using very short reads even if the genomes themselves cannot. The non-reconstructible genes are overwhelmingly related to mobile elements (transposons, IS elements, and prophages). Conclusions: Our results improve upon previous studies on the feasibility of assembly with short reads and provide a comprehensive benchmark against which to compare the performance of the short-read assemblers currently being developed.
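    As a toy illustration of the framework the paper builds on (not the paper's own analysis code), the sketch below constructs a de Bruijn graph from the k-mers of a sequence and flags branching nodes, which correspond to the repeats that prevent unique reconstruction. The example sequence and helper names are made up.

```python
# Toy de Bruijn graph: nodes are (k-1)-mers, edges are k-mers. Branching nodes
# (in- or out-degree above one) mark repeat-induced ambiguity in the assembly.
from collections import defaultdict

def de_bruijn(sequence: str, k: int):
    """Build adjacency lists of (k-1)-mer nodes from all k-mers of `sequence`."""
    graph = defaultdict(list)
    for i in range(len(sequence) - k + 1):
        kmer = sequence[i:i + k]
        graph[kmer[:-1]].append(kmer[1:])   # edge: prefix (k-1)-mer -> suffix (k-1)-mer
    return graph

def branching_nodes(graph):
    """Nodes with out-degree > 1 or in-degree > 1 (counting repeated edges)."""
    indeg = defaultdict(int)
    for src, dests in graph.items():
        for d in dests:
            indeg[d] += 1
    return {n for n in set(graph) | set(indeg)
            if len(graph.get(n, [])) > 1 or indeg.get(n, 0) > 1}

genome = "ACGTTGCAACGTTGCA"           # contains a repeated segment
g = de_bruijn(genome, k=4)
print(branching_nodes(g))             # repeats show up as branching (k-1)-mers
```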