
    Genome Resources for Climate‐Resilient Cowpea, an Essential Crop for Food Security

    Cowpea [Vigna unguiculata (L.) Walp.] is a legume crop that is resilient to hot and drought‐prone climates, and a primary source of protein in sub‐Saharan Africa and other parts of the developing world. However, genome resources for cowpea have lagged behind most other major crops. Here we describe foundational genome resources and their application to the analysis of germplasm currently in use in West African breeding programs. Resources developed from the African cultivar IT97K‐499‐35 include a whole‐genome shotgun (WGS) assembly, a bacterial artificial chromosome (BAC) physical map, and assembled sequences from 4355 BACs. These resources and WGS sequences of an additional 36 diverse cowpea accessions supported the development of a genotyping assay for 51 128 SNPs, which was then applied to five bi‐parental RIL populations to produce a consensus genetic map containing 37 372 SNPs. This genetic map enabled the anchoring of 100 Mb of WGS and 420 Mb of BAC sequences, an exploration of genetic diversity along each linkage group, and clarification of macrosynteny between cowpea and common bean. The SNP assay enabled a diversity analysis of materials from West African breeding programs. Two major subpopulations exist within those materials, one of which has significant parentage from South and East Africa and harbors greater diversity. There are genomic regions of high differentiation between subpopulations, one of which coincides with a cluster of nodulin genes. The new resources and knowledge help to define goals and accelerate the breeding of improved varieties to address food security issues related to limited‐input small‐holder farming and climate stress.
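
    As a rough illustration of the anchoring step described above (placing WGS contigs and BAC sequences onto the consensus genetic map through the SNPs they carry), the sketch below assigns each sequence to the linkage group supported by most of its mapped SNPs, at the median cM position of those SNPs. The input dictionaries (snp_map, contig_snps), the minimum-SNP threshold and the median rule are illustrative assumptions, not the pipeline used by the authors.

        from statistics import median

        # Hypothetical inputs:
        #   snp_map[snp_id]     = (linkage_group, cM_position) from the consensus map
        #   contig_snps[contig] = list of SNP ids detected on that WGS contig / BAC sequence
        def anchor_contigs(snp_map, contig_snps, min_snps=2):
            """Assign each contig to a linkage group and cM position via its mapped SNPs."""
            anchored = {}
            for contig, snps in contig_snps.items():
                hits = [snp_map[s] for s in snps if s in snp_map]
                if len(hits) < min_snps:
                    continue  # too little evidence to anchor this sequence
                groups = [lg for lg, _ in hits]
                lg = max(set(groups), key=groups.count)          # majority linkage group
                positions = [pos for g, pos in hits if g == lg]
                anchored[contig] = (lg, median(positions))       # median cM position
            return anchored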

    Sequencing of 15 622 Gene-bearing BACs Clarifies the Gene-dense Regions of the Barley Genome

    Barley (Hordeum vulgare L.) possesses a large and highly repetitive genome of 5.1 Gb that has hindered the development of a complete sequence. In 2012, the International Barley Sequencing Consortium released a resource integrating whole-genome shotgun sequences with a physical and genetic framework. However, because only 6278 bacterial artificial chromosomes (BACs) in the physical map were sequenced, fine structure was limited. To gain access to the gene-containing portion of the barley genome at high resolution, we identified and sequenced 15 622 BACs representing the minimal tiling path of 72 052 physically mapped gene-bearing BACs. This generated ~1.7 Gb of genomic sequence containing an estimated two-thirds of all Morex barley genes. Exploration of these sequenced BACs revealed that although the distal ends of chromosomes contain most of the gene-enriched BACs and are characterized by high recombination rates, there are also gene-dense regions with suppressed recombination. We made use of published map-anchored sequence data from Aegilops tauschii to develop a synteny viewer between barley and the ancestor of the wheat D-genome. Except for some notable inversions, there is a high level of collinearity between the two species. The software HarvEST:Barley provides facile access to BAC sequences and their annotations, along with the barley–Ae. tauschii synteny viewer. These BAC sequences constitute a resource to improve the efficiency of marker development, map-based cloning, and comparative genomics in barley and related crops. Additional knowledge about regions of the barley genome that are gene-dense but have low recombination is particularly relevant.
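
    Selecting a minimal tiling path from a set of overlapping, physically mapped BACs is essentially an interval-covering problem. The sketch below uses a generic greedy rule (within each contiguous block, repeatedly keep the BAC that extends coverage furthest); the (start, end) coordinates and the greedy formulation are assumptions for illustration, not the consortium's actual MTP selection procedure.

        def minimal_tiling_path(bacs):
            """Greedy minimal set of overlapping intervals (BACs) covering their union.

            bacs: dict mapping BAC id -> (start, end) coordinates on the physical map.
            Returns a list of BAC ids, one tiling path per contiguous block.
            """
            intervals = sorted(bacs.items(), key=lambda kv: (kv[1][0], -kv[1][1]))
            path, covered_to = [], float("-inf")
            i, n = 0, len(intervals)
            while i < n:
                if intervals[i][1][0] > covered_to:      # gap: start a new contiguous block
                    covered_to = intervals[i][1][0]
                best_id, best_end = None, covered_to
                while i < n and intervals[i][1][0] <= covered_to:
                    if intervals[i][1][1] > best_end:    # reaches furthest to the right
                        best_id, best_end = intervals[i][0], intervals[i][1][1]
                    i += 1
                if best_id is not None:                  # skip BACs already fully covered
                    path.append(best_id)
                    covered_to = best_end
            return path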

    Generating and Reversing Chronic Wounds in Diabetic Mice by Manipulating Wound Redox Parameters

    By 2025, more than 500 M people worldwide will suffer from diabetes; 125 M will develop foot ulcer(s) and 20 M will undergo an amputation, creating a major health problem. Understanding how these wounds become chronic will provide insights to reverse chronicity. We hypothesized that oxidative stress (OS) in wounds is a critical component in the generation of chronicity. We used the db/db mouse model of impaired healing and inhibited, at the time of injury, two major antioxidant enzymes, catalase and glutathione peroxidase, creating high OS in the wounds. This was necessary and sufficient to trigger wounds to become chronic. The wounds initially contained a polymicrobial community that over time selected for specific biofilm-forming bacteria. To reverse chronicity we treated the wounds with the antioxidants α-tocopherol and N-acetylcysteine and found that OS was greatly reduced, biofilms had increased sensitivity to antibiotics, and granulation tissue was formed with proper collagen deposition and remodeling. We show for the first time the generation of chronic wounds in which biofilm develops spontaneously, illustrating the importance of early and continued redox imbalance, coupled with the presence of biofilm, in the development of wound chronicity. This model will help decipher additional mechanisms of chronicity and potentially enable better diagnosis and treatment of human chronic wounds.

    Scrible: Ultra-Accurate Error-Correction of Pooled Sequenced Reads

    We recently proposed a novel clone-by-clone protocol for de novo genome sequencing that leverages combinatorial pooling design to overcome the limitations of DNA barcoding when multiplexing a large number of samples on second-generation sequencing instruments. Here we address the problem of correcting the short reads obtained from our sequencing protocol. We introduce a novel algorithm called Scrible that exploits properties of the pooling design to accurately identify and correct sequencing errors and minimize the chance of “over-correcting”. Experimental results on synthetic data on the rice genome demonstrate that our method has much higher accuracy in correcting short reads compared to state-of-the-art error-correcting methods. On real data on the barley genome we show that Scrible significantly improves the decoding accuracy of short reads to individual BACs.
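
    The property the pooling design provides is that a k-mer genuinely originating from a BAC should be observed only in the pools of that BAC's signature, so k-mers whose observed pool pattern cannot be explained by the design are likely error-induced. The sketch below illustrates that consistency check in a generic way; the data layout and the simple containment test are assumptions, not the published Scrible algorithm.

        def flag_inconsistent_kmers(kmer_pools, bac_signatures):
            """Flag k-mers whose observed pool pattern fits no BAC signature.

            kmer_pools:     dict mapping k-mer -> set of pool ids it was observed in
            bac_signatures: dict mapping BAC id -> frozenset of pool ids (pooling design)
            Returns the set of k-mers that are likely error-induced.
            """
            suspicious = set()
            for kmer, pools in kmer_pools.items():
                # A genuine k-mer shared by several BACs would appear in the union of
                # their signatures; for simplicity this test only checks containment
                # in a single signature.
                if not any(pools <= sig for sig in bac_signatures.values()):
                    suspicious.add(kmer)
            return suspicious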

    Efficient Methods for Analysis of Ultra-Deep Sequencing Data

    Thanks to continuous improvements in sequencing technologies, life scientists can now easily sequence DNA at a depth of sequencing coverage in excess of 1,000x, especially for smaller genomes such as viruses, bacteria or BAC/YAC clones. As “ultra-deep” sequencing becomes more and more common, it is expected to create new algorithmic challenges in the analysis pipeline. In this dissertation, I explore the effect of ultra-deep sequencing data in two domains: (i) the problem of decoding reads to bacterial artificial chromosome (BAC) clones and (ii) the problem of de novo assembly of BAC clones. Using real ultra-deep sequencing data, I show that when the depth of sequencing increases over a certain threshold, sequencing errors make these two problems harder and harder (instead of easier, as one would expect with error-free data), and as a consequence the quality of the solution degrades with more and more data. For the first problem, I propose an effective solution based on “divide and conquer”: the method ‘slices’ a large dataset into smaller samples of optimal size, decodes each slice independently, and then merges the results. For the second problem, I show for the first time that modern de novo assemblers cannot take advantage of ultra-deep sequencing data. I then introduce a new divide and conquer approach to deal with the problem of de novo genome assembly in the presence of ultra-deep sequencing data. Finally, I report on a novel computational protocol to discover high-quality SNPs for the cowpea genome. I show how knowledge of the approximate SNP order can be used to order and merge BAC clones and WGS contigs.
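
    The “divide and conquer” decoding strategy can be sketched as follows: partition the read set into slices of a chosen size, decode each slice independently with whatever decoder is in use, and combine the per-slice assignments. The slice size and the decode_slice callable are placeholders; this is an illustration of the strategy, not the dissertation's implementation.

        from collections import defaultdict

        def decode_by_slicing(reads, decode_slice, slice_size=1_000_000):
            """Decode reads to BACs slice by slice, then merge the per-slice results.

            reads:        list of reads (the ultra-deep dataset)
            decode_slice: callable taking a list of reads and returning a dict
                          read -> set of candidate BAC ids
            slice_size:   number of reads per slice (the optimal size is data dependent)
            """
            merged = defaultdict(set)
            for i in range(0, len(reads), slice_size):
                chunk = reads[i:i + slice_size]
                # each read falls into exactly one slice, so merging reduces to
                # collecting the assignments produced for every slice
                for read, bacs in decode_slice(chunk).items():
                    merged[read] |= bacs
            return dict(merged)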

    De novo meta-assembly of ultra-deep sequencing data.

    We introduce a new divide and conquer approach to deal with the problem of de novo genome assembly in the presence of ultra-deep sequencing data (i.e. coverage of 1000x or higher). Our proposed meta-assembler Slicembler partitions the input data into optimal-sized 'slices' and uses a standard assembly tool (e.g. Velvet, SPAdes, IDBA_UD and Ray) to assemble each slice individually. Slicembler uses majority voting among the individual assemblies to identify long contigs that can be merged into the consensus assembly. To improve its efficiency, Slicembler uses a generalized suffix tree to identify these frequent contigs (or fractions thereof). Extensive experimental results on real ultra-deep sequencing data (8000x coverage) and simulated data show that Slicembler significantly improves the quality of the assembly compared with the performance of the base assembler. In fact, most of the time, Slicembler generates error-free assemblies. We also show that Slicembler is much more resistant to a high sequencing error rate than the base assembler. Availability and implementation: Slicembler can be accessed at http://slicembler.cs.ucr.edu/
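
    In simplified form, the consensus step can be viewed as majority voting over contigs from the individual slice assemblies: a candidate contig is promoted to the consensus when it occurs, as an exact substring, in assemblies from a majority of slices. Slicembler performs this search efficiently with a generalized suffix tree; the brute-force substring test below is only an assumed, simplified stand-in.

        def consensus_by_voting(slice_assemblies, min_support=None):
            """Keep contigs supported (as exact substrings) by a majority of slice assemblies.

            slice_assemblies: list of assemblies, each a list of contig strings
            min_support:      supporting slices required; defaults to a strict majority
            """
            if min_support is None:
                min_support = len(slice_assemblies) // 2 + 1
            consensus = []
            # candidate contigs come from every slice, longest first
            candidates = sorted({c for asm in slice_assemblies for c in asm},
                                key=len, reverse=True)
            for cand in candidates:
                support = sum(
                    any(cand in contig for contig in asm)     # exact-substring check
                    for asm in slice_assemblies
                )
                if support >= min_support and not any(cand in kept for kept in consensus):
                    consensus.append(cand)                    # drop redundant sub-contigs
            return consensus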
