
    Nucleic acid extraction from formalin-fixed paraffin-embedded cancer cell line samples: a trade off between quantity and quality?

    Background: Advanced genomic techniques such as next-generation sequencing (NGS) and gene expression profiling, including NanoString, are vital for the development of personalised medicines, as they enable molecular disease classification. This has become increasingly important in the treatment of cancer, aiding patient selection, but it requires efficient nucleic acid extraction, often from formalin-fixed paraffin-embedded (FFPE) tissue. Methods: Here we compare several commercially available manual and automated methods for DNA and/or RNA extraction from FFPE cancer cell line samples, from Qiagen, Life Technologies and Promega. Geometric mean yields were evaluated for each kit tested, comparing dual DNA/RNA extraction against specialised single extraction and manual silica-column-based techniques against automated magnetic-bead-based methods, together with an assessment of the purity of the resulting nucleic acids, providing a full evaluation of the isolated material. Results: Of the four RNA extraction kits evaluated, the RNeasy FFPE kit from Qiagen gave superior geometric mean yields, whilst the Maxwell 16 automated method from Promega yielded the highest-quality RNA by quantitative real-time RT-PCR. Of the DNA extraction kits evaluated, the PicoPure DNA kit from Life Technologies isolated 2–14× more DNA. A miniaturised qPCR assay was developed for DNA quantification and quality assessment. Conclusions: Careful consideration of an extraction kit is necessary, depending on whether quality or quantity of material is required. Here we provide a flow diagram of the factors to consider when choosing an extraction kit, as well as guidance on how to accurately quantify and QC the extracted material.
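    As a minimal illustration of the geometric-mean yield comparison described above, here is a short Python sketch; the kit names follow the abstract, but the replicate yield values are invented for illustration and are not the paper's data.

```python
import math

# Hypothetical nanogram yields from replicate FFPE extractions per kit.
# The paper reports geometric means; these numbers are illustrative only.
yields_ng = {
    "RNeasy FFPE (Qiagen)":     [310.0, 275.0, 402.0, 350.0],
    "Maxwell 16 (Promega)":     [120.0,  95.0, 140.0, 110.0],
    "PicoPure DNA (Life Tech)": [820.0, 760.0, 910.0, 880.0],
}

def geometric_mean(values):
    """Geometric mean: the n-th root of the product of n values,
    computed in log space to avoid overflow with many replicates."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

for kit, reps in yields_ng.items():
    print(f"{kit}: {geometric_mean(reps):.1f} ng "
          f"(geometric mean of {len(reps)} replicates)")
```

    Geometric means are the natural summary here because extraction yields tend to vary multiplicatively, so one unusually high replicate does not dominate the comparison the way it would with an arithmetic mean.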

    Design and implementation of a generalized laboratory data model

    Background: Investigators in the biological sciences continue to exploit laboratory automation methods and have dramatically increased the rates at which they can generate data. In many environments, the methods themselves also evolve in a rapid and fluid manner. These observations point to the importance of robust information management systems in the modern laboratory. Designing and implementing such systems is non-trivial, and it appears that in many cases a database project ultimately proves unserviceable. Results: We describe a general modeling framework for laboratory data and its implementation as an information management system. The model utilizes several abstraction techniques, focusing especially on the concepts of inheritance and meta-data. Traditional approaches commingle event-oriented data with regular entity data in ad hoc ways. Instead, we define distinct regular entity and event schemas, but fully integrate these via a standardized interface. The design allows straightforward definition of a "processing pipeline" as a sequence of events, obviating the need for separate workflow management systems. A layer above the event-oriented schema integrates events into a workflow by defining "processing directives", which act as automated project managers of items in the system. Directives can be added or modified in an almost trivial fashion, i.e., without the need for schema modification or re-certification of applications. Association between regular entities and events is managed via simple "many-to-many" relationships. We describe the programming interface, as well as techniques for handling input/output, process control, and state transitions. Conclusion: The implementation described here has served as the Washington University Genome Sequencing Center's primary information system for several years. It handles all transactions underlying a throughput rate of about 9 million sequencing reactions of various kinds per month and has handily weathered a number of major pipeline reconfigurations. The basic data model can be readily adapted to other high-volume processing environments.
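    The entity/event separation described above lends itself to a compact relational sketch. The following Python/sqlite3 example is an editorial illustration under assumed table and column names, not the actual Genome Sequencing Center schema:

```python
import sqlite3

# Minimal sketch of the paper's core idea: regular entities and events live
# in distinct schemas, associated only through a generic many-to-many link
# table. All table and column names are illustrative assumptions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE entity (            -- regular entity data (e.g., a sample)
    entity_id INTEGER PRIMARY KEY,
    entity_type TEXT NOT NULL,
    name TEXT NOT NULL
);
CREATE TABLE event (             -- event-oriented data (a pipeline step)
    event_id INTEGER PRIMARY KEY,
    event_type TEXT NOT NULL,
    occurred_at TEXT NOT NULL
);
CREATE TABLE entity_event (      -- standardized many-to-many association
    entity_id INTEGER REFERENCES entity(entity_id),
    event_id  INTEGER REFERENCES event(event_id),
    PRIMARY KEY (entity_id, event_id)
);
""")

# A "processing directive" can then be plain data: an ordered list of event
# types defining a pipeline, added with no schema modification.
directive = ["library_prep", "sequencing_reaction", "base_calling"]

conn.execute("INSERT INTO entity VALUES (1, 'sample', 'FFPE-001')")
for i, step in enumerate(directive, start=1):
    conn.execute("INSERT INTO event VALUES (?, ?, datetime('now'))", (i, step))
    conn.execute("INSERT INTO entity_event VALUES (1, ?)", (i,))

# Reconstruct the sample's processing history through the link table.
for row in conn.execute("""
    SELECT e.name, ev.event_type FROM entity e
    JOIN entity_event ee ON ee.entity_id = e.entity_id
    JOIN event ev ON ev.event_id = ee.event_id
    ORDER BY ev.event_id"""):
    print(row)
```

    The point of the link table is that adding or reordering a pipeline touches only data, never the schema, which is how directives can be modified without re-certifying applications.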

    Comparative analysis of four methods to extract DNA from paraffin-embedded tissues: effect on downstream molecular applications

    Background: A large portion of the tissue stored worldwide for diagnostic purposes is formalin-fixed and paraffin-embedded (FFPE). These FFPE-archived tissues are an extremely valuable source for retrospective (genetic) studies, including mutation screening in cancer-critical genes as well as pathogen detection. In this study we evaluated the impact of several widely used DNA extraction methods on the quality of molecular diagnostics on FFPE tissues. Findings: We compared four DNA extraction methods on four identically processed FFPE mammary, prostate, colon and lung tissues with regard to PCR inhibition, real-time SNP detection and amplifiable fragment size. The extraction methods tested, each with and without proteinase K pre-treatment, were: 1) heat treatment, 2) the QIAamp DNA Blood Mini Kit, 3) EasyMAG NucliSens and 4) the Gentra Capture-Column Kit. Amplifiable DNA fragment size was assessed by a multiplexed 200-400-600 bp PCR and appeared highly influenced by the extraction method used. Proteinase K pre-treatment was a prerequisite for proper purification of DNA from FFPE tissue. QIAamp, EasyMAG and heat-treatment extractions were suitable for amplification of fragments up to 400 bp from all tissues; 600 bp amplification was only marginally successful (best with QIAamp). QIAamp and EasyMAG extracts were suitable for downstream real-time SNP detection, whereas Gentra extraction was unsuitable. Hands-on time was lowest for heat treatment, followed by EasyMAG. Conclusions: We conclude that the extraction method plays an important role in performance in downstream molecular applications.
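    To make those findings concrete, the following Python sketch encodes the reported amplifiable-fragment results and filters methods by the amplicon size an assay needs; the helper function and the conservative encoding of the marginal 600 bp results are editorial choices, not part of the paper.

```python
# Per-method largest amplicon size (bp) reliably recovered in the multiplexed
# 200-400-600 bp PCR, as summarized in the findings above. 600 bp success was
# only marginal (best with QIAamp), so it is conservatively encoded as 400.
max_amplifiable_bp = {
    "heat treatment":          400,
    "QIAamp DNA Blood Mini":   400,
    "EasyMAG NucliSens":       400,
    "Gentra Capture-Column":   0,   # unsuitable for downstream applications
}

def suitable_methods(required_bp: int) -> list[str]:
    """Return extraction methods whose FFPE-derived DNA supported amplicons
    of at least the required length (illustrative helper, not the paper's)."""
    return [m for m, bp in max_amplifiable_bp.items() if bp >= required_bp]

print(suitable_methods(400))  # all methods except Gentra
print(suitable_methods(600))  # none reliably, per the marginal 600 bp results
```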

    Initial sequencing and analysis of the human genome

    The human genome holds an extraordinary trove of information about human development, physiology, medicine and evolution. Here we report the results of an international collaboration to produce and make freely available a draft sequence of the human genome. We also present an initial analysis of the data, describing some of the insights that can be gleaned from the sequence.

    Genome engineering for improved recombinant protein expression in Escherichia coli


    Metabolic engineering of central carbon metabolism in Escherichia coli : improving the production of biomass and metabolites

    The pathway for central carbon metabolism provides precursors for cell biosynthesis and metabolite synthesis, along with ATP and NADH. We investigated the metabolic engineering of one branch of the central carbon pathways: the pathway of glycogen synthesis and degradation. We were motivated to select the glycogen pathway for genetic manipulation by the literature on acetate production in E. coli, which indicates that in aerobic cultures the uptake of nutrients occurs faster than the utilization of the resulting precursors in making biomass and energy. We therefore decided to sequester the excess carbon in glycogen, a storage polymer, and also devised vectors to degrade the sequestered glycogen. The effects, possible causes of the effects, and potential applications of sequestering carbon as glycogen, sometimes combined with engineered degradation of the sequestered glycogen, are the subject of this thesis. This manipulation of the glycogen pathway yielded practically useful results. The metabolic engineering was done in an Escherichia coli mutant defective in acetate biosynthesis due to deletion of the ack (acetate kinase) and pta (phosphotransacetylase) genes. Glycogen sequestering was achieved by transforming cells with a plasmid carrying the glycogen biosynthesis genes glgC (encoding ADPG pyrophosphorylase) and glgA (encoding glycogen synthase) under the control of the IPTG-inducible tac promoter. When glycogen overproduction in the ack pta strain grown in complex medium was induced during late log phase, biomass production increased by 15-20% relative to uninduced controls. When glycogen was sequestered and then degraded in E. coli cultures grown in minimal medium, by overamplifying the genes for glycogen synthesis and degradation, glutamate production increased almost 3-fold compared with the plasmid-free strain. When glycogen was sequestered, we observed changes in some of the secreted end-products: after overproduction of glycogen, uptake of previously secreted pyruvate increased with respect to the control strain, and the CO2 production rate also increased. These dual observations suggest increased activity of the gluconeogenic pathways or the TCA cycle. The increase in glutamate when glycogen sequestering was combined with degradation also indicates an increase in TCA flux. Comparison of cAMP levels with and without glycogen overproduction indicates a higher cAMP level after glycogen is overproduced. There appears to be a tentative, though not conclusive, link between cAMP synthesis and the glycogen synthesis pathway. cAMP is a global regulator of central carbon metabolism, including many genes encoding TCA cycle enzymes. By affecting TCA flux, cAMP may be one of the causes behind the pleiotropic effects of glycogen overproduction and degradation.

    Kaleidaseq: A Web-Based Tool to Monitor Data Flow in a High Throughput Sequencing Facility

    Tracking data flow in high throughput sequencing is important for maintaining a consistent number of successfully sequenced samples, scheduling the flow of sequencing steps, resolving problems at various steps and tracking the status of different projects. This is especially critical when the laboratory is handling a multitude of projects. We have built a Web-based data flow tracking package, called Kaleidaseq, which allows us to monitor the flow and quality of sequencing samples through the steps of preparing library plates, picking plaques, preparing templates, conducting sequencing reactions, loading samples on gels, base-calling the traces, and calculating the quality of the sequenced samples. Kaleidaseq’s suite of displays allows close monitoring of the production sequencing process. The online display of current information on both project status and process queues sorted by project enables accurate real-time assessment of the samples that must still be processed to complete each project. This information allows the process manager to allocate future resources optimally and schedule tasks according to scientific priorities. The quality of sequenced samples can be tracked on a daily basis, which allows the sequencing laboratory to maintain a steady performance level and quickly resolve dips in quality. Kaleidaseq has a simple, easy-to-use interface that provides access to all major functions and process queues from one Web page. The software package is modular and designed to allow additional processing steps and new monitoring variables to be added and tracked with ease. Access to the underlying relational database is through the Perl DBI interface, which allows the use of different relational databases. Kaleidaseq is available for free use by the academic community from http://www.cshl.org/kaleidaseq
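    As an illustration of the kind of per-project process-queue query such a tracker runs, here is a minimal Python/sqlite3 sketch; Kaleidaseq itself is implemented in Perl against the DBI interface, and the table, column and step names below are assumptions, not its actual schema.

```python
import sqlite3

# Illustrative schema: one row per sample, recording its project, the
# pipeline step it currently awaits, and a quality score once base-called.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sample (
    sample_id INTEGER PRIMARY KEY,
    project   TEXT NOT NULL,
    step      TEXT NOT NULL,      -- current pipeline step
    quality   REAL                -- e.g., mean base-call quality, if known
);
""")
rows = [
    (1, "projA", "plaque_picking",      None),
    (2, "projA", "sequencing_reaction", None),
    (3, "projA", "base_calling",        31.5),
    (4, "projB", "template_prep",       None),
]
conn.executemany("INSERT INTO sample VALUES (?, ?, ?, ?)", rows)

# Process queues sorted by project: how many samples await each step, so a
# manager can see what must still be processed to finish each project.
for project, step, n in conn.execute("""
    SELECT project, step, COUNT(*) FROM sample
    GROUP BY project, step ORDER BY project, step"""):
    print(f"{project:6s} {step:20s} {n} sample(s)")
```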