High definition systems in Japan
The successful implementation of a strategy to produce high-definition systems within the Japanese economy will favorably affect the fundamental competitiveness of Japan relative to the rest of the world. The development of an infrastructure necessary to support high-definition products and systems in that country involves major commitments of engineering resources, plants and equipment, educational programs and funding. The results of these efforts appear to affect virtually every aspect of the Japanese industrial complex. The results of assessments of the current progress of Japan toward the development of high-definition products and systems are presented. The assessments are based on the findings of a panel of U.S. experts made up of individuals from U.S. academia and industry, and derived from a study of the Japanese literature combined with visits to the primary relevant industrial laboratories and development agencies in Japan. Specific coverage includes an evaluation of progress in R&D for high-definition television (HDTV) displays that are evolving in Japan; high-definition standards and equipment development; Japanese intentions for the use of HDTV; economic evaluation of Japan's public policy initiatives in support of high-definition systems; management analysis of Japan's strategy of leverage with respect to high-definition products and systems
Bioinformatics advances in saliva diagnostics
There is a need recognized by the National Institute of Dental & Craniofacial Research and the National Cancer Institute to advance basic, translational and clinical saliva research. The goal of the Salivaomics Knowledge Base (SKB) is to create a data management system and web resource constructed to support human salivaomics research. To maximize the utility of the SKB for retrieval, integration and analysis of data, we have developed the Saliva Ontology and SDxMart. This article reviews the informatics advances in saliva diagnostics made possible by the Saliva Ontology and SDxMart.
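As an illustrative sketch only (not the SKB's or the Saliva Ontology's actual tooling), the Python snippet below shows how terms from an OBO-format ontology file might be loaded and searched by keyword; the file name salivaomics.obo is a placeholder.

# Minimal sketch: parse [Term] stanzas from an OBO-format ontology file and
# index them by identifier for keyword lookup. The file name is a placeholder;
# this is not the Saliva Ontology's or SKB's actual implementation.

def load_obo_terms(path):
    terms, current = {}, None
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if line == "[Term]":
                current = {}
            elif not line and current is not None:
                if "id" in current:
                    terms[current["id"]] = current
                current = None
            elif current is not None and ":" in line:
                key, _, value = line.partition(":")
                current.setdefault(key.strip(), value.strip())
    if current and "id" in current:  # file may end without a trailing blank line
        terms[current["id"]] = current
    return terms

def search_terms(terms, keyword):
    keyword = keyword.lower()
    return [t for t in terms.values() if keyword in t.get("name", "").lower()]

if __name__ == "__main__":
    terms = load_obo_terms("salivaomics.obo")  # placeholder path
    for hit in search_terms(terms, "saliva"):
        print(hit["id"], hit["name"])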
Mapping the strand-specific transcriptome of fission yeast
Pervasive genome-wide transcription is widespread in eukaryotic cells, but key features of the transcriptome have yet to be fully characterized. A new study using antibody-based detection of RNA-DNA duplexes on tiling arrays now reveals a complex, strand-specific transcriptional world in fission yeast.
Error, reproducibility and sensitivity: a pipeline for data processing of Agilent oligonucleotide expression arrays
Background
Expression microarrays are increasingly used to obtain large-scale transcriptomic information on a wide range of biological samples. Nevertheless, there is still much debate on the best ways to process data, design experiments and analyse the output. Furthermore, many of the more sophisticated mathematical approaches to data analysis in the literature remain inaccessible to much of the biological research community. In this study we examine ways of extracting and analysing a large data set obtained using the Agilent long oligonucleotide transcriptomics platform, applied to a set of human macrophage and dendritic cell samples.
Results
We describe and validate a series of data extraction, transformation and normalisation steps which are implemented via a new R function. Analysis of replicate normalised reference data demonstrates that intra-array variability is small (only around 2% of the mean log signal), while inter-array variability from replicate array measurements has a standard deviation (SD) of around 0.5 log2 units (around 6% of the mean). The common practice of working with ratios of Cy5/Cy3 signal offers little further improvement in terms of reducing error. Comparison to expression data obtained using Arabidopsis samples demonstrates that the large number of genes in each sample showing a low level of transcription reflects the real complexity of the cellular transcriptome. Multidimensional scaling is used to show that the processed data identifies an underlying structure which reflects some of the key biological variables that define the data set. This structure is robust, allowing reliable comparison of samples collected over a number of years by a variety of operators.
Conclusions
This study outlines a robust and easily implemented pipeline for extracting, transforming, normalising and visualising transcriptomic array data from the Agilent expression platform. The analysis is used to obtain quantitative estimates of the SD arising from experimental (non-biological) intra- and inter-array variability, and a lower threshold for determining whether an individual gene is expressed. The study provides a reliable basis for further, more extensive studies of the systems biology of eukaryotic cells.
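The published pipeline is implemented as an R function; the numpy sketch below only illustrates the kind of steps described (log2 transformation, per-array centring, replicate SD estimation and a lower expression threshold). The threshold value and the synthetic input data are assumptions chosen for illustration.

import numpy as np

# Illustrative sketch only (the paper's pipeline is an R function): log2-transform
# raw signals, median-centre each array, estimate inter-array SD from replicates,
# and flag genes above an assumed lower expression threshold.

def process_arrays(raw, expression_threshold=6.0):
    """raw: genes x arrays matrix of background-corrected signals."""
    log_signal = np.log2(raw + 1.0)                        # stabilise low signals
    centred = log_signal - np.median(log_signal, axis=0)   # per-array median centring
    replicate_sd = centred.std(axis=1, ddof=1)             # inter-array SD per gene
    expressed = log_signal.mean(axis=1) > expression_threshold  # assumed cut-off
    return centred, replicate_sd, expressed

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    raw = rng.lognormal(mean=8.0, sigma=1.5, size=(1000, 4))  # 1000 genes, 4 replicate arrays
    centred, sd, expressed = process_arrays(raw)
    print("median inter-array SD (log2 units):", np.median(sd))
    print("genes called expressed:", int(expressed.sum()))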
MinION Analysis and Reference Consortium: Phase 1 data release and analysis
The advent of a miniaturized DNA sequencing device with a high-throughput contextual sequencing capability embodies the next generation of large-scale sequencing tools. The MinION™ Access Programme (MAP) was initiated by Oxford Nanopore Technologies™ in April 2014, giving public access to their USB-attached miniature sequencing device. The MinION Analysis and Reference Consortium (MARC) was formed by a subset of MAP participants, with the aim of evaluating and providing standard protocols and reference data to the community. Envisaged as a multi-phased project, this study provides the global community with the Phase 1 data from MARC, in which the reproducibility of the performance of the MinION was evaluated at multiple sites. Five laboratories on two continents generated data using a control strain of Escherichia coli K-12, preparing and sequencing samples according to a revised ONT protocol. Here, we provide the details of the protocol used, along with a preliminary analysis of the characteristics of typical runs, including the consistency, rate, volume and quality of data produced. Further analysis of the Phase 1 data presented here, and additional MARC Phase 2 experiments with E. coli, are already underway to identify ways to improve and enhance MinION performance.
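As a hedged illustration of the kind of run-level summary described (read count, yield and quality), the snippet below computes basic statistics from a FASTQ file; it is not MARC's analysis code, the file name is a placeholder, and Phred+33 quality encoding and a simple per-base arithmetic mean quality are assumptions.

# Illustrative run summary for nanopore reads in FASTQ format (not MARC's pipeline).
# Assumes Phred+33 quality encoding; the file name is a placeholder.

def summarise_fastq(path):
    n_reads, total_bases, total_quality = 0, 0, 0
    with open(path) as handle:
        while True:
            header = handle.readline()
            if not header:
                break
            seq = handle.readline().strip()
            handle.readline()                      # '+' separator line
            qual = handle.readline().strip()
            n_reads += 1
            total_bases += len(seq)
            total_quality += sum(ord(c) - 33 for c in qual)
    mean_len = total_bases / n_reads if n_reads else 0.0
    mean_q = total_quality / total_bases if total_bases else 0.0  # arithmetic mean Q
    return n_reads, total_bases, mean_len, mean_q

if __name__ == "__main__":
    reads, bases, mean_len, mean_q = summarise_fastq("minion_run.fastq")  # placeholder
    print(f"{reads} reads, {bases} bases, mean length {mean_len:.0f}, mean Q {mean_q:.1f}")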
Developing and applying heterogeneous phylogenetic models with XRate
Modeling sequence evolution on phylogenetic trees is a useful technique in computational biology. Especially powerful are models which take account of the heterogeneous nature of sequence evolution according to the "grammar" of the encoded gene features. However, beyond a modest level of model complexity, manual coding of models becomes prohibitively labor-intensive. We demonstrate, via a set of case studies, the new built-in model-prototyping capabilities of XRate (macros and Scheme extensions). These features allow rapid implementation of phylogenetic models which would have previously been far more labor-intensive. XRate's new capabilities for lineage-specific models, ancestral sequence reconstruction, and improved annotation output are also discussed. XRate's flexible model-specification capabilities and computational efficiency make it well-suited to developing and prototyping phylogenetic grammar models. XRate is available as part of the DART software package: http://biowiki.org/DART
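XRate expresses such models as phylogenetic grammars; the minimal sketch below is not XRate syntax, but it illustrates the underlying machinery of a substitution rate matrix exponentiated along branches and combined by Felsenstein pruning on a two-leaf tree. The rate value and branch lengths are arbitrary assumptions.

import numpy as np
from scipy.linalg import expm

# Minimal sketch of the machinery behind phylogenetic rate models (not XRate's
# grammar syntax): a Jukes-Cantor-style rate matrix is exponentiated along each
# branch and leaf likelihoods are combined by summing over the root state.

BASES = "ACGT"
rate = 1.0                            # arbitrary substitution rate
Q = np.full((4, 4), rate / 3.0)       # equal exchange rates between bases
np.fill_diagonal(Q, -rate)            # rows sum to zero

def column_likelihood(leaf1, leaf2, t1, t2, pi=None):
    """Likelihood of one alignment column on a two-leaf star tree."""
    pi = np.full(4, 0.25) if pi is None else pi
    P1, P2 = expm(Q * t1), expm(Q * t2)
    i, j = BASES.index(leaf1), BASES.index(leaf2)
    # Sum over the unobserved ancestral state at the root.
    return sum(pi[k] * P1[k, i] * P2[k, j] for k in range(4))

if __name__ == "__main__":
    print("P(A,A | t=0.1, 0.2):", column_likelihood("A", "A", 0.1, 0.2))
    print("P(A,G | t=0.1, 0.2):", column_likelihood("A", "G", 0.1, 0.2))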
Correlated fragile site expression allows the identification of candidate fragile genes involved in immunity and associated with carcinogenesis
Common fragile sites (CFS) are specific regions in the human genome that are particularly prone to genomic instability under conditions of replicative stress. Several investigations support the view that common fragile sites play a role in carcinogenesis. We discuss a genome-wide approach based on graph theory and Gene Ontology vocabulary for the functional characterization of common fragile sites and for the identification of genes that contribute to tumour cell biology. CFS were assembled in a network based on a simple measure of correlation among common fragile site patterns of expression. By applying robust measurements to capture in quantitative terms the non-triviality of the network, we identified several topological features clearly indicating departure from the Erdos-Renyi random graph model. The most important outcome was the presence of an unexpectedly large connected component far below the percolation threshold. Most of the best characterized common fragile sites belonged to this connected component. By filtering this connected component with Gene Ontology, statistically significant shared functional features were detected. Common fragile sites were found to be enriched for genes associated with the immune response and with mechanisms involved in tumour progression, such as extracellular space remodeling and angiogenesis. Our results support the hypothesis that fragile sites serve a function; we propose that fragility is linked to a coordinated regulation of fragile gene expression.
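The sketch below illustrates the general approach described (a correlation network over fragile-site expression profiles, its largest connected component, and a density-matched Erdos-Renyi null), not the authors' actual code; the input data and the correlation threshold are placeholders.

import numpy as np
import networkx as nx

# Illustrative sketch of the described approach (not the authors' code): build a
# network from correlated fragile-site expression profiles, extract the largest
# connected component, and compare it with an Erdos-Renyi null model.

def correlation_network(profiles, threshold=0.7):
    """profiles: sites x samples matrix of fragile-site expression frequencies."""
    corr = np.corrcoef(profiles)
    graph = nx.Graph()
    graph.add_nodes_from(range(len(profiles)))
    for i in range(len(profiles)):
        for j in range(i + 1, len(profiles)):
            if corr[i, j] >= threshold:
                graph.add_edge(i, j)
    return graph

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    profiles = rng.random((80, 20))                       # placeholder expression data
    graph = correlation_network(profiles)
    giant = max(nx.connected_components(graph), key=len)
    p = graph.number_of_edges() / (80 * 79 / 2)           # matched edge density
    null = nx.erdos_renyi_graph(80, p, seed=1)
    null_giant = max(nx.connected_components(null), key=len)
    print("observed giant component:", len(giant), "vs Erdos-Renyi null:", len(null_giant))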
Reactome knowledgebase of human biological pathways and processes
Reactome (http://www.reactome.org) is an expert-authored, peer-reviewed knowledgebase of human reactions and pathways that functions as a data mining resource and electronic textbook. Its current release includes 2975 human proteins, 2907 reactions and 4455 literature citations. A new entity-level pathway viewer and improved search and data mining tools facilitate searching and visualizing pathway data and the analysis of user-supplied high-throughput data sets. Reactome has increased its utility to the model organism communities with improved orthology prediction methods allowing pathway inference for 22 species and through collaborations to create manually curated Reactome pathway datasets for species including Arabidopsis, Oryza sativa (rice), Drosophila and Gallus gallus (chicken). Reactome's data content and software can all be freely used and redistributed under open source terms
Complex exon-intron marking by histone modifications is not determined solely by nucleosome distribution
It has recently been shown that nucleosome distribution, histone modifications and RNA polymerase II (Pol II) occupancy show preferential association with exons (“exon-intron marking”), linking chromatin structure and function to co-transcriptional splicing in a variety of eukaryotes. Previous ChIP-sequencing studies suggested that these marking patterns reflect the nucleosomal landscape. By analyzing ChIP-chip datasets across the human genome in three cell types, we have found that this marking system is far more complex than previously observed. We show here that a range of histone modifications and Pol II are preferentially associated with exons. However, there is noticeable cell-type specificity in the degree of exon marking by histone modifications and, surprisingly, this is also reflected in some histone modification patterns showing biases towards introns. Exon-intron marking is laid down in the absence of transcription on silent genes, with some marking biases changing or becoming reversed for genes expressed at different levels. Furthermore, the relationship of this marking system with splicing is not simple, with only some histone modifications reflecting exon usage/inclusion, while others mirror patterns of exon exclusion. By examining nucleosomal distributions in all three cell types, we demonstrate that these histone modification patterns cannot solely be accounted for by differences in nucleosome levels between exons and introns. In addition, because of inherent differences between ChIP-chip array and ChIP-sequencing approaches, these platforms report different nucleosome distribution patterns across the human genome. Our findings confound existing views and point to active cellular mechanisms which dynamically regulate histone modification levels and account for exon-intron marking. We believe that these histone modification patterns provide links between chromatin accessibility, Pol II movement and co-transcriptional splicing.
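As a hedged sketch of the exon-versus-intron comparison described (not the study's analysis pipeline), the snippet below averages a tiling-array ChIP signal over exon and intron intervals of a single gene; the probe spacing, signal values and interval coordinates are all placeholders.

import numpy as np

# Illustrative sketch only (not the study's pipeline): compare mean ChIP signal
# over exon versus intron intervals for one gene. All data below are placeholders.

def mean_signal(positions, signal, intervals):
    """Average signal for probes falling inside any of the given (start, end) intervals."""
    mask = np.zeros(len(positions), dtype=bool)
    for start, end in intervals:
        mask |= (positions >= start) & (positions < end)
    return float(signal[mask].mean()) if mask.any() else float("nan")

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    positions = np.arange(0, 10_000, 50)                   # tiling probes every 50 bp
    signal = rng.normal(loc=1.0, scale=0.3, size=len(positions))
    exons = [(0, 1_000), (4_000, 5_000), (9_000, 10_000)]  # placeholder gene model
    introns = [(1_000, 4_000), (5_000, 9_000)]
    print("mean exon signal:", mean_signal(positions, signal, exons))
    print("mean intron signal:", mean_signal(positions, signal, introns))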
A Cryogenic Silicon Interferometer for Gravitational-wave Detection
The detection of gravitational waves from compact binary mergers by LIGO has opened the era of gravitational wave astronomy, revealing a previously hidden side of the cosmos. To maximize the reach of the existing LIGO observatory facilities, we have designed a new instrument that will have 5 times the range of Advanced LIGO, or greater than 100 times the event rate. Observations with this new instrument will make possible dramatic steps toward understanding the physics of the nearby universe, as well as observing the universe out to cosmological distances by the detection of binary black hole coalescences. This article presents the instrument design and a quantitative analysis of the anticipated noise floor
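The two quoted figures are consistent because, for sources distributed roughly uniformly in volume, the detectable event rate scales as the cube of the range, so a factor of 5 in range corresponds to a factor of about 125 in rate; the check below simply makes that arithmetic explicit.

# Back-of-envelope check of the quoted numbers: event rate scales roughly as
# the cube of the detection range for volume-distributed sources.
range_factor = 5
rate_factor = range_factor ** 3
print(rate_factor)   # 125, consistent with "greater than 100 times the event rate"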
