Critical parameters for efficient sonication and improved chromatin immunoprecipitation of high molecular weight proteins
Solubilization of cross-linked cells followed by chromatin shearing is essential for successful chromatin immunoprecipitation (ChIP). However, this task, typically accomplished by ultrasound treatment, often becomes a pitfall of the process because seemingly identical conditions can yield inconsistent results between experiments. To address this issue, we systematically studied ultrasound-mediated cell lysis and chromatin shearing, identified the critical parameters of the process, and formulated a generic strategy for rational optimization of ultrasound treatment. We also demonstrated that, whereas the ultrasound treatment required to shear chromatin to within a range of 100–400 bp typically degrades large proteins, a combination of brief sonication and benzonase digestion generates similarly sized chromatin fragments while preserving the integrity of associated proteins. This approach should drastically improve ChIP efficiency for this class of proteins.
Engineering Polymer Informatics
The poster describes a strategy for the development of polymer informatics: in particular, the development of a polymer markup language, a polymer ontology, and natural language processing tools for the polymer literature.
Mining chemical information from Open patents
RIGHTS: This article is licensed under the BioMed Central licence at http://www.biomedcentral.com/about/license which is similar to the 'Creative Commons Attribution Licence'. In brief you may: copy, distribute, and display the work; make derivative works; or make commercial use of the work, under the following conditions: the original author must be given credit, and for any reuse or distribution it must be made clear to others what the license terms of this work are.

Abstract: Linked Open Data presents an opportunity to vastly improve the quality of science in all fields by increasing the availability and usability of the data upon which it is based. In the chemical field, there is a huge amount of information available in the published literature, the vast majority of which is not available in machine-understandable formats. PatentEye, a prototype system for the extraction and semantification of chemical reactions from the patent literature, has been implemented and is discussed. A total of 4,444 reactions were extracted from 667 patent documents comprising 10 weeks' worth of publications from the European Patent Office (EPO), with a precision of 78% and recall of 64% with regard to determining the identity and amount of reactants employed, and an accuracy of 92% with regard to product identification. NMR spectra reported as product characterisation data are additionally captured.

Peer Reviewed
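The reported precision (78%) and recall (64%) for reactant extraction can be combined into a single F1 figure; this is a routine computation added for reference, not a number from the paper:

```python
# F1 is the harmonic mean of precision and recall.
def f1(precision, recall):
    return 2 * precision * recall / (precision + recall)

print(round(f1(0.78, 0.64), 3))  # ≈ 0.703 for PatentEye's reactant figures
```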
Re-evaluating the place of urban planning history
This commentary seeks to prompt new discussion about the place of urban planning history in the era of contemporary globalisation. Given the deep historic engagement of urban planning thought and practice with ‘place’ shaping and thus with the constitution of society, culture and politics, we ask how relevant is planning's legacy to the shaping of present day cities. Late twentieth century urban sociology, cultural and economic geography have demonstrated the increasing significance of intercity relations and the functional porosity of metropolitan boundaries in the network society, however statutory urban planning systems remain tied to the administrative geographies of states. This ‘territorial fixing’ of practice constrains the operational space of planning and, we argue, also limits its vision to geopolitical scales and agendas that have receding relevance for emerging urban relations. We propose that a re-evaluation of planning history could have an important part to play in addressing this spatial conundrum
ChemicalTagger: A tool for semantic text-mining in chemistry.
BACKGROUND: The primary method of scientific communication is published scientific articles and theses, which use natural language combined with domain-specific terminology; as such, they contain free-flowing unstructured text. Given the usefulness of data extraction from unstructured literature, we aim to show how this can be achieved for the discipline of chemistry. The highly formulaic style of writing most chemists adopt makes their contributions well suited to high-throughput Natural Language Processing (NLP) approaches. RESULTS: We have developed the ChemicalTagger parser as a medium-depth, phrase-based semantic NLP tool for the language of chemical experiments. Tagging is based on a modular architecture and uses a combination of OSCAR, domain-specific regex and English taggers to identify parts of speech. An ANTLR grammar is used to structure this into tree-based phrases. Using a metric that allows for overlapping annotations, we achieved machine-annotator agreements of 88.9% for phrase recognition and 91.9% for phrase-type identification (Action names). CONCLUSIONS: It is possible to parse chemical experimental text using rule-based techniques in conjunction with a formal grammar parser. ChemicalTagger has been deployed over more than 10,000 patents and has identified solvents from their linguistic context with >99.5% precision.
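The general approach, dictionary and regex taggers assigning token classes that a shallow grammar then groups, can be illustrated with a toy sketch. This is not ChemicalTagger's API or resources: the word lists, tag names, and the "chemical preceded by 'in' is a solvent" heuristic are all invented for illustration.

```python
import re

# Illustrative word lists and tags -- not ChemicalTagger's actual resources.
CHEMICALS = {"thf", "ethanol", "toluene", "dcm", "water"}
ACTIONS = {"add", "stir", "heat", "wash"}   # action-verb stems
UNITS = {"ml", "mg", "g", "mmol"}
NUMBER = re.compile(r"^\d+(?:\.\d+)?$")

def tag(text):
    """Assign a coarse tag to each token: CM (chemical mention), VB (action
    verb), CD (number), UNIT, or NN (anything else)."""
    tagged = []
    for tok in re.findall(r"[A-Za-z]+|\d+(?:\.\d+)?", text.lower()):
        if NUMBER.match(tok):
            tagged.append((tok, "CD"))
        elif tok in UNITS:
            tagged.append((tok, "UNIT"))
        elif tok in CHEMICALS:
            tagged.append((tok, "CM"))
        elif any(tok.startswith(stem) for stem in ACTIONS):
            tagged.append((tok, "VB"))
        else:
            tagged.append((tok, "NN"))
    return tagged

def solvents(tagged):
    # Toy stand-in for "solvents from linguistic context":
    # a chemical mention immediately preceded by "in".
    return [w for (prev, _), (w, t) in zip(tagged, tagged[1:])
            if t == "CM" and prev == "in"]
```

For example, `solvents(tag("The mixture was stirred in THF (10 ml) and heated"))` picks out `thf` from its context, while the standalone amount `10 ml` is tagged but not treated as a solvent.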
T-Duality: Topology Change from H-flux
T-duality acts on circle bundles by exchanging the first Chern class with the fiberwise integral of the H-flux, as we motivate using E_8 and also using S-duality. We present known and new examples including NS5-branes, nilmanifolds, Lens spaces, both circle bundles over RP^n, and the AdS^5 x S^5 to AdS^5 x CP^2 x S^1 with background H-flux of Duff, Lu and Pope. When T-duality leads to M-theory on a non-spin manifold, the gravitino partition function continues to exist due to the background flux; however, the known quantization condition for G_4 fails. In a more general context, we use correspondence spaces to implement isomorphisms on the twisted K-theories and twisted cohomology theories and to study the corresponding Grothendieck-Riemann-Roch theorem. Interestingly, in the case of decomposable twists, both twisted theories admit fusion products and so are naturally rings.

Comment: 36 pages, latex2e, uses xypic package. Made only a few superficial changes in the manuscript.
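Schematically, for a principal circle bundle pi: E -> M with H-flux H and T-dual data (E-hat, H-hat), the exchange of the first Chern class with the fiberwise integral of the flux described in the abstract can be written as

```latex
c_1(\hat{E}) = \pi_{*} H , \qquad \hat{\pi}_{*} \hat{H} = c_1(E) ,
```

where pi_* denotes integration over the circle fibre. This is a schematic restatement of the abstract's rule, not a formula quoted from the paper.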
An Algorithm for Fitting Mixtures of Gompertz Distributions to Censored Survival Data
We consider the fitting of a mixture of two Gompertz distributions to censored survival data. This model is applicable where there are two distinct causes of failure that act in a mutually exclusive manner and the baseline failure time for each cause follows a Gompertz distribution. For example, in a study of a disease such as breast cancer, suppose that failure corresponds to death, whose cause is attributed either to breast cancer or to some other cause. In this example, the mixing proportion for the component of the mixture representing time to death from a cause other than breast cancer may be interpreted as the cure rate for breast cancer (Gordon, '90a and '90b). This Gompertz mixture model, whose components are adjusted multiplicatively to reflect the age of the patient at the origin of the survival time, is fitted by maximum likelihood via the EM algorithm (Dempster, Laird and Rubin, '77). There is provision to handle the case where the mixing proportions are formulated in terms of a logistic model so as to depend on a vector of covariates associated with each survival time. The algorithm can also handle the case where there is only one cause of failure, which may, however, happen at infinity for some patients with nonzero probability (Farewell, '82).
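A minimal numerical sketch of this fitting scheme, assuming the Gompertz hazard h(t) = b e^{ct}. It is not the authors' implementation: the logistic-covariate model and the multiplicative age adjustment are omitted, the starting values are arbitrary, and a profile grid search over c (with the closed-form stationary b given c) stands in for a proper M-step optimizer.

```python
import numpy as np

def gompertz_loglik_terms(t, delta, b, c):
    # log f(t) = log b + c t - (b/c)(e^{ct} - 1);  log S(t) = -(b/c)(e^{ct} - 1)
    # delta selects the density term for observed failures, survival for censored.
    ec = np.expm1(c * t)
    return delta * (np.log(b) + c * t) - (b / c) * ec

def mstep_component(t, delta, tau, c_grid):
    """Weighted ML for one Gompertz component: given c, the stationarity
    condition in b has a closed form, so profile the weighted log-likelihood
    over a grid of c values."""
    best = (-np.inf, None, None)
    for c in c_grid:
        ec = np.expm1(c * t)
        b = (tau @ delta) * c / (tau @ ec)     # closed-form b given c
        ll = np.sum(tau * gompertz_loglik_terms(t, delta, b, c))
        if ll > best[0]:
            best = (ll, b, c)
    return best[1], best[2]

def em_gompertz_mixture(t, delta, n_iter=30,
                        c_grid=np.linspace(0.02, 3.0, 150)):
    """EM for a two-component Gompertz mixture under right censoring.
    t: observed times; delta: 1 = failure observed, 0 = censored."""
    pi, params = 0.5, [(0.1, 0.5), (0.5, 1.5)]  # arbitrary starting values
    for _ in range(n_iter):
        # E-step: posterior component probabilities (responsibilities);
        # censored observations contribute through S(t) rather than f(t).
        logw = np.vstack([gompertz_loglik_terms(t, delta, b, c)
                          for b, c in params])
        logw += np.log([pi, 1.0 - pi])[:, None]
        tau = np.exp(logw - np.logaddexp(logw[0], logw[1]))
        # M-step: update the mixing proportion and each component's (b, c).
        pi = tau[0].mean()
        params = [mstep_component(t, delta, tau[j], c_grid) for j in range(2)]
    return pi, params
```

The E-step uses the density for observed failures and the survival function for censored times, which is exactly where the censoring indicator enters; the mixing proportion update is the mean responsibility, as in any finite-mixture EM.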
OSCAR4: a flexible architecture for chemical text-mining
The Open-Source Chemistry Analysis Routines (OSCAR) software, a toolkit for the recognition of named entities and data in chemistry publications, has been developed since 2002. Recent work has resulted in the separation of the core OSCAR functionality and its release as the OSCAR4 library. This library features a modular API (based on reduction of surface coupling) that permits client programmers to easily incorporate it into external applications. OSCAR4 offers a domain-independent architecture upon which chemistry-specific text-mining tools can be built, and its development and usage are discussed.

Peer Reviewed
Renormalization-Scale-Invariant PQCD Predictions for R_e+e- and the Bjorken Sum Rule at Next-to-Leading Order
We discuss the application of the physical QCD effective charge alpha_V, defined via the heavy-quark potential, in perturbative calculations at next-to-leading order. When coupled with the Brodsky-Lepage-Mackenzie prescription for fixing the renormalization scales, the resulting series are automatically and naturally scale and scheme independent, and represent unambiguous predictions of perturbative QCD. We consider in detail such commensurate scale relations for the annihilation ratio R_e+e- and the Bjorken sum rule. In both cases the improved predictions are in excellent agreement with experiment.

Comment: 13 Latex pages with 5 figures; to be published in Physical Review
