
    Repeatability and Reproducibility of Decisions by Latent Fingerprint Examiners

    The interpretation of forensic fingerprint evidence relies on the expertise of latent print examiners. We tested latent print examiners on the extent to which they reached consistent decisions. This study assessed intra-examiner repeatability by retesting 72 examiners on comparisons of latent and exemplar fingerprints after an interval of approximately seven months; each examiner was reassigned 25 image pairs for comparison, out of a total pool of 744 image pairs. We compared these repeatability results with the reproducibility (inter-examiner) results derived from our previous study. Examiners repeated 89.1% of their individualization decisions and 90.1% of their exclusion decisions; most of the changed decisions resulted in inconclusive decisions. Repeatability of comparison decisions (individualization, exclusion, inconclusive) was 90.0% for mated pairs and 85.9% for nonmated pairs. Repeatability and reproducibility were notably lower for comparisons assessed by the examiners as "difficult" than for "easy" or "moderate" comparisons, indicating that examiners' assessments of difficulty may be useful for quality assurance. No false positive errors were repeated (n = 4); 30% of false negative errors were repeated. One percent of latent value decisions were completely reversed (no value even for exclusion vs. of value for individualization). Most of the inter- and intra-examiner variability concerned whether the examiners considered the information available to be sufficient to reach a conclusion; this variability was concentrated on specific image pairs, such that repeatability and reproducibility were very high on some comparisons and very low on others. Much of the variability appears to be due to making categorical decisions in borderline cases.

    Quantitative assessment of the expanding complementarity between public and commercial databases of bioactive compounds

    Background: Since 2004, public cheminformatic databases and their collective functionality for exploring relationships between compounds, protein sequences, literature and assay data have advanced dramatically. In parallel, commercial sources that extract and curate such relationships from journals and patents have also been expanding. This work updates a previous comparative study of databases chosen for their bioactive content, availability of downloads and facility to select informative subsets.
    Results: Where they could be calculated, extracted compounds per journal article were in the range of 12 to 19, but compound-per-protein counts increased with document numbers. Chemical structure filtration to facilitate standardised comparisons typically reduced source counts by between 5% and 30%. The pair-wise overlaps between 23 databases and subsets were determined, as well as changes between 2006 and 2008. While all compound sets have increased, PubChem has doubled to 14.2 million. The 2008 comparison matrix shows not only overlap but also unique content across all sources. Many of the detailed differences could be attributed to individual strategies for data selection and extraction. While there was a large increase in patent-derived structures entering PubChem since 2006, GVKBIO contains over 0.8 million unique structures from this source. Venn diagrams showed extensive overlap between compounds extracted by independent expert curation from journals by GVKBIO, WOMBAT (both commercial) and BindingDB (public), but each included unique content. In contrast, the approved drug collections from GVKBIO, MDDR (commercial) and DrugBank (public) showed surprisingly low overlap. Aggregating all commercial sources established that while 1 million compounds overlapped with PubChem, 1.2 million did not.
    Conclusion: On the basis of chemical structure content per se, public sources have covered an increasing proportion of commercial databases over the last two years. However, the commercial products included in this study provide links between compounds and information from patents and journals at a larger scale than current public efforts. They also continue to capture a significant proportion of unique content. Our results thus demonstrate not only an encouraging overall expansion of data-supported bioactive chemical space but also that commercial and public sources are complementary for its exploration.
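The pairwise-overlap analysis described in this abstract reduces to set intersections once every source is normalised to a common structure representation. A minimal sketch in Python (the database names are from the abstract, but the toy identifiers and function names are hypothetical; a real comparison would use standardised structure keys such as InChIKeys after the filtration step):

```python
from itertools import combinations

def overlap_matrix(databases):
    """Pairwise compound overlaps between named sources.

    `databases` maps a source name to a set of standardised structure
    identifiers. Returns {(a, b): number of shared identifiers}.
    """
    return {
        (a, b): len(databases[a] & databases[b])
        for a, b in combinations(sorted(databases), 2)
    }

def unique_content(databases, name):
    """Identifiers found only in `name` and in no other source."""
    others = set().union(*(s for k, s in databases.items() if k != name))
    return databases[name] - others

# Toy example with hypothetical identifiers:
dbs = {
    "PubChem": {"A", "B", "C", "D"},
    "GVKBIO":  {"B", "C", "E"},
    "WOMBAT":  {"C", "F"},
}
print(overlap_matrix(dbs))            # ('GVKBIO', 'PubChem') -> 2 shared
print(unique_content(dbs, "GVKBIO"))  # {'E'}
```

The same intersection counts feed directly into a Venn diagram or the kind of 2008 comparison matrix the study reports.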

    Use of chromatin immunoprecipitation (ChIP) to detect transcription factor binding to highly homologous promoters in chromatin isolated from unstimulated and activated primary human B cells

    Chromatin immunoprecipitation (ChIP) provides a powerful technique for identifying the in vivo association of transcription factors with regulatory elements. However, obtaining meaningful information for promoter interactions is extremely challenging when the promoter is a member of a class of highly homologous elements. Use of PCR primers with small numbers of mutations can limit cross-hybridization with non-targeted sequences and distinguish a pattern of binding for factors with the regulatory element of interest. In this report, we demonstrate the selective in vivo association of NF-κB, p300 and CREB with the human Iγ1 promoter located in the intronic region upstream of the Cγ1 exons in the immunoglobulin heavy chain locus. These methods have the ability to extend ChIP analysis to promoters with a high degree of homology.

    Chronic Subdural Haematoma in the Elderly: Is It Time for a New Paradigm in Management?

    Chronic subdural haematoma (CSDH) is a common neurological condition that usually affects the elderly. The optimal treatment strategy remains uncertain, principally because of the lack of a good evidence base. In this paper, we review the literature concerning the peri-operative and operative care of patients. In particular, we highlight the non-surgical aspects of care that might impact on patient outcomes and CSDH recurrence. We propose that an integrated approach to care in patients with CSDH, similar to the care of fragility fractures in the elderly, may be an important strategy to improve patient care and outcomes.

    Visualizing Escherichia coli Sub-Cellular Structure Using Sparse Deconvolution Spatial Light Interference Tomography

    Studying the 3D sub-cellular structure of living cells is essential to our understanding of biological function. However, tomographic imaging of live cells is challenging mainly because they are transparent, i.e., weakly scattering structures. Therefore, this type of imaging has been implemented largely using fluorescence techniques. While confocal fluorescence imaging is a common approach to achieve sectioning, it requires fluorescence probes that are often harmful to the living specimen. On the other hand, by using the intrinsic contrast of the structures it is possible to study living cells in a non-invasive manner. One method that provides high-resolution quantitative information about nanoscale structures is a broadband interferometric technique known as Spatial Light Interference Microscopy (SLIM). In addition to rendering quantitative phase information, when combined with a high numerical aperture objective, SLIM also provides excellent depth sectioning capabilities. However, as in all linear optical systems, SLIM's resolution is limited by diffraction. Here we present a novel 3D field deconvolution algorithm that exploits the sparsity of phase images and renders images with resolution beyond the diffraction limit. We employ this label-free method, called deconvolution Spatial Light Interference Tomography (dSLIT), to visualize coiled sub-cellular structures in E. coli cells, which are most likely the cytoskeletal MreB protein and the division-site-regulating MinCDE proteins. Previously, these structures had only been observed using specialized strains and plasmids and fluorescence techniques. Our results indicate that dSLIT can be employed to study such structures in a practical and non-invasive manner.
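The core idea of sparsity-exploiting deconvolution can be illustrated with a minimal 1-D sketch: iterative soft-thresholding (ISTA) applied to a signal blurred by a known point-spread function. This is a generic sparse-deconvolution sketch, not the authors' dSLIT algorithm; the 3-D complex-field version and all parameter values here are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the L1 norm: shrink values toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_deconvolve(y, psf, lam=0.001, n_iter=500):
    """Recover a sparse signal x from y = psf (*) x (circular convolution)
    by minimising 0.5*||Hx - y||^2 + lam*||x||_1 with ISTA."""
    H = np.fft.fft(psf)
    step = 1.0 / np.max(np.abs(H)) ** 2  # 1 / Lipschitz constant of data term
    x = np.zeros_like(y)
    for _ in range(n_iter):
        Hx = np.real(np.fft.ifft(np.fft.fft(x) * H))
        grad = np.real(np.fft.ifft(np.fft.fft(Hx - y) * np.conj(H)))
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Simulate two nearby point sources blurred by a Gaussian PSF.
n = 64
x_true = np.zeros(n); x_true[20] = 1.0; x_true[24] = 0.8
psf = np.exp(-0.5 * ((np.arange(n) - n // 2) / 2.0) ** 2)
psf /= psf.sum()
psf = np.roll(psf, -n // 2)  # centre the PSF at index 0
y = np.real(np.fft.ifft(np.fft.fft(x_true) * np.fft.fft(psf)))
x_hat = ista_deconvolve(y, psf)
```

The L1 penalty is what pushes the estimate beyond plain inverse filtering: because the true object is assumed sparse, energy that diffraction spread across neighbouring samples is concentrated back into a few coefficients.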

    Seven Golden Rules for heuristic filtering of molecular formulas obtained by accurate mass spectrometry

    BACKGROUND: Structure elucidation of unknown small molecules by mass spectrometry is a challenge despite advances in instrumentation. The first crucial step is to obtain correct elemental compositions. In order to automatically constrain the thousands of possible candidate structures, rules need to be developed to select the most likely and chemically correct molecular formulas. RESULTS: An algorithm for filtering molecular formulas is derived from seven heuristic rules: (1) restrictions on the number of elements, (2) LEWIS and SENIOR chemical rules, (3) isotopic patterns, (4) hydrogen/carbon ratios, (5) element ratios of nitrogen, oxygen, phosphorus and sulphur versus carbon, (6) element ratio probabilities and (7) presence of trimethylsilylated compounds. Formulas are ranked according to their isotopic patterns and subsequently constrained by presence in public chemical databases. The seven rules were developed on 68,237 existing molecular formulas and were validated in four experiments. First, 432,968 formulas covering five million PubChem database entries were checked for consistency. Only 0.6% of these compounds did not pass all rules. Next, the rules were shown to effectively reduce the eight billion theoretically possible C, H, N, S, O, P formulas up to 2000 Da to only 623 million most probable elemental compositions. Third, 6,000 pharmaceutical, toxic and natural compounds were selected from the DrugBank, TSCA and DNP databases. The correct formulas were retrieved as the top hit at 80–99% probability when assuming data acquisition with complete resolution of unique compounds, 5% absolute isotope ratio deviation and 3 ppm mass accuracy. Last, some exemplary compounds were analyzed by Fourier transform ion cyclotron resonance mass spectrometry and by gas chromatography–time of flight mass spectrometry. In each case, the correct formula was ranked as the top hit when combining the seven rules with database queries.
    CONCLUSION: The seven rules enable an automatic exclusion of molecular formulas which are either wrong or which contain an unlikely high or low number of elements. The correct molecular formula is assigned with a probability of 98% if the formula exists in a compound database. For truly novel compounds that are not present in databases, the correct formula is found in the first three hits with a probability of 65–81%. Corresponding software and supplemental data are available for download from the authors' website.
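Heuristic filtering of this kind is straightforward to prototype. The sketch below applies simplified versions of three of the rules named above (element-count/ratio restrictions and a ring-plus-double-bond check) to a formula given as an element-count dict; the numeric thresholds are illustrative assumptions, not the exact published cut-offs, and real use would add the isotopic-pattern and TMS rules.

```python
def passes_basic_rules(formula):
    """Simplified versions of rules 1, 2, 4 and 5 for a formula dict,
    e.g. {"C": 6, "H": 12, "O": 6}. Thresholds are illustrative only."""
    c = formula.get("C", 0)
    h = formula.get("H", 0)
    if c == 0:
        return False  # restrict to organic CH-containing candidates
    # Rule 4: hydrogen/carbon ratio within a common chemical range.
    if not 0.2 <= h / c <= 3.1:
        return False
    # Rule 5: heteroatom-to-carbon ratio limits (N, O, P, S vs. C).
    limits = {"N": 1.3, "O": 1.2, "P": 0.3, "S": 0.8}
    for element, limit in limits.items():
        if formula.get(element, 0) / c > limit:
            return False
    # Rule 2 (LEWIS/SENIOR-style check): rings-plus-double-bonds
    # equivalent must be non-negative for a valid neutral molecule.
    rdbe = c - h / 2 + formula.get("N", 0) / 2 + 1
    return rdbe >= 0

print(passes_basic_rules({"C": 6, "H": 12, "O": 6}))  # glucose -> True
print(passes_basic_rules({"C": 2, "H": 9}))           # H/C too high -> False
```

Applied as a pre-filter, cheap checks like these discard most chemically implausible candidates before the more expensive isotopic-pattern ranking and database lookups.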

    Changes to the Fossil Record of Insects through Fifteen Years of Discovery

    The first and last occurrences of hexapod families in the fossil record are compiled from publications up to end-2009. The major features of these data are compared with those of previous datasets (1993 and 1994). About a third of families (>400) are new to the fossil record since 1994, over half of the earlier, existing families have experienced changes in their known stratigraphic range, and only about ten percent have unchanged ranges. Despite these significant additions to knowledge, the broad pattern of described richness through time remains similar, with described richness increasing steadily through geological history and a shift in dominant taxa, from Palaeoptera and Polyneoptera to Paraneoptera and Holometabola, after the Palaeozoic. However, after detrending, described richness is not well correlated with the earlier datasets, indicating significant changes in shorter-term patterns. There is reduced Palaeozoic richness, peaking at a different time, and a less pronounced Permian decline. A pronounced Triassic peak and decline are shown, and the plateau from the mid Early Cretaceous to the end of the period remains, albeit at substantially higher richness compared to earlier datasets. Origination and extinction rates are broadly similar to before, with a broad decline in both through time but episodic peaks, including end-Permian turnover. Origination more consistently exceeds extinction compared to previous datasets, and the exceptions are mainly in the Palaeozoic. These changes suggest that some inferences about causal mechanisms in insect macroevolution are likely to differ as well.