
    Semiempirical Molecular Orbital Calculations of Ferrocene.


    Autonomous, Collaborative, Unmanned Aerial Vehicles for Search and Rescue

    Search and Rescue is a vitally important subject, and one which can be improved through the use of modern technology. This work presents a number of advances aimed towards the creation of a swarm of autonomous, collaborative, unmanned aerial vehicles for land-based search and rescue. The main advances are the development of a diffusion-based search strategy for route planning, research into GPS (including the Durham Tracker Project and statistical research into altitude errors), and the creation of a relative positioning system (including discussion of the errors caused by fast-moving units). Overviews are also given of the current state of research into both UAVs and Search and Rescue.
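The diffusion-based search strategy mentioned in the abstract can be illustrated with a minimal sketch: a probability-of-target grid is repeatedly smoothed (a discrete diffusion step), and a UAV greedily moves toward the neighbouring cell with the highest diffused value. The grid, mixing rate, and greedy neighbour rule are illustrative assumptions, not the thesis's actual algorithm.

```python
import numpy as np

def diffuse(prob, steps=1, alpha=0.25):
    """Spread probability mass over a 2D grid.

    Each pass mixes every cell with its four neighbours, a discrete
    heat-equation step. Edges wrap around for simplicity, which keeps
    total probability conserved.
    """
    p = prob.astype(float)
    for _ in range(steps):
        up    = np.roll(p, -1, axis=0)
        down  = np.roll(p,  1, axis=0)
        left  = np.roll(p, -1, axis=1)
        right = np.roll(p,  1, axis=1)
        p = (1 - alpha) * p + alpha * (up + down + left + right) / 4
    return p

def next_cell(p, pos):
    """Greedy route planning: move to the in-bounds neighbour with the
    highest diffused probability, then mark that cell as searched."""
    r, c = pos
    best, best_pos = -1.0, pos
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < p.shape[0] and 0 <= nc < p.shape[1] and p[nr, nc] > best:
            best, best_pos = p[nr, nc], (nr, nc)
    p[best_pos] = 0.0   # searched cells no longer attract the UAV
    return best_pos
```

Diffusion lets high-probability regions "pull" a distant UAV towards them, since their mass leaks into surrounding cells that the greedy step can follow.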

    Stand-off 3D face imaging and vibrometry for biometric identification using digital holography

    Lockheed Martin Coherent Technologies (LMCT) has demonstrated 3D face imaging at ~1-2 mm lateral resolution and range precision at stand-off distances up to 100 m using digital holography. LMCT has also demonstrated the digital holography technique in a multi-pixel vibrometry mode in the laboratory. In this paper, we report on 3D face imaging using multiple-source (MS) and multiple-wavelength (MW) digital holography breadboards. We will briefly discuss the theory of 3D imaging using MS and MW digital holography with references to the literature. We will also briefly discuss the theory of vibrometry using a digital holographic setup. We then describe our implementation of these techniques in breadboard setups operating at 1550 nm wavelength (for MS digital holography) and at wavelengths near 1617 nm (for MW digital holography). We also present experimental results for 3D imaging and for vibrometry with these digital holographic setups.
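In two-wavelength (MW) digital holography, depth is recovered from the phase difference between holograms at two nearby wavelengths, and the unambiguous range is set by the synthetic wavelength Λ = λ₁λ₂ / |λ₁ − λ₂|. The abstract does not state the wavelength separation used, so the pair below is a hypothetical example near the quoted 1617 nm:

```python
def synthetic_wavelength(lam1, lam2):
    """Synthetic wavelength (metres) for two-wavelength holography:
    the closer the two wavelengths, the longer the unambiguous range."""
    return lam1 * lam2 / abs(lam1 - lam2)

# Hypothetical pair near 1617 nm (the paper's actual separation is not given):
lam1, lam2 = 1617.0e-9, 1617.5e-9
print(synthetic_wavelength(lam1, lam2))  # ≈ 5.2e-3 m, i.e. ~5.2 mm
```

A 0.5 nm separation thus gives a few-millimetre unambiguous depth window, consistent in scale with the ~1-2 mm range precision quoted above.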

    Citizen participation in news

    The process of producing news has changed significantly due to the advent of the Web, which has enabled the increasing involvement of citizens in news production. This trend has been given many names, including participatory journalism, produsage, and crowd-sourced journalism, but these terms are ambiguous and have been applied inconsistently, making comparison of news systems difficult. In particular, it is problematic to distinguish the levels of citizen involvement, and therefore the extent to which news production has genuinely been opened up. In this paper we perform an analysis of 32 online news systems, comparing them in terms of how much power they give to citizens at each stage of the news production process. Our analysis reveals a diverse landscape of news systems and shows that they defy simplistic categorisation, but it also provides the means to compare different approaches in a systematic and meaningful way. We combine this with four case studies of individual stories to explore the ways that news stories can move and evolve across this landscape. Our conclusions are that online news systems are complex and interdependent, and that most do not involve citizens to the extent that the terms used to describe them imply.

    Quantifying single nucleotide variant detection sensitivity in exome sequencing

    BACKGROUND: The targeted capture and sequencing of genomic regions has rapidly demonstrated its utility in genetic studies. Inherent in this technology is considerable heterogeneity of target coverage, and this is expected to systematically impact our sensitivity to detect genuine polymorphisms. To fully interpret the polymorphisms identified in a genetic study it is often essential both to detect polymorphisms and to understand where, and with what probability, real polymorphisms may have been missed. RESULTS: Using down-sampling of 30 deeply sequenced exomes and a set of gold-standard single nucleotide variant (SNV) genotype calls for each sample, we developed an empirical model relating the read depth at a polymorphic site to the probability of calling the correct genotype at that site. We find that measured sensitivity in SNV detection is substantially worse than that predicted from the naive expectation of sampling from a binomial. This calibrated model allows us to produce single nucleotide resolution SNV sensitivity estimates which can be merged to give summary sensitivity measures for any arbitrary partition of the target sequences (nucleotide, exon, gene, pathway, exome). These metrics are directly comparable between platforms and can be combined between samples to give “power estimates” for an entire study. We estimate a local read depth of 13X is required to detect the alleles and genotype of a heterozygous SNV 95% of the time, but only 3X for a homozygous SNV. At a mean on-target read depth of 20X, commonly used for rare disease exome sequencing studies, we predict 5–15% of heterozygous and 1–4% of homozygous SNVs in the targeted regions will be missed. CONCLUSIONS: Non-reference alleles in the heterozygous state have a high chance of being missed when commonly applied read coverage thresholds are used, despite the widely held assumption that there is good polymorphism detection at these coverage levels. Such alleles are likely to be of functional importance in population-based studies of rare diseases, somatic mutations in cancer, and explaining the “missing heritability” of quantitative traits.
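The "naive expectation of sampling from a binomial" that the abstract contrasts against can be made concrete: at depth d, a heterozygous site yields alternate-allele reads with probability 0.5 per read, and detection requires some minimum number of them. The min_alt=2 threshold below is an illustrative assumption; the paper's empirical genotype-calling model is more involved, which is why its 13X estimate exceeds this idealised calculation.

```python
from math import comb

def p_detect_het(depth, min_alt=2, p_alt=0.5):
    """Naive binomial model: probability a heterozygous SNV shows at
    least `min_alt` alternate-allele reads at a given read depth."""
    p_miss = sum(comb(depth, k) * p_alt**k * (1 - p_alt)**(depth - k)
                 for k in range(min_alt))
    return 1 - p_miss

for d in (3, 8, 13, 20):
    print(d, round(p_detect_het(d), 4))
```

Under this idealised model, depth 13 already gives a detection probability above 0.99, yet the paper measures only ~95% sensitivity at 13X, illustrating how much worse real data behave than the binomial prediction.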

    Statistically-Estimated Tree Composition for the Northeastern United States at Euro-American Settlement

    We present a gridded 8 km-resolution data product of the estimated composition of tree taxa at the time of Euro-American settlement of the northeastern United States, and the statistical methodology used to produce the product from trees recorded by land surveyors. Composition is defined as the proportion of stems larger than approximately 20 cm diameter at breast height for 22 tree taxa, generally at the genus level. The data come from settlement-era public survey records that are transcribed and then aggregated spatially, giving count data. The domain is divided into two regions, eastern (Maine to Ohio) and midwestern (Indiana to Minnesota). Public Land Survey point data in the midwestern region (ca. 0.8-km resolution) are aggregated to a regular 8 km grid, while data in the eastern region, from Town Proprietor Surveys, are aggregated at the township level in irregularly-shaped local administrative units. The product is based on a Bayesian statistical model fit to the count data that estimates composition on the 8 km grid across the entire domain. The statistical model is designed to handle data from both the regular grid and the irregularly-shaped townships, and allows us to estimate composition at locations with no data and to smooth over noise caused by limited counts in locations with data. Critically, the model also allows us to quantify uncertainty in our composition estimates, making the product suitable for applications employing data assimilation. We expect this data product to be useful for understanding the state of vegetation in the northeastern United States prior to large-scale Euro-American settlement. In addition to specific regional questions, the data product can also serve as a baseline against which to investigate how forests and ecosystems change after intensive settlement. The data product is being made available at the NIS data portal as version 1.0.
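The core idea of estimating composition from counts while quantifying uncertainty can be sketched for a single grid cell with a Dirichlet-multinomial model. This is a deliberately simplified stand-in: the paper's actual Bayesian model also smooths spatially across cells and handles irregular townships, which this per-cell sketch does not.

```python
import numpy as np

def composition_posterior(counts, alpha=1.0, n_draws=2000, rng=None):
    """Per-cell Dirichlet-multinomial sketch (not the paper's full
    spatial model): posterior mean taxon proportions and 95% credible
    intervals given surveyed tree counts for one grid cell."""
    rng = rng or np.random.default_rng(0)
    post = np.asarray(counts, float) + alpha   # Dirichlet posterior parameters
    draws = rng.dirichlet(post, size=n_draws)  # samples of the composition
    mean = post / post.sum()                   # analytic posterior mean
    lo, hi = np.percentile(draws, [2.5, 97.5], axis=0)
    return mean, lo, hi
```

With small counts the credible intervals widen automatically, which is the behaviour that makes uncertainty-aware products suitable for data assimilation.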

    Team Update on North American Proton Facilities for Radiation Testing

    In the wake of the closure of the Indiana University Cyclotron Facility (IUCF), this presentation provides an overview of the options for North American proton facilities. This includes those in use by the aerospace community as well as new additions from the cancer therapy regime. In addition, proton single event testing background is provided for understanding the criteria these facilities must meet for electronics testing.