
    Basic hydrogeologic and remote sensing data for selection of sanitary landfill sites

    Solid waste disposal was studied in Volusia County to protect the area's water supply. Highlands in this county are of limited areal extent and, most significantly, the sand hills and ridges lie in areas where recharge of the Floridan aquifer occurs. The study shows that well-drained soils meeting the current State requirements are of limited areal extent, and that these areas should not be used as sanitary landfill sites. Rather, it is recommended that the Tomoka Farm Road site be extended into the adjacent wetlands. The County site on Rima Ridge, recommended by Greenleaf-Telesca as the primary waste burial site in the County, should be re-evaluated because of the potential danger to the Daytona Beach water supply.

    Hydraulics and geology related to beach restoration in Lee County, Florida

    The erosion problem on Captiva Island is discussed. It is due to a deficit in the sand budget of the littoral drift system, a system with losses due to attrition of the particles and mass losses into the lagoons, offshore, and to lateral transport. The effect of reopening Blind Pass, and of placing sediment-retaining structures in the surf zone at the northern and southern limits of the Captiva beach system, was examined. A geological approach was used to study the origin of the island and the dynamic changes that have occurred. Through hydraulic modeling, the changes that would occur by reopening and stabilizing Blind Pass are predicted. It is concluded that if the island is to be stabilized, beach nourishment with proper amounts and particle sizes is a necessity, and that jetties adequate to restrict lateral and offshore losses are essential. It is shown that reopening Blind Pass would have minimal effects on the passes to the north and south and would improve environmental conditions in the sound with no adverse effects on the beach system.

    Quantifying single nucleotide variant detection sensitivity in exome sequencing

    BACKGROUND: The targeted capture and sequencing of genomic regions has rapidly demonstrated its utility in genetic studies. Inherent in this technology is considerable heterogeneity of target coverage, and this is expected to systematically impact our sensitivity to detect genuine polymorphisms. To fully interpret the polymorphisms identified in a genetic study it is often essential both to detect polymorphisms and to understand where, and with what probability, real polymorphisms may have been missed. RESULTS: Using down-sampling of 30 deeply sequenced exomes and a set of gold-standard single nucleotide variant (SNV) genotype calls for each sample, we developed an empirical model relating the read depth at a polymorphic site to the probability of calling the correct genotype at that site. We find that measured sensitivity in SNV detection is substantially worse than that predicted from the naive expectation of sampling from a binomial. This calibrated model allows us to produce single-nucleotide-resolution SNV sensitivity estimates which can be merged to give summary sensitivity measures for any arbitrary partition of the target sequences (nucleotide, exon, gene, pathway, exome). These metrics are directly comparable between platforms and can be combined between samples to give “power estimates” for an entire study. We estimate that a local read depth of 13X is required to detect the alleles and genotype of a heterozygous SNV 95% of the time, but only 3X for a homozygous SNV. At a mean on-target read depth of 20X, commonly used for rare-disease exome sequencing studies, we predict 5–15% of heterozygous and 1–4% of homozygous SNVs in the targeted regions will be missed. CONCLUSIONS: Non-reference alleles in the heterozygote state have a high chance of being missed when commonly applied read coverage thresholds are used, despite the widely held assumption that there is good polymorphism detection at these coverage levels. Such alleles are likely to be of functional importance in population-based studies of rare diseases, in somatic mutations in cancer, and in explaining the “missing heritability” of quantitative traits.
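
    A minimal sketch of the naive binomial expectation the abstract compares against (an illustration, not the authors' calibrated model; the threshold of 2 alternate-allele reads is an assumption made here for the example):

        # Naive binomial view of heterozygous-SNV detection: each read samples
        # either allele with probability 0.5, and the variant is assumed callable
        # once at least `min_alt` reads carry the alternate allele.
        from scipy.stats import binom

        def naive_het_sensitivity(depth, min_alt=2):
            """P(at least min_alt of `depth` reads carry the alternate allele)."""
            return 1.0 - binom.cdf(min_alt - 1, depth, 0.5)

        for depth in (3, 8, 13, 20):
            print(f"{depth}X: {naive_het_sensitivity(depth):.3f}")

    The abstract's central empirical point is that measured sensitivity falls well below this idealized curve, which is why the calibrated per-site model is needed.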

    J Musculoskelet Neuronal Interact

    Long-term bed-rest is used to simulate the effects of spaceflight on the human body and to test different kinds of countermeasures. The 2nd Berlin BedRest Study (BBR2-2) tested the efficacy of whole-body vibration in addition to high-load resistance exercise in preventing bone loss during bed-rest. Here we present the protocol of the study and discuss its implementation. Twenty-four male subjects underwent 60 days of six-degree head-down-tilt bed-rest and were randomised to an inactive control group (CTR), a high-load resistive exercise group (RE) or a high-load resistive exercise with whole-body vibration group (RVE). Owing to events in the course of the study (e.g. subject withdrawal), 9 subjects participated in the CTR-group, 7 in the RVE-group and 8 (7 beyond bed-rest day-30) in the RE-group. Fluid intake, urine output and axillary temperature increased during bed-rest (p ≥ .17). Body-weight changes differed between groups (p < .0001), with decreases in the CTR-group, marginal decreases in the RE-group, and significant decreases in the RVE-group beyond bed-rest day-51 only. In light of events and experiences of the current study, recommendations on various aspects of bed-rest methodology are also discussed.

    On the spin-statistics connection in curved spacetimes

    The connection between spin and statistics is examined in the context of locally covariant quantum field theory. A generalization is proposed in which locally covariant theories are defined as functors from a category of framed spacetimes to a category of *-algebras. This allows for a more operational description of theories with spin, and for the derivation of a more general version of the spin-statistics connection in curved spacetimes than previously available. The proof involves a "rigidity argument" that is also applied in the standard setting of locally covariant quantum field theory to show how properties such as Einstein causality can be transferred from Minkowski spacetime to general curved spacetimes.
    Comment: 17 pp. Contribution to the proceedings of the conference "Quantum Mathematical Physics" (Regensburg, October 2014).
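
    As a schematic gloss on the functorial language above (the category names used here are assumed for illustration, not drawn from the paper), the standard framework assigns a *-algebra to each spacetime via a functor, and the proposed generalization replaces the source category by framed spacetimes:

        \[
            \text{standard: } \mathscr{A} : \mathsf{Loc} \longrightarrow \mathsf{Alg},
            \qquad
            \text{proposed: } \mathscr{A} : \mathsf{FLoc} \longrightarrow \mathsf{Alg}
        \]
        % \mathsf{Loc}: globally hyperbolic spacetimes with suitable embeddings
        % \mathsf{FLoc}: assumed shorthand for the category of framed spacetimes
        % \mathsf{Alg}: unital *-algebras with unit-preserving morphisms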

    Group evaluations as self-group distancing:Ingroup typicality moderates evaluative intergroup bias in stigmatized groups

    Outgroup favoritism among members of stigmatized groups can be seen as a form of self-group distancing. We examined how intergroup evaluations in stigmatized groups vary as a function of ingroup typicality. In Studies 1 and 2, Black participants (N = 125,915; N = 766) more strongly preferred light-skinned or White relative to dark-skinned or Black individuals the lighter their own skin tone. In Study 3, overweight participants (N = 147,540) more strongly preferred normal-weight relative to overweight individuals the lower their own body weight. In Study 4, participants with disabilities (N = 35,058) more strongly preferred non-disabled relative to disabled individuals the less visible they judged their own disability. Relationships between ingroup typicality and intergroup evaluations were at least partially mediated by ingroup identification (Studies 2 and 3). A meta-analysis across studies yielded an average effect size of r = .12. Furthermore, higher ingroup typicality was related to both ingroup and outgroup evaluations. We discuss ingroup typicality as an individual constraint on self-group distancing among stigmatized group members and its relation to intergroup evaluations.

    Customisation of the Exome Data Analysis Pipeline Using a Combinatorial Approach

    The advent of next-generation sequencing (NGS) technologies has revolutionised the way biologists produce, analyse and interpret data. Although NGS platforms provide a cost-effective way to discover genome-wide variants from a single experiment, variants discovered by NGS need follow-up validation because of the high error rates associated with the various sequencing chemistries. Recently, whole-exome sequencing has been proposed as an affordable alternative to whole-genome runs, but it still requires follow-up validation of all novel exomic variants. Customarily, a consensus approach is used to overcome the systematic errors inherent to the sequencing technology and to the alignment and post-alignment variant detection algorithms. However, this approach warrants the use of multiple sequencing chemistries, multiple alignment tools and multiple variant callers, which may not be viable in terms of time and money for individual investigators with limited informatics know-how. Biologists often lack the requisite training to deal with the huge amount of data produced by NGS runs and face difficulty in choosing from the list of freely available analytical tools for NGS data analysis. Hence, there is a need to customise the NGS data analysis pipeline to preferentially retain true variants by minimising the incidence of false positives, and to make the choice of the right analytical tools easier. To this end, we have sampled different freely available tools used at the alignment and post-alignment stages, suggesting the use of the most suitable combination, determined by a simple framework of pre-existing metrics, to create significant datasets.
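
    A minimal sketch of the consensus idea described above (not the authors' pipeline; caller names and VCF paths are placeholders): retain only variants reported by at least a chosen number of independent caller outputs.

        # Keep variants supported by >= `min_support` of the callers' VCF outputs.
        def load_variants(vcf_path):
            """Return a set of (chrom, pos, ref, alt) tuples from a simple VCF."""
            variants = set()
            with open(vcf_path) as handle:
                for line in handle:
                    if line.startswith("#"):
                        continue
                    chrom, pos, _vid, ref, alt = line.rstrip("\n").split("\t")[:5]
                    variants.add((chrom, int(pos), ref, alt))
            return variants

        call_sets = {name: load_variants(path) for name, path in [
            ("caller_a", "caller_a.vcf"),   # placeholder outputs of three
            ("caller_b", "caller_b.vcf"),   # different variant callers run
            ("caller_c", "caller_c.vcf"),   # on the same aligned exome data
        ]}

        min_support = 2
        union = set().union(*call_sets.values())
        consensus = {v for v in union
                     if sum(v in calls for calls in call_sets.values()) >= min_support}
        print(f"{len(consensus)} variants supported by at least {min_support} callers")

    Raising min_support trades sensitivity for a lower false-positive rate, which is the trade-off the combinatorial framework above is meant to navigate.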

    Profiling allele-specific gene expression in brains from individuals with autism spectrum disorder reveals preferential minor allele usage.

    One fundamental but understudied mechanism of gene regulation in disease is allele-specific expression (ASE), the preferential expression of one allele. We leveraged RNA-sequencing data from human brain to assess ASE in autism spectrum disorder (ASD). When ASE is observed in ASD, the allele with lower population frequency (minor allele) is preferentially more highly expressed than the major allele, opposite to the canonical pattern. Importantly, genes showing ASE in ASD are enriched among those downregulated in ASD postmortem brains and among genes harboring de novo mutations in ASD. Two regions, 14q32 and 15q11, containing all known orphan C/D box small nucleolar RNAs (snoRNAs), are particularly enriched in shifts to higher minor allele expression. We demonstrate that this allele shifting enhances snoRNA-targeted splicing changes in ASD-related target genes in idiopathic ASD and 15q11-q13 duplication syndrome. Together, these results implicate allelic imbalance and dysregulation of orphan C/D box snoRNAs in ASD pathogenesis.
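
    A hedged illustration (not the authors' method) of how allele-specific expression is commonly scored at a single heterozygous site, testing whether read counts depart from the balanced 50/50 expectation; the counts below are invented for the example:

        # Two-sided binomial test for allelic imbalance at one heterozygous site.
        from scipy.stats import binomtest

        def ase_pvalue(minor_allele_reads, total_reads):
            """P-value against balanced expression (each allele ~50% of reads)."""
            return binomtest(minor_allele_reads, total_reads, p=0.5).pvalue

        # Illustrative counts: 70 of 100 RNA-seq reads carry the population-minor
        # allele, i.e. an imbalance in the direction the abstract reports for ASD.
        print(ase_pvalue(70, 100))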

    High-resolution genetic mapping with pooled sequencing

    Background: Modern genetics has been transformed by high-throughput sequencing. New experimental designs in model organisms involve analyzing many individuals, pooled and sequenced in groups for increased efficiency. However, the uncertainty from pooling and the challenge of noisy sequencing data demand advanced computational methods. Results: We present MULTIPOOL, a computational method for genetic mapping in model organism crosses that are analyzed by pooled genotyping. Unlike other methods for the analysis of pooled sequence data, we simultaneously consider information from all linked chromosomal markers when estimating the location of a causal variant. Our use of informative sequencing reads is formulated as a discrete dynamic Bayesian network, which we extend with a continuous approximation that allows for rapid inference without a dependence on the pool size. MULTIPOOL generalizes to include biological replicates and case-only or case-control designs for binary and quantitative traits. Conclusions: Our increased information sharing and principled inclusion of relevant error sources improve resolution and accuracy when compared to existing methods, localizing associations to single genes in several cases. MULTIPOOL is freely available at http://cgs.csail.mit.edu/multipool/.
    National Science Foundation (U.S.) (Graduate Research Fellowship Grant 0645960).
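
    For contrast with the linked-marker model described above, here is a hedged sketch of the single-marker baseline (this is not the MULTIPOOL dynamic Bayesian network; the read counts are illustrative): each marker is scored independently from pooled allele counts, whereas MULTIPOOL shares information across all linked markers.

        # Score one marker from pooled sequencing: compare reference/alternate
        # read counts between two pools (e.g. high- and low-phenotype pools).
        from scipy.stats import chi2_contingency

        def single_marker_score(pool1_counts, pool2_counts):
            """Chi-square statistic and p-value for an allele-count difference.
            Each argument is (reference_reads, alternate_reads) at one marker."""
            chi2, pvalue, _dof, _expected = chi2_contingency(
                [list(pool1_counts), list(pool2_counts)])
            return chi2, pvalue

        # Example: 80/20 ref/alt reads in one pool versus 45/55 in the other.
        print(single_marker_score((80, 20), (45, 55)))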