    Federal Terrorism Risk Insurance

    The terrorist attacks of September 11, 2001 represented a loss for commercial property & casualty insurers that was both unprecedented and unanticipated. After sustaining this record capital loss, the availability of adequate private insurance coverage against future terrorist attacks came into question. Concern over the potential adverse consequences of the unavailability of insurance against terrorist incidents led to calls for federal intervention in insurance markets. This paper discusses the economic rationale for and against federal intervention in the market, and concludes that a temporary transition program, during which the private sector can build capacity and adapt to a dramatically changed environment for terrorism risk, may provide benefits to the economy that exceed its direct and indirect costs.

    Optimization of Gene Prediction via More Accurate Phylogenetic Substitution Models

    Determining the beginning and end positions of each exon in each protein-coding gene within a genome can be difficult because the DNA patterns that signal a gene's presence have multiple weakly related alternate forms, and the DNA fragments that comprise a gene are generally small in comparison to the size of the genome. In response to this challenge, automated gene predictors were created to generate putative gene structures. N-SCAN identifies gene structures in a target DNA sequence and can use conservation patterns learned from alignments between a target and one or more informant DNA sequences. N-SCAN uses a Bayesian network, generated from a phylogenetic tree, to probabilistically relate the target sequence to the aligned sequence(s). Phylogenetic substitution models are used to estimate substitution likelihood along the branches of the tree. Although N-SCAN's predictive accuracy is already a benchmark for de novo HMM-based gene predictors, optimizing its use of substitution models allows for improved conservation pattern estimates and thus even better accuracy. Selecting optimal substitution models requires avoiding overfitting, as more detailed models require more free parameters; unfortunately, the number of parameters is limited by the number of known genes available for parameter estimation (training). To optimize substitution model selection, we tested eight models on the entire genome, including General, Reversible, HKY, Jukes-Cantor, and Kimura. In addition to testing models on the entire genome, genome-feature-based model selection strategies were investigated by assessing the ability of each model to accurately reflect the unique conservation patterns present in each genome region. Context dependency was examined using zeroth-, first-, and second-order models. All models were tested on the human and D. melanogaster genomes. Analysis of the data suggests that the nucleotide equilibrium frequency assumption (denoted πi) is the strongest predictor of a model's accuracy, followed by reversibility and transition/transversion inequality. Furthermore, second-order models are shown to give an average improvement of 0.6% over first-order models, which in turn give an 18% improvement over zeroth-order models. Finally, by limiting parameter usage according to the number of training examples available for each feature, genome-feature-based model selection better estimates substitution likelihood, leading to a significant improvement in N-SCAN's gene annotation accuracy.
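
    The models named above differ mainly in how many free parameters they devote to equilibrium base frequencies and substitution rates. As a rough illustration of that trade-off (standard textbook forms of Jukes-Cantor and HKY, not code or parameter values from this work), the sketch below builds the two rate matrices and exponentiates them to obtain substitution probabilities along a branch:

        import numpy as np
        from scipy.linalg import expm

        # Illustrative only: textbook Jukes-Cantor and HKY rate matrices, compared
        # via P(t) = expm(Q * t). Bases are ordered A, C, G, T.

        def jukes_cantor_rate(mu=1.0):
            # Jukes-Cantor: all substitutions equally likely, equal base frequencies.
            Q = np.full((4, 4), mu / 4.0)
            np.fill_diagonal(Q, 0.0)
            np.fill_diagonal(Q, -Q.sum(axis=1))
            return Q

        def hky_rate(pi, kappa=2.0):
            # HKY: unequal equilibrium frequencies pi plus a transition/transversion
            # rate ratio kappa (A<->G and C<->T are transitions).
            transitions = {(0, 2), (2, 0), (1, 3), (3, 1)}
            Q = np.zeros((4, 4))
            for i in range(4):
                for j in range(4):
                    if i != j:
                        Q[i, j] = pi[j] * (kappa if (i, j) in transitions else 1.0)
            np.fill_diagonal(Q, -Q.sum(axis=1))
            return Q

        pi = np.array([0.3, 0.2, 0.2, 0.3])   # assumed equilibrium frequencies
        t = 0.5                               # assumed branch length
        print(expm(jukes_cantor_rate() * t))  # one free parameter
        print(expm(hky_rate(pi) * t))         # richer model, more parameters to train

    The same trade-off motivates the feature-based selection described above: genome regions with few training examples can only support the simpler, lower-parameter models.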

    Bacula of Some Neotropical Bats

    Neotropical bats representing 56 species of five families were examined for presence or absence of a baculum. Members of the families Noctilionidae and Phyllostomatidae (sensu lato) have no os penis, but the bone was found in all species studied of Emballonuridae, Natalidae, and Vespertilionidae. The bacula of seven emballonurids, one natalid, and seven vespertilionids are briefly described and figured.

    Pairagon+N-SCAN_EST: a model-based gene annotation pipeline

    BACKGROUND: This paper describes Pairagon+N-SCAN_EST, a gene annotation pipeline that uses only native alignments. For each expressed sequence, it chooses the best genomic alignment. Systems like ENSEMBL and ExoGean rely on trans alignments, in which expressed sequences are aligned to the genomic loci of putative homologs. Trans alignments contain a high proportion of mismatches, gaps, and/or apparently unspliceable introns, compared to alignments of cDNA sequences to their native loci. The Pairagon+N-SCAN_EST pipeline's first stage is Pairagon, a cDNA-to-genome alignment program based on a PairHMM probability model. This model relies on prior knowledge, such as the fact that introns must begin with GT, GC, or AT and end with AG or AC. It produces very precise alignments of high-quality cDNA sequences. In the genomic regions between Pairagon's cDNA alignments, the pipeline combines EST alignments with de novo gene prediction by using N-SCAN_EST. N-SCAN_EST is based on a generalized HMM probability model augmented with a phylogenetic conservation model and EST alignments. It can predict complete transcripts by extending or merging EST alignments, but it can also predict genes in regions without EST alignments. Because they are based on probability models, both Pairagon and N-SCAN_EST can be trained automatically for new genomes and data sets. RESULTS: On the ENCODE regions of the human genome, Pairagon+N-SCAN_EST was as accurate as any other system tested in the EGASP assessment, including ENSEMBL and ExoGean. CONCLUSION: With sufficient mRNA/EST evidence, genome annotation without trans alignments can compete successfully with systems like ENSEMBL and ExoGean, which use trans alignments.
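
    As a concrete illustration of the kind of prior knowledge mentioned above, the splice-site constraint (introns must begin with GT, GC, or AT and end with AG or AC) can be sketched as follows. This is a minimal sketch with names of our own choosing, not code from Pairagon itself:

        # Minimal sketch of the splice-site constraint: a candidate intron is only
        # allowed if its boundary dinucleotides are canonical. Function and variable
        # names are illustrative, not taken from the Pairagon code base.
        DONOR_SITES = {"GT", "GC", "AT"}
        ACCEPTOR_SITES = {"AG", "AC"}

        def is_allowed_intron(genome, start, end):
            """True if genome[start:end] begins and ends with allowed dinucleotides."""
            donor = genome[start:start + 2].upper()
            acceptor = genome[end - 2:end].upper()
            return donor in DONOR_SITES and acceptor in ACCEPTOR_SITES

        print(is_allowed_intron("AAGTAAGTTTTTAGCC", 2, 14))  # GT ... AG -> True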

    Evaluation of changes in microbial populations on beef carcasses resulting from steam pasteurization

    The steam pasteurization process (SPS 400) developed by Frigoscandia Food Process Systems (Bellevue, WA) was effective in reducing bacterial populations in both laboratory and commercial settings. The objective of steam pasteurization and other meat decontamination measures is to extend product shelf life and improve safety by inhibiting or inactivating pathogens, while at the same time maintaining acceptable meat quality characteristics. The effects of steam pasteurization on beef carcass bacterial populations were evaluated at two large commercial beef processing facilities. A shelf-life study was also conducted to determine the microbial profiles of vacuum-packaged beef loins from pasteurized and non-pasteurized carcasses. Steam pasteurization greatly reduced total beef carcass bacterial populations and was most effective in reducing Gram-negative organisms, including potential enteric pathogens of fecal origin. Thus, the relative percentage of Gram-positive microflora on beef carcass surfaces, especially Bacillus spp. and Staphylococcus spp., increased.

    Gene prediction and verification in a compact genome with numerous small introns

    The genomes of clusters of related eukaryotes are now being sequenced at an increasing rate, creating a need for accurate, low-cost annotation of exon–intron structures. In this paper, we demonstrate that reverse transcription-polymerase chain reaction (RT–PCR) and direct sequencing based on predicted gene structures satisfy this need, at least for single-celled eukaryotes. The TWINSCAN gene prediction algorithm was adapted for the fungal pathogen Cryptococcus neoformans by using a precise model of intron lengths in combination with ungapped alignments between the genome sequences of the two closely related Cryptococcus varieties. This approach resulted in ~60% of known genes being predicted exactly right at every coding base and splice site. When previously unannotated TWINSCAN predictions were tested by RT–PCR and direct sequencing, 75% of targets spanning two predicted introns were amplified and produced high-quality sequence. When targets spanning the complete predicted open reading frame were tested, 72% of them amplified and produced high-quality sequence. We conclude that sequencing a small number of expressed sequence tags (ESTs) to provide training data, running TWINSCAN on an entire genome, and then performing RT–PCR and direct sequencing on all of its predictions would be a cost-effective method for obtaining an experimentally verified genome annotation.
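
    The "precise model of intron lengths" referred to above can be pictured with a short sketch. This illustrates the general idea of an explicit length distribution estimated from training introns; it is not TWINSCAN's actual parameterization:

        # Illustrative only: estimate an explicit intron-length distribution from
        # training introns and use it to score candidate lengths, which matters in a
        # compact genome where introns are short and tightly distributed.
        from collections import Counter
        import math

        def fit_length_model(training_lengths, max_len=1000, pseudocount=1.0):
            counts = Counter(training_lengths)
            total = sum(counts.values()) + pseudocount * max_len
            return {L: (counts.get(L, 0) + pseudocount) / total
                    for L in range(1, max_len + 1)}

        def log_score(model, length):
            # Lengths outside the modeled range fall back to a tiny floor probability.
            return math.log(model.get(length, 1e-9))

        model = fit_length_model([52, 55, 56, 56, 57, 60, 61, 65, 70, 120])
        print(log_score(model, 56), log_score(model, 5000))  # short introns score higher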

    Locality in Theory Space

    Locality is a guiding principle for constructing realistic quantum field theories. Compactified theories offer an interesting context in which to think about locality, since interactions can be nonlocal in the compact directions while still being local in the extended ones. In this paper, we study locality in "theory space", four-dimensional Lagrangians which are dimensional deconstructions of five-dimensional Yang-Mills. In explicit ultraviolet (UV) completions, one can understand the origin of theory space locality by the irrelevance of nonlocal operators. From an infrared (IR) point of view, though, theory space locality does not appear to be a special property, since the lowest-lying Kaluza-Klein (KK) modes are simply described by a gauged nonlinear sigma model, and locality imposes seemingly arbitrary constraints on the KK spectrum and interactions. We argue that these constraints are nevertheless important from an IR perspective, since they affect the four-dimensional cutoff of the theory where high energy scattering hits strong coupling. Intriguingly, we find that maximizing this cutoff scale implies five-dimensional locality. In this way, theory space locality is correlated with weak coupling in the IR, independent of UV considerations. We briefly comment on other scenarios where maximizing the cutoff scale yields interesting physics, including theory space descriptions of QCD and deconstructions of anti-de Sitter space. Comment: 40 pages, 11 figures; v2: references and clarifications added; v3: version accepted by JHEP.
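
    For orientation, the standard circular-moose deconstruction relations (textbook results quoted here for context, not formulas taken from this paper) connect a four-dimensional quiver of N gauge groups with coupling g and link decay constant f to the five-dimensional KK picture:

        % Standard deconstruction relations for a circular moose of N sites
        % (context only; not the paper's specific computation).
        \begin{align}
          M_n &= 2\, g f \,\sin\!\left(\frac{n\pi}{N}\right), \qquad n = 0, 1, \ldots, N-1, \\
          M_n &\simeq \frac{n}{R} \quad \text{for } n \ll N, \qquad \frac{1}{R} = \frac{2\pi g f}{N}, \\
          \Lambda_{\rm NDA} &\sim 4\pi f \quad \text{(naive estimate of where the nonlinear sigma model becomes strongly coupled).}
        \end{align}

    The paper's observation that maximizing the four-dimensional cutoff implies five-dimensional locality is phrased against strong-coupling scales of this type.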

    Deconstructing Gaugino Mediation

    We present a model of supersymmetry breaking which produces gaugino masses and negligible scalar masses at a high scale. The model is inspired by "deconstructing" or "latticizing" models in extra dimensions where supersymmetry breaking and visible matter are spatially separated. We find a simple four-dimensional model which requires only two lattice sites (or gauge groups) to reproduce the phenomenology. Comment: LaTeX, 9 pages, acknowledgements added.

    Advances in Measuring the Apparent Optical Properties (AOPs) of Optically Complex Waters

    This report documents new technology used to measure the apparent optical properties (AOPs) of optically complex waters. The principal objective is to be prepared for the launch of next-generation ocean color satellites with the most capable commercial off-the-shelf (COTS) instrumentation. An enhanced COTS radiometer was the starting point for designing and testing the new sensors. The follow-on steps were to apply the lessons learned to a new in-water profiler based on a kite-shaped backplane for mounting the light sensors. The next level of sophistication involved evaluating new radiometers emerging from a development activity based on so-called microradiometers. The exploitation of microradiometers resulted in an in-water profiling system, which includes a sensor networking capability to control ancillary sensors such as a shadowband or global positioning system (GPS) device. A principal advantage of microradiometers is the flexibility they provide in producing, interconnecting, and maintaining instruments. The full problem set for collecting sea-truth data, whether in coastal waters or the open ocean, involves other aspects of data collection that were improved for instruments measuring both AOPs and inherent optical properties (IOPs), if the uncertainty budget is to be minimized. New capabilities associated with deploying solar references were developed, as well as a compact solution for recovering in-water instrument systems from small boats.
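
    For context (a standard definition, not a description of this report's processing chain), the AOP most commonly derived from an in-water radiometer profile is the diffuse attenuation coefficient Kd, obtained from the depth dependence of downwelling irradiance Ed(z). A minimal sketch with assumed example numbers:

        import numpy as np

        # Minimal sketch using the standard AOP relation Ed(z) ~ Ed(0-) * exp(-Kd * z);
        # the depths and irradiances below are made-up example values.
        depth = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 10.0])    # m
        ed = np.array([95.0, 82.0, 60.0, 44.0, 33.0, 24.0])  # uW cm^-2 nm^-1

        # Linear fit of ln(Ed) against depth; the negative of the slope is Kd.
        slope, intercept = np.polyfit(depth, np.log(ed), 1)
        print(f"Kd = {-slope:.3f} 1/m, Ed(0-) = {np.exp(intercept):.1f}")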