    Sediment management for Southern California mountains, coastal plains and shoreline

    The Environmental Quality Laboratory at Caltech and the Shore Processes Laboratory at Scripps Institution of Oceanography have jointly undertaken a study of regional sediment balance problems in coastal southern California (see map in Figure 1). The overall objective of this study is to define specific alternatives in sediment management that may be implemented to alleviate a) existing sediment imbalance problems (e.g. inland debris disposal, local shoreline erosion) and b) probable future problems that have not yet manifested themselves. These alternatives will be identified through a consideration of economic, legal, and institutional issues, as well as an analysis of the governing physical processes and engineering constraints. The first part of this study (Phase I), which is currently under way, involves a compilation and analysis of all available data in an effort to obtain an accurate definition of the inland/coastal regional sediment balance under natural conditions and of the specific quantitative effects that man-made controls have on the overall natural process. During FY77, substantial progress was made at EQL and SPL in achieving the objectives of the initial Planning and Assessment Phase of the CIT/SIO Sediment Management Project. Financial support came from Los Angeles County, the U.S. Geological Survey, Orange County, the U.S. Army Corps of Engineers, and discretionary funding provided by a grant from the Ford Foundation. The current timetable for completion of this phase is Fall 1978. This report briefly describes the project status, including general administration, special activities, and research work, as of January 1978.

    Roles of Diverse Stakeholders in Natural Resources Management and Their Relationships with Regional Bodies in New South Wales, Australia

    Governments invest in natural resource management (NRM) because of a lack or failure of markets for ecosystem services and to encourage the adoption of NRM practices that reduce the externalities of resource use (Cary et al., 2002; Beare & Newby, 2005; Stanley et al., 2005). Major global trends in NRM include a greater emphasis on community participation, decentralised activity at the regional scale, a shift from government to governance, and a narrowing of the framing of environment policy to a largely utilitarian concept of NRM (Lane et al., 2009). Successive state and national governments in Australia, actively seeking to improve the condition of Australia's natural resources, have established a series of funding arrangements for their protection and enhancement (reviewed by Hajkowicz, 2009; Lockwood et al., 2009). In concert with this funding has come a greater emphasis on accountability for expenditure on public environmental programs, because the delivery of tangible impacts through recently established regional arrangements has proved difficult to quantify (e.g. Australian National Audit Office, 2008).

    Optimization of Gene Prediction via More Accurate Phylogenetic Substitution Models

    Determining the beginning and end positions of each exon in each protein-coding gene within a genome can be difficult because the DNA patterns that signal a gene's presence have multiple weakly related alternate forms, and because the DNA fragments that comprise a gene are generally small in comparison to the size of the genome. In response to this challenge, automated gene predictors were created to generate putative gene structures. N-SCAN identifies gene structures in a target DNA sequence and can use conservation patterns learned from alignments between a target and one or more informant DNA sequences. N-SCAN uses a Bayesian network, generated from a phylogenetic tree, to probabilistically relate the target sequence to the aligned sequence(s). Phylogenetic substitution models are used to estimate substitution likelihood along the branches of the tree. Although N-SCAN's predictive accuracy is already a benchmark for de novo HMM-based gene predictors, optimizing its use of substitution models will allow for improved conservation pattern estimates, leading to even better accuracy. Selecting optimal substitution models requires avoiding overfitting, as more detailed models require more free parameters; unfortunately, the number of parameters is limited by the number of known genes available for parameter estimation (training). To optimize substitution model selection, we tested eight models on the entire genome, including General, Reversible, HKY, Jukes-Cantor, and Kimura. In addition to testing models on the entire genome, genome-feature-based model selection strategies were investigated by assessing the ability of each model to accurately reflect the unique conservation patterns present in each genome region. Context dependency was examined using zeroth-, first-, and second-order models. All models were tested on the human and D. melanogaster genomes. Analysis of the data suggests that the nucleotide equilibrium frequency assumption (denoted π_i) is the strongest predictor of a model's accuracy, followed by reversibility and transition/transversion inequality. Furthermore, second-order models are shown to give an average 0.6% improvement over first-order models, which in turn give an 18% improvement over zeroth-order models. Finally, by limiting parameter usage according to the number of training examples available for each feature, genome-feature-based model selection better estimates substitution likelihood, leading to a significant improvement in N-SCAN's gene annotation accuracy.
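
    Two of the models named above have simple closed forms, which makes the role of the assumptions discussed in the abstract easy to see. The sketch below evaluates the Jukes-Cantor transition probabilities (uniform equilibrium frequencies, π_i = 1/4, and a single rate) and the Kimura two-parameter model (uniform π_i but distinct transition/transversion rates). The rate and time values are illustrative assumptions; this is not N-SCAN code.

        # Closed-form transition probabilities for two classic substitution
        # models. Jukes-Cantor assumes uniform equilibrium frequencies
        # (pi_i = 1/4) and one rate; Kimura keeps uniform pi_i but lets
        # transitions and transversions differ. Rates/times are illustrative.
        import math

        def jc69(t, mu=1.0):
            """P(same base) and P(each specific different base) after
            time t under Jukes-Cantor with total substitution rate mu."""
            e = math.exp(-4.0 * mu * t / 3.0)
            return 0.25 + 0.75 * e, 0.25 - 0.25 * e

        def k80(t, alpha=2.0, beta=0.5):
            """Kimura 2-parameter: transition rate alpha, transversion
            rate beta; returns P(same), P(transition), P(each transversion)."""
            e1 = math.exp(-4.0 * beta * t)
            e2 = math.exp(-2.0 * (alpha + beta) * t)
            return (0.25 + 0.25 * e1 + 0.5 * e2,
                    0.25 + 0.25 * e1 - 0.5 * e2,
                    0.25 - 0.25 * e1)

        for t in (0.1, 0.5, 1.0):
            print(t, jc69(t), k80(t))

    A context-dependent (first- or second-order) variant would condition these probabilities on one or two preceding bases, which is where the extra free parameters discussed above come from.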

    Sediment Management for Southern California Mountains, Coastal Plains and Shoreline. Part D: Special Inland Studies

    In southern California the natural environmental system involves the continual relocation of sedimentary materials. Particles are eroded from inland areas where there is sufficient relief and precipitation. Then, as the hydraulic gradient decreases along the stream course and at the shoreline, the velocity of surface runoff is reduced and deposition occurs. Generally, coarse sand, gravel, and larger particles are deposited near the base of the eroding surfaces (mountains and hills), while the finer sediments are deposited on floodplains, in bays or lagoons, and at the shoreline as delta deposits. Very fine silt and clay particles, which make up a significant part of the eroded material, are carried offshore, where they eventually settle in deeper areas. Sand deposited at the shoreline is gradually moved along the coast by waves and currents and provides nourishment for local beaches. However, much of this littoral material is eventually also lost to offshore areas. Human developments in the coastal region have substantially altered these natural sedimentary processes through changes in land use; the harvesting of natural resources (logging, grazing, and sand and gravel mining); the construction and operation of water conservation facilities and flood control structures; and coastal developments. In almost all cases these developments have grown out of recognized needs and have served their primary purpose well. At the time, possible deleterious effects on the local or regional sediment balance were generally unforeseen or were felt to be of secondary importance. In 1975 a large-scale study of inland and coastal sedimentation processes in southern California was initiated by the Environmental Quality Laboratory at the California Institute of Technology and the Center for Coastal Studies at Scripps Institution of Oceanography. This volume is one of a series of reports from that study. Using existing data bases, the series attempts to define quantitatively the inland and coastal sedimentation processes and to identify the effects man has had on them. To resolve some issues related to long-term sediment management, additional research and data will be needed. The series includes four Caltech reports that provide supporting studies for the summary report (EQL Report No. 17):

        EQL Report 17-A -- Regional Geological History
        EQL Report 17-B -- Inland Sediment Movements by Natural Processes
        EQL Report 17-C -- Coastal Sediment Delivery by Major Rivers in Southern California
        EQL Report 17-D -- Special Inland Studies

    Additional supporting reports on coastal studies (shoreline sedimentation processes, control structures, dredging, etc.) are being published by the Center for Coastal Studies at Scripps Institution of Oceanography, La Jolla, California.

    Pairagon+N-SCAN_EST: a model-based gene annotation pipeline

    BACKGROUND: This paper describes Pairagon+N-SCAN_EST, a gene annotation pipeline that uses only native alignments: for each expressed sequence, it chooses the best genomic alignment. Systems like ENSEMBL and ExoGean rely on trans alignments, in which expressed sequences are aligned to the genomic loci of putative homologs. Trans alignments contain a high proportion of mismatches, gaps, and/or apparently unspliceable introns, compared to alignments of cDNA sequences to their native loci. The Pairagon+N-SCAN_EST pipeline's first stage is Pairagon, a cDNA-to-genome alignment program based on a PairHMM probability model. This model relies on prior knowledge, such as the fact that introns must begin with GT, GC, or AT and end with AG or AC. It produces very precise alignments of high-quality cDNA sequences. In the genomic regions between Pairagon's cDNA alignments, the pipeline combines EST alignments with de novo gene prediction by using N-SCAN_EST. N-SCAN_EST is based on a generalized HMM probability model augmented with a phylogenetic conservation model and EST alignments. It can predict complete transcripts by extending or merging EST alignments, but it can also predict genes in regions without EST alignments. Because they are based on probability models, both Pairagon and N-SCAN_EST can be trained automatically for new genomes and data sets. RESULTS: On the ENCODE regions of the human genome, Pairagon+N-SCAN_EST was as accurate as any other system tested in the EGASP assessment, including ENSEMBL and ExoGean. CONCLUSION: With sufficient mRNA/EST evidence, genome annotation without trans alignments can compete successfully with systems like ENSEMBL and ExoGean, which use trans alignments.
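
    The splice-site prior mentioned above is easy to state concretely: in their standard pairings, canonical introns start and end with GT-AG, GC-AG, or AT-AC. The sketch below checks a candidate intron against that constraint; it only illustrates the prior itself (Pairagon encodes it inside a PairHMM rather than as a lookup), and the example sequence and coordinates are made up.

        # Canonical intron boundary check (GT-AG, GC-AG, AT-AC pairings).
        # An illustration of the splice-site prior, not Pairagon's code.
        CANONICAL_PAIRS = {("GT", "AG"), ("GC", "AG"), ("AT", "AC")}

        def is_canonical_intron(genome: str, start: int, end: int) -> bool:
            """True if genome[start:end] (0-based, end-exclusive candidate
            intron) has canonical donor/acceptor dinucleotides."""
            donor = genome[start:start + 2].upper()
            acceptor = genome[end - 2:end].upper()
            return (donor, acceptor) in CANONICAL_PAIRS

        seq = "AAAGTTCCGGTTAAGAGCCC"
        print(is_canonical_intron(seq, 3, 17))  # GT...AG -> True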

    Determining conductivity and mobility values of individual components in multiphase composite Cu_(1.97)Ag_(0.03)Se

    The intense interest in phase segregation in thermoelectrics, as a means to reduce the lattice thermal conductivity and to modify the electronic properties through nanoscale size effects, has not been matched by a method for separately measuring the properties of each phase assuming a classical mixture. Here, we apply effective medium theory to measurements of the in-line and Hall resistivity of a multiphase composite, in this case Cu_(1.97)Ag_(0.03)Se. The behavior of these properties with magnetic field, as analyzed by effective medium theory, allows us to separate the conductivity and charge carrier mobility of each phase. This powerful technique can be used to determine the matrix properties in the presence of an unwanted impurity phase, to control each phase in an engineered composite, and to determine the maximum carrier concentration change achievable by a given dopant, making it the first step toward a full optimization of a multiphase thermoelectric material and toward distinguishing nanoscale effects from those of a classical mixture.
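
    As a much simpler illustration of the effective-medium idea than the field-dependent analysis used in the paper, the sketch below solves the symmetric Bruggeman mixing rule for the zero-field conductivity of a two-phase composite in 3D; the phase conductivities and volume fraction are invented for illustration.

        # Symmetric Bruggeman effective medium rule for two phases in 3D:
        #   f1*(s1-se)/(s1+2se) + (1-f1)*(s2-se)/(s2+2se) = 0,
        # whose positive root has the closed form used below. This is a
        # zero-field illustration, not the paper's field-dependent analysis.
        import math

        def bruggeman_2phase(sigma1, sigma2, f1):
            """Effective conductivity of a two-phase composite (S/m)."""
            f2 = 1.0 - f1
            b = (3.0 * f1 - 1.0) * sigma1 + (3.0 * f2 - 1.0) * sigma2
            return 0.25 * (b + math.sqrt(b * b + 8.0 * sigma1 * sigma2))

        # e.g. a conducting matrix with 10 vol% poorly conducting inclusions
        print(bruggeman_2phase(sigma1=1e5, sigma2=1e3, f1=0.9))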

    Equivalence of Narcissistic Personality Inventory constructs and correlates across scoring approaches and response formats

    The prevalent scoring practice for the Narcissistic Personality Inventory (NPI) ignores the forced-choice nature of its items. The aim of this study was to investigate whether findings based on NPI scores reported in previous research can be confirmed when the forced-choice nature of the NPI's original response format is appropriately modeled, and when NPI items are presented in different response formats (true/false or rating scale). The relationships between NPI facets and various criteria were robust across scoring approaches (mean score vs. model-based), but only partly robust across response formats. In addition, the scoring approaches and response formats achieved equivalent measurements of the vanity facet and, in part, of the leadership facet, but differed with respect to the entitlement facet.
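
    The prevalent (mean-score) practice contrasted above is simple to state: each forced-choice item is coded 1 when the narcissism-keyed option is chosen and 0 otherwise, and the codes are averaged per facet. The sketch below illustrates that approach; the facet/item assignments and responses are hypothetical, not the actual NPI key.

        # Mean-score NPI scoring: code the narcissism-keyed choice as 1,
        # the alternative as 0, and average per facet. Items/responses
        # below are hypothetical, not the real NPI key.
        from statistics import mean

        facets = {
            "leadership":  ["item01", "item02", "item03"],
            "vanity":      ["item04", "item05", "item06"],
            "entitlement": ["item07", "item08", "item09"],
        }
        responses = {"item01": 1, "item02": 0, "item03": 1,
                     "item04": 1, "item05": 1, "item06": 0,
                     "item07": 0, "item08": 0, "item09": 1}

        facet_scores = {name: mean(responses[i] for i in items)
                        for name, items in facets.items()}
        print(facet_scores)

    A model-based alternative would instead fit an item response model that respects the forced-choice dependency between the two options of each item, which is the comparison the study draws.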

    Universal properties of correlation transfer in integrate-and-fire neurons

    One of the fundamental characteristics of a nonlinear system is how it transfers correlations in its inputs to correlations in its outputs. This is particularly important in the nervous system, where correlations between spiking neurons are prominent. Using linear response and asymptotic methods for pairs of unconnected integrate-and-fire (IF) neurons receiving white noise inputs, we show that this correlation transfer depends on the output spike firing rate in a strong, stereotyped manner and is, surprisingly, almost independent of the interspike variance. For cells receiving heterogeneous inputs, we further show that correlation increases with the geometric mean spiking rate in the same stereotyped manner, greatly extending the generality of this relationship. We present an immediate consequence of this relationship for population coding via tuning curves.
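
    The setup described above is straightforward to reproduce numerically. The sketch below simulates two unconnected leaky integrate-and-fire neurons driven by partially shared Gaussian white noise and estimates their output spike-count correlation over 100 ms windows; all parameter values are illustrative assumptions, not those of the paper.

        # Two unconnected LIF neurons with partially shared white noise;
        # estimate the output spike-count correlation. Parameters are
        # illustrative only.
        import numpy as np

        rng = np.random.default_rng(0)
        dt, T = 1e-4, 50.0              # time step and total time (s)
        tau, v_th, v_reset = 0.02, 1.0, 0.0
        mu, sigma, c = 60.0, 6.0, 0.3   # drift, noise amplitude, input corr.

        n = int(T / dt)
        v = np.zeros(2)
        spikes = np.zeros((2, n), dtype=bool)
        for k in range(n):
            common = rng.standard_normal()
            private = rng.standard_normal(2)
            noise = np.sqrt(c) * common + np.sqrt(1.0 - c) * private
            v += dt * (-v / tau + mu) + sigma * np.sqrt(dt) * noise
            fired = v >= v_th
            spikes[:, k] = fired
            v[fired] = v_reset

        win = int(0.1 / dt)             # 100 ms counting windows
        counts = spikes[:, : n - n % win].reshape(2, -1, win).sum(axis=2)
        rho_out = np.corrcoef(counts)[0, 1]
        print(f"rate = {spikes.sum() / (2 * T):.1f} Hz, rho = {rho_out:.3f}")

    Sweeping the drift mu (and hence the firing rate) while holding the input correlation c fixed is the kind of experiment that exposes the rate dependence described above.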

    Gene prediction and verification in a compact genome with numerous small introns

    Get PDF
    The genomes of clusters of related eukaryotes are now being sequenced at an increasing rate, creating a need for accurate, low-cost annotation of exon–intron structures. In this paper, we demonstrate that reverse transcription-polymerase chain reaction (RT–PCR) and direct sequencing based on predicted gene structures satisfy this need, at least for single-celled eukaryotes. The TWINSCAN gene prediction algorithm was adapted for the fungal pathogen Cryptococcus neoformans by using a precise model of intron lengths in combination with ungapped alignments between the genome sequences of the two closely related Cryptococcus varieties. This approach resulted in ∼60% of known genes being predicted exactly right at every coding base and splice site. When previously unannotated TWINSCAN predictions were tested by RT–PCR and direct sequencing, 75% of targets spanning two predicted introns amplified and produced high-quality sequence. When targets spanning the complete predicted open reading frame were tested, 72% of them amplified and produced high-quality sequence. We conclude that sequencing a small number of expressed sequence tags (ESTs) to provide training data, running TWINSCAN on an entire genome, and then performing RT–PCR and direct sequencing on all of its predictions would be a cost-effective method for obtaining an experimentally verified genome annotation.
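
    The "∼60% predicted exactly right" figure above uses an exact-structure criterion: a prediction counts only if every coding base and splice site matches the annotation, i.e. the exon coordinate lists are identical. A sketch of how such a fraction could be computed follows; the gene IDs and coordinates are hypothetical.

        # Fraction of annotated genes whose predicted coding-exon
        # coordinates match the annotation exactly (every coding base
        # and splice site). Gene IDs and coordinates are hypothetical.
        def exact_gene_accuracy(predicted, annotated):
            """Both args: dict gene_id -> sorted tuple of (start, end)
            coding-exon intervals. Returns the exactly-correct fraction."""
            exact = sum(1 for g, exons in annotated.items()
                        if predicted.get(g) == exons)
            return exact / len(annotated)

        annotated = {"geneA": ((100, 220), (300, 410)),
                     "geneB": ((900, 1100),)}
        predicted = {"geneA": ((100, 220), (300, 410)),
                     "geneB": ((905, 1100),)}
        print(exact_gene_accuracy(predicted, annotated))  # 0.5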