
    A Phylogenomic Study of the Genus Alphavirus Employing Whole Genome Comparison

    The phylogenetics of the genus Alphavirus have historically been characterized using partial gene, single gene or partial proteomic data. We have mined cDNA and amino acid sequences from GenBank for all fully sequenced and some partially sequenced alphaviruses and generated phylogenomic analyses of the genus Alphavirus, employing capsid-encoding structural regions, non-structural coding regions and complete viral genomes. Our studies support the presence of the previously reported recombination event that produced the Western Equine Encephalitis clade, and confirm many of the patterns of geographic radiation and divergence of the multiple species. Our data suggest that the Salmon Pancreatic Disease Virus and Sleeping Disease Virus are sufficiently divergent to form a separate clade from the other alphaviruses. Also, unlike previously reported studies employing limited sequence data for correlation of phylogeny, our results indicate that the Barmah Forest Virus and Middelburg Virus appear to be members of the Semliki Forest clade. Additionally, our analysis indicates that the Southern Elephant Seal Virus is part of the Semliki Forest clade, although still phylogenetically distant from all known members of the genus Alphavirus. Finally, we demonstrate that the whole Rubella viral genome provides an ideal outgroup for phylogenomic studies of the genus Alphavirus.
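
    As a rough illustration of the GenBank sequence-mining step described in this abstract (not the authors' actual pipeline), the sketch below retrieves genome records with Biopython's Entrez utilities and writes them to FASTA for downstream alignment. The accession list and e-mail address are placeholders.

```python
# Sketch: fetching genome sequences from GenBank with Biopython.
# The accession list and email are hypothetical placeholders; the study
# mined all fully (and some partially) sequenced alphaviruses.
from Bio import Entrez, SeqIO

Entrez.email = "researcher@example.org"  # NCBI requires a contact address

accessions = ["NC_001449", "NC_003899"]  # placeholder GenBank accession IDs

records = []
for acc in accessions:
    handle = Entrez.efetch(db="nucleotide", id=acc, rettype="gb", retmode="text")
    records.append(SeqIO.read(handle, "genbank"))
    handle.close()

# Write the nucleotide sequences to FASTA for alignment and tree building
# (the phylogenomic analysis itself is not shown here).
SeqIO.write(records, "alphavirus_genomes.fasta", "fasta")
```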

    Identification of functional genetic variation in exome sequence analysis

    Recent technological advances have allowed us to study individual genomes at a base-pair resolution and have demonstrated that the average exome harbors more than 15,000 genetic variants. However, our ability to understand the biological significance of the identified variants and to connect these observed variants with phenotypes is limited. The first step in this process is to identify genetic variation that is likely to result in changes to protein structure and function, because detailed studies, either population based or functional, for each of the identified variants are not practicable. Therefore, algorithms that yield valid predictions of a variant’s functional significance are needed. Over the past decade, several programs have been developed to predict the probability that an observed sequence variant will have a deleterious effect on protein function. These algorithms range from empirical programs that classify using known biochemical properties to statistical algorithms trained using a variety of data sources, including sequence conservation data, biochemical properties, and functional data. Using data from the pilot 3 study of the 1000 Genomes Project available through Genetic Analysis Workshop 17, we compared the results of four programs (SIFT, PolyPhen, MAPP, and VarioWatch) used to predict the functional relevance of variants in 101 genes. Analysis was conducted without knowledge of the simulation model. Agreement between programs was modest, ranging from 59.4% to 71.4%, and only 3.5% of variants were classified as deleterious and 10.9% as tolerated across all four programs.
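
    The agreement statistics reported above amount to a straightforward pairwise comparison of per-variant calls plus a cross-program consensus. The sketch below shows one way to compute them; the predictions dictionary is invented for illustration and this is not the workshop's analysis code.

```python
# Sketch: pairwise agreement and consensus across variant-effect predictors.
# The predictions below are illustrative; real input would be per-variant
# calls from SIFT, PolyPhen, MAPP and VarioWatch.
from itertools import combinations

predictions = {
    "SIFT":       {"var1": "deleterious", "var2": "tolerated",   "var3": "deleterious"},
    "PolyPhen":   {"var1": "deleterious", "var2": "tolerated",   "var3": "tolerated"},
    "MAPP":       {"var1": "deleterious", "var2": "deleterious", "var3": "tolerated"},
    "VarioWatch": {"var1": "deleterious", "var2": "tolerated",   "var3": "tolerated"},
}

variants = sorted(next(iter(predictions.values())))

# Pairwise agreement: fraction of variants on which two programs give the same call.
for prog_a, prog_b in combinations(predictions, 2):
    same = sum(predictions[prog_a][v] == predictions[prog_b][v] for v in variants)
    print(f"{prog_a} vs {prog_b}: {same / len(variants):.1%} agreement")

# Consensus: variants on which all four programs agree.
for call in ("deleterious", "tolerated"):
    unanimous = [v for v in variants
                 if all(predictions[p][v] == call for p in predictions)]
    print(f"unanimously {call}: {len(unanimous) / len(variants):.1%}")
```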

    The influence of tertiary butyl hydrazine as a co-reactant on the atomic layer deposition of silver

    Ultra-thin conformal silver films are the focus of development for applications such as anti-microbial surfaces, optical components and electronic devices. In this study, metallic silver films have been deposited by direct liquid injection thermal atomic layer deposition (ALD), using (hfac)Ag(1,5-COD) ((hexafluoroacetylacetonato)silver(I)(1,5-cyclooctadiene)) as the metal source and tertiary butyl hydrazine (TBH) as a co-reactant. The process provides a 23 °C wide ‘self-limiting’ ALD temperature window between 105 and 128 °C, which is significantly wider than is achievable using alcohol as a co-reactant. A mass deposition rate of ∼20 ng/cm2/cycle (∼0.18 Å/cycle) is observed under self-limiting growth conditions. The resulting films are crystalline metallic silver with a near-planar, film-like morphology and are electrically conductive. By extending the temperature range of the ALD window through the use of TBH as a co-reactant, it is envisaged that the process will be exploitable in a range of new low temperature applications.
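
    The relationship between the quoted mass deposition rate and the approximate growth per cycle follows from the density of silver; the short calculation below is a back-of-the-envelope check, assuming the bulk value of about 10.49 g/cm³, and is not part of the study.

```python
# Back-of-the-envelope: convert an ALD mass gain per cycle into a thickness
# per cycle, assuming the film has the bulk density of silver.
mass_per_cycle_ng_cm2 = 20.0   # ng/cm^2/cycle (from the abstract)
density_ag_g_cm3 = 10.49       # bulk silver density, g/cm^3 (assumed)

mass_per_cycle_g_cm2 = mass_per_cycle_ng_cm2 * 1e-9     # ng -> g
thickness_cm = mass_per_cycle_g_cm2 / density_ag_g_cm3  # cm per cycle
thickness_angstrom = thickness_cm * 1e8                 # 1 cm = 1e8 Å

print(f"{thickness_angstrom:.2f} Å/cycle")  # ~0.19 Å/cycle
```

    The small difference from the quoted ∼0.18 Å/cycle would be consistent with a film density slightly below that of bulk silver.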

    Measuring Biodiversity and Extinction – Present and Past

    How biodiversity is changing in our time represents a major concern for all organismal biologists. Anthropogenic changes to our planet are decreasing species diversity through the negative effects of pollution, habitat destruction, direct extirpation of species, and climate change. But major biotic changes – including those that have both increased and decreased species diversity – have happened before in Earth’s history. Biodiversity dynamics in past eras provide important context to understand ecological responses to current environmental change. The work of assessing biodiversity is woven into ecology, environmental science, conservation, paleontology, phylogenetics, evolutionary and developmental biology, and many other disciplines; yet, the absolute foundation of how we measure species diversity depends on taxonomy and systematics. The aspiration of this symposium, and complementary contributed talks, was to promote better understanding of our common goals and encourage future interdisciplinary discussion of biodiversity dynamics. The contributions in this collection of papers bring together a diverse group of speakers to confront several important themes. How can biologists best respond to the urgent need to identify and conserve diversity? How can we better communicate the nature of species across scientific disciplines? Where are the major gaps in knowledge about the diversity of living animal and plant groups, and what are the implications for understanding potential diversity loss? How can we effectively use the fossil record of past diversity and extinction to understand current biodiversity loss?

    Cannabidiol regulation of learned fear: implications for treating anxiety-related disorders

    Anxiety and trauma-related disorders are psychiatric diseases with a lifetime prevalence of up to 25%. Phobias and post-traumatic stress disorder (PTSD) are characterized by abnormal and persistent memories of fear-related contexts and cues. The effects of psychological treatments such as exposure therapy are often only temporary and medications can be ineffective and have adverse side effects. Growing evidence from human and animal studies indicates that cannabidiol, the main non-psychotomimetic phytocannabinoid present in Cannabis sativa, alleviates anxiety in paradigms assessing innate fear. More recently, the effects of cannabidiol on learned fear have been investigated in preclinical studies with translational relevance for phobias and PTSD. Here we review the findings from these studies, with an emphasis on cannabidiol regulation of contextual fear. The evidence indicates that cannabidiol reduces learned fear in different ways: (1) cannabidiol decreases fear expression acutely, (2) cannabidiol disrupts memory reconsolidation, leading to sustained fear attenuation upon memory retrieval, and (3) cannabidiol enhances extinction, the psychological process by which exposure therapy inhibits learned fear. We also present novel data on cannabidiol regulation of learned fear related to explicit cues, which indicates that auditory fear expression is also reduced acutely by cannabidiol. We conclude by outlining future directions for research to elucidate the neural circuit, psychological, cellular, and molecular mechanisms underlying the regulation of fear memory processing by cannabidiol. This line of investigation may lead to the development of cannabidiol as a novel therapeutic approach for treating anxiety and trauma-related disorders such as phobias and PTSD in the future.

    Preparation of name and address data for record linkage using hidden Markov models

    BACKGROUND: Record linkage refers to the process of joining records that relate to the same entity or event in one or more data collections. In the absence of a shared, unique key, record linkage involves the comparison of ensembles of partially-identifying, non-unique data items between pairs of records. Data items with variable formats, such as names and addresses, need to be transformed and normalised in order to validly carry out these comparisons. Traditionally, deterministic rule-based data processing systems have been used to carry out this pre-processing, which is commonly referred to as "standardisation". This paper describes an alternative approach to standardisation, using a combination of lexicon-based tokenisation and probabilistic hidden Markov models (HMMs). METHODS: HMMs were trained to standardise typical Australian name and address data drawn from a range of health data collections. The accuracy of the results was compared to that produced by rule-based systems. RESULTS: Training of HMMs was found to be quick and did not require any specialised skills. For addresses, HMMs produced equal or better standardisation accuracy than a widely-used rule-based system. However, accuracy was worse when HMMs were used with simpler name data. Possible reasons for this poorer performance are discussed. CONCLUSION: Lexicon-based tokenisation and HMMs provide a viable and effort-effective alternative to rule-based systems for pre-processing more complex variably formatted data such as addresses. Further work is required to improve the performance of this approach with simpler data such as names. Software which implements the methods described in this paper is freely available under an open source license for other researchers to use and improve.
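
    As a minimal sketch of the lexicon-based tokenisation plus HMM labelling idea (not the software released with the paper), the snippet below assigns hidden-state labels such as street number, street name and suburb to address tokens with a toy Viterbi decoder. The states, probabilities and lexicon entries are all invented for illustration; a trained model would estimate them from labelled training data.

```python
# Toy sketch of HMM-based address standardisation: tokenise, then use a
# Viterbi decoder to label each token with a hidden state. All states,
# probabilities and lexicon entries below are invented for illustration.
import math

states = ["number", "street_name", "street_type", "suburb"]

# Start and transition probabilities (hypothetical, not trained).
start = {"number": 0.7, "street_name": 0.2, "street_type": 0.05, "suburb": 0.05}
trans = {
    "number":      {"number": 0.05, "street_name": 0.85, "street_type": 0.05, "suburb": 0.05},
    "street_name": {"number": 0.02, "street_name": 0.28, "street_type": 0.60, "suburb": 0.10},
    "street_type": {"number": 0.02, "street_name": 0.03, "street_type": 0.05, "suburb": 0.90},
    "suburb":      {"number": 0.05, "street_name": 0.05, "street_type": 0.05, "suburb": 0.85},
}

def emit(state, token):
    """Lexicon-based emission probability of a token given a state."""
    street_types = {"st", "street", "rd", "road", "ave", "avenue"}
    if token.isdigit():
        return 0.9 if state == "number" else 0.02
    if token in street_types:
        return 0.9 if state == "street_type" else 0.02
    return 0.4 if state in ("street_name", "suburb") else 0.05

def viterbi(tokens):
    """Return the most likely state sequence for a list of tokens."""
    V = [{s: math.log(start[s]) + math.log(emit(s, tokens[0])) for s in states}]
    back = []
    for tok in tokens[1:]:
        scores, ptr = {}, {}
        for s in states:
            prev, score = max(
                ((p, V[-1][p] + math.log(trans[p][s])) for p in states),
                key=lambda x: x[1],
            )
            scores[s] = score + math.log(emit(s, tok))
            ptr[s] = prev
        V.append(scores)
        back.append(ptr)
    # Trace back the best path from the highest-scoring final state.
    best = max(V[-1], key=V[-1].get)
    path = [best]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

tokens = "42 wattle st northcote".lower().split()
print(list(zip(tokens, viterbi(tokens))))
# -> [('42', 'number'), ('wattle', 'street_name'), ('st', 'street_type'), ('northcote', 'suburb')]
```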