
    Novel Inducers of the Envelope Stress Response BaeSR in Salmonella Typhimurium: BaeR Is Critically Required for Tungstate Waste Disposal

    The RpoE and CpxR regulated envelope stress responses are extremely important for Salmonella Typhimurium to cause infection in a range of hosts. Until now, the role of BaeSR in both the Salmonella Typhimurium response to stress and its contribution to infection has not been fully elucidated. Here we demonstrate stationary-phase growth, iron and sodium tungstate to be novel inducers of the BaeR regulon, with BaeR critically required for Salmonella resistance to sodium tungstate. We show that functional overlap exists between the resistance nodulation-cell division (RND) multidrug transporters MdtA, AcrD and AcrB for the disposal of tungstate waste from the cell. We also point to a role for enterobactin siderophores in the protection of enteric organisms from tungstate, akin to the scenario in nitrogen-fixing bacteria. Surprisingly, BaeR is the first envelope stress response pathway investigated in S. Typhimurium that is not required for murine typhoid in either ityS or ityR mouse backgrounds. BaeR is therefore either required for survival in larger mammals such as pigs or calves, in an avian host such as chickens, or for survival outwith the host altogether, where Salmonella and related enterics must persist in soil and water.

    Genome-Wide Discovery of Putative sRNAs in Paracoccus denitrificans Expressed under Nitrous Oxide Emitting Conditions

    Nitrous oxide (N2O) is a stable, ozone-depleting greenhouse gas. Emissions of N2O into the atmosphere continue to rise, primarily due to the use of nitrogen-containing fertilizers by soil denitrifying microbes. It is clear that more effective mitigation strategies are required to reduce emissions. One way to help develop future mitigation strategies is to address the currently poor understanding of the transcriptional regulation of the enzymes used to produce and consume N2O. With this ultimate aim in mind, we performed RNA-seq on a model soil denitrifier, Paracoccus denitrificans, cultured anaerobically under high-N2O and low-N2O emitting conditions, and aerobically under zero-N2O emitting conditions, to identify small RNAs (sRNAs) with potential regulatory functions transcribed under these conditions. sRNAs are short (∼40–500 nucleotides) non-coding RNAs that regulate a wide range of activities in many bacteria. One hundred and sixty-seven sRNAs were identified throughout the P. denitrificans genome, located either in intergenic regions or antisense to ORFs. Furthermore, many of these sRNAs are differentially expressed between high-N2O and low-N2O emitting conditions, suggesting they may play a role in the production or reduction of N2O. Expression of 16 of these sRNAs has been confirmed by RT-PCR. Ninety percent of the sRNAs are predicted to form secondary structures. Predicted targets include transporters and a number of transcriptional regulators. A number of the sRNAs are conserved in other members of the α-proteobacteria. A better understanding of the sRNA factors which contribute to expression of the machinery required to reduce N2O will, in turn, help to inform strategies for mitigation of N2O emissions.

    Independent evolution of neurotoxin and flagellar genetic loci in proteolytic Clostridium botulinum

    Background: Proteolytic Clostridium botulinum is the causative agent of botulism, a severe neuroparalytic illness. Given the severity of botulism, surprisingly little is known of the population structure, biology, phylogeny or evolution of C. botulinum. The recent determination of the genome sequence of C. botulinum has allowed comparative genomic indexing using a DNA microarray.
    Results: Whole genome microarray analysis revealed that 63% of the coding sequences (CDSs) present in reference strain ATCC 3502 were common to all 61 widely representative strains of proteolytic C. botulinum and the closely related C. sporogenes tested. This indicates a relatively stable genome. There was, however, evidence for recombination and genetic exchange, in particular within the neurotoxin gene and cluster (including transfer of neurotoxin genes to C. sporogenes), and the flagellar glycosylation island (FGI). These two loci appear to have evolved independently from each other, and from the remainder of the genetic complement. A number of strains were atypical; for example, while 10 out of 14 strains that formed type A1 toxin gave almost identical profiles in whole genome, neurotoxin cluster and FGI analyses, the other four strains showed divergent properties. Furthermore, a new neurotoxin sub-type (A5) has been discovered in strains from heroin-associated wound botulism cases. For the first time, differences in glycosylation profiles of the flagella could be linked to differences in the gene content of the FGI.
    Conclusion: Proteolytic C. botulinum has a stable genome backbone containing specific regions of genetic heterogeneity. These include the neurotoxin gene cluster and the FGI, each having evolved independently of each other and the remainder of the genetic complement. Analysis of these genetic components provides a high degree of discrimination of strains of proteolytic C. botulinum, and is suitable for clinical and forensic investigations of botulism outbreaks.

    A systematic policy approach to changing the food system and physical activity environments to prevent obesity

    As obesity prevention becomes an increasing health priority in many countries, including Australia and New Zealand, the challenge that governments are now facing is how to adopt a systematic policy approach to increase healthy eating and regular physical activity. This article sets out a structure for systematically identifying areas for obesity prevention policy action across the food system and full range of physical activity environments. Areas amenable to policy intervention can be systematically identified by considering policy opportunities for each level of governance (local, state, national, international and organisational) in each sector of the food system (primary production, food processing, distribution, marketing, retail, catering and food service) and each sector that influences physical activity environments (infrastructure and planning, education, employment, transport, sport and recreation). Analysis grids are used to illustrate, in a structured fashion, the broad array of areas amenable to legal and regulatory intervention across all levels of governance and all relevant sectors. In the Australian context, potential regulatory policy intervention areas are widespread throughout the food system, e.g., land-use zoning (primary production within local government), food safety (food processing within state government), food labelling (retail within national government). Policy areas for influencing physical activity are predominantly local and state government responsibilities including, for example, walking and cycling environments (infrastructure and planning sector) and physical activity education in schools (education sector). The analysis structure presented in this article provides a tool to systematically identify policy gaps, barriers and opportunities for obesity prevention, as part of the process of developing and implementing a comprehensive obesity prevention strategy. 
It also serves to highlight the need for a coordinated approach to policy development and implementation across all levels of government in order to ensure complementary policy action.

    BABAR: an R package to simplify the normalisation of common reference design microarray-based transcriptomic datasets

    Background: The development of DNA microarrays has facilitated the generation of hundreds of thousands of transcriptomic datasets. The use of a common reference microarray design allows existing transcriptomic data to be readily compared and re-analysed in the light of new data, and the combination of this design with large datasets is ideal for 'systems'-level analyses. One issue is that these datasets are typically collected over many years and may be heterogeneous in nature, containing different microarray file formats and gene array layouts, dye-swaps, and varying scales of log2 ratios of expression between microarrays. Excellent software exists for the normalisation and analysis of microarray data, but many datasets have yet to be analysed because existing methods struggle with heterogeneous datasets; options include normalising microarrays on an individual or experimental-group basis. Our solution was to develop the Batch Anti-Banana Algorithm in R (BABAR) software package, which uses cyclic loess to normalise across the complete dataset. We have already used BABAR to analyse the function of Salmonella genes involved in the process of infection of mammalian cells. Results: The only input required by BABAR is unprocessed GenePix or BlueFuse microarray data files. BABAR provides a combination of 'within' and 'between' microarray normalisation steps and diagnostic boxplots. When applied to a real heterogeneous dataset, BABAR normalised the dataset to produce a comparable scaling between the microarrays, with the microarray data in excellent agreement with RT-PCR analysis. When applied to a real non-heterogeneous dataset and a simulated dataset, BABAR's performance in identifying differentially expressed genes showed some benefits over standard techniques.
    Conclusions: BABAR is an easy-to-use software tool, simplifying the simultaneous normalisation of heterogeneous two-colour common reference design cDNA microarray-based transcriptomic datasets. We show that BABAR transforms real and simulated datasets to allow the correct interpretation of these data, and is an ideal tool to facilitate the identification of differentially expressed genes or network inference analysis from transcriptomic datasets.
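    The cyclic pairwise normalisation that BABAR applies can be sketched as follows. This is a simplified illustration in Python, not BABAR's actual R implementation: a crude local-mean smoother stands in for a full loess fit, and all function and variable names here are our own.

    ```python
    import numpy as np

    def smooth(a, m, frac=0.3):
        """Crude local-mean smoother standing in for loess: for each point,
        average M over a window of the frac-nearest neighbours in A."""
        n = len(a)
        k = max(1, int(frac * n))
        order = np.argsort(a)
        m_sorted = m[order]
        out = np.empty(n)
        for idx in range(n):
            lo = max(0, idx - k // 2)
            hi = min(n, lo + k)
            out[idx] = m_sorted[lo:hi].mean()
        result = np.empty(n)
        result[order] = out  # map smoothed values back to original order
        return result

    def cyclic_normalise(arrays, n_cycles=3):
        """Cyclic pairwise normalisation of log2 expression vectors:
        repeatedly estimate and remove the smoothed M-vs-A trend between
        every pair of arrays, splitting the correction between the two."""
        x = [a.astype(float).copy() for a in arrays]
        for _ in range(n_cycles):
            for i in range(len(x)):
                for j in range(i + 1, len(x)):
                    m = x[i] - x[j]            # log ratio between the pair
                    a = 0.5 * (x[i] + x[j])    # average log intensity
                    trend = smooth(a, m)
                    x[i] -= trend / 2
                    x[j] += trend / 2
        return x
    ```

    After normalisation, a constant scaling offset between two arrays is removed, bringing their distributions onto a comparable scale, which is the behaviour the abstract describes for the complete dataset.
    
    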

    Recommendation of short tandem repeat profiling for authenticating human cell lines, stem cells, and tissues

    Cell misidentification and cross-contamination have plagued biomedical research for as long as cells have been employed as research tools. Examples of misidentified cell lines continue to surface to this day. Efforts to eradicate the problem by raising awareness of the issue and by asking scientists voluntarily to take appropriate actions have not been successful. Unambiguous cell authentication is an essential step in the scientific process and should be an inherent consideration during peer review of papers submitted for publication or during review of grants submitted for funding. In order to facilitate proper identity testing, accurate, reliable, inexpensive, and standardized methods for authentication of cells and cell lines must be made available. To this end, an international team of scientists is, at this time, preparing a consensus standard on the authentication of human cells using short tandem repeat (STR) profiling. This standard, which will be submitted for review and approval as an American National Standard by the American National Standards Institute, will provide investigators guidance on the use of STR profiling for authenticating human cell lines. Such guidance will include methodological detail on the preparation of the DNA sample, the appropriate numbers and types of loci to be evaluated, and the interpretation and quality control of the results. Associated with the standard itself will be the establishment and maintenance of a public STR profile database under the auspices of the National Center for Biotechnology Information. The consensus standard is anticipated to be adopted by granting agencies and scientific journals as appropriate methodology for authenticating human cell lines, stem cells, and tissues.

    A stable genetic polymorphism underpinning microbial syntrophy

    Syntrophies are metabolic cooperations whereby two organisms co-metabolize a substrate in an interdependent manner. Many of the observed natural syntrophic interactions are mandatory in the absence of strong electron acceptors, such that one species in the syntrophy has to assume the role of electron sink for the other. While this presents an ecological setting in which syntrophy is beneficial, the potential genetic drivers of syntrophy remain unknown to date. Here we show that the syntrophic sulfate-reducing species Desulfovibrio vulgaris displays a stable genetic polymorphism in which only a specific genotype is able to engage in syntrophy with the hydrogenotrophic methanogen Methanococcus maripaludis. This 'syntrophic' genotype is characterized by two genetic alterations, one of which is an in-frame deletion in the gene encoding the ion-translocating subunit CooK of the membrane-bound Coo hydrogenase. We show that this genotype displays a specific physiology, in which a reshaping of energy conservation in the lactate oxidation pathway enables it to produce sufficient intermediate hydrogen to sustain M. maripaludis growth and, thus, syntrophy. To our knowledge, these findings provide for the first time a genetic basis for syntrophy in nature and bring us closer to the rational engineering of syntrophy in synthetic microbial communities.

    The History and Prehistory of Natural-Language Semantics

    Contemporary natural-language semantics began with the assumption that the meaning of a sentence could be modeled by a single truth condition, or by an entity with a truth-condition. But with the recent explosion of dynamic semantics and pragmatics and of work on non-truth-conditional dimensions of linguistic meaning, we are now in the midst of a shift away from a truth-condition-centric view and toward the idea that a sentence’s meaning must be spelled out in terms of its various roles in conversation. This communicative turn in semantics raises historical questions: Why was truth-conditional semantics dominant in the first place, and why were the phenomena now driving the communicative turn initially ignored or misunderstood by truth-conditional semanticists? I offer a historical answer to both questions. The history of natural-language semantics—springing from the work of Donald Davidson and Richard Montague—began with a methodological toolkit that Frege, Tarski, Carnap, and others had created to better understand artificial languages. For them, the study of linguistic meaning was subservient to other explanatory goals in logic, philosophy, and the foundations of mathematics, and this subservience was reflected in the fact that they idealized away from all aspects of meaning that get in the way of a one-to-one correspondence between sentences and truth-conditions. The truth-conditional beginnings of natural-language semantics are best explained by the fact that, upon turning their attention to the empirical study of natural language, Davidson and Montague adopted the methodological toolkit assembled by Frege, Tarski, and Carnap and, along with it, their idealization away from non-truth-conditional semantic phenomena. But this pivot in explanatory priorities toward natural language itself rendered the adoption of the truth-conditional idealization inappropriate. Lifting the truth-conditional idealization has forced semanticists to upend the conception of linguistic meaning that was originally embodied in their methodology.