
    Effect of Pyrolysis on the Removal of Antibiotic Resistance Genes and Class I Integrons from Municipal Wastewater Biosolids

    Wastewater biosolids represent a significant reservoir of antibiotic resistance genes (ARGs). While current biosolids treatment technologies can reduce ARG levels in residual wastewater biosolids, observed removal rates vary substantially. Pyrolysis is an anoxic thermal degradation process that can be used to convert biosolids into energy-rich products, including py-gas and py-oil, and a beneficial soil amendment, biochar. Batch pyrolysis experiments conducted on municipal biosolids revealed that the 16S rRNA gene, the ARGs erm(B), sul1, tet(L), and tet(O), and the integrase gene of class 1 integrons (intI1) were significantly reduced at pyrolysis temperatures ranging from 300 to 700 °C, as determined by quantitative polymerase chain reaction (qPCR). Pyrolysis of biosolids at 500 °C and higher resulted in approximately 6-log removal of the bacterial 16S rRNA gene. The ARGs with the highest observed removals were sul1 and tet(O), with reductions of 4.62 and 4.04 log, respectively. Pyrolysis residence time had a significant impact on 16S rRNA, ARG, and intI1 levels; a residence time of 5 minutes at 500 °C reduced all genes to below detection limits. These results demonstrate that pyrolysis could be implemented as a biosolids polishing treatment to substantially decrease the abundance of total bacteria (i.e., 16S rRNA), ARGs, and intI1 prior to land application of municipal biosolids.
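
    As a quick arithmetic aside, the "log removal" reported above is just the base-10 logarithm of the ratio of gene copies before and after treatment. A minimal sketch in Python, with hypothetical qPCR copy numbers chosen purely for illustration (none are taken from the study):

```python
import math

def log_removal(copies_before: float, copies_after: float) -> float:
    """Log10 removal of a gene target, e.g. from qPCR copy numbers
    (copies per gram of dry biosolids before and after pyrolysis)."""
    return math.log10(copies_before / copies_after)

# Hypothetical values for illustration only (not from the study):
# 1e9 copies/g before treatment, 1e3 copies/g after 500 degC pyrolysis.
print(log_removal(1e9, 1e3))  # -> 6.0, i.e. a "6-log" removal
```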

    Stability of Noisy Metropolis-Hastings

    Pseudo-marginal Markov chain Monte Carlo methods for sampling from intractable distributions have gained recent interest and have been theoretically studied in considerable depth. Their main appeal is that they are exact, in the sense that the correct target is the marginal of their invariant distribution. However, the pseudo-marginal Markov chain can exhibit poor mixing and slow convergence towards its target. As an alternative, a subtly different Markov chain can be simulated, where better mixing is possible but the exactness property is sacrificed. This is the noisy algorithm, initially conceptualised as Monte Carlo within Metropolis (MCWM), which has also been studied, but to a lesser extent. The present article provides a further characterisation of the noisy algorithm, with a focus on fundamental stability properties such as positive recurrence and geometric ergodicity. Sufficient conditions for inheriting geometric ergodicity from a standard Metropolis-Hastings chain are given, as well as conditions for convergence of the invariant distribution towards the true target distribution.
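
    To make the distinction concrete, below is a minimal, self-contained sketch of the noisy algorithm (MCWM) on a toy one-dimensional target. The target, the noise model, and all tuning values are assumptions for illustration; the defining feature, re-estimating the likelihood at both the current and proposed points on every iteration, is the one described above:

```python
import numpy as np

rng = np.random.default_rng(1)

def lik_estimate(theta, n_mc=20):
    """Non-negative, unbiased estimate of an 'intractable' likelihood.
    Toy stand-in: the exact N(0,1) density times positive mean-one noise."""
    exact = np.exp(-0.5 * theta**2)
    noise = rng.exponential(1.0, size=n_mc).mean()  # E[noise] = 1
    return exact * noise

def noisy_mh(n_iters=5000, step=1.0):
    """Monte Carlo within Metropolis (MCWM): the likelihood is freshly
    re-estimated at BOTH the current and proposed points on every
    iteration, so the invariant law is only approximately the target.
    (Pseudo-marginal MCMC would instead reuse the estimate stored for
    the current state, preserving exactness at the cost of mixing.)"""
    theta = 0.0
    chain = np.empty(n_iters)
    for i in range(n_iters):
        prop = theta + step * rng.normal()
        # Accept with probability min(1, L_hat(prop) / L_hat(theta)),
        # flat prior and symmetric proposal assumed.
        if lik_estimate(prop) > rng.uniform() * lik_estimate(theta):
            theta = prop
        chain[i] = theta
    return chain

samples = noisy_mh()
print(samples.mean(), samples.std())  # roughly 0 and 1 if the noise is mild
```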

    Global consensus Monte Carlo

    To conduct Bayesian inference with large data sets, it is often convenient or necessary to distribute the data across multiple machines. We consider a likelihood function expressed as a product of terms, each associated with a subset of the data. Inspired by global variable consensus optimisation, we introduce an instrumental hierarchical model that associates an auxiliary statistical parameter with each term; these auxiliary parameters are conditionally independent given the top-level parameters, one of which controls their unconditional strength of association. This model leads to a distributed MCMC algorithm on an extended state space yielding approximations of posterior expectations. A trade-off between computational tractability and fidelity to the original model can be controlled by changing the association strength in the instrumental model. We further propose the use of an SMC (sequential Monte Carlo) sampler with a sequence of association strengths, allowing both the automatic determination of appropriate strengths and the application of a bias correction technique. In contrast to similar distributed Monte Carlo algorithms, this approach requires few distributional assumptions. The performance of the algorithms is illustrated with a number of simulated examples.
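
    A minimal sketch of the kind of Gibbs-style distributed sampler such an instrumental model enables, under purely illustrative assumptions (Gaussian data blocks, a flat top-level prior, and one fixed association strength lam; the SMC extension over a sequence of strengths is not shown):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: J machines each hold one block of Gaussian data.
J, n, sigma = 4, 250, 1.0
blocks = [rng.normal(0.7, sigma, size=n) for _ in range(J)]

def consensus_gibbs(blocks, lam, n_iters=2000):
    """Gibbs sampler for the instrumental model
        theta (flat prior),  z_j | theta ~ N(theta, lam^2),
        y_ji | z_j ~ N(z_j, sigma^2).
    Smaller lam gives higher fidelity to the original (lam -> 0) model
    but slower mixing: the tractability/fidelity trade-off above."""
    J = len(blocks)
    theta, thetas = 0.0, np.empty(n_iters)
    for i in range(n_iters):
        z = np.empty(J)
        for j, y in enumerate(blocks):  # would run on worker machines
            prec = len(y) / sigma**2 + 1.0 / lam**2
            mean = (y.sum() / sigma**2 + theta / lam**2) / prec
            z[j] = rng.normal(mean, prec**-0.5)
        theta = rng.normal(z.mean(), lam / np.sqrt(J))  # master update
        thetas[i] = theta
    return thetas

print(consensus_gibbs(blocks, lam=0.1)[500:].mean())  # near the data mean
```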

    Non-Natural Nucleotides As Probes For The Mechanism And Fidelity Of DNA Polymerases

    Get PDF
    DNA is a remarkable macromolecule that functions primarily as the carrier of the genetic information of organisms ranging from viruses to bacteria to eukaryotes. The ability of DNA polymerases to efficiently and accurately replicate genetic material represents one of the most fundamental yet complex biological processes found in nature. The central dogma of DNA polymerization is that the efficiency and fidelity of this biological process is dependent upon proper hydrogen-bonding interactions between an incoming nucleotide and its templating partner. However, the foundation of this dogma has been recently challenged by the demonstration that DNA polymerases can effectively and, in some cases, selectively incorporate non-natural nucleotides lacking classic hydrogen-bonding capabilities into DNA. In this review, we describe the results of several laboratories that have employed a variety of non-natural nucleotide analogs to decipher the molecular mechanism of DNA polymerization. The use of various non-natural nucleotides has lead to the development of several different models that can explain how efficient DNA synthesis can occur in the absence of hydrogen-bonding interactions. These models include the influence of steric fit and shape complementarity, hydrophobicity and solvation energies, base-stacking capabilities, and negative selection as alternatives to rules invoking simple recognition of hydrogen-bonding patterns. Discussions are also provided regarding how the kinetics of primer extension and exonuclease proofreading activities associated with high-fidelity DNA polymerases are influenced by the absence of hydrogen-bonding functional groups exhibited by non-natural nucleotides

    Near-Optimal Evasion of Convex-Inducing Classifiers

    Classifiers are often used to detect miscreant activities. We study how an adversary can efficiently query a classifier to elicit information that allows the adversary to evade detection at near-minimal cost. We generalize results of Lowd and Meek (2005) to convex-inducing classifiers. We present algorithms that construct undetected instances of near-minimal cost using only polynomially many queries in the dimension of the space and without reverse engineering the decision boundary.
    Comment: 8 pages; to appear at AISTATS 2010
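
    The following toy sketch illustrates the basic membership-query primitive behind such attacks, a binary search along the segment between a detected and an undetected instance; the classifier and the points are hypothetical and chosen only for illustration, not taken from the paper's algorithms:

```python
import numpy as np

def boundary_search(classify, x_malicious, x_benign, tol=1e-6):
    """Binary search along the segment between a detected (positive)
    instance and a known undetected (negative) instance, using only
    membership queries to `classify`. Returns an undetected point within
    `tol` of the decision boundary along this segment, in O(log(1/tol))
    queries per searched direction."""
    lo, hi = 0.0, 1.0  # lo: classified positive, hi: classified negative
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        x = (1 - mid) * x_malicious + mid * x_benign
        if classify(x):   # still detected
            lo = mid
        else:             # evades detection
            hi = mid
    return (1 - hi) * x_malicious + hi * x_benign

# Toy convex-inducing classifier: flags everything inside the unit ball.
inside_ball = lambda x: float(np.dot(x, x)) < 1.0
x_evade = boundary_search(inside_ball, np.zeros(3), np.full(3, 2.0))
print(x_evade, inside_ball(x_evade))  # just outside the ball -> False
```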

    Generalizing the German Tank Problem

    The German Tank Problem dates back to World War II, when the Allies used a statistical approach to estimate the number of enemy tanks produced or fielded from serial numbers observed after battles. Assuming the tanks are labeled consecutively starting from 1, if we observe k tanks out of a total of N, with the maximum observed serial number being m, then the best estimate for N is m(1 + 1/k) − 1. We explore many generalizations. We examined the discrete and continuous one-dimensional cases. We explored other estimators, such as the Lth largest observed tank, and, drawing motivation from portfolio theory, studied a weighted average; however, the original formula remained the best. We generalized the problem to two dimensions, with pairs instead of points, studying the discrete and continuous square and circle variants. Complications arose from curvature and from the fact that not every integer is representable as a sum of two squares. We often concentrated on the large-N limit. For the discrete and continuous square, we tested various statistics and found that the largest observed component did best; the scaling factor in both cases is (2k + 1)/(2k). The discrete circle case was especially involved because we had to use approximation formulas for the number of lattice points inside the circle; interestingly, the scaling factors differed between the discrete and continuous cases. Lastly, we generalized the problem to L-dimensional squares and circles. The discrete and continuous square proved similar to the two-dimensional square problem. For the L-dimensional circle, however, we had to use formulas for the volume of the L-ball and approximate the number of lattice points inside it. The formulas for the discrete circle were particularly interesting, as they had no L dependence.
    Comment: Version 1.0, 47 pages
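
    The one-dimensional estimator quoted above is easy to check by simulation. A minimal sketch, with sample sizes chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

def tank_estimate(sample):
    """Classic one-dimensional estimator m(1 + 1/k) - 1, where m is the
    largest serial number among the k observed tanks."""
    k, m = len(sample), max(sample)
    return m * (1 + 1 / k) - 1

# Check (near-)unbiasedness by simulation: N = 1000 tanks, k = 15 observed.
N, k, trials = 1000, 15, 20000
ests = [tank_estimate(rng.choice(N, size=k, replace=False) + 1)
        for _ in range(trials)]
print(np.mean(ests))  # close to 1000
```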

    Adenosine Triphosphate-Dependent Degradation of A Fluorescent λ N Substrate Mimic by Lon Protease

    Escherichia coli Lon exhibits a varying degree of energy requirement toward the hydrolysis of different substrates. Efficient degradation of protein substrates requires the binding and hydrolysis of ATP, such that the intrinsic ATPase of Lon is enhanced during protein degradation. Degradation of synthetic tetrapeptides, by contrast, is achieved solely by ATP binding, with concomitant inhibition of the ATPase activity. In this study, a synthetic peptide (FRETN 89-98), containing residues 89–98 of λ N protein together with a fluorescence donor (anthranilamide) and quencher (3-nitrotyrosine), has been examined for ATP-dependent degradation by E. coli and human Lon proteases. The cleavage profile of FRETN 89-98 by E. coli Lon resembles that of λ N degradation: both the peptide and the protein substrate are specifically cleaved between Cys93 and Ser94 with concomitant stimulation of Lon's ATPase activity. Furthermore, the degradation of FRETN 89-98 is supported by ATP and AMPPNP but not by ATPγS or AMPPCP. FRETN 89-98 hydrolysis is eight times more efficient in the presence of 0.5 mM ATP than 0.5 mM AMPPNP at 86 μM peptide. The ATP-dependent hydrolysis of FRETN 89-98 displays sigmoidal kinetics; the kcat, [S]0.5, and Hill coefficient of FRETN 89-98 degradation are 3.2 ± 0.3 s⁻¹, 106 ± 21 μM, and 1.6, respectively.
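
    Given the sigmoidal kinetics, the reported parameters plug directly into a Hill-type rate law. A minimal sketch using the quoted values (the functional form is a standard assumption for sigmoidal enzyme kinetics, not a fit taken from the paper):

```python
def hill_rate(S, kcat=3.2, S_half=106.0, n=1.6):
    """Hill-equation turnover for sigmoidal kinetics:
        v/[E] = kcat * S**n / (S_half**n + S**n)
    Defaults are the reported values for ATP-dependent FRETN 89-98
    degradation: kcat = 3.2 s^-1, [S]0.5 = 106 uM, Hill coefficient 1.6.
    S is the peptide concentration in uM."""
    return kcat * S**n / (S_half**n + S**n)

# Turnover at the 86 uM peptide used in the ATP vs. AMPPNP comparison:
print(hill_rate(86.0))  # ~1.3 s^-1, below half-maximal turnover
```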

    Fluorescent Analysis of Translesion DNA Synthesis by Using A Novel, Non-natural Nucleotide Analogue

    The replication of damaged DNA is a promutagenic process that can lead to disease development. This report evaluates the dynamics of nucleotide incorporation opposite an abasic site, a commonly formed DNA lesion, by using two fluorescent nucleotide analogues, 2-aminopurine deoxyribose triphosphate (2-APTP) and 5-phenylindole deoxyribose triphosphate (5-PhITP). In both cases, the kinetics of incorporation were compared by using a ³²P-radiolabel extension assay versus a fluorescence-quenching assay. Although 2-APTP is efficiently incorporated opposite a templating nucleobase (thymine), the kinetics of incorporation opposite an abasic site are significantly slower. The lower catalytic efficiency hinders its use as a probe to study translesion DNA synthesis. In contrast, the rate constant for the incorporation of 5-PhITP opposite the DNA lesion is 100-fold higher than that for 2-APTP. Nearly identical kinetic parameters are obtained from the fluorescence-quenching and ³²P-radiolabel assays. Surprisingly, distinct differences in the kinetics of 5-PhITP incorporation opposite the DNA lesion are detected when using either bacteriophage T4 DNA polymerase or the Escherichia coli Klenow fragment. These differences suggest that the dynamics of nucleotide incorporation opposite an abasic site are polymerase-dependent. Collectively, these data indicate that 5-PhITP can be used to perform real-time analyses of translesion DNA synthesis as well as to functionally probe differences in polymerase function.
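
    For a flavor of how an observed rate constant might be extracted from a fluorescence-quenching time course, here is a minimal curve-fitting sketch; the single-exponential form and the synthetic data are assumptions for illustration, not the paper's actual data or analysis:

```python
import numpy as np
from scipy.optimize import curve_fit

def single_exponential(t, amplitude, k_obs, offset):
    """Fluorescence decay of the form F(t) = A*exp(-k_obs*t) + C, a
    common model for quenching upon incorporation of an analogue
    such as 5-PhITP."""
    return amplitude * np.exp(-k_obs * t) + offset

# Hypothetical time course (seconds, arbitrary fluorescence units);
# real data would come from a stopped-flow fluorescence instrument.
rng = np.random.default_rng(4)
t = np.linspace(0, 2, 50)
F = single_exponential(t, 1.0, 3.0, 0.2) + 0.01 * rng.normal(size=t.size)

(amplitude, k_obs, offset), _ = curve_fit(single_exponential, t, F,
                                          p0=(1.0, 1.0, 0.0))
print(f"k_obs = {k_obs:.2f} s^-1")  # recovers ~3 s^-1 for the toy data
```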