
    Formation and Dissociation of Phosphorylated Peptide Radical Cations


    Characterisation and durability of contemporary unsized Xuan paper

    In China, Xuan paper has been the paper of choice for artwork support and for conservation for several centuries. However, little is known about its material properties, especially given the many grades of sized and unsized Xuan paper, and there is a lack of information on its degradation. In this research, a selection of contemporary unsized Xuan papers representing diverse raw materials was investigated. Seven out of twelve contemporary unsized Xuan papers were determined to be approximately neutral and to contain > 2% alkaline reserve, indicating good durability. Viscometry was used to determine the degree of polymerisation (DP), as none of the samples gave significant reactions to the phloroglucinol spot test. The average DP of ten contemporary unsized Xuan papers is ~1700, excluding two papers that have presumably been sun-bleached and that exhibit significantly lower DP. Using X-ray fluorescence, we demonstrated that Ca and Si are the dominant elements; interestingly, Ca content is directly correlated with ash content and with alkaline reserve. Accelerated degradation was performed at two sets of environmental conditions, i.e. 90 °C, 30% RH and 60 °C, 70% RH, and the established degradation rates agreed with the Collections Demography model of paper degradation, indicating that degradation of Xuan papers proceeds in the same way as in other types of paper. This research gives fundamental insights into contemporary unsized Xuan papers, which exhibited good stability during accelerated degradation despite the low starting DP of the samples used in this study. Our findings may inform methods of Xuan paper production, selection of Xuan paper for conservation purposes, and preventive conservation of Xuan paper-based artefacts.
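    Degradation rates of the kind reported above are commonly extracted from cellulose chain-scission kinetics via the Ekenstam relation, 1/DP_t - 1/DP_0 = k*t, which also underlies Collections Demography-style modelling. A minimal sketch, using hypothetical DP values rather than the study's actual measurements:

    ```python
    # Sketch of Ekenstam chain-scission kinetics: 1/DP_t - 1/DP_0 = k * t.
    # The DP values below are hypothetical, chosen only to illustrate the
    # calculation; they are not the measurements reported in the study.

    def scission_number(dp0: float, dp_t: float) -> float:
        """Scissions per original monomer unit: 1/DP_t - 1/DP_0."""
        return 1.0 / dp_t - 1.0 / dp0

    def rate_constant(dp0: float, dp_t: float, t_days: float) -> float:
        """Ekenstam rate constant k, in scissions per monomer per day."""
        return scission_number(dp0, dp_t) / t_days

    # Hypothetical accelerated-ageing run: DP drops from 1700 to 1200 in 30 days.
    k = rate_constant(1700.0, 1200.0, 30.0)
    print(f"k = {k:.3e} scissions/monomer/day")
    ```

    Comparing k across the two climate conditions (90 °C, 30% RH vs 60 °C, 70% RH) is what allows Arrhenius-type extrapolation to room-temperature permanence.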

    A class of efficient high-order iterative methods with memory for nonlinear equations and their dynamics

    In this paper we obtain theoretical results about iterative methods with memory for nonlinear equations. The class of algorithms we consider focuses on incorporating memory without increasing the computational cost of the algorithm. This class uses, for the predictor step of each iteration, a quantity that has already been calculated in the previous iteration, typically the slope from the previous corrector step. In this way we introduce no extra computation and, more importantly, avoid new function evaluations, allowing us to obtain high-order iterative methods in a simple way. A specific class of methods of this type is introduced, and we prove that the convergence order is 2^n + 2^(n-2) with n + 1 function evaluations. An exhaustive efficiency study shows the competitiveness of these methods. Finally, we test some specific examples and explore the effect that this predictor may have on the convergence set by means of a dynamical study.

    Funding: Ministerio de Economía y Competitividad de España, Grant MTM2014-52016-C2-2-P; Generalitat Valenciana, Grant Prometeo/2016/089.

    Howk, C. L.; Hueso, J.; Martínez Molada, E.; Teruel-Ferragud, C. (2018). A class of efficient high-order iterative methods with memory for nonlinear equations and their dynamics. Mathematical Methods in the Applied Sciences, 41(17), 7263-7282. https://doi.org/10.1002/mma.4821
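    The memory idea described in the abstract (a predictor that reuses the slope computed in the previous corrector step, so no new function evaluations are spent on it) can be illustrated with a simple two-step scheme. This is a minimal sketch of the general technique, not the authors' exact method:

    ```python
    # Illustrative two-step iteration with memory: the predictor reuses the
    # divided-difference slope stored from the previous corrector step, so each
    # iteration costs only two new function evaluations and no derivatives.

    def solve_with_memory(f, x0, slope0, tol=1e-12, max_iter=50):
        x, slope = x0, slope0
        for _ in range(max_iter):
            fx = f(x)
            if fx == 0:
                return x
            y = x - fx / slope            # predictor: reuses stored slope
            if y == x:                    # step below floating-point resolution
                return x
            fy = f(y)
            slope = (fy - fx) / (y - x)   # slope from this corrector step,
                                          # kept in memory for the next predictor
            x_new = y - fy / slope        # corrector
            if abs(x_new - x) < tol:
                return x_new
            x = x_new
        return x

    # Example: root of f(x) = x**3 - 2; initial slope f'(1) = 3 is supplied once.
    root = solve_with_memory(lambda x: x**3 - 2.0, 1.0, 3.0)
    print(root)  # close to 2 ** (1/3)
    ```

    The stored slope is exactly the kind of "already calculated" quantity the abstract refers to: it is free to reuse, yet it raises the order above that of a one-point method with the same number of evaluations.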

    Euler-Heisenberg lagrangians and asymptotic analysis in 1+1 QED, part 1: Two-loop

    We continue an effort to obtain information on the QED perturbation series at high loop orders, and particularly on the issue of large cancellations inside gauge-invariant classes of graphs, using the example of the l-loop N-photon amplitudes in the limit of large photon numbers and low photon energies. As was previously shown, high-order information on these amplitudes can be obtained from a nonperturbative formula, due to Affleck et al., for the imaginary part of the QED effective lagrangian in a constant field. The procedure uses Borel analysis and leads, under some plausible assumptions, to a number of nontrivial predictions already at the three-loop level. Their direct verification would require a calculation of this 'Euler-Heisenberg lagrangian' at three loops, which seems presently out of reach. Motivated by previous work by Dunne and Krasnansky on Euler-Heisenberg lagrangians in various dimensions, in the present work we initiate a new line of attack on this problem by deriving and proving the analogous predictions in the simpler setting of 1+1 dimensional QED. In this first part of the series, we obtain a generalization of the formula of Affleck et al. to this case, and show that, for both Scalar and Spinor QED, it correctly predicts the leading asymptotic behaviour of the weak-field expansion coefficients of the two-loop Euler-Heisenberg lagrangians.

    Comment: 28 pages, 1 figure, final published version (minor modifications, refs. added)
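    The Borel-type coefficient analysis invoked above works by reading off the scale that controls factorial growth from ratios of successive weak-field expansion coefficients. A toy illustration with synthetic coefficients (not the actual Euler-Heisenberg series):

    ```python
    import math

    # Toy model of large-order coefficient analysis: for a series whose
    # coefficients grow factorially, c_n = n! / r**n, the normalized ratio
    # c_{n+1} / ((n + 1) * c_n) tends to 1/r, so the scale r governing the
    # leading asymptotics can be recovered from the coefficients alone.
    # The scale below is an arbitrary synthetic choice for illustration.

    r_true = math.pi ** 2
    coeffs = [math.factorial(n) / r_true ** n for n in range(30)]

    ratios = [coeffs[n + 1] / ((n + 1) * coeffs[n]) for n in range(29)]
    r_estimate = 1.0 / ratios[-1]
    print(f"recovered scale: {r_estimate:.6f} (true: {r_true:.6f})")
    ```

    In the physical setting, the analogous ratio test applied to the weak-field coefficients is what encodes the nonperturbative imaginary part of the effective lagrangian.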

    Digging into the extremes: a useful approach for the analysis of rare variants with continuous traits?

    The common disease/rare variant hypothesis predicts that rare variants with large effects will have a strong impact on corresponding phenotypes. It is therefore assumed that rare functional variants are enriched in the extremes of the phenotype distribution. In this analysis of the Genetic Analysis Workshop 17 data set, my aim is to detect genes with rare variants that are associated with quantitative traits using two general approaches: analyzing the association with the complete distribution of values by means of linear regression, and using statistical tests based on the tails of the distribution (bottom 10% of values versus top 10%). Three methods are used for this extreme-phenotype approach: Fisher's exact test, the weighted-sum method, and the beta method. Rare variants were collapsed at the gene level. Linear regression including all values provided the highest power to detect rare variants. Of the three methods used in the extreme-phenotype approach, the beta method performed best. Furthermore, the sample size was enriched in this approach by adding additional samples with extreme phenotype values. Doubling the sample size in this way, which corresponds to only 40% of the sample size of the original continuous-trait analysis, yielded comparable or even higher power than linear regression. If samples are selected primarily for sequencing, enriching the analysis with a greater proportion of individuals showing extreme values in the phenotype of interest, rather than sampling from the general population, leads to higher power to detect rare variants than analyzing a population-based sample of equivalent size.
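    The tail-based contrast described above reduces, for one gene, to a 2x2 table of rare-variant carriers in the bottom versus top decile. A self-contained sketch of the Fisher's exact test step, with synthetic counts rather than Genetic Analysis Workshop 17 data:

    ```python
    import math

    # Sketch of the extreme-phenotype contrast: rare variants are collapsed to a
    # per-gene carrier flag, and carrier counts in the bottom 10% vs top 10% of
    # the trait are compared with a two-sided Fisher's exact test.
    # The counts below are synthetic, invented only for illustration.

    def fisher_exact_two_sided(a, b, c, d):
        """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]]."""
        n = a + b + c + d
        row1, col1 = a + b, a + c

        def table_prob(x):
            # Hypergeometric probability that the top-left cell equals x.
            return (math.comb(col1, x) * math.comb(n - col1, row1 - x)
                    / math.comb(n, row1))

        p_obs = table_prob(a)
        lo, hi = max(0, row1 + col1 - n), min(row1, col1)
        probs = [table_prob(x) for x in range(lo, hi + 1)]
        # Two-sided: sum probabilities of all tables at least as extreme.
        return sum(p for p in probs if p <= p_obs * (1 + 1e-9))

    # Hypothetical gene: 12 carriers among 100 bottom-decile samples,
    # 2 carriers among 100 top-decile samples.
    p = fisher_exact_two_sided(12, 88, 2, 98)
    print(f"p = {p:.4f}")  # a small p suggests carrier enrichment in one tail
    ```

    In practice a library routine such as `scipy.stats.fisher_exact` would be used; the explicit hypergeometric sum is shown here only to make the tail logic concrete.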