
    Chiral symmetry restoration and the Z3 sectors of QCD

    Quenched SU(3) lattice gauge theory shows three phase transitions: the chiral, the deconfinement and the Z3 phase transition. Knowing whether or not the chiral and deconfinement phase transitions occur at the same temperature in all Z3 sectors could be crucial for understanding the underlying microscopic dynamics. We use the existence of a gap in the Dirac spectrum as an order parameter for the restoration of chiral symmetry. We find that the spectral gap opens up at the same critical temperature in all Z3 sectors, in contrast to earlier claims in the literature. Comment: 4 pages, 4 figures
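
    As an illustration of the order parameter described above, a minimal sketch (hypothetical data layout, not the authors' analysis code): configurations are binned into Z3 sectors by the phase of the volume-averaged Polyakov loop, and the smallest Dirac eigenvalue, i.e. the spectral gap, is averaged per sector at each temperature.

        import numpy as np

        def z3_sector(polyakov_phase):
            """Assign a configuration to a Z3 sector by the phase of its
            volume-averaged Polyakov loop (sectors centred on 0, +2pi/3, -2pi/3)."""
            centers = np.array([0.0, 2 * np.pi / 3, -2 * np.pi / 3])
            # pick the centre with the smallest angular distance
            d = np.angle(np.exp(1j * (polyakov_phase - centers)))
            return int(np.argmin(np.abs(d)))

        def spectral_gap(eigenvalues):
            """Order parameter: magnitude of the smallest Dirac eigenvalue.
            A nonzero gap signals restored chiral symmetry."""
            return np.min(np.abs(eigenvalues))

        def gap_by_sector(configs):
            """configs: iterable of (temperature, polyakov_phase, dirac_eigenvalues).
            Returns the ensemble-averaged gap per (temperature, sector)."""
            out = {}
            for T, phase, eigs in configs:
                key = (T, z3_sector(phase))
                out.setdefault(key, []).append(spectral_gap(eigs))
            return {k: float(np.mean(v)) for k, v in out.items()}

    Comparing the temperature at which the averaged gap becomes nonzero across the three sectors is then a direct test of the paper's claim.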

    Deconfining Phase Transition as a Matrix Model of Renormalized Polyakov Loops

    We discuss how to extract renormalized Polyakov loops from bare ones in SU(N) lattice gauge theories at nonzero temperature in four spacetime dimensions. Single loops in an irreducible representation are multiplicatively renormalized without mixing, through a renormalization constant which depends upon both representation and temperature. The values of renormalized loops in the four lowest representations of SU(3) were measured numerically on small, coarse lattices. We find that in magnitude, condensates for the sextet and octet loops are approximately the square of the triplet loop. This agrees with a large-N expansion, where factorization implies that the expectation values of loops in adjoint and higher representations are just powers of fundamental and anti-fundamental loops. For three colors, numerically the corrections to the large-N relations are greatest for the sextet loop, ≤ 25%; these represent corrections of ~1/N for N = 3. The values of the renormalized triplet loop can be described by an SU(3) matrix model, with an effective action dominated by the triplet loop. In several ways, the deconfining phase transition for N = 3 appears to be like that in the N = ∞ matrix model of Gross and Witten. Comment: 24 pages, 7 figures; v2, 27 pages, 12 figures, extended discussion for clarity, results unchanged
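
    As a worked statement of the large-N factorization invoked above (a sketch in notation of our own choosing, not taken from the paper, writing \ell_R for the renormalized loop in representation R):

        \langle \ell_{8} \rangle \approx |\langle \ell_{3} \rangle|^{2},
        \qquad
        \langle \ell_{6} \rangle \approx \langle \ell_{3} \rangle^{2},
        \qquad \text{with corrections of } \mathcal{O}(1/N).

    At N = \infty, expectation values of products of loops factorize, \langle \ell_R \, \ell_{R'} \rangle \to \langle \ell_R \rangle \langle \ell_{R'} \rangle, which is why loops in the adjoint and higher representations reduce to powers of the fundamental and anti-fundamental loops; for N = 3 the measured deviation of up to 25% for the sextet is of the expected 1/N size.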

    Modern temporal network theory: A colloquium

    The power of any kind of network approach lies in the ability to simplify a complex system so that one can better understand its function as a whole. Sometimes it is beneficial, however, to include more information than in a simple graph of only nodes and links. Adding information about the times of interactions can make predictions and mechanistic understanding more accurate. The drawback, however, is that not many methods are available, partly because temporal networks is a relatively young field and partly because it is more difficult to develop such methods than for static networks. In this colloquium, we review the methods to analyze and model temporal networks and the processes taking place on them, focusing mainly on the last three years. This includes the spreading of infectious disease, opinions and rumors in social networks; information packets in computer networks; various types of signaling in biology; and more. We also discuss future directions. Comment: Final accepted version
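
    As one concrete example of the methods reviewed, a minimal sketch of susceptible-infected (SI) spreading replayed over a time-ordered contact list, the standard way spreading processes are simulated on temporal networks; the contact data and transmission probability below are hypothetical.

        import random

        def si_spread(contacts, seed, beta=0.5, rng=None):
            """Replay SI spreading over a temporal contact list.

            contacts: iterable of (t, u, v) tuples, sorted here by time t;
            seed: initially infected node; beta: per-contact transmission
            probability. Returns {node: infection_time}. Infection can only
            travel along time-respecting paths, the key constraint that
            distinguishes temporal from static networks.
            """
            rng = rng or random.Random(42)
            infected = {seed: 0.0}
            for t, u, v in sorted(contacts):
                for a, b in ((u, v), (v, u)):
                    if a in infected and b not in infected and infected[a] <= t:
                        if rng.random() < beta:
                            infected[b] = t
            return infected

        # toy contact sequence: (time, node, node)
        contacts = [(1, 'a', 'b'), (2, 'b', 'c'), (3, 'c', 'a'), (4, 'c', 'd')]
        print(si_spread(contacts, seed='a', beta=1.0))

    Note that reversing the timestamps of the same contacts can block the outbreak entirely, which is exactly the kind of effect static-network methods miss.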

    Integrating sequence and array data to create an improved 1000 Genomes Project haplotype reference panel

    A major use of the 1000 Genomes Project (1000GP) data is genotype imputation in genome-wide association studies (GWAS). Here we develop a method to estimate haplotypes from low-coverage sequencing data that can take advantage of single-nucleotide polymorphism (SNP) microarray genotypes on the same samples. First, the SNP array data are phased to build a backbone (or 'scaffold') of haplotypes across each chromosome. We then phase the sequence data 'onto' this haplotype scaffold. This approach can take advantage of relatedness between sequenced and non-sequenced samples to improve accuracy. We use this method to create a new 1000GP haplotype reference set for use by the human genetics community. Using a set of validation genotypes at SNPs and bi-allelic indels, we show that these haplotypes have lower genotype discordance and improved imputation performance into downstream GWAS samples, especially at low-frequency variants.
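
    A minimal sketch of the validation metric mentioned above, genotype discordance between an imputed call set and validation genotypes; the genotype coding and sample data are hypothetical placeholders, not the 1000GP pipeline.

        def genotype_discordance(imputed, validation):
            """Fraction of genotype calls that disagree between an imputed
            call set and validation genotypes.

            imputed, validation: dict mapping (sample, site) -> genotype
            coded as 0/1/2 copies of the alternate allele; missing calls
            (absent keys or None) are skipped.
            """
            shared = [k for k in validation
                      if imputed.get(k) is not None and validation[k] is not None]
            if not shared:
                return float('nan')
            mismatches = sum(imputed[k] != validation[k] for k in shared)
            return mismatches / len(shared)

        imputed    = {('s1', 'rs1'): 0, ('s1', 'rs2'): 1, ('s2', 'rs1'): 2}
        validation = {('s1', 'rs1'): 0, ('s1', 'rs2'): 2, ('s2', 'rs1'): 2}
        print(genotype_discordance(imputed, validation))  # -> 1/3

    In practice the metric is stratified by allele frequency, since the paper's gains are concentrated at low-frequency variants.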

    The use of sequencing batch activated sludge reactors to determine nitrogen balances and optimum periods of pre-aeration denitrification

    Four identical laboratory-scale sequencing batch activated sludge plants were used to carry out comparative performance evaluations of nitrification and denitrification, and to obtain an accurate nitrogen balance for the system. The plants were first run under identical operational conditions to ensure that results were statistically valid: ten performance parameters were compared and no significant differences at 95% confidence limits were found. A nitrogen mass balance, considering dissolved nitrogen species, waste biomass and denitrification losses during settlement, accounted for 87.4% of known inputs. On the introduction of denitrification periods, nitrate and total nitrogen removal increased with increasing anoxic period, up to maxima of 74% and 71% respectively. The inclusion of anoxic periods reduced total organic carbon (TOC) removal by as much as 19%; these losses are a consequence of maximizing nitrate removal when the cycle duration is fixed, and stem from differences in efficiency between aerobic and denitrifying activity. Good linear relationships were found between the ratio of anoxic to aerobic retention times and % nitrogen removal (r = 0.93), effluent nitrate concentration (r = -0.94) and TOC removed (r = -0.99), provided that the anoxic period was taken to be the period of nitrate removal. These relationships may provide a guide for designing sequencing batch nitrification/denitrification systems. Some enhancement of nitrification was also evident at short denitrification phases.
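
    A minimal sketch of the two calculations reported above: closure of a nitrogen mass balance, and a least-squares fit of % nitrogen removal against the anoxic-to-aerobic retention-time ratio. All numbers below are hypothetical placeholders chosen to match the quoted 87.4% closure, not the study's data.

        import numpy as np

        def nitrogen_balance(influent_n, dissolved_n, biomass_n, denit_n):
            """Fraction of influent nitrogen accounted for by the measured
            sinks: dissolved species, wasted biomass, denitrification losses."""
            return (dissolved_n + biomass_n + denit_n) / influent_n

        # hypothetical per-cycle masses (g N); the study reported 87.4% closure
        print(nitrogen_balance(100.0, 40.0, 32.0, 15.4))  # -> 0.874

        # linear fit of % N removal vs anoxic:aerobic retention-time ratio
        ratio   = np.array([0.0, 0.2, 0.4, 0.6, 0.8])   # hypothetical
        removal = np.array([30., 42., 52., 63., 71.])   # hypothetical %
        slope, intercept = np.polyfit(ratio, removal, 1)
        r = np.corrcoef(ratio, removal)[0, 1]
        print(f"removal = {slope:.1f}*ratio + {intercept:.1f}, r = {r:.2f}")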

    The anaerobic treatment of a ligno-cellulosic substrate offering little natural pH buffering capacity

    The stability and operational performance of single-stage digestion, with and without liquor recycle, and of two-stage digestion were assessed using a mixture of paper and wood as the digestion substrate. Attempts to maintain stable digestion in both single-stage reactors were unsuccessful due to the inherently low natural buffering capacity exhibited; this resulted in rapid souring of the reactors through unbuffered volatile fatty acid (VFA) accumulation. The use of lime to control pH was unsatisfactory because it interfered with the carbonate/bicarbonate equilibrium, resulting in wide oscillations in the control parameter. The two-stage system overcame the pH stability problems, allowing stable operation for a period of 200 days without any requirement for pH control; this was attributed to the rapid flushing of VFA from the first-stage reactor into the second stage, where efficient conversion to methane was established. Reactor performance was judged to be satisfactory, with the breakdown of 53% of influent volatile solids. It was concluded that the two-stage reactor configuration offers potential for the treatment of cellulosic wastes with a carbon-to-nitrogen ratio that is suboptimal for conventional digestion.
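
    The souring mechanism described above can be made concrete with standard carbonate chemistry: VFA accumulation titrates away bicarbonate alkalinity, and the pH set by the bicarbonate/dissolved-CO2 couple falls. A minimal sketch assuming the textbook first dissociation constant of carbonic acid (pKa1 of about 6.35 at 25 °C); all concentrations are hypothetical, not measurements from the study.

        import math

        def carbonate_ph(bicarbonate_m, co2_m, pka1=6.35):
            """Henderson-Hasselbalch estimate of pH from the bicarbonate /
            dissolved-CO2 couple that buffers anaerobic digesters."""
            return pka1 + math.log10(bicarbonate_m / co2_m)

        # VFA accumulation titrates bicarbonate: HCO3- + HA -> A- + H2CO3
        alk = 0.05   # mol/L bicarbonate alkalinity (hypothetical)
        co2 = 0.01   # mol/L dissolved CO2 (hypothetical)
        for vfa in (0.0, 0.02, 0.04):
            ph = carbonate_ph(alk - vfa, co2 + vfa)
            print(f"VFA {vfa:.2f} M -> pH {ph:.2f}")

    With these illustrative numbers the pH falls from about 7.0 to 5.7 as VFA accumulates, which is the runaway the two-stage configuration avoids by flushing VFA into the methanogenic stage.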

    Microbial treatment of low-level radioactive waste
