
    Logarithmic correction to the Bekenstein-Hawking entropy of the BTZ black hole

    We derive an exact expression for the partition function of the Euclidean BTZ black hole. Using this, we show that for a black hole with large horizon area, the correction to the Bekenstein-Hawking entropy is -3/2 log(Area), in agreement with that for the Schwarzschild black hole obtained in the canonical gravity formalism and also in a Lorentzian computation of the BTZ black hole entropy. We find that the correct expression for the logarithmic correction in the context of the BTZ black hole comes from the modular invariance associated with the toral boundary of the black hole. Comment: LaTeX, 10 pages, typos corrected, clarifications added.
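    A schematic form of the corrected entropy stated above, with the leading Bekenstein-Hawking term written in standard notation (additive constants, convention-dependent factors inside the logarithm, and terms subleading to the logarithm are omitted; this only restates the abstract, it is not additional content from the paper):

        S \;=\; \frac{A_{\rm hor}}{4G} \;-\; \frac{3}{2}\,\log A_{\rm hor} \;+\; \cdots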

    Quantum phase transitions in bilayer SU(N) anti-ferromagnets

    We present a detailed study of the destruction of SU(N) magnetic order in square-lattice bilayer anti-ferromagnets using unbiased quantum Monte Carlo simulations and field-theoretic techniques. We study phase transitions from an SU(N) Néel state into two distinct quantum disordered "valence-bond" phases: a valence-bond liquid (VBL) with no broken symmetries and a lattice-symmetry-breaking valence-bond solid (VBS) state. For finite inter-layer coupling, the cancellation of Berry phases between the layers has dramatic consequences for the two phase transitions: the Néel-VBS transition is first order for all N ≥ 5 accessible in our model, whereas the Néel-VBL transition is continuous for N = 2 and first order for N ≥ 4; for N = 3 the Néel-VBL transition shows no signs of first-order behavior.

    Artificial Neural Network-based error compensation procedure for low-cost encoders

    An Artificial Neural Network-based error compensation method is proposed for improving the accuracy of resolver-based 16-bit encoders by compensating for their respective systematic error profiles. The error compensation procedure for a particular encoder involves obtaining its error profile by calibrating it on a precision rotary table, training the neural network on a part of this data, and then determining the corrected encoder angle by subtracting the ANN-predicted error from the measured value of the encoder angle. Since it is not guaranteed that all the resolvers will have identical error profiles, because of inherent micro-scale differences in their construction, the ANN is trained on one error profile at a time and the corresponding weight file is then used only for compensating the systematic error of that particular encoder. The systematic nature of the error profile of each encoder has also been validated by repeated calibration over a period of time; the error profiles of a particular encoder recorded at different epochs show nearly reproducible behavior. The ANN-based error compensation procedure has been implemented for four encoders by training the ANN with their respective error profiles, and the results indicate that the accuracy of the encoders can be improved by nearly an order of magnitude, from quoted values of ~6 arc-min to ~0.65 arc-min, when the corresponding ANN-generated weight files are used to determine the corrected encoder angle. Comment: 16 pages, 4 figures. Accepted for publication in Measurement Science and Technology (MST).
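    A minimal Python sketch of the compensation procedure described above, using scikit-learn. The network size, file name, column layout, units, and train fraction are illustrative assumptions, not the authors' implementation:

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Calibration data for one encoder (assumed file/format):
        # column 0 = measured angle (deg), column 1 = systematic error (arc-min)
        cal = np.loadtxt("encoder_01_calibration.csv", delimiter=",")
        angle_meas, error = cal[:, 0], cal[:, 1]

        # Train on part of the calibration data; the rest can be held out for validation.
        n_train = int(0.7 * len(angle_meas))
        ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
        ann.fit(angle_meas[:n_train].reshape(-1, 1), error[:n_train])

        def corrected_angle(measured_deg):
            # Subtract the ANN-predicted systematic error (arc-min -> deg)
            # from the measured encoder angle.
            predicted_error_arcmin = ann.predict(np.array([[measured_deg]]))[0]
            return measured_deg - predicted_error_arcmin / 60.0

        # One trained model (weight file) is kept per encoder, since each encoder
        # has its own systematic error profile.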

    UL54 foscarnet mutation in a hematopoietic stem cell transplant recipient with cytomegalovirus disease

    We present a case of foscarnet (FOS) resistance arising from a UL54 mutation after a short duration of FOS exposure, which, to our knowledge, has not been previously described in a stem cell transplant recipient. We discuss the use of FOS to treat other viral infections and the implications this may have for the development of resistance mutations and the treatment of cytomegalovirus disease. Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/106875/1/tid12200.pdf

    Quantum criticality of U(1) gauge theories with fermionic and bosonic matter in two spatial dimensions

    We consider relativistic U(1) gauge theories in 2+1 dimensions, with N_b species of complex bosons and N_f species of Dirac fermions, at finite temperature. The quantum phase transition between the Higgs and Coulomb phases is described by a conformal field theory (CFT). At large N_b and N_f, but for arbitrary values of the ratio N_b/N_f, we present computations of various critical exponents and universal amplitudes for these CFTs. We make contact with the different spin liquids, charge liquids, and deconfined critical points of quantum magnets that these field theories describe. We compute physical observables that may be measured in experiments or numerical simulations of insulating and doped quantum magnets. Comment: 30 pages, 8 figures.
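    As orientation to the class of theories described above, a schematic Lagrangian for a U(1) gauge field A_mu coupled to N_b complex scalars z_a and N_f Dirac fermions psi_j (Euclidean signature; the couplings s, u, e and the precise conventions are illustrative assumptions, and the paper's action should be consulted for the exact form):

        \mathcal{L} \;=\; \sum_{a=1}^{N_b} \Big[\, |(\partial_\mu - i A_\mu) z_a|^2 + s\,|z_a|^2 \,\Big]
        \;+\; u \Big( \sum_{a=1}^{N_b} |z_a|^2 \Big)^{2}
        \;+\; \sum_{j=1}^{N_f} \bar{\psi}_j \,\gamma^{\mu} (\partial_\mu - i A_\mu)\, \psi_j
        \;+\; \frac{1}{2e^2}\,(\epsilon_{\mu\nu\lambda}\partial_\nu A_\lambda)^2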

    Tadpole Method and Supersymmetric O(N) Sigma Model

    We examine the phase structures of the supersymmetric O(N) sigma model in two and three dimensions using the tadpole method. With this simple method the calculation is greatly simplified and the characteristics of the theory become clear. We also examine the problem of the fictitious negative-energy state. Comment: Plain LaTeX (12 pages), no figures.

    Optimizing Information Freshness in Wireless Networks under General Interference Constraints

    Age of information (AoI) is a recently proposed metric for measuring information freshness: AoI measures the time elapsed since the last received update was generated. We consider the problem of minimizing average and peak AoI in a wireless network consisting of a set of source-destination links, under general interference constraints. When fresh information is always available for transmission, we show that a stationary scheduling policy is peak-age optimal. We also prove that this policy achieves an average age within a factor of two of the optimal average age. In the case where fresh information is not always available, and the packet/information generation rate has to be controlled along with the scheduling of links for transmission, we prove an important separation principle: the optimal scheduling policy can be designed assuming fresh information, and, independently, the packet generation rate control can be done by ignoring interference. Peak and average AoI for a discrete-time G/Ber/1 queue are analyzed for the first time, which may be of independent interest.
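    A minimal Python sketch of how the age-of-information metric defined above evolves on a single link. The generation/delivery timestamps are illustrative, and peak AoI is taken here as the age immediately before each delivery, consistent with the definition in the abstract; this is not the paper's scheduling algorithm:

        # Age of information on one source-destination link: the age at time t is
        # t minus the generation time of the freshest update received by time t.

        # (generation_time, delivery_time) of received updates, sorted by delivery;
        # values are illustrative.
        updates = [(0.0, 1.0), (2.0, 2.5), (3.0, 5.0), (6.0, 6.5)]

        def age(t, updates):
            # Age of information at time t (grows from t = 0 if nothing received yet).
            delivered = [g for g, d in updates if d <= t]
            return t - max(delivered) if delivered else t

        # Peak AoI: the age just before each delivery completes.
        peaks = []
        last_gen = None
        for gen, dly in updates:
            peaks.append(dly - last_gen if last_gen is not None else dly)
            last_gen = gen

        print("age at t=4.0:", age(4.0, updates))  # 4.0 - 2.0 = 2.0
        print("peak ages:", peaks)                 # [1.0, 2.5, 3.0, 3.5]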