292 research outputs found

    Verschraenkung versus Stosszahlansatz: Disappearance of the Thermodynamic Arrow in a High-Correlation Environment

    The crucial role of ambient correlations in determining thermodynamic behavior is established. A class of entangled states of two macroscopic systems is constructed such that each component is in a state of thermal equilibrium at a given temperature and, when the two are allowed to interact, heat can flow from the colder to the hotter system. A dilute gas model exhibiting this behavior is presented. This reversal of the thermodynamic arrow is a consequence of the entanglement between the two systems, a condition that is the opposite of molecular chaos and is shown to be unlikely in a low-entropy environment. By contrast, the second law is established by proving Clausius' inequality in a low-entropy environment. These general results strongly support the expectation, first expressed by Boltzmann and subsequently elaborated by others, that the second law is an emergent phenomenon requiring a low-entropy cosmological environment, one that can effectively function as an ideal information sink.
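
    For reference, the inequality at stake is Clausius' inequality, quoted here in its textbook form (the paper proves a version of it for a system coupled to a low-entropy environment; the conventions below are the standard ones, not necessarily the paper's):

        \oint \frac{\delta Q}{T} \;\le\; 0,
        \qquad\text{or, for a general process of a closed system,}\qquad
        \Delta S \;\ge\; \int \frac{\delta Q}{T},

    with equality only for reversible processes; spontaneous heat flow from the colder to the hotter system violates this bound.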

    Is Integer Arithmetic Enough for Deep Learning Training?

    The ever-increasing computational complexity of deep learning models makes their training and deployment difficult on various cloud and edge platforms. Replacing floating-point arithmetic with low-bit integer arithmetic is a promising approach to save energy, memory footprint, and latency of deep learning models. As such, quantization has attracted the attention of researchers in recent years. However, using integer numbers to form a fully functional integer training pipeline, including forward pass, back-propagation, and stochastic gradient descent, has not been studied in detail. Our empirical and mathematical results reveal that integer arithmetic is enough to train deep learning models. Unlike recent proposals, instead of quantization, we directly switch the number representation of computations. Our novel training method forms a fully integer training pipeline that does not change the trajectory of the loss and accuracy compared to floating-point, nor does it need any special hyper-parameter tuning, distribution adjustment, or gradient clipping. Our experimental results show that the proposed method is effective in a wide variety of tasks such as classification (including vision transformers), object detection, and semantic segmentation.
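
    As a rough illustration of the idea, here is a minimal integer-only (fixed-point) training loop for a one-parameter linear model. This is a generic sketch, not the authors' pipeline: the scale S, the bit handling, the learning rate, and the toy data are all invented for the example.

# Generic fixed-point (integer-only) training sketch; not the paper's method.
import numpy as np

S = 1 << 8                                    # fixed-point scale: x is stored as round(x * S)

def to_fix(x):                                # float -> fixed-point integer
    return np.round(np.asarray(x) * S).astype(np.int64)

def fix_mul(a, b):                            # multiply two fixed-point values, rescale back
    return (a * b) // S

rng = np.random.default_rng(0)
x = to_fix(rng.uniform(-1, 1, size=100))      # integer inputs
y = fix_mul(to_fix(3.0), x)                   # integer targets for true weight w = 3

w, lr = to_fix(0.0), to_fix(0.05)             # weight and learning rate, both integers
for _ in range(200):
    y_hat = fix_mul(w, x)                     # integer forward pass
    err = y_hat - y
    grad = fix_mul(to_fix(2.0), fix_mul(err, x)).sum() // len(x)  # integer dL/dw
    w -= fix_mul(lr, grad)                    # integer SGD update

print(w / S)                                  # ~3.0, recovered with integer arithmetic only

    The forward pass, gradient, and update above never leave integer arithmetic; the only floating-point operation is the final print for inspection.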

    Entropic uncertainty relation for power-law wave packets

    For a power-law quantum wave packet in configuration space, the variance of the position observable may be divergent. Accordingly, the information-entropic formulation of the uncertainty principle becomes more appropriate than the Heisenberg-type formulation, since it involves only finite quantities. It is found that the total amount of entropic uncertainty converges to its lower bound in the limit of a large value of the exponent.
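
    For context, the information-entropic formulation referred to here is, in one standard convention (with hbar = 1), the position-momentum entropic uncertainty relation

        S_x + S_p \;\ge\; 1 + \ln\pi,
        \qquad
        S_x = -\int |\psi(x)|^2 \ln|\psi(x)|^2\,dx,
        \quad
        S_p = -\int |\tilde\psi(p)|^2 \ln|\tilde\psi(p)|^2\,dp,

    whose differential entropies remain finite for power-law packets even when the variance of x diverges; the paper's exact normalization may differ.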

    QED Corrections to Planck's Radiation Law and Photon Thermodynamics

    Leading corrections to Planck's formula and to photon thermodynamics arising from the pair-mediated photon-photon interaction are calculated. This interaction is attractive and causes an increase in the occupation number of all modes. Possible consequences, including the role of the cosmic photon gas in structure formation, are considered.
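
    For orientation, the uncorrected quantities being perturbed are the standard Planck occupation number and spectral energy density (textbook forms, not the corrected expressions derived in the paper):

        \bar n(\omega) = \frac{1}{e^{\hbar\omega/k_B T} - 1},
        \qquad
        u(\omega) = \frac{\hbar\,\omega^{3}}{\pi^{2} c^{3}}\,\frac{1}{e^{\hbar\omega/k_B T} - 1}.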

    Integer Fine-tuning of Transformer-based Models

    Transformer-based models are used to achieve state-of-the-art performance on various deep learning tasks. Since transformer-based models have large numbers of parameters, fine-tuning them on downstream tasks is computationally intensive and energy-hungry. Automatic mixed-precision FP32/FP16 fine-tuning of such models has previously been used to lower the compute resource requirements. However, with recent advances in low-bit integer back-propagation, it is possible to further reduce the computation and memory footprint. In this work, we explore a novel integer training method that uses integer arithmetic for both forward propagation and gradient computation of linear, convolutional, layer-norm, and embedding layers in transformer-based models. Furthermore, we study the effect of various integer bit-widths to find the minimum required bit-width for integer fine-tuning of transformer-based models. We fine-tune BERT and ViT models on popular downstream tasks using integer layers. We show that 16-bit integer models match the floating-point baseline performance. Reducing the bit-width to 10 yields an average score drop of 0.5 points, and reducing it further to 8 yields an average score drop of 1.7 points.
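
    The kind of primitive such integer fine-tuning relies on can be sketched as a bit-width-limited integer matrix multiply. The snippet below uses generic symmetric per-tensor quantization; it is an illustrative sketch under those assumptions, not the paper's scheme, and the helper names are invented.

# Generic b-bit symmetric quantization and integer matmul sketch; not the paper's method.
import numpy as np

def quantize(x, bits):
    """Map a float tensor to signed b-bit integers with a per-tensor scale."""
    qmax = 2 ** (bits - 1) - 1                     # e.g. 127 for 8 bits
    scale = np.abs(x).max() / qmax + 1e-12         # avoid division by zero
    return np.clip(np.round(x / scale), -qmax, qmax).astype(np.int32), scale

def int_linear(x, w, bits=8):
    """Linear-layer product with b-bit operands and wide integer accumulation."""
    xq, sx = quantize(x, bits)
    wq, sw = quantize(w, bits)
    acc = xq.astype(np.int64) @ wq.astype(np.int64).T   # integer accumulate
    return acc * (sx * sw)                               # rescale the result

# Gradients reuse the same primitive: for y = x @ w.T, grad_x = g @ w and
# grad_w = g.T @ x can both be evaluated with int_linear at a chosen bit-width.
rng = np.random.default_rng(0)
x, w = rng.standard_normal((4, 16)), rng.standard_normal((8, 16))
print(np.abs(int_linear(x, w, bits=8) - x @ w.T).max())  # small quantization error

    Sweeping the bits argument over such layers is what a bit-width study like the one described above amounts to in practice.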

    Stable ultrahigh-density magneto-optical recordings using introduced linear defects

    The stability of data bits in magnetic recording media at ultrahigh densities is compromised by thermal `flips' -- magnetic spin reversals -- of nano-sized spin domains, which erase the stored information. Media that are magnetized perpendicular to the plane of the film, such as ultrathin cobalt films or multilayered structures, are more stable against thermal self-erasure than conventional memory devices. In this context, magneto-optical memories seem particularly promising for ultrahigh-density recording on portable disks, and bit densities of ~100 Gbit inch^{-2} have been demonstrated using recent advances in bit writing and reading techniques. But the roughness and mobility of the magnetic domain walls prevent closer packing of the magnetic bits, and therefore present a challenge to reaching even higher bit densities. Here we report that the strain imposed by a linear defect in a magnetic thin film can smooth rough domain walls over regions hundreds of micrometers in size, and halt their motion. A scaling analysis of this process, based on the generic physics of disorder-controlled elastic lines, points to a simple way by which magnetic media might be prepared that can store data at densities in excess of 1 Tbit inch^{-2}.
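
    For scale, a back-of-envelope unit conversion of the quoted densities (simple arithmetic, not a figure from the paper):

        1\ \mathrm{Tbit\,inch^{-2}}:\quad \frac{(2.54\times 10^{7}\,\mathrm{nm})^{2}}{10^{12}\ \mathrm{bits}} \approx 645\ \mathrm{nm^{2}\ per\ bit} \approx (25\ \mathrm{nm})^{2},

    compared with roughly 6,450 nm^2 per bit (about an 80 nm square) at 100 Gbit inch^{-2}.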

    Validity of the second law in nonextensive quantum thermodynamics

    The second law of thermodynamics in nonextensive statistical mechanics is discussed in the quantum regime. Making use of the convexity property of the generalized relative entropy associated with the Tsallis entropy indexed by q, Clausius' inequality is shown to hold for q between zero and two. This restriction on the range of the entropic index q is purely quantum mechanical; in classical theory there is no upper bound on q for the validity of the second law.
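
    For reference, the entropy underlying the nonextensive framework is the quantum Tsallis entropy (standard definition; the generalized relative entropy used in the proof is built on it):

        S_q(\rho) = \frac{1 - \mathrm{Tr}\,\rho^{q}}{q - 1}, \qquad q > 0,

    which reduces to the von Neumann entropy -\mathrm{Tr}\,\rho\ln\rho as q \to 1; the result above restricts the quantum validity of Clausius' inequality to the stated range of q between zero and two.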

    Dynamical confinement in bosonized QCD2

    In the bosonized version of two-dimensional theories, non-trivial boundary conditions (topology) play a crucial role. They are inevitable if one wants to describe non-singlet states. In abelian bosonization, color is the charge of a topological current expressed in terms of a non-linear meson field. We show that confinement appears as the dynamical collapse of the topology associated with its non-trivial boundary conditions.
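
    To make the "topological current" statement concrete: in standard abelian bosonization (one common normalization) the fermion current is mapped to

        j^{\mu} = \bar\psi\gamma^{\mu}\psi \;\longrightarrow\; \frac{1}{\sqrt{\pi}}\,\epsilon^{\mu\nu}\partial_{\nu}\phi,
        \qquad
        Q = \int dx\, j^{0} = \frac{1}{\sqrt{\pi}}\bigl[\phi(+\infty) - \phi(-\infty)\bigr],

    so the conserved charge is fixed entirely by the boundary values of the boson field, which is why non-trivial boundary conditions are needed to describe non-singlet states.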

    Coherent Schwinger Interaction from Darboux Transformation

    An exactly solvable scalar-tensor potential of the four-component Dirac equation is obtained by the Darboux transformation method. The constructed potential is interpreted in terms of nucleon-nucleon and Schwinger interactions of neutral particles with lattice sites during their channeling. A family of Hamiltonians of Schwinger type is obtained by means of the Darboux transformation chain. The analytic structure of the Lyapunov function of the periodic continuation for each of the Hamiltonians of the family is considered.
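
    For orientation, the Darboux transformation invoked here is easiest to state in its scalar (Schroedinger) form; the paper applies the analogous construction to the four-component Dirac equation. Given a seed solution u with -u'' + Vu = \epsilon u, every solution \psi of -\psi'' + V\psi = E\psi is mapped to

        \tilde\psi = \psi' - \frac{u'}{u}\,\psi,
        \qquad
        -\tilde\psi'' + \tilde V\,\tilde\psi = E\,\tilde\psi,
        \qquad
        \tilde V = V - 2\,\frac{d^{2}}{dx^{2}}\ln u,

    and iterating this map is the Darboux transformation chain that generates the family of exactly solvable Hamiltonians.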