
    The Approach to the Thermodynamic Limit in Lattice QCD at $\mu \neq 0$

    The expectation value of the complex phase factor of the fermion determinant is computed to leading order in the $p$-expansion of the chiral Lagrangian. The computation is valid for $\mu < m_\pi/2$ and determines the dependence of the sign problem on the volume and on the geometric shape of the volume. In the thermodynamic limit, with $L_i \to \infty$ at fixed temperature $1/L_0$, the average phase factor vanishes. In the low-temperature limit, where $L_i/L_0$ is held fixed as $L_i$ becomes large, the average phase factor approaches one. The results for a finite volume compare well with lattice results obtained by Allton et al. After taking appropriate limits, we reproduce previously derived results for the $\epsilon$-regime and for 1-dimensional QCD. The distribution of the phase itself is also computed.
    Comment: 9 pages, 5 figures

    On the Efficient Computation of Large Scale Singular Sums with Applications to Long-Range Forces in Crystal Lattices

    We develop a new expansion for representing singular sums in terms of integrals and vice versa. This method provides a powerful tool for the efficient computation of large singular sums that appear in long-range interacting systems in condensed matter and quantum physics. It also offers a generalised trapezoidal rule for the precise computation of singular integrals. In both cases, the difference between sum and integral is approximated by derivatives of the non-singular factor of the summand function, where the coefficients in turn depend on the singularity. We show that for a physically meaningful set of functions, the error decays exponentially with the expansion order. For a fixed expansion order, the error decays algebraically with the grid size if the method is used for quadrature, or with the characteristic length scale of the summand function if the sum over a fixed grid is approximated by an integral. In the absence of a singularity, the method reduces to the Euler–Maclaurin summation formula. We demonstrate the numerical performance of our new expansion by applying it to the computation of the full nonlinear long-range forces inside a domain wall in a macroscopic one-dimensional crystal with $2 \times 10^{10}$ particles. The code of our implementation in Mathematica is provided online. For particles that interact via the Coulomb repulsion, we demonstrate that finite-size effects remain relevant even in the thermodynamic limit of macroscopic particle numbers. Our results show that widely used continuum limits in condensed matter physics are not applicable for quantitative predictions in this case.
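    Since the abstract notes that the expansion reduces to the Euler–Maclaurin summation formula in the absence of a singularity, the Python sketch below illustrates only that classical special case, not the paper's generalised expansion: the sum of a smooth function over integer grid points is approximated by its integral plus endpoint correction terms built from odd derivatives. The test function, interval, and truncation order are illustrative assumptions.

```python
# Classical Euler–Maclaurin summation (the non-singular special case mentioned above):
#   sum_{k=a}^{b} f(k) ≈ ∫_a^b f dx + (f(a)+f(b))/2
#                         + sum_j B_{2j}/(2j)! * (f^{(2j-1)}(b) - f^{(2j-1)}(a))
# The test function f(x) = exp(-x/10) is an arbitrary smooth choice with easy derivatives.
import math

def f(x):
    return math.exp(-x / 10.0)

def f_deriv(x, n):
    # n-th derivative of exp(-x/10) is (-1/10)^n * exp(-x/10).
    return (-0.1) ** n * math.exp(-x / 10.0)

def euler_maclaurin(a, b, order=3):
    """Approximate sum_{k=a}^{b} f(k) by the integral plus correction terms."""
    bernoulli = {1: 1.0 / 6.0, 2: -1.0 / 30.0, 3: 1.0 / 42.0}  # B_2, B_4, B_6
    integral = 10.0 * (math.exp(-a / 10.0) - math.exp(-b / 10.0))  # exact integral of f on [a, b]
    approx = integral + 0.5 * (f(a) + f(b))
    for j in range(1, order + 1):
        approx += bernoulli[j] / math.factorial(2 * j) * (f_deriv(b, 2 * j - 1) - f_deriv(a, 2 * j - 1))
    return approx

a, b = 0, 200
direct = sum(f(k) for k in range(a, b + 1))
print(f"direct sum      : {direct:.12f}")
print(f"Euler-Maclaurin : {euler_maclaurin(a, b):.12f}")
```

    For this smooth summand, the two printed values agree to many digits already at a low correction order, which is the behaviour the exponential error decay in the abstract generalises to singular summands.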

    A Model for Calculation of a Productivity Bonus

    A productivity bonus is a payment that some workplaces grant their workers to reward the accomplishment of certain productive goals or objectives. It is up to each organization to establish a method for calculating this payment and to define the variables on which the calculation is based. The objective of this document is to propose a mathematical model that can be used to compute this quantity. A mathematical analysis is carried out on the basic structure of a given fixed set of enterprise process performance metrics, or key performance indicators (KPIs). The model takes as inputs the goals and control limits (parameter values of the metrics that are commonly found in many organizations) (Duke, 2013), the degree to which those values were achieved, and a free parameter. A typical real-life example is presented as a case study in which this calculation scheme is applied; the productivity bonus was successfully calculated, suggesting that the model can be a useful tool for carrying out this task.
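    The abstract does not spell out the model itself, so the Python sketch below is only a hypothetical illustration of the kind of KPI-based calculation described: each KPI's achievement is normalised between its control limit and its goal, the normalised scores are combined in a weighted average, a free parameter shapes the result, and the outcome scales a maximum bonus. All names, weights, and the formula are assumptions, not the paper's model.

```python
# Hypothetical KPI-based bonus calculation; the structure (control-limit-to-goal
# normalisation, weighted average, free exponent) is an assumption, not the paper's model.
from dataclasses import dataclass

@dataclass
class KPI:
    name: str
    control_limit: float  # minimum acceptable value of the metric
    goal: float           # target value of the metric
    achieved: float       # measured result for the period
    weight: float = 1.0

    def score(self) -> float:
        """Normalise achievement to [0, 1] between control limit and goal."""
        raw = (self.achieved - self.control_limit) / (self.goal - self.control_limit)
        return min(max(raw, 0.0), 1.0)

def productivity_bonus(kpis, max_bonus, alpha=1.0):
    """Weighted average KPI score raised to the free parameter alpha, scaled by max_bonus."""
    total_weight = sum(k.weight for k in kpis)
    avg_score = sum(k.weight * k.score() for k in kpis) / total_weight
    return max_bonus * avg_score ** alpha

kpis = [
    KPI("units_per_hour", control_limit=80, goal=100, achieved=95, weight=2.0),
    KPI("first_pass_yield", control_limit=0.90, goal=0.99, achieved=0.97),
]
print(f"bonus: {productivity_bonus(kpis, max_bonus=500.0, alpha=1.5):.2f}")
```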

    Efficient Optimization of Loops and Limits with Randomized Telescoping Sums

    We consider optimization problems in which the objective requires an inner loop with many steps or is the limit of a sequence of increasingly costly approximations. Meta-learning, training recurrent neural networks, and optimization of the solutions to differential equations are all examples of optimization problems with this character. In such problems, it can be expensive to compute the objective function value and its gradient, but truncating the loop or using less accurate approximations can induce biases that damage the overall solution. We propose randomized telescope (RT) gradient estimators, which represent the objective as the sum of a telescoping series and sample linear combinations of terms to provide cheap unbiased gradient estimates. We identify conditions under which RT estimators achieve optimization convergence rates independent of the length of the loop or the required accuracy of the approximation. We also derive a method for tuning RT estimators online to maximize a lower bound on the expected decrease in loss per unit of computation. We evaluate our adaptive RT estimators on a range of applications including meta-optimization of learning rates, variational inference of ODE parameters, and training an LSTM to model long sequences.
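    As a rough illustration of the randomized-telescope idea (a minimal sketch, not the authors' implementation or the gradient-estimation setting of the paper), the Python snippet below estimates the limit of a sequence of approximations by sampling a single truncation level and reweighting the corresponding telescoping difference so that the estimate is unbiased in expectation. The sequence f(n), the geometric sampling distribution, and the sample count are illustrative assumptions.

```python
# Single-sample randomized telescope estimator for f_inf = lim_n f(n).
# Unbiased because E[(f(N) - f(N-1)) / P(N = n)] = sum_n (f(n) - f(n-1)) = f_inf - f(0).
import random

def f(n):
    # Illustrative "increasingly accurate approximation": n-th partial sum of sum_k 0.5^k,
    # which converges to 2.0 as n -> infinity.
    return sum(0.5 ** k for k in range(n + 1))

def rt_estimate(p=0.4):
    """Draw a level N ~ Geometric(p) on {1, 2, ...} and return the reweighted
    telescoping difference; cheaper terms are sampled far more often than deep ones."""
    n = 1
    while random.random() > p:
        n += 1
    prob_n = (1 - p) ** (n - 1) * p   # P(N = n)
    return f(0) + (f(n) - f(n - 1)) / prob_n

estimates = [rt_estimate() for _ in range(50000)]
print("RT estimator mean:", sum(estimates) / len(estimates))  # approaches the true limit
print("true limit       :", 2.0)
```

    The same reweighting trick applied to differences of truncated gradients is what yields the cheap unbiased gradient estimates described in the abstract.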

    Physical Limits of Heat-Bath Algorithmic Cooling

    Simultaneous near-certain preparation of qubits (quantum bits) in their ground states is a key hurdle in quantum computing proposals as varied as liquid-state NMR and ion traps. “Closed-system” cooling mechanisms are of limited applicability due to the need for a continual supply of ancillas for fault tolerance and to the high initial temperatures of some systems. “Open-system” mechanisms are therefore required. We describe a new, efficient initialization procedure for such open systems. With this procedure, an $n$-qubit device that is originally maximally mixed, but is in contact with a heat bath of bias $\varepsilon \gg 2^{-n}$, can be almost perfectly initialized. This performance is optimal due to a newly discovered threshold effect: for bias $\varepsilon \ll 2^{-n}$, no cooling procedure can, even in principle (running indefinitely without any decoherence), significantly initialize even a single qubit.
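    To make the threshold concrete (a back-of-the-envelope illustration, not part of the paper), the snippet below compares an assumed heat-bath bias against $2^{-n}$ for a few register sizes; the bias value is arbitrary.

```python
# Compare an assumed heat-bath bias against the 2^{-n} threshold from the abstract.
bath_bias = 1e-5  # arbitrary illustrative value

for n in (8, 16, 24, 32):
    threshold = 2.0 ** (-n)
    if bath_bias > threshold:
        regime = "bias above 2^-n: near-perfect initialization is possible"
    else:
        regime = "bias below 2^-n: no procedure can significantly cool even one qubit"
    print(f"n = {n:2d}   2^-n = {threshold:.3e}   -> {regime}")
```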

    Acceleration of ListNet for ranking using reconfigurable architecture

    Document ranking is used to order query results by relevance with ranking models. ListNet is a well-known ranking approach for constructing and training learning-to-rank models. Compared with traditional learning approaches, ListNet delivers better accuracy, but is computationally too expensive for learning models from large data sets due to the large number of permutations and documents involved in computing the gradients. Currently, the long training time limits the practicality of ListNet in ranking applications such as breaking-news search and stock prediction, and this situation is getting worse as data-set sizes increase. In order to tackle the challenge of long training time, this thesis optimises the ListNet algorithm and designs hardware accelerators for learning ListNet models using Field Programmable Gate Arrays (FPGAs), making the algorithm more practical for real-world applications. The contributions of this thesis include: 1) a novel computation method for the ListNet ranking algorithm that exposes more fine-grained parallelism for FPGA implementation; 2) a weighted sampling method that takes ranking positions into account, along with an effective quantisation method based on FPGA devices, achieving a 4.42x speedup over a GPU implementation while still guaranteeing accuracy; 3) a fully reconfigurable architecture for ListNet training using multiple bitstream kernels, achieving higher model accuracy than pure fixed-point training and better throughput than pure floating-point training. By applying the above techniques, this thesis accelerates the ListNet algorithm for ranking using FPGAs, achieving significant speed improvements over CPU and GPU implementations.
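    For context, ListNet's training cost stems from comparing the permutation (in practice, top-one) probability distributions induced by the predicted scores and by the relevance labels under a cross-entropy loss. The NumPy sketch below shows that top-one loss and its gradient for a single query; it is a minimal illustration, not the thesis's optimised computation method or FPGA design, and the score and label values are assumptions.

```python
# Minimal ListNet top-one loss and gradient for a single query (illustrative only).
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))  # shift for numerical stability
    return e / e.sum()

def listnet_top1_loss(scores, labels):
    """Cross entropy between the top-one distributions induced by the relevance
    labels (target) and by the predicted scores (model)."""
    p_target = softmax(labels)
    p_model = softmax(scores)
    return -np.sum(p_target * np.log(p_model))

def listnet_top1_grad(scores, labels):
    """Gradient of the loss with respect to the scores: p_model - p_target."""
    return softmax(scores) - softmax(labels)

scores = np.array([2.0, 0.5, 1.0, -0.3])   # assumed model scores for 4 documents
labels = np.array([3.0, 0.0, 1.0, 2.0])    # assumed graded relevance labels
print("loss    :", listnet_top1_loss(scores, labels))
print("gradient:", listnet_top1_grad(scores, labels))
```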