283 research outputs found
An Analysis of the Relationship between Principal Employment Interview Scores and the Achievement Scores of Students with Specific Learning Disabilities
The primary purpose of this study was to examine the relationship between five of the ISLLC 2008 leadership standards, as measured by a standardized employment interview (ICIS Principal), and the achievement of students with specific learning disabilities in core areas of instruction. Findings did not support rejection of the null hypothesis; that is, a statistically significant relationship between these leadership measures and the achievement levels of students with specific learning disabilities was not demonstrated. The analysis did, however, indicate that the relationship varied for students with specific learning disabilities in comparison to their grade-level peers. This latter finding encourages further investigation of the methodological and conceptual issues that influence the relationship between principals and student achievement.
Simple Ways to improve Discrete Time Evolution
Suzuki-Trotter decompositions of exponential operators are
required in almost every branch of numerical physics. Often the exponent under
consideration has to be split into more than two operators, for instance as
local gates on quantum computers. In this work, we demonstrate how highly
optimised schemes originally derived for exactly two operators can be applied
to such generic Suzuki-Trotter decompositions. After this first trick, we
explain what makes an efficient decomposition and how to choose from the large
variety available. Furthermore, we demonstrate that many problems for which a
Suzuki-Trotter decomposition might appear to be the canonical ansatz are
better approached with different methods, such as Taylor or Chebyshev expansions.
In particular, we derive an efficient and numerically stable method to
implement truncated polynomial expansions based on a linear factorisation using
their complex zeros.
Comment: 10 pages, 3 figures; LATTICE2023 proceedings. arXiv admin note: text overlap with arXiv:2211.0269
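As a hedged illustration of the generic splittings discussed in this abstract (not the optimised schemes derived in the paper), the following Python sketch applies a symmetric second-order Trotter step to three non-commuting operators and compares it with the exact exponential; the matrices A, B, C, the step count n and all sizes are arbitrary placeholders.

```python
# Minimal sketch: symmetric second-order Suzuki-Trotter step for three
# non-commuting operators, exp(-i t (A+B+C)) ~ [S(dt)]^n with
# S(dt) = e^{-i dt/2 A} e^{-i dt/2 B} e^{-i dt C} e^{-i dt/2 B} e^{-i dt/2 A}.
# A, B, C are random Hermitian placeholders, not taken from the paper.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def random_hermitian(d):
    m = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (m + m.conj().T) / 2

d, t, n = 8, 1.0, 100
A, B, C = (random_hermitian(d) for _ in range(3))
dt = t / n

# One symmetric splitting step; the error per step is O(dt^3).
S = expm(-0.5j * dt * A) @ expm(-0.5j * dt * B) @ expm(-1j * dt * C) \
    @ expm(-0.5j * dt * B) @ expm(-0.5j * dt * A)

approx = np.linalg.matrix_power(S, n)
exact = expm(-1j * t * (A + B + C))
print("global error:", np.linalg.norm(approx - exact))  # scales like O(dt^2)
```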
The Hubbard Model on the Honeycomb Lattice with Hybrid Monte Carlo
We take advantage of recent improvements in the grand canonical Hybrid Monte Carlo (HMC) algorithm to perform a precision study of the single-particle gap in the hexagonal Hubbard model with on-site electron-electron interactions. After carefully controlled analyses of the Trotter error, the thermodynamic limit, and finite-size scaling with inverse temperature, we determine the critical coupling and the critical exponent of the semimetal-antiferromagnetic Mott insulator quantum phase transition in the hexagonal Hubbard model.
Based on these results, we provide a unified, comprehensive treatment of all operators that contribute to the antiferromagnetic, ferromagnetic, and charge-density-wave structure factors and order parameters of the hexagonal Hubbard model. We expect our findings to improve the consistency of Monte Carlo determinations of critical exponents. We perform a data-collapse analysis to determine the critical exponent. We consider our findings in view of the Gross-Neveu, or chiral Heisenberg, universality class. We also discuss the computational scaling of the HMC algorithm. Our methods are applicable to a wide range of lattice theories of strongly correlated electrons.
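The finite-size-scaling data collapse mentioned above can be sketched as follows; the rescaling itself is standard, but the synthetic data, the critical coupling and the exponent values used here are placeholders rather than the paper's results.

```python
# Minimal sketch of a finite-size-scaling data collapse, as used to extract
# critical exponents.  The data, the critical coupling U_c and the exponents
# beta, nu below are synthetic placeholders, not the paper's results.
import numpy as np

def collapse(U, m, L, U_c, beta, nu):
    """Rescale the coupling and the order parameter so that curves for
    different lattice sizes L fall onto one universal function near U_c."""
    x = (U - U_c) * L ** (1.0 / nu)
    y = m * L ** (beta / nu)
    return x, y

U = np.linspace(3.0, 5.0, 41)
for L in (6, 9, 12):
    m = np.tanh(np.clip(U - 3.8, 0.0, None) * L) / np.sqrt(L)  # toy data only
    x, y = collapse(U, m, L, U_c=3.8, beta=0.8, nu=1.2)        # placeholder values
    # In practice one scans U_c, beta and nu to minimise the spread of the
    # rescaled curves, e.g. via a chi^2 of interpolated (x, y) points.
```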
The Ising model, a simple statistical model for ferromagnetism, is one such theory. There are analytic solutions for low dimensions and very efficient Monte Carlo methods, such as cluster algorithms, for simulating this model in special cases.
However, most approaches do not generalise to arbitrary lattices and couplings. We present a formalism that allows one to apply HMC simulations to the Ising model, demonstrating how a system with discrete degrees of freedom can be simulated with continuous variables. Because of the flexibility of HMC, our formalism generalises easily to arbitrary modifications of the model, creating a route to leveraging advanced algorithms such as shift preconditioners and multi-level methods developed in conjunction with HMC.
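A minimal sketch of such an HMC update for a continuous field is shown below, assuming a generic quadratic-plus-log-cosh action as a stand-in for the transformed Ising action; the coupling matrix, step sizes and trajectory length are placeholders, not the formalism of the work described above.

```python
# Minimal sketch of a leapfrog HMC update for a continuous field phi, of the
# kind used to simulate discrete spin systems after mapping them to continuous
# variables.  The action below is a schematic placeholder.
import numpy as np

rng = np.random.default_rng(1)
N = 16                                     # number of sites (placeholder)
# Positive-definite coupling matrix standing in for the transformed couplings.
K = np.eye(N) + 0.1 * (np.roll(np.eye(N), 1, axis=0) + np.roll(np.eye(N), -1, axis=0))

def action(phi):
    # Quadratic part from the Gaussian transformation plus a log-cosh term
    # collecting the summed-over discrete spins (placeholder form).
    return 0.5 * phi @ K @ phi - np.sum(np.log(np.cosh(phi)))

def force(phi):
    return -(K @ phi) + np.tanh(phi)       # -dS/dphi

def hmc_step(phi, n_md=10, dt=0.1):
    phi0 = phi
    p = rng.normal(size=phi.shape)         # refresh conjugate momenta
    h_old = 0.5 * p @ p + action(phi)
    # Leapfrog integration of the fictitious Hamiltonian dynamics.
    p = p + 0.5 * dt * force(phi)
    for _ in range(n_md - 1):
        phi = phi + dt * p
        p = p + dt * force(phi)
    phi = phi + dt * p
    p = p + 0.5 * dt * force(phi)
    h_new = 0.5 * p @ p + action(phi)
    # Metropolis accept/reject keeps the update exact despite integration errors.
    return phi if rng.random() < np.exp(h_old - h_new) else phi0

phi = np.zeros(N)
for _ in range(100):
    phi = hmc_step(phi)
print("mean field after 100 updates:", phi.mean())
```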
We discuss the relation between a variety of methods for determining energy levels in lattice field theory simulations: the generalised eigenvalue, the Prony, the generalised pencil of function, and the Gardner methods. The first three can be understood as special cases of a generalised eigenvalue problem. We show analytically that in all three the leading corrections to an energy due to unresolved states decay asymptotically exponentially, and using synthetic data we show that these corrections behave as expected in practice as well. We propose a novel combination of the generalised eigenvalue and the Prony method, denoted GEVM/PGEVM, which helps to increase the relevant energy gap. We illustrate its usage and performance using lattice QCD examples. The Gardner method, on the other hand, is found to be less applicable to realistic noisy data.
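The generalised eigenvalue method referred to above can be illustrated with a small synthetic correlator matrix; the two-state energies and overlaps below are placeholders, not lattice QCD data, and the code solves the standard problem C(t) v = lambda(t, t0) C(t0) v.

```python
# Minimal sketch of the generalised eigenvalue method for a correlator matrix:
# solve C(t) v = lambda(t, t0) C(t0) v and read off effective energies from the
# eigenvalues.  Energies and overlaps are synthetic placeholders.
import numpy as np
from scipy.linalg import eigh

E = np.array([0.5, 1.2])                   # placeholder energy levels
Z = np.array([[1.0, 0.6],
              [0.4, 1.0]])                 # placeholder overlap factors
T, t0 = 16, 1
# Ideal two-state correlator matrix C(t)_ij = sum_n Z_in Z_jn exp(-E_n t).
C = np.array([Z @ np.diag(np.exp(-E * t)) @ Z.T for t in range(T)])

for t in range(t0 + 1, T):
    # Generalised eigenvalue problem with metric C(t0).
    lam = np.sort(eigh(C[t], C[t0], eigvals_only=True))[::-1]
    E_eff = -np.log(lam) / (t - t0)        # recovers E exactly for ideal data
    print(t, E_eff)
```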
Real Time Simulations of Quantum Spin Chains: Density-of-States and Reweighting approaches
We put the Density-of-States (DoS) approach to Monte-Carlo (MC) simulations
under a stress test by applying it to a physical problem with the worst
possible sign problem: the real time evolution of a non-integrable quantum spin
chain. Benchmarks against numerically exact diagonalisation and stochastic
reweighting are presented. Both MC methods, the DoS approach and reweighting,
allow for simulations of spin chains far beyond the reach of exact
diagonalisation, though only for short evolution times. We
identify discontinuities of the density of states as one of the key problems in
the MC simulations and propose to calculate some of the dominant contributions
analytically, increasing the precision of our simulations by several orders of
magnitude. Even after these improvements, the density of states is found to be
highly non-smooth and therefore the DoS approach cannot outperform reweighting. We
prove this implication theoretically and provide numerical evidence, concluding
that the DoS approach is not well suited for quantum real time simulations with
discrete degrees of freedom.
Comment: 16 + 4 pages, 7 figures; code and data available (DOI: 10.5281/zenodo.7164902)
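The stochastic reweighting baseline used above admits a compact illustration: sample with respect to the modulus of an oscillatory weight and correct with its phase. The one-dimensional toy integrand below is a placeholder, unrelated to the spin-chain path integral of the paper.

```python
# Minimal sketch of reweighting for an oscillatory ("sign-problem") weight:
# sample from |w(x)| and correct each sample with the phase w(x)/|w(x)|.
import numpy as np

rng = np.random.default_rng(2)

def w(x, t=1.0):
    # Complex weight exp(-x^2/2 + i t x^2); the oscillating phase causes the
    # cancellations characteristic of a sign problem.
    return np.exp(-0.5 * x**2 + 1j * t * x**2)

def observable(x):
    return x**2                            # toy observable

# |w(x)| is proportional to exp(-x^2/2), i.e. a standard Gaussian.
x = rng.normal(size=100_000)
phase = w(x) / np.abs(w(x))

estimate = np.mean(observable(x) * phase) / np.mean(phase)
print("reweighted estimate of <x^2>:", estimate)
print("average phase (sign-problem severity):", abs(np.mean(phase)))
```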
Beer Mats make bad Frisbees
In this article we show why flying and rotating beer mats, CDs, or other flat
disks will eventually flip in the air and end up flying with backspin, thus,
making them unusable as frisbees. The crucial effect responsible for the
flipping is found to be the lift acting not at the center of mass but
slightly offset towards the forward edge. This induces a torque leading to a
precession towards backspin orientation. An effective theory is developed
providing an approximate solution for the disk's trajectory with a minimal set
of parameters. Our theoretical results are confronted with experimental results
obtained using a beer mat shooting apparatus and a high-speed camera. Very good
agreement is found.
Comment: 4 videos in ancillary file
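Purely as a hedged toy model, and not the effective theory developed in the paper, the following sketch integrates a flying disk as a point mass with drag and lift plus a phenomenological precession of the spin axis; every parameter value is an arbitrary placeholder.

```python
# Toy sketch (not the paper's effective theory): a point mass with quadratic
# drag and lift, plus a constant phenomenological precession rate of the spin
# axis standing in for the torque from the forward-offset lift.
import numpy as np
from scipy.integrate import solve_ivp

m, g = 0.005, 9.81            # mass [kg], gravity [m/s^2] (placeholders)
c_drag, c_lift = 2e-4, 4e-4   # aerodynamic coefficients (placeholders)
omega_p = 3.0                 # precession rate of the spin axis [rad/s] (placeholder)

def rhs(t, y):
    x, z, vx, vz, theta = y   # position, velocity, tilt of the spin axis
    v = np.hypot(vx, vz)
    # Drag opposes the velocity; lift acts perpendicular to it, reduced by tilt.
    ax = (-c_drag * v * vx - c_lift * v * vz * np.cos(theta)) / m
    az = (-c_drag * v * vz + c_lift * v * vx * np.cos(theta)) / m - g
    return [vx, vz, ax, az, omega_p]   # the offset lift steadily precesses the axis

y0 = [0.0, 1.0, 8.0, 2.0, 0.0]         # thrown forward with a slight upward angle
sol = solve_ivp(rhs, (0.0, 2.0), y0, max_step=0.01)
print("range [m]:", sol.y[0, -1], " final tilt [rad]:", sol.y[4, -1])
```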
