5,897 research outputs found

    Lunar particle shadows and boundary layer experiment: Plasma and energetic particles on the Apollo 15 and 16 subsatellites

    Get PDF
    The lunar particle shadows and boundary layer experiments aboard the Apollo 15 and 16 subsatellites, and the scientific reduction and analysis of the data to date, are discussed with emphasis on four major topics: solar particles; interplanetary particle phenomena; lunar interactions; and the topology and dynamics of the magnetosphere at lunar orbit. The studies of solar and interplanetary particles concentrated on the low-energy region, which was essentially unexplored, and the studies of lunar interaction highlighted the transition from single-particle to plasma characteristics. The analysis concentrated on electron angular distributions as highly sensitive indicators of localized magnetization of the lunar surface. The magnetosphere experiments provided the first electric field measurements in the distant magnetotail, as well as comprehensive low-energy particle measurements at lunar distance.

    Painlevé Transcendent Describes Quantum Correlation Function of the XXZ Antiferromagnet away from the free-fermion point

    Full text link
    We consider quantum correlation functions of the antiferromagnetic spin-1/2 Heisenberg XXZ spin chain in a magnetic field. We show that for a magnetic field close to the critical field h_c (at the critical magnetic field the ground state becomes ferromagnetic) certain correlation functions can be expressed in terms of the solution of the Painlevé V transcendent. This establishes a relation between solutions of Painlevé differential equations and quantum correlation functions in models of interacting fermions. Previously, Painlevé transcendents were known to describe correlation functions only in models with free fermionic spectra. Comment: 10 pages, LaTeX2e
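
    For context, a standard form of the XXZ Hamiltonian in a longitudinal field (the normalization below is an assumption for illustration; conventions for the anisotropy \Delta and the field h vary across the literature) is

        H = \sum_j ( \sigma^x_j \sigma^x_{j+1} + \sigma^y_j \sigma^y_{j+1} + \Delta \sigma^z_j \sigma^z_{j+1} ) - h \sum_j \sigma^z_j .

    The free-fermion point of the title is \Delta = 0, where a Jordan-Wigner transformation maps the chain to free fermions; the result above extends the Painlevé description beyond that point.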

    Localization transitions in non-Hermitian quantum mechanics

    Full text link
    We study the localization transitions which arise in both one and two dimensions when quantum mechanical particles described by a random Schrödinger equation are subjected to a constant imaginary vector potential. A path-integral formulation relates the transition to flux lines depinned from columnar defects by a transverse magnetic field in superconductors. The theory predicts that the transverse Meissner effect is accompanied by stretched exponential relaxation of the field into the bulk and a diverging penetration depth at the transition. Comment: 4 pages (latex) with 3 figures (epsf) embedded in the text using the style file epsf.sty
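
    For reference, a standard way of writing this class of models (the normalization here is an assumption, since the abstract does not fix one) is the continuum Hamiltonian

        H = \frac{(\mathbf{p} + i\mathbf{g})^2}{2m} + V(\mathbf{x}),

    with V random and \mathbf{g} a constant real vector, or equivalently a one-dimensional lattice model with asymmetric hopping,

        H = \sum_x [ \frac{t}{2} ( e^{g} c^\dagger_{x+1} c_x + e^{-g} c^\dagger_x c_{x+1} ) + V_x c^\dagger_x c_x ].

    Eigenvalues migrate off the real axis once g exceeds a disorder-dependent threshold, which is the localization transition described above.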

    Bayes in the age of intelligent machines

    Full text link
    The success of methods based on artificial neural networks in creating intelligent machines seems like it might pose a challenge to explanations of human cognition in terms of Bayesian inference. We argue that this is not the case, and that in fact these systems offer new opportunities for Bayesian modeling. Specifically, we argue that Bayesian models of cognition and artificial neural networks lie at different levels of analysis and are complementary modeling approaches, together offering a way to understand human cognition that spans these levels. We also argue that the same perspective can be applied to intelligent machines, where a Bayesian approach may be uniquely valuable in understanding the behavior of large, opaque artificial neural networks that are trained on proprietary data.

    Entropy and Correlation Functions of a Driven Quantum Spin Chain

    Full text link
    We present an exact solution for a quantum spin chain driven through its critical points. Our approach is based on a many-body generalization of the Landau-Zener transition theory, applied to the fermionized spin Hamiltonian. The resulting nonequilibrium state of the system, while being a pure quantum state, has the local properties of a mixed state, characterized by a finite entropy density associated with Kibble-Zurek defects. Both the entropy and the finite spin correlation length are functions of the rate of the sweep through the critical point. We analyze the anisotropic XY spin-1/2 model evolved with the full many-body evolution operator. With the help of the calculus of Toeplitz determinants, we obtain an exact form of the correlation functions. The properties of the evolved system undergo an abrupt change at a certain critical sweep rate, signaling the formation of ordered domains. We link this phenomenon to the behavior of complex singularities of the Toeplitz generating function. Comment: 16 pages, 7 figures
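
    As background, the two-level Landau-Zener formula on which the many-body generalization builds is the standard result for sweeping H = (vt/2)\sigma^z + \Delta \sigma^x through its avoided crossing:

        P_{ex} = \exp( -2\pi \Delta^2 / (\hbar |v|) ),

    where 2\Delta is the minimum gap and v the sweep rate. Applying this mode by mode to the Jordan-Wigner fermions of the spin chain and summing the excitation probabilities over momenta yields the rate-dependent defect and entropy densities quoted in the abstract (the specific sweep parametrization above is our assumption).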

    Analyticity and Integrability in the Chiral Potts Model

    Full text link
    We study the perturbation theory for the general non-integrable chiral Potts model, which depends on two chiral angles and a strength parameter, and show how the analyticity of the ground state energy and correlation functions dramatically increases when the angles and the strength parameter satisfy the integrability condition. We further specialize to the superintegrable case and verify that a sum rule is obeyed. Comment: 31 pages in harvmac including 9 tables, several misprints eliminated
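
    For orientation, the Z_N chiral Potts quantum chain is commonly written as

        H = -\sum_j \sum_{n=1}^{N-1} [ \bar{\alpha}_n ( Z_j Z^\dagger_{j+1} )^n + \lambda \alpha_n X^n_j ],

    where the two chiral angles \phi, \bar{\phi} enter through phase factors \alpha_n \propto e^{i\phi(2n-N)/N} and \bar{\alpha}_n \propto e^{i\bar{\phi}(2n-N)/N} (normalizations and the assignment of angles to terms vary by convention, so take this form as an assumption rather than the paper's). The integrability condition referred to in the abstract is \cos\phi = \lambda \cos\bar{\phi}, and the superintegrable case is \phi = \bar{\phi} = \pi/2.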

    Universal linguistic inductive biases via meta-learning

    Get PDF
    How do learners acquire languages from the limited data available to them? This process must involve some inductive biases - factors that affect how a learner generalizes - but it is unclear which inductive biases can explain observed patterns in language acquisition. To facilitate computational modeling aimed at addressing this question, we introduce a framework for giving particular linguistic inductive biases to a neural network model; such a model can then be used to empirically explore the effects of those inductive biases. This framework disentangles universal inductive biases, which are encoded in the initial values of a neural network's parameters, from non-universal factors, which the neural network must learn from data in a given language. The initial state that encodes the inductive biases is found with meta-learning, a technique through which a model discovers how to acquire new languages more easily via exposure to many possible languages. By controlling the properties of the languages that are used during meta-learning, we can control the inductive biases that meta-learning imparts. We demonstrate this framework with a case study based on syllable structure. First, we specify the inductive biases that we intend to give our model, and then we translate those inductive biases into a space of languages from which a model can meta-learn. Finally, using existing analysis techniques, we verify that our approach has imparted the linguistic inductive biases that it was intended to impart. Comment: To appear in the Proceedings of the 42nd Annual Conference of the Cognitive Science Society
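
    A minimal sketch of the meta-learning loop the abstract describes (the paper uses MAML; the Reptile-style first-order variant below is a simplification, and the toy task sampler, model, and hyperparameters are all illustrative assumptions):

        import numpy as np

        rng = np.random.default_rng(0)

        def sample_language():
            # Hypothetical task sampler: each "language" is a binary
            # classification problem (is this form a legal syllable?)
            # defined by a random linear rule over feature vectors.
            w_true = rng.normal(size=5)
            X = rng.normal(size=(32, 5))
            y = (X @ w_true > 0).astype(float)
            return X, y

        def inner_sgd(w, X, y, lr=0.1, steps=10):
            # A few inner-loop steps of logistic-regression SGD,
            # i.e. learning one particular language from its data.
            for _ in range(steps):
                p = 1.0 / (1.0 + np.exp(-(X @ w)))   # sigmoid predictions
                w = w - lr * X.T @ (p - y) / len(y)  # cross-entropy gradient
            return w

        # Outer loop: the meta-learned initialization is where the universal
        # inductive biases live; language-specific knowledge stays inner-loop.
        w_init = np.zeros(5)
        meta_lr = 0.1
        for _ in range(2000):
            X, y = sample_language()
            w_adapted = inner_sgd(w_init.copy(), X, y)
            w_init += meta_lr * (w_adapted - w_init)  # Reptile meta-update

    Controlling which languages sample_language can emit is the lever the paper describes: it shapes the initialization, and hence the inductive biases, without hard-coding them.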

    Embers of Autoregression: Understanding Large Language Models Through the Problem They are Trained to Solve

    Full text link
    The widespread adoption of large language models (LLMs) makes it important to recognize their strengths and limitations. We argue that in order to develop a holistic understanding of these systems we need to consider the problem that they were trained to solve: next-word prediction over Internet text. By recognizing the pressures that this task exerts, we can make predictions about the strategies that LLMs will adopt, allowing us to reason about when they will succeed or fail. This approach - which we call the teleological approach - leads us to identify three factors that we hypothesize will influence LLM accuracy: the probability of the task to be performed, the probability of the target output, and the probability of the provided input. We predict that LLMs will achieve higher accuracy when these probabilities are high than when they are low - even in deterministic settings where probability should not matter. To test our predictions, we evaluate two LLMs (GPT-3.5 and GPT-4) on eleven tasks, and we find robust evidence that LLMs are influenced by probability in the ways that we have hypothesized. In many cases, the experiments reveal surprising failure modes. For instance, GPT-4's accuracy at decoding a simple cipher is 51% when the output is a high-probability word sequence but only 13% when it is low-probability. These results show that AI practitioners should be careful about using LLMs in low-probability situations. More broadly, we conclude that we should not evaluate LLMs as if they were humans but should instead treat them as a distinct type of system - one that has been shaped by its own particular set of pressures. Comment: 50 pages plus 11 pages of references and 23 pages of appendices
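
    A minimal sketch of the kind of cipher probe described (rot13 as the shift cipher; the prompt wording and comparison step are illustrative assumptions, not the paper's evaluation harness):

        import codecs

        def rot13(text: str) -> str:
            # rot13 is a shift cipher with shift 13; Python ships a codec for it.
            return codecs.encode(text, "rot13")

        # A high-probability target (ordinary English) and a low-probability
        # one (the same words scrambled): the hypothesis predicts much higher
        # decoding accuracy on the first, even though the task is deterministic.
        high_prob = "the quick brown fox jumps over the lazy dog"
        low_prob = "dog brown the jumps lazy fox quick the over"

        for target in (high_prob, low_prob):
            prompt = f"Decode the following rot13 text: {rot13(target)}"
            print(prompt)                # send this to the LLM under test...
            print("expected:", target)   # ...and compare its answer to this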