
    Glauber - Gribov approach for DIS on nuclei in N=4 SYM

    In this paper the Glauber-Gribov approach for deep-inelastic scattering (DIS) on nuclei is developed in N=4 SYM. It is shown that the amplitude displays the same general properties, such as geometrical scaling, as in the high-density QCD approach. We find that the quantum effects leading to graviton reggeization give rise to an imaginary part of the nucleon amplitude, which makes DIS in N=4 SYM almost identical to what is expected in high-density QCD. We conclude that the impact-parameter dependence of the nucleon amplitude is essential in N=4 SYM, and that the entire kinematic region can be divided into three regions, which are discussed in the paper. We revisit the dipole description of DIS and propose a new renormalized Lagrangian for the shock-wave formalism that reproduces the Glauber-Gribov approach in a certain kinematic region. However, the saturation momentum turns out to be independent of energy, as discussed by Albacete, Kovchegov and Taliotis. We discuss the physical meaning of such a saturation momentum Q_s(A) and argue that only Q > Q_s(A) can be considered within the shock-wave approximation.
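
    For orientation, geometrical scaling means that the dipole scattering amplitude depends on the photon virtuality Q^2 and Bjorken x only through a single combination involving the saturation momentum; a minimal statement of the property (standard in the saturation literature, not taken from this paper) is

        N(Q^2, x) \;=\; N(\tau), \qquad \tau = \frac{Q^2}{Q_s^2(x)}.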

    Dark Energy Content of Nonlinear Electromagnetism

    Quasi-constant external fields in nonlinear electromagnetism generate a contribution to the energy-momentum tensor with the form of dark energy. To provide a thorough understanding of the origin and strength of the effects, we undertake a complete theoretical and numerical study of the energy-momentum tensor T^{\mu\nu} for nonlinear electromagnetism. The Euler-Heisenberg nonlinearity due to quantum fluctuations of spinor and scalar matter fields is considered and contrasted with the properties of classical nonlinear Born-Infeld electromagnetism. We also address modifications of charged-particle kinematics by strong background fields.
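
    For background, the leading weak-field Euler-Heisenberg correction for spinor QED, quoted here as a textbook reference point rather than a formula from the paper, reads in Heaviside-Lorentz units with \hbar = c = 1

        \mathcal{L}^{(1)}_{\rm EH} = \frac{2\alpha^2}{45\, m_e^4}\left[ (\mathbf{E}^2 - \mathbf{B}^2)^2 + 7\,(\mathbf{E}\cdot\mathbf{B})^2 \right],

    where \alpha is the fine-structure constant and m_e the electron mass.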

    Reconstruction of Bandlimited Functions from Unsigned Samples

    We consider the recovery of real-valued bandlimited functions from the absolute values of their samples, possibly spaced nonuniformly. We show that such a reconstruction is always possible if the function is sampled at more than twice its Nyquist rate, and may not be possible if the samples are taken at less than twice the Nyquist rate. In the case of uniform samples, we also describe an FFT-based algorithm to perform the reconstruction. We prove that it converges exponentially rapidly in the number of samples used and examine its numerical behavior on some test cases.
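
    To fix the measurement model, the following Python sketch builds a real bandlimited test signal, samples it uniformly above twice its Nyquist rate, and keeps only the magnitudes; it illustrates the data available to the reconstructor, not the paper's FFT-based recovery algorithm, and all parameters below are illustrative.

        # Minimal sketch of the measurement model (not the paper's algorithm):
        # a real bandlimited signal is sampled above twice its Nyquist rate,
        # and only the magnitudes of the samples are retained.
        import numpy as np

        rng = np.random.default_rng(0)

        B = 1.0                      # band limit (Hz): spectrum supported on [-B, B]
        nyquist_rate = 2 * B         # classical Nyquist rate
        fs = 2.5 * nyquist_rate      # sample at more than twice the Nyquist rate

        # Real bandlimited test signal: a finite sum of random tones below B.
        freqs = rng.uniform(0.0, B, size=8)
        amps = rng.normal(size=8)
        phases = rng.uniform(0.0, 2 * np.pi, size=8)

        def f(t):
            return sum(a * np.cos(2 * np.pi * nu * t + ph)
                       for a, nu, ph in zip(amps, freqs, phases))

        t = np.arange(0.0, 40.0, 1.0 / fs)   # uniform sampling grid
        unsigned = np.abs(f(t))              # unsigned samples: the available data

        # A reconstruction algorithm must recover the discarded signs; the paper
        # shows this is possible whenever the rate exceeds twice the Nyquist rate.
        print(unsigned[:5])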

    Towards heuristic algorithmic memory

    We propose a long-term memory design for artificial general intelligence based on Solomonoff's incremental machine learning methods. We introduce four synergistic update algorithms that use a Stochastic Context-Free Grammar as a guiding probability distribution over programs. The update algorithms adjust production probabilities, re-use previous solutions, learn programming idioms, and discover frequent subprograms. A controlled experiment with a long training sequence shows that our incremental learning approach is effective. © 2011 Springer-Verlag Berlin Heidelberg
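
    As a rough illustration of the first of those update mechanisms, the following minimal Python sketch samples programs from a toy stochastic context-free grammar and shifts production probabilities toward rules used in a "successful" program. The grammar, success test, and update rate are illustrative assumptions, not the authors' design.

        import random
        from collections import Counter

        # Toy SCFG over a tiny expression language: nonterminal -> list of
        # (right-hand side, probability). Purely illustrative.
        GRAMMAR = {
            "E": [(("E", "+", "E"), 0.25),
                  (("E", "*", "E"), 0.25),
                  (("x",),          0.25),
                  (("1",),          0.25)],
        }

        def sample(symbol="E", depth=0, used=None):
            """Sample a program (token list) and record which rules were used."""
            if used is None:
                used = Counter()
            if symbol not in GRAMMAR:          # terminal symbol
                return [symbol], used
            rhss, probs = zip(*GRAMMAR[symbol])
            if depth >= 4:                     # depth limit: force a terminal rule
                i = next(i for i, r in enumerate(rhss)
                         if all(s not in GRAMMAR for s in r))
            else:
                i = random.choices(range(len(rhss)), weights=probs)[0]
            used[(symbol, i)] += 1
            tokens = []
            for s in rhss[i]:
                t, _ = sample(s, depth + 1, used)
                tokens.extend(t)
            return tokens, used

        def update(used, lr=0.1):
            """Shift probability mass toward rules used in a successful program."""
            for symbol, rules in GRAMMAR.items():
                counts = [used.get((symbol, i), 0) for i in range(len(rules))]
                total = sum(counts)
                if total == 0:
                    continue
                GRAMMAR[symbol] = [(rhs, (1 - lr) * p + lr * c / total)
                                   for (rhs, p), c in zip(rules, counts)]

        # Reinforce whatever rules produced a program deemed "successful".
        prog, used = sample()
        if "x" in prog:                        # stand-in for a real success test
            update(used)
        print(" ".join(prog), GRAMMAR["E"])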

    Origin of Intrinsic Josephson Coupling in the Cuprates and Its Relation to Order Parameter Symmetry: An Incoherent Hopping Model

    Experiments on the cuprate superconductors demonstrate that these materials may be viewed as a stack of Josephson junctions along the c-direction. In this paper, we present a model which describes this intrinsic Josephson coupling in terms of incoherent quasiparticle hopping along the c-axis arising from wave-function overlap, impurity-assisted hopping, and boson-assisted hopping. We use this model to compute the magnitude and temperature (T) dependence of the resulting Josephson critical current j_c(T) for s- and d-wave superconductors. Contrary to other approaches, d-wave pairing in this model is compatible with an intrinsic Josephson effect at all hole concentrations and leads to j_c(T) \propto T at low T. By parameterizing our theory with c-axis resistivity data from YBCO, we estimate j_c(T) for optimally doped and underdoped members of this family. Our estimates suggest that further experiments on this compound would be of great help in elucidating the validity of our model in general and the pairing symmetry in particular. We also discuss the implications of our model for LSCO and BSCCO.
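
    For contrast with the low-temperature behaviour quoted above, the standard Ambegaokar-Baratoff result for an s-wave tunnel junction (textbook background, not a result of this paper) is

        I_c(T)\, R_N = \frac{\pi \Delta(T)}{2e}\, \tanh\!\left( \frac{\Delta(T)}{2 k_B T} \right),

    which saturates as T \to 0, whereas the incoherent-hopping model above predicts j_c(T) \propto T for d-wave pairing at low T.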

    Leading strategies in competitive on-line prediction

    We start from a simple asymptotic result for the problem of on-line regression with the quadratic loss function: the class of continuous limited-memory prediction strategies admits a "leading prediction strategy", which not only asymptotically performs at least as well as any continuous limited-memory strategy but also satisfies the property that the excess loss of any continuous limited-memory strategy is determined by how closely it imitates the leading strategy. More specifically, for any class of prediction strategies constituting a reproducing kernel Hilbert space we construct a leading strategy, in the sense that the loss of any prediction strategy whose norm is not too large is determined by how closely it imitates the leading strategy. This result is extended to the loss functions given by Bregman divergences and by strictly proper scoring rules.
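
    For reference, the Bregman divergence mentioned in the last sentence is defined, for a differentiable convex function \Phi, by

        d_\Phi(x, y) = \Phi(x) - \Phi(y) - \langle \nabla \Phi(y),\, x - y \rangle,

    and the choice \Phi(x) = \|x\|^2 recovers the quadratic loss d_\Phi(x, y) = \|x - y\|^2 used at the start of the abstract.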

    Cutoff for the Ising model on the lattice

    Introduced in 1963, Glauber dynamics is one of the most practiced and extensively studied methods for sampling the Ising model on lattices. It is well known that at high temperatures, the time it takes this chain to mix in L^1 on a system of size n is O(\log n). Whether in this regime there is cutoff, i.e. a sharp transition in the L^1-convergence to equilibrium, is a fundamental open problem: if so, as conjectured by Peres, it would imply that mixing occurs abruptly at (c+o(1))\log n for some fixed c>0, thus providing a rigorous stopping rule for this MCMC sampler. However, obtaining the precise asymptotics of the mixing time and proving cutoff can be extremely challenging even for fairly simple Markov chains; already for the one-dimensional Ising model, showing cutoff is a longstanding open problem. We settle the above by establishing cutoff and its location in the high-temperature regime of the Ising model on the lattice with periodic boundary conditions. Our results hold in any dimension and at any temperature where there is strong spatial mixing: for Z^2 this carries all the way to the critical temperature. Specifically, for fixed d \geq 1, the continuous-time Glauber dynamics for the Ising model on (Z/nZ)^d with periodic boundary conditions has cutoff at (d/2\lambda_\infty)\log n, where \lambda_\infty is the spectral gap of the dynamics on the infinite-volume lattice. To our knowledge, this is the first time cutoff is shown for a Markov chain where even understanding its stationary distribution is limited. The proof hinges on a new technique for translating L^1 to L^2 mixing, which enables the application of log-Sobolev inequalities. The technique is general and carries over to other monotone and anti-monotone spin systems.
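
    As background on the sampler being analysed, here is a minimal Python sketch of single-site (heat-bath) Glauber dynamics for the Ising model on a periodic lattice, written for d = 2 and in discrete time; the paper studies the continuous-time chain, and the lattice size, temperature, and sweep count below are illustrative only.

        import numpy as np

        def glauber_sweep(spins, beta, rng):
            """One sweep of heat-bath Glauber dynamics for the Ising model on
            (Z/nZ)^2 with periodic boundary conditions (discrete-time version)."""
            n = spins.shape[0]
            for _ in range(n * n):
                i, j = rng.integers(n, size=2)          # uniformly random site
                # Sum of the four periodic nearest-neighbour spins.
                h = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j]
                     + spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
                # Heat-bath rule: resample the spin from its conditional law.
                p_plus = 1.0 / (1.0 + np.exp(-2.0 * beta * h))
                spins[i, j] = 1 if rng.random() < p_plus else -1
            return spins

        rng = np.random.default_rng(0)
        n, beta = 32, 0.3            # beta below the 2D critical value (~0.4407)
        spins = rng.choice([-1, 1], size=(n, n))
        for sweep in range(200):     # fixed number of sweeps, for illustration
            glauber_sweep(spins, beta, rng)
        print("magnetization:", spins.mean())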

    Meshfree finite differences for vector Poisson and pressure Poisson equations with electric boundary conditions

    We demonstrate how meshfree finite difference methods can be applied to solve vector Poisson problems with electric boundary conditions. In these, the tangential velocity and the incompressibility of the vector field are prescribed at the boundary. Even on irregular domains with only convex corners, canonical nodal-based finite elements may converge to the wrong solution due to a version of the Babuska paradox. In turn, straightforward meshfree finite differences converge to the true solution, and even high-order accuracy can be achieved in a simple fashion. The methodology is then extended to a specific pressure Poisson equation reformulation of the Navier-Stokes equations that possesses the same type of boundary conditions. The resulting numerical approach is second-order accurate and allows for a simple switching between an explicit and an implicit treatment of the viscosity terms.
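
    To make the core ingredient concrete, the following Python sketch computes meshfree finite-difference weights for the Laplacian at one scattered node by requiring exactness on polynomials up to degree two and taking a minimum-norm least-squares solution. This shows the generic construction only; the paper's stencil selection, boundary treatment, and high-order variants are not reproduced here.

        import numpy as np

        def laplacian_weights(center, neighbors):
            """Meshfree finite-difference weights w_k such that
            sum_k w_k u(x_k) ~ (Laplacian u)(center), exact for all bivariate
            polynomials up to degree 2 (minimum-norm solution)."""
            d = np.asarray(neighbors) - np.asarray(center)   # local offsets (m, 2)
            x, y = d[:, 0], d[:, 1]
            # Monomial basis 1, x, y, x^2, x*y, y^2 evaluated at the offsets.
            P = np.column_stack([np.ones_like(x), x, y, x**2, x*y, y**2])
            # Laplacian of each basis monomial evaluated at the origin.
            b = np.array([0.0, 0.0, 0.0, 2.0, 0.0, 2.0])
            # Solve P^T w = b; lstsq returns the minimum-norm w if underdetermined.
            w, *_ = np.linalg.lstsq(P.T, b, rcond=None)
            return w

        # Sanity check on a scattered neighborhood: applied to u(x, y) = x^2 + y^2,
        # the weights should reproduce Laplacian u = 4.
        rng = np.random.default_rng(1)
        nbrs = rng.uniform(-0.1, 0.1, size=(12, 2))
        w = laplacian_weights((0.0, 0.0), nbrs)
        u = nbrs[:, 0]**2 + nbrs[:, 1]**2
        print(w @ u)   # ~ 4.0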

    Evaluating a reinforcement learning algorithm with a general intelligence test

    In this paper we apply the recent notion of anytime universal intelligence tests to the evaluation of a popular reinforcement learning algorithm, Q-learning. We show that a general approach to intelligence evaluation of AI algorithms is feasible. This top-down (theory-derived) approach is based on generating environments under a Solomonoff universal distribution instead of using a pre-defined set of specific tasks such as mazes, problem repositories, etc. This first application of a general intelligence test to a reinforcement learning algorithm brings us to the issue of task-specific vs. general AI agents. This, in turn, suggests new avenues for AI agent evaluation and AI competitions, and also conveys some further insights about the performance of specific algorithms. © 2011 Springer-Verlag. We are grateful for the funding from the Spanish MEC and MICINN for projects TIN2009-06078-E/TIN, Consolider-Ingenio CSD2007-00022 and TIN2010-21062-C02, for MEC FPU grant AP2006-02323, and Generalitat Valenciana for Prometeo/2008/051.
    Insa Cabrera, J.; Dowe, D.L.; Hernández-Orallo, J. (2011). Evaluating a reinforcement learning algorithm with a general intelligence test. In: Advances in Artificial Intelligence. Springer, vol. 7023, pp. 1-11. https://doi.org/10.1007/978-3-642-25274-7_1
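
    For reference, a minimal sketch of the algorithm under evaluation: tabular Q-learning with epsilon-greedy exploration (Watkins & Dayan, 1992). The toy chain environment, learning rate, and other parameters below are illustrative stand-ins, not the Solomonoff-sampled environments of the anytime test.

        import random

        def q_learning(n_states=6, n_actions=2, episodes=500,
                       alpha=0.1, gamma=0.95, eps=0.1, seed=0):
            """Tabular Q-learning with epsilon-greedy exploration on a toy chain:
            action 1 moves right, action 0 moves left; reaching the rightmost
            state gives reward 1 and ends the episode."""
            rng = random.Random(seed)
            Q = [[0.0] * n_actions for _ in range(n_states)]
            for _ in range(episodes):
                s = 0
                while s != n_states - 1:
                    # Epsilon-greedy action selection.
                    if rng.random() < eps:
                        a = rng.randrange(n_actions)
                    else:
                        a = max(range(n_actions), key=lambda x: Q[s][x])
                    # Toy transition and reward.
                    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
                    r = 1.0 if s2 == n_states - 1 else 0.0
                    # Q-learning update rule.
                    Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
                    s = s2
            return Q

        if __name__ == "__main__":
            Q = q_learning()
            policy = [max(range(2), key=lambda a: Q[s][a]) for s in range(6)]
            print("greedy policy (1 = move right):", policy)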