
    Existence of nodal solutions for Dirac equations with singular nonlinearities

    We prove, by a shooting method, the existence of infinitely many solutions of the form $\psi(x^0,x) = e^{-i\Omega x^0}\chi(x)$ of the nonlinear Dirac equation
    \[ i\sum_{\mu=0}^{3} \gamma^\mu \partial_\mu \psi - m\psi - F(\bar{\psi}\psi)\psi = 0, \]
    where $\Omega>m>0$, $\chi$ is compactly supported, and
    \[ F(x) = \begin{cases} p|x|^{p-1} & \text{if } |x|>0, \\ 0 & \text{if } x=0, \end{cases} \]
    with $p\in(0,1)$, under some restrictions on the parameters $p$ and $\Omega$. We also study the behavior of the solutions as $p$ tends to zero to establish the link between these equations and those of the M.I.T. bag model.
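    The shooting method itself is generic: integrate the initial value problem from one endpoint and adjust a free initial parameter until the boundary condition at the other end is met. Below is a minimal Python sketch on a toy boundary value problem, not the paper's reduced Dirac system (whose radial ODE and nodal-solution counting are far more delicate), purely to illustrate the technique:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Toy BVP: u'' + sin(u) = 0, u(0) = 0, u(1) = 1.
    # We "shoot" on the unknown initial slope s = u'(0).

    def endpoint(s):
        """Integrate the IVP with u(0)=0, u'(0)=s and return u(1)."""
        sol = solve_ivp(lambda t, y: [y[1], -np.sin(y[0])],
                        (0.0, 1.0), [0.0, s], rtol=1e-10, atol=1e-12)
        return sol.y[0, -1]

    # Bisection on s so that u(1) hits the target value 1
    # (endpoint(0) = 0 < 1 and endpoint(3) > 1, so the root is bracketed).
    lo, hi = 0.0, 3.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if endpoint(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    print("shooting slope u'(0) =", 0.5 * (lo + hi))
    ```

    Bisection on the endpoint residual is the simplest version; a proof-oriented shooting argument like the paper's instead tracks how the qualitative behavior of solutions (here, the number of nodes) changes with the initial data.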

    Quantum chains with a Catalan tree pattern of conserved charges: the $\Delta = -1$ XXZ model and the isotropic octonionic chain

    A class of quantum chains possessing a family of local conserved charges with a Catalan tree pattern is studied. Recently, we have identified such a structure in the integrable $SU(N)$-invariant chains. In the present work we find sufficient conditions for the existence of a family of charges with this structure in terms of the underlying algebra. Two additional systems with a Catalan tree structure of conserved charges are found. One is the spin-1/2 XXZ model with $\Delta=-1$. The other is a new octonionic isotropic chain, generalizing the Heisenberg model. This system provides an interesting example of an infinite family of noncommuting local conserved quantities. Comment: 20 pages in plain TeX; uses macro harvmac
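    The abstract does not spell out the pattern, but as background the name comes from the Catalan numbers, which count binary trees and obey the splitting recurrence $C_{n+1}=\sum_{i=0}^{n}C_iC_{n-i}$. A quick snippet makes the growth of that tree pattern concrete:

    ```python
    # Catalan numbers via the binary-tree splitting recurrence
    # C_{n+1} = sum_{i=0}^{n} C_i * C_{n-i}, with C_0 = 1.
    def catalan(n_max):
        C = [1]
        for n in range(n_max):
            C.append(sum(C[i] * C[n - i] for i in range(n + 1)))
        return C

    print(catalan(8))  # [1, 1, 2, 5, 14, 42, 132, 429, 1430]
    ```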

    Risk Taking of Executives under Different Incentive Contracts: Experimental Evidence

    Classic financial agency theory recommends compensation through stock options rather than shares to induce risk neutrality in otherwise risk-averse agents. In an experiment, we find that subjects acting as executives also take risks that are excessive from the perspective of shareholders when compensated through options. Compensation through restricted company stock reduces the uptake of excessive risks. Even under stock ownership, however, experimental executives continue to take excessive risks, a result that classic incentive theory cannot account for. We develop a basic model in which such risk-taking behavior is explained by a richer array of risk attitudes derived from Prospect Theory. We use the model to derive hypotheses on what may be driving excessive risk taking in the experiment. Testing those hypotheses, we find that most of them are indeed borne out by the data. We thus conclude that a prospect-theory-based model is better suited to explaining risk attitudes under different compensation regimes than traditional principal-agent models grounded in expected utility theory.
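    For readers unfamiliar with the ingredient the model builds on: prospect theory values outcomes relative to a reference point, concavely for gains and convexly (and more steeply) for losses. The sketch below uses the standard Tversky-Kahneman (1992) parameterization purely as a reference point; the paper's own functional form and fitted parameters are not given in the abstract:

    ```python
    import numpy as np

    # Standard Tversky-Kahneman (1992) value function. Parameters are the
    # original estimates, NOT values from this paper.
    ALPHA, BETA, LAMBDA = 0.88, 0.88, 2.25  # curvature (gains/losses), loss aversion

    def value(x):
        """Prospect-theory value of outcome x relative to the reference point 0."""
        x = np.asarray(x, dtype=float)
        gains = np.abs(x) ** ALPHA
        losses = -LAMBDA * np.abs(x) ** BETA
        return np.where(x >= 0, gains, losses)

    print(value([-100.0, 100.0]))  # a loss looms ~2.25x larger than an equal gain
    ```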

    Structure of the conservation laws in integrable spin chains with short range interactions

    We present a detailed analysis of the structure of the conservation laws in quantum integrable chains of XYZ type and in the Hubbard model. With the use of the boost operator, we establish the general form of the XYZ conserved charges in terms of simple polynomials in spin variables and derive recursion relations for the relative coefficients of these polynomials. For two submodels of the XYZ chain, namely the XXX and XY cases, all the charges can be calculated in closed form. For the XXX case, a simple description of the conserved charges is found in terms of a Catalan tree. This construction is generalized to the su(M)-invariant integrable chain. We also indicate that a quantum recursive (ladder) operator can be traced back to the presence of a Hamiltonian mastersymmetry of degree one in the classical continuous version of the model. We show that in the quantum continuous limits of the XYZ model, the ladder property of the boost operator disappears. For the Hubbard model we demonstrate the non-existence of a ladder operator. Nevertheless, the general structure of the conserved charges is indicated, and the expression for the terms linear in the model's free parameter is derived in closed form for all charges. Comment: 79 pages in plain TeX plus 4 uuencoded figures (uses harvmac and epsf)
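    As a concrete anchor for these closed-form charges: for the periodic XXX chain $H=\sum_j \mathbf{S}_j\cdot\mathbf{S}_{j+1}$, the first nontrivial conserved charge is the well-known $Q_3=\sum_j(\mathbf{S}_j\times\mathbf{S}_{j+1})\cdot\mathbf{S}_{j+2}$. The sketch below checks $[H,Q_3]=0$ numerically on a small chain; it illustrates the structure being organized, not the paper's boost-operator derivation:

    ```python
    import numpy as np

    # Spin-1/2 operators S^a = sigma^a / 2
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    S = [sx / 2, sy / 2, sz / 2]
    I2 = np.eye(2, dtype=complex)

    N = 6  # chain length, periodic boundary conditions

    def embed(ops):
        """Tensor a dict {site: 2x2 matrix} into an operator on the full chain."""
        out = np.ones((1, 1), dtype=complex)
        for j in range(N):
            out = np.kron(out, ops.get(j, I2))
        return out

    # H = sum_j S_j . S_{j+1}
    H = sum(embed({j: S[a], (j + 1) % N: S[a]})
            for j in range(N) for a in range(3))

    # Q3 = sum_j (S_j x S_{j+1}) . S_{j+2}, written via the Levi-Civita symbol
    eps = {(0, 1, 2): 1, (1, 2, 0): 1, (2, 0, 1): 1,
           (0, 2, 1): -1, (2, 1, 0): -1, (1, 0, 2): -1}
    Q3 = sum(sign * embed({j: S[a], (j + 1) % N: S[b], (j + 2) % N: S[c]})
             for j in range(N) for (a, b, c), sign in eps.items())

    print(np.max(np.abs(H @ Q3 - Q3 @ H)))  # ~1e-15: Q3 commutes with H
    ```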

    R-adaptive multisymplectic and variational integrators

    Moving mesh methods (also called r-adaptive methods) are space-adaptive strategies used for the numerical simulation of time-dependent partial differential equations. These methods keep the total number of mesh points fixed during the simulation, but redistribute them over time to follow the areas where a higher mesh point density is required. Only a very limited number of moving mesh methods have been designed for solving field-theoretic partial differential equations, and the numerical analysis of the resulting schemes is challenging. In this paper we present two ways to construct r-adaptive variational and multisymplectic integrators for (1+1)-dimensional Lagrangian field theories. The first method uses a variational discretization of the physical equations, and the mesh equations are then coupled in a way typical of the existing r-adaptive schemes. The second method treats the mesh points as pseudo-particles and incorporates their dynamics directly into the variational principle. A user-specified adaptation strategy is then enforced through Lagrange multipliers as a constraint on the dynamics of both the physical field and the mesh points. We discuss the advantages and limitations of our methods. Numerical results for the sine-Gordon equation are also presented. Comment: 65 pages, 13 figures
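    The basic move shared by r-adaptive schemes is redistribution: the point count stays fixed while points migrate so that a monitor function is equidistributed over the cells. The sketch below shows a single static de Boor-style redistribution step with a generic arc-length monitor; it is an illustration of that idea only, not the paper's variational or multisymplectic construction:

    ```python
    import numpy as np

    def redistribute(x, u):
        """Move mesh points so the arc-length monitor is equidistributed."""
        m = np.sqrt(1.0 + np.gradient(u, x) ** 2)       # monitor on current mesh
        cell = 0.5 * (m[1:] + m[:-1]) * np.diff(x)      # monitor integral per cell
        M = np.concatenate([[0.0], np.cumsum(cell)])    # cumulative "mass" M(x)
        targets = np.linspace(0.0, M[-1], len(x))       # equal mass per new cell
        return np.interp(targets, M, x)                 # invert M(x) at the targets

    x = np.linspace(0.0, 1.0, 21)
    u = np.tanh(20 * (x - 0.5))        # field with a sharp interior layer
    x_new = redistribute(x, u)         # same 21 points, now clustered near x=0.5
    print(np.round(x_new, 3))
    ```

    In a time-dependent solver this step (or a smoothed variant of it) is interleaved with, or coupled to, the evolution of the physical field, which is exactly the coupling the two methods in the paper design variationally.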

    Compression-aware Training of Deep Networks

    In recent years, great progress has been made in a variety of application domains thanks to the development of increasingly deep neural networks. Unfortunately, the huge number of units in these networks makes them expensive both computationally and memory-wise. To overcome this, exploiting the fact that deep networks are over-parametrized, several compression strategies have been proposed. These methods, however, typically start from a network that has been trained in a standard manner, without considering such future compression. In this paper, we propose to explicitly account for compression in the training process. To this end, we introduce a regularizer that encourages the parameter matrix of each layer to have low rank during training. We show that accounting for compression during training allows us to learn much more compact, yet at least as effective, models than state-of-the-art compression techniques. Comment: Accepted at NIPS 2017
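    One standard way to realize such a low-rank regularizer is the nuclear norm (the sum of a matrix's singular values), optimized with proximal steps that soft-threshold the spectrum of each layer's weight matrix. The sketch below shows that proximal step in isolation, as an illustration of the general mechanism rather than the paper's exact regularizer or training loop:

    ```python
    import numpy as np

    def prox_nuclear(W, tau):
        """Proximal operator of tau * ||W||_* (nuclear norm):
        soft-threshold the singular values, driving W toward low rank."""
        U, s, Vt = np.linalg.svd(W, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    # Toy usage: after each (stochastic) gradient step on the task loss,
    # apply the prox to each layer's weight matrix.
    rng = np.random.default_rng(0)
    W = rng.standard_normal((64, 32))
    W_lowrank = prox_nuclear(W, tau=5.0)
    print(np.linalg.matrix_rank(W), "->", np.linalg.matrix_rank(W_lowrank))
    ```

    Because the thresholding zeroes out small singular values during training, the final network can be truncated to its retained singular directions at deployment time with little loss in accuracy, which is the point of training compression-aware.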