
    Conference Discussion of the Nuclear Force

    Discussion of the nuclear force, led by a round table consisting of T. Cohen, E. Epelbaum, R. Machleidt, and F. Gross (chair). After an invited talk by Machleidt, published elsewhere in these proceedings, brief remarks are made by Epelbaum, Cohen, and Gross, followed by discussion from the floor moderated by the chair. The chair asked the round table and the participants to focus on the following issues: (i) What does each approach (chiral effective field theory, large N_c, and relativistic phenomenology) contribute to our knowledge of the nuclear force? Do we need them all? Is any one transcendent? (ii) How important for applications (few body, nuclear structure, EMC effect, for example) are precise fits to the NN data below 350 MeV? How precise do these fits have to be? (iii) Can we learn anything about nonperturbative QCD from these studies of the nuclear force? The discussion presented here is based on a video recording made at the conference and transcribed afterward. Comment: Discussion at the 21st European Conference on Few Body Problems (EFP21) held at Salamanca, Spain, 30 Aug - 3 Sept 201

    Excited Baryon Decay Widths in Large N_c QCD

    We study excited baryon decay widths in large N_c QCD. It was suggested previously that some spin-flavor mixed-symmetric baryon states have strong couplings of O(N_c^{-1/2}) to nucleons [implying narrow widths of O(1/N_c)], as opposed to the generic expectation based on Witten's counting rules of an O(N_c^0) coupling. The calculation obtaining these narrow widths was performed in the context of a simple quark-shell model. This paper addresses the question of whether the existence of such narrow states is a general property of large N_c QCD. We show that a general large N_c QCD analysis does not predict such narrow states; rather they are a consequence of the extreme simplicity of the quark model. Comment: 9 pages
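
    As a hedged illustration of the counting (our notation, not taken from the paper itself): a partial decay width scales as the square of the coupling, so assuming an O(N_c^0) phase-space factor, a coupling of O(N_c^{-1/2}) gives a narrow width while Witten's generic O(N_c^0) coupling gives a width of order one:

        \Gamma \;\sim\; g^2 \times (\text{phase space}), \qquad
        g \sim N_c^{-1/2} \;\Rightarrow\; \Gamma \sim N_c^{-1}, \qquad
        g \sim N_c^{0} \;\Rightarrow\; \Gamma \sim N_c^{0}.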

    Nucleon-Nucleon Scattering under Spin-Isospin Reversal in Large-N_c QCD

    The spin-flavor structure of certain nucleon-nucleon scattering observables derived from the large N_c limit of QCD in the kinematical regime where time-dependent mean-field theory is valid is discussed. In previous work, this regime was taken to be where the external momentum was of order N_c which precluded the study of differential cross sections in elastic scattering. Here it is shown that the regime extends down to order N_c^{1/2} which includes the higher end of the elastic regime. The prediction is that in the large N_c limit, observables describable via mean-field theory are unchanged when the spin and isospin of either nucleon are both flipped. This prediction is tested for proton-proton and neutron-proton elastic scattering data and found to fail badly. We argue that this failure can be traced to a lack of a clear separation of scales between momentum of order N_c^{1/2} and N_c^1 when N_c is as small as three. The situation is compounded by an anomalously low particle production threshold due to approximate chiral symmetry.Comment: 5 pages, 1 figur
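
    Stated as a formula (a hedged paraphrase in our own notation, not the paper's): for any observable O describable by the mean-field treatment, flipping both the spin and the isospin of either nucleon should leave it unchanged in the large N_c limit,

        \lim_{N_c \to \infty}\Bigl[\, O(\vec{s}_1, \vec{\tau}_1;\ \vec{s}_2, \vec{\tau}_2)
        \;-\; O(-\vec{s}_1, -\vec{\tau}_1;\ \vec{s}_2, \vec{\tau}_2) \Bigr] \;=\; 0,

    so that, for example, a polarized p(spin up) p observable should match the corresponding n(spin down) p observable; it is this relation that fails badly against the elastic scattering data.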

    The Kernel Adatron with Bias Unit: Analysis of the Algorithm (Part 2)

    The Kernel Adatron with bias allows a large margin classifier to be found in the kernel feature space. It is an adaptive perceptron (Adatron) which uses augmented patterns in the kernel feature space. The algorithm was first proposed in [9]. In this document its convergence properties are investigated. As a by-product of the current analysis, a simpler formulation of the algorithm is derived.
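
    A minimal sketch of the kernel Adatron update in Python, assuming the bias is handled by augmenting the kernel with a constant (K + 1), which corresponds to using augmented patterns; the kernel choice, variable names, and stopping rule are illustrative assumptions, not details taken from the document:

        import numpy as np

        def rbf_kernel(X, Z, gamma=1.0):
            # Gaussian (RBF) kernel between the row vectors of X and Z.
            d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        def kernel_adatron(X, y, gamma=1.0, eta=0.1, epochs=200):
            """Kernel Adatron with bias via an augmented kernel (K + 1).

            X : (n, d) training patterns, y : (n,) labels in {-1, +1}.
            Returns the non-negative multipliers alpha.
            """
            n = len(y)
            K = rbf_kernel(X, X, gamma) + 1.0   # +1 plays the role of the bias unit
            alpha = np.zeros(n)
            for _ in range(epochs):
                for i in range(n):
                    # Margin of pattern i under the current expansion.
                    z = y[i] * np.dot(alpha * y, K[:, i])
                    # Adatron update, clipped so the multipliers stay non-negative.
                    alpha[i] = max(0.0, alpha[i] + eta * (1.0 - z))
            return alpha

        def predict(alpha, X_train, y_train, X_new, gamma=1.0):
            K = rbf_kernel(X_new, X_train, gamma) + 1.0
            return np.sign(K @ (alpha * y_train))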

    Perceptrons in Kernel Feature Spaces

    The weight vector of a perceptron can be represented in two ways: either in an explicit form, where the vector is directly available, or in a data-dependent form, where the weight vector is represented as a weighted sum of some training patterns. Kernel functions allow the creation of nonlinear versions of data-dependent perceptrons if scalar products are replaced by kernel functions. For Muroga's and Minnick's linear programming perceptron, a data-dependent version with kernels and regularisation is presented; this linear programming machine performs about as well as support vector machines do while solving only LINEAR programs (support vector learning is based on solving QUADRATIC programs). In the decision function of a kernel-based perceptron, nonlinear dependencies between the expansion vectors can exist. These dependencies in kernel feature space can be eliminated in order to compress the decision function without loss, by removing redundant expansion vectors and updating the remaining multipliers. The compression ratio obtained can be considered as a complexity measure similar to, but tighter than, Vapnik's leave-one-out bound.
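
    A minimal sketch, in Python, of the data-dependent (kernel) perceptron idea described above: the weight vector is never formed explicitly, only multipliers of the training patterns are stored, and every scalar product is replaced by a kernel evaluation. The kernel choice and names are illustrative assumptions, not taken from the document:

        import numpy as np

        def poly_kernel(x, z, degree=3):
            # Polynomial kernel replacing the scalar product <x, z>.
            return (1.0 + np.dot(x, z)) ** degree

        def kernel_perceptron(X, y, epochs=20, degree=3):
            """Data-dependent perceptron: w = sum_i alpha_i * y_i * phi(x_i)."""
            n = len(y)
            alpha = np.zeros(n)
            K = np.array([[poly_kernel(a, b, degree) for b in X] for a in X])
            for _ in range(epochs):
                for i in range(n):
                    # Prediction uses only kernel values, never the explicit weight vector.
                    if y[i] * np.dot(alpha * y, K[:, i]) <= 0:
                        alpha[i] += 1.0   # mistake-driven update on pattern i
            return alpha

        def decision_function(alpha, X, y, x_new, degree=3):
            return sum(a * yi * poly_kernel(xi, x_new, degree)
                       for a, yi, xi in zip(alpha, y, X))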

    Mathematical Programming Potential Functions: New Algorithms for Nonlinear Function Approximation

    Linear and quadratic-programming perceptrons for regression are new potential function methods for nonlinear function approximation. Potential function perceptrons, which were proposed by Aizerman and colleagues in the early 1960s, work in the following way: in the first stage, patterns from the training set are mapped into a very high-dimensional, so-called linearisation space by performing a non-linear expansion of the training vectors. In this space the perceptron's decision function is determined. In the algorithms proposed in this work, a non-linear prediction function is constructed using linear- or quadratic-programming routines to optimise a convex cost function. In Linear Programming Machines the L1 loss function is minimised, while Quadratic Programming Machines allow the minimisation of the L2 cost function, or of a mixture of both the L1 and L2 noise models. Regularisation is performed implicitly by choosing a suitable kernel function for the expansion into linearisation space; additionally, weight decay regularisation is available. First experimental results for one-dimensional curve fitting using linear programming machines demonstrate the performance of the new method.
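
    A minimal sketch of a Linear Programming Machine for regression, assuming an RBF kernel expansion around the training points and a small L1 penalty on the coefficients standing in for explicit regularisation (both are illustrative assumptions, not details taken from the document); the L1 loss is minimised with scipy.optimize.linprog:

        import numpy as np
        from scipy.optimize import linprog

        def rbf(X, Z, gamma=1.0):
            d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        def lp_machine_fit(X, y, gamma=1.0, lam=0.01):
            """L1-loss regression over a kernel expansion, solved as a linear program.

            Variables: beta = beta_plus - beta_minus (expansion coefficients),
            b = b_plus - b_minus (bias), xi (per-pattern absolute residuals).
            """
            n = len(y)
            K = rbf(X, X, gamma)
            I = np.eye(n)
            one = np.ones((n, 1))
            # Objective: lam * sum(|beta|) + sum(xi)
            c = np.concatenate([lam * np.ones(2 * n), [0.0, 0.0], np.ones(n)])
            # |y - K beta - b| <= xi, written as two sets of inequalities.
            A_ub = np.vstack([
                np.hstack([-K,  K, -one,  one, -I]),   #   y - f(x) <= xi
                np.hstack([ K, -K,  one, -one, -I]),   # -(y - f(x)) <= xi
            ])
            b_ub = np.concatenate([-y, y])
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
            z = res.x
            beta = z[:n] - z[n:2 * n]
            bias = z[2 * n] - z[2 * n + 1]
            return beta, bias

        def lp_machine_predict(beta, bias, X_train, X_new, gamma=1.0):
            return rbf(X_new, X_train, gamma) @ beta + bias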