
    Conference Discussion of the Nuclear Force

    Discussion of the nuclear force, led by a round table consisting of T. Cohen, E. Epelbaum, R. Machleidt, and F. Gross (chair). After an invited talk by Machleidt, published elsewhere in these proceedings, brief remarks are made by Epelbaum, Cohen, and Gross, followed by discussion from the floor moderated by the chair. The chair asked the round table and the participants to focus on the following issues: (i) What does each approach (chiral effective field theory, large Nc, and relativistic phenomenology) contribute to our knowledge of the nuclear force? Do we need them all? Is any one transcendent? (ii) How important for applications (few body, nuclear structure, EMC effect, for example) are precise fits to the NN data below 350 MeV? How precise do these fits have to be? (iii) Can we learn anything about nonperturbative QCD from these studies of the nuclear force? The discussion presented here is based on a video recording made at the conference and transcribed afterward. Comment: Discussion at the 21st European Conference on Few Body Problems (EFP21) held at Salamanca, Spain, 30 Aug - 3 Sept 201

    Excited Baryon Decay Widths in Large N_c QCD

    We study excited baryon decay widths in large N_c QCD. It was suggested previously that some spin-flavor mixed-symmetric baryon states have strong couplings of O(N_c^{-1/2}) to nucleons [implying narrow widths of O(1/N_c)], as opposed to the generic expectation based on Witten's counting rules of an O(N_c^0) coupling. The calculation obtaining these narrow widths was performed in the context of a simple quark-shell model. This paper addresses the question of whether the existence of such narrow states is a general property of large N_c QCD. We show that a general large N_c QCD analysis does not predict such narrow states; rather they are a consequence of the extreme simplicity of the quark model. Comment: 9 page

    Nucleon-Nucleon Scattering under Spin-Isospin Reversal in Large-N_c QCD

    The spin-flavor structure of certain nucleon-nucleon scattering observables derived from the large N_c limit of QCD in the kinematical regime where time-dependent mean-field theory is valid is discussed. In previous work, this regime was taken to be where the external momentum was of order N_c, which precluded the study of differential cross sections in elastic scattering. Here it is shown that the regime extends down to order N_c^{1/2}, which includes the higher end of the elastic regime. The prediction is that in the large N_c limit, observables describable via mean-field theory are unchanged when the spin and isospin of either nucleon are both flipped. This prediction is tested against proton-proton and neutron-proton elastic scattering data and found to fail badly. We argue that this failure can be traced to the lack of a clear separation of scales between momenta of order N_c^{1/2} and N_c^1 when N_c is as small as three. The situation is compounded by an anomalously low particle production threshold due to approximate chiral symmetry. Comment: 5 pages, 1 figur

    Sliding susceptibility of a rough cylinder on a rough inclined perturbed surface

    A susceptibility function χ(L) is introduced to quantify some aspects of the intermittent stick-slip dynamics of a rough metallic cylinder of length L on a rough metallic incline that is subjected to small controlled perturbations and maintained below the angle of repose. This problem is studied from the experimental point of view, and the observed power-law behavior of χ(L) is justified through the use of a general class of scaling hypotheses. Comment: 14 pages including 5 figure

    The Kernel Adatron with Bias Unit: Analysis of the Algorithm (Part 2)

    The Kernel Adatron with bias makes it possible to find a large-margin classifier in the kernel feature space. It is an adaptive perceptron (Adatron) which uses augmented patterns in the kernel feature space. The algorithm was first proposed in [9]. In this document its convergence properties are investigated. As a by-product of the analysis, a simpler formulation of the algorithm is derived.
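    For illustration only, the following is a minimal sketch of a Kernel Adatron in which the bias unit is emulated by augmenting the kernel with a constant feature, k'(x, z) = k(x, z) + 1. The RBF kernel, the learning rate eta, and all function and parameter names are assumptions made for this example, not the paper's exact formulation.

    import numpy as np

    def rbf_kernel(X1, X2, gamma=1.0):
        """Gaussian RBF kernel matrix (an assumed kernel choice)."""
        d = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2 * X1 @ X2.T
        return np.exp(-gamma * d)

    def kernel_adatron_bias(X, y, gamma=1.0, eta=0.1, epochs=200):
        """Sketch of a Kernel Adatron with a bias unit via augmented patterns.

        y must contain labels in {-1, +1}.
        """
        n = len(y)
        K = rbf_kernel(X, X, gamma) + 1.0          # augmented kernel (bias unit)
        alpha = np.zeros(n)
        for _ in range(epochs):
            for i in range(n):
                # margin of pattern i under the current expansion
                z_i = y[i] * np.sum(alpha * y * K[:, i])
                # Adatron update: grow alpha_i while the margin is below 1,
                # but never let it become negative
                alpha[i] = max(0.0, alpha[i] + eta * (1.0 - z_i))
        return alpha

    def predict(alpha, X_train, y_train, X_test, gamma=1.0):
        K = rbf_kernel(X_train, X_test, gamma) + 1.0
        return np.sign((alpha * y_train) @ K)

    Folding the bias into the kernel keeps the update identical to that of the plain Adatron, which is the point of working with augmented patterns.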

    Perceptrons in Kernel Feature Spaces

    The weight vector of a perceptron can be represented in two ways: either in an explicit form, where the vector is directly available, or in a data-dependent form, where the weight is represented as a weighted sum of training patterns. Kernel functions allow the creation of nonlinear versions of data-dependent perceptrons if scalar products are replaced by kernel functions. For Muroga's and Minnick's linear programming perceptron, a data-dependent version with kernels and regularisation is presented: the linear programming machine, which performs about as well as support vector machines do while solving only linear programs (support vector learning is based on solving quadratic programs). In the decision function of a kernel-based perceptron, nonlinear dependencies between the expansion vectors can exist. These dependencies in kernel feature space can be eliminated in order to compress the decision function without loss, by removing redundant expansion vectors and updating the multipliers of the remaining ones. The compression ratio obtained can be considered a complexity measure similar to, but tighter than, Vapnik's leave-one-out bound.
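    As an editorial illustration of the data-dependent representation (not of the linear programming machine itself), here is a minimal dual kernel perceptron in which the weight vector is never formed explicitly and only the multipliers are stored. The polynomial kernel and all names are assumptions for the example.

    import numpy as np

    def poly_kernel(x, z, degree=3):
        # polynomial kernel replacing the scalar product (assumed choice)
        return (1.0 + np.dot(x, z)) ** degree

    def kernel_perceptron(X, y, epochs=50, degree=3):
        """Data-dependent (dual) perceptron with kernels; y in {-1, +1}."""
        n = len(y)
        K = np.array([[poly_kernel(a, b, degree) for b in X] for a in X])
        alpha = np.zeros(n)
        for _ in range(epochs):
            for i in range(n):
                f_i = np.sum(alpha * y * K[:, i])   # weighted sum over training patterns
                if y[i] * f_i <= 0:                 # mistake-driven update
                    alpha[i] += 1.0
        return alpha

    def decide(alpha, X_train, y_train, x, degree=3):
        k = np.array([poly_kernel(xi, x, degree) for xi in X_train])
        return np.sign(np.sum(alpha * y_train * k))

    Expansion vectors whose feature-space images are linear combinations of the others could be removed from such a decision function, with the multipliers of the remaining vectors adjusted accordingly; this is the kind of lossless compression described in the abstract.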

    The Kernel Adaline: A New Algorithm for Non-Linear Signal Processing and Regression

    A certain class of non-linear algorithms for signal processing and machine learning is based on the same intrinsic principle. The given samples (training points) are first mapped into a very high-dimensional linearisation space (the feature space of pattern recognition theory), and a linear algorithm then performs its work in this space. The expensive expansion into the linearisation space can be carried out efficiently using Mercer's kernel functions, as studied by Aizerman et al. in the 1960s. In this work a non-linear adaptation of the (so far linear) Adaline algorithm of Widrow and Hoff is proposed. The new algorithm combines the conceptual simplicity of a least-mean-squares algorithm for linear regression with the power of a universal non-linear function approximator. The kernel Adaline algorithm is introduced and first experimental results are given.
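    A minimal sketch of a kernelised Widrow-Hoff (Adaline/LMS) rule in dual form is given below; it is an illustrative reconstruction under an assumed kernel and assumed names, not necessarily the paper's exact scheme.

    import numpy as np

    def rbf(x, z, gamma=1.0):
        return np.exp(-gamma * np.sum((x - z) ** 2))

    def kernel_adaline(X, y, gamma=1.0, eta=0.05, epochs=100):
        """Dual-form LMS: keep f(x) = sum_i alpha_i k(x_i, x) and take
        gradient steps on the squared error with respect to the
        feature-space weight, expressed through the multipliers alpha."""
        n = len(y)
        K = np.array([[rbf(a, b, gamma) for b in X] for a in X])
        alpha = np.zeros(n)
        for _ in range(epochs):
            e = y - K @ alpha    # residuals of the current expansion
            alpha += eta * e     # LMS step on w = sum_i alpha_i phi(x_i), via alpha
        return alpha

    def f(alpha, X_train, x, gamma=1.0):
        return np.sum(alpha * np.array([rbf(xi, x, gamma) for xi in X_train]))

    As for the linear LMS rule, the step size eta has to be small relative to the largest eigenvalue of the kernel matrix for this iteration to converge.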

    Mathematical Programming Potential Functions: New Algorithms for Nonlinear Function Approximation

    Linear- and quadratic-programming perceptrons for regression are new potential function methods for nonlinear function approximation. Potential function perceptrons, which were proposed by Aizerman and colleagues in the early 1960s, work in the following way: in a first stage, patterns from the training set are mapped into a very high-dimensional linearisation space by a non-linear expansion of the training vectors; in this space the perceptron's decision function is determined. In the algorithms proposed in this work, a non-linear prediction function is constructed using linear- or quadratic-programming routines to optimise a convex cost function. In Linear Programming Machines the L1 loss function is minimised, while Quadratic Programming Machines allow the minimisation of the L2 cost function, or a mixture of both the L1 and L2 noise models. Regularisation is performed implicitly by the expansion into the linearisation space (through the choice of a suitable kernel function); additionally, weight-decay regularisation is available. First experimental results for one-dimensional curve fitting using linear programming machines demonstrate the performance of the new method.
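    To make the construction concrete, here is a hedged sketch of a linear programming machine for regression: the prediction function is a kernel expansion, and the L1 loss plus an L1 penalty on the expansion coefficients is minimised by posing the whole problem as one linear program. The RBF kernel, the regularisation constant lam, and all names are assumptions for the example; the quadratic-programming variant mentioned in the abstract would minimise a squared loss instead, turning the LP into a QP.

    import numpy as np
    from scipy.optimize import linprog

    def rbf_gram(X, gamma=1.0):
        d = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2 * X @ X.T
        return np.exp(-gamma * d)

    def lp_machine_regression(X, y, gamma=1.0, lam=0.1):
        """Fit f(x) = sum_j beta_j k(x_j, x) + b by minimising
        sum_i |y_i - f(x_i)| + lam * sum_j |beta_j| as a linear program."""
        n = len(y)
        K = rbf_gram(X, gamma)
        # variables: [beta_plus (n), beta_minus (n), b_plus, b_minus, r (n)]
        c = np.concatenate([lam * np.ones(2 * n), [0.0, 0.0], np.ones(n)])
        ones = np.ones((n, 1))
        I = np.eye(n)
        A_ub = np.block([
            [ K, -K,  ones, -ones, -I],   # f(x_i) - y_i <= r_i
            [-K,  K, -ones,  ones, -I],   # y_i - f(x_i) <= r_i
        ])
        b_ub = np.concatenate([y, -y])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
        z = res.x
        beta = z[:n] - z[n:2 * n]         # recover signed expansion coefficients
        b = z[2 * n] - z[2 * n + 1]       # recover the free bias term
        return beta, b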