
    Scaling within the Spectral Function approach

    Scaling features of the nuclear electromagnetic response functions unveil aspects of nuclear dynamics that are crucial for interpreting neutrino- and electron-scattering data. In the large momentum-transfer regime, the nucleon-density response function defines a universal scaling function, which is independent of the nature of the probe. In this work, we analyze the nucleon-density response function of $^{12}$C, neglecting collective excitations. We employ particle and hole spectral functions obtained within two distinct many-body methods, both widely used to describe electroweak reactions in nuclei. We show that the two approaches provide compatible nucleon-density scaling functions that, for large momentum transfers, satisfy first-kind scaling. Both methods yield scaling functions characterized by an asymmetric shape, although less pronounced than that of the experimental scaling functions. This asymmetry, only mildly affected by final-state interactions, is mostly due to nucleon-nucleon correlations, encoded in the continuum component of the hole spectral function (SF).
    Comment: 15 pages, 11 figures
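    As a schematic reminder of what the abstract refers to (our notation; normalization and removal-energy conventions vary across the literature, so this is an illustrative sketch rather than the paper's exact definitions): in the impulse approximation the nucleon-density response is built from the hole spectral function $P_h$, and first-kind scaling states that, once the appropriate kinematic factor is divided out, the result depends on the momentum and energy transfer only through a single scaling variable $y$,
    \[
      R(\mathbf{q},\omega) \;\simeq\; \int d^3k\, dE \; P_h(\mathbf{k},E)\,
      \delta\big(\omega - E - e_{\mathbf{k}+\mathbf{q}}\big),
      \qquad
      F(y,|\mathbf{q}|) \;\xrightarrow{\;|\mathbf{q}|\to\infty\;}\; F(y),
    \]
    where $e_{\mathbf{k}+\mathbf{q}}$ is the energy of the struck nucleon and $E$ its removal energy.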

    Pulsar Algorithms: A Class of Coarse-Grain Parallel Nonlinear Optimization Algorithms

    Parallel architectures of modern computers, formed of processors with high computing power, motivate the search for new approaches to basic computational algorithms. Another motivating force for the parallelization of algorithms has been the need to solve very large scale or complex problems. However, the complexity of a mathematical programming problem is not necessarily due to its scale or dimension; thus, we should also search for new parallel computation approaches to problems that might have a moderate size but are difficult for other reasons. One such approach is coarse-grained parallelization based on a parametric embedding of an algorithm and on an allocation of the resulting algorithmic phases and variants to many processors, with suitable coordination of the data obtained in this way. Each processor then performs a phase of the algorithm -- a substantial computational task, which mitigates the problems related to data transmission and coordination. The paper presents a class of such coarse-grained parallel algorithms for unconstrained nonlinear optimization, called pulsar algorithms since the approximations of an optimal solution alternately increase and reduce their spread in subsequent iterations. The main algorithmic phase of an algorithm of this class might be either a directional search or a restricted step determination in a trust region method; a minimal sketch of one such iteration is given after this abstract. This class is exemplified by a modified, parallel Newton-type algorithm and a parallel rank-one variable metric algorithm. In the latter case, a consistent approximation of the inverse of the Hessian matrix, based on data produced in parallel, is available at each iteration, while the known deficiencies of a rank-one variable metric are suppressed by the parallel implementation. Additionally, pulsar algorithms might use a parametric embedding into a family of regularized problems in order to counteract possible effects of ill-conditioning. Such parallel algorithms result not only in an increased speed of solving a problem but also in an increased robustness with respect to various sources of complexity of the problem. Necessary theoretical foundations, outlines of various variants of parallel algorithms, and the results of preliminary tests are presented.
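    The sketch below illustrates the coarse-grained idea described above; it is not the authors' implementation. A parametric family of regularized Newton steps (here a Tikhonov-style shift of the Hessian, one shift value per worker) is evaluated concurrently, and the coordination step keeps the best trial point, so the candidate set widens and then contracts in each iteration. The test objective, the regularization grid, and all function names are illustrative assumptions.

```python
# A minimal sketch of a coarse-grained "pulsar"-style iteration (assumed setup,
# not the paper's algorithm): several regularized Newton candidates are computed
# in parallel, then the best one is selected.
import numpy as np
from concurrent.futures import ProcessPoolExecutor


def objective(x):
    # Illustrative smooth test problem (Rosenbrock); any C^2 objective works here.
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2


def gradient(x):
    return np.array([
        -400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0] ** 2),
    ])


def hessian(x):
    return np.array([
        [1200.0 * x[0] ** 2 - 400.0 * x[1] + 2.0, -400.0 * x[0]],
        [-400.0 * x[0], 200.0],
    ])


def regularized_newton_candidate(args):
    # One coarse-grained phase, suitable for one processor: solve the shifted
    # Newton system (H + lam*I) d = -g and evaluate the resulting trial point.
    x, lam = args
    H = hessian(x) + lam * np.eye(x.size)
    d = np.linalg.solve(H, -gradient(x))
    x_trial = x + d
    return objective(x_trial), x_trial


def pulsar_newton(x0, lambdas=(0.0, 1e-2, 1e-1, 1.0, 10.0),
                  tol=1e-8, max_iter=50):
    """Coarse-grained parallel Newton-type iteration: the candidate spread
    widens (many regularizations tried) and then contracts (one point kept)."""
    x = np.asarray(x0, dtype=float)
    with ProcessPoolExecutor() as pool:
        for _ in range(max_iter):
            if np.linalg.norm(gradient(x)) < tol:
                break
            candidates = pool.map(regularized_newton_candidate,
                                  [(x, lam) for lam in lambdas])
            # Coordination step: keep the candidate with the lowest objective.
            _, x = min(candidates, key=lambda c: c[0])
    return x


if __name__ == "__main__":
    print(pulsar_newton([-1.2, 1.0]))
```

    Each worker performs a full linear solve plus a function evaluation, so the granularity is coarse enough that the data exchanged per candidate (a scalar and a trial point) stays negligible relative to the computation, which is the point of the coarse-grained scheme.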