
    Active Width at a Slanted Active Boundary in Directed Percolation

    The width W of the active region around an active moving wall in a directed percolation process diverges at the percolation threshold p_c as W \simeq A \epsilon^{-\nu_\parallel} \ln(\epsilon_0/\epsilon), with \epsilon = p_c - p, \epsilon_0 a constant, and \nu_\parallel = 1.734 the critical exponent of the characteristic time \xi_\parallel \sim \epsilon^{-\nu_\parallel} needed to reach the stationary state. The logarithmic factor arises from screening of statistically independent needle-shaped sub-clusters in the active region. Numerical data confirm this scaling behaviour. Comment: 5 pages, 5 figures
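    The quoted scaling form is easy to evaluate numerically. The sketch below uses the exponent \nu_\parallel = 1.734 from the abstract, but the amplitude A and cutoff \epsilon_0 are illustrative placeholders, not values from the paper:

```python
import math

NU_PAR = 1.734  # critical exponent nu_parallel quoted in the abstract

def active_width(eps, A=1.0, eps0=1.0):
    """W ~ A * eps**(-nu_par) * ln(eps0/eps).
    A and eps0 are illustrative placeholders, not fitted values."""
    if not 0.0 < eps < eps0:
        raise ValueError("scaling form assumes 0 < eps < eps0")
    return A * eps ** (-NU_PAR) * math.log(eps0 / eps)

# The width diverges as eps = p_c - p -> 0:
for eps in (0.1, 0.01, 0.001):
    print(f"eps={eps:<6} W={active_width(eps):.1f}")
```

Note that the logarithmic factor makes W grow faster than the bare power law: halving the distance to threshold more than doubles the power-law estimate of the width.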

    Optimal expression evaluation for data parallel architectures

    A data parallel machine represents an array or other composite data structure by allocating one processor (at least conceptually) per data item. A pointwise operation can be performed between two such arrays in unit time, provided their corresponding elements are allocated to the same processors. If the arrays are not aligned in this fashion, the cost of moving one or both of them is part of the cost of the operation. The choice of where to perform the operation then affects this cost. If an expression with several operands is to be evaluated, there may be many choices of where to perform the intermediate operations. An efficient algorithm is given to find the minimum-cost way to evaluate an expression, for several different data parallel architectures. This algorithm applies to any architecture in which the metric describing the cost of moving an array is robust. This encompasses most of the common data parallel communication architectures, including meshes of arbitrary dimension and hypercubes. Remarks are made on several variations of the problem, some of which are solved and some of which remain open.
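    The placement problem can be illustrated with a small dynamic program over the expression tree: for each subexpression and each candidate location, record the cheapest way to produce that subexpression's result at that location. This is only a sketch under simplifying assumptions (a tiny 1-D mesh with movement metric |p - q|, one location per array); the paper's algorithm and its robust-metric machinery are more general:

```python
# Expression trees: ('leaf', home_position) or ('op', left_subtree, right_subtree).

def min_eval_cost(tree, positions):
    """Return a dict mapping each location to the minimal total movement
    cost of evaluating `tree` with its result left at that location,
    assuming a 1-D mesh metric d(p, q) = |p - q|."""
    if tree[0] == 'leaf':
        home = tree[1]
        return {p: abs(home - p) for p in positions}
    _, left, right = tree
    lc = min_eval_cost(left, positions)
    rc = min_eval_cost(right, positions)
    # To operate at p, each operand is produced somewhere and moved to p.
    return {p: min(lc[q] + abs(q - p) for q in positions)
              + min(rc[q] + abs(q - p) for q in positions)
            for p in positions}

positions = range(5)                     # a 5-processor 1-D mesh
expr = ('op', ('leaf', 0), ('op', ('leaf', 4), ('leaf', 4)))
costs = min_eval_cost(expr, positions)
best = min(costs, key=costs.get)
```

Because the metric satisfies the triangle inequality, intermediate results never benefit from detours, which is what lets the per-node tables compose.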

    Target Selection by Frontal Cortex During Coordinated Saccadic and Smooth Pursuit Eye Movement

    Oculomotor tracking of moving objects is an important component of visually based cognition and planning. Such tracking is achieved by a combination of saccades and smooth pursuit eye movements. In particular, the saccadic and smooth pursuit systems interact to often choose the same target, and to maximize its visibility through time. How do multiple brain regions interact, including frontal cortical areas, to decide the choice of a target among several competing moving stimuli? How is target selection information that is created by a bias (e.g., electrical stimulation) transferred from one movement system to another? These saccade-pursuit interactions are clarified by a new computational neural model, which describes interactions among motion processing areas MT, MST, FPA, DLPN; saccade specification, selection, and planning areas LIP, FEF, SNr, SC; the saccadic generator in the brain stem; and the cerebellum. Model simulations explain a broad range of neuroanatomical and neurophysiological data. These results are in contrast with the simplest parallel model, with no interactions between saccades and pursuit other than common-target selection and recruitment of shared motoneurons. Actual tracking episodes in primates reveal multiple systematic deviations from predictions of the simplest parallel model, which are explained by the current model. National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)

    Vision-Based Road Detection in Automotive Systems: A Real-Time Expectation-Driven Approach

    The main aim of this work is the development of a vision-based road detection system fast enough to cope with the difficult real-time constraints imposed by moving vehicle applications. The hardware platform, a special-purpose massively parallel system, has been chosen to minimize system production and operational costs. This paper presents a novel approach to expectation-driven low-level image segmentation, which can be mapped naturally onto mesh-connected massively parallel SIMD architectures capable of handling hierarchical data structures. The input image is assumed to contain a distorted version of a given template; a multiresolution stretching process is used to reshape the original template in accordance with the acquired image content, minimizing a potential function. The distorted template is the process output. Comment: See http://www.jair.org/ for any accompanying file
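    The multiresolution idea can be sketched in one dimension: search coarsely for the template placement that minimizes a potential function, then refine the estimate at each finer resolution level. This is a hedged illustration only; the potential here is a plain sum of squared differences and the "stretch" is reduced to a rigid shift, far simpler than the paper's template reshaping:

```python
def downsample(xs):
    # Halve resolution by averaging adjacent pairs (one pyramid level).
    return [(xs[i] + xs[i + 1]) / 2 for i in range(0, len(xs) - 1, 2)]

def potential(image, template, s):
    # Sum of squared differences; comparisons falling outside the image
    # are skipped, which is adequate for the interior shifts used here.
    return sum((image[i + s] - t) ** 2
               for i, t in enumerate(template) if 0 <= i + s < len(image))

def coarse_to_fine_shift(image, template, levels=2):
    imgs, tmps = [image], [template]
    for _ in range(levels):
        imgs.append(downsample(imgs[-1]))
        tmps.append(downsample(tmps[-1]))
    # Exhaustive search over fully overlapping shifts at the coarsest level...
    coarse_img, coarse_tmp = imgs[-1], tmps[-1]
    shift = min(range(len(coarse_img) - len(coarse_tmp) + 1),
                key=lambda s: potential(coarse_img, coarse_tmp, s))
    # ...then refine by at most +/-1 at each finer level.
    for lvl in range(levels - 1, -1, -1):
        shift *= 2
        shift = min((shift - 1, shift, shift + 1),
                    key=lambda s: potential(imgs[lvl], tmps[lvl], s))
    return shift

template = [0, 1, 3, 1, 0, 0, 0, 0]
image = [0] * 16
for i, t in enumerate(template):
    image[i + 3] = t                  # plant the template at shift 3
```

The coarse-to-fine structure is what makes such matching attractive for hierarchical SIMD meshes: each level is a local, data-parallel computation.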

    Regularizing made-to-measure particle models of galaxies

    Made-to-measure methods such as the parallel code NMAGIC are powerful tools to build galaxy models reproducing observational data. They work by adapting the particle weights in an N-body system until the target observables are well matched. Here we introduce a moving prior regularization (MPR) method for such particle models. It is based on determining from the particles a distribution of priors in phase-space, which are updated in parallel with the weight adaptation. This method allows one to construct smooth models from noisy data without erasing global phase-space gradients. We first apply MPR to a spherical system for which the distribution function can in theory be uniquely recovered from idealized data. We show that NMAGIC with MPR indeed converges to the true solution with very good accuracy, independent of the initial particle model. Compared to the standard weight entropy regularization, biases in the anisotropy structure are removed and local fluctuations in the intrinsic distribution function are reduced. We then investigate how the uncertainties in the inferred dynamical structure increase with less complete and noisier kinematic data, and how the dependence on the initial particle model also increases. Finally, we apply the MPR technique to the two intermediate-luminosity elliptical galaxies NGC 4697 and NGC 3379, obtaining smoother dynamical models in luminous and dark matter potentials. Comment: 16 pages, 15 figures, 2 tables. Accepted for publication in MNRAS
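    The made-to-measure idea with a moving prior can be caricatured in a few lines: weights are multiplicatively nudged to match a binned target observable, while a prior term pulls each log-weight toward a prior that is itself periodically re-estimated from the current weights of phase-space neighbours. Everything below (bin counts, step size EPS, prior strength MU, update cadence) is a made-up toy; NMAGIC's observables, smoothing kernel, and update rule are far more elaborate:

```python
import math
import random

random.seed(1)
N_BINS, N_PART = 4, 400
positions = [random.random() for _ in range(N_PART)]
bins = [min(int(x * N_BINS), N_BINS - 1) for x in positions]
target = [0.4, 0.3, 0.2, 0.1]          # target mass fraction per bin
weights = [1.0 / N_PART] * N_PART
priors = list(weights)

EPS, MU = 0.2, 0.05                     # step size and prior strength (toy values)

for step in range(500):
    model = [0.0] * N_BINS              # model observable: binned mass
    for w, b in zip(weights, bins):
        model[b] += w
    # Moving prior: occasionally refresh each prior from the mean weight
    # of the particle's phase-space neighbours (here, its bin).
    if step % 50 == 0:
        mean = [model[b] / bins.count(b) for b in range(N_BINS)]
        priors = [mean[b] for b in bins]
    for i, b in enumerate(bins):
        delta = (model[b] - target[b]) / target[b]   # relative mismatch
        force = -delta - MU * math.log(weights[i] / priors[i])
        weights[i] *= math.exp(EPS * force)          # keeps weights positive
```

Because the prior tracks the local mean rather than a fixed global value, the regularization smooths local weight fluctuations without flattening genuine gradients across bins, which is the qualitative point of MPR.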

    Parallel Implementation of the PHOENIX Generalized Stellar Atmosphere Program. II: Wavelength Parallelization

    We describe an important addition to the parallel implementation of our generalized NLTE stellar atmosphere and radiative transfer computer program PHOENIX. In a previous paper in this series we described data and task parallel algorithms we have developed for radiative transfer, spectral line opacity, and NLTE opacity and rate calculations. These algorithms divide the work spatially or by spectral line, that is, they distribute the radial zones, individual spectral lines, or characteristic rays among different processors, and in addition employ task parallelism for logically independent functions (such as atomic and molecular line opacities). For finite, monotonic velocity fields, the radiative transfer equation is an initial value problem in wavelength, and hence each wavelength point depends upon the previous one. However, for sophisticated NLTE models of both static and moving atmospheres needed to accurately describe, e.g., novae and supernovae, the number of wavelength points is very large (200,000--300,000), and hence parallelization over wavelength can lead both to considerable speedup in calculation time and to the ability to make use of the aggregate memory available on massively parallel supercomputers. Here, we describe an implementation of a pipelined design for the wavelength parallelization of PHOENIX, where the necessary data from the processor working on a previous wavelength point is sent to the processor working on the succeeding wavelength point as soon as it is known. Our implementation uses a MIMD design based on a relatively small number of standard MPI library calls and is fully portable between serial and parallel computers. Comment: AAS-TeX, 15 pages, full text with figures available at ftp://calvin.physast.uga.edu/pub/preprints/Wavelength-Parallel.ps.gz ApJ, in press
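    Why a pipeline helps despite the wavelength-to-wavelength dependence can be shown with a toy completion-time model. The assumption (made here for illustration, not taken from the paper) is that each wavelength point has a substantial independent part (e.g. opacity setup) that any processor can compute on its own, plus a chain-dependent part (the transfer solve) that must wait for the previous point's result to arrive. Plain Python, not the MPI implementation:

```python
def pipeline_makespan(n_points, n_procs, t_indep=4.0, t_dep=1.0, t_comm=0.1):
    """Toy completion time: point k's dependent part (t_dep) needs point
    k-1's result, but the independent part (t_indep) overlaps across
    processors. Points are assigned round-robin; t_comm models the
    boundary-data message between distinct processors."""
    free = [0.0] * n_procs   # when each processor next becomes idle
    chain = 0.0              # when the latest wavelength result is known
    for k in range(n_points):
        p = k % n_procs
        indep_done = free[p] + t_indep
        comm = t_comm if n_procs > 1 else 0.0
        start_dep = max(indep_done, chain + comm)
        free[p] = chain = start_dep + t_dep
    return chain

serial = pipeline_makespan(1000, 1)   # one processor does everything
piped = pipeline_makespan(1000, 8)    # eight-stage wavelength pipeline
```

In steady state the pipeline's critical path advances by only t_dep + t_comm per wavelength point, with the independent work hidden behind it, which is the same overlap the "send as soon as it is known" design exploits; the memory-aggregation benefit of distributing wavelength points is extra, and not captured by this timing model.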