
    EEOC v. Von Maur, Inc.

    The PISCES 2 parallel programming environment

    PISCES 2 is a programming environment for scientific and engineering computations on MIMD parallel computers. It is currently implemented on a Flexible FLEX/32 at NASA Langley, a 20-processor machine with both shared and local memories. The environment provides an extended Fortran for applications programming, a configuration environment for setting up a run on the parallel machine, and a run-time environment for monitoring and controlling program execution. This paper describes the overall design of the system and its implementation on the FLEX/32. Emphasis is placed on several novel aspects of the design: the use of a carefully defined virtual machine, programmer control of the mapping of virtual machine to actual hardware, forces for medium-granularity parallelism, and windows for parallel distribution of data. Some preliminary measurements of storage use are included.
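
    The windows mentioned above hand each worker a bounded view of a large array rather than the whole array. The following is a minimal sketch of that idea in plain Python; PISCES windows are Fortran constructs whose syntax the abstract does not give, so this is an illustration, not PISCES code:

        from multiprocessing import Pool

        def window_sum(window):
            # Each worker receives only its own window of the array.
            return sum(window)

        def windowed_total(data, n_windows):
            # Split the array into contiguous windows and farm them out.
            size = (len(data) + n_windows - 1) // n_windows
            windows = [data[i:i + size] for i in range(0, len(data), size)]
            with Pool(len(windows)) as pool:
                return sum(pool.map(window_sum, windows))

        if __name__ == "__main__":
            print(windowed_total(list(range(1000)), 4))  # 499500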

    Methods for design and evaluation of integrated hardware-software systems for concurrent computation

    Research activities and publications are briefly summarized. The major tasks reviewed are: (1) VAX implementation of the PISCES parallel programming environment; (2) Apollo workstation network implementation of the PISCES environment; (3) FLEX implementation of the PISCES environment; (4) sparse matrix iterative solver in PISCES Fortran; (5) image processing application of PISCES; and (6) a formal model of concurrent computation being developed.

    Some neglected axioms in fair division

    Conditions one might impose on fair allocation procedures are introduced. Nondiscrimination requires that agents share an item in proportion to their entitlements if they receive nothing else. The "price" procedures of Pratt (2007), including the Nash bargaining procedure, satisfy this. Other prominent efficient procedures do not. In two-agent problems, reducing the feasible set between the solution and one agent's maximum point increases the utility cost to that agent of providing any given utility gain to the other and is equivalent to decreasing the dispersion of the latter's values for the items he does not receive without changing their total. One-agent monotonicity requires that such a change should not hurt the first agent, limited monotonicity that the solution should not change. For prices, the former implies convexity in the smaller of the two valuations, the latter linearity. In either case, the price is at least their average and hence spiteful.
    Keywords: Fair division, efficient allocation, nondiscrimination axiom, monotonicity axioms, envy-free, spite, bargaining solutions.
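
    A plausible formalization of the nondiscrimination condition, in our own notation rather than the paper's: if agents i and j receive nothing except shares x_i and x_j of a single contested item, then

        \[
            \frac{x_i}{x_j} \;=\; \frac{e_i}{e_j},
        \]

    where e_i and e_j are the agents' entitlements. For equal entitlements this reduces to an equal split of the item.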

    Methods for design and evaluation of integrated hardware/software systems for concurrent computation

    Two testbed programming environments to support the evaluation of a large range of parallel architectures have been implemented under the program Parallel Implementation of Scientific Computing Environments (PISCES). The PISCES 1 environment was applied to two areas of aerospace interest: a sparse matrix iterative equation solver and a dynamic scene analysis system. Currently, the NICE/SPAR testbed system for structural analysis is being modified for parallel operation under PISCES 2; the PISCES 1 applications are also being adapted for PISCES 2. A new formal model of concurrent computation has been developed, based on the mathematical system known as H-graph semantics together with a timed Petri net model of the parallel aspects of a system.
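
    The abstract does not spell out the timed Petri net component, but the general mechanism is standard: transitions consume tokens from input places, deposit tokens in output places, and carry a firing delay. A toy Python sketch of that mechanism (ours, not the PISCES formal model itself):

        class TimedPetriNet:
            def __init__(self, marking):
                self.marking = dict(marking)   # place -> token count
                self.transitions = {}          # name -> (inputs, outputs, delay)
                self.time = 0.0

            def add_transition(self, name, inputs, outputs, delay):
                # inputs/outputs map each place to a required/produced token count.
                self.transitions[name] = (inputs, outputs, delay)

            def enabled(self, name):
                inputs, _, _ = self.transitions[name]
                return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

            def fire(self, name):
                inputs, outputs, delay = self.transitions[name]
                assert self.enabled(name), f"{name} is not enabled"
                for p, n in inputs.items():
                    self.marking[p] -= n
                for p, n in outputs.items():
                    self.marking[p] = self.marking.get(p, 0) + n
                self.time += delay   # toy clock: firings are serialized here

        # Two tasks must both finish before a join transition can fire.
        net = TimedPetriNet({"t1_done": 0, "t2_done": 0, "joined": 0})
        net.add_transition("task1", {}, {"t1_done": 1}, delay=2.0)
        net.add_transition("task2", {}, {"t2_done": 1}, delay=3.0)
        net.add_transition("join", {"t1_done": 1, "t2_done": 1}, {"joined": 1}, delay=0.5)
        for t in ("task1", "task2", "join"):
            net.fire(t)
        print(net.marking["joined"], net.time)  # 1 5.5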

    PISCES: An environment for parallel scientific computation

    The parallel implementation of scientific computing environment (PISCES) is a project to provide high-level programming environments for parallel MIMD computers. Pisces 1, the first of these environments, is a FORTRAN 77 based environment which runs under the UNIX operating system. Pisces 1 users program in Pisces FORTRAN, an extension of FORTRAN 77 for parallel processing. The major emphasis in the Pisces 1 design is in providing a carefully specified virtual machine that defines the run-time environment within which Pisces FORTRAN programs are executed. Each implementation then provides the same virtual machine, regardless of differences in the underlying architecture. The design is intended to be portable to a variety of architectures. Currently Pisces 1 is implemented on a network of Apollo workstations and on a DEC VAX uniprocessor via simulation of the task-level parallelism. An implementation for the Flexible Computer Corp. FLEX/32 is under construction. An introduction to the Pisces 1 virtual computer and the FORTRAN 77 extensions is presented. An example of an algorithm for the iterative solution of a system of equations is given. The most notable features of the design are the provision for several granularities of parallelism in programs and the provision of a window mechanism for distributed access to large arrays of data.
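
    The iterative equation-solving example mentioned above lends itself to several granularities of parallelism because each row update is independent. A small sketch of a Jacobi iteration in plain Python with NumPy (illustrative only; the paper's example is written in Pisces FORTRAN):

        import numpy as np

        def jacobi(A, b, tol=1e-10, max_iter=10_000):
            # Classic Jacobi iteration: x_new = (b - R @ x) / D.
            x = np.zeros_like(b, dtype=float)
            D = np.diag(A)              # diagonal entries
            R = A - np.diagflat(D)      # off-diagonal remainder
            for _ in range(max_iter):
                x_new = (b - R @ x) / D # each row's update is independent, so
                                        # rows could be split across tasks/windows
                if np.linalg.norm(x_new - x, ord=np.inf) < tol:
                    return x_new
                x = x_new
            return x

        A = np.array([[4.0, 1.0], [2.0, 5.0]])
        b = np.array([1.0, 2.0])
        print(jacobi(A, b))  # approx [0.1667 0.3333]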

    XMM-Newton observations of three poor clusters: Similarity in dark matter and entropy profiles down to low mass

    (Abridged) We present an analysis of the mass and entropy profiles of three poor clusters (A1991, A2717 and MKW9) observed with XMM-Newton. The clusters have similar temperatures (kT = 2.65, 2.53 and 2.58 keV) and similar redshifts (0.04 < z < 0.06). We trace the surface brightness, temperature, entropy and integrated mass profiles up to 0.5 (0.35 for MKW9) of the virial radius (r_200). The integrated mass profiles are very similar in physical units and are reasonably well fitted with the NFW mass model, with concentration parameters of c_200 = 4-6 and M_200 = 1.2-1.6 x 10^14 h_70^-1 M_sun. The entropy profiles are similar at large scale, but there is some scatter in the central region (r < 50 kpc). None of the clusters has an isentropic core. Including XMM data on A1983 (kT = 2.2 keV) and A1413 (kT = 6.5 keV), we discuss the structural and scaling properties of cluster mass and entropy profiles. The scaled mass profiles display <20% dispersion in the 0.05-0.5 r_200 radial range. The c_200 parameters of these clusters, and other values from the literature, are fully consistent with the c_200 - M_200 relation derived from simulations. The dispersion in scaled entropy profiles is small, implying self-similarity down to low mass (kT ~ 2 keV), and is reduced by 30-40% (to ~20%) if we use the empirical relation S \propto T^0.65 instead of the standard self-similar relation, S \propto T. The mean scaled profile is well fitted by a power law for 0.05 < r/r_200 < 0.5, with a slope slightly lower than expected from pure shock heating (\alpha = 0.94 +/- 0.14) and a normalisation at 0.1 r_200 consistent with previous studies. The gas history thus likely depends both on gravitational processes and the interplay between cooling and various galaxy feedback mechanisms.
    Comment: Final refereed version to appear in A&A. Minor changes. 15 pages, 12 figures (Figs 1 & 3 low res).
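
    For context (this standard form is not quoted from the abstract), the NFW mass model used in such fits gives the integrated mass within radius r as

        \[
            M(r) = 4\pi \rho_s r_s^3 \left[ \ln\!\left(1 + \frac{r}{r_s}\right) - \frac{r/r_s}{1 + r/r_s} \right],
            \qquad c_{200} = \frac{r_{200}}{r_s},
        \]

    where \rho_s and r_s are the profile's characteristic density and scale radius.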

    A model for the distributed storage and processing of large arrays

    A conceptual model for parallel computations on large arrays is developed. The model provides a set of language concepts appropriate for processing arrays which are generally too large to fit in the primary memories of a multiprocessor system. The semantic model is used to represent arrays on a concurrent architecture in such a way that the performance realities inherent in the distributed storage and processing can be adequately represented. An implementation of the large array concept as an Ada package is also described.
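
    A minimal Python sketch of the core idea, an array whose blocks live in separate per-processor stores and whose element accesses are routed to the owning block (illustrative only; the paper's implementation is an Ada package):

        class DistributedArray:
            def __init__(self, length, n_nodes):
                self.length = length
                self.block = (length + n_nodes - 1) // n_nodes
                # One local store per "processor"; a real system would keep
                # these in separate memories, not one Python list of lists.
                self.nodes = [[0.0] * min(self.block, length - i * self.block)
                              for i in range(n_nodes)]

            def _locate(self, i):
                # Map a global index to (owning node, local offset).
                if not 0 <= i < self.length:
                    raise IndexError(i)
                return i // self.block, i % self.block

            def __getitem__(self, i):
                node, offset = self._locate(i)
                return self.nodes[node][offset]

            def __setitem__(self, i, value):
                node, offset = self._locate(i)
                self.nodes[node][offset] = value

        a = DistributedArray(10, n_nodes=3)
        a[7] = 3.14
        print(a[7], a._locate(7))  # 3.14 (1, 3)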