
    Dynamic remapping of parallel computations with varying resource demands

    A large class of computational problems is characterized by frequent synchronization and computational requirements that change as a function of time. When such a problem must be solved on a message-passing multiprocessor machine, the combination of these characteristics leads to system performance that decreases over time. Performance can be improved by periodic redistribution of the computational load; however, redistribution can exact a sometimes large delay cost. We study the issue of deciding when to invoke a global load remapping mechanism. Such a decision policy must effectively weigh the costs of remapping against the performance benefits. We treat this problem by constructing two analytic models which exhibit stochastically decreasing performance. One model is quite tractable; we are able to describe the optimal remapping algorithm and the optimal decision policy governing when to invoke that algorithm. However, computational complexity prohibits the use of the optimal remapping decision policy. We then study the performance of a general remapping policy on both analytic models. This policy attempts to minimize a statistic W(n) which measures the system degradation (including the cost of remapping) per computation step over a period of n steps. We show that, as a function of time, the expected value of W(n) has at most one minimum, and that when this minimum exists it defines the optimal fixed-interval remapping policy. Our decision policy appeals to this result by remapping when it estimates that W(n) is minimized. Our performance data suggest that this policy effectively finds the natural frequency of remapping. We also use the analytic models to express the relationship between performance and remapping cost, number of processors, and the computation's stochastic activity.
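
    The decision rule described above can be sketched in a few lines. The sketch below is illustrative only: the drift model, the per-step cost, and the names (run_with_remapping_policy, degradation_rate, remap_cost) are assumptions, not the paper's actual models. The policy tracks W(n) = (accumulated degradation + remapping cost) / n and remaps once its estimate of W(n) stops decreasing.

        import random

        def run_with_remapping_policy(steps, remap_cost, degradation_rate=0.05, seed=0):
            """Simulate a computation whose per-step cost drifts upward until a remap
            resets it, and remap when the running statistic W(n) appears to have
            passed its minimum.  All quantities here are illustrative stand-ins."""
            rng = random.Random(seed)
            base_cost = 1.0          # cost of one step immediately after a (re)mapping
            drift = 0.0              # accumulated load imbalance since the last remap
            total_cost = 0.0
            accumulated, n, prev_w = 0.0, 0, float("inf")
            for _ in range(steps):
                step_cost = base_cost + drift
                total_cost += step_cost
                accumulated += step_cost - base_cost   # degradation vs. a fresh mapping
                n += 1
                w = (accumulated + remap_cost) / n     # estimate of W(n)
                if w > prev_w:                         # W(n) has passed its minimum: remap
                    total_cost += remap_cost
                    drift, accumulated, n, prev_w = 0.0, 0.0, 0, float("inf")
                else:
                    prev_w = w
                    drift += rng.uniform(0.0, 2.0 * degradation_rate)  # stochastic decay
            return total_cost

        if __name__ == "__main__":
            print(run_with_remapping_policy(steps=1000, remap_cost=10.0))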

    Statistical methodologies for the control of dynamic remapping

    Following an initial mapping of a problem onto a multiprocessor machine or computer network, system performance often deteriorates with time. In order to maintain high performance, it may be necessary to remap the problem. The decision to remap must take into account measurements of performance deterioration, the cost of remapping, and the estimated benefits achieved by remapping. We examine the tradeoff between the costs and the benefits of remapping two qualitatively different kinds of problems: one assumes that performance deteriorates gradually, the other that it deteriorates suddenly. We consider a variety of policies for governing when to remap. In order to evaluate these policies, statistical models of problem behaviors are developed. Simulation results are presented which compare simple policies with computationally expensive optimal decision policies; these results demonstrate that for each problem type, the proposed simple policies are effective and robust.
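
    Two of the simplest policy families, a fixed-interval rule and a deterioration-threshold rule, can be compared on toy versions of the gradual and sudden deterioration models. Everything in the sketch below (the deterioration models, parameter values, and names) is an illustrative assumption rather than the paper's statistical models.

        import random

        def simulate(policy, deterioration, steps=2000, remap_cost=25.0, seed=1):
            """Total cost of `steps` steps under a remapping `policy`.
            `deterioration` updates the extra per-step cost since the last remap."""
            rng = random.Random(seed)
            extra, since_remap, total = 0.0, 0, 0.0
            for _ in range(steps):
                total += 1.0 + extra
                since_remap += 1
                if policy(extra, since_remap):
                    total += remap_cost
                    extra, since_remap = 0.0, 0
                else:
                    extra = deterioration(extra, since_remap, rng)
            return total

        # Gradual deterioration: imbalance creeps up a little every step.
        gradual = lambda extra, t, rng: extra + rng.uniform(0.0, 0.05)
        # Sudden deterioration: imbalance jumps occasionally, otherwise stays put.
        sudden = lambda extra, t, rng: extra + (3.0 if rng.random() < 0.02 else 0.0)

        fixed_interval = lambda extra, t: t >= 40       # remap every 40 steps
        threshold      = lambda extra, t: extra > 1.5   # remap when imbalance is large

        for name, det in [("gradual", gradual), ("sudden", sudden)]:
            print(name,
                  "fixed:", round(simulate(fixed_interval, det)),
                  "threshold:", round(simulate(threshold, det)))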

    Principles for problem aggregation and assignment in medium scale multiprocessors

    One of the most important issues in parallel processing is the mapping of workload to processors. This paper considers a large class of problems having a high degree of potential fine-grained parallelism, and execution requirements that are either not predictable or are too costly to predict. The main issues in mapping such a problem onto medium-scale multiprocessors are those of aggregation and assignment. We study a method of parameterized aggregation that makes few assumptions about the workload. The mapping of aggregate units of work onto processors is uniform, and exploits locality of workload intensity to balance the unknown workload. In general, a finer aggregate granularity leads to a better balance at the price of increased communication/synchronization costs; the aggregation parameters can be adjusted to find a reasonable granularity. The effectiveness of this scheme is demonstrated on three model problems: an adaptive one-dimensional fluid dynamics problem with message passing, a sparse triangular linear system solver on both a shared-memory and a message-passing machine, and a two-dimensional time-driven battlefield simulation employing message passing. The model problems are used to study the tradeoffs between workload balance and communication/synchronization costs. Finally, an analytical model is used to explain why the method balances workload and minimizes the variance in system behavior.
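
    The granularity tradeoff can be illustrated on a one-dimensional domain: split it into equal-sized pieces (the number of pieces per processor being the aggregation parameter) and deal the pieces out cyclically. The workload model, the communication proxy, and the names below are illustrative assumptions, not the paper's scheme.

        import random

        def aggregate_and_assign(workload, pieces_per_proc, num_procs):
            """Split a 1-D domain into num_procs * pieces_per_proc equal-sized pieces,
            assign them cyclically, and return (load imbalance, boundary count)."""
            n_pieces = num_procs * pieces_per_proc
            size = len(workload) // n_pieces
            piece_loads = [sum(workload[i*size:(i+1)*size]) for i in range(n_pieces)]
            proc_loads = [0.0] * num_procs
            for i, load in enumerate(piece_loads):
                proc_loads[i % num_procs] += load      # uniform (cyclic) assignment
            imbalance = max(proc_loads) / (sum(proc_loads) / num_procs)
            boundaries = n_pieces - 1                  # crude proxy for communication cost
            return imbalance, boundaries

        rng = random.Random(0)
        # A workload whose intensity varies smoothly across the domain.
        work = [1.0 + i / 4096 + rng.random() for i in range(4096)]
        for pieces in (1, 2, 4, 8, 16):
            imb, comm = aggregate_and_assign(work, pieces, num_procs=16)
            print(f"pieces/proc={pieces:2d}  imbalance={imb:.3f}  boundaries={comm}")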

    Optimal pre-scheduling of problem remappings

    A large class of scientific computational problems can be characterized as a sequence of steps where a significant amount of computation occurs at each step, but the work performed at each step is not necessarily identical. Two good examples of this type of computation are: (1) regridding methods which change the problem discretization during the course of the computation, and (2) methods for solving sparse triangular systems of linear equations. Recent work has investigated a means of mapping such computations onto parallel processors; the method defines a family of static mappings with differing degrees of importance placed on the conflicting goals of good load balance and low communication/synchronization overhead. The performance tradeoffs are controllable by adjusting the parameters of the mapping method. To achieve good performance it may be necessary to dynamically change these parameters at run-time, but such changes can impose additional costs. If the computation's behavior can be determined prior to its execution, it is possible to construct an optimal parameter schedule using a low-order-polynomial-time dynamic programming algorithm. Since the latter can be expensive, we study the effect of a linear-time scheduling heuristic on one of the model problems and show it to be effective and nearly optimal.
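
    A generic dynamic program of the kind alluded to above, assuming per-step execution costs and remapping (parameter-switching) costs are known in advance, looks as follows. This is a sketch of the idea with hypothetical inputs, not the paper's algorithm.

        def optimal_parameter_schedule(step_cost, switch_cost):
            """step_cost[t][p]: cost of executing step t under parameter choice p.
            switch_cost[p][q]: cost of remapping from parameter p to q between steps.
            Returns (minimum total cost, parameter schedule); runs in O(T * P^2)."""
            T, P = len(step_cost), len(step_cost[0])
            best = list(step_cost[0])            # best[p]: min cost through step 0, ending in p
            choice = [[0] * P for _ in range(T)]
            for t in range(1, T):
                new_best = [0.0] * P
                for q in range(P):
                    prev, arg = min((best[p] + switch_cost[p][q], p) for p in range(P))
                    new_best[q] = prev + step_cost[t][q]
                    choice[t][q] = arg
                best = new_best
            # Recover the schedule by walking the choice table backwards.
            p = min(range(P), key=lambda q: best[q])
            total, schedule = best[p], [p]
            for t in range(T - 1, 0, -1):
                p = choice[t][p]
                schedule.append(p)
            return total, schedule[::-1]

        # Tiny example: 2 parameter settings, 4 steps, uniform switching cost of 1.
        costs = [[3, 1], [3, 1], [1, 3], [1, 3]]
        switch = [[0, 1], [1, 0]]
        print(optimal_parameter_schedule(costs, switch))   # -> (5, [1, 1, 0, 0])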

    An analysis of scatter decomposition

    A formal analysis of a powerful mapping technique known as scatter decomposition is presented. Scatter decomposition divides an irregular computational domain into a large number of equal-sized pieces and distributes them modularly among processors. A probabilistic model of workload in one dimension is used to formally explain why, and when, scatter decomposition works. The first result is that if correlation in workload is a convex function of distance, then scattering a more finely decomposed domain yields a lower average processor workload variance. The second result shows that if the workload process is stationary Gaussian and the correlation function decreases linearly in distance until becoming zero and then remains zero, scattering a more finely decomposed domain yields a lower expected maximum processor workload. Finally it is shown that if the correlation function decreases linearly across the entire domain, then among all mappings that assign an equal number of domain pieces to each processor, scatter decomposition minimizes the average processor workload variance. The dependence of these results on the assumption of decreasing correlation is illustrated with situations where a coarser granularity actually achieves better load balance.
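
    The variance argument behind the first result can be sketched in simplified form (a stationary workload model on a one-dimensional domain; an illustration of the idea, not the paper's exact derivation). If A_j is the union of pieces assigned to processor j and rho is the workload covariance function, the processor's load L_j satisfies

        \operatorname{Var}(L_j) \;=\; \int_{A_j}\!\int_{A_j} \rho\bigl(|x-y|\bigr)\,dx\,dy .

    A contiguous (blocked) assignment concentrates the pairs (x, y) at small separations, where a decreasing rho is largest; scattering the same total area across the domain, and scattering it more finely, shifts the pair separations toward larger values, which lowers Var(L_j) when rho is a decreasing convex function of distance.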

    Designed-in security for cyber-physical systems

    An expert from academia, one from a cyber-physical system (CPS) provider, and one from an end asset owner and user offer their different perspectives on the meaning and challenges of 'designed-in security.' The academic highlights foundational issues and talks about emerging technology that can help us design and implement secure software in CPSs. The vendor's view includes components of the academic view but emphasizes the secure system development process and the standards that the system must satisfy. The user issues a call to action and offers ideas that will ensure progress.

    High pressure cosmochemistry applied to major planetary interiors: Experimental studies

    The overall goal of this project is to determine properties of the H-He-C-N-O system, as represented by small molecules composed of these elements, that are needed to constrain theoretical models of the interiors of the major planets. Much of our work now concerns the H2O-NH3 system. This project is the first major effort to measure phase equilibria in binary fluid-solid systems in diamond anvil cells. Vibrational spectroscopy, direct visual observations, and X-ray crystallography of materials confined in externally heated cells are our primary experimental probes. We also are collaborating with the shockwave physics group at Lawrence Livermore Laboratory in studies of the equation of state of a synthetic Uranus fluid and the molecular composition of this and other H-C-N-O materials under planetary conditions.

    Specific heat across the superconducting dome in the cuprates

    The specific heat of the superconducting cuprates is calculated over the entire phase diagram. A d-wave BCS approach based on the large Fermi surface of Fermi-liquid and band-structure theory provides a good description of the overdoped region. At underdoping it is essential to include the emergence of a second energy scale, the pseudogap, and its associated Gutzwiller factor, which accounts for a reduction in the coherent piece of the electronic Green's function due to increased correlations as the Mott insulating state is approached. In agreement with experiment, we find that the slope of the linear-in-T dependence of the low-temperature specific heat rapidly increases above optimum doping while it is nearly constant below optimum. Our theoretical calculations also agree with recent data on Bi$_2$Sr$_{2-x}$La$_x$CuO$_{6+\delta}$, for which the normal state is accessed through the application of a large magnetic field. A quantum critical point is located at a doping slightly below optimum.
    Comment: submitted to PRB; 8 pages, 5 figures
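
    For orientation, the standard d-wave BCS thermodynamic expressions that such a calculation typically starts from are shown below (a generic sketch; the paper's treatment additionally includes the pseudogap energy scale and Gutzwiller factor, which are not shown here):

        S = -2 k_B \sum_{\mathbf{k}} \Bigl[ f(E_{\mathbf{k}}) \ln f(E_{\mathbf{k}})
              + \bigl(1 - f(E_{\mathbf{k}})\bigr) \ln\bigl(1 - f(E_{\mathbf{k}})\bigr) \Bigr],
        \qquad C = T\,\frac{\partial S}{\partial T},

        E_{\mathbf{k}} = \sqrt{\xi_{\mathbf{k}}^{2} + \Delta_{\mathbf{k}}^{2}},
        \qquad \Delta_{\mathbf{k}} = \frac{\Delta_0}{2}\bigl(\cos k_x - \cos k_y\bigr),

    where f is the Fermi function and \xi_{\mathbf{k}} the band dispersion measured from the chemical potential; with \Delta_{\mathbf{k}} = 0 the normal-state specific heat reduces to the linear-in-T form C = \gamma T whose slope is the quantity discussed above.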

    Polynomial loss of memory for maps of the interval with a neutral fixed point

    We give an example of a sequential dynamical system consisting of intermittent-type maps which exhibits loss of memory with a polynomial rate of decay. A uniform bound holds for the upper rate of memory loss. The maps may be chosen in any sequence, and the bound holds for all compositions.
    Comment: 16 pages
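
    A concrete example of an intermittent map with a neutral fixed point is the Liverani-Saussol-Vaienti family, given here for orientation; the paper's precise hypotheses and rate may differ:

        T_\alpha(x) =
        \begin{cases}
          x\bigl(1 + 2^{\alpha} x^{\alpha}\bigr), & 0 \le x < \tfrac{1}{2},\\
          2x - 1, & \tfrac{1}{2} \le x \le 1,
        \end{cases}
        \qquad 0 < \alpha < 1,

    which has a neutral fixed point at x = 0. Loss of memory for a composition T_{\alpha_n} \circ \cdots \circ T_{\alpha_1} is typically stated as a bound of the form

        \bigl\| \mathcal{L}_{\alpha_n} \cdots \mathcal{L}_{\alpha_1} (\varphi - \psi) \bigr\|_{L^1}
          \;\le\; C \, n^{\,1 - 1/\alpha^{*}},

    where \mathcal{L}_\alpha is the transfer operator of T_\alpha, \varphi and \psi are suitable densities, and \alpha^{*} bounds the exponents appearing in the sequence; the exponent shown is indicative only, and the exact rate and class of densities depend on the paper's assumptions.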