
    Dynamic remapping of parallel computations with varying resource demands

    A large class of computational problems is characterized by frequent synchronization and computational requirements which change as a function of time. When such a problem must be solved on a message-passing multiprocessor machine, the combination of these characteristics leads to system performance which decreases in time. Performance can be improved with periodic redistribution of computational load; however, redistribution can exact a sometimes large delay cost. We study the issue of deciding when to invoke a global load remapping mechanism. Such a decision policy must effectively weigh the costs of remapping against the performance benefits. We treat this problem by constructing two analytic models which exhibit stochastically decreasing performance. One model is quite tractable; we are able to describe the optimal remapping algorithm and the optimal decision policy governing when to invoke that algorithm. However, computational complexity prohibits the use of the optimal remapping decision policy. We then study the performance of a general remapping policy on both analytic models. This policy attempts to minimize a statistic W(n) which measures the system degradation (including the cost of remapping) per computation step over a period of n steps. We show that as a function of time, the expected value of W(n) has at most one minimum, and that when this minimum exists it defines the optimal fixed-interval remapping policy. Our decision policy appeals to this result by remapping when it estimates that W(n) is minimized. Our performance data suggests that this policy effectively finds the natural frequency of remapping. We also use the analytic models to express the relationship between performance and remapping cost, number of processors, and the computation's stochastic activity.
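
    A minimal sketch of this kind of W(n)-based decision rule, assuming a synthetic degradation model (linearly drifting load imbalance plus Gaussian noise) and a fixed remap cost; the numbers and the model are illustrative assumptions, not taken from the paper. The policy averages the degradation accumulated since the last remap, plus the remap cost, over the n steps since that remap, and remaps as soon as this estimate of W(n) stops decreasing.

```python
# Hypothetical sketch of a W(n)-style remapping policy on a synthetic workload.
# The degradation model (linear drift plus noise) and remap_cost are assumptions.
import random

def run(steps=200, remap_cost=20.0, drift=0.15, noise=0.5, seed=1):
    random.seed(seed)
    degradation_since_remap = []   # per-step degradation observed since last remap
    remap_times = []
    prev_w = float("inf")
    imbalance = 0.0
    for t in range(steps):
        imbalance += drift                      # performance decays over time
        step_cost = max(0.0, imbalance + random.gauss(0.0, noise))
        degradation_since_remap.append(step_cost)
        n = len(degradation_since_remap)
        # W(n): degradation per step over the n steps since the last remap,
        # charging the upcoming remap cost to this interval.
        w = (sum(degradation_since_remap) + remap_cost) / n
        if w > prev_w:                          # estimated minimum of W(n) just passed
            remap_times.append(t)
            degradation_since_remap.clear()
            imbalance = 0.0                     # remapping restores balance
            prev_w = float("inf")
        else:
            prev_w = w
    return remap_times

if __name__ == "__main__":
    print("remap steps:", run())
```

    With these parameters the estimated minimum of W(n) falls at roughly n = sqrt(2 * remap_cost / drift) steps, so the policy settles into a natural remapping frequency on its own.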

    Statistical methodologies for the control of dynamic remapping

    Following an initial mapping of a problem onto a multiprocessor machine or computer network, system performance often deteriorates with time. In order to maintain high performance, it may be necessary to remap the problem. The decision to remap must take into account measurements of performance deterioration, the cost of remapping, and the estimated benefits achieved by remapping. We examine the tradeoff between the costs and the benefits of remapping two qualitatively different kinds of problems. One problem assumes that performance deteriorates gradually; the other assumes that performance deteriorates suddenly. We consider a variety of policies for governing when to remap. In order to evaluate these policies, statistical models of problem behaviors are developed. Simulation results are presented which compare simple policies with computationally expensive optimal decision policies; these results demonstrate that for each problem type, the proposed simple policies are effective and robust.
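
    A toy simulation in the same spirit, not the paper's statistical models: two invented degradation behaviors (gradual drift versus a sudden jump at a random step) and two simple policies, a fixed-interval rule and a rule that remaps once the work lost to imbalance is comparable to the remap cost. All costs and parameters here are assumptions chosen only to show the kind of comparison the abstract describes.

```python
# Illustrative policy comparison on invented gradual-drift and sudden-jump workloads.
import random

def simulate(policy, sudden=False, steps=2000, remap_cost=20.0, seed=7):
    random.seed(seed)
    total, since_remap, penalty = 0.0, 0, 0.0
    jump_at = random.randrange(50, 150)
    for _ in range(steps):
        if sudden:
            if since_remap == jump_at:
                penalty += 5.0                      # performance drops all at once
        else:
            penalty += 0.1                          # performance erodes gradually
        total += max(0.0, penalty + random.gauss(0.0, 0.3))
        since_remap += 1
        if policy(since_remap, penalty, remap_cost):
            total += remap_cost
            since_remap, penalty = 0, 0.0
            jump_at = random.randrange(50, 150)
    return total / steps                            # average cost per step

fixed_interval = lambda n, pen, cost: n >= 20       # remap every 20 steps
threshold = lambda n, pen, cost: n * pen >= cost    # remap once lost work ~ remap cost

for name, pol in (("fixed-interval", fixed_interval), ("threshold", threshold)):
    print(f"{name:14s} gradual: {simulate(pol):.2f}   sudden: {simulate(pol, sudden=True):.2f}")
```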

    Principles for problem aggregation and assignment in medium scale multiprocessors

    One of the most important issues in parallel processing is the mapping of workload to processors. This paper considers a large class of problems having a high degree of potential fine-grained parallelism, and execution requirements that are either not predictable, or are too costly to predict. The main issues in mapping such a problem onto medium scale multiprocessors are those of aggregation and assignment. We study a method of parameterized aggregation that makes few assumptions about the workload. The mapping of aggregate units of work onto processors is uniform, and exploits locality of workload intensity to balance the unknown workload. In general, a finer aggregate granularity leads to a better balance at the price of increased communication/synchronization costs; the aggregation parameters can be adjusted to find a reasonable granularity. The effectiveness of this scheme is demonstrated on three model problems: an adaptive one-dimensional fluid dynamics problem with message passing, a sparse triangular linear system solver on both a shared-memory and a message-passing machine, and a two-dimensional time-driven battlefield simulation employing message passing. Using the model problems, the tradeoffs between balanced workload and communication/synchronization costs are studied. Finally, an analytical model is used to explain why the method balances workload and minimizes the variance in system behavior.
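
    A minimal sketch of parameterized aggregation in one dimension, under the assumption that the grain size g is the tunable parameter: fine-grained work items are grouped into contiguous blocks of g items (preserving locality) and the blocks are dealt out cyclically to P processors. The synthetic workload and the particular cyclic assignment are illustrative; communication/synchronization costs, which grow as g shrinks, are not modeled here.

```python
# Parameterized aggregation sketch: contiguous blocks of size g, dealt cyclically to P processors.
import random

def aggregate_and_assign(workload, g, P):
    """Group fine-grained items into contiguous blocks of g and assign blocks cyclically."""
    blocks = [workload[i:i + g] for i in range(0, len(workload), g)]
    loads = [0.0] * P
    assignment = []                       # block index -> processor
    for b, block in enumerate(blocks):
        p = b % P                         # uniform, locality-preserving assignment
        loads[p] += sum(block)
        assignment.append(p)
    return assignment, loads

if __name__ == "__main__":
    random.seed(0)
    # Smoothly varying (spatially correlated) workload over 4096 fine-grained items.
    work = [1.0 + 3.0 * i / 4096.0 + random.random() for i in range(4096)]
    for g in (512, 64, 8):                # coarser -> finer granularity
        _, loads = aggregate_and_assign(work, g, P=8)
        avg = sum(loads) / 8
        print(f"g={g:4d}  max/avg load = {max(loads) / avg:.3f}")
```

    Running the sketch shows the tradeoff the abstract describes: finer granularity drives the max/average load ratio toward 1, at the (unmodeled) price of more inter-block communication and synchronization.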

    Optimal pre-scheduling of problem remappings

    A large class of scientific computational problems can be characterized as a sequence of steps where a significant amount of computation occurs at each step, but the work performed at each step is not necessarily identical. Two good examples of this type of computation are: (1) regridding methods which change the problem discretization during the course of the computation, and (2) methods for solving sparse triangular systems of linear equations. Recent work has investigated a means of mapping such computations onto parallel processors; the method defines a family of static mappings with differing degrees of importance placed on the conflicting goals of good load balance and low communication/synchronization overhead. The performance tradeoffs are controllable by adjusting the parameters of the mapping method. To achieve good performance it may be necessary to dynamically change these parameters at run-time, but such changes can impose additional costs. If the computation's behavior can be determined prior to its execution, it is possible to construct an optimal parameter schedule using a low-order-polynomial-time dynamic programming algorithm. Since the latter can be expensive, the performance of a linear-time scheduling heuristic is studied on one of the model problems, and it is shown to be effective and nearly optimal.
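
    A hedged sketch of the dynamic-programming idea, under the assumption that the per-step execution cost of each candidate parameter setting is known before execution and that changing the parameter between steps incurs a fixed remapping cost; the cost table and switch cost below are invented for illustration. The recurrence is O(T*K^2) for T steps and K parameter choices, i.e. low-order polynomial time.

```python
# DP sketch: choose one mapping parameter per step, minimizing execution plus switching cost.
def optimal_schedule(step_cost, switch_cost):
    """step_cost[t][k]: cost of executing step t under mapping parameter k.
    Returns (schedule, total_cost)."""
    T, K = len(step_cost), len(step_cost[0])
    best = list(step_cost[0])          # best[k]: cheapest way to finish step 0 with parameter k
    back = [[0] * K]                   # back[t][k]: parameter used at step t-1 on the best path
    for t in range(1, T):
        new_best, prev = [0.0] * K, [0] * K
        for k in range(K):
            cands = [best[j] + (0.0 if j == k else switch_cost) for j in range(K)]
            j = min(range(K), key=cands.__getitem__)
            new_best[k] = cands[j] + step_cost[t][k]
            prev[k] = j
        best = new_best
        back.append(prev)
    k = min(range(K), key=best.__getitem__)
    schedule = [k]
    for t in range(T - 1, 0, -1):      # follow back-pointers to recover the schedule
        k = back[t][k]
        schedule.append(k)
    return schedule[::-1], min(best)

if __name__ == "__main__":
    # Parameter 0 is cheap for the first two steps, parameter 1 for the last three;
    # with a switch cost of 3 the optimal schedule switches exactly once.
    costs = [[4, 9], [4, 9], [9, 4], [9, 4], [9, 4]]
    print(optimal_schedule(costs, switch_cost=3))   # -> ([0, 0, 1, 1, 1], 23)
```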

    An analysis of scatter decomposition

    A formal analysis of a powerful mapping technique known as scatter decomposition is presented. Scatter decomposition divides an irregular computational domain into a large number of equal-sized pieces, and distributes them modularly among processors. A probabilistic model of workload in one dimension is used to formally explain why, and when, scatter decomposition works. The first result is that if correlation in workload is a convex function of distance, then scattering a more finely decomposed domain yields a lower average processor workload variance. The second result shows that if the workload process is stationary Gaussian and the correlation function decreases linearly in distance until becoming zero and then remains zero, scattering a more finely decomposed domain yields a lower expected maximum processor workload. Finally, it is shown that if the correlation function decreases linearly across the entire domain, then among all mappings that assign an equal number of domain pieces to each processor, scatter decomposition minimizes the average processor workload variance. The dependence of these results on the assumption of decreasing correlation is illustrated with situations where a coarser granularity actually achieves better load balance.
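
    An illustrative one-dimensional experiment, not the paper's analysis: a smoothed random walk stands in for a spatially correlated workload, piece i is assigned modularly to processor i mod P, and the processor load variance is compared as the decomposition is refined. All model choices here are assumptions.

```python
# 1-D scatter decomposition on a synthetic, spatially correlated workload.
import random

def processor_loads(cell_work, pieces, P):
    n = len(cell_work) // pieces
    loads = [0.0] * P
    for i in range(pieces):
        loads[i % P] += sum(cell_work[i * n:(i + 1) * n])   # modular (scatter) assignment
    return loads

if __name__ == "__main__":
    random.seed(3)
    work, w = [], 10.0
    for _ in range(4096):                 # positively correlated workload along the domain
        w = max(0.0, w + random.gauss(0.0, 0.4))
        work.append(w)
    P = 8
    for pieces in (8, 64, 512):           # one piece per processor -> many pieces per processor
        loads = processor_loads(work, pieces, P)
        mean = sum(loads) / P
        var = sum((x - mean) ** 2 for x in loads) / P
        print(f"pieces={pieces:4d}  max={max(loads):9.1f}  variance={var:12.1f}")
```

    Because nearby cells have similar workload, a finer decomposition spreads each correlated region across all processors, which is why the variance and the maximum load drop as the number of pieces grows.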

    High pressure cosmochemistry applied to major planetary interiors: Experimental studies

    The overall goal of this project is to determine properties of the H-He-C-N-O system, as represented by small molecules composed of these elements, that are needed to constrain theoretical models of the interiors of the major planets. Much of our work now concerns the H2O-NH3 system. This project is the first major effort to measure phase equilibria in binary fluid-solid systems in diamond anvil cells. Vibrational spectroscopy, direct visual observations, and X-ray crystallography of materials confined in externally heated cells are our primary experimental probes. We also are collaborating with the shockwave physics group at Lawrence Livermore Laboratory in studies of the equation of state of a synthetic Uranus fluid and molecular composition of this and other H-C-N-O materials under planetary conditions.

    Polynomial loss of memory for maps of the interval with a neutral fixed point

    We give an example of a sequential dynamical system consisting of intermittent-type maps which exhibits loss of memory with a polynomial rate of decay. A uniform bound holds for the upper rate of memory loss. The maps may be chosen in any sequence, and the bound holds for all compositions.
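
    A numerical illustration of the loss-of-memory phenomenon (not taken from the paper): two different initial ensembles are pushed through the same sequence of intermittent maps of Liverani-Saussol-Vaienti type, and the gap between their averages of a test observable shrinks as the composition grows. The parameter range, ensemble sizes, and observable are all assumptions made for the sketch.

```python
# Push two initial ensembles through one sequence of intermittent (LSV) maps and
# watch the difference of their observable averages decay.
import random

def lsv(x, alpha):
    """Liverani-Saussol-Vaienti map with neutral fixed point at 0."""
    return x * (1.0 + (2.0 * x) ** alpha) if x <= 0.5 else 2.0 * x - 1.0

if __name__ == "__main__":
    random.seed(11)
    N = 20_000
    ens1 = [random.uniform(0.0, 1.0) for _ in range(N)]        # density ~ uniform
    ens2 = [random.betavariate(2.0, 5.0) for _ in range(N)]    # a different smooth density
    for n in range(1, 201):
        alpha = random.uniform(0.1, 0.4)                       # maps chosen in any sequence
        ens1 = [lsv(x, alpha) for x in ens1]
        ens2 = [lsv(x, alpha) for x in ens2]
        if n % 50 == 0:
            gap = abs(sum(ens1) / N - sum(ens2) / N)           # observable phi(x) = x
            print(f"n={n:3d}  |E1[phi] - E2[phi]| = {gap:.4f}")
```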

    Considering the impact of situation-specific motivations and constraints in the design of naturally ventilated and hybrid buildings

    A simple logical model of the interaction between a building and its occupants is presented, based on the principle that, if free to do so, people will adjust their posture, clothing or available building controls (windows, blinds, doors, fans, and thermostats) with the aim of achieving or restoring comfort and reducing discomfort. These adjustments are related to building design in two ways: first, the freedom to adjust depends on the availability and ease-of-use of control options; second, the use of controls affects building comfort and energy performance. Hence it is essential that these interactions are considered in the design process. The model captures occupant use of controls in response to thermal stimuli (too warm, too cold, etc.) and non-thermal stimuli (e.g. desire for fresh air). The situation-specific motivations and constraints on control use are represented through trigger temperatures at which control actions occur; motivations are included as negative constraints and incorporated into a single constraint value describing the specifics of each situation. The values of constraints are quantified for a range of existing buildings in Europe and Pakistan. The integration of the model within a design flow is proposed and the impact of different levels of constraints demonstrated. It is proposed that, to minimise energy use and maximise comfort in naturally ventilated and hybrid buildings, the designer should take the following steps: (1) provide unconstrained low-energy adaptive control options where possible; (2) avoid indoor air quality problems which provide motivations for excessive ventilation rates; (3) incorporate situation-specific adaptive behaviour of occupants in design simulations; (4) analyse the robustness of designs against variations in patterns of use and climate; and (5) incorporate appropriate comfort standards into the operational building controls (e.g. BEMS).
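
    A toy rendering of the trigger-temperature idea, assuming a single control action (opening a window) whose trigger is shifted by one situation-specific constraint value; the comfort temperature, base offset, and constraint values are illustrative and not drawn from the surveyed buildings.

```python
# Trigger-temperature sketch: an action fires when indoor temperature crosses a trigger
# shifted by a single constraint value (motivations enter as negative constraints).

def window_open(t_indoor, t_comfort=25.0, base_offset=2.0, constraint=0.0, free_to_act=True):
    """Return True if the occupant would open the window.

    constraint > 0: the action is harder (noise, security, hard-to-reach window),
                    so the trigger temperature rises.
    constraint < 0: a non-thermal motivation (e.g. desire for fresh air) lowers the trigger.
    """
    if not free_to_act:
        return False
    trigger = t_comfort + base_offset + constraint
    return t_indoor > trigger

if __name__ == "__main__":
    for c in (-2.0, 0.0, 3.0):            # strong motivation, unconstrained, constrained
        opens_at = next(t / 10.0 for t in range(150, 400) if window_open(t / 10.0, constraint=c))
        print(f"constraint={c:+.1f}  window opens at {opens_at:.1f} C")
```

    In a design-simulation loop, the constraint value would be the quantity tuned per building and per control option, which is exactly where the different levels of constraint in the abstract would enter.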

    Theory of the critical current in two-band superconductors with application to MgB2

    Using a Green's function formulation of the superfluid current j_s, where a momentum q_s is applied to the Cooper pair, we have calculated j_s as a function of q_s, temperature, and impurity scattering for a two-band superconductor. We consider both renormalized BCS and full strong-coupling Eliashberg theory. There are two peaks in the current as a function of q_s due to the two energy scales for the gaps, and this can give rise to non-standard behavior for the critical current. The critical current j_c, which is given as the maximum in j_s, can exhibit a kink as a function of temperature as the maximum is transferred from one peak to the other. Other temperature variations are also possible and the universal BCS behavior is violated. The details depend on the material parameters of the system, such as the amount of coupling between the bands, the gap anisotropy, the Fermi velocities, and the density of states of each band. The Ginzburg-Landau relation between j_c, the penetration depth lambda_L, and the thermodynamic critical field H_c is modified. Using Eliashberg theory with the electron-phonon spectral densities given from bandstructure calculations, we have applied our calculations for j_s and j_c to the case of MgB2 and find agreement with experiment.
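
    For background, the standard single-band Ginzburg-Landau (depairing) relation that the abstract reports as modified in the two-band case has the form below (Gaussian units); this is the textbook expression, quoted here for reference rather than taken from the paper.

```latex
% Standard single-band Ginzburg-Landau depairing-current relation (Gaussian units),
% relating j_c to the thermodynamic critical field and the penetration depth;
% the two-band calculation in the paper finds deviations from this form.
\[
  j_c(T) \;=\; \frac{c\, H_c(T)}{3\sqrt{6}\,\pi\,\lambda_L(T)}
\]
```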