
    Dynamic remapping of parallel computations with varying resource demands

    A large class of computational problems is characterized by frequent synchronization and computational requirements that change as a function of time. When such a problem must be solved on a message-passing multiprocessor machine, the combination of these characteristics leads to system performance that decreases over time. Performance can be improved by periodically redistributing the computational load; however, redistribution can exact a sometimes large delay cost. We study the issue of deciding when to invoke a global load remapping mechanism. Such a decision policy must effectively weigh the costs of remapping against the performance benefits. We treat this problem by constructing two analytic models that exhibit stochastically decreasing performance. One model is quite tractable; we are able to describe the optimal remapping algorithm and the optimal decision policy governing when to invoke that algorithm. However, computational complexity prohibits the use of the optimal remapping decision policy. We then study the performance of a general remapping policy on both analytic models. This policy attempts to minimize a statistic W(n) which measures the system degradation (including the cost of remapping) per computation step over a period of n steps. We show that, as a function of time, the expected value of W(n) has at most one minimum, and that when this minimum exists it defines the optimal fixed-interval remapping policy. Our decision policy appeals to this result by remapping when it estimates that W(n) is minimized. Our performance data suggest that this policy effectively finds the natural frequency of remapping. We also use the analytic models to express the relationship between performance and remapping cost, number of processors, and the computation's stochastic activity.
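
    A minimal sketch of the W(n)-based decision rule described above, assuming a fixed remapping cost and a hypothetical per-step degradation function (both stand-ins for the paper's model quantities). Because the expected value of W(n) has at most one minimum, the policy can remap at the first step where the running estimate of W(n) turns upward:

```python
# Sketch of the W(n) remapping policy. 'degradation' and 'remap_cost'
# are illustrative stand-ins for the paper's model quantities.

def w_statistic(cumulative_degradation: float, remap_cost: float, n: int) -> float:
    """W(n): degradation per step over n steps, charging the remap cost."""
    return (cumulative_degradation + remap_cost) / n

def run_with_remapping(degradation, remap_cost: float, total_steps: int):
    """Remap at the first step where the running estimate of W(n) rises.

    Since E[W(n)] has at most one minimum, the first upturn marks the
    estimated optimal fixed remapping interval.
    """
    remap_steps, cumulative, prev_w, n = [], 0.0, float("inf"), 0
    for t in range(total_steps):
        n += 1
        cumulative += degradation(n)      # degradation grows with steps since remap
        w = w_statistic(cumulative, remap_cost, n)
        if w > prev_w:                    # W(n) has passed its minimum: remap now
            remap_steps.append(t)
            cumulative, prev_w, n = 0.0, float("inf"), 0
        else:
            prev_w = w
    return remap_steps

# Example: linearly increasing degradation settles into a stable interval
# (analytically, the minimum of W(n) falls at n = 10 for these numbers).
print(run_with_remapping(lambda n: 0.1 * n, remap_cost=5.0, total_steps=100))
```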

    Statistical methodologies for the control of dynamic remapping

    Following an initial mapping of a problem onto a multiprocessor machine or computer network, system performance often deteriorates with time. In order to maintain high performance, it may be necessary to remap the problem. The decision to remap must take into account measurements of performance deterioration, the cost of remapping, and the estimated benefits achieved by remapping. We examine the tradeoff between the costs and the benefits of remapping for two qualitatively different kinds of problems: one assumes that performance deteriorates gradually, the other that it deteriorates suddenly. We consider a variety of policies for governing when to remap. In order to evaluate these policies, statistical models of problem behaviors are developed. Simulation results are presented which compare simple policies with computationally expensive optimal decision policies; these results demonstrate that for each problem type, the proposed simple policies are effective and robust.
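
    As an illustration of the kind of simple policy evaluated here, the sketch below simulates a hypothetical threshold rule (remap when the per-step time exceeds a multiple of the post-remap baseline) against illustrative gradual and sudden deterioration models; all parameters are invented for the example, not taken from the paper:

```python
# Sketch of a simple threshold remapping policy evaluated on two
# hypothetical deterioration models (gradual drift vs sudden drop).
import random

def simulate(policy_threshold, deterioration, remap_cost, steps, seed=0):
    """Total cost = sum of per-step times plus remap costs, under the
    rule: remap when step time exceeds threshold * baseline."""
    rng = random.Random(seed)
    baseline, step_time, total = 1.0, 1.0, 0.0
    for _ in range(steps):
        step_time = deterioration(step_time, rng)
        total += step_time
        if step_time > policy_threshold * baseline:
            total += remap_cost           # pay the remapping delay cost
            step_time = baseline          # remapping restores performance
    return total

def gradual(t, rng):                      # slow upward drift in step time
    return t + rng.uniform(0.0, 0.02)

def sudden(t, rng):                       # rare abrupt jumps in step time
    return 3.0 if rng.random() < 0.05 else t

for name, model in [("gradual", gradual), ("sudden", sudden)]:
    print(name, simulate(policy_threshold=1.5, deterioration=model,
                         remap_cost=10.0, steps=1000))
```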

    Implementation of a parallel unstructured Euler solver on shared and distributed memory architectures

    An efficient three-dimensional unstructured Euler solver is parallelized on a Cray Y-MP C90 shared memory computer and on an Intel Touchstone Delta distributed memory computer. This paper relates the experiences gained and describes the software tools and hardware used in this study. Performance comparisons between the two differing architectures are made.

    Deferred Data-Flow Analysis: Algorithms, Proofs and Applications

    Loss of precision due to the conservative nature of compile-time dataflow analysis is a general problem that impacts a wide variety of optimizations. We propose a limited form of runtime dataflow analysis, called deferred dataflow analysis (DDFA), which attempts to sharpen dataflow results by using control-flow information that is available at runtime. The overheads of runtime analysis are minimized by performing the bulk of the analysis at compile time and deferring only a summarized version of the dataflow problem to runtime. Caching and reusing dataflow results reduces these overheads further. DDFA is an interprocedural framework and can handle arbitrary control structures, including multi-way forks, recursion, separately compiled functions, and higher-order functions. It is primarily targeted towards optimization of heavy-weight operations such as communication calls, where one can expect significant benefits from sharper dataflow analysis. We outline how DDFA can be used to optimize different kinds of heavy-weight operations, such as bulk prefetching on distributed systems and dynamic linking in mobile programs. We prove that DDFA is safe and that it yields better dataflow information than strictly compile-time dataflow analysis. (Also cross-referenced as UMIACS-TR-98-46.)
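
    A minimal sketch of the compile-time/runtime split in DDFA, assuming a simple gen/kill bit-set lattice; the names and summary encoding are illustrative, not the paper's actual interface. Compile time produces one summarized transfer function per runtime-resolvable branch outcome, and the runtime phase applies and caches them along the control path actually taken:

```python
# Sketch of deferred dataflow analysis (DDFA): compile time reduces the
# full analysis to per-branch summaries; runtime applies them along the
# observed control path and caches the sharpened results.
from functools import lru_cache

# Compile-time phase (hypothetical encoding): one (gen, kill) frozenset
# pair per runtime-resolvable branch outcome, for a reaching-
# definitions-style problem.
SUMMARIES = {
    ("fork", 0): (frozenset({"x"}), frozenset({"y"})),  # if branch 0 taken
    ("fork", 1): (frozenset({"y"}), frozenset()),       # if branch 1 taken
}

@lru_cache(maxsize=None)                 # cache: reuse results per control path
def ddfa_lookup(path: tuple, facts_in: frozenset) -> frozenset:
    """Runtime phase: apply the precomputed summaries along the control
    path actually taken, sharpening the compile-time result."""
    facts = facts_in
    for point in path:
        gen, kill = SUMMARIES[point]
        facts = (facts - kill) | gen     # standard gen/kill transfer step
    return facts

# At a deferred program point the runtime knows which branch was taken:
print(ddfa_lookup((("fork", 1),), frozenset({"x"})))  # frozenset({'x', 'y'})
```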

    Efficacy of capecitabine compared with 5-fluorouracil in colon and gastric cancer: an updated meta-analysis of survival in six clinical trials

    The oral fluoropyrimidine capecitabine has been studied extensively in comparative trials against intravenously administered 5-fluorouracil, both as monotherapy and in combination regimens, in metastatic colorectal cancer (mCRC) and metastatic gastric cancer (mGC). At the recommendation of the European health authorities, a meta-analysis of the efficacy of capecitabine compared with 5-fluorouracil in mCRC and mGC was performed.

    Optimizing Execution of Component-based Applications using Group Instances

    Applications that query, analyze and manipulate very large data sets have become important consumers of resources. With the current trend toward collectively using heterogeneous collections of disparate machines (the Grid) for a single application, techniques developed for tightly coupled, homogeneous machines are no longer sufficient. Recent research on programming models for developing applications on the Grid has proposed component-based models as a viable approach, in which an application is composed of multiple interacting computational objects. We have been developing a framework, called filter-stream programming, for building data-intensive applications in a distributed environment. In this model, the processing structure of an application is represented as a set of processing units, referred to as filters. In earlier work, we studied the effects of filter placement across heterogeneous host machines on the performance of the application. In this paper, we address the problem of scheduling instances of a filter group running on the same set of hosts, where a filter group is a set of filters collectively performing a computation for an application. In particular, we seek the answer to the following question: should a new instance be created, or an existing one reused? We experimentally investigate the effects of instantiating multiple filter groups on performance under varying application characteristics. (Cross-referenced as UMIACS-TR-2001-06.)
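
    A minimal sketch of the create-vs-reuse decision for filter-group instances, assuming each instance exposes its queued work; the Instance class and saturation threshold are hypothetical, not the framework's actual API:

```python
# Sketch of scheduling work onto filter-group instances: reuse the
# least-loaded existing instance on the target hosts unless all are
# saturated; only then pay the cost of instantiating a new group.
from dataclasses import dataclass

@dataclass
class Instance:
    hosts: tuple
    queued_batches: int = 0

def schedule_batch(instances, hosts, max_queue=4):
    """Return the instance that will process the batch, creating one
    only when every existing instance on these hosts is saturated."""
    candidates = [i for i in instances if i.hosts == hosts]
    if candidates:
        best = min(candidates, key=lambda i: i.queued_batches)
        if best.queued_batches < max_queue:
            best.queued_batches += 1      # reuse: cheap pipeline parallelism
            return best
    new = Instance(hosts=hosts, queued_batches=1)  # create: more concurrency,
    instances.append(new)                          # but extra startup cost
    return new

# Usage: batches queue on one instance until it saturates, after which a
# second instance is created on the same hosts.
pool = []
for _ in range(6):
    schedule_batch(pool, hosts=("hostA", "hostB"))
print(len(pool), [i.queued_batches for i in pool])  # 2 [4, 2]
```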

    Association of progression-free survival with patient-reported outcomes and survival: results from a randomised phase 3 trial of panitumumab

    In a randomised phase 3 trial, panitumumab significantly improved progression-free survival (PFS) in patients with refractory metastatic colorectal cancer (mCRC). This analysis characterises the association of PFS with CRC symptoms, health-related quality of life (HRQoL), and overall survival (OS). CRC symptoms (NCCN/FACT CRC symptom index, FCSI) and HRQoL (EQ-5D) were assessed for 207 panitumumab patients and 184 best supportive care (BSC) patients who had at least one post-baseline patient-reported outcome (PRO) assessment. Patients alive at week 8 were included in the PRO and OS analyses and categorised by their week 8 progression status: no progressive disease (no PD; best response of at least stable disease) vs progressive disease (PD). Standard imputation methods were used to assign missing values. Significantly more patients were progression free at weeks 8–24 with panitumumab vs BSC. After excluding responders, a significant difference in PFS remained in favour of panitumumab (HR=0.63, 95% CI=0.52–0.77; P<0.0001). At week 8, lack of disease progression was associated with significantly and clinically meaningfully lower CRC symptomatology in both treatment groups, and with higher HRQoL in panitumumab patients only. Overall survival favoured the no-PD patients over the PD patients alive at week 8. Lack of disease progression was associated with better symptom control, HRQoL, and OS.