
    The Nonlinear Sigma Model With Distributed Adaptive Mesh Refinement

    An adaptive mesh refinement (AMR) scheme is implemented in a distributed environment using the Message Passing Interface (MPI) to find solutions to the nonlinear sigma model. Previous work studied behavior similar to black hole critical phenomena at the threshold for singularity formation in this flat-space model. This work is a follow-up describing extensions to distribute the grid hierarchy and presenting tests showing the correctness of the model. Comment: 6 pages, 5 figures.
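    The core decision step in any AMR scheme is flagging which cells need finer resolution. The sketch below is a minimal illustration of that idea on a 1D grid, using a simple gradient-based criterion; it is not the paper's actual refinement criterion, and the function names and threshold parameter are hypothetical.

```python
# Illustrative AMR sketch (not the paper's scheme): flag cells where the
# field gradient exceeds a threshold, then bisect the flagged cells.

def flag_for_refinement(field, dx, threshold):
    """Return indices of interior cells whose centered gradient exceeds
    `threshold`."""
    flags = []
    for i in range(1, len(field) - 1):
        grad = abs(field[i + 1] - field[i - 1]) / (2.0 * dx)
        if grad > threshold:
            flags.append(i)
    return flags

def refine(cells, flags):
    """Bisect each flagged cell (given as an (lo, hi) interval) into two
    child cells, i.e., one level of refinement."""
    new_cells = []
    for i, (lo, hi) in enumerate(cells):
        if i in flags:
            mid = 0.5 * (lo + hi)
            new_cells.extend([(lo, mid), (mid, hi)])
        else:
            new_cells.append((lo, hi))
    return new_cells
```

    A steep front such as `[0, 0, 0, 1, 1, 1]` would be flagged only around the jump, so refinement stays local to the forming feature, which is what makes AMR attractive near singularity formation.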

    Differences in pain, function and coping in Multidimensional Pain Inventory subgroups of chronic back pain: a one-group pretest-posttest study

    BACKGROUND: Patients with non-specific back pain are not a homogeneous group but are heterogeneous with regard to their bio-psycho-social impairments. This study examined a sample of 173 highly disabled patients with chronic back pain to find out how three subgroups based on the Multidimensional Pain Inventory (MPI) differed in their response to an inpatient pain management program.
    METHODS: Subgroup classification was conducted by cluster analysis using MPI subscale scores at entry into the program. At program entry and at discharge after four weeks, participants completed the MPI, the MOS Short Form-36 (SF-36), the Hospital Anxiety and Depression Scale (HADS), and the Coping Strategies Questionnaire (CSQ). Pairwise analyses of the score changes in these outcomes across the three MPI subgroups were performed using the Mann-Whitney U test.
    RESULTS: Cluster analysis identified three MPI subgroups in this highly disabled sample: a dysfunctional, an interpersonally distressed, and an adaptive copers subgroup. The dysfunctional subgroup (29% of the sample) showed the highest level of depression in SF-36 mental health (33.4 +/- 13.9), the interpersonally distressed subgroup (35% of the sample) a modest level of depression (46.8 +/- 20.4), and the adaptive copers subgroup (32% of the sample) the lowest level of depression (57.8 +/- 19.1). Significant differences in pain reduction and improvement of mental health and coping were observed across the three MPI subgroups; the effect sizes for MPI pain reduction were 0.84 (0.44-1.24) for the dysfunctional subgroup, 1.22 (0.86-1.58) for the adaptive copers subgroup, and 0.53 (0.24-0.81) for the interpersonally distressed subgroup (p = 0.006 for pairwise comparison). No significant score changes between subgroups were found for activities and physical functioning.
    CONCLUSIONS: MPI subgroup classification showed significant differences in score changes for pain, mental health, and coping. These findings underscore the importance of assessing individual differences to understand how patients adjust to chronic back pain.
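    The pre-post effect sizes reported above are standardized mean changes. As a rough illustration only: a common way to compute such a value is Cohen's d with a pooled standard deviation, sketched below. The study's exact formula (and its confidence-interval construction) is not given in the abstract, so both the formula choice and the sample values here are assumptions.

```python
import math

# Hypothetical sketch: standardized pre-post effect size, assuming
# Cohen's d = (mean change) / (pooled SD of pre and post scores).

def cohens_d(pre, post):
    """Effect size of the change from `pre` to `post` scores."""
    n = len(pre)
    mean_pre = sum(pre) / n
    mean_post = sum(post) / n
    var_pre = sum((x - mean_pre) ** 2 for x in pre) / (n - 1)
    var_post = sum((x - mean_post) ** 2 for x in post) / (n - 1)
    pooled_sd = math.sqrt((var_pre + var_post) / 2.0)
    return (mean_post - mean_pre) / pooled_sd
```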

    A New MHD Code with Adaptive Mesh Refinement and Parallelization for Astrophysics

    A new code, named MAP, is written in Fortran for magnetohydrodynamics (MHD) calculations with adaptive mesh refinement (AMR) and Message Passing Interface (MPI) parallelization. There are several optional numerical schemes for computing the MHD part, namely the modified MacCormack scheme (MMC), the Lax-Friedrichs scheme (LF), and the weighted essentially non-oscillatory (WENO) scheme. All of them are second-order, two-step, component-wise schemes for hyperbolic conservative equations. Total variation diminishing (TVD) limiters and approximate Riemann solvers are also provided. A high resolution can be achieved by the hierarchical block-structured AMR mesh. We use the extended generalized Lagrange multiplier (EGLM) MHD equations to reduce the violation of the divergence-free constraint produced by the scheme in the magnetic induction equation. Numerical algorithms for the non-ideal terms, e.g., the resistivity and the thermal conduction, are also implemented in the MAP code. The details of the AMR and MPI algorithms are described in the paper. Comment: 44 pages, 16 figures.
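    To make the scheme family concrete: the Lax-Friedrichs update for a scalar conservation law u_t + f(u)_x = 0 is sketched below in its classic one-step, first-order form on a periodic grid. MAP's variant is a second-order, two-step scheme applied component-wise to the MHD system, so this is a simplified stand-in, not the code's actual implementation.

```python
# Minimal Lax-Friedrichs sketch (first-order, scalar, periodic grid);
# MAP itself uses a second-order two-step variant on the full MHD system.

def lax_friedrichs_step(u, dt, dx, flux):
    """One Lax-Friedrichs update of cell values `u` for u_t + f(u)_x = 0."""
    n = len(u)
    return [
        0.5 * (u[(i + 1) % n] + u[(i - 1) % n])
        - 0.5 * dt / dx * (flux(u[(i + 1) % n]) - flux(u[(i - 1) % n]))
        for i in range(n)
    ]
```

    For linear advection (`flux = lambda u: u`) with dt = dx, the update reduces to a pure shift of the profile by one cell, a handy sanity check for any implementation.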

    A Parallel Mesh-Adaptive Framework for Hyperbolic Conservation Laws

    We report on the development of a computational framework for the parallel, mesh-adaptive solution of systems of hyperbolic conservation laws such as the time-dependent Euler equations of compressible gas dynamics, magnetohydrodynamics (MHD), and similar models in plasma physics. Local mesh refinement is realized by the recursive bisection of grid blocks along each spatial dimension. Implemented numerical schemes include standard finite differences as well as shock-capturing central schemes, both in connection with Runge-Kutta-type integrators. Parallel execution is achieved through a configurable hybrid of POSIX multi-threading and MPI distribution with dynamic load balancing. One-, two-, and three-dimensional test computations for the Euler equations have been carried out and show good parallel scaling behavior. The Racoon framework is currently used to study the formation of singularities in plasmas and fluids. Comment: late submission.
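    The recursive-bisection idea can be sketched in a few lines: split a block until each leaf is small enough. For simplicity this toy version splits only along the longest dimension per step, whereas the framework described above bisects along each spatial dimension; names and the `max_cells` stopping rule are illustrative assumptions.

```python
# Toy recursive bisection of an N-dimensional index block into leaf blocks
# of at most `max_cells` cells. Blocks are ((lo_0, ...), (hi_0, ...)) pairs.

def bisect_blocks(block, max_cells):
    lo, hi = block
    sizes = [h - l for l, h in zip(lo, hi)]
    ncells = 1
    for s in sizes:
        ncells *= s
    if ncells <= max_cells:
        return [block]  # small enough: this is a leaf block
    d = sizes.index(max(sizes))  # split the longest dimension
    mid = (lo[d] + hi[d]) // 2
    left = (lo, tuple(mid if k == d else hi[k] for k in range(len(hi))))
    right = (tuple(mid if k == d else lo[k] for k in range(len(lo))), hi)
    return bisect_blocks(left, max_cells) + bisect_blocks(right, max_cells)
```

    For example, an 8x8 block with `max_cells=16` decomposes into four 4x4 leaves, which can then be distributed across threads or MPI ranks.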

    Hierarchical Dynamic Loop Self-Scheduling on Distributed-Memory Systems Using an MPI+MPI Approach

    Computationally intensive loops are the primary source of parallelism in scientific applications. Such loops are often irregular, and a balanced execution of their iterations is critical for achieving high performance. However, several factors may lead to an imbalanced load execution, such as problem characteristics and algorithmic or systemic variations. Dynamic loop self-scheduling (DLS) techniques are devised to mitigate these factors and, consequently, improve application performance. On distributed-memory systems, DLS techniques can be implemented using a hierarchical master-worker execution model and are therefore called hierarchical DLS techniques. These techniques self-schedule loop iterations at two levels of hardware parallelism: across and within compute nodes. Hybrid programming approaches that combine the message passing interface (MPI) with open multi-processing (OpenMP) dominate the implementation of hierarchical DLS techniques. The MPI-3 standard includes the feature of sharing memory regions among MPI processes. This feature introduced the MPI+MPI approach, which simplifies the implementation of parallel scientific applications. The present work designs and implements hierarchical DLS techniques by exploiting the MPI+MPI approach. Four well-known DLS techniques are considered in the evaluation proposed herein. The results indicate certain performance advantages of the proposed approach compared to the hybrid MPI+OpenMP approach.
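    A concrete example of a DLS technique is guided self-scheduling (GSS), in which an idle worker grabs ceil(R / P) iterations, where R is the number of remaining iterations and P the number of workers, so chunks shrink as the loop drains. GSS is a common representative of the technique class; the abstract does not name the four techniques evaluated, so treat this as an illustration rather than the paper's selection.

```python
import math

# Guided self-scheduling (GSS) sketch: successive chunk sizes handed out
# for a loop of `total_iters` iterations over `workers` workers.

def gss_chunks(total_iters, workers):
    chunks = []
    remaining = total_iters
    while remaining > 0:
        chunk = math.ceil(remaining / workers)  # ceil(R / P)
        chunks.append(chunk)
        remaining -= chunk
    return chunks
```

    For 100 iterations on 4 workers the chunk sizes start at 25 and decrease monotonically, giving large chunks early (low scheduling overhead) and small chunks late (fine-grained balancing).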

    rDLB: A Novel Approach for Robust Dynamic Load Balancing of Scientific Applications with Parallel Independent Tasks

    Scientific applications often contain large and computationally intensive parallel loops. Dynamic loop self-scheduling (DLS) is used to achieve a balanced load execution of such applications on high-performance computing (HPC) systems. Large HPC systems are vulnerable to processor or node failures and to perturbations in the availability of resources. Most self-scheduling approaches do not consider fault-tolerant scheduling, or they depend on failure or perturbation detection and react by rescheduling failed tasks. In this work, a robust dynamic load balancing (rDLB) approach is proposed for the robust self-scheduling of independent tasks. The proposed approach is proactive and does not depend on failure or perturbation detection. The theoretical analysis of the proposed approach shows that it is linearly scalable and that its cost decreases quadratically as the system size increases. rDLB is integrated into an MPI DLS library to evaluate its performance experimentally with two computationally intensive scientific applications. Results show that rDLB enables the tolerance of up to (P - 1) processor failures, where P is the number of processors executing an application. In the presence of perturbations, rDLB boosted the robustness of DLS techniques by up to 30 times and decreased application execution time by up to 7 times compared to their counterparts without rDLB.
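    The proactive idea can be illustrated with a toy simulation: surviving workers re-execute any task not yet known to be finished, without ever detecting which workers failed, so all tasks complete as long as at least one worker survives. This is a deliberately simplified sketch of the property, not the rDLB algorithm itself; all names and the two-phase structure are assumptions for illustration.

```python
# Toy sketch of proactive fault tolerance: no failure detection, survivors
# simply re-execute tasks they do not know to be complete.

def run_with_failures(num_tasks, num_workers, failed_workers):
    done = set()
    # Phase 1: static round-robin assignment; failed workers finish nothing.
    for w in range(num_workers):
        if w not in failed_workers:
            done.update(range(w, num_tasks, num_workers))
    # Phase 2: surviving workers proactively duplicate unfinished tasks.
    survivors = set(range(num_workers)) - set(failed_workers)
    if survivors:
        done.update(range(num_tasks))
    return len(done) == num_tasks  # True iff every task was executed
```

    With 4 workers the simulation completes all tasks even when 3 of them fail, mirroring the (P - 1) tolerance reported above; only the loss of all workers leaves work undone.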