Process of communication in public accounting; Management of an accounting practice bulletin, MAP 21
How to improve staff member motivation; Management of an accounting practice bulletin, MAP 20
Pynamic: the Python Dynamic Benchmark
Python is widely used in scientific computing to facilitate application development and to support features such as computational steering. Making full use of some of Python's popular features, which improve programmer productivity, leads to applications that access extremely high numbers of dynamically linked libraries (DLLs). As a result, some important Python-based applications severely stress a system's dynamic linking and loading capabilities and also cause significant difficulties for most development-environment tools, such as debuggers. Furthermore, using the Python paradigm for large-scale MPI-based applications can create significant file I/O and further stress tools and operating systems. In this paper, we present Pynamic, the first benchmark program to support configurable emulation of a wide range of the DLL usage of Python-based applications on large-scale systems. Pynamic has already accurately reproduced system-software and tool issues encountered by important large Python-based scientific applications on our supercomputers, and it has given our system-software and tool vendors, as well as our application developers, insight into the impact of several design decisions. As we describe the Pynamic benchmark, we highlight some of the issues discovered in our large-scale system software and tools using Pynamic.
CLOMP: Accurately Characterizing OpenMP Application Overheads
Despite its ease of use, OpenMP has failed to gain widespread use on large-scale systems, largely because of its failure to deliver sufficient performance. Our experience indicates that the cost of initiating OpenMP regions is simply too high for the OpenMP usage scenario desired by many applications. In this paper, we introduce CLOMP, a new benchmark that accurately characterizes this aspect of OpenMP implementations. CLOMP complements the existing EPCC benchmark suite by providing simple, easy-to-understand measurements of OpenMP overheads in the context of application usage scenarios. Our results for several OpenMP implementations demonstrate that CLOMP identifies the amount of work required to compensate for the overheads observed with EPCC. Further, we show that CLOMP also captures limitations of OpenMP parallelization on NUMA systems.
Report on New Capabilities for the Purple Development Environment
As part of the deliverables for the Development Environment for Purple, additional capabilities were expected to improve the tools offerings and to address unique Purple system requirements, such as the increased processor count. This report details some of the new capabilities that have been incorporated into the development environment tools for Purple. The shift on Purple to 64-bit applications (from 32-bit on White) initially broke many debugging and memory tools. Most tools were updated to support 64-bit well before Purple was delivered to LLNL, but the company that provided the popular heavy-weight 32-bit AIX memory tool, ZeroFault, was reluctant to port to 64-bit due to a perceived lack of market. LLNL offered financial incentives to the ZeroFault developers, which were turned down, although they eventually gave vague promises to attempt an AIX 64-bit port when time allowed. The ZeroFault developers have made intermittent and very slow progress over the last two-plus years but, despite getting close, have not yet released a version of ZeroFault that meets our needs for 64-bit applications. Given the critical need for memory tools and the uncertainty of ZeroFault development, other memory tool options were therefore actively pursued and delivered.
What scientific applications can benefit from hardware transactional memory?
Achieving efficient and correct synchronization of multiple threads is a difficult and error-prone task at small scale and, as we march toward extreme-scale computing, will be even more challenging when the resulting application must utilize millions of cores efficiently. Transactional Memory (TM) is a promising technique to ease the burden on the programmer, but it has only recently become available on commercial hardware in the new Blue Gene/Q system, and hence its real benefit for realistic applications has not yet been studied. This paper presents the first performance results of TM embedded into OpenMP on a prototype BG/Q system and characterizes code properties that will likely lead to benefits when augmented with TM primitives. We first study the influence of thread count, environment variables, and memory layout on TM performance and identify code properties that will yield performance gains with TM. Second, we evaluate the combination of OpenMP with multiple synchronization primitives on top of MPI to determine suitable task-to-thread ratios per node. Finally, we condense our findings into a set of best practices. These are applied to a Monte Carlo Benchmark and a Smoothed Particle Hydrodynamics method. In both cases, an optimized TM version executed with 64 threads on one node outperforms a simple TM implementation; MCB with optimized TM yields a speedup of 27.45 over the baseline.
Report on Challenges and Resolutions for the Purple Development Environment
Previous AIX development environment experience with ASC White and the Early Delivery systems UV and UM was leveraged to provide a smooth and robust transition to the Purple development environment. Still, there were three major changes that initially caused serious problems for Purple users. The first was making 64-bit builds of executables the default instead of 32-bit. The second was requiring all executables to use large-page memory. The third was the phase-out of the popular, but now defunct, third-party C++ compiler KCC, which required the migration of many codes to IBM's xlC C++ compiler. On Purple, the default build environment changed from 32-bit to 64-bit builds in order to let executables use the 4 GB per processor (32 GB per node) of available memory, and to let the MPI library perform collective optimizations that required the larger 64-bit address space. The 64-bit build environment was made the default by setting the IBM environment variable OBJECT_MODE to 64 and wrapping third-party software (mainly the GNU compilers) so that it handled OBJECT_MODE properly. Because not all applications could port to 64-bit right away (usually due to third-party constraints, such as Python not supporting 64-bit AIX builds until very recently), 32-bit builds of the major common third-party libraries also had to be supported. This combined 32/64-bit build support was accomplished fairly seamlessly using the AIX feature that allows both 32-bit and 64-bit versions of the code to appear in the same library file, and documentation with clear examples helped our library developers generate the required combined 32-bit and 64-bit libraries for Purple. In general, the port to 64-bit AIX executables went smoothly. The most common 64-bit problem encountered was that many C codes did not prototype malloc everywhere, via "#include <stdlib.h>", which caused invalid pointers to be returned by unprototyped malloc calls.
This was usually seen in old C libraries, leading to a segfault on first use of the invalid pointer. Users had not encountered this prototype issue on other 64-bit operating systems (Tru64 and SUN) because those vendors worked around it by "auto-prototyping" malloc for the user; IBM instead required a compiler option to be set for auto-prototyping. The issue was resolved with user education and, often, quick recognition of the symptoms by support personnel. This addresses a requirement for a report on problems encountered with the tools and environment, and on their resolution or status.
What Scientific Applications can Benefit from Hardware Transactional Memory? - Early experience from a commercially available HTM system.