
    Efficient Task-Local I/O Operations of Massively Parallel Applications

    Applications on current large-scale HPC systems use enormous numbers of processing elements for their computation and have access to large amounts of main memory for their data. Nevertheless, they still need file-system access to maintain program and application data persistently. Characteristic I/O patterns that produce a high load on the file system often occur during access to checkpoint and restart files, which have to be stored frequently to allow the application to be restarted after program termination or system failure. On large-scale HPC systems with distributed memory, each application task will often perform such I/O individually by creating task-local file objects on the file system. At large scale, these I/O patterns impose substantial stress on the metadata-management components of the I/O subsystem. For example, the simultaneous creation of thousands of task-local files in the same directory can cause delays of several minutes. Similar metadata contention occurs at the startup of dynamically linked applications, while searching for library files, and induces a comparably high metadata load on the file system. Even mid-scale applications cause startup delays of ten minutes or more in such load scenarios. Therefore, dynamic linking and loading is currently not used on large HPC systems, although dynamic linking has many advantages for managing large code bases. The reason for these limitations is that POSIX I/O and the dynamic loader are implemented as serial components of the operating system and do not take advantage of the parallel nature of the I/O operations. To avoid the above bottlenecks, this work describes two novel approaches for integrating locality awareness (e.g., through aggregation or caching) into the serial I/O operations of parallel applications. The underlying methods are implemented in two tools, SIONlib and Spindle, which exploit knowledge of the application's parallelism to coordinate access to file-system objects. In addition, the applied methods also use knowledge of the underlying I/O subsystem structure, the parallel file-system configuration, and the network between the HPC system and the I/O system to optimize application I/O. Both tools add layers between the parallel application and the POSIX-based standard interfaces of the operating system for I/O and dynamic loading, eliminating the need to modify the underlying system software. SIONlib is already applied in several applications, including PEPC, muphi, and MP2C, to implement efficient checkpointing. In addition, SIONlib is integrated into the performance-analysis tools Scalasca and Score-P to store and read trace data efficiently. Recent benchmarks on the Blue Gene/Q in Jülich demonstrate that SIONlib solves the metadata problem at large scale by running efficiently with up to 1.8 million tasks while maintaining high I/O bandwidths of 60-80% of file-system peak and negligible file-creation time. The scalability of Spindle was demonstrated by running the Pynamic benchmark, a proxy benchmark for a real application, at large scale on a cluster at Lawrence Livermore National Laboratory. The results show that the startup of dynamically linked applications is now feasible on more than 15,000 tasks, while the overhead of Spindle remains low and nearly constant. With SIONlib and Spindle, this work demonstrates how the scalability of operating-system components can be improved without modifying them and without changing the I/O patterns of applications. In this way, SIONlib and Spindle represent prototype implementations of functionality needed by next-generation runtime systems.
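    As a rough illustration of the I/O patterns discussed above, the following MPI sketch contrasts naive task-local file creation with writing all task data into one shared file. It is a minimal example of the general aggregation idea, assuming an MPI environment; the file names and the use of MPI-IO are illustrative only and do not reproduce the actual SIONlib API.

        /* Illustrative sketch (not the SIONlib API): contrasts naive task-local
         * file creation with aggregating all task data into one shared file. */
        #include <mpi.h>
        #include <stdio.h>
        #include <string.h>

        int main(int argc, char **argv)
        {
            int rank;
            char data[256] = {0};

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            snprintf(data, sizeof(data), "checkpoint data of task %d\n", rank);

            /* Pattern 1: one file per task -- every create/open hits the
             * file-system metadata server, which becomes the bottleneck at scale. */
            char fname[64];
            snprintf(fname, sizeof(fname), "ckpt.%06d", rank);
            FILE *fp = fopen(fname, "w");
            if (fp) {
                fwrite(data, 1, strlen(data), fp);
                fclose(fp);
            }

            /* Pattern 2: all tasks share one file; each task writes to its own
             * fixed-size region, so only a single file object is created. */
            MPI_File fh;
            MPI_File_open(MPI_COMM_WORLD, "ckpt.shared",
                          MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
            MPI_Offset offset = (MPI_Offset)rank * 256;
            MPI_File_write_at(fh, offset, data, 256, MPI_CHAR, MPI_STATUS_IGNORE);
            MPI_File_close(&fh);

            MPI_Finalize();
            return 0;
        }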

    Extreme-scaling Applications 24/7 on JUQUEEN Blue Gene/Q

    Jülich Supercomputing Centre has offered Extreme Scaling Workshops since 2009, with the latest edition in February 2015 giving seven international code teams an opportunity to (im)prove the scaling of their applications to all 458752 cores of the JUQUEEN IBM Blue Gene/Q. Each of them successfully adapted their application codes and datasets to the restricted compute-node memory and exploited the massive parallelism with up to 1.8 million processes or threads. They thereby qualified to become members of the High-Q Club, which now has over 24 codes demonstrating extreme scalability. Achievements in both strong and weak scaling are compared, and complemented with a review of programming languages and parallelisation paradigms, exploitation of hardware threads, and file I/O requirements.
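    For reference, the following are the standard definitions usually used to compare such strong- and weak-scaling results (general textbook definitions, not formulas taken from the paper), where T(p) is the runtime on p cores and p_0 is the baseline core count:

        % Strong scaling: fixed total problem size; speedup and parallel efficiency
        \[
          S(p) = \frac{T(p_0)}{T(p)}, \qquad
          E_{\mathrm{strong}}(p) = \frac{p_0\, T(p_0)}{p\, T(p)}
        \]
        % Weak scaling: work per core held fixed, so ideally the runtime stays constant
        \[
          E_{\mathrm{weak}}(p) = \frac{T(p_0)}{T(p)}
        \]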

    Final Scientific Report: A Scalable Development Environment for Peta-Scale Computing

    This document is the final scientific report of the project DE-SC000120 (A Scalable Development Environment for Peta-Scale Computing). The objective of this project is the extension of the Parallel Tools Platform (PTP) so that it can be applied to peta-scale systems. PTP is an integrated development environment for parallel applications. It comprises code analysis, performance tuning, parallel debugging and system monitoring. The contribution of the Juelich Supercomputing Centre (JSC) aims to provide a scalable solution for system monitoring of supercomputers. This includes the development of a new communication protocol for exchanging status data between the target remote system and the client running PTP. The communication has to cope with high latency. PTP needs to be implemented robustly and should hide the complexity of the supercomputer's architecture in order to provide transparent access to various remote systems via a uniform user interface. This simplifies the porting of applications to different systems, because PTP functions as an abstraction layer between the parallel application developer and the compute resources. The common requirement for all PTP components is that they have to interact with the remote supercomputer: applications are built remotely, performance tools are attached to job submissions, and their output data resides on the remote system. Status data has to be collected by evaluating the output of the remote job scheduler, and the parallel debugger needs to control an application executed on the supercomputer. The challenge is to provide this functionality for peta-scale systems in real time. The client-server architecture of the established monitoring application LLview, developed by the JSC, can be applied to PTP's system monitoring. LLview provides a well-arranged overview of the supercomputer's current status. A set of statistics, a list of running and queued jobs, and a node display mapping running jobs to their compute resources form the user display of LLview. These monitoring features have to be integrated into the development environment. Besides showing the current status, PTP's monitoring also needs to allow for submitting and canceling user jobs. A particular challenge of monitoring peta-scale systems is presenting the large amount of status data in a useful manner. Users need to be able to select arbitrary levels of detail: the monitoring views have to provide a quick overview of the system state, but also need to allow zooming into the specific parts of the system in which the user is interested. At present, the major batch systems running on supercomputers are PBS, TORQUE, ALPS and LoadLeveler, all of which have to be supported by both the monitoring and the job-controlling component. Finally, PTP needs to be designed as generically as possible, so that it can be extended for future batch systems.
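    As a purely hypothetical illustration of the kind of per-node status data such a monitoring component might exchange, the following C sketch uses invented structure and function names; it does not describe the actual PTP or LLview protocol.

        /* Hypothetical sketch of compact status records a monitoring client could
         * receive from the remote system; names and layout are illustrative only. */
        #include <stdio.h>

        typedef enum { NODE_FREE, NODE_RUNNING, NODE_DOWN } node_state_t;

        typedef struct {
            int          node_id;   /* position in the machine's node display  */
            node_state_t state;     /* coarse state, enough for the overview   */
            int          job_id;    /* running job mapped to this node, or -1  */
        } node_status_t;

        /* Emit one line per node; a real protocol would aggregate and compress
         * such records so the overview scales to peta-scale node counts. */
        static void print_overview(const node_status_t *nodes, int n)
        {
            for (int i = 0; i < n; i++)
                printf("node %d state %d job %d\n",
                       nodes[i].node_id, nodes[i].state, nodes[i].job_id);
        }

        int main(void)
        {
            node_status_t nodes[] = {
                { 0, NODE_RUNNING, 4711 },
                { 1, NODE_FREE,      -1 },
            };
            print_overview(nodes, 2);
            return 0;
        }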

    Minijobs nach Einführung des Mindestlohns - eine Bestandsaufnahme

    This article uses two surveys of employees and employers in marginal employment (minijobs) to take stock of the current state of minijobs and compares the results with the situation before the introduction of the statutory minimum wage. The main focus is on hours worked, hourly wages, the reasons for taking up or offering minijobs, and job quality as measured by the provision of statutorily mandated benefits, e.g. continued pay during sickness. Overall, the results show a complex picture: on the one hand, there are clear improvements in wages and benefit provision, and many marginal employees appear to be satisfied with their minijobs. On the other hand, wages below 8.50 euros can still be observed even after the introduction of the minimum wage, and the stepping-stone function of minijobs has continued to lose importance.

    Nachfolgestudie zur Analyse der geringfügigen Beschäftigungsverhältnisse (Minijobs) sowie den Auswirkungen des gesetzlichen Mindestlohns. Endbericht: Gutachten im Auftrag des Ministeriums für Arbeit, Integration und Soziales des Landes Nordrhein-Westfalen

    [Introduction ...] Against this background, the MAIS commissioned a study to take stock of the current state of marginal employment (minijobs) in North Rhine-Westphalia (NRW). On the one hand, the aim is to examine whether the situation has changed compared with the findings reported in 2012. The central questions here are which people hold minijobs, which firms offer them, and what the motives are in each case. On the other hand, the study investigates how the employment situation of marginal employees has changed since 2012, especially with regard to the wages paid, compliance with employee rights, and transitions into employment subject to social insurance contributions. Of particular interest in this context is whether the changed framework conditions mentioned above and the policy measures taken have had an influence on the situation of marginal employees. Although the present study cannot provide causal evidence on this question, it does offer some indications that allow conclusions in this regard. To answer these questions, two NRW-wide surveys were conducted in August/September 2016, analogous to the surveys of 2012: one among marginal employees and one among employers with marginal employment relationships. The results of these surveys are contained in the present study, which is structured as follows. The following chapter gives a brief overview of the legal framework of marginal employment and the current situation of minijobs, both in Germany and in NRW; current developments are illustrated with relevant data and the existing literature on the topic is discussed. Chapter 3 provides details of the survey methodology and discusses its representativeness. Chapters 4 and 5 present the results of the employee and employer surveys. The concluding Chapter 6 summarizes the most important results and draws economic-policy conclusions.

    Техногенные месторождения, сформировавшиеся на объектах горнопромышленного производства в Хакасии

    The paper characterizes several technogenic (mining-waste) sites of mining enterprises in Khakassia that are a source of environmental impact and are promising subjects of study with a view to obtaining additional products from them.

    The calibration and evaluation of speed-dependent automatic zooming interfaces.

    Speed-Dependent Automatic Zooming (SDAZ) is an exciting new navigation technique that couples the user's rate of motion through an information space with the zoom level: the faster a user scrolls through the document, the 'higher' they fly above the work surface. At present, there are few guidelines for the calibration of SDAZ. Previous work by Igarashi & Hinckley (2000) and Cockburn & Savage (2003) fails to give values for the predefined constants governing their automatic zooming behaviour. In the absence of formal guidelines, SDAZ implementers are forced to adjust the properties of the automatic zooming by trial and error. This thesis aids calibration by identifying the low-level components of SDAZ. Base calibration settings for these components are then established in a formal evaluation recording participants' comfortable scrolling rates at different magnification levels. To ease our experiments with SDAZ calibration, we implemented a new system that provides a comprehensive graphical user interface for customising SDAZ behaviour. The system was designed to simplify future extensions: new components such as interaction techniques and methods of rendering information can be added with little modification to existing code. This system was used to configure three SDAZ interfaces: a text-document browser, a flat map browser and a multi-scale globe browser. The three calibrated SDAZ interfaces were evaluated against three equivalent interfaces with rate-based scrolling and manual zooming. The evaluation showed that SDAZ is 10% faster for acquiring targets in a map than rate-based scrolling with manual zooming, and 4% faster for acquiring targets in a text document. Participants also preferred automatic zooming over manual zooming. No difference was found for the globe browser in either acquisition time or preference. However, for all interfaces participants commented that automatic zooming was less physically and mentally draining than manual zooming.
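    A minimal sketch of the speed-to-zoom coupling that SDAZ is built on, assuming a simple linear mapping with placeholder constants; choosing such constants well is exactly the calibration problem the thesis studies, so the values below are illustrative only.

        /* Illustrative speed-dependent automatic zooming coupling: magnification is
         * driven by the current scroll speed. The linear mapping and the constants
         * are placeholders, not calibrated values from the thesis. */
        #include <math.h>
        #include <stdio.h>

        #define MAX_SPEED 2000.0  /* scroll speed (px/s) at which zoom-out saturates */
        #define MIN_MAG   0.1     /* smallest magnification ("highest flight")       */

        /* Map the current scroll speed to a magnification in [MIN_MAG, 1.0]. */
        static double magnification_for_speed(double speed_px_per_s)
        {
            double t = fmin(fabs(speed_px_per_s) / MAX_SPEED, 1.0);
            return 1.0 - t * (1.0 - MIN_MAG);
        }

        int main(void)
        {
            for (double v = 0.0; v <= 2500.0; v += 500.0)
                printf("speed %6.0f px/s -> magnification %.2f\n",
                       v, magnification_for_speed(v));
            return 0;
        }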

    VISIT - a Visualization Interface Toolkit, Version 1.0

    With the increasing capabilities of both supercomputers and graphical workstations, new modes of operation become feasible for numerical simulations that are traditionally performed in batch processing. Connecting a workstation to a compute server allows for interactive monitoring (online visualization) and control (computational steering, interactive simulation) of such simulations. Typical issues are extracting data and status information from a running simulation, dynamically changing parameters, dynamically attaching the visualization to and detaching it from the simulation, and recording and replaying simulation results.
    VISIT is a library that supports the development of interactive simulations. It provides functions for establishing a connection between a simulation and a visualization, exchanging data, and eventually shutting down the connection again. VISIT is developed in the Central Institute for Applied Mathematics at the Research Centre Juelich.
    VISIT uses a simple client-server approach, meaning that no central server or data manager is involved. Data is exchanged directly between a simulation (the client) and a visualization (the server). The only third party that comes into play is a directory server that is used for exchanging contact information.
    VISIT provides support for the AVS/Express and Perl/Tk visualization systems and offers C, Fortran, and Perl language bindings.
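    The following C sketch illustrates the client-side life cycle described above (connect, ship data during the run, disconnect). The function names and signatures are hypothetical placeholders with stub bodies so the sketch compiles; they are not the actual VISIT C binding.

        /* Hypothetical connect / send / disconnect life cycle of a simulation
         * acting as the client; stub bodies stand in for the connection library. */
        #include <stdio.h>
        #include <stddef.h>

        static int sim_connect(const char *service)   /* would contact the directory server */
        { printf("connect as '%s'\n", service); return 0; }

        static int sim_send_field(const char *name, const double *d, size_t n)
        { printf("send field '%s' (%zu values, first = %g)\n", name, n, d[0]); return 0; }

        static void sim_disconnect(void)
        { printf("disconnect\n"); }

        int main(void)
        {
            enum { N = 4, NSTEPS = 30 };
            double field[N] = { 0.0 };
            int connected = (sim_connect("my_simulation") == 0);

            for (int step = 0; step < NSTEPS; step++) {
                for (int i = 0; i < N; i++)         /* advance the simulation */
                    field[i] += 0.1;

                if (connected && step % 10 == 0)    /* ship state to the visualization */
                    sim_send_field("temperature", field, N);
            }

            if (connected)
                sim_disconnect();
            return 0;
        }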