228 research outputs found

    Managing the learning agenda in a converged services environment


    Dataflow methods in HPC, visualisation and analysis

    The processing power available to scientists and engineers using supercomputers has grown exponentially over the last few decades, permitting significantly more sophisticated simulations and, as a consequence, generating proportionally larger output datasets. This change has taken place in tandem with a gradual shift in the design and implementation of simulation and post-processing software: away from simulation as a first step and visualisation/analysis as a second, towards in-situ, on-the-fly methods that provide immediate visual feedback, place less strain on file systems and reduce overall data movement and copying. Concurrently, processor speed increases have slowed dramatically, and multi- and many-core architectures have instead become the norm for virtually all High Performance Computing (HPC) machines. This in turn has led to a shift away from the traditional distributed model of one rank per node, to one rank per core (multiple processes per multicore node), and then back towards one rank per node again, combining distributed and multi-threaded frameworks.

    This thesis consists of a series of publications that demonstrate how software design for analysis and visualisation has tracked these architectural changes and pushed the boundaries of HPC visualisation using dataflow techniques in distributed environments. The first publication shows how support for the time dimension can be implemented in parallel pipelines, demonstrating how information flow within an application can be leveraged to optimise performance and add features such as analysis of time-dependent flows and comparison of datasets at different timesteps. A method of integrating dataflow pipelines with in-situ visualisation is subsequently presented, using asynchronous coupling of user-driven GUI controls and a live simulation running on a supercomputer. The loose coupling of analysis and simulation allows for reduced IO, immediate feedback and the ability to change simulation parameters on the fly. A significant drawback of parallel pipelines is the inefficiency caused by improper load balancing, particularly during interactive analysis where the user may select between different features of interest. This problem is addressed in the fourth publication by integrating a high-performance partitioning library into the visualisation pipeline and extending the information flow up and down the pipeline to support it. This extension is demonstrated in the third publication (published earlier) on massive meshes of extremely high complexity, showing that general-purpose visualisation tools such as ParaView can be made to compete with bespoke software written for a dedicated task. The future of software running on many-core architectures will involve task-based runtimes with dynamic load balancing, asynchronous execution based on dataflow graphs, work stealing and concurrent data sharing between simulation and analysis. The final paper of this thesis presents an optimisation for one such runtime, in support of these future HPC applications.
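
    As a generic illustration of the dataflow ideas described above, the sketch below shows a toy pull-based pipeline in which time requests propagate upstream before data flows down, in the spirit of the time-aware parallel pipelines of the first publication. All class and method names here are illustrative assumptions, not the thesis code or the actual VTK/ParaView API.

```python
# Minimal sketch of a demand-driven dataflow pipeline with time support.
# Illustrative toy only: filters negotiate "information" (available
# timesteps) before data flows, and downstream filters can request a
# different timestep upstream, enabling comparison across timesteps.

class Filter:
    def __init__(self, upstream=None):
        self.upstream = upstream

    def request_information(self):
        # Propagate metadata (e.g. available timesteps) down the pipeline.
        return self.upstream.request_information() if self.upstream else {}

    def request_data(self, t):
        # Pull-based execution: ask upstream for the data at timestep t.
        data = self.upstream.request_data(t) if self.upstream else None
        return self.execute(data, t)

    def execute(self, data, t):
        return data

class TimeSource(Filter):
    """Source that can produce data for a fixed set of timesteps."""
    def request_information(self):
        return {"timesteps": [0.0, 0.5, 1.0, 1.5]}

    def request_data(self, t):
        return {"t": t, "values": [t * i for i in range(4)]}

class TimeShift(Filter):
    """Requests a *different* timestep upstream, the building block for
    comparing datasets at two timesteps or analysing time-dependent flow."""
    def __init__(self, upstream, dt):
        super().__init__(upstream)
        self.dt = dt

    def request_data(self, t):
        return self.upstream.request_data(t + self.dt)

src = TimeSource()
shifted = TimeShift(src, dt=0.5)
print(src.request_information())   # {'timesteps': [0.0, 0.5, 1.0, 1.5]}
print(shifted.request_data(0.5))   # returns the data for timestep 1.0
```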

    The last ditch: An organizational history of the Nazi Werwolf movement, 1944-45.

    Near the end of World War Two, a National Socialist resistance movement briefly flickered to life in Germany and its borderlands. Dedicated to delaying the advance of the victorious Allies and Soviets, this guerrilla movement, the Werwolf, succeeded in scattered acts of sabotage and violence, and also began to assume the character of a vengeful Nazi reaction against the German populace itself; collaborators and "defeatists" were assassinated, and crude posters warned the population that certain death was the penalty for failure to resist the enemy. Participation in "scorched earth" measures gave the movement an almost Luddite character. In the final analysis, however, the Werwolf failed because of two basic weaknesses. First, it lacked popular appeal, which doomed guerrillas and fanatic resisters to a difficult life on the margins of their own society; such an existence was simply not feasible in a country heavily occupied by enemy military forces. Second, the Werwolf was poorly organized, and showed all the signs of internal confusion that have been identified by the so-called "functionalist" school of German historiography. In fact, confusion and barbarism worsened as the bonds of military success which had united the Reich began to loosen and unravel; the Werwolf can perhaps serve as the ultimate construct in the "functionalist" model of the Third Reich. Although it failed, the Werwolf did have some permanent significance. While it is a classic example of guerrilla warfare gone wrong, the mere fact that it was active also caused a reaction among Germany's enemies. The Western Allies altered their own military and political policies to allow for extermination of the Werwolf threat, and it is likely that immediate security considerations also influenced the direction of Soviet policies in Germany.

    Data Redistribution using One-sided Transfers to In-memory HDF5 Files

    Outputs of simulation codes that use the HDF5 file format are typically composed of several different attributes and datasets, storing either lightweight pieces of information or heavy blocks of data. When written or read through the HDF5 layer, these objects generate metadata and data I/O operations of different block sizes, which depend on the precision and dimension of the arrays being manipulated. Using simple block redistribution strategies, we present in this paper a case study showing HDF5 I/O performance improvements for "in-memory" files stored in a distributed shared-memory buffer and accessed using one-sided communications through the HDF5 API.
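
    To make the idea of a block redistribution strategy concrete, the following sketch computes a transfer plan between a writer decomposition and a reader decomposition of a contiguous dataset; each (writer, offset, length) triple is exactly the information a one-sided get or put would need. The helper names are hypothetical, and neither the paper's actual strategy nor HDF5 internals are reproduced here.

```python
# Hedged sketch of a block redistribution plan: a dataset of n elements
# is written contiguously by `writers` ranks and re-read by `readers`
# ranks; for each reader we list which slice of which writer's block it
# must fetch. Helper names are illustrative assumptions.

def block_range(rank, nranks, n):
    """Contiguous [start, end) element range owned by `rank`."""
    base, rem = divmod(n, nranks)
    start = rank * base + min(rank, rem)
    return start, start + base + (1 if rank < rem else 0)

def redistribution_plan(n, writers, readers):
    """For each reader, list (writer, src_offset, length) transfers,
    i.e. the arguments a one-sided get/put into a shared buffer needs."""
    plan = {r: [] for r in range(readers)}
    for r in range(readers):
        r0, r1 = block_range(r, readers, n)
        for w in range(writers):
            w0, w1 = block_range(w, writers, n)
            lo, hi = max(r0, w0), min(r1, w1)
            if lo < hi:  # overlap: reader r pulls from writer w's block
                plan[r].append((w, lo - w0, hi - lo))
    return plan

# 10 elements written by 2 ranks and read back by 3 ranks:
print(redistribution_plan(10, writers=2, readers=3))
# {0: [(0, 0, 4)], 1: [(0, 4, 1), (1, 0, 2)], 2: [(1, 2, 3)]}
```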

    Visualization and analysis of SPH data

    Advances in graphics hardware in recent years have led not only to a huge growth in the speed at which 3D data can be rendered, but also to a marked change in the way in which different data types can be displayed. In particular, point-based rendering techniques have benefited from the advent of vertex and fragment shaders on the GPU, which allow simple point primitives to be displayed not just as dots, but as complex entities in their own right. We present a simple way of displaying arbitrary 2D slices through 3D SPH data by evaluating the SPH kernel on the GPU and accumulating the contributions from individual particles intersecting a slice plane into a texture. The resulting textured plane can then be displayed alongside the particle-based data. Combining 2D slices and 3D views in an interactive way improves perception of the underlying physics and speeds up the development cycle of simulation code. In addition to rendering the particles themselves, we can improve visualization by generating particle trails to show motion history, glyphs to show vector fields, transparency to enhance or diminish areas of high or low interest, and multiple views of the same or different data for comparative visualization. We combine these techniques with interactive control of arbitrary scalar parameters and animation through time to produce a feature-rich environment for exploration of SPH data.
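
    The slice technique lends itself to a compact CPU sketch: accumulate the kernel contributions of every particle whose support intersects the slice plane into a 2D grid, as the paper does per fragment on the GPU. The function names and the unnormalised kernel shape below are illustrative assumptions, not the paper's shader code.

```python
# CPU sketch of the SPH slice technique described above: sample a scalar
# field on the plane z = z_plane by accumulating per-particle kernel
# contributions into a grid (standing in for the GPU texture).
import numpy as np

def cubic_spline_w(q):
    """Standard cubic spline SPH kernel (unnormalised shape function)."""
    w = np.zeros_like(q)
    m1 = q < 1.0
    m2 = (q >= 1.0) & (q < 2.0)
    w[m1] = 1.0 - 1.5 * q[m1]**2 + 0.75 * q[m1]**3
    w[m2] = 0.25 * (2.0 - q[m2])**3
    return w

def slice_field(pos, val, h, z_plane, res=64, extent=1.0):
    """Accumulate contributions of particles intersecting the plane."""
    xs = np.linspace(0.0, extent, res)
    grid = np.zeros((res, res))
    # Only particles whose kernel support (radius 2h) reaches the plane.
    near = np.abs(pos[:, 2] - z_plane) < 2.0 * h
    for p, v in zip(pos[near], val[near]):
        dx = xs[None, :] - p[0]
        dy = xs[:, None] - p[1]
        r = np.sqrt(dx**2 + dy**2 + (p[2] - z_plane)**2)
        grid += v * cubic_spline_w(r / h)  # accumulate, as into a texture
    return grid

rng = np.random.default_rng(0)
pos = rng.random((500, 3))        # 500 toy particles in the unit cube
val = pos[:, 0]                   # toy scalar field: x-coordinate
print(slice_field(pos, val, h=0.05, z_plane=0.5).shape)  # (64, 64)
```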

    A global analysis of anthropogenic development of marine turtle nesting beaches

    This is the final version, available on open access from MDPI via the DOI in this record. The Intergovernmental Panel on Climate Change predicts that sea levels will rise by up to 0.82 m in the next 100 years. In natural systems, coastlines would migrate landwards, but because most of the world's human population occupies the coast, anthropogenic structures (such as sea walls or buildings) have been constructed to defend the shore and prevent loss of property. This can result in a net reduction in beach area, a phenomenon known as "coastal squeeze", which will reduce beach availability for species such as marine turtles. As yet, no global assessment of potential future coastal squeeze risk at marine turtle nesting beaches has been conducted. We used Google Earth satellite imagery to enumerate the proportion of beaches across the global nesting range of marine turtles that are backed by hard anthropogenic coastal development (HACD). Mediterranean and North American nesting beaches had the most HACD, while Australian and African beaches had the least. Loggerhead and Kemp's ridley turtle nesting beaches had the most HACD, and flatback and green turtle beaches the least. Future management approaches should prioritise the conservation of beaches with low HACD to mitigate future coastal squeeze.

    Professional golf - A license to spend money? Issues of money in the lives of touring professional golfers

    This is the authors' PDF version of an article which appeared online on 11/11/2014, published in the Journal of Sport and Social Issues © 2014. The definitive version is available at http://dx.doi.org/10.1177/0193723514557819. Drawing upon figurational sociology, this paper examines issues of money that are central to touring professional golfers' workplace experiences. Based on interviews with 16 professionals, results indicate that the monetary rewards available to top golfers continue to increase; however, such recompense is available to relatively few, and the majority fare poorly. Results suggest that playing on tour with other like-minded golfers fosters internalized constraints relating to behaviour, referred to as 'habitus', whereby many players 'gamble' on pursuing golf as their main source of income despite the odds against them. Golfers are constrained to develop networks with sponsors for financial reasons, which has left some players with a conflicting choice between regular money under restrictive contractual agreements and the freedom to choose between different brands.

    High performance computing 3D SPH model: Sphere impacting the free-surface of water

    In this work, an analysis based on a three-dimensional parallelized SPH model developed by ECN and applied to free-surface impact simulations is presented. The aim of this work is to show that SPH simulations can be performed on very large machines such as the EPFL IBM Blue Gene/L with 8,192 cores. This paper presents improvements concerning, notably, memory consumption, which remains quite subtle because of the constraints of the variable-h scheme. These improvements have made possible the simulation of test cases involving tens of millions of particles, computed using more than a thousand cores. Furthermore, pv-meshless, developed by CSCS, is used to show the pressure field and the effect of the impact.

    Advanced visualization of large datasets for Discrete Element Method simulations

    State-of-the-art Discrete Element Method (DEM) simulations of granular flows produce large datasets that contain a wealth of information describing the time-dependent physical state of the particulate medium. To extract this information, both comprehensive and efficient post-processing methods are essential. Special attention must be paid to the interactive visualization of these large hybrid datasets containing both particle-based and surface-based data. In this paper, we report the use of the open-source visualization package ParaView, which we have customized specifically to perform advanced techniques for the post-processing of large DEM datasets. Particular attention is given to the methods used to render the individual particles, based either on triangulated glyphs or on GPU-accelerated primitives. A demonstration of these techniques, and of their relative merits when applied to the visualization of DEM datasets, is presented via their application to real industrial examples.
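
    A rough cost model illustrates why the choice of particle-rendering method matters at this scale: triangulated sphere glyphs multiply the vertex count per particle, whereas GPU-accelerated primitives need only a single vertex each, with the silhouette reconstructed in the fragment shader. The figures below are illustrative assumptions, not measurements from the paper.

```python
# Back-of-the-envelope comparison of the two particle-rendering strategies
# mentioned above: triangulated sphere glyphs versus one-vertex GPU
# primitives (point sprites). All numbers are illustrative assumptions.

def glyph_cost(n_particles, subdivisions=16, bytes_per_vertex=24):
    # A UV-sphere glyph with s subdivisions has roughly s*(s-1)+2 vertices.
    verts = subdivisions * (subdivisions - 1) + 2
    return n_particles * verts * bytes_per_vertex

def sprite_cost(n_particles, bytes_per_vertex=24):
    # GPU-accelerated primitives need a single vertex per particle.
    return n_particles * bytes_per_vertex

n = 10_000_000  # a large DEM dataset
print(f"glyphs : {glyph_cost(n) / 1e9:.1f} GB of vertex data")   # ~58.1 GB
print(f"sprites: {sprite_cost(n) / 1e9:.2f} GB of vertex data")  # ~0.24 GB
```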