ACSys/RDN experiences with Telstra’s experimental broadband network, first progress report
This report summarises our experiences with the EBN and indicates where we are now. Rather than presenting detailed performance measurements here, we focus primarily on bandwidth utilisation and network management. A more comprehensive set of performance measurements is in preparation and will be presented in a subsequent report.
LFRic: meeting the challenges of scalability and performance portability in weather and climate models
This paper describes LFRic: the new weather and climate modelling system being developed by the UK Met Office to replace the existing Unified Model in preparation for exascale computing in the 2020s. LFRic uses the GungHo dynamical core and runs on a semi-structured cubed-sphere mesh. The design of the supporting infrastructure follows object-oriented principles to facilitate modularity and the use of external libraries where possible. In particular, a 'separation of concerns' between the science code and the parallel code is imposed to promote performance portability. An application called PSyclone, developed at the STFC Hartree Centre, can generate the parallel code, enabling deployment of a single-source science code onto different machine architectures. This paper provides an overview of the scientific requirements, the design of the software infrastructure, and examples of PSyclone usage. Preliminary performance results show strong scaling and indicate that hybrid MPI/OpenMP performs better than pure MPI.
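As a rough illustration of the 'separation of concerns' described in the abstract, the toy sketch below (in Python; all names are hypothetical and this is not the actual PSyclone API or LFRic code) separates a science kernel, which knows nothing about the mesh traversal or parallelism, from a "PSy layer" that loops over mesh columns. In LFRic, PSyclone generates that layer, including any MPI/OpenMP decoration, from the single-source science code.

```python
# Illustrative sketch only: a toy analogue of LFRic's separation of concerns.
# All function names here are made up for illustration.

def increment_kernel(column, dt):
    """Science code: advance one vertical column by one time step.
    Contains no knowledge of the mesh layout or parallel decomposition."""
    return [value + dt for value in column]

def psy_layer(field, dt):
    """Parallel-system ('PSy') layer: iterates the kernel over mesh columns.
    In LFRic this layer is generated by PSyclone rather than hand-written,
    so the same science code can be retargeted to different architectures."""
    return [increment_kernel(column, dt) for column in field]

# A tiny 'field': two columns with two vertical levels each.
field = [[0.0, 1.0], [2.0, 3.0]]
print(psy_layer(field, dt=0.5))  # [[0.5, 1.5], [2.5, 3.5]]
```

The point of the split is that only the PSy layer would need regenerating to move from, say, pure MPI to hybrid MPI/OpenMP; the science kernel is untouched.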
The meteorology of Black Saturday
The meteorological conditions are investigated over the state of Victoria, Australia on 7 February 2009, the day of the 'Black Saturday' fires. Daytime temperatures exceeding 45°C, strong surface winds and extremely dry conditions combined to produce the worst fire weather conditions on record. A high-resolution nested simulation with the UK Met Office Unified Model and available observations are used to identify the important mesoscale features of the day. The highest-resolution domain has a horizontal grid spacing of 444 m and reproduces most aspects of the observed meteorological conditions. These include organized horizontal convective rolls, a strong late-afternoon cool change with many of the characteristics of an unsteady gravity current, a weaker late-evening cold front and propagating nocturnal bores. These mesoscale phenomena introduce variability in the winds, temperature and humidity at short temporal and spatial scales, which in turn leads to large spatial and temporal variability in fire danger.
Range and stopping power dependence of heavy ion-induced demagnetizations of ferromagnetic materials
Crossing the chasm: how to develop weather and climate models for next generation computers?
Weather and climate models are complex pieces of software which include many individual components, each of which is evolving under pressure to exploit advances in computing to enhance some combination of a range of possible improvements (higher spatio-temporal resolution, increased fidelity in terms of resolved processes, more quantification of uncertainty, etc.). However, after many years of a relatively stable computing environment with little choice in processing architecture or programming paradigm (basically X86 processors using MPI for parallelism), the existing menu of processor choices includes significant diversity, and more is on the horizon. This computational diversity, coupled with ever increasing software complexity, leads to the very real possibility that weather and climate modelling will arrive at a chasm which will separate scientific aspiration from our ability to develop and/or rapidly adapt codes to the available hardware.
In this paper we review the hardware and software trends which are leading us towards this chasm, before describing current progress on some of the tools that may help us bridge it. This brief introduction to current tools and plans is followed by a discussion outlining the scientific requirements for quality model codes which have satisfactory performance and portability, while simultaneously supporting productive scientific evolution. We assert that the existing method of incremental model improvement, employing small steps which adjust to the changing hardware environment, is likely to be inadequate for crossing the chasm between aspiration and hardware at a satisfactory pace, in part because institutions cannot have all the relevant expertise in house. Instead, we outline a methodology based on large community efforts in engineering and standardisation, which will depend on identifying a taxonomy of key activities – perhaps based on existing efforts to develop domain-specific languages, identify common patterns in weather and climate codes, and develop community approaches to commonly needed tools and libraries – and then collaboratively building up those key components. Such a collaborative approach will depend on institutions, projects, and individuals adopting new interdependencies and ways of working.
ISSN: 1991-9603; ISSN: 1991-959