116 research outputs found

    SPH-EXA: Enhancing the Scalability of SPH codes Via an Exascale-Ready SPH Mini-App

    Numerical simulations of fluids in astrophysics and computational fluid dynamics (CFD) are among the most computationally demanding calculations in terms of sustained floating-point operations per second (FLOP/s). These simulations are expected to benefit significantly from future Exascale computing infrastructures, which will perform 10^18 FLOP/s. The performance of SPH codes is, in general, adversely impacted by several factors, such as multiple time-stepping, long-range interactions, and/or boundary conditions. In this work, an extensive study of three SPH implementations, SPHYNX, ChaNGa, and XXX, is performed to gain insight into, and to expose, the limitations and characteristics of the codes. These codes are the starting point of an interdisciplinary co-design project, SPH-EXA, for the development of an Exascale-ready SPH mini-app. We implemented a rotating square patch as a joint test simulation for the three SPH codes and analyzed their performance on a modern HPC system, Piz Daint. The performance profiling and scalability analysis conducted on the three parent codes exposed their performance issues, such as load imbalance in both MPI and OpenMP. Two-level load balancing has been successfully applied to SPHYNX to overcome its load imbalance. The performance analysis shapes and drives the design of the SPH-EXA mini-app towards the use of efficient parallelization methods, fault-tolerance mechanisms, and load-balancing approaches. Comment: arXiv admin note: substantial text overlap with arXiv:1809.0801
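    To illustrate the kind of rank-level rebalancing such an analysis motivates, the sketch below (Python with mpi4py, purely illustrative and not the SPH-EXA or SPHYNX implementation) measures the per-rank cost of a time step and shifts a proportional slice of particles from overloaded ranks to the currently cheapest one; the thread-level (OpenMP) half of a two-level scheme is not shown.

```python
# Illustrative sketch only: rank-level load balancing by timing each step and
# moving particles from overloaded ranks to the fastest rank. Not SPH-EXA code.
import time
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

def balanced_step(particles, compute_step):
    """Run one SPH step, measure its cost, then rebalance particle counts."""
    t0 = time.perf_counter()
    compute_step(particles)                 # density, forces, etc. (user supplied)
    elapsed = time.perf_counter() - t0

    times = comm.allgather(elapsed)         # every rank sees all per-rank costs
    mean = sum(times) / size
    overload = (elapsed - mean) / mean if mean > 0 else 0.0

    # Overloaded ranks give away a slice of their particles proportional
    # to how far above the mean cost they are.
    n_move = int(max(0.0, overload) * 0.5 * len(particles))
    outgoing = [particles.pop() for _ in range(min(n_move, len(particles)))]

    # Surplus particles are handed to the currently cheapest rank.
    surplus = comm.allgather(outgoing)
    if rank == times.index(min(times)):
        for chunk in surplus:
            particles.extend(chunk)
    return particles
```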

    Principles for automated and reproducible benchmarking

    The diversity of processor technology used by High Performance Computing (HPC) facilities is growing, so applications must be written in such a way that they can attain high levels of performance across a range of different CPUs, GPUs, and other accelerators. Measuring application performance across this wide range of platforms becomes crucial, but there are significant challenges to doing so rigorously and in a time-efficient way, while ensuring that results are scientifically meaningful, reproducible, and actionable. This paper presents a methodology for measuring and analysing the performance portability of a parallel application and shares a software framework which combines and extends adopted technologies to provide a usable benchmarking tool. We demonstrate the flexibility and effectiveness of the methodology and benchmarking framework by showcasing a variety of benchmarking case studies which utilise a stable of supercomputing resources at a national scale.
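    As a concrete point of reference, performance portability is commonly summarised with the harmonic-mean metric of Pennycook et al.; the sketch below computes it from per-platform efficiencies. The platform names and efficiency values are invented for illustration and are not results from the paper.

```python
# Harmonic-mean performance-portability metric (Pennycook et al.).
# The platform names and efficiencies below are illustrative only.
def performance_portability(efficiency_by_platform):
    """PP = |H| / sum(1/e_i) over the platform set H; 0 if any platform
    is unsupported (missing or zero efficiency)."""
    effs = list(efficiency_by_platform.values())
    if not effs or any(e <= 0.0 for e in effs):
        return 0.0
    return len(effs) / sum(1.0 / e for e in effs)

# Example: fraction-of-peak (architectural) efficiency on three systems.
pp = performance_portability({"cpu-a": 0.62, "gpu-b": 0.48, "gpu-c": 0.55})
print(f"PP = {pp:.2f}")   # harmonic mean of the three efficiencies
```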

    Best practices and QA protocols for code development

    The OperaHPC project aims to improve the numerical capabilities of 3D fuel performance modelling as part of its strategic objectives. To achieve this goal, an open-source approach has been chosen for the tools developed in the framework of the project, namely MMM and OFFBEAT, the latter coupled to the SCIANTIX code. As the open-source approach is relatively new in the domain of nuclear safety studies, this document presents a framework for achieving quality assurance (QA) targets for the open-source scientific computing tools within the OperaHPC project. First, the document provides a brief review of the most common QA programs and standards employed in the field, with a particular focus on the aspects that are most relevant to OperaHPC. Then, it discusses modern software development practices that improve code quality, highlighting the importance of revision control systems, testing methodologies, and documentation. Finally, it describes the concept of a governance model for regulating interactions between contributors, users, and decision-makers. The framework presented in this document provides a backbone for the verification and validation actions that will be carried out within the project and contributes to the qualification of the MMM, OFFBEAT, and SCIANTIX tools for nuclear safety studies.
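    As a concrete illustration of the testing practices referred to above, the sketch below shows what a pytest-style regression test might look like; the function fission_gas_release and its reference value are hypothetical placeholders, not an actual OFFBEAT or SCIANTIX interface.

```python
# Hypothetical regression-test sketch (pytest). The model and the reference
# value are placeholders, not real OFFBEAT/SCIANTIX behaviour.
import math
import pytest

def fission_gas_release(burnup_mwd_per_kg, temperature_k):
    # Placeholder model, included only so the test is self-contained.
    return 1.0 - math.exp(-1e-4 * burnup_mwd_per_kg * temperature_k / 1000.0)

def test_release_fraction_is_bounded():
    frac = fission_gas_release(burnup_mwd_per_kg=40.0, temperature_k=1200.0)
    assert 0.0 <= frac <= 1.0

def test_reference_case_is_reproduced():
    # Regression check against a previously validated reference result.
    assert fission_gas_release(40.0, 1200.0) == pytest.approx(0.0048, rel=1e-2)
```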

    European HPC Landscape

    This paper provides an overview of the European HPC landscape, supported by a survey designed by the PRACE-5IP project that reached more than 50 of the most influential stakeholders of HPC in Europe. It focuses on Tier-0 systems at the European level, which provide high-end computing and data-analysis resources. The different actors are presented and the services they provide are analysed in order to identify overlaps and gaps, complementarity, and opportunities for collaboration. A new pan-European HPC portal is proposed in order to gather all information in one place and facilitate access to the portfolio of services offered by the European HPC communities.

    A Framework To Develop Business Models For The Exploitation Of Disruptive Technology

    Adopting new technology to expand business prospects is not a new trend. It certainly brings innovation and new opportunities to a business, but it also raises several challenges. This research addresses the challenges of business modelling in relation to disruptive technologies. Emerging technologies are very dynamic, resulting in continuous new developments, so businesses need to adjust their business models to keep pace with this dynamic nature of technology. This research aims to create a conceptual framework and a related methodology for developing business models for the commercial use of disruptive technologies. The research evaluates the gaps in the major business model development methodologies and argues that these methodologies are not adequate for businesses that offer high-end products and services to their customers. It creates a framework for making a methodical comparison among different business model methodologies. Based on that framework, it conducts a systematic comparison of five significant business model development methodologies to identify possible flaws. It analyses the business elements of two use cases in which a disruptive technology, in this case cloud computing in the form of cloud-based simulation, offers significant value to customers. Thereafter, it compares the components of all five identified methodologies with each other using the business elements of the selected use case. While the analysis highlights the differences and similarities between the methodologies, it also reveals the limitations of the current approaches and the need to further decompose technological elements. Therefore, the study carries out an empirical investigation based on selective sampling. Seven real-life business use cases that apply a disruptive technology (i.e., cloud/HPC-based simulation as a solution based on cloud computing and high-performance computing) have been explored, involving 30 individual companies. A thematic analysis of these use cases, based on a detailed report provided by a European research project, is then conducted. In addition, three months of observation were carried out by participating in the same project as a ‘Research Associate’ from July 2019 to September 2019. This three-month observation not only provided access to 26 business use cases and their relevant documents, but also helped validate the information provided and bring clarity to the collected data. Moreover, the selected business use cases are particularly useful for identifying the technology elements required to create the proposed framework. The analysis has resulted in an understanding of the dynamics of the interrelationship of social and technical factors in developing new technological solutions, which drives the development of new business models devised for delivering solutions that exploit disruptive technologies. Based on this understanding, the research extends a widely used business model ontology (Osterwalder’s Business Model Ontology) and offers a new business model methodology with the introduction of new, technology-related business model elements. The technological elements are identified as the results of the above empirical analysis. Utilising this extended ontology, a novel methodology for developing business models for the exploitation of disruptive technologies is suggested and its applicability is demonstrated in the example of cloud-based simulation case studies.
The research makes three main contributions. Firstly, it uses a systematic approach to identify that technological elements are not explicitly defined in the analysed business model methodologies, and that the factors of disruption, viewed from a socio-materiality perspective, are missing. Secondly, it conducts an empirical analysis and defines the specific social and technological elements, such as ‘Dynamic Capabilities’, ‘Competition Network’, ‘Technology Type’, ‘Technology Infrastructure’, ‘Technology Platform’, and ‘Technology Network’, that are needed to create a new business model methodology. Finally, it extends an existing business model ontology (developed by Alexander Osterwalder) and constructs a new ontological framework with an accompanying methodology for developing business models, particularly for organisations that introduce technological solutions as their main value using disruptive technologies.

    SLA-based trust model for secure cloud computing

    Cloud computing has changed the strategy used for providing distributed services to many business and government agents. Cloud computing delivers scalable, on-demand services to most users in different domains. However, this new technology has also created many challenges for service providers and customers, especially for those users who already own complicated legacy systems. This thesis discusses the challenges of, and proposes solutions to, the issues of dynamic pricing, management of service level agreements (SLA), performance measurement methods, and trust management for cloud computing.
In cloud computing, a dynamic pricing scheme is very important to allow cloud providers to estimate the price of cloud services. Moreover, a dynamic pricing scheme can be used by cloud providers to optimize the total cost of cloud data centres and correlate the price of a service with its revenue model. In the context of cloud computing, dynamic pricing methods from the perspective of cloud providers and cloud customers are missing from the existing literature. A dynamic pricing scheme for cloud computing must take into account all the requirements of building and operating cloud data centres. Furthermore, a cloud pricing scheme must consider issues of service level agreements with cloud customers.
I propose a dynamic pricing methodology which provides adequate estimating methods for decision makers who want to calculate the benefits and assess the risks of using cloud technology. I analyse the results and evaluate the solutions produced by the proposed scheme. I conclude that my proposed dynamic pricing scheme can be used to increase the total revenue of cloud service providers and help cloud customers select providers offering a good quality of service.
Regarding the concept of SLA, I provide an SLA definition in the context of cloud computing, with the aim of presenting a clearly structured SLA for cloud users and improving the means of establishing a trustworthy relationship between service provider and customer. In order to provide a reliable methodology for measuring the performance of cloud platforms, I develop performance metrics to measure and compare the scalability of the virtualization resources of cloud data centres. First, I discuss the need for a reliable method of comparing the performance of the various cloud services currently being offered. Then, I develop different types of metrics and propose a suitable methodology for measuring scalability using them. I focus on virtualization resources such as CPU, storage disk, and network infrastructure.
To solve the problem of evaluating the trustworthiness of cloud services, this thesis develops a model for each of the trust dimensions of Infrastructure as a Service (IaaS) using fuzzy-set theory. I use the Takagi-Sugeno fuzzy-inference approach to derive an overall trust value for cloud providers. Because it is not easy to evaluate cloud metrics for all types of cloud services, this thesis uses Infrastructure as a Service (IaaS) as the main example when collecting data and applying the fuzzy model to evaluate trust in cloud computing. Tests and results are presented to evaluate the effectiveness and robustness of the proposed model.
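    For readers unfamiliar with Takagi-Sugeno inference, the sketch below shows the mechanism on a toy trust model: inputs are fuzzified with triangular membership functions, each rule's firing strength weights a linear consequent, and the output is their weighted average. The membership functions, rules, and inputs are invented for illustration and are not the calibrated model from the thesis.

```python
# Toy Takagi-Sugeno trust model: illustrative membership functions and rules only.
def tri(x, a, b, c):
    """Triangular membership function with corners a, b, c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def trust_score(availability, scalability):
    # Each rule: (firing strength, linear consequent z = f(inputs)).
    rules = [
        (min(tri(availability, 0.5, 1.0, 1.5), tri(scalability, 0.5, 1.0, 1.5)),
         0.6 * availability + 0.4 * scalability),          # both high -> high trust
        (min(tri(availability, -0.5, 0.0, 0.5), tri(scalability, 0.5, 1.0, 1.5)),
         0.3 * scalability),                               # low availability
        (min(tri(availability, 0.5, 1.0, 1.5), tri(scalability, -0.5, 0.0, 0.5)),
         0.3 * availability),                              # low scalability
    ]
    num = sum(w * z for w, z in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.0

print(trust_score(availability=0.9, scalability=0.7))      # weighted-average output
```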

    Applying system dynamics modelling to building resilient logistics: a case of the Humber Ports Complex

    This research employs system dynamics modelling to analyse the structural behaviour of the interactions between Disaster Preparedness, Environment Instability, and Resilience in a maritime logistics chain, as a response to policy change or strategic risk management interventions at ports on the Humber Estuary.
Port authorities, logistics operators, agencies, transporters, and researchers have observed that disasters interrupt the free flow of supply chains and have the potential to disrupt the overall performance of a logistics chain. There is strong evidence of a rise in the frequency, magnitude, and disruption potential of catastrophic events in recent times (e.g. the 9/11 attack, the Japanese earthquake/tsunami and the subsequent nuclear disaster, Hurricanes Katrina and Haiyan, Super Storm Sandy, and many more). However, it appears that risk managers are not able to anticipate the outcomes of risk management decisions, and how those strategic interventions can affect the future of the logistics chain. Management appears to misjudge (or miscalculate) risks, perhaps due to the assumed complexity, the unpredictability of associated disruptions, and sometimes due to individual managerial approaches to risk management. Notwithstanding these uncertainties and assumptions, investors and regulators have become increasingly intolerant of risk mismanagement. Shipowners and port authorities tend to manage cost instead of risk; hence they appear to invest little time and few resources in managing disruptions in their logistics chains, even though they seem to conduct risk assessments frequently. We suggest that disaster preparedness leading to resilience in the maritime logistics chain is the best alternative for preventing or reducing the impacts of disruptions from catastrophes.
We aim to improve the current level of understanding of the sources of disruptions in the port/maritime logistics system by analysing the interdependencies between key variables. The dynamic models developed in this research reveal strong influence relationships (interdependencies) between Disaster Preparedness, Environment Instability, and Resilience. We found that potential sources of disruption along the spokes of the maritime logistics system can be related to port physics, although the subtle triggering factors appear to be related to port size. We also found that policy interventions geared towards risk management have the potential to produce unintended consequences, largely due to unacknowledged conditions. The relevance of the research and the SD models is therefore to provide strategic policy makers with a real-time decision-evaluation tool that can justify acceptance or rejection of a risk management intervention prior to its implementation.
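    To show the kind of feedback structure such a system dynamics model captures, the sketch below integrates three coupled stocks with Euler steps; the stock names echo the variables above, but the coefficients, shock profile, and equations are illustrative assumptions, not the calibrated Humber Ports model.

```python
# Illustrative stock-and-flow sketch (Euler integration); coefficients are made up.
def simulate(steps=120, dt=0.25):
    preparedness, resilience, instability = 0.4, 0.5, 0.3   # initial stock levels
    history = []
    for t in range(steps):
        shock = 0.6 if t == 40 else 0.0                      # one-off disruption
        # Instability rises with shocks and is damped by resilience.
        d_instability = shock + 0.05 - 0.4 * resilience * instability
        # Preparedness investment responds to perceived instability and decays.
        d_preparedness = 0.3 * instability - 0.1 * preparedness
        # Resilience is built from preparedness and eroded by instability.
        d_resilience = 0.25 * preparedness - 0.2 * instability * resilience
        instability += dt * d_instability
        preparedness += dt * d_preparedness
        resilience += dt * d_resilience
        history.append((t * dt, preparedness, resilience, instability))
    return history

for t, p, r, i in simulate()[::20]:
    print(f"t={t:5.1f}  preparedness={p:.2f}  resilience={r:.2f}  instability={i:.2f}")
```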

    A Person-Centred Approach to Performance Measurement in the Health System

    Health systems strive to improve health outcomes in the populations they serve. In Australia, a national health system performance framework supports this aim. A review of performance measures showed a focus on organisational activity rather than outcomes for people. South Australia (SA) also set strategic targets for improved healthy life expectancy as influenced by premature mortality, health-related quality of life (HRQoL), and potentially preventable hospitalisation (PPH). There are unmet information needs and capacity for improvement in the application of each of these measures.
Aims: This thesis aims to help inform system improvement by reorienting performance measurement toward outcomes of importance to people receiving healthcare – so-called ‘person-centred’ measures. The thesis provides empirical examples that help: (i) reframe premature mortality measures to account for survival time from disease detection until death; (ii) extend morbidity measurement to describe a person’s self-reported state of health; and (iii) enhance enumeration of people experiencing PPH in emergency departments (EDs) and as admitted inpatients.
Methods: Four studies stem from the candidate’s projects in SA: monitoring summary population health; piloting an advanced cancer data system; steering the first Aboriginal-specific population survey; and quantifying individuals experiencing PPH. Study one introduces a new method that quantifies mortality-related cancer burden, using an example based on cancer registrations among Aboriginal and non-Aboriginal cohorts matched one-to-one on sex, year of birth, primary cancer site, and year of diagnosis. Cancer burden is expressed as the PREmature Mortality to IncidencE Ratio (PREMIER), the ratio of years of life expectancy lost due to cancer to the life-expectancy years at risk at the time of cancer diagnosis for each person. Study two presents the first self-reported HRQoL utility results for Aboriginal South Australians. Population-weighted HRQoL was measured using the SF-6D and SF-12 version 2 in face-to-face interviews. Analyses describe relationships between HRQoL and respondent characteristics, and the characteristics of interviewees completing the HRQoL questions. Studies three and four consider ED and inpatient PPH respectively. Those studies extend current reporting practices by shifting analyses from PPH as a proportion of activity to a person-centred approach that counts individuals experiencing PPH and the frequency of their events. Both studies draw on person-linked public hospital records within a period prevalence study design. Study three compares ED presentations among people born in Refugee and Asylum Seeker Countries (RASC); Aboriginal people; those aged 75 years or more; and all other adults. Study four determines disparities in rates, length of stay (LOS), and hospital costs of PPH for chronic conditions among Aboriginal and non-Aboriginal people.
Results: Study one included records for 777 Aboriginal people diagnosed with cancer from 1990 to 2010. Aboriginal people (n=777) had 57% (95% CI 52%-60%) more scope for improved cancer mortality outcomes two years after diagnosis compared to non-Aboriginal people of equivalent age, sex, diagnosis year, and cancer site. PREMIER informs interventions by identifying people with the greatest capacity to benefit from earlier detection, treatment, and reduced premature mortality. Study two showed substantial variation in self-reported HRQoL among 399 Aboriginal people in 2010/11. For example, average SF-6D results varied from 0.82 (95% CI 0.81-0.83) among those with no chronic conditions to 0.63 (95% CI 0.59-0.67) where three or more conditions were reported. Comparatively lower response to the HRQoL questions was evident among people speaking Aboriginal languages, in non-urban settings, and with multi-morbidities. Further developing culturally safe, self-reported HRQoL instruments may improve participation by vulnerable and health-compromised community members. Study three’s comparisons among adult residents attending EDs in 2005–2006 to 2010–2011 showed the greatest disparities in GP-type presentations among people from RASC compared to non-Aboriginal residents aged less than 75 years (423.7 and 240.1 persons per 1,000 population respectively). Study four’s inpatient PPH for chronic conditions showed that Aboriginal people experienced more first-time events than others (11.5 versus 6.2 per 1,000 persons per year) and substantially longer total length of stay (11.7 versus 9.0 days). Improved understanding of peoples’ PPH informs tailored services addressing primary healthcare needs.
Conclusion: The studies assembled in this thesis help align performance measurement with outcomes for people and provide support for system improvement and health reform. While the labour-intensive collaborations necessary may limit development, current opportunities for advancing research within government agencies are discussed. Australia’s health system performance measures remain underdeveloped. This thesis contributes to addressing that need by focussing attention on the people the system exists to serve – effectively, efficiently, and equitably. Thesis (Ph.D.) -- University of Adelaide, School of Public Health, 202
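    To make the PREMIER definition above concrete, the sketch below computes the ratio for a single person; the ages and life-table value are invented for illustration and are not data from the thesis.

```python
# Worked PREMIER-style example for one person; figures are illustrative only.
def premier(age_at_diagnosis, age_at_death, life_expectancy_at_diagnosis):
    """Years of life expectancy lost to the cancer divided by the
    life-expectancy years at risk at the time of diagnosis."""
    years_at_risk = life_expectancy_at_diagnosis
    years_lived_after_diagnosis = age_at_death - age_at_diagnosis
    years_lost = max(0.0, years_at_risk - years_lived_after_diagnosis)
    return years_lost / years_at_risk

# A person diagnosed at 55 with 27 expected remaining years who dies at 60
# loses 22 of those 27 years: PREMIER = 22 / 27, roughly 0.81.
print(round(premier(55, 60, 27.0), 2))
```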