16,178 research outputs found

    Towards a dynamic earthquake risk framework for Switzerland

    Scientists from different disciplines at ETH Zurich are developing a dynamic, harmonised, and user-centred earthquake risk framework for Switzerland, relying on a continuously evolving earthquake catalogue generated by the Swiss Seismological Service (SED) using the national seismic networks. This framework uses all available information to assess seismic risk at various stages and facilitates widespread dissemination and communication of the resulting information. Earthquake risk products and services include operational earthquake (loss) forecasting (OE(L)F), earthquake early warning (EEW), ShakeMaps, rapid impact assessment (RIA), structural health monitoring (SHM), and recovery and rebuilding efforts (RRE). Standardisation of products and workflows across various applications is essential for achieving broad adoption, universal recognition, and maximum synergies. In the Swiss dynamic earthquake risk framework, the harmonisation of products into seamless solutions that access the same databases, workflows, and software is a crucial component. A user-centred approach utilising quantitative and qualitative social science tools like online surveys and focus groups is a significant innovation featured in all products and services. Here we report on the key considerations and developments of the framework and its components. This paper may serve as a reference guide for other countries wishing to establish similar services for seismic risk reduction.
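
    The harmonisation idea above, in which every product reads the same continuously updated catalogue through shared workflows, can be pictured with a minimal sketch. The class names, fields and products below are assumptions for illustration only and are not part of the SED software stack.

        from dataclasses import dataclass
        from typing import List, Protocol


        @dataclass
        class CatalogueEvent:
            # One entry of the continuously evolving earthquake catalogue (hypothetical schema).
            event_id: str
            magnitude: float
            lat: float
            lon: float
            depth_km: float


        class RiskProduct(Protocol):
            # Common interface shared by all harmonised products.
            def update(self, events: List[CatalogueEvent]) -> None: ...


        class ShakeMapProduct:
            def update(self, events: List[CatalogueEvent]) -> None:
                for ev in events:
                    print(f"ShakeMap: ground-motion field for {ev.event_id}")


        class RapidImpactProduct:
            def update(self, events: List[CatalogueEvent]) -> None:
                for ev in events:
                    print(f"RIA: loss estimate for M{ev.magnitude:.1f} event {ev.event_id}")


        def dispatch(catalogue: List[CatalogueEvent], products: List[RiskProduct]) -> None:
            # One shared data source, many harmonised consumers.
            for product in products:
                product.update(catalogue)


        dispatch([CatalogueEvent("ch2024abcd", 4.3, 46.8, 8.2, 8.0)],
                 [ShakeMapProduct(), RapidImpactProduct()])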

    Risk mitigation decisions for IT security

    Enterprises must manage their information risk as part of their larger operational risk management program. Managers must choose how to control for such information risk. This article defines the flow risk reduction problem and presents a formal model using a workflow framework. Three different control placement methods are introduced to solve the problem, and a comparative analysis is presented using a robust test set of 162 simulations. One year of simulated attacks is used to validate the quality of the solutions. We find that the math programming control placement method yields substantial improvements in terms of risk reduction and risk reduction on investment when compared to heuristics that would typically be used by managers to solve the problem. The contribution of this research is to provide managers with methods to substantially reduce information and security risks, while obtaining significantly better returns on their security investments. By using a workflow approach to control placement, which guides the manager to examine the entire infrastructure in a holistic manner, this research is unique in that it enables information risk to be examined strategically. © 2014 ACM
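
    A hedged sketch of the underlying placement problem may help: controls are placed under a budget so as to maximise risk reduction, and an exhaustive search (standing in for the paper's math-programming method) is compared with the kind of per-cost greedy heuristic a manager might use. The control names, costs and reduction values below are invented for illustration and do not come from the paper.

        from itertools import combinations

        # Hypothetical candidate controls: (name, cost, risk reduction if placed).
        CONTROLS = [("encrypt-link", 3, 40.0), ("dlp-gateway", 5, 55.0),
                    ("access-review", 2, 20.0), ("segmentation", 4, 35.0)]
        BUDGET = 7


        def greedy(controls, budget):
            # Rule-of-thumb heuristic: best reduction-per-cost first.
            chosen, spent = [], 0
            for name, cost, reduction in sorted(controls, key=lambda c: c[2] / c[1], reverse=True):
                if spent + cost <= budget:
                    chosen.append(name)
                    spent += cost
            return chosen, sum(r for n, _, r in controls if n in chosen)


        def exact(controls, budget):
            # Exhaustive search over control subsets, standing in for the math program.
            best_names, best_red = [], 0.0
            for k in range(len(controls) + 1):
                for subset in combinations(controls, k):
                    cost = sum(c for _, c, _ in subset)
                    reduction = sum(r for _, _, r in subset)
                    if cost <= budget and reduction > best_red:
                        best_names, best_red = [n for n, _, _ in subset], reduction
            return best_names, best_red


        print("greedy:", greedy(CONTROLS, BUDGET))   # finds 60.0 of risk reduction
        print("exact :", exact(CONTROLS, BUDGET))    # finds 75.0, as the optimisation view suggests

    With these toy numbers the exact placement outperforms the greedy rule, which mirrors the abstract's claim that the math-programming method beats typical managerial heuristics.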

    Enhancing Energy Production with Exascale HPC Methods

    High Performance Computing (HPC) resources have become a key enabler for tackling ever more ambitious challenges in many disciplines. In this step towards exascale, an explosion in available parallelism and the use of special-purpose processors are crucial. With this goal, the HPC4E project applies new exascale HPC techniques to energy industry simulations, customizing them where necessary and going beyond the state of the art in the exascale HPC simulations required for different energy sources. In this paper, a general overview of these methods is presented, as well as some specific preliminary results. The research leading to these results has received funding from the European Union's Horizon 2020 Programme (2014-2020) under the HPC4E Project (www.hpc4e.eu), grant agreement n° 689772, from the Spanish Ministry of Economy and Competitiveness under the CODEC2 project (TIN2015-63562-R), and from the Brazilian Ministry of Science, Technology and Innovation through Rede Nacional de Pesquisa (RNP). Computer time on the Endeavour cluster was provided by the Intel Corporation, which enabled us to obtain the presented experimental results in uncertainty quantification in seismic imaging.
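
    As a purely illustrative aside, the uncertainty quantification mentioned for seismic imaging is typically an embarrassingly parallel Monte Carlo computation. The toy forward model and parameter values below are assumptions and are unrelated to the HPC4E codes.

        import random
        from multiprocessing import Pool


        def travel_time(velocity_kms: float, distance_km: float = 10.0) -> float:
            # Toy forward model standing in for a full seismic imaging kernel.
            return distance_km / velocity_kms


        def one_sample(seed: int) -> float:
            # Each worker evaluates the forward model for one sampled velocity.
            rng = random.Random(seed)
            velocity = rng.gauss(3.0, 0.2)  # uncertain subsurface velocity (km/s)
            return travel_time(velocity)


        if __name__ == "__main__":
            with Pool() as pool:
                samples = pool.map(one_sample, range(1000))
            mean = sum(samples) / len(samples)
            var = sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)
            print(f"travel time: mean={mean:.3f} s, std={var ** 0.5:.3f} s")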

    Towards business integration as a service 2.0 (BIaaS 2.0)

    The Cloud Computing Business Framework (CCBF) is a framework for the design and implementation of Cloud Computing solutions. This proposal focuses on how CCBF can help to address linkage in Cloud Computing implementations. This leads to the development of Business Integration as a Service 1.0 (BIaaS 1.0), which allows different services, roles and functionalities to work together in a linkage-oriented framework where the outcome of one service can be the input to another, without the need to translate between domains or languages. BIaaS 2.0 aims to allow automation, enhanced security, advanced risk modelling and improved collaboration between processes in BIaaS 1.0. The benefits of adopting BIaaS 1.0 and developing BIaaS 2.0 are illustrated using a case study from the University of Southampton and several collaborators, including IBM US. BIaaS 2.0 can work with mainstream technologies such as scientific workflows, and the proposal and demonstration of BIaaS 2.0 are intended to benefit both industry and academia. © 2011 IEEE
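
    The linkage idea, where the outcome of one service feeds directly into the next without translation between domains, can be sketched as simple function composition. The services and payload fields below are hypothetical and are not taken from the CCBF or BIaaS implementations.

        from typing import Callable, Dict, List

        Payload = Dict[str, float]
        Service = Callable[[Payload], Payload]


        def risk_modelling(data: Payload) -> Payload:
            # Hypothetical first service: derive a risk score from its inputs.
            data["risk_score"] = data["exposure"] * data["volatility"]
            return data


        def pricing(data: Payload) -> Payload:
            # Hypothetical second service: consume the first service's output directly.
            data["premium"] = 100.0 + 50.0 * data["risk_score"]
            return data


        def run_pipeline(services: List[Service], data: Payload) -> Payload:
            # Linkage-oriented composition: each service reads exactly what the previous one produced.
            for service in services:
                data = service(data)
            return data


        print(run_pipeline([risk_modelling, pricing], {"exposure": 2.0, "volatility": 0.3}))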

    The case for cloud service trustmarks and assurance-as-a-service

    Cloud computing represents a significant economic opportunity for Europe. However, this growth is threatened by adoption barriers largely related to trust. This position paper examines trust and confidence issues in cloud computing and advances a case for addressing them through the implementation of a novel trustmark scheme for cloud service providers. The proposed trustmark would be both active and dynamic, featuring multi-modal information about the performance of the underlying cloud service. The trustmarks would be informed by live performance data from the cloud service provider or, ideally, from an independent third-party accountability and assurance service that would communicate up-to-date information on service performance and dependability. By combining assurance measures with a remediation scheme, cloud service providers could both signal dependability to customers and the wider marketplace and provide customers, auditors and regulators with a mechanism for determining accountability in the event of failure or non-compliance. As a result, the trustmarks would convey to consumers of cloud services and other stakeholders that strong assurance and accountability measures are in place for the service in question, thereby addressing trust and confidence issues in cloud computing.
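
    One way to picture an active, dynamic trustmark is as a score recomputed from live performance data supplied by the provider or by a third-party assurance service. The metrics, weights and thresholds in the sketch below are assumptions, not part of the proposed scheme.

        from dataclasses import dataclass


        @dataclass
        class ServiceMetrics:
            availability: float      # fraction of successful probes, 0..1
            p95_latency_ms: float    # 95th-percentile response time
            open_incidents: int      # unresolved SLA breaches


        def trustmark_level(m: ServiceMetrics) -> str:
            # Aggregate live metrics into a single, displayable assurance level.
            score = 0.0
            score += 60.0 * m.availability
            score += 30.0 * max(0.0, 1.0 - m.p95_latency_ms / 1000.0)
            score -= 10.0 * m.open_incidents
            if score >= 80.0:
                return "GOLD"
            if score >= 60.0:
                return "SILVER"
            return "UNDER REVIEW"


        print(trustmark_level(ServiceMetrics(availability=0.999, p95_latency_ms=180.0, open_incidents=0)))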

    Autonomic care platform for optimizing query performance

    Background: As the amount of information in electronic health care systems increases, data operations become more complicated and time-consuming. Intensive care platforms require timely processing of data retrievals to guarantee the continuous display of recent patient data, on which physicians and nurses rely for their decision making. Manual optimization of query execution has become difficult due to the increased number of queries across multiple sources, so a more automated approach is needed to improve database query performance. The autonomic computing paradigm promises an approach in which the system adapts itself and acts as a self-managing entity, taking actions with limited human intervention. Despite the use of autonomic control loops in network and software systems, this approach has not yet been applied to health information systems. Methods: We extend the COSARA architecture, an infection surveillance and antibiotic management service platform for the Intensive Care Unit (ICU), with self-managed components to increase the performance of data retrievals. We used real-life ICU COSARA queries to analyse slow performance and to measure the impact of optimizations; more than 2 million COSARA queries are executed each day. Three control loops that monitor executions and take action are proposed: reactive, deliberative and reflective. We focus on improving the execution time of microbiology queries directly related to the visual display of patients' data on bedside screens. Results: The results show that autonomic control loops are beneficial for optimizing data executions in the ICU. Applying the reactive control loop reduces the average execution time of microbiology results by 8.61%; combining the reactive and deliberative loops reduces average query time by 10.92%; and combining the reactive, deliberative and reflective loops yields a 13.04% reduction. Conclusions: We found that a controlled reduction of query executions improves performance for the end user. The implementation of autonomic control loops in an existing health platform, COSARA, has a positive effect on timely data visualization for physicians and nurses.
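
    A reactive control loop of the kind described can be sketched as monitor, analyse, act: execution times are tracked per query and a remedial action is triggered once a sliding-window average exceeds a threshold. The threshold, window size and caching action below are assumptions and do not reflect the COSARA implementation.

        from collections import defaultdict, deque
        from statistics import mean

        THRESHOLD_MS = 200.0   # assumed acceptable average execution time
        WINDOW = 20            # assumed sliding-window length

        recent_times = defaultdict(lambda: deque(maxlen=WINDOW))
        cached_queries = set()


        def record_execution(query_id: str, elapsed_ms: float) -> None:
            # Monitor: keep a sliding window of execution times per query.
            recent_times[query_id].append(elapsed_ms)
            react(query_id)


        def react(query_id: str) -> None:
            # Analyse and act: slow queries are served from a cache or materialised view.
            window = recent_times[query_id]
            if len(window) == window.maxlen and mean(window) > THRESHOLD_MS \
                    and query_id not in cached_queries:
                cached_queries.add(query_id)
                print(f"reactive loop: caching results of slow query {query_id}")


        # Simulate repeated slow executions of a microbiology-results query.
        for _ in range(25):
            record_execution("microbiology_results", 260.0)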

    Engineering Workflow: The Process in Product Data Technology

    The prevailing paradigm for enterprises in the new decade is undoubtedly speed. This enterprise view is driven by the availability of e-business technology that enables new forms of collaboration between companies. The rapid developments in e-business also have an impact on the future of engineering organizations. This paper focuses on the early phases of a product’s life cycle, i.e. between initial concept and release to manufacturing. New engineering workflow capabilities are presented that have been tailored to speed up the engineering of new products.
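
    As a purely illustrative sketch, an engineering workflow for these early life-cycle phases can be modelled as a set of allowed state transitions between concept and release to manufacturing; the states and transitions below are assumptions, not the capabilities presented in the paper.

        # Allowed transitions of a hypothetical early-phase engineering workflow.
        ALLOWED = {
            "concept": {"detailed_design"},
            "detailed_design": {"design_review", "concept"},
            "design_review": {"detailed_design", "release_to_manufacturing"},
            "release_to_manufacturing": set(),
        }


        def advance(state: str, target: str) -> str:
            # Enforce the workflow: only permitted transitions may occur.
            if target not in ALLOWED[state]:
                raise ValueError(f"transition {state} -> {target} is not allowed")
            return target


        state = "concept"
        for step in ("detailed_design", "design_review", "release_to_manufacturing"):
            state = advance(state, step)
            print("now in:", state)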
