3,555 research outputs found

    Development of an open-source platform for calculating losses from earthquakes

    Risk analysis has a critical role in the reduction of casualties and damages due to earthquakes. Recognition of this relation has led to a rapid rise in demand for accurate, reliable and flexible risk assessment numerical tools and software. In response to this need, the Global Earthquake Model (GEM) started the development of an open-source platform called OpenQuake for calculating seismic hazard and risk at different scales. Alongside this framework, several other tools to support users in creating their own models and visualizing their results are currently being developed, and will be made available as a Modelers Tool Kit (MTK). In this paper, a description of the architecture of OpenQuake is provided, highlighting the current data model, the workflow of the calculators and the main challenges raised when running this type of calculation at a global scale. In addition, a case study is presented for the Marmara Region (Turkey), in which the losses for a single event are estimated, as well as probabilistic risk for a 50-year time span.
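    A minimal sketch of the single-event (scenario) loss calculation described above, assuming a mean vulnerability function that maps ground-motion intensity to a loss ratio; all names and numbers are illustrative and not OpenQuake's actual API:

    ```python
    import numpy as np

    # Hypothetical vulnerability function: mean loss ratio vs. ground-motion
    # intensity (e.g. PGA in g). Real models also carry a coefficient of
    # variation per intensity level; this sketch uses the mean only.
    iml = np.array([0.1, 0.2, 0.4, 0.8, 1.2])        # intensity measure levels
    mean_loss_ratio = np.array([0.01, 0.05, 0.20, 0.55, 0.90])

    def scenario_loss(asset_values, asset_intensities):
        """Expected loss per asset for a single event."""
        ratios = np.interp(asset_intensities, iml, mean_loss_ratio)
        return asset_values * ratios

    values = np.array([1e6, 2.5e6, 5e5])             # exposed value per asset
    shaking = np.array([0.35, 0.15, 0.75])           # simulated intensity per site
    print(scenario_loss(values, shaking).sum())      # total loss for the event
    ```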

    Developing a global risk engine

    Risk analysis is a critical link in the reduction of casualties and damages due to earthquakes. Recognition of this relation has led to a rapid rise in demand for accurate, reliable and flexible risk assessment software. However, there is a significant disparity between the high-quality scientific data developed by researchers and the availability of versatile, open and user-friendly risk analysis tools to meet the demands of end-users. In the past few years several open-source software packages that play an important role in seismic research have been developed, such as OpenSHA and OpenSEES. There is, however, still a gap when it comes to open-source risk assessment tools and software. In order to fill this gap, the Global Earthquake Model (GEM) has been created. GEM is an internationally sanctioned program initiated by the OECD that aims to build independent, open standards to calculate and communicate earthquake risk around the world. This initiative started with a one-year pilot project named GEM1, during which a number of existing risk software packages were evaluated. After a critical review of the results it was concluded that none of them was adequate for GEM's requirements and that a new object-oriented tool was therefore to be developed. This paper presents a summary of some of the best-known applications used in risk analysis, highlighting the main aspects that were considered in the development of this risk platform. The research carried out to gather the information needed to build this tool was distributed across four areas: the information technology approach, seismic hazard resources, vulnerability assessment methodologies and sources of exposure data. The main aspects and findings for each of these areas are presented, as well as how these features were incorporated into the current risk engine. The risk engine is currently capable of predicting human or economic losses worldwide for both deterministic and probabilistic event-based analyses, using vulnerability curves. A first version of GEM will become available at the end of 2013. Until then the risk engine will continue to be developed by a growing community of developers, using a dedicated open-source platform.
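    As an illustration of the probabilistic event-based side mentioned above, a sketch of turning a simulated event-loss set into a loss exceedance curve; the Poissonian occurrence assumption and all values are hypothetical, not the engine's actual implementation:

    ```python
    import numpy as np

    def loss_exceedance_curve(event_losses, investigation_time, loss_levels):
        """Annual rate at which each loss level is exceeded, from a
        stochastic event set spanning `investigation_time` years."""
        event_losses = np.asarray(event_losses)
        return np.array([(event_losses > L).sum() / investigation_time
                         for L in loss_levels])

    # Hypothetical event set: losses from 10,000 years of simulated seismicity
    rng = np.random.default_rng(42)
    losses = rng.lognormal(mean=13.0, sigma=1.5, size=800)
    levels = np.logspace(5, 8, 20)
    rates = loss_exceedance_curve(losses, 10_000, levels)

    # Probability of exceedance over a 50-year span, assuming Poissonian events
    poe_50yr = 1.0 - np.exp(-rates * 50)
    print(poe_50yr[:5])
    ```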

    Evaluation of analytical methodologies to derive vulnerability functions

    The recognition of fragility functions as a fundamental tool in seismic risk assessment has led to the development of increasingly complex and elaborate procedures for their computation. Although vulnerability functions have traditionally been produced using observed damage and loss data, more recent studies propose the use of analytical methodologies as a way to overcome the frequent lack of post-earthquake data. The effect of the structural modelling approach on the estimation of building capacity has been the target of many studies in the past; however, its influence on the resulting vulnerability model, its impact on loss estimations and the propagation of the associated uncertainty to seismic risk calculations have so far received limited scrutiny. Hence, in this paper, an extensive study of static and dynamic procedures for estimating the nonlinear response of buildings has been carried out in order to evaluate the impact of the chosen methodology on the resulting vulnerability and risk outputs. Moreover, the computational effort and numerical stability of each approach were evaluated, and conclusions were drawn regarding which one offers the optimal balance between accuracy and complexity.
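    As a toy stand-in for the "dynamic procedures" the study compares, a sketch of the peak response of an elastic-perfectly-plastic single-degree-of-freedom oscillator under a ground-motion record; the integrator, parameters and record are illustrative assumptions only:

    ```python
    import numpy as np

    def sdof_elastoplastic_response(ag, dt, m=1.0, k=200.0, fy=3.0, zeta=0.05):
        """Peak displacement of an elastic-perfectly-plastic SDOF under a
        ground-acceleration record `ag` (semi-implicit Euler scheme)."""
        c = 2 * zeta * np.sqrt(k * m)                # viscous damping
        u = np.zeros(len(ag)); v = 0.0; fs = 0.0
        for i in range(len(ag) - 1):
            a = (-m * ag[i] - c * v - fs) / m        # equation of motion
            v += a * dt
            du = v * dt
            u[i + 1] = u[i] + du
            fs = np.clip(fs + k * du, -fy, fy)       # elastoplastic spring force
        return np.abs(u).max()

    # Hypothetical record: 20 s of white-noise "ground motion" at 100 Hz
    rng = np.random.default_rng(0)
    record = 2.0 * rng.standard_normal(2000)
    print(sdof_elastoplastic_response(record, dt=0.01))
    ```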

    Extending displacement-based earthquake loss assessment (DBELA) for the computation of fragility curves

    This paper presents a new procedure to derive fragility functions for populations of buildings that relies on the displacement-based earthquake loss assessment (DBELA) methodology. In the method proposed herein, thousands of synthetic buildings were produced according to the probabilistic distributions describing the variability in geometrical and material properties. Their nonlinear capacity was then estimated using the DBELA method, and their response to a large set of ground motion records was computed. Global limit states are used to estimate the distribution of buildings in each damage state at different levels of ground motion, and a regression algorithm is applied to derive fragility functions for each limit state. The proposed methodology is demonstrated for the case of ductile and non-ductile Turkish reinforced concrete frames with masonry infills.
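    A sketch of the final regression step, assuming a lognormal fragility model fitted to the fraction of synthetic buildings exceeding a limit state at each ground-motion level; the paper's actual regression algorithm and data may differ:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    def lognormal_fragility(im, theta, beta):
        """P(limit state exceeded | IM): the usual lognormal-CDF form."""
        return norm.cdf(np.log(im / theta) / beta)

    # Hypothetical output of the synthetic-building analyses: at each ground
    # motion level, the fraction of the population exceeding the limit state
    im_levels = np.array([0.1, 0.2, 0.3, 0.5, 0.7, 1.0])
    frac_exceeding = np.array([0.02, 0.10, 0.28, 0.61, 0.82, 0.95])

    (theta, beta), _ = curve_fit(lognormal_fragility, im_levels,
                                 frac_exceeding, p0=(0.4, 0.5))
    print(f"median = {theta:.3f} g, dispersion = {beta:.3f}")
    ```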

    Processor Generator v1.3 (PG13)

    This project presents a novel automated framework for microprocessor instruction set exploration that allows users to extend a basic MIPS ISA with new multimedia instructions (including custom vector instructions, à la AltiVec and MMX/SSE). The infrastructure provides users with an extension language whose extensions are automatically incorporated into a synthesizable processor pipeline model and an executable instruction set simulator. We implement popular AltiVec and MMX extensions using this framework and present experimental results that show significant performance gains for the customized microprocessor.
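    A toy illustration of the dispatch idea behind such an extensible simulator: base scalar operations and user-defined packed (vector) operations share one instruction table. All names, encodings and semantics here are invented, not PG13's actual extension language:

    ```python
    # Scalar and 4-lane vector register files (invented names)
    regs = {f"r{i}": 0 for i in range(8)}
    vregs = {f"v{i}": [0] * 4 for i in range(4)}

    base_isa = {
        "add": lambda rd, rs, rt: regs.__setitem__(rd, regs[rs] + regs[rt]),
    }

    # "Extension language" entry: an AltiVec/MMX-style packed add
    def vadd(vd, vs, vt):
        vregs[vd] = [a + b for a, b in zip(vregs[vs], vregs[vt])]

    isa = {**base_isa, "vadd": vadd}                 # extended simulator ISA

    vregs["v1"], vregs["v2"] = [1, 2, 3, 4], [10, 20, 30, 40]
    isa["vadd"]("v0", "v1", "v2")
    print(vregs["v0"])                               # [11, 22, 33, 44]
    ```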

    A tradeoff in simultaneous quantum-limited phase and loss estimation in interferometry

    Interferometry with quantum light is known to provide enhanced precision for estimating a single phase. However, depending on the parameters involved, the quantum limit for the simultaneous estimation of multiple parameters may not be attainable, leading to trade-offs in the achievable precisions. Here we study the simultaneous estimation of two parameters related to optical interferometry, phase and loss, using a fixed number of photons. We derive a trade-off in the estimation of these two parameters which shows that, in contrast to single-parameter estimation, it is impossible to design a strategy that saturates the quantum Cramér-Rao bound for loss and phase estimation simultaneously in a single setup. We design optimal quantum states with a fixed number of photons achieving the best possible simultaneous precisions. Our results reveal general features of concurrently estimating Hamiltonian and dissipative parameters, and have implications for sophisticated sensing scenarios such as quantum imaging.
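    For context, the standard multiparameter quantum Cramér-Rao bound the abstract alludes to, stated in generic form; the paper's specific phase-loss trade-off is not reproduced here:

    ```latex
    % Generic multiparameter quantum Cramér-Rao bound; \theta = (\phi, \eta)
    % collects the phase and the loss parameter, M is the number of
    % repetitions, and F_Q(\theta) is the quantum Fisher information matrix.
    \[
      \operatorname{Cov}\!\bigl(\hat{\theta}\bigr) \;\geq\; \frac{1}{M}\, F_Q(\theta)^{-1}
    \]
    % Saturating both diagonal bounds,
    % \operatorname{Var}(\hat{\phi}) \geq [F_Q^{-1}]_{\phi\phi}/M and
    % \operatorname{Var}(\hat{\eta}) \geq [F_Q^{-1}]_{\eta\eta}/M,
    % requires compatible optimal measurements; the paper's result is that no
    % single setup achieves both for phase and loss simultaneously.
    ```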

    Advanced modulation technology development for earth station demodulator applications

    The purpose of this contract was to develop a high-rate (200 Mbps), bandwidth-efficient modulation format using low-cost hardware in 1990s technology. The modulation format chosen is 16-ary continuous phase frequency shift keying (CPFSK). The implementation uses a unique combination of a limiter/discriminator followed by an accumulator to determine the transmitted phase. An important feature of the modulation scheme is the way coding is applied to efficiently regain the performance lost through the close spacing of the phase points.
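    A sketch of the limiter/discriminator-plus-accumulator idea on complex baseband samples; the sampling rate, modulation index and signal model are illustrative assumptions, not the contract hardware:

    ```python
    import numpy as np

    def discriminator_accumulator(x, sps):
        """Limiter/discriminator followed by an accumulator: recovers the
        accumulated carrier phase at symbol boundaries from complex
        baseband samples x with sps samples per symbol."""
        x = x / np.abs(x)                            # hard limiter
        dphi = np.angle(x[1:] * np.conj(x[:-1]))     # FM discriminator output
        phase = np.angle(x[0]) + np.concatenate(([0.0], np.cumsum(dphi)))
        return phase[sps - 1::sps]                   # accumulator read per symbol

    # Hypothetical CPFSK test signal
    sps, h = 8, 1.0 / 16                             # samples/symbol, mod index
    symbols = np.array([3, -1, 5, -7])               # levels of a 16-ary alphabet
    freq = np.repeat(symbols * np.pi * h / sps, sps) # per-sample phase steps
    x = np.exp(1j * np.cumsum(freq))
    print(discriminator_accumulator(x, sps))         # cumulative phase per symbol
    ```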

    The Comparative Reactivity Method – a new tool to measure total OH Reactivity in ambient air

    Hydroxyl (OH) radicals play a vital role in maintaining the oxidizing capacity of the atmosphere. To understand variations in OH radicals, both source and sink terms must be understood. Currently the overall sink term, i.e. the total atmospheric reactivity to OH, is poorly constrained. Here, we present a new on-line method to directly measure the total OH reactivity (i.e. the total loss rate of OH radicals) in a sampled air mass. In this method, a reactive molecule (X), not normally present in air, is passed through a glass reactor and its concentration is monitored with a suitable detector. OH radicals are then introduced into the glass reactor at a constant rate to react with X, first in the presence of zero air and then in the presence of ambient air containing VOCs and other OH-reactive species. Comparing the amount of X exiting the reactor with and without the ambient air allows the air reactivity to be determined. In our existing setup, X is pyrrole and the detector is a proton transfer reaction mass spectrometer. The present dynamic range for ambient air reactivity is about 6 to 300 s⁻¹, with an overall maximum uncertainty of 25% above 8 s⁻¹ and up to 50% between 6 and 8 s⁻¹. The system has been tested and calibrated with different single and mixed hydrocarbon standards, showing excellent linearity and agreement with the reactivity of the standards. Potential interferences, such as high NO in ambient air, varying relative humidity and photolysis of pyrrole within the setup, have also been investigated. While interferences due to changing humidity and photolysis of pyrrole are easily overcome by ensuring that humidity in the setup does not change drastically and by measuring and accounting for the photolytic loss of pyrrole, respectively, NO > 10 ppb in ambient air remains a significant interference for the current configuration of the instrument. Field tests in the tropical rainforest of Suriname (~53 s⁻¹) and the urban atmosphere of Mainz, Germany (~10 s⁻¹) show the promise of the new method and indicate that a significant fraction of the OH-reactive species in tropical forests is likely missed by current measurements. Suggestions for improvements to the technique and future applications are discussed.
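    A sketch of the competitive-kinetics calculation implied by the description above, where C1 is the pyrrole level without OH, C2 the level with OH in zero air, and C3 the level with OH in ambient air; the rate coefficient and concentrations are illustrative, and the paper's working equation may include further corrections:

    ```python
    # Approximate literature rate coefficient for pyrrole + OH
    K_PYRROLE_OH = 1.2e-10   # cm^3 molecule^-1 s^-1 (illustrative value)

    def total_oh_reactivity(c1, c2, c3, k_p=K_PYRROLE_OH):
        """Total OH reactivity (s^-1) from the three pyrrole levels,
        assuming pseudo-first-order competitive kinetics."""
        return (c3 - c2) / (c1 - c3) * k_p * c1

    # Hypothetical pyrrole concentrations in molecules cm^-3
    c1, c2, c3 = 1.5e12, 0.9e12, 1.1e12
    print(f"{total_oh_reactivity(c1, c2, c3):.1f} s^-1")
    ```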

    Physicochemical characterisation of protein ingredients prepared from milk by ultrafiltration or microfiltration for application in formulated nutritional products

    Formulated food systems are becoming more sophisticated as demand grows for the design of structural and nutritional profiles targeted at increasingly specific demographics. Milk protein is an important bio- and techno-functional component of such formulations, which include infant formula, sports supplements, clinical beverages and elderly nutrition products. This thesis outlines research into ingredients that are key to the development of these products, namely milk protein concentrate (MPC), milk protein isolate (MPI), micellar casein concentrate (MCC), β-casein concentrate (BCC) and serum protein concentrate (SPC). MPC powders ranging from 37 to 90% protein (solids basis) were studied for properties of relevance to the handling and storage of powders, powder solubilisation and thermal processing of reconstituted MPCs. MPC powders with ≥80% protein were found to have very poor flowability and high compressibility; in addition, these high-protein MPCs exhibited poor wetting and dispersion characteristics during rehydration in water. Heat stability studies on unconcentrated (3.5%, 140°C) and concentrated (8.5%, 120°C) MPC suspensions showed that suspensions prepared from high-protein MPCs coagulated much more rapidly than those from lower-protein MPCs. β-casein ingredients were developed using membrane processing. Enrichment of β-casein from skim milk was performed at laboratory scale using 'cold' microfiltration (MF) at <4°C with either 1000 kDa molecular weight cut-off or 0.1 µm pore-size membranes. At pilot scale, a second 'warm' MF step at 26°C was incorporated for selective purification of micellised β-casein from whey proteins; using this approach, BCCs with β-casein purity of up to 80% (protein basis) were prepared, with the whey protein purity of the SPC co-product reaching ~90%. The BCC ingredient could prevent supersaturated solutions of calcium phosphate (CaP) from precipitating, although the amorphous CaP formed created large micelles that were less thermo-reversible than those in CaP-free systems. Another co-product of BCC manufacture, MCC powder, was shown to have superior rehydration characteristics compared to traditional MCCs. The findings presented in this thesis constitute a significant advance in research on milk protein ingredients, in terms of optimising their preparation by membrane filtration, preventing their destabilisation during processing and facilitating their effective incorporation into nutritional formulations designed for consumers of a specific age, lifestyle or health status.