
    Geotechnical hazard representation for seismic risk analysis

    Seismic risk analysis, whether deterministic or probabilistic, together with the use of a GIS environment to represent the results, is a helpful tool to support decision making for planning and prioritizing seismic risk management strategies. This paper focuses on the importance of an appropriate geotechnical hazard representation within a seismic risk analysis process. An overview of alternative methods for geotechnical zonation available in the literature is provided, with different levels of refinement depending on the information available. It is worth noting that in such methods the definition of site-effect amplifications does not account for the characteristics of the built environment, which affect soil-structure interaction. Alternative methods able to account for both the soil conditions and the characteristics of the built environment have recently been proposed and are discussed herein. Within a framework for seismic risk analysis, different formulations would thus derive depending on both the intensity measure and the vulnerability approach adopted. In conclusion, an immediate visualization of the importance of the geotechnical hazard evaluation within a seismic risk analysis is provided in terms of the variation of the expected damage and consequence distribution with reference to a case study.

    Beyond marketization? The genesis of quality assessment criteria in the British university sector between 1985 and 1992

    This brief article considers the theme of Quality Assessment (QA) criteria in the British university sector. It attempts to shed light on a variety of conceptualisations of this instrument, which is commonly considered a means to achieve a more efficient university sector that can prosper by working on the quality of the education delivered to students while at the same time responding to reforms that have pushed for greater “value for money” from the public resources used by universities. Whereas the majority of recent literature gives an overview of the current state of the university system, this article instead focuses on an aspect of quality assessment criteria usually treated as a marginal detail, namely the historical origins of these criteria. By bringing some historical evidence to the forefront, this article shows how an attentive reflection on the birth of quality assessment criteria can reveal problematic aspects of literatures that tend to explain and study QA as instruments containing a logic of some sort. When literatures of different approaches prioritise a logic at work to explain the functioning of QA criteria, this article argues, they tend to ignore that the conception of QA was vested with a variety of interests of different agents.

    Evaluating desktop methods for assessing liquefaction-induced damage to infrastructure for the insurance sector

    The current method used by insurance catastrophe models to account for liquefaction simply applies a factor to shaking-induced losses based on liquefaction susceptibility. There is a need for more sophisticated methods, but they must be compatible with the data and resource constraints that insurers work with. This study compares five models: liquefaction potential index (LPI) calculated from shear-wave velocity; two implementations of the HAZUS software methodology; and two models based on USGS remote sensing data. Data from the September 2010 and February 2011 Canterbury (New Zealand) earthquakes are used to compare observed liquefaction occurrences to predictions from these models using binary classification performance measures. The analysis shows that the best-performing model is LPI, although the correlation with observations is only moderate, and statistical techniques for binary classification models indicate that the model is biased towards positive predictions of liquefaction occurrence.
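
    As a rough illustration of the binary classification scoring this study relies on, the sketch below computes confusion-matrix-based measures for a yes/no liquefaction forecast. The observations, predictions and the particular metrics shown (TPR, TNR, Matthews correlation) are assumptions for illustration, not the paper's data or its exact choice of measures.

```python
# Minimal sketch: scoring a binary liquefaction forecast against observations.
# All data are hypothetical; the paper's exact performance measures may differ.
import numpy as np

def binary_scores(observed, predicted):
    """Confusion-matrix-based scores for a yes/no liquefaction forecast."""
    obs = np.asarray(observed, dtype=bool)
    pred = np.asarray(predicted, dtype=bool)
    tp = np.sum(obs & pred)    # liquefaction observed and forecast
    tn = np.sum(~obs & ~pred)  # no liquefaction, correctly forecast
    fp = np.sum(~obs & pred)   # false alarm
    fn = np.sum(obs & ~pred)   # missed occurrence
    tpr = tp / (tp + fn)       # sensitivity: hit rate on liquefied sites
    tnr = tn / (tn + fp)       # specificity: hit rate on non-liquefied sites
    mcc = (tp * tn - fp * fn) / np.sqrt(
        float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return {"TPR": tpr, "TNR": tnr, "MCC": mcc}

# A model biased towards positive predictions scores high TPR but low TNR.
obs = [True, True, False, False, False, False]
pred = [True, True, True, True, False, False]
print(binary_scores(obs, pred))
```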

    Seismic performance of buried electrical cables: evidence-based repair rates and fragility functions

    The fragility of buried electrical cables is often neglected in earthquakes, but significant damage to cables was observed during the 2010–2011 Canterbury earthquake sequence in New Zealand. This study estimates Poisson repair rates, similar to those that exist for pipelines, using damage data retrieved from part of the electric power distribution network in the city of Christchurch. The functions have been developed separately for four seismic hazard zones: no liquefaction, all liquefaction effects, liquefaction-induced settlement only, and liquefaction-induced lateral spread. In each zone six different intensity measures (IMs) are tested, including peak ground velocity as a measure of ground shaking and five metrics of permanent ground deformation: vertical differential, horizontal, maximum, vector mean and geometric mean. The analysis confirms that the vulnerability of buried cables is influenced more by liquefaction than by ground shaking, and that lateral spread causes more damage than settlement alone. In areas where lateral spreading is observed, the geometric mean permanent ground deformation is identified as the best-performing IM across all zones when considering both variance explained and uncertainty. In areas where only settlement is observed, there is only a moderate correlation between repair rate and vertical differential permanent ground deformation, but the estimated model error is relatively small and so the model may be acceptable. In general, repair rates in the zone where no liquefaction occurred are very low, and it is possible that repairs present in this area result from misclassification of hazard observations, either in the raw data or due to the approximations of the geospatial analysis. Along with hazard intensity, insulation material is identified as a critical factor influencing cable fragility, with paper-insulated lead-covered armoured cables experiencing considerably higher repair rates than cross-linked polyethylene cables. The analysis shows no trend between cable age and repair rates, and the differences in repair rates between conducting materials are shown not to be significant. In addition to repair rate functions, an example of a fragility curve suite for cables is presented, which may be more useful for analysis of network connectivity, where cable functionality is of more interest than the number of repairs. These functions are among the first to be produced for predicting damage to buried cables.
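
    The repair-rate idea above can be sketched in a few lines: repairs per kilometre of cable are binned by an intensity measure and a simple curve is fitted. The bin data and the power-law form below are assumptions for illustration; they are not the paper's fitted functions.

```python
# Illustrative sketch of Poisson repair rates: repairs per km of cable,
# binned by an intensity measure (IM) such as geometric mean permanent
# ground deformation. All numbers and the power-law form are assumptions.
import numpy as np

# Hypothetical per-bin totals: IM bin midpoints (m of PGD), cable length
# exposed in each bin (km), and repair counts observed in each bin.
im = np.array([0.05, 0.10, 0.20, 0.40])
length_km = np.array([120.0, 80.0, 40.0, 15.0])
repairs = np.array([6, 10, 14, 18])

rate = repairs / length_km  # Poisson repair rate, repairs per km

# Fit a power law RR = a * IM^b by linear regression in log-log space.
b, log_a = np.polyfit(np.log(im), np.log(rate), 1)
a = np.exp(log_a)
print(f"RR(IM) ~ {a:.3f} * IM^{b:.2f}")

# Expected repairs on a 2 km cable at IM = 0.3 m, assuming a Poisson process.
lam = a * 0.3**b * 2.0
print(f"P(at least one repair) = {1 - np.exp(-lam):.2f}")
```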

    New Zealand contributions to the global earthquake model’s earthquake consequences database (GEMECD)

    The Global Earthquake Model’s (GEM) Earthquake Consequences Database (GEMECD) aims to develop, for the first time, a standardised framework for collecting and collating geocoded consequence data induced by primary and secondary seismic hazards to different types of buildings, critical facilities, infrastructure and population, and to relate these data to estimated ground motion intensity via the USGS ShakeMap Atlas. New Zealand is a partner in the GEMECD consortium and has to date contributed 7 events to the database, of which 4 are located in the wider South Pacific area (Newcastle 1989; Luzon 1990; South of Java 2006; and Samoa Islands 2009) and 3 are NZ-specific events (Edgecumbe 1987; Darfield 2010; and Christchurch 2011). This contribution to GEMECD represented a unique opportunity for collating, comparing and reviewing existing damage datasets and harmonising them into a common, openly accessible and standardised database, from which the seismic performance of New Zealand buildings can be comparatively assessed. This paper firstly provides an overview of the GEMECD database structure, including taxonomies and guidelines to collect and report on earthquake-induced consequence data. Secondly, the paper presents a summary of the studies implemented for the 7 events, with particular focus on the Darfield (2010) and Christchurch (2011) earthquakes. Finally, examples of specific outcomes and potential applications for NZ from using and processing GEMECD are presented, including: 1) the rationale for adopting the GEM taxonomy in NZ and any need for introducing NZ-specific attributes; 2) a complete overview of the building typological distribution in the Christchurch CBD prior to the Canterbury earthquakes; and 3) some initial correlations between the level and extent of earthquake-induced physical damage to buildings, building safety/accessibility issues and the induced human casualties.

    Evaluating Simplified Methods for Liquefaction Assessment for Loss Estimation

    Currently, some catastrophe models used by the insurance industry account for liquefaction by applying a simple factor to shaking-induced losses. The factor is based only on local liquefaction susceptibility, which highlights the need for a more sophisticated approach to incorporating the effects of liquefaction in loss models. This study compares 11 unique models, each based on one of three principal simplified liquefaction assessment methods: liquefaction potential index (LPI) calculated from shear-wave velocity, the HAZUS software method, and a method created specifically to make use of USGS remote sensing data. Data from the September 2010 Darfield and February 2011 Christchurch earthquakes in New Zealand are used to compare observed liquefaction occurrences to forecasts from these models using binary classification performance measures. The analysis shows that the best-performing model is the LPI calculated using known shear-wave velocity profiles, which correctly forecasts 78% of sites where liquefaction occurred and 80% of sites where liquefaction did not occur, when the threshold is set at 7. However, these data may not always be available to insurers. The next best model is also based on LPI but uses shear-wave velocity profiles simulated from the combination of USGS VS30 data and empirical functions that relate VS30 to average shear-wave velocities at shallower depths. This model correctly forecasts 58% of sites where liquefaction occurred and 84% of sites where liquefaction did not occur, when the threshold is set at 4. These scores increase to 78% and 86%, respectively, when forecasts are based on liquefaction probabilities that are empirically related to the same values of LPI. This model is potentially more useful for insurance since the input data are publicly available. HAZUS models, which are commonly used in studies where no local model is available, perform poorly and incorrectly forecast 87% of sites where liquefaction occurred, even at optimal thresholds. This paper also considers two models (HAZUS and EPOLLS) for estimating the scale of liquefaction in terms of permanent ground deformation, but finds that both perform poorly, with correlations between observations and forecasts lower than 0.4 in all cases. These models therefore potentially provide negligible additional value to loss estimation analysis outside the regions for which they were developed.
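
    The thresholding step is easy to make concrete. Using invented LPI values and liquefaction observations, the sketch below turns LPI into a binary forecast at several candidate thresholds and reports the two hit rates the abstract quotes; the paper's thresholds of 7 and 4 apply to its own datasets, not to these toy numbers.

```python
# Sketch of LPI thresholding: a site is forecast to liquefy if its LPI meets
# or exceeds a cutoff. LPI values and observations below are invented.
import numpy as np

lpi = np.array([1.2, 3.5, 5.0, 6.8, 7.4, 9.1, 12.0, 2.2])
observed = np.array([0, 0, 0, 1, 1, 1, 1, 0], dtype=bool)

def hit_rates(threshold):
    forecast = lpi >= threshold
    tpr = np.mean(forecast[observed])    # share of liquefied sites forecast
    tnr = np.mean(~forecast[~observed])  # share of non-liquefied sites forecast
    return tpr, tnr

# Scan candidate thresholds to show the trade-off between the two rates.
for t in [4, 5, 6, 7, 8]:
    tpr, tnr = hit_rates(t)
    print(f"threshold={t}: TPR={tpr:.0%}, TNR={tnr:.0%}")
```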

    A Mass-Magnitude Relation for Low-mass Stars Based on Dynamical Measurements of Thousands of Binary Star Systems

    Stellar mass is a fundamental parameter that is key to our understanding of stellar formation and evolution, as well as the characterization of nearby exoplanet companions. Historically, stellar masses have been derived from long-term observations of visual or spectroscopic binary star systems. While advances in high-resolution imaging have enabled observations of systems with shorter orbital periods, stellar mass measurements remain challenging, and relatively few have been precisely measured. We present a new statistical approach to measuring masses for populations of stars. Using Gaia astrometry, we analyze the relative orbital motion of >3,800 wide binary systems comprising low-mass stars to establish a mass-magnitude relation in the Gaia G_RP band spanning the absolute magnitude range 14.5 > M_GRP > 4.0, corresponding to a mass range of 0.08 M⊙ ≟ M ≟ 1.0 M⊙. This relation is directly applicable to >30 million stars in the Gaia catalog. Based on comparison to existing mass-magnitude relations calibrated for 2MASS Ks magnitudes, we estimate that the internal precision of our mass estimates is ~10%. We use this relation to estimate masses for a volume-limited sample of ~18,200 stars within 50 pc of the Sun and the present-day field mass function for stars with M ≟ 1.0 M⊙, which we find peaks at 0.16 M⊙. We investigate a volume-limited sample of wide binary systems with early K dwarf primaries, complete for binary mass ratios q > 0.2, and measure the distribution of q at separations >100 au. We find that the distribution of q is not uniform, but rather decreases towards q = 1.0.
    Comment: 13 pages, 8 figures
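
    As a sketch of how such a relation is applied: the absolute magnitude follows from the standard distance modulus, M = m − 5·log10(d / 10 pc), and a fitted mass-magnitude polynomial then maps M_GRP to mass. The polynomial coefficients below are placeholders chosen only so that the quoted magnitude range maps roughly onto 0.08–1.0 M⊙; they are not the paper's relation.

```python
# Sketch: applying a mass-magnitude relation to Gaia-like inputs.
# The distance modulus is standard; the polynomial coefficients are
# placeholders, NOT the paper's fitted relation.
import numpy as np

def absolute_mag(m_grp, parallax_mas):
    """Absolute magnitude from the distance modulus M = m - 5*log10(d/10pc)."""
    d_pc = 1000.0 / parallax_mas  # parallax in mas -> distance in parsecs
    return m_grp - 5.0 * np.log10(d_pc) + 5.0

def mass_from_mag(abs_mag, coeffs=(-0.105, 0.42)):
    """Placeholder linear relation log10(M/Msun) = c1*M_GRP + c0, chosen so
    M_GRP = 4.0 maps to ~1.0 Msun and M_GRP = 14.5 maps to ~0.08 Msun."""
    return 10.0 ** np.polyval(coeffs, abs_mag)

M = absolute_mag(m_grp=12.3, parallax_mas=25.0)  # a star at 40 pc
print(f"M_GRP = {M:.2f}, estimated mass = {mass_from_mag(M):.2f} Msun")
```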

    The vulnerability assessment of current buildings by a macroseismic approach derived from the EMS-98 scale

    A hierarchical family of Damage Probability Matrices (DPM) is derived in this paper from the ones implicitly contained in the EMS-98 macroseismic scale for 6 vulnerability classes. To this aim, the linguistic definitions provided by the scale, and the associated fuzzy sub-sets of the percentage of buildings, have been completed according to reliable hypotheses. A parametric representation of the corresponding cumulative probability distributions is moreover provided through a single parameter: a vulnerability index ranging from 0 to 1 and independent of the macroseismic intensity. Finally, an innovative macroseismic approach allowing the vulnerability analysis of building typologies is defined within the European Macroseismic Scale (EMS-98) and qualitatively related to the vulnerability classes. Bayes’ theorem allows the frequencies to be updated when further data about the built environment or specific properties of the buildings are available, allowing the identification of behaviour that differs from the one generally considered for the typology. Fuzzy measures of any damage function can be derived using parametric or nonparametric damage probability matrices. For every result of the seismic analysis, the procedure supplies the user with the final uncertainty connected with the aforementioned fuzzy relation between the probability of the damage grade, the macroseismic intensity and the vulnerability classes.
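
    The Bayesian updating step mentioned above admits a small worked example: prior frequencies of EMS-98 vulnerability classes for a typology are revised once evidence about the buildings becomes available. All numbers below are invented for illustration.

```python
# Worked example of Bayes' theorem for updating vulnerability class
# frequencies. Priors and likelihoods are invented for illustration.
import numpy as np

classes = ["A", "B", "C"]          # a subset of the six EMS-98 classes
prior = np.array([0.2, 0.5, 0.3])  # assumed class frequencies for a typology

# Assumed likelihood of the observed evidence (e.g., a structural detail
# noted in a survey) under each vulnerability class.
likelihood = np.array([0.6, 0.3, 0.1])

posterior = prior * likelihood
posterior /= posterior.sum()       # normalize: Bayes' theorem

for c, p in zip(classes, posterior):
    print(f"P({c} | evidence) = {p:.2f}")  # -> 0.40, 0.50, 0.10
```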

    The Vulnerability Assessment and the Damage Scenario in Seismic Risk Analysis

    In this Ph.D. thesis, two methods for the vulnerability assessment of built-up areas are proposed: a macroseismic model, to be used with macroseismic intensity hazard maps, and a mechanics-based model, to be applied when the hazard is provided in terms of peak ground accelerations and spectral values. The thesis illustrates the theoretical bases and provides the defining parameters of the two proposed methods for different masonry and reinforced concrete building typologies. The implementation of the two proposed methods for the estimation of the expected economic losses and of the consequences to people and buildings, in terms of distributions or fragility curves, is moreover illustrated. The methods can be employed either with properly surveyed data or with existing statistical data of different origin and quality. A different uncertainty characterises the vulnerability assessment and the consequent damage evaluation depending on the quantity and quality of the data available for the analysis. Thanks to the clear analytical definition of the proposed methods, they can easily be implemented in a GIS environment, where, by crossing the hazard and vulnerability analyses, the evaluation of damage scenarios becomes an obvious following step. The use of the proposed seismic risk analysis procedures for risk management purposes therefore becomes very effective. The possibility of constantly updating the data, together with the rather fast computational process, allows decision makers to readily construct different scenarios and test the effectiveness of different sets of mitigation strategies. The opportunity to draw real-time scenarios of the likely impact of an earthquake can be useful for making risk decisions during the first hours following the event.

    Extreme Events Decision Making in Transport Networks: A Holistic Approach Using Emergency Scenarios and Decision Making Theory

    This paper proposes a novel method to analyse decision making during extreme events. The method is based on decision-making theory and aims at understanding how emergency managers make decisions during disasters. A data collection framework and an analysis method were conceptualized to capture participants’ behaviour, perception and understanding throughout a game-board simulation exercise, which emulates an earthquake disaster scenario affecting transport systems. The method evaluates the participants’ actions in order to identify decision-making patterns, strengths and weaknesses. A set of case studies revealed two typical patterns, namely: a) support immediate rescue; b) support lifeline recovery. Good decision-making practices relate to objective-oriented decision making, understanding of conflicting priorities and appropriate resource management. Weaknesses are associated with comprehending relationships between the community and the environment, and with projecting future scenarios. Overall, the case study results demonstrate the efficiency and robustness of the proposed method for analysing decision making during disasters.