
    Geotechnical hazard representation for seismic risk analysis

    Seismic risk analysis, whether deterministic or probabilistic, together with the use of a GIS environment to represent the results, is a helpful tool to support decision making for planning and prioritizing seismic risk management strategies. This paper focuses on the importance of an appropriate geotechnical hazard representation within a seismic risk analysis process. An overview of alternative methods for geotechnical zonation available in the literature is provided, with different levels of refinement depending on the information available. It is worth noting that in such methods the definition of the site-effect amplifications does not account for the characteristics of the built environment, which affect soil-structure interaction. Alternative methods able to account for both the soil conditions and the characteristics of the built environment have recently been proposed and are discussed herein. Within a framework for seismic risk analysis, different formulations would thus derive depending on both the intensity measure and the vulnerability approach adopted. In conclusion, an immediate visualization of the importance of the geotechnical hazard evaluation within a seismic risk analysis is provided in terms of the variation of the expected damage and consequence distribution with reference to a case study.

    Evaluating desktop methods for assessing liquefaction-induced damage to infrastructure for the insurance sector

    The current method used by insurance catastrophe models to account for liquefaction simply applies a factor to shaking-induced losses based on liquefaction susceptibility. There is a need for more sophisticated methods, but they must be compatible with the data and resource constraints that insurers have to work with. This study compares five models: liquefaction potential index (LPI) calculated from shear-wave velocity; two implementations of the HAZUS software methodology; and two models based on USGS remote sensing data. Data from the September 2010 and February 2011 Canterbury (New Zealand) earthquakes are used to compare observed liquefaction occurrences to predictions from these models using binary classification performance measures. The analysis shows that the best performing model is LPI, although the correlation with observations is only moderate, and statistical techniques for binary classification models indicate that the model is biased towards positive predictions of liquefaction occurrence.
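    The binary classification comparison described above can be sketched as follows. The confusion counts and the Matthews correlation coefficient are standard performance measures for this kind of observed-vs-predicted comparison; the site data below are invented for illustration and are not taken from the study.

```python
# Sketch of binary classification scoring for liquefaction forecasts.
# Site outcomes are illustrative, not data from the Canterbury study.
from math import sqrt

def confusion_counts(observed, predicted):
    """Count TP, FP, TN, FN for paired binary outcomes (1 = liquefaction)."""
    tp = sum(o and p for o, p in zip(observed, predicted))
    fp = sum((not o) and p for o, p in zip(observed, predicted))
    tn = sum((not o) and (not p) for o, p in zip(observed, predicted))
    fn = sum(o and (not p) for o, p in zip(observed, predicted))
    return tp, fp, tn, fn

def matthews_cc(tp, fp, tn, fn):
    """Matthews correlation coefficient: a balanced score in [-1, 1]."""
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

observed  = [1, 1, 0, 0, 1, 0, 0, 1]   # liquefaction observed at site?
predicted = [1, 0, 0, 1, 1, 0, 0, 1]   # model forecast at same site
tp, fp, tn, fn = confusion_counts(observed, predicted)
print(tp, fp, tn, fn)                       # 3 1 3 1
print(round(matthews_cc(tp, fp, tn, fn), 3))  # 0.5
```

    A score near 1 indicates forecasts that track observations; a systematic excess of false positives, as reported for the LPI model above, shows up as FP dominating TN.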

    Seismic performance of buried electrical cables: evidence-based repair rates and fragility functions

    The fragility of buried electrical cables is often neglected in earthquakes but significant damage to cables was observed during the 2010–2011 Canterbury earthquake sequence in New Zealand. This study estimates Poisson repair rates, similar to those in existence for pipelines, using damage data retrieved from part of the electric power distribution network in the city of Christchurch. The functions have been developed separately for four seismic hazard zones: no liquefaction, all liquefaction effects, liquefaction-induced settlement only, and liquefaction-induced lateral spread. In each zone six different intensity measures (IMs) are tested, including peak ground velocity as a measure of ground shaking and five metrics of permanent ground deformation: vertical differential, horizontal, maximum, vector mean and geometric mean. The analysis confirms that the vulnerability of buried cables is influenced more by liquefaction than by ground shaking, and that lateral spread causes more damage than settlement alone. In areas where lateral spreading is observed, the geometric mean permanent ground deformation is identified as the best performing IM across all zones when considering both variance explained and uncertainty. In areas where only settlement is observed, there is only a moderate correlation between repair rate and vertical differential permanent ground deformation but the estimated model error is relatively small and so the model may be acceptable. In general, repair rates in the zone where no liquefaction occurred are very low and it is possible that repairs present in this area result from misclassification of hazard observations, either in the raw data or due to the approximations of the geospatial analysis. Along with hazard intensity, insulation material is identified as a critical factor influencing cable fragility, with paper-insulated lead covered armoured cables experiencing considerably higher repair rates than cross-linked polyethylene cables. 
The analysis shows no trend between cable age and repair rates, and the differences in repair rates between conducting materials are shown not to be significant. In addition to repair rate functions, an example of a fragility curve suite for cables is presented, which may be more useful for analysis of network connectivity, where cable functionality is of more interest than the number of repairs. These functions are among the first to be produced for the prediction of damage to buried cables.
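    A Poisson repair-rate model of the kind described above can be applied as in the following sketch; the rate and cable length are hypothetical values chosen for illustration, not figures from the study.

```python
# Illustrative use of a Poisson repair-rate model for a buried cable.
# The rate and length below are assumptions, not values from the paper.
from math import exp

def expected_repairs(rate_per_km, length_km):
    """Mean number of repairs on a cable segment of given length."""
    return rate_per_km * length_km

def prob_at_least_one_repair(rate_per_km, length_km):
    """Under a Poisson model, P(N >= 1) = 1 - exp(-rate * length)."""
    return 1.0 - exp(-rate_per_km * length_km)

rate = 0.8    # repairs per km (hypothetical, e.g. a lateral-spread zone)
length = 2.5  # km of cable in that zone
print(expected_repairs(rate, length))                   # 2.0
print(round(prob_at_least_one_repair(rate, length), 3))
```

    The second quantity is the kind of per-component failure probability that feeds a connectivity analysis, where whether a cable works matters more than how many repairs it needs.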

    The vulnerability assessment of current buildings by a macroseismic approach derived from the EMS-98 scale

    A hierarchical family of Damage Probability Matrices (DPM) has been derived in this paper from the ones implicitly contained in the EMS-98 Macroseismic Scale for 6 vulnerability classes. To this aim, the linguistic definitions provided by the scale, and the associated fuzzy sub-sets of the percentage of buildings, have been completed according to reliable hypotheses. A parametric representation of the corresponding cumulative probability distributions is moreover provided through a single parameter: a vulnerability index variable in the range from 0 to 1 and independent of the macroseismic intensity. Finally, an innovative macroseismic approach allowing the vulnerability analysis of building typologies is defined within the European Macroseismic Scale (EMS-98) and qualitatively related to the vulnerability classes. Bayes’ theorem allows the updating of the frequencies when further data about the built environment or specific properties of the buildings are available, allowing the identification of behaviours different from the one generally considered for the typology. Fuzzy measures of any damage function can be derived using parametric or non-parametric damage probability matrices. For every result of the seismic analysis, the procedure supplies the user with the final uncertainty connected with the aforementioned fuzzy relation between the probability of the damage grade, the macroseismic intensity and the vulnerability classes.
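    The Bayesian updating step mentioned above can be sketched as a one-line application of Bayes’ theorem: prior frequencies of vulnerability classes for a typology are reweighted by the likelihood of an observation. All class labels and numbers below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of Bayesian updating of vulnerability-class frequencies.
# Priors and likelihoods are invented for illustration.

def bayes_update(prior, likelihood):
    """posterior(class) ∝ prior(class) * P(observation | class)."""
    unnorm = {c: prior[c] * likelihood[c] for c in prior}
    total = sum(unnorm.values())
    return {c: v / total for c, v in unnorm.items()}

# Hypothetical prior over three vulnerability classes for one typology
prior = {"A": 0.2, "B": 0.5, "C": 0.3}
# Hypothetical probability of the observed damage grade, per class
likelihood = {"A": 0.6, "B": 0.3, "C": 0.1}

posterior = bayes_update(prior, likelihood)
print({c: round(p, 3) for c, p in posterior.items()})
# {'A': 0.4, 'B': 0.5, 'C': 0.1}
```

    Observing damage that is more probable under class A shifts mass towards A, which is how further data about the built environment can single out behaviour different from the typology default.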

    New Zealand contributions to the global earthquake model’s earthquake consequences database (GEMECD)

    The Global Earthquake Model’s (GEM) Earthquake Consequences Database (GEMECD) aims to develop, for the first time, a standardised framework for collecting and collating geocoded consequence data induced by primary and secondary seismic hazards to different types of buildings, critical facilities, infrastructure and population, and to relate these data to estimated ground motion intensity via the USGS ShakeMap Atlas. New Zealand is a partner of the GEMECD consortium and to date has contributed 7 events to the database, of which 4 are located in the South Pacific area (Newcastle 1989; Luzon 1990; South of Java 2006 and Samoa Islands 2009) and 3 are NZ-specific events (Edgecumbe 1987; Darfield 2010 and Christchurch 2011). This contribution to GEMECD represented a unique opportunity for collating, comparing and reviewing existing damage datasets and harmonising them into a common, openly accessible and standardised database, from which the seismic performance of New Zealand buildings can be comparatively assessed. This paper firstly provides an overview of the GEMECD database structure, including taxonomies and guidelines to collect and report on earthquake-induced consequence data. Secondly, the paper presents a summary of the studies implemented for the 7 events, with particular focus on the Darfield (2010) and Christchurch (2011) earthquakes. Finally, examples of specific outcomes and potential benefits for NZ from using and processing GEMECD are presented, including: 1) the rationale for adopting the GEM taxonomy in NZ and any need for introducing NZ-specific attributes; 2) a complete overview of the building typological distribution in the Christchurch CBD prior to the Canterbury earthquakes; and 3) some initial correlations between the level and extent of earthquake-induced physical damage to buildings, building safety/accessibility issues and the induced human casualties.

    Evaluating Simplified Methods for Liquefaction Assessment for Loss Estimation

    Currently, some catastrophe models used by the insurance industry account for liquefaction by applying a simple factor to shaking-induced losses. The factor is based only on local liquefaction susceptibility and this highlights the need for a more sophisticated approach to incorporating the effects of liquefaction in loss models. This study compares 11 unique models, each based on one of three principal simplified liquefaction assessment methods: liquefaction potential index (LPI) calculated from shear-wave velocity, the HAZUS software method and a method created specifically to make use of USGS remote sensing data. Data from the September 2010 Darfield and February 2011 Christchurch earthquakes in New Zealand are used to compare observed liquefaction occurrences to forecasts from these models using binary classification performance measures. The analysis shows that the best-performing model is the LPI calculated using known shear-wave velocity profiles, which correctly forecasts 78 % of sites where liquefaction occurred and 80 % of sites where liquefaction did not occur, when the threshold is set at 7. However, these data may not always be available to insurers. The next best model is also based on LPI but uses shear-wave velocity profiles simulated from the combination of USGS VS30 data and empirical functions that relate VS30 to average shear-wave velocities at shallower depths. This model correctly forecasts 58 % of sites where liquefaction occurred and 84 % of sites where liquefaction did not occur, when the threshold is set at 4. These scores increase to 78 and 86 %, respectively, when forecasts are based on liquefaction probabilities that are empirically related to the same values of LPI. This model is potentially more useful for insurance since the input data are publicly available. 
HAZUS models, which are commonly used in studies where no local model is available, perform poorly and incorrectly forecast 87 % of sites where liquefaction occurred, even at optimal thresholds. This paper also considers two models (HAZUS and EPOLLS) for estimating the scale of liquefaction in terms of permanent ground deformation, but finds that both perform poorly, with correlations between observations and forecasts lower than 0.4 in all cases. These models therefore potentially provide negligible additional value to loss estimation analysis outside of the regions for which they were developed.
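    As background to the LPI-based models above, the index is commonly computed in the Iwasaki form, integrating a severity factor over the top 20 m of the soil profile and then comparing the result against a threshold such as the values 7 and 4 reported above. The closed-form layer integration and the factor-of-safety profile below are illustrative assumptions, not reproduced from the paper.

```python
# Sketch of a liquefaction potential index (LPI) in the Iwasaki form:
# LPI = integral over 0-20 m of F(z) * w(z) dz, with w(z) = 10 - 0.5 z
# and F = 1 - FS where the factor of safety FS < 1 (else F = 0).
# The layered profile below is invented for illustration.

def lpi(layers):
    """layers: list of (top_m, bottom_m, factor_of_safety), top 20 m."""
    total = 0.0
    for top, bottom, fs in layers:
        if fs >= 1.0:
            continue  # layer not predicted to liquefy; contributes nothing
        # closed-form integral of (1 - fs) * (10 - 0.5 z) over the layer
        total += (1.0 - fs) * (10.0 * (bottom - top)
                               - 0.25 * (bottom**2 - top**2))
    return total

profile = [(0, 3, 1.2), (3, 8, 0.7), (8, 14, 0.9), (14, 20, 1.1)]
score = lpi(profile)
print(score >= 7)  # forecast liquefaction at the threshold of 7: True
```

    With a threshold of 7 (the value quoted above for the best-performing model), this hypothetical profile would be forecast to liquefy; shallow low-FS layers dominate because the depth weight w(z) decays linearly to zero at 20 m.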

    Extreme Events Decision Making in Transport Networks: A Holistic Approach Using Emergency Scenarios and Decision Making Theory

    This paper proposes a novel method to analyse decision making during extreme events. The method is based on decision-making theory and aims at understanding how emergency managers make decisions during disasters. A data collection framework and an analysis method were conceptualized to capture participants’ behaviour, perception and understanding throughout a game-board simulation exercise, which emulates an earthquake disaster scenario affecting transport systems. The method evaluates the participants’ actions in order to identify decision-making patterns, strengths and weaknesses. A set of case studies has shown two typical patterns, namely: a) support immediate rescue; and b) support lifelines recovery. Good decision-making practices relate to objective-oriented decision making, understanding of conflicting priorities and appropriate resource management. Weaknesses are associated with comprehending relationships between community and environment and with projecting future scenarios. Overall, the case studies’ results demonstrate the efficiency and robustness of the proposed method for analysing decision making during disasters.

    Numerical simulation of observed liquefaction phenomena from the 2011 Christchurch (New Zealand) event

    Soil liquefaction often causes damage to various infrastructure assets. Its consequences were made widely evident by the performance of telecommunication network services during the 2010-2011 Canterbury Earthquake Sequence (CES), which struck the Canterbury region of New Zealand. Despite the relevance of the loss of functionality of the telecommunication system, especially during the post-event recovery phase, studies in the literature on network performance under liquefaction-induced damage are still limited. Exploring an unprecedented database of in-situ geotechnical inspections collected after the CES, this research first compares alternative empirical liquefaction-triggering models available in the literature with observation maps. Then, a soil column profile is evaluated adopting a constitutive model based on generalised plasticity (‘modified Pastor-Zienkiewicz’) through an in-house finite-element code. The results obtained from the numerical models are finally cross-checked against the empirical analyses, the existing liquefaction investigation maps and field observations collected in the aftermath of the CES.

    Investigation of a Numerical Approach for Assessing Liquefaction and its Effects on Telecom Systems

    Soil liquefaction has caused substantial infrastructure damage in recent earthquakes. During the 2010-2011 Canterbury Earthquake Sequence (CES) in New Zealand, liquefaction-induced damage to buried cables resulted in service interruption of the telecommunication network. This paper is part of a broader study on the seismic risk of buried infrastructure. It aims to compare numerical and empirical prediction methodologies with the observation maps produced in the aftermath of the Christchurch event. Starting from a description of the New Zealand telecommunication infrastructure and pipelines, the research first explores the vast amount of data and in-situ geotechnical inspections collected after the CES. These data are employed to test several liquefaction-triggering models available in the literature, and results are provided through an exploratory spatial analysis. Then, a numerical simulation of a soil profile, with and without pipelines, from the suburb of Avondale, one of the locations most affected by liquefaction damage, is carried out adopting Byrne’s formulation of the classic Martin-Finn constitutive model in a full dynamic analysis in FLAC-2D. The results obtained from the numerical model are finally cross-checked against the empirical analyses, the existing liquefaction investigation maps, and field observations collected in the aftermath of the event.