SiSeRHMap v1.0: A simulator for mapped seismic response using a hybrid model
SiSeRHMap is a computerized methodology capable of drawing up prediction maps of seismic response. It is based on a hybrid model which combines different approaches and models in a new and non-conventional way. These approaches and models are organized in a code architecture composed of five interdependent modules. A GIS (Geographic Information System) Cubic Model (GCM), which is a layered computational structure based on the concept of lithodynamic units and zones, aims at reproducing a parameterized layered subsoil model. A metamodeling process confers a hybrid nature on the methodology. In this process, one-dimensional linear equivalent analysis produces acceleration response spectra for shear wave velocity–thickness profiles, defined as trainers, which are randomly selected in each zone. Subsequently, a numerical adaptive simulation model (Spectra) is optimized on the above trainer acceleration response spectra by means of a dedicated Evolutionary Algorithm (EA) and the Levenberg–Marquardt Algorithm (LMA) as the final optimizer. In the final step, the GCM Maps Executor module produces a serial map-set of stratigraphic seismic response at different periods by grid-solving the calibrated Spectra model. In addition, the spectral topographic amplification is computed by means of a numerical prediction model, built to match the results of numerical simulations of isolated reliefs using GIS topographic attributes. In this way, different sets of seismic response maps are developed, on which maps of seismic design response spectra are also defined by means of an enveloping technique.
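The final calibration step — a coarse evolutionary search refined by Levenberg–Marquardt — can be sketched with SciPy, which implements the LM algorithm in `least_squares(method="lm")`. The spectral shape `spectra_model` below is a made-up stand-in; the abstract does not specify the functional form of the Spectra model.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical stand-in for the Spectra model: a smooth spectral-amplification
# shape with peak amplitude A, peak period T0, and width w. The real model
# form is not given in the abstract.
def spectra_model(T, A, T0, w):
    return 1.0 + A * np.exp(-(np.log(T / T0)) ** 2 / (2 * w ** 2))

periods = np.logspace(-1, 0.5, 50)          # 0.1 s .. ~3.2 s
true = spectra_model(periods, A=2.5, T0=0.6, w=0.35)
rng = np.random.default_rng(0)
trainer = true * (1 + 0.05 * rng.standard_normal(periods.size))  # noisy trainer spectrum

def residuals(p):
    A, T0, w = p
    return spectra_model(periods, A, T0, w) - trainer

# Levenberg–Marquardt as the final optimizer; in the methodology the starting
# point would come from the evolutionary search rather than a fixed guess.
fit = least_squares(residuals, x0=[1.0, 0.5, 0.5], method="lm")
A, T0, w = fit.x
```

Grid-solving then means evaluating the calibrated model at every GIS cell, one parameter set per zone.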
Joint time-frequency representation of simulated earthquake accelerograms via the adaptive chirplet transform
Seismic accelerograms are inherently nonstationary signals, since both the intensity and the frequency content of seismic events evolve in time. The adaptive chirplet transform is a signal processing technique for the joint time-frequency representation of nonstationary data. Analysis of a signal via the adaptive chirplet decomposition, in conjunction with the Wigner-Ville distribution, yields the so-called adaptive spectrogram, which constitutes a valid representation of the signal in the time-frequency plane. In this paper the potential of this technique for capturing the temporal evolution of the frequency content of strong ground motions is assessed. In this regard, simulated nonstationary earthquake accelerograms compatible with an exponentially modulated and appropriately filtered Kanai-Tajimi spectrum are processed using the adaptive chirplet transform. These are samples of a random process whose evolutionary power spectrum can be represented by an analytical expression. It is suggested that the average of the ensemble of adaptive chirplet spectrograms can be construed as an estimate of the underlying evolutionary power spectrum. The numerical results show that the estimated evolutionary power spectrum is indeed in good agreement with the one defined analytically. This points out the potential of adaptive chirplet analysis as a tool for capturing the localized frequency content of arbitrary databanks of real seismic accelerograms.
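A minimal sketch of the kind of input used here: a stationary Kanai-Tajimi filtered process, generated by spectral representation and shaped by an exponential envelope. All parameter values (filter frequency and damping, envelope constants) are illustrative, not taken from the paper.

```python
import numpy as np

# Kanai–Tajimi power spectral density; wg, zg, S0 are illustrative values.
def kanai_tajimi_psd(w, wg=15.0, zg=0.6, S0=1.0):
    r = (w / wg) ** 2
    return S0 * (1 + 4 * zg**2 * r) / ((1 - r) ** 2 + 4 * zg**2 * r)

def simulate_accelerogram(duration=20.0, dt=0.01, seed=0):
    rng = np.random.default_rng(seed)
    t = np.arange(0, duration, dt)
    w = np.arange(0.5, 50.0, 0.5)                  # frequency grid (rad/s)
    dw = w[1] - w[0]
    # Spectral representation: superpose cosines with random phases and
    # amplitudes matching the target PSD.
    amps = np.sqrt(2 * kanai_tajimi_psd(w) * dw)
    phases = rng.uniform(0, 2 * np.pi, w.size)
    stationary = (amps * np.cos(np.outer(t, w) + phases)).sum(axis=1)
    # Exponential modulation makes the intensity nonstationary in time.
    envelope = np.exp(-0.15 * t) - np.exp(-0.5 * t)
    return t, envelope * stationary
```

The envelope makes the intensity peak early and decay, so the strong-motion phase is concentrated in the first seconds of the record.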
Analysis of earthquake hazards prediction with multivariate adaptive regression splines
Earthquake research has not yet yielded promising results, either in identifying causes or in revealing the timing of future events. Many methods have been developed, including data-mining approaches such as hybrid neural networks, support vector regression, fuzzy modeling, and clustering. Earthquake prediction involves uncertain parameters, and an appropriate method is needed to obtain optimal results. Predictive data-mining methods are generally grouped into two categories: parametric and non-parametric. This study uses a non-parametric method, multivariate adaptive regression splines (MARS), with conic multivariate adaptive regression splines (CMARS) as the backward stage of the MARS algorithm. After parameter testing and analysis, the study obtained a mathematical model with 16 basis functions (BF), of which 12 contribute to the model and 4 do not. In terms of variable contribution, the epicenter distance contributes 100 percent, the magnitude 31.1 percent, the location temperature 5.5 percent, and the depth 3.5 percent. The prediction analysis concludes that the areas in Lombok with the highest earthquake hazard level are Malaka, Genggelang, Pemenang, Tanjung, Tegal Maja, Senggigi, Mangsit, Meninting, and Malimbu.
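The basis functions MARS builds are hinge functions of the form max(0, x − t) and max(0, t − x), and the fitted model is a weighted sum of such terms. A minimal illustration, with knots and coefficients made up for demonstration (MARS selects them adaptively from the data):

```python
import numpy as np

# A hinge basis function: max(0, x - knot) for direction +1,
# max(0, knot - x) for direction -1.
def hinge(x, knot, direction):
    return np.maximum(0.0, direction * (x - knot))

def mars_predict(x, terms, intercept=0.0):
    # terms: list of (coefficient, knot, direction) triples
    return intercept + sum(c * hinge(x, k, d) for c, k, d in terms)

x = np.linspace(0, 10, 5)
terms = [(1.5, 3.0, +1),   #  1.5 * max(0, x - 3)
         (-0.8, 7.0, +1),  # -0.8 * max(0, x - 7)
         (0.4, 2.0, -1)]   #  0.4 * max(0, 2 - x)
y = mars_predict(x, terms, intercept=1.0)
```

The forward pass of MARS adds such terms greedily; the backward stage (here, CMARS) prunes the ones that do not contribute, which is how the study arrives at 12 contributing and 4 non-contributing basis functions.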
Real-time Loss Estimation for Instrumented Buildings
Motivation. A growing number of buildings have been instrumented to measure and record
earthquake motions and to transmit these records to seismic-network data centers to be archived and
disseminated for research purposes. At the same time, sensors are growing smaller, less expensive to
install, and capable of sensing and transmitting other environmental parameters in addition to
acceleration. Finally, recently developed performance-based earthquake engineering methodologies
employ structural-response information to estimate probabilistic repair costs, repair durations, and
other metrics of seismic performance. The opportunity presents itself therefore to combine these
developments into the capability to estimate automatically in near-real-time the probabilistic seismic
performance of an instrumented building, shortly after the cessation of strong motion. We refer to
this opportunity as (near-) real-time loss estimation (RTLE).
Methodology. This report presents a methodology for RTLE for instrumented buildings. Seismic
performance is to be measured in terms of probabilistic repair cost, precise location of likely physical
damage, operability, and life-safety. The methodology uses the instrument recordings and a Bayesian
state-estimation algorithm called a particle filter to estimate the probabilistic structural response of
the system, in terms of member forces and deformations. The structural response estimate is then
used as input to component fragility functions to estimate the probabilistic damage state of structural
and nonstructural components. The probabilistic damage state can be used to direct structural
engineers to likely locations of physical damage, even if they are concealed behind architectural
finishes. The damage state is used with construction cost-estimation principles to estimate
probabilistic repair cost. It is also used as input to a quantified, fuzzy-set version of the FEMA-356
performance-level descriptions to estimate probabilistic safety and operability levels.
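The step from probabilistic structural response to probabilistic damage state can be sketched with a lognormal component fragility function, the usual form in performance-based earthquake engineering. The median drift and dispersion below are illustrative values, not numbers from the report.

```python
import numpy as np
from scipy.stats import lognorm

# Component fragility: probability of reaching a damage state given a demand
# (e.g. peak interstory drift ratio), modeled as a lognormal CDF.
# median and beta (logarithmic dispersion) are illustrative.
def fragility(demand, median=0.01, beta=0.4):
    # P(damage | demand) = Phi(ln(demand / median) / beta)
    return lognorm.cdf(demand, s=beta, scale=median)

# Propagate a probabilistic response estimate (e.g. drift samples from the
# particle filter) to a damage probability by averaging over the samples.
rng = np.random.default_rng(1)
drift_samples = rng.lognormal(mean=np.log(0.008), sigma=0.3, size=10_000)
p_damage = fragility(drift_samples).mean()
```

Summing expected repair costs of components over their damage-state probabilities then gives the probabilistic repair-cost estimate described above.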
CUREE demonstration building. The procedure for estimating damage locations, repair costs, and
post-earthquake safety and operability is illustrated in parallel demonstrations by CUREE and
Kajima research teams. The CUREE demonstration is performed using a real 1960s-era, 7-story, nonductile
reinforced-concrete moment-frame building located in Van Nuys, California. The building is
instrumented with 16 channels at five levels: ground level, floors 2, 3, 6, and the roof. We used the
records obtained after the 1994 Northridge earthquake to hindcast performance in that earthquake.
The building is analyzed in its condition prior to the 1994 Northridge Earthquake. It is found that,
while hindcasting of the overall system performance level was excellent, prediction of detailed damage
locations was poor, implying that either actual conditions differed substantially from those shown on
the structural drawings, or inappropriate fragility functions were employed, or both. We also found
that Bayesian updating of the structural model using observed structural response above the base of
the building adds little information to the performance prediction. The reason is probably that
structural uncertainties have only secondary effect on performance uncertainty, compared with the
uncertainty in assembly damageability as quantified by their fragility functions. The implication is
that real-time loss estimation is not sensitive to structural uncertainties (saving costly multiple
simulations of structural response), and that real-time loss estimation does not benefit significantly
from installing measuring instruments other than those at the base of the building.
Kajima demonstration building. The Kajima demonstration is performed using a real 1960s-era
office building in Kobe, Japan. The building, a 7-story reinforced-concrete shearwall building, was not
instrumented in the 1995 Kobe earthquake, so instrument recordings are simulated. The building is
analyzed in its condition prior to the earthquake. It is found that, while hindcasting of the overall
repair cost was excellent, prediction of detailed damage locations was poor, again implying either that
as-built conditions differ substantially from those shown on structural drawings, or that
inappropriate fragility functions were used, or both. We find that the parameters of the detailed
particle filter needed significant tuning, which would be impractical in actual application. Work is
needed to prescribe values of these parameters in general.
Opportunities for implementation and further research. Because much of the cost of applying
this RTLE algorithm results from the cost of instrumentation and the effort of setting up a structural
model, the readiest application would be to instrumented buildings whose structural models are
already available, and to apply the methodology to important facilities. It would be useful to study
under what conditions RTLE would be economically justified. Two other interesting possibilities for
further study are (1) to update performance using readily observable damage; and (2) to quantify the
value of information for expensive inspections, e.g., if one inspects a connection with a modeled 50%
failure probability and finds that the connection is undamaged, is it necessary to examine one with a 10% failure probability?
Spectral-element simulations of long-term fault slip: Effect of low-rigidity layers on earthquake-cycle dynamics
We develop a spectral element method for the simulation of long-term histories of spontaneous seismic and aseismic slip on faults subjected to tectonic loading. Our approach reproduces all stages of earthquake cycles: nucleation and propagation of earthquake rupture, postseismic slip, and interseismic creep. We apply the developed methodology to study the effects of low-rigidity layers on the dynamics of the earthquake cycle in 2-D. We consider two cases: small (M ~ 1) earthquakes on a fault surrounded by a damaged fault zone, and large (M ~ 7) earthquakes on a vertical strike-slip fault that cuts through shallow low-rigidity layers. Our results indicate how the source properties of repeating earthquakes are affected by the presence of a damaged fault zone with low rigidity. Compared to faults in homogeneous media, we find (1) reduction in the earthquake nucleation size, (2) amplification of slip rates during dynamic rupture propagation, (3) larger recurrence interval, and (4) smaller amount of aseismic slip. Based on linear stability analysis, we derive a theoretical estimate of the nucleation size as a function of the width and rigidity reduction of the fault zone layer, which is in good agreement with simulated nucleation sizes. We further examine the effects of vertically stratified layers (e.g., sedimentary basins) on the nature of shallow coseismic slip deficit. Our results suggest that low-rigidity shallow layers alone do not lead to coseismic slip deficit. While the low-rigidity layers result in lower interseismic stress accumulation, they also cause dynamic amplification of slip rates, with the net effect on slip being nearly zero.
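The scaling behind finding (1) can be illustrated with a standard rate-and-state nucleation-size estimate. The sketch below uses the Rubin and Ampuero (2005) expression h* = (2/π) μ b d_c / ((b − a)² σ) as a stand-in; the paper derives its own fault-zone-specific estimate, which this does not reproduce, and all parameter values are illustrative.

```python
from math import pi

# Standard rate-and-state nucleation-size estimate (Rubin & Ampuero, 2005):
# h* = (2/pi) * mu * b * d_c / ((b - a)^2 * sigma).
# a, b: rate-and-state friction parameters; d_c: characteristic slip distance;
# sigma: effective normal stress. Values below are illustrative only.
def nucleation_size(mu, a=0.010, b=0.015, d_c=1e-4, sigma=50e6):
    return (2 / pi) * mu * b * d_c / ((b - a) ** 2 * sigma)

h_host = nucleation_size(mu=30e9)  # intact host rock, mu = 30 GPa
h_dam = nucleation_size(mu=15e9)   # damaged fault zone, rigidity halved
# The estimate is linear in mu, so halving the rigidity halves h*, consistent
# with the reduced nucleation size reported for faults in damage zones.
```

Since h* is linear in the effective rigidity, a compliant fault zone lets smaller patches nucleate unstably, which is one reason damaged fault zones can host smaller repeating earthquakes.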
Structural System Parameter Estimation Based on Seismic Response Using an Adaptive Particle Filter and Ensemble Learning
Master's thesis (M.S.), Department of Civil and Environmental Engineering, College of Engineering, Seoul National University, February 2019. Advisor: Junho Song.
Social demand for accurate post-evaluation and monitoring of infrastructure has been increasing since the earthquakes in Gyeongju in 2016 and Pohang in 2017. To increase the accuracy of post-evaluation and monitoring, system identification through accurate estimation of the system equation (i.e., the system parameters) is essential. However, direct estimation of the system parameters is time-consuming and costly, making a rapid response to a disaster impossible; indirect estimation methods that identify the system from limited data have therefore been developed.
Among machine learning methods based on data assimilation, a sampling-based particle filter has been used to estimate systems with strong nonlinearity, achieving high accuracy in the estimation of system parameters. However, damage such as stiffness degradation that occurs during extreme events can cause sudden changes in the system parameters. Existing methods perform poorly in this case because they assume that the system parameters are constant over time.
In this study, an adaptive particle filter is introduced to accurately estimate system parameters that change suddenly during extreme events. The adaptive particle filter artificially increases the parameter-estimation noise of the particle filter according to the situation, so that system parameters that change over time as damage occurs can be tracked. Furthermore, we propose a modified adaptive particle filter that assigns a different parameter-estimation noise to each degree of freedom based on the measurements obtained there.
However, while the adaptive particle filter reduces the bias of the estimate, the added parameter-estimation noise increases its variance. Therefore, this study introduces an ensemble learning method that obtains a final estimate by aggregating the estimates from available parallel algorithms. In particular, Bootstrap Aggregating (Bagging), which aggregates the estimates from the parallel algorithms with equal weights, is used to reduce the variance of the estimate.
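The two ideas — extra parameter-evolution noise so the particle cloud can follow a sudden change, and equal-weight averaging over parallel filters to cancel the variance that noise adds — can be sketched on a toy problem. This is an illustration of the general mechanism, not the thesis's formulation; all values are made up.

```python
import numpy as np

# Bootstrap particle filter tracking a scalar parameter that is observed
# directly with noise. The artificial random-walk step (param_noise) is the
# "adaptive" ingredient: it lets particles migrate after a sudden change.
def run_filter(y, n_particles=500, param_noise=0.05, meas_noise=0.1, seed=0):
    rng = np.random.default_rng(seed)
    particles = rng.normal(1.0, 0.2, n_particles)  # initial parameter guess
    estimates = []
    for obs in y:
        particles = particles + rng.normal(0, param_noise, n_particles)
        w = np.exp(-0.5 * ((obs - particles) / meas_noise) ** 2)
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)  # resampling
        particles = particles[idx]
        estimates.append(particles.mean())
    return np.array(estimates)

rng = np.random.default_rng(42)
true = np.r_[np.full(50, 1.0), np.full(50, 0.6)]  # stiffness drop at step 50
y = true + rng.normal(0, 0.1, true.size)          # noisy measurements
runs = np.stack([run_filter(y, seed=s) for s in range(10)])
bagged = runs.mean(axis=0)                        # Bagging: equal-weight average
```

Each filter alone is noisy because of the inflated process noise; the bagged trajectory is smoother while still following the jump at step 50.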
We expect that more accurate and effective post-evaluation and monitoring of infrastructure can be carried out, and that effective maintenance and repair will be possible through an accurate diagnosis of structural damage using only limited information such as the structural response.
Table of Contents
1. Introduction
1.1. Research Background
1.2. Research Objectives and Scope
1.3. Outline
2. Theoretical Background
2.1. Estimation in the State Space
2.2. Particle Filter
2.3. Adaptive Particle Filter
3. Proposed Modified Adaptive Particle Filter with Ensemble Learning Method
3.1. Modified Adaptive Particle Filter
3.2. Ensemble Learning Method
4. Verification of Proposed Method
4.1. Numerical Example
4.1.1. Target Structural System
4.1.2. Ground Acceleration
4.1.3. Stiffness Degradation
4.2. Verification Results and Discussion
4.2.1. Original Particle Filter
4.2.2. Modified Adaptive Particle Filter
4.2.3. Modified Adaptive Particle Filter with Bagging
5. Conclusion
References
Korean Abstract (국문 초록)