
    Analysis of local earthquake data using artificial neural networks


    Artificial neural networks as emerging tools for earthquake detection

    As seismic networks continue to spread and monitoring sensors become more efficient, the abundance of data greatly surpasses the processing capabilities of earthquake interpretation analysts. Earthquake catalogs are fundamental for fault system studies, event modelling, seismic hazard assessment, forecasting, and ultimately, for mitigating seismic risk. These needs have fueled research into the automation of interpretation tasks such as event detection, event identification, hypocenter location, and source mechanism analysis. Over the last forty years, traditional algorithms based on quantitative analyses of seismic traces in the time or frequency domain have been developed to assist interpretation. Alternatively, recent advances are related to the application of Artificial Neural Networks (ANNs), a subset of machine learning techniques that is pushing the state of the art forward in many areas. Appropriately trained ANNs can mimic the interpretation abilities of the best human analysts, avoiding the individual weaknesses of most traditional algorithms and requiring modest computational resources at the operational stage. In this paper, we survey the latest ANN applications to the automatic interpretation of seismic data, with a special focus on earthquake detection and the estimation of onset times. For a comparative framework, we give an insight into the labor of human interpreters, who may face uncertainties in the case of small-magnitude earthquakes.
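    As a toy illustration of the ANN-based detectors this survey covers, the sketch below trains a one-hidden-layer network on synthetic feature vectors (stand-ins for, e.g., spectral amplitudes of waveform windows). The data, labelling rule, architecture and hyperparameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic feature vectors per waveform window (think spectral amplitudes);
# label 1 = "earthquake", 0 = "noise".  The labelling rule is artificial.
X = rng.normal(size=(200, 8))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

# One hidden layer with sigmoid activations, trained by plain gradient descent.
W1 = rng.normal(scale=0.5, size=(8, 6)); b1 = np.zeros(6)
W2 = rng.normal(scale=0.5, size=(6, 1)); b2 = np.zeros(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    h = sig(X @ W1 + b1)            # hidden-layer activations
    p = sig(h @ W2 + b2).ravel()    # detection probability per window
    err = p - y                     # output-layer gradient (cross-entropy)
    gh = (err[:, None] @ W2.T) * h * (1 - h)
    W2 -= h.T @ err[:, None] / len(y); b2 -= err.mean()
    W1 -= X.T @ gh / len(y);           b1 -= gh.mean(axis=0)

acc = ((p > 0.5) == y).mean()       # training accuracy of the toy detector
```

    Real detectors in the surveyed literature are of course far larger and are trained on labelled seismograms rather than random vectors; the point here is only the mechanics of a trainable detector.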

    Interpretation of broad-band seismograms


    Consistent phase picking for regional tomography models: application to the greater Alpine region

    The resolution and reliability of tomographic velocity models strongly depend on the quality and consistency of the available traveltime data. Arrival times routinely picked by network analysts on a day-to-day basis often carry a high level of noise due to mispicks and other inconsistencies, particularly in error assessment. Furthermore, tomographic studies at regional scales require merging phase picks from several networks. Since a common quality assessment is not usually available for phase data provided by different networks, additional inconsistencies are introduced by the merging process. Considerable improvement in the quality of phase data can only be achieved through complete repicking of seismograms. Considering the amount of data necessary for regional high-resolution tomography, algorithms combining accurate picking with automated error assessment represent the best tool to derive large suitable data sets. In this work, we present procedures for consistent automated and routine picking of P-wave arrival times at local to regional scales, including consistent picking-error assessment. Quality-attributed automatic picks are derived with the MPX picking system. The application to earthquakes in the greater Alpine region demonstrates the potential of such a repicking approach. The final data set consists of more than 13 000 high-quality first arrivals and is used to derive regional 1-D and preliminary 3-D P-wave models of the greater Alpine region. The comparison with a tomographic model based on routine phase data extracted from the ISC Bulletin illustrates the effects on tomographic results of the consistency and reliability of our high-quality data set.
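    Automated pickers of this kind often rest on simple change-point statistics. As an illustrative sketch (not the MPX algorithm itself), an Akaike-information-criterion style onset picker can be written as follows; the synthetic trace and noise levels are assumptions for demonstration.

```python
import numpy as np

def aic_pick(trace):
    """Akaike-style onset pick: choose the sample that best splits the trace
    into a low-variance 'noise' segment and a higher-variance 'signal' segment."""
    n = len(trace)
    aic = np.full(n, np.inf)
    for k in range(2, n - 2):
        v1, v2 = np.var(trace[:k]), np.var(trace[k:])
        aic[k] = k * np.log(v1 + 1e-12) + (n - k) * np.log(v2 + 1e-12)
    return int(np.argmin(aic))

# Synthetic trace: quiet noise, then a stronger arrival starting at sample 300.
rng = np.random.default_rng(1)
trace = np.concatenate([0.1 * rng.normal(size=300), rng.normal(size=200)])
onset = aic_pick(trace)
```

    A production repicker would additionally attribute a quality class to each pick, e.g. from the sharpness of the AIC minimum, which is the consistency property the abstract emphasizes.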

    Automatic seismic event recognition and later phase identification for broadband seismograms

    Knowledge of the patterns of frequently observed seismic phases associated with specific distances and depths has been well developed and applied by seismologists (see, e.g., Richter, 1958; Kulhánek, 1990). However, until now, the expertise of recognizing seismic event patterns for teleseisms has not been translated into an automatic processing procedure. A new approach is developed to automate this kind of heuristic human expertise in order to provide a means of improving preliminary event locations from a single site. An automatic interpretation system exploiting three-component broadband seismograms is used to recognize the pattern of seismic arrivals associated with the presence of a seismic event in real time, accompanied by an identification of the individual phases. For a single station, such a real-time analysis can be used to provide a preliminary estimate of the location of the event. The inputs to the interpretation process are a set of features for detected phases produced by another real-time phase analyzer. The combinations of these features are investigated using a novel approach to the construction of an expert system. The automatic system exploits expert information to test likely assumptions about phase character and hence epicentral distance and depth. Some hypotheses about the nature of the event are rejected as implausible, and for the remainder, an assessment is given of the likelihood of the interpretation based on the fit to the character of all available information. This event-recognition procedure provides an effective and feasible means of interpreting events at all distances and of discriminating between hundreds of different possible classes of patterns even when the observation is incomplete. The procedure is based on “assumption trees” and provides a useful tool for classification problems in which a number of factors have to be identified. The control set of expert knowledge used in testing hypotheses is maintained separately from the computational algorithm used in the assumption search; in consequence, the information base can be readily updated.
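    The hypothesis-testing idea behind the assumption trees can be sketched in a few lines: propose candidate distances, reject those whose predicted phase intervals misfit the observations, and rank the survivors. The crude S−P rule of thumb (distance ≈ 8 km per second of S−P time) and all numbers below are illustrative assumptions, not values from the paper.

```python
# Crude rule of thumb: epicentral distance ≈ 8 km per second of S-P time,
# i.e. predicted S-P time = distance / 8.  Purely illustrative.
def sp_time(distance_km):
    return distance_km / 8.0

observed_sp = 12.5                    # seconds between assumed P and S picks
hypotheses = [50, 80, 100, 130, 200]  # candidate epicentral distances (km)
tolerance = 2.0                       # max acceptable misfit (s)

# Test each hypothesis, reject implausible branches, rank the rest by fit.
surviving = sorted(
    ((d, abs(sp_time(d) - observed_sp)) for d in hypotheses
     if abs(sp_time(d) - observed_sp) <= tolerance),
    key=lambda t: t[1],
)
```

    The paper's system applies this prune-and-rank pattern over many phase features at once, with the expert rules kept in a knowledge base separate from the search code.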

    Optimizing event detection and location in low‐seismicity zones: case study from western Switzerland

    Obtaining robust event catalogs in regions of low seismicity can be time-consuming, because quality events are less frequent and sensor coverage is generally sparse. Optimizing event detection and location in such regions is all the more crucial because these areas tend to host a higher density of sensitive infrastructures. The methodology proposed consists of reprocessing existing data recorded by a permanent network and boosting the final catalog resolution by temporarily deploying portable sparse mini-arrays in the target area. Sonogram analysis is applied on both existing and new datasets to detect waveforms barely emerging from the background noise. A visual interactive event analysis module is used to test for phase picking, event association, waveform cross correlation, and location ambiguities. It also estimates back azimuth and slowness when sparse array data are available. The method is applied to a low-seismicity region in the western Swiss Molasse basin where two sparse mini-arrays were temporarily deployed. The detection of earthquakes is improved by a factor of 9 when reprocessing four years (2009–2013) of available data recorded by two accelerometers and one broadband station in a 2500 km² target area. Magnitude estimations are empirically calibrated over four magnitude units, down to −1.7 ML, lowering the existing catalog completeness by close to one magnitude unit. After validating picking and location accuracies with a standard residual-based scheme, 174 newly detected events are relocated, illuminating zones of previously undetected microseismic activity.

    Automated seismogram analysis for the tripartite BUG array: an introduction

    The tasks for automated epicenter determination in the Bochum University Germany (BUG) small array are subdivided among different signal-processing modules that utilize knowledge-based approaches. The modules are designed with complementary advantages to yield the best system performance in an interdependent architecture. This “bottom-up” solution proceeds from reliable waveform parameters to simpler interpretation rules than in seismic expert systems, which must cope with traditional detectors as erratic front ends.

    Temporal and spectral pattern recognition for detection and combined network and array waveform coherence analysis for location of seismic events

    The reliable automatic detection, location and classification of seismic events still poses great challenges if only a few sensors record an event and/or the signal-to-noise ratio is very low. This study first examines, compares and evaluates the most widely used algorithms for automatic processing on a diverse set of seismic datasets (e.g. from induced seismicity and nuclear-test-ban verification experiments). A synthesis of state-of-the-art algorithms is given. Several single-station event detection and phase picking algorithms are tested, followed by a comparison of single-station waveform cross-correlation and spectral pattern recognition. Coincidence analysis is investigated afterwards to demonstrate to what extent false alarms can be ruled out in sensor networks of multiple stations. It is then shown how the use of seismic (mini) arrays in diverse configurations can improve these results considerably through the use of waveform coherence. In a second step, two concepts are presented which combine the previously analysed algorithmic building blocks in a new way. The first concept is seismic event signal clustering by unsupervised learning, which allows event identification with only one sensor. The study serves as a base-level investigation to explore the limits of elementary seismic monitoring with one single vertical-component seismic sensor and shows the level of information which can be extracted from a single station. It is investigated how single-station event signal similarity clusters relate to geographic hypocenter regions and common source processes. Typical applications arise in local seismic networks where reliable ground truth from a dense temporary network precedes or follows a sparse (permanent) installation. The test dataset comprises a three-month subset from a field campaign to map subduction below northern Chile, the Project for the Seismological Investigation of the Western Cordillera (PISCO).
    Due to favourable ground-noise conditions in the Atacama desert, the dataset contains an abundance of shallow and deep earthquakes, and many quarry explosions. Often event signatures overlap, posing a challenge to any signal-processing scheme. Pattern recognition must work on reduced seismograms to restrict the parameter space. Continuous parameter extraction based on noise-adapted spectrograms was chosen instead of discrete representation by, e.g., amplitudes, onset times, or spectral ratios, to ensure consideration of potentially hidden features. Visualization of the derived feature vectors for human inspection and template-matching algorithms was thereby possible. Because event classes should comprise earthquake regions regardless of magnitude, signal clustering based on amplitudes is prevented by proper normalization of the feature vectors. Principal component analysis (PCA) is applied to further reduce the number of features used to train a self-organizing map (SOM). The SOM arranges prototypes of each event class topologically in a 2D map. Overcoming the restrictions of this black-box approach, the arranged prototypes can be transformed back to spectrograms to allow for visualization and interpretation of event classes. The final step relates prototypes to ground-truth information, confirming the potential of automated, coarse-grain hypocenter clustering based on single-station seismograms. The approach was tested by a two-fold cross-validation whereby multiple sets of feature vectors from half the events are compared by a one-nearest-neighbour classifier in combination with a Euclidean distance measure, resulting in an overall correct geographic separation rate of 95.1% for coarse clusters and 80.5% for finer clusters (86.3% for a more central station). The second concept shows a new method to combine seismic networks of single stations and arrays for automatic seismic event location.
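    The first concept's pipeline (amplitude-normalised feature vectors, PCA reduction, one-nearest-neighbour classification with a Euclidean distance, two-fold split) can be sketched on artificial data. The SOM stage is omitted here, and everything numeric below is synthetic, not from the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Artificial "feature vectors" (stand-ins for flattened spectrograms) drawn
# from two fake source regions.
region_a = rng.normal(loc=0.0, size=(20, 50))
region_b = rng.normal(loc=1.5, size=(20, 50))
X = np.vstack([region_a, region_b])
labels = np.array([0] * 20 + [1] * 20)

# Normalise so clustering is not driven by amplitude/magnitude.
X = X / np.linalg.norm(X, axis=1, keepdims=True)

# PCA via SVD: keep the leading five components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:5].T

# Two-fold split with a one-nearest-neighbour, Euclidean-distance classifier.
train, test = Z[::2], Z[1::2]
ytr, yte = labels[::2], labels[1::2]
dist = np.linalg.norm(test[:, None, :] - train[None, :, :], axis=2)
pred = ytr[dist.argmin(axis=1)]
accuracy = (pred == yte).mean()
```

    The normalisation step mirrors the thesis's design choice that event classes should reflect source regions, not magnitudes.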
    Having explored the capabilities of single-station algorithms, this section explores the capabilities of algorithms for small local seismic networks. Traffic-light systems for induced-seismicity monitoring in particular rely on the real-time automated location of weak events. These events suffer from low signal-to-noise ratios and noise spikes due to the industrial setting. Conventional location methods rely on independent picking of first arrivals from seismic wave onsets in the recordings of single stations. Picking is done separately and without feedback from the actual location algorithm. With low signal-to-noise ratios and local events, the association of onsets becomes error-prone, especially for S-phase onsets, which are overlaid by coda from previous phases. If the recording network is small or only a few phases can be associated, single wrong associations can lead to large errors in hypocenter locations and magnitude. Event location by source scanning, which was established over the last two decades, can provide more robust results. Source scanning uses maxima from a travel-time-corrected stack of a characteristic function of the full waveforms on a predefined location grid. This study investigates how source scanning can be extended and improved by integrating information from seismic arrays, i.e. waveform stacking and the Fisher ratio. These array methods rely on the coherency of the raw filtered waveforms, while traditional source scanning uses a characteristic function to obtain coherency from otherwise incoherent waveforms between distant stations. The short-term-average to long-term-average ratio (STA/LTA) serves as the characteristic function, and single-station vertical-component traces are used for P-phases and radial and transverse components for S-phases. For array stations, the STA/LTA of the stacked vertical seismogram, which is furthermore weighted by the STA/LTA of the Fisher ratio, dependent on back azimuth and slowness, is utilized for P-phases.
    In the chosen example, the extension by array-processing techniques can reduce the mean error relative to manually determined hypocenters by up to a factor of 2.9, resolve ambiguities and further constrain the location.
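    The STA/LTA characteristic function at the heart of the source scanner can be sketched as follows; window lengths and the synthetic trace are illustrative choices, not the study's parameters.

```python
import numpy as np

def sta_lta(x, nsta, nlta):
    """STA/LTA characteristic function on the squared trace; cf[k] compares
    short and long windows that both end at sample k + nlta - 1."""
    e = x ** 2
    csum = np.concatenate([[0.0], np.cumsum(e)])
    sta = (csum[nsta:] - csum[:-nsta]) / nsta   # short-term averages
    lta = (csum[nlta:] - csum[:-nlta]) / nlta   # long-term averages
    n = min(len(sta), len(lta))
    return sta[-n:] / (lta[-n:] + 1e-12)

# Synthetic vertical-component trace with one transient "arrival" at sample 600.
rng = np.random.default_rng(3)
trace = rng.normal(scale=0.1, size=1000)
trace[600:650] += rng.normal(scale=1.0, size=50)

cf = sta_lta(trace, nsta=20, nlta=200)
peak = int(np.argmax(cf))
detect = peak + 200 - 1   # sample index where the ratio peaks
```

    In source scanning, such a characteristic function is computed per station, shifted by the predicted travel time to each grid node, and stacked; the stack maximum marks the hypocenter estimate.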

    Single-trace detection and array-wide coincidence association of local earthquakes and explosions

    Local earthquakes and explosions can be recognized automatically for the Bochum University Germany (BUG) small array by a sequence of knowledge-based approaches performed in the field and in the central hub. In single-trace detection, the recognition is based on sonogram patterns adapted for a wide variety of noise conditions at all array sites. The adaptation is performed in two steps: first, each pattern is adjusted to the actual signal energy; second, all those weaker phases that fall below the new detection threshold are excluded. In the hub, a rule-based approach performs the coincidence evaluation; it is described by its 14 rules and their implicit assumptions. This scheme was tested on 1 month of data. The knowledge base consisted of 12 seismograms transformed automatically into the detector's internal knowledge representation of sonograms. The results show excellent performance for noise rejection and quarry-blast recognition; for earthquake clustering, an 85% success rate is achieved. The network success, usually below the best single-station performance, could be improved above any single-station optimum. Results of the rule-based approach are compared to the routine processing of the same data by Walsh detection and the ‘2 of 4’ coincidence voting.
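    A minimal sketch of '2 of 4'-style coincidence voting over single-trace triggers; the station names, trigger times and coincidence window below are hypothetical, and the rule set is reduced to a single vote count rather than the 14 rules of the paper.

```python
# Hypothetical single-station trigger times (s) at four array sites.
triggers = {
    "BUG1": [12.4, 87.0],
    "BUG2": [12.6],
    "BUG3": [55.2],          # lone trigger: likely a local disturbance
    "BUG4": [12.5, 87.3],
}
window = 1.0      # max inter-station delay (s) for one event
min_votes = 2     # '2 of 4'-style voting threshold

# Flatten, sort, and sweep: any cluster with >= min_votes distinct stations
# inside the window is declared a network event.
flat = sorted((t, s) for s, ts in triggers.items() for t in ts)
events, cluster = [], []
for t, s in flat:
    if cluster and t - cluster[0][0] > window:
        if len({st for _, st in cluster}) >= min_votes:
            events.append(cluster[0][0])
        cluster = []
    cluster.append((t, s))
if len({st for _, st in cluster}) >= min_votes:
    events.append(cluster[0][0])
```

    Here the lone 55.2 s trigger is voted out while the two multi-station clusters survive, which is exactly the false-alarm suppression the coincidence evaluation provides.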