
    Earthquake Arrival Association with Backprojection and Graph Theory

    The association of seismic wave arrivals with causative earthquakes becomes progressively more challenging as arrival detection methods become more sensitive, and particularly when earthquake rates are high. For instance, seismic waves arriving across a monitoring network from several sources may overlap in time, false arrivals may be detected, and some arrivals may be of unknown phase (e.g., P- or S-waves). We propose an automated method to associate arrivals with earthquake sources and obtain source locations applicable to such situations. To do so, we use a pattern detection metric based on the principle of backprojection to reveal candidate sources, followed by graph-theory-based clustering and an integer linear optimization routine to associate arrivals with the minimum number of sources necessary to explain the data. This method solves for all sources and phase assignments simultaneously, rather than in a sequential greedy procedure as is common in other association routines. We demonstrate our method on both synthetic and real data from the Integrated Plate Boundary Observatory Chile (IPOC) seismic network of northern Chile. For the synthetic tests we report results for cases of varying complexity, including rates of 500 earthquakes/day and 500 false arrivals/station/day, for which we measure a true positive detection accuracy of > 95%. For the real data we develop a new catalog spanning January 1, 2010 to December 31, 2017 containing 817,548 earthquakes, with an average detection rate of 279 earthquakes/day and a magnitude of completeness of ~M1.8. A subset of detections is identified as sources related to quarry and industrial site activity, and we also detect thousands of foreshocks and aftershocks of the April 1, 2014 Mw 8.2 Iquique earthquake. During the highest rates of aftershock activity, > 600 earthquakes/day are detected in the vicinity of the Iquique earthquake rupture zone.
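
The backprojection step described above can be sketched compactly. The following is a minimal illustration, not the authors' code: picks are stacked onto a grid of candidate sources and origin times using predicted travel times, and grid cells where many picks agree become candidate earthquakes. The straight-ray constant-velocity model, the Gaussian kernel, and all names and values here are simplifying assumptions; the full method additionally applies graph-theory-based clustering and integer linear optimization, which are not shown.

```python
import numpy as np

def travel_time(src, sta, v=6.0):
    """Toy straight-ray travel time at a constant 6 km/s (an assumption;
    the paper uses travel times from a realistic velocity model)."""
    return np.linalg.norm(src - sta) / v

def backproject(picks, stations, grid, origin_times, sigma=0.5):
    """Stack arrival likelihoods over candidate (source, origin time) pairs.

    picks: list of (station_index, arrival_time) tuples
    stations, grid: arrays of coordinates, shape (n, ndim)
    Returns a score array of shape (len(grid), len(origin_times));
    local maxima indicate candidate sources.
    """
    score = np.zeros((len(grid), len(origin_times)))
    for i, src in enumerate(grid):
        for j, t0 in enumerate(origin_times):
            for sta_idx, t_arr in picks:
                pred = t0 + travel_time(src, stations[sta_idx])
                # Picks near the predicted arrival time add support.
                score[i, j] += np.exp(-0.5 * ((t_arr - pred) / sigma) ** 2)
    return score
```

Peaks of the score above a detection threshold yield candidate sources; the paper then clusters these candidates and assigns each arrival (including unknown-phase arrivals) to the minimum number of sources via integer linear optimization, jointly rather than greedily.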

    Forward and Backward Information Retention for Accurate Binary Neural Networks

    Weight and activation binarization is an effective approach to deep neural network compression and can accelerate inference by leveraging bitwise operations. Although many binarization methods have improved model accuracy by minimizing the quantization error in forward propagation, a noticeable performance gap remains between the binarized model and its full-precision counterpart. Our empirical study indicates that quantization brings information loss in both forward and backward propagation, which is the bottleneck of training accurate binary neural networks. To address these issues, we propose an Information Retention Network (IR-Net) to retain the information contained in the forward activations and backward gradients. IR-Net mainly relies on two technical contributions: (1) Libra Parameter Binarization (Libra-PB): simultaneously minimizing both the quantization error and the information loss of parameters by balancing and standardizing weights in forward propagation; (2) Error Decay Estimator (EDE): minimizing the information loss of gradients by gradually approximating the sign function in backward propagation, jointly considering the network's update capability and the accuracy of gradients. We are the first to investigate both the forward and backward processes of binary networks from a unified information perspective, which provides new insight into the mechanism of network binarization. Comprehensive experiments with various network structures on the CIFAR-10 and ImageNet datasets demonstrate that the proposed IR-Net consistently outperforms state-of-the-art quantization methods.
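
A compact PyTorch sketch of the two ideas, as described in the abstract, might look as follows. This is an illustration under stated assumptions, not the authors' implementation: the class name LibraSign, the scaling factor, and the tanh-based backward schedule are all assumptions, and the published Libra-PB and EDE include further details omitted here.

```python
import torch

class LibraSign(torch.autograd.Function):
    """Binarize balanced, standardized weights with an EDE-style backward.

    Forward follows the abstract's Libra-PB idea: subtract the mean and
    normalize so the binarized weights stay balanced. Backward replaces
    the (zero almost everywhere) derivative of sign with that of
    tanh(k * x), sharpened as k grows during training (EDE-style; the
    paper's exact estimator may differ).
    """
    @staticmethod
    def forward(ctx, w, k):
        w_std = (w - w.mean()) / (w.std() + 1e-8)  # balance + standardize
        ctx.save_for_backward(w_std)
        ctx.k = k
        scale = w_std.abs().mean()                 # scaling factor (assumption)
        return scale * torch.sign(w_std)

    @staticmethod
    def backward(ctx, grad_out):
        (w_std,) = ctx.saved_tensors
        k = ctx.k
        # d/dx tanh(k*x) = k * (1 - tanh(k*x)^2); approaches sign's
        # "derivative" as k increases.
        grad_in = grad_out * k * (1 - torch.tanh(k * w_std) ** 2)
        return grad_in, None

# Usage: binarize a weight tensor with a given sharpness k.
w = torch.randn(64, 32, requires_grad=True)
w_bin = LibraSign.apply(w, 5.0)
```

During training, k would be increased (e.g., from about 1 to 10) so that the surrogate derivative starts wide, preserving gradient information early on, and gradually sharpens toward the true sign function.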

    Binary domain generalization for sparsifying binary neural networks

    Binary neural networks (BNNs) are an attractive solution for developing and deploying deep neural network (DNN)-based applications on resource-constrained devices. Despite their success, BNNs still suffer from a fixed and limited compression factor, which may be explained by the fact that existing pruning methods for full-precision DNNs cannot be directly applied to BNNs. In fact, weight pruning of BNNs leads to performance degradation, which suggests that the standard binarization domain of BNNs is not well adapted to the task. This work proposes a novel, more general binary domain that extends the standard one and is more robust to pruning, thus enabling improved compression and avoiding severe performance losses. We demonstrate a closed-form solution for quantizing the weights of a full-precision network into the proposed binary domain. Finally, we show the flexibility of our method, which can be combined with other pruning strategies. Experiments on CIFAR-10 and CIFAR-100 demonstrate that the novel approach is able to generate efficient sparse networks with reduced memory usage and run-time latency, while maintaining performance.
    Comment: Accepted as conference paper at ECML PKDD 202
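
The closed-form quantization mentioned above can be illustrated in the standard special case. The sketch below is an assumption-laden illustration, not the paper's generalized domain: it shows the classic L2-optimal projection of weights onto the two-valued domain {-alpha, +alpha}, for which the optimal scale is the mean absolute weight, whereas the paper generalizes this domain to one that tolerates sparsification.

```python
import numpy as np

def project_to_binary(w):
    """Closed-form L2 projection of weights onto {-alpha, +alpha}.

    Minimizing ||w - alpha * sign(w)||^2 over alpha gives
    alpha = mean(|w|), the classic result used by XNOR-style BNNs.
    The paper's more general binary domain admits sparser solutions;
    this sketch covers only the standard special case.
    """
    alpha = np.abs(w).mean()
    return alpha * np.sign(w)

w = np.random.randn(256)
q = project_to_binary(w)
print("L2 quantization error:", np.linalg.norm(w - q))
```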