    Ocean warming and long-term change in pelagic bird abundance within the California current system

    As a result of repeated sampling of pelagic bird abundance over 3 × 10⁵ km² of open ocean 4 times a year for 8 yr, we report that seabird abundance within the California Current system has declined by 40% over the period 1987 to 1994. This decline has accompanied a concurrent, long-term increase in sea surface temperature. The decline in overall bird abundance is largely, but not entirely, a consequence of the 90% decline of sooty shearwaters Puffinus griseus, the numerically dominant species of the California Current. Seabirds of the offshore waters we sampled showed a different pattern from seabirds of the shelf and slope waters. Leach's storm-petrels Oceanodroma leucorhoa, the commonest species offshore, increased significantly during 1987 to 1994, while sooty shearwaters and other inshore species declined. Thus the clearest pattern that emerges from our data is one of gradual but persistent changes in abundance that transpire at time scales longer than 1 yr. Nevertheless, we did find evidence of change at shorter time scales (weeks and months) that may relate to the El Niño episode of 1992 to 1993: pronounced positive anomalies in the abundance of brown pelicans Pelecanus occidentalis and Heermann's gulls Larus heermanni in fall 1991, and of black Oceanodroma melania and least O. microsoma storm-petrels in late summer 1992, likely reflect northward dispersal following reproductive failure in the Gulf of California.

    Potential and limitations of NARX for defect detection in guided wave signals

    Previously, a nonlinear autoregressive network with exogenous input (NARX) demonstrated excellent performance, far outperforming an established method, optimal baseline subtraction, for defect detection in guided wave signals. The principle is to train a NARX network on defect-free guided wave signals to obtain a filter that predicts the next point of the signal from the previous points. The trained network is then applied to a new measurement, and its output is subtracted from that measurement to reveal the presence of defect responses. However, as shown in this paper, the performance of the previous NARX implementation lacks robustness; it is highly dependent on the initialisation of the network, and detection performance sometimes improves and then worsens over the course of training. It is shown that this is because the previous NARX implementation only makes predictions one point ahead. Subsequently, it is shown that multi-step prediction using a newly proposed NARX structure creates a more robust training procedure by enhancing the correlation between the training loss metric and the defect detection performance. The physical significance of the network structure is explored, allowing a simple hyperparameter tuning strategy to be used to determine the optimal structure. The overall detection performance of NARX is also improved by multi-step prediction, and this is demonstrated on defect responses at different times as well as on data from different sensor pairs, revealing the generalisability of this method.
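
    As a rough illustration of the one-step-ahead principle, the sketch below trains a small autoregressive network on a defect-free signal and subtracts its predictions from a new measurement so that any defect response stands out in the residual. The window length, network size, and synthetic signals are assumptions made for the example, not the configuration used in the paper.

        # Illustrative one-step-ahead autoregressive predictor for baseline
        # subtraction. All sizes and signals here are placeholder assumptions.
        import numpy as np
        import torch
        import torch.nn as nn

        WINDOW = 16  # number of previous samples used to predict the next one

        def make_windows(signal, window=WINDOW):
            """Slice a 1-D signal into (input windows, next-point targets)."""
            X = np.stack([signal[i:i + window] for i in range(len(signal) - window)])
            y = signal[window:]
            return (torch.tensor(X, dtype=torch.float32),
                    torch.tensor(y, dtype=torch.float32))

        # Synthetic stand-in for a defect-free guided wave measurement.
        t = np.linspace(0, 1, 2000)
        baseline = np.sin(2 * np.pi * 50 * t) * np.exp(-3 * t)

        model = nn.Sequential(nn.Linear(WINDOW, 32), nn.Tanh(), nn.Linear(32, 1))
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        X, y = make_windows(baseline)
        for _ in range(200):  # train the predictive filter on defect-free data only
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(X).squeeze(-1), y)
            loss.backward()
            opt.step()

        # Apply to a new measurement containing a small simulated defect echo;
        # subtracting the prediction reveals the defect in the residual.
        defect = baseline + 0.05 * np.exp(-((t - 0.6) ** 2) / 1e-5)
        Xn, yn = make_windows(defect)
        with torch.no_grad():
            residual = yn - model(Xn).squeeze(-1)
        print("max |residual|:", residual.abs().max().item())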

    Model documentation, chapter 4

    The modeling groups are listed along with a brief description of the respective models.

    Uncertainty Quantification for Deep Learning in Ultrasonic Crack Characterization

    Deep learning for nondestructive evaluation (NDE) has received a lot of attention in recent years for its potential ability to provide human-level data analysis. However, little research has been done on quantifying the uncertainty of its predictions. Uncertainty quantification (UQ) is essential for qualifying NDE inspections and building trust in their predictions. Therefore, this article aims to demonstrate how UQ can best be achieved for deep learning in the context of crack sizing for inline pipe inspection. A convolutional neural network architecture is used to size surface-breaking defects from plane wave imaging (PWI) images with two modern UQ methods: deep ensembles and Monte Carlo dropout. The network is trained using PWI images of surface-breaking defects simulated with a hybrid finite element / ray-based model. Successful UQ is judged by calibration and anomaly detection, which refer to whether in-domain model error is proportional to uncertainty and whether out-of-training-domain data is assigned high uncertainty. Calibration is tested using simulated and experimental images of surface-breaking cracks, while anomaly detection is tested using experimental side-drilled holes and simulated embedded cracks. Monte Carlo dropout demonstrates poor uncertainty quantification, with little separation between in- and out-of-distribution data and a weak linear fit (R = 0.84) between experimental root-mean-square error and uncertainty. Deep ensembles improve upon Monte Carlo dropout in both calibration (R = 0.95) and anomaly detection. Adding spectral normalization and residual connections to deep ensembles slightly improves calibration (R = 0.98) and significantly improves the reliability of assigning high uncertainty to out-of-distribution samples.
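
    As a minimal sketch of one of the two UQ methods compared above, the snippet below samples a network with dropout kept active at test time (Monte Carlo dropout) and uses the spread of the sampled predictions as the uncertainty; the tiny network and random inputs are placeholders, not the paper's CNN or PWI data.

        # Monte Carlo dropout: keep dropout active at inference and sample
        # the predictive distribution. Architecture and inputs are placeholders.
        import torch
        import torch.nn as nn

        model = nn.Sequential(
            nn.Linear(10, 64), nn.ReLU(), nn.Dropout(p=0.2),
            nn.Linear(64, 1),
        )

        def mc_dropout_predict(model, x, n_samples=50):
            """Mean and standard deviation over stochastic forward passes."""
            model.train()  # keeps dropout layers active during inference
            with torch.no_grad():
                samples = torch.stack([model(x) for _ in range(n_samples)])
            return samples.mean(dim=0), samples.std(dim=0)

        x = torch.randn(4, 10)  # placeholder inputs (e.g. image features)
        mean, std = mc_dropout_predict(model, x)
        print(mean.squeeze().tolist(), std.squeeze().tolist())

    A deep ensemble works analogously, except that the dropout samples are replaced by the predictions of several independently trained networks.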

    Community next steps for making globally unique identifiers work for biocollections data

    Biodiversity data is being digitized and made available online at a rapidly increasing rate, but current practices typically do not preserve linkages between these data, which impedes interoperation, provenance tracking, and assembly of larger datasets. For data associated with biocollections, the biodiversity community has long recognized that an essential part of establishing and preserving linkages is to apply globally unique identifiers at the point when data are generated in the field and to persist these identifiers downstream, but this is seldom implemented in practice. There has neither been coalescence towards one single identifier solution (as in some other domains), nor even a set of recommended best practices and standards to support multiple identifier schemes sharing consistent responses. In order to further progress towards a broader community consensus, a group of biocollections and informatics experts assembled in Stockholm in October 2014 to discuss community next steps to overcome current roadblocks. The workshop participants divided into four groups focusing on: identifier practice in current field biocollections; identifier application for legacy biocollections; identifiers as applied to biodiversity data records as they are published and made available in semantically marked-up publications; and cross-cutting identifier solutions that bridge across these domains. The main outcome was consensus on key issues, including recognition of differences between legacy and new biocollections processes, the need for identifier metadata profiles that can report information on identifier persistence missions, and the unambiguous indication of the type of object associated with the identifier. Current identifier characteristics are also summarized, and an overview of available schemes and practices is provided.
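
    Purely as an illustration of the kind of practice the workshop discussed, the snippet below mints a globally unique identifier at the point a field record is created and carries an explicit object type with it; the record fields are invented for the example and do not represent a community standard.

        # Hypothetical field-collection record: mint a UUID once, persist it
        # downstream, and state unambiguously what kind of object it names.
        import json
        import uuid

        record = {
            "identifier": f"urn:uuid:{uuid.uuid4()}",  # globally unique, minted in the field
            "objectType": "PreservedSpecimen",         # explicit type of the identified object
            "collectedBy": "Field Team A",
            "eventDate": "2014-10-22",
        }
        print(json.dumps(record, indent=2))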

    Manual for proposing a part of the list of available names (LAN) in zoology

    Article 79 of the Fourth Edition of the International Code of Zoological Nomenclature (henceforth Code) describes an official List of Available Names in Zoology (henceforth LAN), consisting of a series of “Parts” (of defined taxonomic and temporal scope) compiled by relevant experts. The LAN represents a comprehensive inventory of names available under the Code. The aim of this manual is to define a procedure for implementing Article 79, with format suggestions for zoologists aiming to create a Part of the LAN for family-group, genus-group, or species-group names in zoological nomenclature. Because the LAN may serve as an important basis for retrospective content in ZooBank, the structure outlined here is designed to allow easy importation to ZooBank.

    Domain Adapted Deep-Learning for Improved Ultrasonic Crack Characterization Using Limited Experimental Data

    Deep learning is an effective method for ultrasonic crack characterization due to its high level of automation and accuracy. Simulating the training set has been shown to be an effective means of circumventing the lack of experimental data common to nondestructive evaluation (NDE) applications. However, a simulation can neither be completely accurate nor capture all the variability present in the real inspection. This means that the experimental and simulated data will be drawn from different (but related) distributions, leading to inaccuracy when a deep learning algorithm trained on simulated data is applied to experimental measurements. This article aims to tackle this problem through the use of domain adaptation (DA). A convolutional neural network (CNN) is used to predict the depth of surface-breaking defects, with inline pipe inspection as the targeted application. Three DA methods across varying sizes of experimental training data are compared to two non-DA methods as a baseline. The performance of the methods tested is evaluated by sizing 15 experimental notches of length 1–5 mm, inclined at angles of up to 20° from the vertical. Experimental training sets are formed with between 1 and 15 notches. Of the DA methods investigated, an adversarial approach is found to be the most effective way to use the limited experimental training data. With this method, and only three notches, the resulting network gives a root-mean-square error (RMSE) in sizing of 0.5 ± 0.037 mm, whereas with only experimental data the RMSE is 1.5 ± 0.13 mm and with only simulated data it is 0.64 ± 0.044 mm.
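
    The sketch below illustrates the general adversarial domain-adaptation idea with a gradient reversal layer (the DANN approach): a shared encoder is trained to size defects on labelled simulated data while a domain classifier, whose gradients are reversed, pushes simulated and experimental features to look alike. The architecture, data shapes, and loss weighting are assumptions for the example, not the paper's exact setup.

        # Adversarial DA via gradient reversal (DANN-style sketch).
        # All shapes and data below are placeholder assumptions.
        import torch
        import torch.nn as nn

        class GradReverse(torch.autograd.Function):
            @staticmethod
            def forward(ctx, x, lam):
                ctx.lam = lam
                return x.view_as(x)

            @staticmethod
            def backward(ctx, grad):
                return -ctx.lam * grad, None  # reversed gradient confuses the domain classifier

        features = nn.Sequential(nn.Linear(32, 64), nn.ReLU())  # shared encoder
        sizer = nn.Linear(64, 1)       # depth regressor (labelled simulated data)
        domain_clf = nn.Linear(64, 1)  # simulated-vs-experimental discriminator
        opt = torch.optim.Adam([*features.parameters(), *sizer.parameters(),
                                *domain_clf.parameters()], lr=1e-3)

        sim_x, sim_y = torch.randn(64, 32), torch.randn(64, 1)  # placeholder simulated set
        exp_x = torch.randn(8, 32)  # small experimental set, unlabelled here
        for _ in range(100):
            opt.zero_grad()
            f_sim, f_exp = features(sim_x), features(exp_x)
            size_loss = nn.functional.mse_loss(sizer(f_sim), sim_y)
            dom_logits = domain_clf(GradReverse.apply(torch.cat([f_sim, f_exp]), 1.0))
            dom_labels = torch.cat([torch.zeros(64, 1), torch.ones(8, 1)])
            dom_loss = nn.functional.binary_cross_entropy_with_logits(dom_logits, dom_labels)
            (size_loss + dom_loss).backward()
            opt.step()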

    Interpretable and explainable machine learning for ultrasonic defect sizing

    Despite its popularity in the literature, there are few examples of machine learning (ML) being used for industrial nondestructive evaluation (NDE) applications. A significant barrier is the ‘black box’ nature of most ML algorithms. This paper aims to improve the interpretability and explainability of ML for ultrasonic NDE by presenting a novel dimensionality reduction method: Gaussian feature approximation (GFA). GFA involves fitting a 2D elliptical Gaussian function to an ultrasonic image and storing the seven parameters that describe each Gaussian. These seven parameters can then be used as inputs to data analysis methods such as the defect sizing neural network presented in this paper. GFA is applied to ultrasonic defect sizing for inline pipe inspection as an example application. This approach is compared to sizing with the same neural network using two other dimensionality reduction methods (the parameters of 6 dB drop boxes and principal component analysis), as well as with a convolutional neural network applied to raw ultrasonic images. Of the dimensionality reduction methods tested, GFA features produce the sizing accuracy closest to sizing from the raw images, with only a 23% increase in RMSE despite a 96.5% reduction in the dimensionality of the input data. Implementing ML with GFA is implicitly more interpretable than doing so with principal component analysis or raw images as inputs, and gives significantly better sizing accuracy than 6 dB drop boxes. Shapley additive explanations (SHAP) are used to calculate how each feature contributes to the prediction of an individual defect’s length. Analysis of SHAP values demonstrates that the proposed GFA-based neural network displays many of the same relationships between defect indications and their predicted size as occur in traditional NDE sizing methods.
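
    A minimal sketch of the GFA fitting step is given below, assuming a least-squares fit of a seven-parameter elliptical Gaussian (amplitude, centre coordinates, two widths, a rotation angle, and a constant offset) to a synthetic image; the paper's exact parameterisation and fitting procedure may differ.

        # Fit a 2-D elliptical Gaussian to an image and keep its seven
        # parameters as a compact feature vector (the GFA idea).
        import numpy as np
        from scipy.optimize import curve_fit

        def gauss2d(coords, amp, x0, y0, sx, sy, theta, offset):
            x, y = coords
            a = np.cos(theta)**2 / (2*sx**2) + np.sin(theta)**2 / (2*sy**2)
            b = -np.sin(2*theta) / (4*sx**2) + np.sin(2*theta) / (4*sy**2)
            c = np.sin(theta)**2 / (2*sx**2) + np.cos(theta)**2 / (2*sy**2)
            return offset + amp * np.exp(-(a*(x-x0)**2 + 2*b*(x-x0)*(y-y0) + c*(y-y0)**2))

        # Synthetic "defect indication": an elongated, rotated blob plus noise.
        yy, xx = np.mgrid[0:64, 0:64].astype(float)
        truth = gauss2d((xx, yy), 1.0, 30, 34, 6, 2, 0.4, 0.05)
        image = truth + 0.02 * np.random.default_rng(0).standard_normal(truth.shape)

        p0 = [image.max(), 32, 32, 5, 5, 0.0, 0.0]  # rough initial guess
        params, _ = curve_fit(gauss2d, (xx.ravel(), yy.ravel()), image.ravel(), p0=p0)
        print("7-parameter GFA feature vector:", np.round(params, 3))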