19 research outputs found

    Inselect: Automating the Digitization of Natural History Collections


    Array programming with NumPy.

    Array programming provides a powerful, compact and expressive syntax for accessing, manipulating and operating on data in vectors, matrices and higher-dimensional arrays. NumPy is the primary array programming library for the Python language. It has an essential role in research analysis pipelines in fields as diverse as physics, chemistry, astronomy, geoscience, biology, psychology, materials science, engineering, finance and economics. For example, in astronomy, NumPy was an important part of the software stack used in the discovery of gravitational waves [1] and in the first imaging of a black hole [2]. Here we review how a few fundamental array concepts lead to a simple and powerful programming paradigm for organizing, exploring and analysing scientific data. NumPy is the foundation upon which the scientific Python ecosystem is constructed. It is so pervasive that several projects, targeting audiences with specialized needs, have developed their own NumPy-like interfaces and array objects. Owing to its central position in the ecosystem, NumPy increasingly acts as an interoperability layer between such array computation libraries and, together with its application programming interface (API), provides a flexible framework to support the next decade of scientific and industrial analysis.
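As a brief illustration of the array-programming paradigm this abstract describes, here is a minimal NumPy sketch; the data values are invented for demonstration and are not from the paper:

```python
import numpy as np

# A small 2-D array standing in for scientific data.
temperatures = np.array([[12.1, 14.3, 15.2],
                         [11.8, 13.9, 16.0]])

# Broadcasting: subtract the per-column mean from every row at once,
# with no explicit Python loop.
anomalies = temperatures - temperatures.mean(axis=0)

# Vectorized reduction along an axis.
row_max = temperatures.max(axis=1)

print(anomalies.shape)  # (2, 3)
print(row_max)
```

The same operations written with nested loops would be longer and slower; expressing them on whole arrays is the "simple and powerful programming paradigm" the abstract refers to.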

    A reusable neural network pipeline for unidirectional fiber segmentation.

    No full text
    Fiber-reinforced ceramic-matrix composites are advanced, temperature-resistant materials with applications in aerospace engineering. Their analysis involves the detection and separation of fibers, embedded in a fiber bed, from an imaged sample. Currently, this is mostly done using semi-supervised techniques. Here, we present an open, automated computational pipeline to detect fibers from a tomographically reconstructed X-ray volume. We apply our pipeline to a non-trivial dataset by Larson et al. To separate the fibers in these samples, we tested four different architectures of convolutional neural networks. When comparing our neural network approach to a semi-supervised one, we obtained Dice and Matthews coefficients reaching up to 98%, showing that these automated approaches can match human-supervised methods, in some cases separating fibers that human-curated algorithms could not find. The software written for this project is open source, released under a permissive license, and can be freely adapted and re-used in other domains.
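The Dice coefficient mentioned in this abstract is a standard overlap measure for comparing a predicted segmentation mask against a reference mask. A minimal sketch, using toy masks rather than the authors' data:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity between two binary masks: 2*|A & B| / (|A| + |B|)."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    # Two empty masks are, by convention, a perfect match.
    return 2.0 * intersection / total if total else 1.0

# Toy 4x4 masks standing in for fiber segmentations.
pred  = np.array([[1, 1, 0, 0],
                  [1, 1, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
truth = np.array([[1, 1, 0, 0],
                  [1, 0, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])

print(dice_coefficient(pred, truth))  # 6/7, about 0.857
```

A value of 1.0 means perfect overlap; the 98% figure reported above indicates near-perfect agreement between the neural network output and the reference segmentation.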

    An introduction to diffusion maps

    No full text

    iCollections butterfly images selected for automated measurement

    No full text
    Images of butterflies from the Natural History Museum (NHMUK) iCollections project, used for automated wing length measurement with https://github.com/machine-shop/butterfly-wings. Original iCollections dataset: https://doi.org/10.5519/0038559

    Applying computer vision to digitised natural history collections for climate change research: temperature-size responses in British butterflies

    No full text
    1. Natural history collections are invaluable resources for understanding biotic responses to global change. Museums around the world are currently imaging specimens, capturing specimen data, and making them freely available online. In parallel with the digitisation effort, there have been great advances in computer vision: the automated recognition, detection, and measurement of features in digital images. Applying computer vision to digitised natural history collections has the potential to greatly accelerate their use in research on biotic responses to global change. In this paper, we apply computer vision to a very large digitised collection to test hypotheses in an established area of climate change research: temperature-size responses. 2. We develop a computer vision pipeline (Mothra) and apply it to the NHM collection of British butterflies (>180,000 imaged specimens). Mothra automatically detects the specimen and other objects in the image, sets the scale, measures wing features (e.g., forewing length), determines the orientation of the specimen (pinned ventrally or dorsally), and identifies the sex. We pair these measurements and specimen collection data with temperature records for 17,726 specimens across a subset of 24 species to test how adult size varies with temperature during the immature stages. We also assess patterns of sexual size dimorphism across species and families for 32 species trained for automated sex identification. 3. Mothra measures the forewing lengths of butterfly specimens accurately compared with manual measurements, and accurately determines the sex of specimens. Females are the larger sex in most species, and an increase in adult body size with warmer monthly temperatures during the late larval stages is the most common temperature-size response. These results confirm suspected patterns and support hypotheses based on recent studies using a smaller dataset of manually measured specimens. 4. We show that computer vision can be a powerful tool to efficiently and accurately extract phenotypic data from very large digitised natural history collections. In the future, computer vision will become widely applied to digital collections to advance ecological and evolutionary research and to accelerate the investigation of biotic responses to global change.
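At its core, the forewing-length measurement described above reduces to a distance between two landmarks, converted to physical units through the image scale. A hypothetical sketch, in which the landmark coordinates, the scale value, and the function name are all invented for illustration and are not Mothra's API:

```python
import math

def forewing_length_mm(base_xy, apex_xy, pixels_per_mm):
    """Euclidean distance between two wing landmarks, converted to mm."""
    dx = apex_xy[0] - base_xy[0]
    dy = apex_xy[1] - base_xy[1]
    return math.hypot(dx, dy) / pixels_per_mm

# Hypothetical landmark positions (in pixels) for the wing base and apex,
# with a scale derived from a ruler visible in the specimen image.
length = forewing_length_mm(base_xy=(120, 340),
                            apex_xy=(820, 90),
                            pixels_per_mm=35.0)
print(round(length, 2))
```

In practice the pipeline's hard work lies in detecting the landmarks and the scale object automatically; once those are known, the measurement itself is this simple conversion.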

    A fluoroscopy-based planning and guidance software tool for minimally invasive hip refixation by cement injection

    No full text
    Purpose: In orthopaedics, minimally invasive injection of bone cement is an established technique. We present HipRFX, a software tool for planning and guiding a cement injection procedure for stabilizing a loosening hip prosthesis. HipRFX works by analysing a pre-operative CT and intraoperative C-arm fluoroscopic images. Methods: HipRFX simulates the intraoperative fluoroscopic views that a surgeon would see on a display panel. Structures are rendered by modelling their X-ray attenuation; the renderings are then compared to actual fluoroscopic images, allowing cement volumes to be estimated. Five human cadaver legs were used to validate the software in conjunction with real percutaneous cement injection into artificially created periprosthetic lesions. Results: Based on intraoperatively obtained fluoroscopic images, our software was able to estimate the cement volume that reached the pre-operatively planned targets. The actual median target lesion volume was 3.58 ml (range 3.17–4.64 ml). The median error in computed cement filling, as a percentage of target volume, was 5.3% (range 2.2–14.8%). Cement filling was between 17.6 and 55.4% (median 51.8%). Conclusions: As a proof of concept, HipRFX was capable of simulating intraoperative fluoroscopic C-arm images. Furthermore, it provided estimates of the fraction of injected cement deposited at its intended target location, as opposed to cement that leaked away. This level of knowledge is usually unavailable to the surgeon viewing a fluoroscopic image and may aid in evaluating the success of a percutaneous cement injection intervention.
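Rendering structures "by modelling their X-ray attenuation", as described in this abstract, is conventionally based on the Beer-Lambert law, I = I0 * exp(-sum(mu_i * dl)), integrated along each ray from source to detector. A minimal one-ray sketch with invented attenuation coefficients (the values and function name are illustrative, not taken from HipRFX):

```python
import numpy as np

def simulate_ray_intensity(mu_values, step_mm, i0=1.0):
    """Beer-Lambert law along one ray: I = I0 * exp(-sum(mu * dl))."""
    mu = np.asarray(mu_values, dtype=float)
    return i0 * np.exp(-np.sum(mu * step_mm))

# Hypothetical linear attenuation coefficients (1/mm) sampled every 1 mm
# along a ray passing through soft tissue, bone, and cement.
mu_soft, mu_bone, mu_cement = 0.02, 0.05, 0.08
ray_samples = [mu_soft] * 10 + [mu_bone] * 5 + [mu_cement] * 3

intensity = simulate_ray_intensity(ray_samples, step_mm=1.0)
print(round(intensity, 3))
```

Repeating this integration for every detector pixel yields a digitally reconstructed radiograph, which can then be compared against the real fluoroscopic image, as the Methods section describes.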