
    On the use of a cascaded convolutional neural network for three-dimensional flow measurements using astigmatic PTV

    Many applications in chemistry, biology and medicine use microfluidic devices to separate, detect and analyze samples at a miniaturized scale. Fluid flows evolving in channels of only several tens to hundreds of micrometers in size are often three-dimensional in nature, affecting the tailored transport of cells and particles. To analyze flow phenomena and local particle distributions within such channels, astigmatic particle tracking velocimetry (APTV) has become a valuable tool, provided that basic requirements such as low optical aberrations and particles with a very narrow size distribution are fulfilled. Drawing on progress in the field of machine vision, deep neural networks may help to overcome these limiting requirements, opening new fields of application for APTV and allowing it to be used by non-expert users. To qualify the use of a cascaded deep convolutional neural network (CNN) for particle detection and position regression, a detailed investigation was carried out, starting from artificial particle images with known ground truth and proceeding to real flow measurements inside a microchannel, using particles with uni- and bimodal size distributions. In the case of monodisperse particles, a mean absolute error of less than 1 µm and a standard deviation of about 1 µm for the particle depth position were determined, both with the deep neural network and with the classical evaluation method based on the minimum-Euclidean-distance approach. While these values apply to all particle size distributions when the neural network is used, for the classical method they increase continuously towards the margins of the measurement volume, by about one order of magnitude, if non-monodisperse particles are used. Nevertheless, if the depth of the measurement volume is limited to the region between the two focal points of APTV, reliable flow measurements with low uncertainty are also possible with the classical evaluation method and polydisperse tracer particles. The flow measurements presented herein confirm this finding. The source code of the deep neural network used here is available at https://github.com/SECSY-Group/DNN-APTV.
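
    As a minimal illustration of the classical evaluation method mentioned above, the following Python sketch estimates a particle's depth position by finding the calibration point whose astigmatic image-axis lengths are closest, in the Euclidean sense, to the measured ones. The calibration curves and numbers are hypothetical placeholders, not taken from the published code.

```python
import numpy as np

# Hypothetical calibration: particle-image axis lengths a_x(z), a_y(z)
# sampled at known depth positions z_cal (e.g. from a calibration scan).
z_cal = np.linspace(-40.0, 40.0, 161)            # depth positions in micrometers
ax_cal = 8.0 + 0.15 * (z_cal - 15.0) ** 2 / 40   # placeholder x-axis curve
ay_cal = 8.0 + 0.15 * (z_cal + 15.0) ** 2 / 40   # placeholder y-axis curve

def depth_from_axes(ax_meas: float, ay_meas: float) -> float:
    """Return the calibrated depth whose (a_x, a_y) pair lies closest,
    in Euclidean distance, to the measured axis lengths."""
    d = np.hypot(ax_cal - ax_meas, ay_cal - ay_meas)
    return float(z_cal[np.argmin(d)])

print(depth_from_axes(10.1, 12.3))  # estimated z in micrometers
```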

    Integrating context-based recommendation with deep NN image classification for plant identification tasks

    Accurate plant species identification is essential for many scenarios in botanical research and biodiversity conservation. Since a main obstacle is the large number of candidate species to consider, assistance through automatic identification techniques is highly desirable. On the one hand, photos of plant organs taken by users in the field can effectively be used in machine-learning-based image classification, predicting the most likely matching taxa. At the same time, metadata on the user's spatio-temporal context usually goes unused, despite its potential to serve as an additional signal to augment and improve prediction quality. We develop a recommender system that utilizes a user's context to predict a list of plant taxa most likely to be observed at a given geographical location and time. Using a data-driven approach, we integrate knowledge on plant observations, species distribution maps, phenology and environmental geodata in order to calculate contextual recommendations on a local scale. The resulting model facilitates fine-grained ranking of plant taxa expected to occur in close proximity to a user in the field. Focusing on the territory of Germany with a list of the most common wild flowering plant taxa, we are presented with a 2.8k-class problem. Using a NASNet deep convolutional neural network trained on 860k taxon-labelled plant images, we presently achieve an 82% top-1 prediction accuracy. For the recommender system, the combination of biogeographical information, phenology and habitat suitability models shows viable results, reducing the list of candidate taxa on average more than threefold, with a recall of 25% for the top 20 list positions, 50% for the first 70, and 90% for the full recommended list, based on contextual metadata alone. We show how prediction performance […]
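
    The abstract does not specify how image scores and contextual recommendations are combined; a simple, commonly used assumption is a renormalized product of the image classifier's probabilities and the context-based occurrence prior, sketched below with made-up numbers.

```python
import numpy as np

def fuse_image_and_context(p_image: np.ndarray, p_context: np.ndarray, k: int = 20):
    """Combine per-taxon image-classifier probabilities with a context-based
    occurrence prior and return the top-k taxon indices. A simple product
    rule (renormalized) is assumed here; the paper does not prescribe it."""
    scores = p_image * p_context
    scores /= scores.sum()
    return np.argsort(scores)[::-1][:k]

# Toy example with 5 taxa: the context prior demotes taxon 0.
p_img = np.array([0.40, 0.30, 0.15, 0.10, 0.05])
p_ctx = np.array([0.01, 0.39, 0.20, 0.20, 0.20])
print(fuse_image_and_context(p_img, p_ctx, k=3))  # -> [1 2 3]
```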

    Image-based automated recognition of 31 Poaceae species: the most relevant perspectives

    Poaceae represent one of the largest plant families in the world. Many species are of great economic importance as food and forage plants, while others are important weeds in agriculture. Although a large number of studies currently address the question of how plants can best be recognized in images, there is a lack of studies evaluating specific approaches for uniform species groups that are considered difficult to identify because they lack obvious visual characteristics. Poaceae are an example of such a group, especially when non-flowering. Here we present the results of an experiment to automatically identify Poaceae species based on images depicting six well-defined perspectives. One perspective shows the inflorescence, while the others show vegetative parts of the plant, such as the collar region with the ligule, the adaxial and abaxial sides of the leaf, and the culm nodes. For each species we collected 80 observations, each a series of six images taken with a smartphone camera. We extract feature representations from the images using five different convolutional neural networks (CNNs) trained on objects from different domains and classify them using four state-of-the-art classification algorithms, combining the perspectives via score-level fusion. In order to evaluate the potential of identifying non-flowering Poaceae, we separately compared perspective combinations with and without inflorescences. We find that a fusion of all six perspectives, using the best combination of feature-extraction CNN and classifier, achieves an accuracy of 96.1%. Without the inflorescence, the overall accuracy is still as high as 90.3%. In all but one case, the perspective conveying the most information about the species (excluding the inflorescence) is the ligule in frontal view. Our results show that even species considered very difficult to identify can be automatically identified with high accuracy, as long as images depicting suitable perspectives are available. We suggest that our approach could be transferred to other difficult-to-distinguish species groups in order to identify the most relevant perspectives.
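
    Score-level fusion can be illustrated with a short sketch: each perspective's classifier yields a class-probability vector, and the vectors are combined before the final decision. Averaging is one common fusion rule; the abstract does not state which rule was used, and all numbers below are invented.

```python
import numpy as np

def score_level_fusion(perspective_scores: list[np.ndarray]) -> int:
    """Fuse per-perspective class-probability vectors by averaging and
    return the winning class index. Averaging is an assumed fusion rule."""
    fused = np.mean(np.stack(perspective_scores), axis=0)
    return int(np.argmax(fused))

# Toy example: three perspectives voting over four species.
s1 = np.array([0.10, 0.60, 0.20, 0.10])  # e.g. ligule in frontal view
s2 = np.array([0.30, 0.40, 0.20, 0.10])  # e.g. adaxial leaf side
s3 = np.array([0.25, 0.35, 0.30, 0.10])  # e.g. culm node
print(score_level_fusion([s1, s2, s3]))  # -> 1
```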

    The Flora Incognita app - interactive plant species identification

    Being able to identify plant species is an important factor in understanding biodiversity and its change due to natural and anthropogenic drivers. We discuss the freely available Flora Incognita app for Android, iOS and Harmony OS devices, which allows users to interactively identify plant species and capture their observations. Specifically developed deep learning algorithms, trained on an extensive repository of plant observations, classify plant images with hitherto unprecedented accuracy. By using this technology in a context-adaptive and interactive identification process, users are now able to reliably identify plants regardless of their level of botanical knowledge. Users benefit from an intuitive interface and supplementary educational materials. The captured observations, in combination with their metadata, provide a rich resource for researching, monitoring and understanding plant diversity. Mobile applications such as Flora Incognita stimulate the successful interplay of citizen science, conservation and education.

    Combining high-throughput imaging flow cytometry and deep learning for efficient species and life-cycle stage identification of phytoplankton

    Background: Phytoplankton species identification and counting is a crucial step of water quality assessment. Drinking water reservoirs, bathing water and ballast water in particular need to be regularly monitored for harmful species. In times of multiple environmental threats such as eutrophication, climate warming and the introduction of invasive species, more intensive monitoring would help develop adequate measures. However, traditional methods such as microscopic counting by experts or high-throughput flow cytometry based on scattering and fluorescence signals are either too time-consuming or too inaccurate for species identification tasks. Combining high-quality microscopy with high throughput and the latest developments in machine learning can overcome this hurdle. Results: In this study, image-based cytometry was used to collect ~47,000 brightfield and Chl a fluorescence images at 60× magnification for nine common freshwater species of nano- and micro-phytoplankton. A deep neural network trained on these images was applied to identify the species and the corresponding life-cycle stage during batch cultivation. The results show the high potential of this approach: species identity and the respective life-cycle stage could be predicted with a high accuracy of 97%. Conclusions: These findings could pave the way for reliable and fast phytoplankton species determination of indicator species as a crucial step in water quality assessment.
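
    One plausible way to feed paired brightfield and Chl a fluorescence images to a CNN is to stack them as two input channels; whether the study's network does exactly this is an assumption, as is the class count below (species × life-cycle stages). A minimal PyTorch sketch:

```python
import torch
import torch.nn as nn

class TwoChannelClassifier(nn.Module):
    """Tiny CNN taking stacked brightfield + Chl-a fluorescence channels."""
    def __init__(self, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

# Batch of 4 image pairs, 64x64 pixels: channel 0 brightfield,
# channel 1 Chl-a fluorescence. n_classes=18 is a hypothetical
# choice for nine species times two life-cycle stages.
model = TwoChannelClassifier(n_classes=18)
x = torch.randn(4, 2, 64, 64)
print(model(x).shape)  # torch.Size([4, 18])
```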

    Pollen analysis using multispectral imaging flow cytometry and deep learning

    Pollen identification and quantification are crucial but challenging tasks in addressing a variety of evolutionary and ecological questions (pollination, paleobotany), as well as in other fields of research (e.g. allergology, honey analysis or forensics). Researchers are exploring alternative methods to automate these tasks but, for several reasons, manual microscopy is still the gold standard. In this study, we present a new method for pollen analysis using multispectral imaging flow cytometry in combination with deep learning. We demonstrate that our method allows fast measurement while delivering highly accurate pollen identification. A dataset of 426,876 images depicting pollen from 35 plant species was used to train a convolutional neural network classifier. The best-performing classifier yielded a species-averaged accuracy of 96%. Even species that are difficult to differentiate under the microscope could be clearly separated. Our approach also allows detailed determination of morphological pollen traits, such as size, symmetry or structure. Our phylogenetic analyses suggest phylogenetic conservatism in some of these traits. Given a comprehensive pollen reference database, we provide a powerful tool for any pollen study in need of rapid and accurate species identification, pollen grain quantification and trait extraction of recent pollen.
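
    The reported species-averaged accuracy can be read as a macro-averaged, per-species accuracy; the exact definition used in the paper is assumed here. A short sketch showing how it differs from plain (micro) accuracy on an imbalanced toy example:

```python
import numpy as np

def species_averaged_accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Per-species (macro-averaged) accuracy: compute the accuracy within
    each true species separately, then average over species. This balances
    the metric when image counts differ between species."""
    species = np.unique(y_true)
    per_species = [np.mean(y_pred[y_true == s] == s) for s in species]
    return float(np.mean(per_species))

y_true = np.array([0, 0, 0, 0, 1, 2])
y_pred = np.array([0, 0, 0, 0, 1, 0])
# macro: (1.0 + 1.0 + 0.0) / 3 = 0.667, while plain accuracy is 5/6 = 0.833
print(species_averaged_accuracy(y_true, y_pred))
```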

    Image-based classification of plant genus and family for trained and untrained plant species

    Background: Modern plant taxonomy reflects phylogenetic relationships among taxa based on proposed morphological and genetic similarities. However, taxonomic relatedness is not necessarily reflected in close overall resemblance, but rather in the commonality of very specific morphological characters or similarity on the molecular level. It is an open research question to what extent phylogenetic relations within higher taxonomic levels such as genera and families are reflected by shared visual characters of the constituent species. Consequently, it is even more questionable whether the taxonomy of plants at these levels can be identified from images using machine learning techniques. Results: Whereas previous studies on automated plant identification from images focused on the species level, we investigated classification at higher taxonomic levels such as genera and families. We used images of 1000 plant species representative of the flora of Western Europe. We tested how accurately a visual representation of genera and families can be learned from images of their species, in order to identify the taxonomy of species both included in and excluded from training. Using natural images with random content, roughly 500 images per species are required for accurate classification. The classification accuracy for 1000 species amounts to 82.2% and increases to 85.9% and 88.4% at the genus and family level, respectively. When classifying species excluded from training, the accuracy drops significantly, to 38.3% and 38.7% at the genus and family level. Excluded species of well-represented genera and families can be classified with 67.8% and 52.8% accuracy, respectively. Conclusion: Our results show that shared visual characters are indeed present at higher taxonomic levels. They are most prominently preserved in flowers and leaves, and enable state-of-the-art classification algorithms to learn accurate visual representations of plant genera and families. Given a sufficient amount and composition of training data, this allows for high classification accuracy, increasing with the taxonomic level and even facilitating the taxonomic identification of species excluded from training.
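
    A common way to obtain genus- or family-level predictions from a species-level classifier is to sum the species softmax probabilities within each higher taxon and pick the best one; the abstract does not state that this particular aggregation was used, so the sketch below is illustrative only.

```python
import numpy as np

# Hypothetical mapping: genus index for each of six species,
# plus a species-level softmax output for one image.
species_to_genus = np.array([0, 0, 1, 1, 1, 2])
p_species = np.array([0.05, 0.10, 0.30, 0.25, 0.10, 0.20])

# Sum species probabilities within each genus.
n_genera = species_to_genus.max() + 1
p_genus = np.zeros(n_genera)
np.add.at(p_genus, species_to_genus, p_species)

print(p_genus)                   # [0.15 0.65 0.20]
print(int(np.argmax(p_genus)))   # predicted genus: 1
```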

    Flora Capture: a citizen science application for collecting structured plant observations

    Digital plant images are becoming increasingly important. First, given a large number of images, deep learning algorithms can be trained to automatically identify plants. Second, structured image-based observations provide information about plant morphological characteristics. Finally, in the course of digitization, digital plant collections are receiving more and more interest in schools and universities.

    Opportunistic plant observations reveal spatial and temporal gradients in phenology

    Opportunistic plant records provide a rapidly growing source of spatiotemporal plant observation data. Here, we used such data to explore whether they can be used to detect changes in species phenologies. Examining 19 herbaceous and one woody plant species in two consecutive years across Europe, we observed significant shifts in their flowering phenology, more pronounced for spring-flowering species (6–17 days) than for summer-flowering species (1–6 days). Moreover, we show that these data are suitable for modelling large-scale relationships such as Hopkins' bioclimatic law, which quantifies the phenological delay with increasing elevation, latitude and longitude. We observe elevational shifts ranging from −5 to 50 days per 1000 m, latitudinal shifts ranging from −1 to 4 days per degree northwards, and longitudinal shifts ranging from −1 to 1 day per degree eastwards, depending on the species. Our findings show that the increasing volume of purely opportunistic plant observation data already provides reliable phenological information and can thus be used to support global, high-resolution phenology monitoring in the face of ongoing climate change.
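
    Gradients of this kind correspond to a simple linear model of flowering day of year against elevation, latitude and longitude, in the spirit of Hopkins' bioclimatic law. A minimal sketch with synthetic placeholder data (not the study's observations):

```python
import numpy as np

# Synthetic observations: flowering day of year (doy) as a linear
# function of elevation, latitude and longitude, plus noise.
rng = np.random.default_rng(0)
n = 500
elev = rng.uniform(0, 2000, n)   # m above sea level
lat = rng.uniform(45, 55, n)     # degrees north
lon = rng.uniform(5, 15, n)      # degrees east
doy = 100 + 0.02 * elev + 2.0 * lat + 0.5 * lon + rng.normal(0, 3, n)

# Ordinary least squares fit: doy ~ 1 + elev + lat + lon.
X = np.column_stack([np.ones(n), elev, lat, lon])
beta, *_ = np.linalg.lstsq(X, doy, rcond=None)

print(f"shift per 1000 m elevation: {beta[1] * 1000:.1f} days")
print(f"shift per degree north:     {beta[2]:.1f} days")
print(f"shift per degree east:      {beta[3]:.1f} days")
```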