24 research outputs found

    Methods of Generating Plants for Computer Graphics

    This Bachelor's thesis deals with methods usable for generating plants for computer graphics. The method described in most detail is generation by L-systems. The thesis describes the individual extensions of 0L-systems. To obtain a geometric representation of the model, the output of the L-system is processed by turtle graphics. The geometric representation of the model is then drawn using the OpenGL graphics library. Textures are generated by a Perlin noise function implemented in GLSL.
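    A minimal sketch of the pipeline this abstract describes: a 0L-system is expanded by parallel rewriting, and the resulting string is interpreted by turtle graphics to obtain geometry. The axiom, rules, and branching symbols below are illustrative, not the thesis's exact grammar; the OpenGL rendering and GLSL Perlin-noise texturing are omitted.

```python
import math

def expand(axiom, rules, iterations):
    """Apply the 0L-system production rules to every symbol in parallel."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(c, c) for c in s)
    return s

def interpret(commands, step=1.0, angle_deg=25.0):
    """Turn an L-system string into 2D line segments via turtle graphics."""
    x, y, heading = 0.0, 0.0, math.pi / 2  # start at origin, facing up
    stack, segments = [], []
    for c in commands:
        if c == "F":  # move forward, drawing a segment
            nx = x + step * math.cos(heading)
            ny = y + step * math.sin(heading)
            segments.append(((x, y), (nx, ny)))
            x, y = nx, ny
        elif c == "+":  # turn left
            heading += math.radians(angle_deg)
        elif c == "-":  # turn right
            heading -= math.radians(angle_deg)
        elif c == "[":  # push turtle state (start a branch)
            stack.append((x, y, heading))
        elif c == "]":  # pop turtle state (end the branch)
            x, y, heading = stack.pop()
    return segments

# A classic plant-like bracketed L-system (axiom and rule are illustrative).
segments = interpret(expand("X", {"X": "F+[[X]-X]-F[-FX]+X", "F": "FF"}, 4))
```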

    A deep learning method for visual recognition of snake species

    The paper presents a method for image-based snake species identification. The proposed method is based on deep residual neural networks - ResNeSt, ResNeXt and ResNet - fine-tuned from ImageNet pre-trained checkpoints. We achieve performance improvements by: discarding predictions of species that do not occur in the country of the query; combining predictions from an ensemble of classifiers; and applying mixed precision training, which allows training neural networks with a larger batch size. We experimented with loss functions inspired by the considered metrics: soft F1 loss and weighted cross-entropy loss. However, the standard cross-entropy loss achieved superior results in both accuracy and F1 measures. The proposed method scored third in the SnakeCLEF 2021 challenge, achieving 91.6% classification accuracy, a Country F1 Score of 0.860, and an F1 Score of 0.830.
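    The two inference-time improvements named above, discarding species that do not occur in the query's country and ensembling classifier predictions, can be sketched as follows. The array shapes and the binary occurrence mask are assumptions for illustration; the paper's exact implementation may differ.

```python
import numpy as np

def ensemble_with_country_filter(probs_per_model, country_mask):
    """probs_per_model: list of (num_classes,) softmax outputs from the ensemble.
    country_mask: binary vector, 1 where the species occurs in the query's country."""
    probs = np.mean(probs_per_model, axis=0)  # average the ensemble predictions
    probs = probs * country_mask              # discard species absent from the country
    probs = probs / probs.sum()               # renormalize to a distribution
    return int(np.argmax(probs))              # index of the predicted species
```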

    Plant recognition by AI: Deep neural nets, transformers, and kNN in deep embeddings

    The article reviews and benchmarks machine learning methods for automatic image-based plant species recognition and proposes a novel retrieval-based method for recognition by nearest neighbor classification in a deep embedding space. The image retrieval method relies on a model trained via the Recall@k surrogate loss. State-of-the-art approaches to image classification, based on Convolutional Neural Networks (CNN) and Vision Transformers (ViT), are benchmarked and compared with the proposed image retrieval-based method. The impact of performance-enhancing techniques, e.g., class prior adaptation, image augmentations, learning rate scheduling, and loss functions, is studied. The evaluation is carried out on PlantCLEF 2017, ExpertLifeCLEF 2018, and iNaturalist 2018, the largest publicly available datasets for plant recognition. The evaluation of CNN and ViT classifiers shows a gradual improvement in classification accuracy. The current state-of-the-art Vision Transformer model, ViT-Large/16, achieves 91.15% and 83.54% accuracy on the PlantCLEF 2017 and ExpertLifeCLEF 2018 test sets, respectively; compared with the best CNN model (ResNeSt-269e), this reduces the error rate by 22.91% and 28.34%. In addition, further tricks increased the performance of ViT-Base/32 by 3.72% on ExpertLifeCLEF 2018 and by 4.67% on PlantCLEF 2017. The retrieval approach achieved superior performance in all measured scenarios, with accuracy margins of 0.28%, 4.13%, and 10.25% on ExpertLifeCLEF 2018, PlantCLEF 2017, and iNat2018-Plantae, respectively.
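    The retrieval-based method classifies a query by the labels of its nearest neighbors in the deep embedding space. Below is a hedged sketch, assuming L2-normalized embeddings and a plain majority vote; the embedding network itself (trained with the Recall@k surrogate loss in the article) is treated as given.

```python
import numpy as np

def knn_classify(query_emb, gallery_embs, gallery_labels, k=5):
    """Classify by majority vote over the k nearest gallery embeddings.
    Embeddings are assumed L2-normalized, so dot product = cosine similarity."""
    sims = gallery_embs @ query_emb               # (N,) similarity to every gallery image
    nearest = np.argsort(-sims)[:k]               # indices of the k most similar images
    votes = np.bincount(gallery_labels[nearest])  # count labels among the neighbors
    return int(np.argmax(votes))                  # majority-vote class
```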

    Overview of FungiCLEF 2022: Fungi Recognition as an Open Set Classification Problem

    The main goal of the new LifeCLEF challenge, FungiCLEF 2022: Fungi Recognition as an Open Set Classification Problem, was to provide an evaluation ground for end-to-end fungi species recognition in an open class set scenario. An AI-based fungi species recognition system deployed in the Atlas of Danish Fungi helps mycologists to collect valuable data and allows users to learn about fungi species identification. Advances in fungi recognition from images and metadata will allow continuous improvement of the system deployed in this citizen science project. The training set is based on the Danish Fungi 2020 dataset and contains 295,938 photographs of 1,604 species. For testing, we provided a collection of 59,420 expert-approved observations collected in 2021. The test set includes 1,165 species from the training set and 1,969 unknown species, leading to an open-set recognition problem. This paper provides (i) a description of the challenge task and datasets, (ii) a summary of the evaluation methodology, (iii) a review of the systems submitted by the participating teams, and (iv) a discussion of the challenge results.
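    A common baseline for the open-set setting described here is to reject low-confidence predictions as "unknown". The sketch below is illustrative only, not the organizers' baseline; the threshold value is an assumption.

```python
import numpy as np

def open_set_predict(probs, threshold=0.5, unknown_label=-1):
    """probs: (num_known_classes,) softmax output over the known species."""
    best = int(np.argmax(probs))
    # Reject as "unknown" when even the best known species is not confident enough.
    return best if probs[best] >= threshold else unknown_label
```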

    Automatic Fungi Recognition: Deep Learning Meets Mycology

    The article presents an AI-based fungi species recognition system for a citizen-science community. The system's real-time identification tool, FungiVision, with a mobile application front-end, led to increased public interest in fungi, quadrupling the number of citizens collecting data. FungiVision, deployed with a human-in-the-loop, reaches nearly 93% accuracy. Using the collected data, we developed a novel fine-grained classification dataset, Danish Fungi 2020 (DF20), with several unique characteristics: species-level labels, a small number of errors, and rich observation metadata. The dataset enables testing the ability to improve classification using metadata, e.g., time, location, habitat, and substrate; facilitates classifier calibration testing; and allows studying the impact of device settings on classification performance. The continual flow of labelled data supports improvements of the online recognition system. Finally, we present a novel method for the fungi recognition service, based on a Vision Transformer architecture. Trained on DF20 and exploiting the available metadata, it achieves a recognition error that is 46.75% lower than that of the current system. By providing a stream of labeled data in one direction, and an accuracy increase in the other, the collaboration creates a virtuous cycle helping both communities.
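    One simple way to exploit observation metadata alongside an image classifier, in the spirit of the approach above, is to multiply the visual posterior by a metadata-conditional class prior estimated from training counts. This is a hedged sketch; the paper's exact fusion mechanism may differ.

```python
import numpy as np

def fuse_with_metadata(image_probs, class_counts_for_metadata, eps=1e-6):
    """image_probs: (C,) softmax output of the image model.
    class_counts_for_metadata: (C,) training counts of each species for the
    observation's metadata value (e.g., its habitat or substrate)."""
    prior = class_counts_for_metadata + eps  # smooth so unseen classes keep some mass
    prior = prior / prior.sum()              # metadata-conditional class prior
    fused = image_probs * prior              # Bayes-style reweighting of the posterior
    return fused / fused.sum()
```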

    Danish Fungi 2020 - Not Just Another Image Recognition Dataset

    We introduce a novel fine-grained dataset and benchmark, the Danish Fungi 2020 (DF20). The dataset, constructed from observations submitted to the Atlas of Danish Fungi, is unique in its taxonomy-accurate class labels, small number of errors, highly unbalanced long-tailed class distribution, rich observation metadata, and well-defined class hierarchy. DF20 has zero overlap with ImageNet, allowing unbiased comparison of models fine-tuned from publicly available ImageNet checkpoints. The proposed evaluation protocol enables testing the ability to improve classification using metadata, e.g., precise geographic location, habitat, and substrate; facilitates classifier calibration testing; and finally allows studying the impact of device settings on classification performance. Experiments using Convolutional Neural Networks (CNN) and the recent Vision Transformers (ViT) show that DF20 presents a challenging task. Interestingly, ViT achieves results superior to CNN baselines, with 80.45% accuracy and a 0.743 macro F1 score, reducing the CNN error by 9% and 12%, respectively. A simple procedure for including metadata in the decision process improves classification accuracy by more than 2.95 percentage points, reducing the error rate by 15%. The source code for all methods and experiments is available at https://sites.google.com/view/danish-fungi-dataset
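    The evaluation protocol rests on fine-tuning models from publicly available ImageNet checkpoints. A minimal sketch using the timm library is shown below; the architecture and learning rate are illustrative, and the 1,604-class head matches the species count reported above for the DF20-based training set.

```python
import timm
import torch

# ImageNet-pretrained ViT with a fresh classification head for the fungi species.
# Model name and hyperparameters are illustrative, not the paper's configuration.
model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=1604)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()
# ... standard fine-tuning loop over DF20 images and species labels ...
```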

    Tools for the Assessment of Water Quantity and Quality


    Detection of Web User Interface Elements with Faster R-CNN

    Several challenges may arise when designing new user interfaces (UIs), for example in communication between designers and developers, and the detection of UI elements can help with them. The ImageCLEF DrawnUI 2021 challenge builds on the detection of such elements in two contest tasks: a Screenshot task, which contains website screenshot images with a large amount of incorrectly annotated data, and a Wireframe task for detecting UI elements in hand-drawn proposals. This paper describes a simple algorithm based on edge detection for filtering noisy data from the website screenshots, and a machine learning method that scored first place in both tasks, achieving 0.628 and 0.900 mAP at 0.5 IoU in the Screenshot and Wireframe tasks, respectively. The method is based on Faster R-CNN with a Feature Pyramid Network (FPN) and uses anchor box aspect ratios selected according to their occurrences in the available data. The code is available at https://github.com/vyskocj/ImageCLEFdrawnUI2021
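    A hedged sketch of the detector configuration the abstract describes: a torchvision Faster R-CNN with an FPN backbone whose anchor aspect ratios are chosen to match the box shapes observed in the data. The ratios, anchor sizes, and class count below are illustrative assumptions, not the winning configuration.

```python
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.anchor_utils import AnchorGenerator
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone

# FPN over ResNet-50 ('weights=' replaces 'pretrained=' in newer torchvision).
backbone = resnet_fpn_backbone("resnet50", pretrained=True)

# One anchor size tuple per FPN level; the same (illustrative) aspect ratios
# at every level, which in the paper were derived from dataset statistics.
sizes = ((32,), (64,), (128,), (256,), (512,))
ratios = ((0.25, 0.5, 1.0, 2.0, 4.0),) * len(sizes)
anchor_gen = AnchorGenerator(sizes=sizes, aspect_ratios=ratios)

# num_classes counts the UI element categories plus background (value is illustrative).
model = FasterRCNN(backbone, num_classes=22, rpn_anchor_generator=anchor_gen)
```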