
    Utilizing synthetic training data for the supervised classification of rat ultrasonic vocalizations

    Murine rodents generate ultrasonic vocalizations (USVs) with frequencies that extend to around 120 kHz. These calls are important in social behaviour, so their analysis can provide insights into the function of vocal communication and its dysfunction. The manual identification of USVs, and their subsequent classification into subcategories, is time-consuming. Although machine learning approaches to identification and classification can yield enormous efficiency gains, the time and effort required to generate training data can be high, and the accuracy of current approaches can be problematic. Here we compare the detection and classification performance of a trained human against two convolutional neural networks (CNNs), DeepSqueak and VocalMat, on audio containing rat USVs. Furthermore, we test the effect of inserting synthetic USVs into the training data of the VocalMat CNN as a means of reducing the workload associated with generating a training set. Our results indicate that VocalMat outperformed the DeepSqueak CNN on measures of call identification and classification. Additionally, we found that augmenting the training data with synthetic images further improved accuracy, bringing it sufficiently close to human performance to allow this software to be used under laboratory conditions.
    Comment: 25 pages, 5 main figures, 2 tables
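    The abstract above does not describe how its synthetic USVs were built, so the following is only a minimal illustrative sketch of the general idea (synthesizing call-like spectrogram images to mix into a training set). All names, dimensions, and parameter ranges here are assumptions, not the VocalMat or DeepSqueak pipelines.

```python
# Illustrative sketch only: generate a USV-like spectrogram image
# (a noisy Gaussian ridge following a frequency sweep) for data augmentation.
import numpy as np

def synthetic_usv_spectrogram(n_time=128, n_freq=128,
                              f_start=0.3, f_end=0.7, bandwidth=0.02,
                              noise_level=0.05, rng=None):
    """Return an (n_freq, n_time) array mimicking a frequency-modulated call."""
    rng = np.random.default_rng(rng)
    t = np.linspace(0.0, 1.0, n_time)            # normalized time axis
    f = np.linspace(0.0, 1.0, n_freq)[:, None]   # normalized frequency axis
    sweep = f_start + (f_end - f_start) * t      # linear frequency sweep
    image = np.exp(-0.5 * ((f - sweep) / bandwidth) ** 2)  # ridge along sweep
    image += noise_level * rng.random((n_freq, n_time))    # background noise
    return image / image.max()

# Example: a small batch of synthetic positives to mix with real training data
rng = np.random.default_rng(0)
synthetic_batch = np.stack([
    synthetic_usv_spectrogram(f_start=0.2 + 0.3 * rng.random(),
                              f_end=0.5 + 0.4 * rng.random(), rng=i)
    for i in range(32)
])
print(synthetic_batch.shape)  # (32, 128, 128)
```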

    Automated production of synthetic point clouds of truss bridges for semantic and instance segmentation using deep learning models

    The cost of obtaining large volumes of bridge data with technologies such as laser scanners hinders the training of deep learning models. To address this, this paper introduces a new method for creating synthetic point clouds of truss bridges and demonstrates the effectiveness of a deep learning approach for semantic and instance segmentation of these point clouds. The method generates point clouds by specifying the dimensions and components of the bridge, resulting in high variability in the generated dataset. A deep learning model, an adapted version of JSNet, is trained using the generated point clouds. The accuracy of the results surpasses previous heuristic methods. The proposed methodology has significant implications for the development of automated inspection and monitoring systems for truss bridges. Furthermore, the success of the deep learning approach suggests its potential for the semantic and instance segmentation of complex point clouds beyond truss bridges.
    Agencia Estatal de Investigación | Ref. PID2021-124236OB-C33; Agencia Estatal de Investigación | Ref. RYC2021-033560-I; Universidade de Vigo/CISU
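    The paper's generator parameterizes the bridge by its dimensions and components; the sketch below is one plausible, much-simplified way to do that (chords and diagonals sampled as noisy bars, each with a semantic class and an instance id). The member layout, class ids, and point densities are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch: a labelled synthetic point cloud from parametric truss dimensions.
import numpy as np

def sample_bar(p0, p1, radius=0.05, n_points=200, rng=None):
    """Sample points around the axis of a bar between two 3D endpoints."""
    rng = np.random.default_rng(rng)
    t = rng.random(n_points)
    axis = np.asarray(p1, float) - np.asarray(p0, float)
    centres = np.asarray(p0, float) + t[:, None] * axis
    return centres + rng.normal(scale=radius, size=(n_points, 3))

def synthetic_truss(span=20.0, height=3.0, n_panels=5):
    """Assemble chords and diagonals with per-point semantic/instance labels."""
    points, semantic, instance = [], [], []
    xs = np.linspace(0.0, span, n_panels + 1)
    inst = 0
    for i in range(n_panels):
        members = [
            ((xs[i], 0, 0),      (xs[i + 1], 0, 0),      0),  # bottom chord
            ((xs[i], 0, height), (xs[i + 1], 0, height), 1),  # top chord
            ((xs[i], 0, 0),      (xs[i + 1], 0, height), 2),  # diagonal
        ]
        for p0, p1, cls in members:
            pts = sample_bar(p0, p1, rng=inst)
            points.append(pts)
            semantic.append(np.full(len(pts), cls))
            instance.append(np.full(len(pts), inst))
            inst += 1
    return np.vstack(points), np.concatenate(semantic), np.concatenate(instance)

xyz, sem, ins = synthetic_truss()
print(xyz.shape, sem.shape, int(ins.max()) + 1)  # (3000, 3) (3000,) 15
```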

    Robust license plate recognition using neural networks trained on synthetic images

    In this work, we describe a License Plate Recognition (LPR) system designed around convolutional neural networks (CNNs) trained on synthetic images, avoiding the need to collect and annotate the thousands of images normally required to train a CNN. First, we propose a framework for generating synthetic license plate images, accounting for the key variables required to model the wide range of conditions affecting the appearance of real plates. Then, we describe a modular LPR system built around two CNNs, for plate and for character detection, that share a common training procedure, and we train the CNNs and experiment on three datasets of real plate images collected from different countries. Our synthetically trained system outperforms multiple competing systems trained on real images, showing that synthetic images are effective for training CNNs for LPR provided the training images have sufficient variance in the key variables controlling plate appearance.
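    The abstract names "key variables" controlling plate appearance without listing them; the toy generator below illustrates the idea by varying a few plausible ones (plate text, illumination, rotation, blur). The fonts, sizes, and jitter ranges are assumptions, and a real generator of the kind described would also model perspective, dirt, and occlusion.

```python
# Hedged sketch of a synthetic license plate generator for CNN training data.
import random
import string
from PIL import Image, ImageDraw, ImageFilter, ImageFont

def synthetic_plate(width=200, height=50):
    """Render one synthetic plate image plus its ground-truth string."""
    text = "".join(random.choices(string.ascii_uppercase + string.digits, k=7))
    brightness = random.randint(180, 255)                  # illumination variation
    img = Image.new("L", (width, height), color=brightness)
    draw = ImageDraw.Draw(img)
    draw.text((10, 15), text, fill=0, font=ImageFont.load_default())
    img = img.rotate(random.uniform(-5, 5), fillcolor=brightness)   # small tilt
    img = img.filter(ImageFilter.GaussianBlur(random.uniform(0.0, 1.5)))  # defocus
    return img, text

# Example: a small labelled batch for detector/recognizer training
batch = [synthetic_plate() for _ in range(8)]
for img, label in batch:
    print(label, img.size)
```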