28 research outputs found

    Detection of face spoofing attacks on biometric identification systems

    Get PDF
    The article considers methods for detecting and recognising spoofing attacks on biometric protection systems that use the human face, analyses their qualitative indicators, and examines a convolutional-neural-network approach that yields the best HTER for a future protection system. The results highlight the advantages and disadvantages of designing an attack detection system for this area of application. An algorithm for a spoofing attack detection system based on convolutional neural networks using image depth maps is proposed.
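    The abstract's comparison metric, HTER, is the mean of the false acceptance and false rejection rates of the detector. A minimal sketch of its computation, with illustrative scores and threshold not taken from the article:

    ```python
    # Half Total Error Rate (HTER): the mean of the False Acceptance Rate
    # (FAR, spoof samples accepted as genuine) and the False Rejection Rate
    # (FRR, genuine samples rejected as spoofs).

    def hter(genuine_scores, spoof_scores, threshold):
        """Scores at or above the threshold are classified as genuine."""
        far = sum(s >= threshold for s in spoof_scores) / len(spoof_scores)
        frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
        return (far + frr) / 2

    # Illustrative classifier scores (not from the article):
    genuine = [0.9, 0.8, 0.7, 0.4]   # one genuine sample falls below 0.5
    spoof   = [0.1, 0.2, 0.6, 0.3]   # one spoof sample exceeds 0.5
    print(hter(genuine, spoof, threshold=0.5))  # FAR=0.25, FRR=0.25 -> 0.25
    ```

    In practice the threshold is tuned on a development set and the HTER reported on a held-out test set.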

    An efficient convolutional neural network based classifier to predict Tamil writer

    Get PDF
    Identification of Tamil handwritten calligraphy at different levels, such as character, word and paragraph, is complicated compared with other Western language scripts. None of the existing methods provides efficient Tamil handwriting writer identification (THWI), and offline Tamil handwritten identification at different levels still poses many motivating challenges for researchers. This paper employs a deep learning algorithm for handwriting image classification; deep learning can generate new features from a limited training dataset. Convolutional Neural Networks (CNNs), a class of deep feed-forward artificial neural networks, are applied to THWI. The data collection and classification phases of the CNN enable data access and automatic feature generation. Since the number of parameters is significantly reduced, the training time for THWI is proportionally reduced. Understandably, the CNNs produced a much higher identification rate than traditional ANNs at different levels of handwriting
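    The parameter reduction the abstract attributes to CNNs comes from weight sharing: one small kernel is reused at every pixel position, whereas a fully connected layer needs a weight per input-output pair. A back-of-the-envelope comparison, with layer sizes that are illustrative rather than taken from the paper:

    ```python
    # Parameter count: conv layer vs. dense layer on the same input image.
    # A conv filter shares its k*k weights across all positions; a dense
    # layer connects every input pixel to every output unit.

    H, W = 64, 64          # single-channel handwriting image (illustrative)
    filters, k = 32, 3     # 32 conv filters of size 3x3

    conv_params  = filters * (k * k + 1)        # weights + one bias per filter
    dense_params = (H * W) * filters + filters  # dense layer to 32 units

    print(conv_params)     # 320
    print(dense_params)    # 131104
    ```

    The roughly 400-fold difference is why CNN training time drops so sharply relative to a traditional fully connected ANN.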

    Testing the ability of Unmanned Aerial Systems and machine learning to map weeds at subfield scales: a test with the weed Alopecurus myosuroides (Huds.)

    Get PDF
    BACKGROUND: It is important to map agricultural weed populations in order to improve management and maintain future food security. Advances in data collection and statistical methodology have created new opportunities to aid in the mapping of weed populations. We set out to apply these new methodologies (Unmanned Aerial Systems - UAS) and statistical techniques (Convolutional Neural Networks - CNN) to the mapping of black-grass, a highly impactful weed in wheat fields in the UK. We tested this by undertaking an extensive UAS and field-based mapping campaign over the course of two years, in total collecting multispectral image data from 102 fields, of which 76 provided informative data. We used these data to construct a Vegetation Index (VI), which we used to train a custom CNN model from scratch. We undertook a suite of data engineering techniques, such as balancing and cleaning, to optimize performance of our metrics. We also investigated the transferability of the models from one field to another. RESULTS: The results show that our data collection methodology and implementation of CNN outperform previous approaches in the literature. We show that data engineering to account for "artefacts" in the image data increases our metrics significantly. We were not able to identify any traits shared between fields that result in high scores from our novel leave-one-field-out cross-validation (LOFO-CV) tests. CONCLUSION: We conclude that this evaluation procedure is a better estimation of real-world predictive value than those of past studies. We conclude that by engineering the image data set into discrete classes of data quality we increase the prediction accuracy from the baseline model by 5% to an AUC of 0.825. We find that the temporal effects studied here have no effect on our ability to model weed densities
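    The abstract does not name the specific vegetation index used to train the CNN; NDVI, computed per pixel from the red and near-infrared bands of multispectral imagery, is the most common choice and stands in here as an illustration of how such an index is derived:

    ```python
    # Normalised Difference Vegetation Index (NDVI) for one multispectral
    # pixel: NDVI = (NIR - Red) / (NIR + Red), ranging from -1 to 1.
    # Dense green vegetation reflects strongly in the near-infrared,
    # pushing values towards 1. The study's exact index is not specified.

    def ndvi(nir, red):
        if nir + red == 0:
            return 0.0          # avoid division by zero on dark pixels
        return (nir - red) / (nir + red)

    # Illustrative reflectance values (0-1 scale), not from the study:
    print(round(ndvi(nir=0.6, red=0.1), 3))   # vegetated pixel -> 0.714
    print(round(ndvi(nir=0.2, red=0.18), 3))  # bare soil -> 0.053
    ```

    Applied across a whole UAS orthomosaic, this yields the single-band VI raster from which training patches for the CNN can be cut.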

    SpatioTemporal Feature Integration and Model Fusion for Full Reference Video Quality Assessment

    Full text link
    Perceptual video quality assessment models are either frame-based or video-based, i.e., they apply spatiotemporal filtering or motion estimation to capture temporal video distortions. Despite their good performance on video quality databases, video-based approaches are time-consuming and harder to deploy efficiently. To balance high performance against computational efficiency, Netflix developed the Video Multi-method Assessment Fusion (VMAF) framework, which integrates multiple quality-aware features to predict video quality. Nevertheless, this fusion framework does not fully exploit temporal video quality measurements, which are relevant to temporal video distortions. To this end, we propose two improvements to the VMAF framework: SpatioTemporal VMAF and Ensemble VMAF. Both algorithms exploit efficient temporal video features which are fed into a single or multiple regression models. To train our models, we designed a large subjective database and evaluated the proposed models against state-of-the-art approaches. The compared algorithms will be made available as part of the open source package at https://github.com/Netflix/vmaf
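    The fusion step described above feeds several quality-aware features into a trained regressor (VMAF itself uses a support vector regressor over many features). As a minimal stand-in for that idea, the sketch below fits just two features to subjective scores with ordinary least squares, solving the 2x2 normal equations directly; the features, weights and data are illustrative, not from the paper:

    ```python
    # Minimal stand-in for VMAF-style fusion: map two quality features
    # (say, one spatial and one temporal measurement) to subjective
    # quality scores with ordinary least squares.

    def fit_two_features(X, y):
        """Solve the 2x2 normal equations X^T X w = X^T y for weights w."""
        a = sum(x1 * x1 for x1, _ in X)
        b = sum(x1 * x2 for x1, x2 in X)
        d = sum(x2 * x2 for _, x2 in X)
        p = sum(x1 * t for (x1, _), t in zip(X, y))
        q = sum(x2 * t for (_, x2), t in zip(X, y))
        det = a * d - b * b
        return ((d * p - b * q) / det, (a * q - b * p) / det)

    def predict(w, x):
        return w[0] * x[0] + w[1] * x[1]

    # Toy data: (spatial_feature, temporal_feature) per video clip, with
    # scores generated from known weights so the fit is easy to verify.
    X = [(0.9, 0.8), (0.5, 0.4), (0.2, 0.1), (0.7, 0.9)]
    y = [3.0 * x1 + 2.0 * x2 for x1, x2 in X]
    w = fit_two_features(X, y)
    print(round(predict(w, (0.6, 0.5)), 3))  # recovers 3.0*0.6 + 2.0*0.5 = 2.8
    ```

    A production fusion model would use a nonlinear regressor and per-frame temporal pooling, but the structure, features in, one quality score out, is the same.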