    Unconstrained Face Verification using Deep CNN Features

    In this paper, we present an algorithm for unconstrained face verification based on deep convolutional features and evaluate it on the newly released IARPA Janus Benchmark A (IJB-A) dataset. The IJB-A dataset includes real-world unconstrained faces from 500 subjects with full pose and illumination variations, making it much harder than the traditional Labeled Faces in the Wild (LFW) and YouTube Faces (YTF) datasets. The deep convolutional neural network (DCNN) is trained using the CASIA-WebFace dataset. Extensive experiments on the IJB-A dataset are provided.
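
    A minimal sketch of the verification step implied above, assuming faces are compared by cosine similarity over DCNN feature vectors with an illustrative decision threshold; the paper's exact matching metric and threshold are not specified here:

```python
# Hypothetical verification step: accept a pair as the same identity when
# the cosine similarity of their deep features exceeds a threshold.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(feat_a: np.ndarray, feat_b: np.ndarray, threshold: float = 0.5) -> bool:
    """Return True if the two face features are judged the same subject.
    The threshold value is an assumption, not the paper's tuned setting."""
    return cosine_similarity(feat_a, feat_b) >= threshold

# Example with random stand-ins for DCNN feature vectors:
rng = np.random.default_rng(0)
f1, f2 = rng.normal(size=512), rng.normal(size=512)
print(verify(f1, f2))
```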

    Deep convolutional neural network with 2D spectral energy maps for fault diagnosis of gearboxes under variable speed

    For industrial safety, correct classification of gearbox fault conditions is necessary. One of the most crucial tasks in data-driven fault diagnosis is determining the best set of features by analyzing the statistical parameters of the signals. However, under variable speed conditions, these statistical parameters cannot uncover the dynamic characteristics of different gearbox fault conditions. More recently, several deep learning algorithms have been used to improve the feature selection process, but domain-knowledge expertise is still necessary. In this paper, a combination of domain knowledge analysis and a deep neural network is proposed. From the input acoustic emission (AE) signal, a two-dimensional spectral energy map (2D AE-SEM) is created that forms a near-identical fault pattern across the various speed conditions of the gearbox. A deep convolutional neural network (DCNN) then investigates the detailed structure of this 2D input for final fault classification. The 2D AE-SEM offers a graphical depiction of the acoustic emission spectral characteristics, and the proposed DCNN delivers robust classification performance, with a diagnostic fault classification accuracy of 96.37% across all considered scenarios.
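
    A rough sketch of how a 2D spectral energy map could be built from an AE signal before DCNN classification; the STFT windowing parameters and log-energy normalization are illustrative assumptions, not the paper's exact 2D AE-SEM construction:

```python
# Hypothetical 2D energy-map construction from an acoustic emission signal.
import numpy as np
from scipy.signal import spectrogram

def ae_energy_map(signal: np.ndarray, fs: float, nperseg: int = 256) -> np.ndarray:
    """Short-time spectrogram of the AE signal, converted to a
    normalized log-energy image suitable as DCNN input."""
    _, _, sxx = spectrogram(signal, fs=fs, nperseg=nperseg)
    energy = 10.0 * np.log10(sxx + 1e-12)      # log energy in dB
    energy -= energy.min()
    return energy / (energy.max() + 1e-12)     # scale to [0, 1]

# Example with a synthetic AE-like signal (3 kHz tone plus noise):
fs = 50_000.0
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 3_000 * t) + 0.1 * np.random.randn(t.size)
print(ae_energy_map(x, fs).shape)  # (freq_bins, time_frames)
```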

    Advertisement billboard detection and geotagging system with inductive transfer learning in deep convolutional neural network

    In this paper, we propose an approach to detect and geotag advertisement billboards in real time. Our approach uses AlexNet’s Deep Convolutional Neural Network (DCNN), pre-trained for image classification over 1000 categories. To improve the performance of the pre-trained network, we retrain it on additional advertisement billboard images using an inductive transfer learning approach and fine-tune the output layer to advertisement-billboard-related categories. The detected advertisement billboard images are then geotagged by inserting Exif metadata into the image files. Experimental results show that the approach achieves 92.7% training accuracy for advertisement billboard detection and 71.86% overall testing accuracy.
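
    A minimal sketch of the inductive transfer-learning step described above, using torchvision's AlexNet; the two billboard-related classes and the frozen feature extractor are assumptions for illustration, not the authors' exact training recipe:

```python
# Hypothetical fine-tuning of a pre-trained AlexNet for billboard detection.
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2  # assumed: "billboard" vs. "not billboard"

# Load AlexNet pre-trained on the 1000 ImageNet categories.
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)

# Freeze the pre-trained convolutional features; only the classifier
# head is retrained on the billboard images (an illustrative choice).
for param in model.features.parameters():
    param.requires_grad = False

# Replace the 1000-way output layer with the billboard-related categories.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_CLASSES)
```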

    Detection of microcalcifications in photon-counting dedicated breast-CT using a deep convolutional neural network: Proof of principle

    OBJECTIVE In this study, we investigate the feasibility of a deep Convolutional Neural Network (dCNN), trained with mammographic images, to detect and classify microcalcifications (MC) in breast-CT (BCT) images. METHODS This retrospective single-center study was approved by the local ethics committee. 3518 icons generated from 319 mammograms were classified into three classes: "no MC" (1121), "probably benign MC" (1332), and "suspicious MC" (1065). A dCNN was trained (70% of data), validated (20%), and tested on a "real-world" dataset (10%). The diagnostic performance of the dCNN was tested on a subset of 60 icons, generated from 30 mammograms and 30 breast-CT images, and compared to human reading. ROC analysis was used to calculate diagnostic performance. Moreover, colored probability maps for representative BCT images were calculated using a sliding-window approach. RESULTS The dCNN reached an accuracy of 98.8% on the "real-world" dataset. The accuracy on the subset of 60 icons was 100% for mammographic images, 60% for "no MC", 80% for "probably benign MC" and 100% for "suspicious MC". Intra-class correlation between the dCNN and the readers was almost perfect (0.85). Kappa values between the two readers (0.93) and the dCNN were almost perfect (reader 1: 0.85 and reader 2: 0.82). The sliding-window approach successfully detected suspicious MC with high image quality. The diagnostic performance of the dCNN to classify benign and suspicious MC was excellent with an AUC of 93.8% (95% CI 87.4%-100%). CONCLUSION Deep convolutional networks can be used to detect and classify benign and suspicious MC in breast-CT images.
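
    A schematic sketch of the sliding-window approach used to produce probability maps over BCT images; the window size, stride, and classifier interface are assumptions, and any trained dCNN returning a per-patch probability could be substituted:

```python
# Hypothetical sliding-window probability map over a 2D image.
import numpy as np

def probability_map(image: np.ndarray, classify_patch,
                    win: int = 64, stride: int = 16) -> np.ndarray:
    """Slide a window over the image and average the 'suspicious MC'
    probability that classify_patch assigns to each covered pixel."""
    h, w = image.shape
    pmap = np.zeros((h, w))
    counts = np.zeros((h, w))
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            p = classify_patch(image[y:y + win, x:x + win])  # prob in [0, 1]
            pmap[y:y + win, x:x + win] += p
            counts[y:y + win, x:x + win] += 1
    return pmap / np.maximum(counts, 1)

# Example with a dummy classifier standing in for the trained dCNN:
dummy = lambda patch: float(patch.mean() > 0.5)
print(probability_map(np.random.rand(256, 256), dummy).shape)
```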

    Automated pectoral muscle identification on MLO-view mammograms: Comparison of deep neural network to conventional computer vision

    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/149204/1/mp13451_am.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/149204/2/mp13451.pd