
    Complex Network Classification with Convolutional Neural Network

    Classifying large-scale networks into several categories and distinguishing them according to their fine structures is of great importance, with several applications in real life. However, most studies of complex networks focus on the properties of a single network and seldom on classification, clustering, and comparison between different networks, in which the network is treated as a whole. Due to the non-Euclidean properties of the data, conventional methods can hardly be applied to networks directly. In this paper, we propose a novel framework, the complex network classifier (CNC), which integrates network embedding and a convolutional neural network to tackle the problem of network classification. By training the classifier on synthetic complex network data and real international trade network data, we show that the CNC can not only classify networks with high accuracy and robustness but also automatically extract the features of the networks.
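    As a rough illustration of the embedding-plus-CNN pipeline this abstract describes, the sketch below renders a graph's 2D node embedding as a density image and feeds it to a small CNN. The spectral layout, grid resolution, and layer sizes are illustrative assumptions, not the authors' exact design.

```python
# Minimal CNC-style sketch, assuming NetworkX and PyTorch are available.
import networkx as nx
import numpy as np
import torch
import torch.nn as nn

def graph_to_image(g, res=32):
    """Embed a graph in 2D and rasterize node density onto a res x res grid."""
    pos = nx.spectral_layout(g)  # 2D embedding, coordinates roughly in [-1, 1]
    img = np.zeros((res, res), dtype=np.float32)
    for x, y in pos.values():
        i = int(np.clip((x + 1) / 2 * (res - 1), 0, res - 1))
        j = int(np.clip((y + 1) / 2 * (res - 1), 0, res - 1))
        img[i, j] += 1.0  # accumulate node density per cell
    return torch.from_numpy(img).unsqueeze(0)  # shape (1, res, res)

class TinyCNC(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.head = nn.Linear(16 * 8 * 8, n_classes)
    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Example: a synthetic Barabasi-Albert graph, as in the paper's synthetic data.
g = nx.barabasi_albert_graph(200, 3)
logits = TinyCNC()(graph_to_image(g).unsqueeze(0))  # batch of one "image"
print(logits.shape)  # torch.Size([1, 2])
```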

    Performance evaluation of transfer learning based deep convolutional neural network with limited fused spectro-temporal data for land cover classification

    Deep learning (DL) techniques are effective in various applications, such as parameter estimation, image classification, recognition, and anomaly detection. They excel with abundant training data but struggle with limited data. To overcome this, transfer learning is commonly used, leveraging complex learning abilities, saving time, and handling limited labeled data. This study assesses a transfer learning (TL)-based pre-trained deep convolutional neural network (DCNN) for classifying land use and land cover using a limited and imbalanced dataset of fused spectro-temporal data. It compares the performance of shallow artificial neural networks (ANNs) and deep convolutional neural networks, utilizing multi-spectral Sentinel-2 and high-resolution PlanetScope data. Both the machine learning and deep learning algorithms successfully classified the fused data, but the transfer learning-based deep convolutional neural network outperformed the artificial neural network. The evaluation considered the weighted average F1-score and overall classification accuracy. The transfer learning-based convolutional neural network achieved a weighted average F1-score of 0.92 and a classification accuracy of 0.93, while the artificial neural network achieved a weighted average F1-score of 0.87 and a classification accuracy of 0.89. These results highlight the superior performance of the transfer-learned convolutional neural network on a limited and imbalanced dataset compared to the traditional artificial neural network algorithm.
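    A hedged sketch of the transfer-learning setup follows, assuming PyTorch/torchvision. The ResNet-18 backbone, number of land-cover classes, and class weights are placeholders, not the study's actual configuration; a class-weighted loss is one common way to handle the imbalance the abstract mentions.

```python
import torch
import torch.nn as nn
from torchvision import models

n_classes = 6  # hypothetical number of land-cover classes
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

for p in model.parameters():  # freeze the pre-trained feature extractor
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, n_classes)  # new trainable head

# Illustrative class weights (inverse class frequency would be typical)
# to counter the imbalanced training set.
class_weights = torch.tensor([1.0, 2.5, 1.2, 4.0, 1.0, 3.0])
criterion = nn.CrossEntropyLoss(weight=class_weights)
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

# Stand-in batch; real fused spectro-temporal tiles would likely have more
# than 3 bands and require adapting the first convolution accordingly.
x = torch.randn(4, 3, 224, 224)
loss = criterion(model(x), torch.randint(0, n_classes, (4,)))
loss.backward()
optimizer.step()
```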

    Multilayer Complex Network Descriptors for Color-Texture Characterization

    A new method based on complex networks is proposed for color-texture analysis. The proposal consists of modeling the image as a multilayer complex network where each color channel is a layer, and each pixel (in each color channel) is represented as a network vertex. The dynamic evolution of the network is assessed using a set of modeling parameters (radii and thresholds), and new characterization techniques are introduced to capture information regarding within- and between-channel spatial interactions. An automatic and adaptive approach for threshold selection is also proposed. We conduct classification experiments on 5 well-known datasets: Vistex, Usptex, Outex13, CUReT, and MBT. Results are compared with various methods from the literature, including deep convolutional neural networks with pre-trained architectures. The proposed method presented the highest overall performance over the 5 datasets, with a mean accuracy of 97.7% against the 97.0% achieved by the 50-layer ResNet convolutional neural network.
    Comment: 20 pages, 7 figures and 4 tables
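    To make the pixel-as-vertex modeling concrete, here is a minimal NumPy sketch: each channel is treated as a layer, pixels within a radius are linked when their intensities differ by at most a threshold, and simple degree statistics serve as descriptors. The radius, thresholds, and mean/std summary are simplifications of the paper's characterization, and periodic boundaries are used only for brevity.

```python
import numpy as np

def channel_degrees(channel, radius=2, threshold=0.1):
    """Degree of each pixel-vertex: neighbors within `radius` whose
    normalized intensity differs by at most `threshold`."""
    offsets = [(dy, dx)
               for dy in range(-radius, radius + 1)
               for dx in range(-radius, radius + 1)
               if (dy, dx) != (0, 0) and dy * dy + dx * dx <= radius * radius]
    deg = np.zeros(channel.shape)
    for dy, dx in offsets:
        # np.roll wraps at the borders (periodic boundary, for brevity only)
        shifted = np.roll(np.roll(channel, dy, axis=0), dx, axis=1)
        deg += (np.abs(channel - shifted) <= threshold)
    return deg

def color_texture_descriptor(img, thresholds=(0.05, 0.1, 0.2)):
    """Concatenate mean/std of vertex degree per channel and threshold."""
    feats = []
    for c in range(img.shape[2]):      # one network layer per color channel
        for t in thresholds:           # network "evolution" over thresholds
            d = channel_degrees(img[:, :, c], threshold=t)
            feats += [d.mean(), d.std()]
    return np.array(feats)

img = np.random.rand(64, 64, 3)              # stand-in for an RGB texture
print(color_texture_descriptor(img).shape)   # (18,) = 3 channels x 3 t x 2
```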

    Convolutional Drift Networks for Video Classification

    Analyzing spatio-temporal data like video is a challenging task that requires processing visual and temporal information effectively. Convolutional Neural Networks have shown promise as baseline fixed feature extractors through transfer learning, a technique that helps minimize the training cost on visual information. Temporal information is often handled using hand-crafted features or Recurrent Neural Networks, but these can be overly specific or prohibitively complex. Building a fully trainable system that can efficiently analyze spatio-temporal data without hand-crafted features or complex training is an open challenge. We present a new neural network architecture to address this challenge, the Convolutional Drift Network (CDN). Our CDN architecture combines the visual feature extraction power of deep Convolutional Neural Networks with the intrinsically efficient temporal processing provided by Reservoir Computing. In this introductory paper on the CDN, we provide a very simple baseline implementation tested on two egocentric (first-person) video activity datasets. We achieve video-level activity classification results on par with state-of-the-art methods. Notably, performance on this complex spatio-temporal task was produced by training only a single feed-forward layer in the CDN.
    Comment: Published in IEEE Rebooting Computing
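    The sketch below illustrates the CDN idea under stated assumptions: a frozen pre-trained CNN extracts per-frame features, and an untrained echo-state-style reservoir summarizes them over time. The backbone choice, reservoir size, and spectral radius are illustrative, not the paper's values.

```python
import numpy as np
import torch
from torchvision import models

cnn = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
cnn.fc = torch.nn.Identity()  # use the CNN as a fixed feature extractor
cnn.eval()

rng = np.random.default_rng(0)
n_feat, n_res = 512, 256
W_in = rng.uniform(-0.1, 0.1, (n_res, n_feat))  # fixed input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))      # fixed recurrent weights
W *= 0.9 / max(abs(np.linalg.eigvals(W)))       # spectral radius < 1

def reservoir_state(frame_feats):
    """Drive the untrained reservoir with per-frame CNN features."""
    x = np.zeros(n_res)
    for f in frame_feats:
        x = np.tanh(W_in @ f + W @ x)
    return x  # final state summarizes the whole clip

frames = torch.randn(16, 3, 224, 224)  # stand-in for one video clip
with torch.no_grad():
    feats = cnn(frames).numpy()        # (16, 512) per-frame features
state = reservoir_state(feats)
# Only a single linear readout on `state` would be trained for
# classification, matching the "one feed-forward layer" claim above.
```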

    VideoCapsuleNet: A Simplified Network for Action Detection

    The recent advances in Deep Convolutional Neural Networks (DCNNs) have shown extremely good results for video human action classification; however, action detection is still a challenging problem. Current action detection approaches follow a complex pipeline involving multiple tasks such as tube proposals, optical flow, and tube classification. In this work, we present a more elegant solution for action detection based on the recently developed capsule network. We propose a 3D capsule network for videos, called VideoCapsuleNet: a unified network for action detection which can jointly perform pixel-wise action segmentation and action classification. The proposed network is a generalization of the capsule network from 2D to 3D, which takes a sequence of video frames as input. The 3D generalization drastically increases the number of capsules in the network, making capsule routing computationally expensive. To address this issue, we introduce capsule-pooling in the convolutional capsule layer, which makes the voting algorithm tractable. The routing-by-agreement in the network inherently models the action representations, and various action characteristics are captured by the predicted capsules. This inspired us to utilize the capsules for action localization: the class-specific capsules predicted by the network are used to determine a pixel-wise localization of actions. The localization is further improved by parameterized skip connections with the convolutional capsule layers, and the network is trained end-to-end with both a classification and a localization loss. The proposed network achieves state-of-the-art performance on multiple action detection datasets, including UCF-Sports, J-HMDB, and UCF-101 (24 classes), with an impressive ~20% improvement on UCF-101 and ~15% improvement on J-HMDB in terms of v-mAP scores.
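    A minimal sketch of the capsule-pooling intuition, assuming PyTorch; the tensor layout and sizes are illustrative. The idea: average the votes of capsules inside each receptive field so routing sees one mean vote per field, which is what keeps routing-by-agreement tractable on 3D (video) capsule grids.

```python
import torch
import torch.nn.functional as F

# Hypothetical capsule grid: batch, capsule types, pose dims, T, H, W
B, n_types, pose_dim, T, H, W = 2, 8, 16, 4, 16, 16
poses = torch.randn(B, n_types, pose_dim, T, H, W)

# Fold capsule type and pose into the channel axis, then average-pool the
# spatio-temporal grid: each output cell holds the mean vote of a 2x2x2 block.
x = poses.reshape(B, n_types * pose_dim, T, H, W)
pooled = F.avg_pool3d(x, kernel_size=2, stride=2)
pooled = pooled.reshape(B, n_types, pose_dim, T // 2, H // 2, W // 2)
print(pooled.shape)  # torch.Size([2, 8, 16, 2, 8, 8])
# Routing-by-agreement would now run on these mean votes instead of on every
# capsule in the block, cutting the routing cost by the pooling volume.
```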

    A Crop Pests Image Classification Algorithm Based on Deep Convolutional Neural Network

    Conventional pest image classification methods may not be accurate due to the complex farmland background, sunlight, and pest postures. To raise the accuracy, the deep convolutional neural network (DCNN), a concept from deep learning, was used in this study to classify crop pest images. Based on our experiments, in which LeNet-5 and AlexNet were used to classify pest images, we analyzed the effects of both the convolution kernel and the number of layers on the network, and redesigned the structure of the convolutional neural network for crop pests. Furthermore, 82 common pest types were classified, with the accuracy reaching 91%. The comparison to conventional classification methods shows that our method is not only feasible but superior.
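    For orientation, here is a LeNet-5-style classifier adapted to 82 pest classes, assuming PyTorch. The input size and filter counts follow the classic LeNet-5 layout and are illustrative; they are not the paper's redesigned architecture.

```python
import torch
import torch.nn as nn

class PestNet(nn.Module):
    def __init__(self, n_classes=82):
        super().__init__()
        # LeNet-5-style feature extractor: two conv/pool stages
        self.features = nn.Sequential(
            nn.Conv2d(3, 6, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(6, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2))
        # Classic 120-84 fully connected head, widened to 82 output classes
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.ReLU(),
            nn.Linear(120, 84), nn.ReLU(),
            nn.Linear(84, n_classes))
    def forward(self, x):
        return self.classifier(self.features(x))

logits = PestNet()(torch.randn(1, 3, 32, 32))  # one 32x32 RGB pest image
print(logits.shape)  # torch.Size([1, 82])
```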

    Single-epoch supernova classification with deep convolutional neural networks

    Supernovae Type-Ia (SNeIa) play a significant role in exploring the history of the expansion of the Universe, since they are the best-known standard candles with which we can accurately measure the distance to the objects. Finding large samples of SNeIa and investigating their detailed characteristics have become an important issue in cosmology and astronomy. Existing methods rely on a photometric approach that first measures the luminance of supernova candidates precisely and then fits the results to a parametric function of temporal changes in luminance. However, this inevitably requires multi-epoch observations and complex luminance measurements. In this work, we present a novel method for classifying SNeIa simply from single-epoch observation images without any complex measurements, by effectively integrating state-of-the-art computer vision methodology into the standard photometric approach. Our method first builds a convolutional neural network for estimating the luminance of supernovae from telescope images, and then constructs another neural network for the classification, where the estimated luminance and observation dates are used as features. Both neural networks are integrated into a single deep neural network to classify SNeIa directly from observation images. Experimental results show the effectiveness of the proposed method and reveal classification performance comparable to existing photometric methods with multi-epoch observations.
    Comment: 7 pages, published as a workshop paper in ICDCS 2017, in June 2017
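    A minimal sketch of the two-stage design the abstract outlines, assuming PyTorch: one CNN regresses a scalar luminance from the image, and a second head classifies from the estimated luminance and the observation date, trained end-to-end as one network. The layer sizes and the way the date is injected are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SNeIaNet(nn.Module):
    """A CNN estimates luminance from the image; a second head classifies
    from (estimated luminance, observation date)."""
    def __init__(self):
        super().__init__()
        self.luminance_net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(32 * 8 * 8, 1))   # scalar luminance
        self.classifier = nn.Sequential(
            nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))  # SNeIa vs. not

    def forward(self, image, obs_date):
        lum = self.luminance_net(image)                 # (B, 1)
        return self.classifier(torch.cat([lum, obs_date], dim=1))

# Four 32x32 telescope cutouts plus their (normalized) observation dates.
model = SNeIaNet()
logits = model(torch.randn(4, 1, 32, 32), torch.rand(4, 1))
print(logits.shape)  # torch.Size([4, 2])
```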