10 research outputs found

    Shape recognition through multi-level fusion of features and classifiers

    Shape recognition is a fundamental problem and a special type of image classification, where each shape is considered as a class. Current approaches to shape recognition mainly focus on designing low-level shape descriptors and classifying them using machine learning approaches. In order to achieve effective learning of shape features, it is essential to ensure that a comprehensive set of high-quality features can be extracted from the original shape data. This has motivated us to develop methods for fusing features and classifiers to advance classification performance. In this paper, we propose a multi-level framework for fusion of features and classifiers in the setting of granular computing. The proposed framework creates diversity among classifiers by adopting feature selection and fusion to produce diverse feature sets and by training diverse classifiers using different learning algorithms. The experimental results show that the proposed multi-level framework can effectively create diversity among classifiers, leading to considerable improvements in classification performance.
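    A minimal sketch of the general idea described in this abstract, not the paper's actual framework: diverse feature sets are created with different feature-selection settings, diverse classifiers are trained with different learning algorithms, and their outputs are fused by majority voting. The digits dataset (a stand-in for shape descriptors), the specific selectors, learners, and parameter values are all illustrative assumptions.

```python
# Sketch: multi-level fusion of features and classifiers via voting (assumptions only).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)            # stand-in for shape descriptors
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Level 1: create diverse feature sets via different feature-selection settings.
selectors = [SelectKBest(f_classif, k=k).fit(X_tr, y_tr) for k in (16, 32, 48)]

# Level 2: train diverse base classifiers with different learning algorithms.
learners = [DecisionTreeClassifier(random_state=0),
            KNeighborsClassifier(n_neighbors=3),
            SVC(kernel="rbf", gamma="scale")]
models = [clf.fit(sel.transform(X_tr), y_tr)
          for sel, clf in zip(selectors, learners)]

# Level 3: fuse classifier outputs by majority voting.
preds = np.stack([clf.predict(sel.transform(X_te))
                  for sel, clf in zip(selectors, models)])
fused = np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, preds)
print("fused accuracy:", (fused == y_te).mean())
```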

    Image processing system based on similarity/dissimilarity measures to classify binary images from contour-based features

    Image Processing Systems (IPS) aim to solve tasks such as image classification or segmentation based on image content. Many authors have proposed a variety of techniques to tackle the image classification task. Numerous methods address the performance of the IPS [1], as well as the influence of external factors such as illumination, rotation, and noise [2]. However, there is an increasing interest in classifying shapes from binary images (BI). Shape Classification (SC) from BI takes a segmented image as a sample (background segmentation [3]) and aims to identify objects based on their shape.
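    A minimal sketch of the general setting, not the paper's method: a contour-based feature (here, an assumed centroid-distance histogram over boundary pixels) is extracted from a binary image and compared with a Euclidean dissimilarity measure under a nearest-prototype rule. The descriptor, distance, and toy shapes are illustrative assumptions.

```python
# Sketch: contour-based features + dissimilarity-based classification of binary images.
import numpy as np

def contour_pixels(img):
    """Boundary pixels of a binary image: foreground pixels with a background 4-neighbour."""
    padded = np.pad(img, 1)
    neigh_min = np.minimum.reduce([padded[:-2, 1:-1], padded[2:, 1:-1],
                                   padded[1:-1, :-2], padded[1:-1, 2:]])
    return np.argwhere((img == 1) & (neigh_min == 0))

def shape_descriptor(img, bins=16):
    """Normalised histogram of contour distances to the contour centroid (scale-normalised)."""
    pts = contour_pixels(img).astype(float)
    d = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
    d /= d.max() + 1e-9
    hist, _ = np.histogram(d, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

def classify(query, prototypes, labels):
    """Nearest-prototype rule under a Euclidean dissimilarity between descriptors."""
    q = shape_descriptor(query)
    dissim = [np.linalg.norm(q - shape_descriptor(p)) for p in prototypes]
    return labels[int(np.argmin(dissim))]

# Toy usage: a filled square vs. a filled disc as prototypes.
sq = np.zeros((32, 32), int); sq[8:24, 8:24] = 1
yy, xx = np.mgrid[:32, :32]
disc = ((yy - 16) ** 2 + (xx - 16) ** 2 <= 100).astype(int)
print(classify(disc, [sq, disc], ["square", "disc"]))  # -> "disc"
```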

    Deep learning for animal recognition

    Deep learning has achieved many successes in different computer vision tasks such as classification, detection, and segmentation of objects or faces. Many of these successes can be ascribed to training deep convolutional neural network architectures on datasets containing many images. Limited research has explored deep learning methods for performing recognition or detection of animals using a limited number of images. This thesis examines the use of different deep learning techniques and conventional computer vision methods for performing animal recognition or detection with relatively small training datasets, and has the following objectives: 1) Analyse the performance of deep learning systems compared to classical approaches when only a limited number of images of animals is available; 2) Develop an algorithm for effectively dealing with the rotation variation naturally present in aerial images; 3) Construct a computer vision system that is more robust to illumination variation; 4) Analyse how important the use of different color spaces is in deep learning; 5) Compare different deep convolutional neural-network algorithms for detecting and recognizing individual instances (identities) in a group of animals, for example, badgers. For most of the experiments, effectively reduced neural network recognition systems are used, which are derived from existing architectures. These reduced systems are compared to standard architectures and to classical computer vision methods. We also propose a color transformation algorithm, a novel rotation-matrix data-augmentation algorithm, and a hybrid variant of these methods that incorporates color constancy, with the aim of enhancing images and constructing a system that is more robust to different kinds of visual appearance. The results show that our proposed algorithms help deep learning systems classify animals more accurately across a large number of different animal datasets. Furthermore, the developed systems yield performances that significantly surpass classical computer vision techniques, even with limited amounts of images available for training.
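    A minimal sketch of rotation-based data augmentation using an explicit 2-D rotation matrix, illustrating only the general idea mentioned above; the thesis's specific rotation-matrix algorithm and its color-constancy variants are not reproduced here, and the function names, angle set, and sampling scheme are assumptions.

```python
# Sketch: rotation-matrix data augmentation for training images (illustrative only).
import numpy as np

def rotate_image(img, angle_deg):
    """Rotate an H x W (x C) image about its centre via the inverse rotation matrix
    and nearest-neighbour sampling; pixels mapped from outside the image become 0."""
    h, w = img.shape[:2]
    theta = np.deg2rad(angle_deg)
    # Inverse rotation matrix: maps output coordinates back to input coordinates.
    inv_rot = np.array([[np.cos(theta),  np.sin(theta)],
                        [-np.sin(theta), np.cos(theta)]])
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[:h, :w]
    coords = np.stack([ys - cy, xs - cx]).reshape(2, -1)   # centred output grid
    src = inv_rot @ coords                                  # back-projected source grid
    sy = np.rint(src[0] + cy).astype(int)
    sx = np.rint(src[1] + cx).astype(int)
    valid = (sy >= 0) & (sy < h) & (sx >= 0) & (sx < w)
    out = np.zeros_like(img)
    out[ys.ravel()[valid], xs.ravel()[valid]] = img[sy[valid], sx[valid]]
    return out

def augment(img, angles=(0, 45, 90, 180, 270)):
    """Produce rotated copies of one training image for augmentation."""
    return [rotate_image(img, a) for a in angles]

# Toy usage on a small single-channel image.
sample = np.zeros((64, 64)); sample[10:20, 28:36] = 1.0
batch = augment(sample)
print(len(batch), batch[1].shape)   # 5 rotated copies, each 64 x 64
```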