
    Doctor of Philosophy

    Scene labeling is the problem of assigning an object label to each pixel of a given image. It is the primary step towards image understanding and unifies object recognition and image segmentation in a single framework. A perfect scene labeling framework detects and densely labels every region and every object that exists in an image. This task is of substantial importance in a wide range of computer vision applications. Contextual information plays an important role in scene labeling frameworks: a contextual model exploits the relationships among the objects in a scene to facilitate object detection and image segmentation, and using contextual information effectively is one of the main questions that any scene labeling framework must answer. In this dissertation, we develop two scene labeling frameworks that rely heavily on contextual information to improve performance over state-of-the-art methods. The first model, the multiclass multiscale contextual model (MCMS), uses contextual information from multiple objects and at different scales to learn discriminative models in a supervised setting. The MCMS model incorporates cross-object and inter-object information into one probabilistic framework, and is thus able to capture geometrical relationships and dependencies among multiple objects in addition to local information from each single object present in an image. The second model, the contextual hierarchical model (CHM), learns contextual information in a hierarchy for scene labeling. At each level of the hierarchy, a classifier is trained on downsampled input images and the outputs of previous levels. The CHM then incorporates the resulting multiresolution contextual information into a classifier that segments the input image at the original resolution. This training strategy allows a joint posterior probability to be optimized at multiple resolutions through the hierarchy. We demonstrate the performance of the CHM on challenging tasks such as outdoor scene labeling, edge detection in natural images, and membrane detection in electron microscopy images. We also introduce two novel classification methods. WNS-AdaBoost speeds up the training of AdaBoost by providing a compact representation of the training set. The disjunctive normal random forest (DNRF) is an ensemble method that can learn complex decision boundaries and achieves low generalization error by optimizing a single objective function for each weak classifier in the ensemble. Finally, a segmentation framework is introduced that exploits both shape information and regional statistics to segment irregularly shaped intracellular structures, such as mitochondria, in electron microscopy images.
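
    The hierarchical training scheme described above (a per-pixel classifier at each level, trained on a downsampled image together with the probability maps produced by coarser levels) can be made concrete with a minimal Python sketch. This is not the authors' CHM implementation: the logistic-regression classifier, block-mean pooling, nearest-neighbour upsampling, and the toy binary example are all assumptions chosen only to keep the illustration short and runnable.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def downsample(img, factor):
        """Block-mean pooling by an integer factor (assumes divisible sizes)."""
        h, w = img.shape
        return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

    def upsample(img, shape):
        """Nearest-neighbour resize back to `shape`."""
        rows = np.arange(shape[0]) * img.shape[0] // shape[0]
        cols = np.arange(shape[1]) * img.shape[1] // shape[1]
        return img[np.ix_(rows, cols)]

    def train_hierarchy(image, labels, n_levels=3):
        """Train one per-pixel classifier per level, coarsest level first; each
        finer level also sees the probability maps produced by coarser levels."""
        classifiers, context = [], []          # context maps kept at full resolution
        for level in reversed(range(n_levels)):
            f = 2 ** level
            img_l = downsample(image, f)
            lab_l = (downsample(labels.astype(float), f) > 0.5).astype(int).ravel()
            feats = [img_l.ravel()] + [downsample(c, f).ravel() for c in context]
            X = np.stack(feats, axis=1)
            clf = LogisticRegression(max_iter=1000).fit(X, lab_l)
            prob_l = clf.predict_proba(X)[:, 1].reshape(img_l.shape)
            context.append(upsample(prob_l, image.shape))
            classifiers.append(clf)
        return classifiers, context

    # Toy example: a bright square on a noisy background.
    rng = np.random.default_rng(0)
    image = rng.normal(0.0, 0.3, (64, 64)); image[16:48, 16:48] += 1.0
    labels = np.zeros((64, 64), dtype=int); labels[16:48, 16:48] = 1
    clfs, ctx = train_hierarchy(image, labels)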

    A new approach to image classification based on a deep multiclass AdaBoosting ensemble

    In recent years, deep learning methods have been developed to solve complex problems, and they have proven effective. Convolutional networks are one such learning method and are widely applied to image processing and classification. Hybrid (ensemble) methods are another family of multi-component machine learning methods and are categorized into independent and dependent types; the AdaBoost algorithm is one of them. Image classification now has many applications, and several algorithms have been proposed for binary and multiclass classification, most of which depend strongly on the data. The present study combines deep learning with a hybrid method to classify images, on the premise that this combination can reduce the classification error rate. The proposed algorithm combines the AdaBoost hybrid method with a two-layer convolutional learning method. It was implemented on the multiclass MNIST dataset, and the results show a reduction in the error rate: the proposed method's error rate is lower than that of the AdaBoost and convolution-only methods, and the network is more stable than the other methods.
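
    For readers who want a concrete picture of boosting over small neural weak learners, the sketch below runs a SAMME-style multiclass AdaBoost loop with compact MLPs standing in for the paper's two-layer convolutional learners, on scikit-learn's digits data rather than MNIST. Both substitutions are assumptions made to keep the example self-contained; the paper's architecture and settings are not reproduced here.

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)
    Xtr, Xte, ytr, yte = train_test_split(X / 16.0, y, random_state=0)
    K = len(np.unique(y))                               # number of classes

    n_rounds = 5
    w = np.full(len(Xtr), 1.0 / len(Xtr))               # sample weights
    learners, alphas = [], []
    rng = np.random.default_rng(0)
    for _ in range(n_rounds):
        # Weighted fitting via resampling, since MLPClassifier has no sample_weight.
        idx = rng.choice(len(Xtr), size=len(Xtr), p=w)
        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300,
                            random_state=0).fit(Xtr[idx], ytr[idx])
        miss = clf.predict(Xtr) != ytr
        err = np.clip(np.dot(w, miss), 1e-10, 1 - 1e-10)
        alpha = np.log((1 - err) / err) + np.log(K - 1)  # SAMME learner weight
        w *= np.exp(alpha * miss)
        w /= w.sum()
        learners.append(clf)
        alphas.append(alpha)

    # Weighted vote over the ensemble.
    votes = np.zeros((len(Xte), K))
    for alpha, clf in zip(alphas, learners):
        votes[np.arange(len(Xte)), clf.predict(Xte)] += alpha
    print("ensemble accuracy:", (votes.argmax(axis=1) == yte).mean())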

    Variable selection and updating in model-based discriminant analysis for high dimensional data with food authenticity applications

    Food authenticity studies are concerned with determining whether food samples have been correctly labeled. Discriminant analysis methods are an integral part of the methodology for food authentication. Motivated by food authenticity applications, a model-based discriminant analysis method that includes variable selection is presented. The discriminant analysis model is fitted in a semi-supervised manner using both labeled and unlabeled data. The method is shown to give excellent classification performance on several high-dimensional multiclass food authenticity datasets with more variables than observations. The variables selected by the proposed method indicate which variables are meaningful for classification. A headlong search strategy for variable selection is shown to be computationally efficient and to achieve excellent classification performance. In applications to several food authenticity datasets, our proposed method outperformed default implementations of Random Forests, AdaBoost, transductive SVMs, and Bayesian multinomial regression by substantial margins.
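
    A rough sense of the headlong search strategy can be conveyed with a short sketch: candidate variables are swept in order, and a variable is accepted as soon as it improves the selection criterion by more than a small threshold rather than exhaustively comparing all candidates at each step. Here LinearDiscriminantAnalysis, cross-validated accuracy, and the wine dataset stand in for the authors' model-based discriminant analysis, its model-selection criterion, and the food authenticity data; all three are assumptions made for brevity.

    from sklearn.datasets import load_wine
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    X, y = load_wine(return_X_y=True)

    def score(cols):
        """Cross-validated accuracy of a Gaussian classifier on the chosen columns."""
        if not cols:
            return 0.0
        return cross_val_score(LinearDiscriminantAnalysis(), X[:, cols], y, cv=5).mean()

    selected, best, eps = [], 0.0, 1e-3
    for j in range(X.shape[1]):             # sweep candidate variables in order
        trial = score(selected + [j])
        if trial > best + eps:              # accept immediately if it helps
            selected.append(j)
            best = trial
    print("selected variables:", selected, "cv accuracy: %.3f" % best)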

    The Superiority of the Ensemble Classification Methods: A Comprehensive Review

    Modern technologies, characterized by cyber-physical systems and the internet of things, expose organizations to big data, which in turn can be processed to derive actionable knowledge. Machine learning techniques have been employed extensively in both supervised and unsupervised settings in an effort to develop systems capable of making sound decisions in light of past data. To enhance the accuracy of supervised learning algorithms, various classification-based ensemble methods have been developed. Herein, we review the superiority exhibited by ensemble learning algorithms, based on research carried out over the years. We then compare and discuss the common classification-based ensemble methods, with an emphasis on the boosting and bagging ensemble learning models. We conclude by setting out the superiority of ensemble learning models over individual base learners. Keywords: Ensemble, supervised learning, ensemble model, AdaBoost, bagging, randomization, boosting, strong learner, weak learner, classifier fusion, classifier selection, classifier combination. DOI: 10.7176/JIEA/9-5-05. Publication date: August 31st 2019
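
    As a quick illustration of the two ensemble families the review emphasizes, the sketch below compares scikit-learn's stock bagging and boosting implementations against a single weak learner on a small benchmark dataset. The dataset and hyperparameters are illustrative choices, not results reported in the review.

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)
    stump = DecisionTreeClassifier(max_depth=1)       # a deliberately weak base learner
    models = {
        "single stump": stump,
        "bagging": BaggingClassifier(stump, n_estimators=100, random_state=0),
        "AdaBoost": AdaBoostClassifier(stump, n_estimators=100, random_state=0),
    }
    for name, model in models.items():
        # 5-fold cross-validated accuracy; both ensembles should beat the single stump.
        print(name, round(cross_val_score(model, X, y, cv=5).mean(), 3))

    Bagging reduces variance by averaging learners trained on bootstrap resamples of the data, while boosting reweights training examples so that successive learners focus on the mistakes of their predecessors.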

    Proposing a new method of image classification based on the AdaBoost deep belief network hybrid method

    Image classification has many applications, and various algorithms have been presented for it, each with its own weaknesses and strengths. Reducing the error rate is an issue on which much research has been carried out. This research aims to address the problem using hybrid methods and deep learning. Hybrid methods were developed to improve on the results of single-component methods. A deep belief network (DBN), on the other hand, is a generative probabilistic model with multiple layers of latent variables that is used to solve unlabeled problems; it is an unsupervised method in which all layers are one-way directed layers except for the last. The goal of this project was to combine the AdaBoost method with the deep belief network to classify images and to obtain better results than previously reported. A combination of the deep belief network and the AdaBoost method was used to boost learning, and the network's potential was enhanced by making the entire network recursive. The method was tested on the MNIST dataset, and the results indicate a decrease in the error rate for the proposed method compared to the AdaBoost and deep belief network methods.
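
    A lightweight stand-in for the proposed hybrid can be sketched by stacking an unsupervised BernoulliRBM feature layer (a single-layer proxy for a deep belief network, which would stack several such layers) under an AdaBoost classifier. The digits dataset replaces MNIST, and all hyperparameters are illustrative assumptions rather than the paper's settings.

    from sklearn.datasets import load_digits
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import BernoulliRBM
    from sklearn.pipeline import make_pipeline
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_digits(return_X_y=True)
    X = X / 16.0                                  # BernoulliRBM expects inputs in [0, 1]
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

    model = make_pipeline(
        # Unsupervised feature learning, then boosted classification on top.
        BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0),
        AdaBoostClassifier(DecisionTreeClassifier(max_depth=2),
                           n_estimators=200, random_state=0),
    )
    model.fit(Xtr, ytr)
    print("test accuracy:", model.score(Xte, yte))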