58 research outputs found

    Ensemble deep learning: A review

    Ensemble learning combines several individual models to obtain better generalization performance. Currently, deep learning models with multilayer processing architectures are showing better performance than shallow or traditional classification models. Deep ensemble learning models combine the advantages of both deep learning and ensemble learning, so that the final model achieves better generalization performance. This paper reviews state-of-the-art deep ensemble models and thus serves as an extensive summary for researchers. The models are broadly categorised into bagging, boosting and stacking ensembles; negative-correlation-based deep ensembles; explicit/implicit ensembles; homogeneous/heterogeneous ensembles; decision fusion strategies; and unsupervised, semi-supervised, reinforcement learning, online/incremental, and multilabel deep ensemble models. Applications of deep ensemble models in different domains are also briefly discussed. Finally, we conclude the paper with some future recommendations and research directions.
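
    As a toy illustration of the bagging-style homogeneous ensembles the review categorises, the sketch below trains several small MLPs on bootstrap samples of synthetic data and fuses their predictions by majority vote. This is a minimal sketch, not a method from the paper; the dataset, network sizes, and ensemble size are all illustrative assumptions.

```python
# Minimal sketch: a homogeneous deep ensemble via bagging.
# Several small MLPs are trained on bootstrap resamples and their
# votes are fused at prediction time (a simple decision fusion).
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Illustrative synthetic data (not from the paper).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Base learner: a small feed-forward network.
base = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)

# Bagging: each of the 10 networks sees a different bootstrap sample;
# predict() fuses their votes by majority.
ensemble = BaggingClassifier(base, n_estimators=10, random_state=0)
ensemble.fit(X_tr, y_tr)

print("single MLP :", base.fit(X_tr, y_tr).score(X_te, y_te))
print("bagged MLPs:", ensemble.score(X_te, y_te))
```

    On noisy data the bagged networks typically generalize somewhat better than a single network, which is the basic motivation for the ensemble strategies the paper surveys.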

    ANN-MIND : dropout for neural network training with missing data

    M.Sc. (Computer Science). Abstract: It is a well-known fact that the quality of a dataset plays a central role in the results and conclusions drawn from its analysis; as the saying goes, "garbage in, garbage out". In recent years, neural networks have displayed good performance in solving a diverse range of problems. Unfortunately, neural networks are not immune to the misfortune presented by missing values. Furthermore, in most real-world settings, the only data available for training neural networks often contains missing values. In such cases, we are left with little choice but to use this data for training, although doing so may result in a poorly trained neural network. Most systems currently in use merely discard observations with missing values from the training dataset, while others proceed to use the data as-is and ignore the problems presented by the missing values. Still other approaches impute the missing values with fixed statistics such as the mean or mode. Most neural network models work under the assumption that the supplied data contains no missing values. This dissertation explores a method for training neural networks when the training dataset contains missing values.
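
    The dissertation's exact method is not given in this abstract; the sketch below only illustrates the general connection it draws between dropout and missing data: missing inputs can be zero-masked, mirroring dropout on the input layer, so the network learns to predict from incomplete feature vectors. The synthetic data, the 20% missingness rate, and the zero-masking scheme are all assumptions for illustration.

```python
# Minimal sketch (not the dissertation's method): train a network on
# inputs where entries are randomly zeroed out, analogous to dropout
# applied at the input layer, to simulate learning from missing data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def zero_mask(X, p, rng):
    """Zero out each entry independently with probability p,
    simulating values missing completely at random (dropout-style)."""
    return np.where(rng.random(X.shape) < p, 0.0, X)

# Train on masked data so the network repeatedly sees "missing" inputs.
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
net.fit(zero_mask(X_tr, 0.2, rng), y_tr)

# Evaluate on a test set with the same missingness rate.
print("accuracy with 20% missing inputs:",
      net.score(zero_mask(X_te, 0.2, rng), y_te))
```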

    17th SC@RUG 2020 proceedings 2019-2020