
    Digit Image Recognition Using an Ensemble of One-Versus-All Deep Network Classifiers

    In multiclass deep network classifiers, the burden of classifying samples of all classes is placed on a single classifier, so the optimum classification accuracy is not obtained. Training times are also large because the CNN is trained on a single CPU/GPU. However, it is known that using ensembles of classifiers increases performance, and that training time can be reduced by running each member of the ensemble on a separate processor. Ensemble learning has been used with traditional methods to varying extents and remains an active topic; with the advent of deep learning, it has been applied to deep networks as well. One area that is unexplored and has potential, however, is One-Versus-All (OVA) deep ensemble learning. In this paper we explore it and show that the classification capability of deep networks can be further increased by using an ensemble of binary-classification (OVA) deep networks. We implement a novel technique for digit image recognition and evaluate it on that task. In the proposed approach, a single OVA deep network classifier is dedicated to each category, and ensembles of such OVA networks are then investigated. Every network in an ensemble is trained by an OVA training technique using the Stochastic Gradient Descent with Momentum Algorithm (SGDMA). To classify a test sample, the sample is presented to each network in the ensemble; after prediction-score voting, the network with the largest score is taken to have classified the sample. Experiments on the MNIST digit dataset, the USPS+ digit dataset, and the MATLAB digit image dataset show that the proposed technique outperforms the baseline on digit image recognition for all datasets.

    Comment: ICTCS 2020 Camera Ready Paper
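
    The decision rule is simple enough to sketch. Below is a minimal Python sketch, not the paper's implementation: the ova_labels and predict_ova names and the callable-network interface are assumptions for illustration. Each class gets a dedicated network trained on relabeled (one-versus-all) targets; at test time every network scores the sample and the class whose network produces the largest score wins.

    import numpy as np

    def ova_labels(y, target_class):
        # Relabel a multiclass label vector for one-versus-all training:
        # 1 for the network's dedicated class, 0 for every other class.
        return (np.asarray(y) == target_class).astype(np.int64)

    def predict_ova(networks, x):
        # networks[k] is any callable mapping a sample to a confidence
        # score that x belongs to class k (in the paper, a deep network
        # trained with SGDMA). Prediction-score voting: the network with
        # the largest score is taken to have classified the sample.
        scores = np.array([net(x) for net in networks])
        return int(np.argmax(scores))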

    Deep Q-Network Based Multi-agent Reinforcement Learning with Binary Action Agents

    Deep Q-Network (DQN) based multi-agent systems (MAS) for reinforcement learning (RL) use various schemes wherein the agents have to learn and communicate. Learning, however, is specific to each agent, and communication between agents must be suitably designed. As more complex Deep Q-Networks come to the fore, the overall complexity of the multi-agent system increases, leading to issues such as difficulty in training, the need for more resources and longer training times, and difficulty in fine-tuning. To address these issues we propose a simple but efficient DQN-based MAS for RL that uses shared state and rewards, but agent-specific actions, to update the experience replay pool of the DQNs, where each agent is a DQN. The benefits of the approach are overall simplicity, faster convergence, and better performance compared to conventional DQN-based approaches. The method can be extended to any DQN; we use a simple DQN and a DDQN (Double Q-learning) on three separate tasks: CartPole-v1 (OpenAI Gym environment), LunarLander-v2 (OpenAI Gym environment), and Maze Traversal (a customized environment). The proposed approach outperforms the baseline on these tasks by decent margins.
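
    A minimal sketch of the replay update described above, under assumed names (ReplayBuffer, step_and_store, an agents list with an act(state) method, and an env_step callable are all illustrative, not the authors' code): every agent observes the same state, reward, next state, and done flag, but stores its own binary action in its own replay pool.

    import random
    from collections import deque

    class ReplayBuffer:
        # Per-agent experience replay pool; each agent is its own DQN.
        def __init__(self, capacity=10_000):
            self.buffer = deque(maxlen=capacity)

        def push(self, state, action, reward, next_state, done):
            self.buffer.append((state, action, reward, next_state, done))

        def sample(self, batch_size):
            return random.sample(self.buffer, batch_size)

    def step_and_store(env_step, agents, buffers, state):
        # Agent-specific actions: each agent picks its own binary action.
        actions = [agent.act(state) for agent in agents]
        # Shared outcome: one environment step driven by the joint action.
        next_state, reward, done = env_step(actions)
        for i, action in enumerate(actions):
            # Shared state and reward, agent-specific action, per-agent pool.
            buffers[i].push(state, action, reward, next_state, done)
        return next_state, done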