38 research outputs found

    Brain Abnormality Detection by Deep Convolutional Neural Network

    In this paper, we describe our method for classifying brain magnetic resonance (MR) images into several abnormality classes and a healthy class using a deep neural network. The method detects high- and low-grade glioma, multiple sclerosis, and Alzheimer's disease, as well as healthy cases. Our network architecture has ten learning layers: seven convolutional layers and three fully connected layers. We achieved promising results on this five-category brain image classification task, with 95.7% accuracy. Comment: Accepted for presentation at ACM-womENcourage_201
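
The described stack (seven convolutional layers followed by three fully connected layers ending in five classes) can be sketched as a shape trace. The kernel sizes, strides, channel counts, and input resolution below are illustrative assumptions, not the authors' exact design:

```python
# Hypothetical sketch of a 10-learning-layer network (7 conv + 3 FC)
# for 5-class brain MR classification. All sizes are assumptions.

def conv_out(size, kernel, stride=1, pad=0):
    """Spatial output size of a convolutional layer."""
    return (size + 2 * pad - kernel) // stride + 1

size = 128                      # assumed input resolution of the MR slice
channels = 1                    # single-channel MR image
conv_layers = [(32, 3), (32, 3), (64, 3), (64, 3),
               (128, 3), (128, 3), (256, 3)]       # (out_channels, kernel)
for out_ch, k in conv_layers:
    size = conv_out(size, k, stride=2, pad=1)      # stride-2 convs downsample
    channels = out_ch

flat = channels * size * size                      # flattened feature vector
fc_layers = [flat, 512, 128, 5]                    # three FC layers -> 5 classes
# the 5 outputs: high-grade glioma, low-grade glioma, MS, Alzheimer's, healthy
```

With these assumed strides, the 128-pixel input collapses to a 1x1x256 feature map before the fully connected classifier.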

    STN-OCR: A single Neural Network for Text Detection and Text Recognition

    Detecting and recognizing text in natural scene images is a challenging, yet not completely solved, task. In recent years, several new systems that try to solve at least one of the two sub-tasks (text detection and text recognition) have been proposed. In this paper, we present STN-OCR, a step towards semi-supervised neural networks for scene text recognition that can be optimized end-to-end. In contrast to most existing works, which consist of multiple deep neural networks and several pre-processing steps, we propose to use a single deep neural network that learns to detect and recognize text from natural images in a semi-supervised way. STN-OCR integrates and jointly learns a spatial transformer network, which learns to detect text regions in an image, and a text recognition network that takes the identified text regions and recognizes their textual content. We investigate how our model behaves on a range of different tasks (detection and recognition of characters and lines of text). Experimental results on public benchmark datasets show the ability of our model to handle a variety of different tasks without substantial changes in its overall network structure. Comment: 9 pages, 6 figures
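
The spatial-transformer component at the heart of such a system resamples an image region given a predicted affine transform. A minimal numpy sketch of that sampling step (nearest-neighbour for brevity; the actual network uses differentiable bilinear sampling with learned transform parameters):

```python
import numpy as np

def affine_grid_sample(img, theta, out_h, out_w):
    """Sample a region of `img` given a 2x3 affine matrix `theta` in
    normalized [-1, 1] coordinates (nearest-neighbour for brevity)."""
    h, w = img.shape
    ys, xs = np.meshgrid(np.linspace(-1, 1, out_h),
                         np.linspace(-1, 1, out_w), indexing="ij")
    coords = np.stack([xs, ys, np.ones_like(xs)], axis=-1)   # (H, W, 3)
    src = coords @ theta.T                                   # (H, W, 2)
    px = np.clip(((src[..., 0] + 1) / 2 * (w - 1)).round().astype(int), 0, w - 1)
    py = np.clip(((src[..., 1] + 1) / 2 * (h - 1)).round().astype(int), 0, h - 1)
    return img[py, px]

img = np.arange(16, dtype=float).reshape(4, 4)
theta = np.array([[1.0, 0.0, 0.0],      # identity transform:
                  [0.0, 1.0, 0.0]])     # the sampled crop equals the input
crop = affine_grid_sample(img, theta, 4, 4)
```

In the full model, a localization network predicts `theta` per text region, so detection becomes learning where to sample.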

    Deep Learning for Medical Image Analysis

    This report describes my research activities at the Hasso Plattner Institute and summarizes my Ph.D. plan and several novel, end-to-end trainable approaches for analyzing medical images using deep learning algorithms. In this report, as an example, we explore different novel methods based on deep learning for brain abnormality detection, recognition, and segmentation. This report was prepared for the doctoral consortium at the AIME-2017 conference. Comment: Presented in the doctoral consortium at the AIME-2017 conference

    Conditional Generative Refinement Adversarial Networks for Unbalanced Medical Image Semantic Segmentation

    We propose a new generative adversarial architecture to mitigate the imbalanced data problem in medical image semantic segmentation, where the majority of pixels belong to a healthy region and few belong to a lesion or non-healthy region. A model trained with imbalanced data tends to be biased towards healthy data, which is not desired in clinical applications, and the outputs predicted by such networks have high precision but low sensitivity. We propose a new conditional generative refinement network with three components: a generative, a discriminative, and a refinement network, to mitigate the imbalanced data problem through ensemble learning. The generative network learns to segment at the pixel level by receiving feedback from the discriminative network according to the true positive and true negative maps. The refinement network, in turn, learns to predict the false positive and false negative masks produced by the generative network, which is of significant value, especially in medical applications. The final semantic segmentation masks are then composed from the outputs of the three networks. The proposed architecture shows state-of-the-art results on LiTS-2017 for liver lesion segmentation and on two microscopic cell segmentation datasets, MDA231 and PhC-HeLa. We have achieved competitive results on BraTS-2017 for brain tumour segmentation.
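
A minimal sketch of the final composition step described above, assuming (hypothetically) that the refinement network emits binary false-positive and false-negative maps that correct the generator's mask; the actual composition rule may differ:

```python
import numpy as np

# Generator's raw segmentation mask for a toy 2x4 image.
gen_mask = np.array([[1, 1, 0, 0],
                     [0, 1, 1, 0]], dtype=bool)
# Refinement network's predictions of where the generator erred.
pred_fp = np.array([[0, 1, 0, 0],
                    [0, 0, 0, 0]], dtype=bool)    # predicted false positives
pred_fn = np.array([[0, 0, 0, 1],
                    [0, 0, 0, 0]], dtype=bool)    # predicted false negatives

# Compose: drop predicted false positives, add back predicted false negatives.
final = (gen_mask & ~pred_fp) | pred_fn
```

This makes the ensemble idea concrete: the refinement network only has to model the generator's residual errors, not the whole segmentation.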

    SEE: Towards Semi-Supervised End-to-End Scene Text Recognition

    Detecting and recognizing text in natural scene images is a challenging, yet not completely solved, task. In recent years, several new systems that try to solve at least one of the two sub-tasks (text detection and text recognition) have been proposed. In this paper, we present SEE, a step towards semi-supervised neural networks for scene text detection and recognition that can be optimized end-to-end. Most existing works consist of multiple deep neural networks and several pre-processing steps. In contrast, we propose to use a single deep neural network that learns to detect and recognize text from natural images in a semi-supervised way. SEE integrates and jointly learns a spatial transformer network, which can learn to detect text regions in an image, and a text recognition network that takes the identified text regions and recognizes their textual content. We introduce the idea behind our novel approach and show its feasibility by performing a range of experiments on standard benchmark datasets, where we achieve competitive results. Comment: AAAI-18. arXiv admin note: substantial text overlap with arXiv:1707.0883

    Learning to Train a Binary Neural Network

    Convolutional neural networks have achieved astonishing results in different application areas. Various methods that allow us to use these models on mobile and embedded devices have been proposed. Binary neural networks in particular seem to be a promising approach for devices with low computational power. However, understanding binary neural networks and training accurate models for practical applications remains a challenge. In our work, we focus on increasing our understanding of the training process and making it accessible to everyone. We publish our code and models, based on BMXNet, for everyone to use. Within this framework, we systematically evaluated different network architectures and hyperparameters to provide useful insights on how to train a binary neural network. Further, we present how we improved accuracy by increasing the number of connections in the network. Comment: Code: https://github.com/Jopyth/BMXNet

    BMXNet: An Open-Source Binary Neural Network Implementation Based on MXNet

    Binary Neural Networks (BNNs) can drastically reduce memory size and accesses by applying bit-wise operations instead of standard arithmetic operations. They can therefore significantly improve efficiency and lower energy consumption at runtime, which enables the application of state-of-the-art deep learning models on low-power devices. BMXNet is an open-source BNN library based on MXNet that supports both XNOR-Networks and Quantized Neural Networks. The developed BNN layers can be seamlessly applied with other standard library components and work in both GPU and CPU mode. BMXNet is maintained and developed by the multimedia research group at Hasso Plattner Institute and released under the Apache license. Extensive experiments validate the efficiency and effectiveness of our implementation. The BMXNet library, several sample projects, and a collection of pre-trained binary deep models are available for download at https://github.com/hpi-xnor Comment: 4 pages
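
The bit-wise trick that BNN libraries such as BMXNet exploit can be illustrated in a few lines: after sign binarization, a dot product reduces to XNOR plus popcount. This is a conceptual sketch of the idea, not BMXNet code:

```python
def binarize(v):
    """Sign binarization: map real weights/activations to {-1, +1}."""
    return [1 if x >= 0 else -1 for x in v]

def xnor_dot(a_bits, b_bits):
    """Dot product of two {-1,+1} vectors via XNOR + popcount:
    matches = popcount(XNOR(a, b)); dot = 2 * matches - n.
    In hardware, one XNOR + popcount replaces n multiply-adds."""
    n = len(a_bits)
    matches = sum(1 for x, y in zip(a_bits, b_bits) if x == y)
    return 2 * matches - n

a = binarize([0.5, -1.2, 0.1, -0.3])   # -> [ 1, -1,  1, -1]
b = binarize([0.9,  0.4, -2.0, -0.7])  # -> [ 1,  1, -1, -1]
# xnor_dot(a, b) equals the ordinary dot product of the binarized vectors
```

Since each value needs only one bit of storage, a 32x memory reduction over float32 weights follows directly.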

    Language Identification Using Deep Convolutional Recurrent Neural Networks

    Language Identification (LID) systems are used to classify the spoken language from a given audio sample and are typically the first step for many spoken language processing tasks, such as Automatic Speech Recognition (ASR) systems. Without automatic language detection, speech utterances cannot be parsed correctly and grammar rules cannot be applied, causing subsequent speech recognition steps to fail. We propose a LID system that solves the problem in the image domain rather than the audio domain. We use a hybrid Convolutional Recurrent Neural Network (CRNN) that operates on spectrogram images of the provided audio snippets. In extensive experiments, we show that our model is applicable to a range of noisy scenarios and can easily be extended to previously unknown languages, while maintaining its classification accuracy. We release our code and a large-scale training set for LID systems to the community. Comment: to be presented at ICONIP 201
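
The preprocessing idea (moving audio into the image domain) amounts to computing a spectrogram and treating it as the CRNN's input image. A minimal numpy sketch with assumed frame parameters; the paper's exact settings may differ:

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Magnitude spectrogram: the 'image' a CRNN would consume.
    Windowed frames -> per-frame FFT -> (freq_bins, time_steps) array."""
    frames = [signal[i:i + frame_len] * np.hanning(frame_len)
              for i in range(0, len(signal) - frame_len + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)).T

sr = 8000                               # assumed sample rate
t = np.arange(sr) / sr
audio = np.sin(2 * np.pi * 440 * t)     # 1-second test tone
spec = spectrogram(audio)
# spec is a 2-D image: convolutional layers extract per-column features,
# which the recurrent layers then read as a sequence over time
```

With these parameters the 1-second clip becomes a 129x61 image (129 frequency bins from a 256-point real FFT, 61 hops).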

    microbatchGAN: Stimulating Diversity with Multi-Adversarial Discrimination

    We propose to tackle the mode collapse problem in generative adversarial networks (GANs) by using multiple discriminators and assigning a different portion of each minibatch, called a microbatch, to each discriminator. We gradually change each discriminator's task from distinguishing between real and fake samples to discriminating samples coming from inside or outside its assigned microbatch by using a diversity parameter α. The generator is then forced to promote variety in each minibatch to make the microbatch discrimination harder to achieve for each discriminator. Thus, all models in our framework benefit from having variety in the generated set to reduce their respective losses. We show evidence that our solution promotes sample diversity from the early training stages on multiple datasets. Comment: WACV 202
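
The microbatch assignment itself is simple to sketch: each minibatch is split into disjoint slices, one per discriminator. The helper below is a hypothetical illustration, not the authors' code:

```python
def microbatches(batch, num_discriminators):
    """Split a minibatch into equal disjoint microbatches,
    one per discriminator."""
    size = len(batch) // num_discriminators
    return [batch[i * size:(i + 1) * size]
            for i in range(num_discriminators)]

batch = list(range(12))          # a minibatch of 12 generated samples
parts = microbatches(batch, 3)   # 3 discriminators -> 3 microbatches of 4
# With the diversity parameter α > 0, discriminator i is partly trained to
# tell samples in parts[i] apart from samples in the other microbatches,
# which is only hard for it when the generator produces varied samples.
```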

    Multi-Task Generative Adversarial Network for Handling Imbalanced Clinical Data

    We propose a new generative adversarial architecture to mitigate the imbalanced data problem in medical image semantic segmentation, where the majority of pixels belong to a healthy region and few belong to a lesion or non-healthy region. A model trained with imbalanced data tends to be biased towards healthy data, which is not desired in clinical applications. We design a new conditional GAN with two components: a generative model and a discriminative model, to mitigate the imbalanced data problem through a selective weighted loss. While the generator is trained on sequential magnetic resonance images (MRI) to learn semantic segmentation and disease classification, the discriminator classifies whether a generated output is real or fake. The proposed architecture achieved state-of-the-art results on ACDC-2017 for cardiac segmentation and disease classification. We have achieved competitive results on BraTS-2017 for brain tumor segmentation and brain disease classification. Comment: Machine Learning for Health (ML4H) Workshop at NeurIPS 2018 arXiv:1811.07216. arXiv admin note: text overlap with arXiv:1810.0387
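
One simple form of a selective weighted loss is a pixel-wise cross-entropy in which lesion pixels carry a larger weight, so that missing a rare lesion pixel costs more than misjudging an abundant healthy one. The weight value and exact formulation below are illustrative assumptions, not the paper's definition:

```python
import numpy as np

def weighted_bce(pred, target, lesion_weight=10.0):
    """Pixel-wise binary cross-entropy with a higher weight on lesion
    pixels (target == 1); the weight value is an assumption."""
    eps = 1e-7
    w = np.where(target == 1, lesion_weight, 1.0)
    loss = -(target * np.log(pred + eps)
             + (1 - target) * np.log(1 - pred + eps))
    return float(np.mean(w * loss))

target = np.array([0.0, 0.0, 0.0, 1.0])   # mostly healthy, one lesion pixel
miss = weighted_bce(np.array([0.1, 0.1, 0.1, 0.1]), target)  # lesion missed
hit  = weighted_bce(np.array([0.1, 0.1, 0.1, 0.9]), target)  # lesion found
# `miss` is much larger than `hit`: the weighting counteracts the bias
# towards predicting everything as healthy
```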