
    Automatic Glaucoma Detection by Using Funduscopic Images

    This paper describes an automatic system that identifies glaucoma from funduscopic images using digital image processing. Glaucoma is caused by increased pressure in the eye, which damages the optic nerve; it tends to progress silently and may not show symptoms until its final stage. With this system, doctors can quickly assess a patient's condition and begin treatment, and people in rural areas can also benefit from it. The system identifies glaucoma from the cup-to-disc ratio (CDR) and the orientation of the blood vessels. The optic disc's inner circle (cup) and outer circle (disc) are extracted using the maximum and average grey-level pixels respectively, with the help of a histogram. Contours are then found, the circles best fitting those contours are drawn, and the radii of the cup and disc are obtained. After the CDR is calculated, an image is flagged as abnormal if the CDR exceeds a given threshold value; otherwise it is classified as normal. The system also extracts the blood vessels and uses their orientation to identify glaucoma.
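
    As a rough illustration of the CDR computation described above, the following Python/OpenCV sketch thresholds a cropped optic-disc image near the mean and maximum grey levels, fits circles to the resulting contours, and flags the image when the radius ratio exceeds a threshold. The specific threshold values, the 0.6 CDR cut-off, and the assumption of a pre-cropped disc region are illustrative, not taken from the paper.

```python
import cv2

def cup_to_disc_ratio(disc_region_gray, cdr_threshold=0.6):
    """Estimate the cup-to-disc ratio from a cropped optic-disc image (sketch).

    The disc is segmented near the average grey level and the cup near the
    maximum grey level, as the abstract describes; the exact threshold
    choices below are illustrative assumptions.
    """
    # Disc: pixels brighter than the mean grey level of the region.
    _, disc_mask = cv2.threshold(disc_region_gray, int(disc_region_gray.mean()),
                                 255, cv2.THRESH_BINARY)
    # Cup: pixels near the maximum grey level (brightest core of the disc).
    _, cup_mask = cv2.threshold(disc_region_gray, int(0.9 * disc_region_gray.max()),
                                255, cv2.THRESH_BINARY)

    def best_fit_radius(mask):
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return 0.0
        largest = max(contours, key=cv2.contourArea)
        _, radius = cv2.minEnclosingCircle(largest)  # circle best fitting the contour
        return radius

    disc_r = best_fit_radius(disc_mask)
    cup_r = best_fit_radius(cup_mask)
    cdr = cup_r / disc_r if disc_r > 0 else 0.0
    return cdr, cdr > cdr_threshold  # abnormal if the CDR exceeds the threshold
```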

    Automatic number plate recognition in low quality videos

    A typical Automatic Number Plate Recognition (ANPR) system uses high-resolution cameras to acquire good-quality images of passing vehicles. In these images, license plates are localized and characters are segmented and recognized to determine the identity of the vehicles. However, the steps in this workflow fail to produce the expected results on low-resolution images and in less constrained environments. In this work, several improvements are therefore made to the ANPR workflow, incorporating intelligent heuristics, image processing techniques, and domain knowledge to build an ANPR system capable of identifying vehicles even in low-resolution video frames. The main advantages of our system are that it operates in real time, does not rely on special hardware, and is not constrained by environmental conditions. Low-quality surveillance video acquired from a toll system is used to evaluate the performance of our system, and we obtain more than 90% plate-level recognition accuracy. The experiments with this dataset show that the system is robust to variations in illumination, viewpoint, and scale.
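
    The paper's low-resolution heuristics are not detailed in this abstract, but the first stage of the generic workflow it improves on, plate localization, can be sketched as below; the filter bounds and parameter values are illustrative assumptions, and character segmentation and recognition would follow on each returned region.

```python
import cv2

def localize_plate_candidates(frame_bgr, min_area=400):
    """Find candidate license-plate regions in a video frame (generic sketch).

    Edges -> contours -> aspect-ratio filter; the bounds below are
    illustrative, not the paper's tuned heuristics.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.bilateralFilter(gray, 11, 17, 17)  # denoise while keeping edges
    edges = cv2.Canny(gray, 30, 200)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        aspect = w / float(h)
        # Plates are wide rectangles; accept only plate-like boxes.
        if w * h >= min_area and 2.0 <= aspect <= 6.0:
            candidates.append((x, y, w, h))
    return candidates
```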

    SNIP: single-shot network pruning based on connection sensitivity

    Pruning large neural networks while maintaining their performance is often desirable due to the reduced space and time complexity. In existing methods, pruning is done within an iterative optimization procedure with either heuristically designed pruning schedules or additional hyperparameters, undermining their utility. In this work, we present a new approach that prunes a given network once at initialization, prior to training. To achieve this, we introduce a saliency criterion based on connection sensitivity that identifies structurally important connections in the network for the given task. This eliminates the need for both pretraining and the complex pruning schedule while making the method robust to architecture variations. After pruning, the sparse network is trained in the standard way. Our method obtains extremely sparse networks with virtually the same accuracy as the reference network on the MNIST, CIFAR-10, and Tiny-ImageNet classification tasks and is broadly applicable to various architectures including convolutional, residual, and recurrent networks. Unlike existing methods, our approach enables us to demonstrate that the retained connections are indeed relevant to the given task.
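
    A minimal PyTorch sketch of the connection-sensitivity criterion follows: the saliency of a connection is the magnitude of the loss gradient with respect to a multiplicative mask on its weight, evaluated at initialization, which reduces to |gradient x weight|; the single mini-batch and the uniform treatment of all parameters are simplifying assumptions here.

```python
import torch

def snip_masks(model, loss_fn, inputs, targets, sparsity=0.9):
    """Single-shot pruning masks at initialization (sketch of the SNIP idea)."""
    model.zero_grad()
    loss_fn(model(inputs), targets).backward()

    # Connection sensitivity |dL/dw_j * w_j| per parameter tensor.
    saliencies = {name: (p.grad * p).abs()
                  for name, p in model.named_parameters() if p.grad is not None}

    # Global threshold: keep the most sensitive (1 - sparsity) fraction.
    all_scores = torch.cat([s.flatten() for s in saliencies.values()])
    k = max(1, int((1.0 - sparsity) * all_scores.numel()))
    threshold = torch.topk(all_scores, k).values[-1]

    return {name: (s >= threshold).float() for name, s in saliencies.items()}
```

    During the subsequent standard training, each parameter tensor would be multiplied by its mask so that pruned connections stay at zero.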

    Data parallelism in training sparse neural networks

    Network pruning is an effective methodology for compressing large neural networks, and the sparse neural networks obtained by pruning benefit from reduced memory and computational costs when deployed. Notably, recent advances have shown that it is possible to find a trainable sparse neural network at random initialization, prior to training; the obtained sparse network then only needs to be trained. While this approach of pruning at initialization has turned out to be highly effective, little has been studied about the training aspects of these sparse neural networks. In this work, we focus on measuring the effects of data parallelism on training sparse neural networks. We find that data parallelism in training sparse neural networks is no worse than in training densely parameterized neural networks, despite the general difficulty of training sparse neural networks. When training sparse networks using SGD with momentum, the breakdown of the perfect scaling regime occurs much later than it does for dense networks at large batch sizes.
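
    A minimal sketch of one fixed-mask sparse training step of the kind studied here, assuming 0/1 masks from a pruning-at-initialization method; re-zeroing pruned weights after each optimizer update is one common way to keep the network sparse, not necessarily the paper's exact implementation.

```python
import torch

def sparse_train_step(model, masks, optimizer, loss_fn, inputs, targets):
    """One SGD(+momentum) step on a pruned network with a fixed sparsity mask."""
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()

    # Keep pruned connections at zero; momentum buffers are left untouched
    # in this simple variant.
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in masks:
                p.mul_(masks[name])
    return loss.item()
```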

    Riemannian walk for incremental learning: Understanding forgetting and intransigence

    Incremental learning (IL) has received a lot of attention recently; however, the literature lacks a precise problem definition, proper evaluation settings, and metrics tailored specifically to the IL problem. One of the main objectives of this work is to fill these gaps and provide a common ground for a better understanding of IL. The main challenge for an IL algorithm is to update the classifier whilst preserving existing knowledge. We observe that, in addition to forgetting, a known issue with preserving knowledge, IL also suffers from a problem we call intransigence: its inability to update knowledge. We introduce two metrics to quantify forgetting and intransigence that allow us to understand, analyse, and gain better insights into the behaviour of IL algorithms. Furthermore, we present RWalk, a generalization of EWC++ (our efficient version of EWC [6]) and Path Integral [25], with a theoretically grounded KL-divergence-based perspective. We provide a thorough analysis of various IL algorithms on the MNIST and CIFAR-100 datasets. In these experiments, RWalk obtains superior results in terms of accuracy and also provides a better trade-off between forgetting and intransigence.
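
    The two metrics can be sketched roughly as follows, assuming a full accuracy matrix acc[k][j] (accuracy on task j after training through task k) and per-task reference accuracies; the definitions and averaging below follow the abstract's high-level description, not the paper's exact equations.

```python
import numpy as np

def forgetting_and_intransigence(acc, ref_acc):
    """Average forgetting and intransigence from an accuracy matrix (sketch).

    acc[k, j]: accuracy on task j after training through task k.
    ref_acc[k]: accuracy of a reference model on task k (e.g. jointly trained).
    """
    acc = np.asarray(acc, dtype=float)
    T = acc.shape[0]

    # Forgetting on task j: best accuracy ever achieved on it minus the
    # accuracy after the final task.
    forgetting = [acc[:T - 1, j].max() - acc[T - 1, j] for j in range(T - 1)]

    # Intransigence on task k: what the reference achieves minus what
    # incremental training achieved right after learning task k.
    intransigence = [ref_acc[k] - acc[k, k] for k in range(T)]

    return float(np.mean(forgetting)), float(np.mean(intransigence))
```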

    Understanding the effects of data parallelism and sparsity on neural network training

    We study two factors in neural network training: data parallelism and sparsity. Here, data parallelism means processing training data in parallel using distributed systems (or, equivalently, increasing the batch size) so that training can be accelerated; by sparsity, we refer to pruning parameters in a neural network model so as to reduce computational and memory cost. Despite their promising benefits, understanding of their effects on neural network training remains elusive. In this work, we first measure these effects rigorously by conducting extensive experiments while tuning all metaparameters involved in the optimization. Across various workloads of dataset, network model, and optimization algorithm, we find that, for data parallelism, there exists a general scaling trend between batch size and the number of training steps to convergence, and further that sparsity increases the difficulty of training. We then develop a theoretical analysis based on the convergence properties of stochastic gradient methods and the smoothness of the optimization landscape, which illustrates the observed phenomena precisely and generally, establishing a better account of the effects of data parallelism and sparsity on neural network training.
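
    The data-parallelism measurement reduces to counting training steps until a target validation error is first reached at each batch size; a minimal sketch, with hypothetical train_step and eval_error callables, is shown below. Running it for each batch size (with metaparameters re-tuned per batch size, as the abstract stresses) traces the scaling curve.

```python
def steps_to_target(train_step, eval_error, target_error, max_steps=100_000):
    """Count training steps until validation error first reaches a target."""
    for step in range(1, max_steps + 1):
        train_step()  # one optimizer update at the chosen batch size
        if eval_error() <= target_error:
            return step
    return max_steps  # target not reached within the budget
```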

    Proximal mean-field for neural network quantization

    Compressing large Neural Networks (NN) by quantizing their parameters while maintaining performance is highly desirable due to the reduced memory and time complexity. In this work, we cast NN quantization as a discrete labelling problem and, by examining relaxations, design an efficient iterative optimization procedure that involves stochastic gradient descent followed by a projection. We prove that our simple projected gradient descent approach is, in fact, equivalent to a proximal version of the well-known mean-field method. These findings would allow the decades-old and theoretically grounded research on MRF optimization to be used to design better network quantization schemes. Our experiments on standard classification datasets (MNIST, CIFAR10/100, TinyImageNet) with convolutional and residual architectures show that our algorithm obtains fully-quantized networks with accuracies very close to those of the floating-point reference networks.
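
    A minimal sketch of the generic "gradient step followed by a projection" structure the abstract describes, using a nearest-level projection onto a discrete set of values; this illustrates plain projected gradient descent, not the paper's proximal mean-field relaxation, and the binary levels are an illustrative choice.

```python
import torch

def project_to_levels(w, levels):
    """Project each weight to its nearest quantization level."""
    idx = (w.unsqueeze(-1) - levels).abs().argmin(dim=-1)
    return levels[idx]

def projected_sgd_step(model, optimizer, loss_fn, inputs, targets, levels):
    """One stochastic gradient step followed by projection onto the levels."""
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()

    with torch.no_grad():
        for p in model.parameters():
            p.copy_(project_to_levels(p, levels.to(p.device)))
    return loss.item()

# Example: binary levels, as in a fully binarized network.
levels = torch.tensor([-1.0, 1.0])
```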