
    Automatic Screening and Classification of Diabetic Retinopathy Eye Fundus Image

    Diabetic Retinopathy (DR) is a disorder of the retinal vasculature. It develops to some degree in nearly all patients with long-standing diabetes mellitus and can result in blindness. Screening for DR is essential for both early detection and early treatment. This thesis aims to investigate automatic methods for diabetic retinopathy detection and subsequently to develop an effective system for the detection and screening of diabetic retinopathy. The presented research involves three development stages. Firstly, the thesis presents the development of a preliminary classification and screening system for diabetic retinopathy using eye fundus images. The research then focuses on the detection of the earliest signs of diabetic retinopathy, the microaneurysms. The detection of microaneurysms at an early stage is vital and is the first step in preventing diabetic retinopathy. Finally, the thesis presents decision support systems for the detection of diabetic retinopathy and maculopathy in eye fundus images. The detection of maculopathy, which manifests as yellow lesions near the macula, is essential, as an affected macula that is not treated in time will eventually cause loss of vision. Accurate retinal screening is therefore required to assist retinal screeners in classifying retinal images effectively, and highly efficient and accurate image processing techniques must be used to produce an effective screening system. In addition to the proposed detection systems, this thesis presents a new dataset and highlights its collection, the expert diagnosis process and its advantages compared to other publicly available eye fundus image datasets. The new dataset will be useful to researchers and practitioners working in retinal imaging and should encourage comparative studies in the field of diabetic retinopathy research.
It is envisaged that the proposed decision support system for clinical screening will contribute greatly to the management and detection of diabetic retinopathy, and it is hoped that the developed automatic detection techniques will assist clinicians in diagnosing diabetic retinopathy at an early stage.

    Automatic detection of microaneurysms in colour fundus images for diabetic retinopathy screening

    Regular eye screening is essential for the early detection and treatment of diabetic retinopathy. This paper presents a novel automatic screening system for diabetic retinopathy that focuses on the detection of the earliest visible signs of retinopathy, the microaneurysms. Microaneurysms are small dots on the retina formed by the ballooning out of a weak part of the capillary wall. The detection of microaneurysms at an early stage is vital, as it is the first step in preventing diabetic retinopathy. The paper first explores existing systems and applications related to diabetic retinopathy screening, with a focus on microaneurysm detection methods. The proposed decision support system comprises the automatic acquisition, screening and classification of colour fundus images, which could assist in the detection and management of diabetic retinopathy. Several feature extraction methods and the circular Hough transform are employed in the proposed microaneurysm detection system, alongside the fuzzy histogram equalisation method. The latter is applied in the preprocessing stage of the eye fundus images and provides improved results for detecting the microaneurysms.
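The circular Hough transform mentioned in the abstract detects circular structures (such as microaneurysms) by letting each edge pixel vote for every candidate centre that could have produced it; peaks in the vote accumulator mark likely circles. The following minimal, self-contained sketch illustrates the voting idea for a single known radius on a synthetic edge map; it is an illustration of the general technique, not the paper's implementation, and all names, sizes and parameters here are assumptions.

```python
import math

def hough_circles(edge_points, radius, width, height):
    """Vote for candidate circle centres at a fixed radius.

    Each edge pixel votes for every centre lying at `radius` distance
    from it; the accumulator cell with the most votes is the most
    likely circle centre.
    """
    acc = [[0] * width for _ in range(height)]
    for (x, y) in edge_points:
        for deg in range(0, 360, 2):
            t = math.radians(deg)
            cx = int(round(x - radius * math.cos(t)))
            cy = int(round(y - radius * math.sin(t)))
            if 0 <= cx < width and 0 <= cy < height:
                acc[cy][cx] += 1
    # return the accumulator cell with the most votes
    best = max((acc[r][c], c, r) for r in range(height) for c in range(width))
    return best[1], best[2]  # (centre_x, centre_y)

# Synthetic "microaneurysm": edge pixels on a circle of radius 5 at (20, 15).
true_cx, true_cy, r = 20, 15, 5
edges = [(int(round(true_cx + r * math.cos(math.radians(a)))),
          int(round(true_cy + r * math.sin(math.radians(a)))))
         for a in range(0, 360, 10)]
cx, cy = hough_circles(edges, r, 40, 30)
print(cx, cy)  # close to (20, 15)
```

In practice a library routine (e.g. an optimised gradient-based variant) would sweep over a range of radii and apply non-maximum suppression to the accumulator; the voting principle is the same.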

    Review of Person Re-identification Techniques

    Person re-identification across different surveillance cameras with disjoint fields of view has become one of the most interesting and challenging subjects in the area of intelligent video surveillance. Although several methods have been developed and proposed, certain limitations and unresolved issues remain. In all of the existing re-identification approaches, feature vectors are extracted from segmented still images or video frames, and different similarity or dissimilarity measures are applied to these vectors. Some methods use simple constant metrics, whereas others utilise models to obtain optimised metrics. Some create models based on local colour or texture information, and others build models based on the gait of people. In general, the main objective of all these approaches is to achieve a higher accuracy rate and lower computational costs. This study summarises several developments in the recent literature and discusses the various available methods used in person re-identification; specifically, their advantages and disadvantages are mentioned and compared.
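The common pipeline the review describes (extract a feature vector per detection, then rank gallery identities by a distance metric) can be sketched minimally as follows. The toy colour-histogram feature and the fixed Euclidean metric stand in for the richer descriptors and learned metrics surveyed in the review; all data and names here are illustrative assumptions.

```python
import math

def colour_histogram(pixels, bins=4):
    """Toy appearance feature: per-channel intensity histogram, L1-normalised."""
    hist = [0.0] * (3 * bins)
    for (r, g, b) in pixels:
        for ch, v in enumerate((r, g, b)):
            hist[ch * bins + min(v * bins // 256, bins - 1)] += 1.0
    n = float(len(pixels))
    return [h / n for h in hist]

def euclidean(u, v):
    """A simple constant metric; metric-learning methods would replace this."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Probe person wears red; the gallery holds a red-clad and a blue-clad person.
probe  = colour_histogram([(220, 30, 30)] * 50 + [(200, 50, 40)] * 50)
red_g  = colour_histogram([(210, 40, 35)] * 100)
blue_g = colour_histogram([(30, 40, 220)] * 100)
ranking = sorted([("red", euclidean(probe, red_g)),
                  ("blue", euclidean(probe, blue_g))], key=lambda t: t[1])
print(ranking[0][0])  # -> "red": the closest gallery identity
```

Replacing `euclidean` with a learned Mahalanobis-style metric is exactly the step where the "optimised metrics" mentioned above come in.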

    A Review on Detection of Medical Plant Images

    Both human and non-human life on Earth depends heavily on plants, and plants are central to the natural cycle. Plant identification is particularly challenging in biology and agriculture because of the sophistication of recent plant discoveries and the computerisation of plant records. Automatic plant classification systems are needed for a variety of reasons, including instruction, resource evaluation and environmental protection. The leaves of medicinal plants are widely considered their most distinguishing feature. Identifying a plant species automatically from photographs of its leaves is an attractive goal, because taxonomists are in short supply and biodiversity is rapidly vanishing in the current environment. The demands of mass production also require that these plants be identified quickly, and the physical and emotional health of patients must be considered when developing drugs. Identification and classification are the key steps in processing medicinal herbs. Since there are few specialists in this field, it can be difficult to identify and categorise medicinal plants correctly, so a fully automated approach is desirable. This article briefly summarises the various means of classifying medicinal plants based on the silhouette and roughness (shape and texture) of a plant's leaf.

    On Box-Cox Transformation for Image Normality and Pattern Classification

    A unique member of the power transformation family is known as the Box-Cox transformation. The latter can be seen as a mathematical operation that finds the optimum lambda (λ) value maximising the log-likelihood function, so as to transform data to a normal distribution and to reduce heteroscedasticity. In data analytics, a normality assumption underlies a variety of statistical test models. This technique, however, is best known in statistical analysis for handling one-dimensional data. This paper therefore revolves around the utility of such a tool as a pre-processing step for transforming two-dimensional data, namely digital images, and studies its effect. Moreover, to reduce time complexity, it suffices to estimate the parameter lambda in real time for large two-dimensional matrices by merely considering their probability density function as a statistical inference of the underlying data distribution. We compare the effect of this lightweight Box-Cox transformation with well-established state-of-the-art low-light image enhancement techniques. We also demonstrate the effectiveness of our approach on several test-bed datasets, both for generic improvement of the visual appearance of images and for improving the performance of a colour pattern classification algorithm as an example application. Results with and without the proposed approach are compared using the AlexNet (transfer deep learning) pretrained model. To the best of our knowledge, this is the first time that the Box-Cox transformation has been extended to digital images by exploiting histogram transformation.
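The λ estimation described above is usually done by maximising the Box-Cox profile log-likelihood. A minimal sketch of that procedure on one-dimensional data follows; it uses a plain grid search over λ and synthetic lognormal data (for which the optimum is known to be near λ = 0), and is not the paper's real-time histogram-based estimator.

```python
import math
import random

def boxcox(x, lam):
    """Box-Cox power transform; the lam -> 0 limit is log(x)."""
    return math.log(x) if abs(lam) < 1e-12 else (x ** lam - 1.0) / lam

def boxcox_loglike(data, lam):
    """Profile log-likelihood of lam under a normality assumption:
    -(n/2) * ln(var(y)) + (lam - 1) * sum(ln x)."""
    y = [boxcox(x, lam) for x in data]
    n = len(y)
    mean = sum(y) / n
    var = sum((v - mean) ** 2 for v in y) / n
    return -0.5 * n * math.log(var) + (lam - 1.0) * sum(math.log(x) for x in data)

def best_lambda(data, lo=-2.0, hi=2.0, step=0.05):
    """Grid search for the lambda maximising the log-likelihood."""
    grid = [lo + i * step for i in range(int((hi - lo) / step) + 1)]
    return max(grid, key=lambda lam: boxcox_loglike(data, lam))

random.seed(0)
data = [math.exp(random.gauss(0.0, 1.0)) for _ in range(2000)]  # lognormal
lam = best_lambda(data)
print(round(lam, 2))  # expected near 0 for lognormal data
```

Applying this per-image, as the abstract proposes, amounts to treating the pixel intensities (or their histogram) as the data vector and transforming every pixel with the fitted λ.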

    Artificial Neural Network Based Channel Equalization

    The field of digital data communications has experienced explosive growth in the last three decades. With the growth of Internet technologies, high-speed and efficient data transmission over communication channels has gained significant importance. The rate of data transmission over a communication system is limited by the effects of linear and nonlinear distortion. Linear distortions occur in the form of inter-symbol interference (ISI), co-channel interference (CCI) and adjacent channel interference (ACI) in the presence of additive white Gaussian noise. Nonlinear distortions are caused by subsystems such as amplifiers, modulators and demodulators, along with the nature of the medium; burst noise also occurs in communication systems at times. Different equalisation techniques are used to mitigate these effects, and adaptive channel equalisers are used in digital communication systems. The equaliser, located at the receiver, removes the effects of ISI, CCI and burst noise interference and attempts to recover the transmitted symbols. Linear equalisers have been shown to perform poorly, whereas nonlinear equalisers provide superior performance. Artificial neural network equalisers based on the multilayer perceptron (MLP) have been used for equalisation over the last two decades. Such an equaliser is a feed-forward network consisting of one or more hidden layers between its input and output layers and is trained by the popular error-based back-propagation (BP) algorithm. However, this algorithm suffers from a slow convergence rate, depending on the size of the network. It has been seen that an optimal equaliser based on the maximum a-posteriori probability (MAP) criterion can be implemented using a radial basis function (RBF) network. In an RBF equaliser, centres are fixed using K-means clustering and weights are trained using the LMS algorithm. An RBF equaliser can mitigate ISI effectively, providing a minimum-BER plot.
However, when the input order is increased, the number of centres of the network grows and makes the network more complicated; an RBF network that mitigates the effects of CCI is very complex, with a large number of centres. To overcome these computational complexity issues, single-neuron-based Chebyshev neural networks (ChNN) and functional link ANNs (FLANN) have been proposed. These are single-layer networks in which the original input pattern is expanded to a higher-dimensional space using nonlinear functions, giving them the capability to form arbitrarily complex decision regions. More recently, a rank-based statistics approach known as the Wilcoxon learning method has been proposed for signal processing applications. The Wilcoxon learning algorithm has been applied to neural networks such as the Wilcoxon multilayer perceptron neural network (WMLPNN) and the Wilcoxon generalised radial basis function network (WGRBF). The Wilcoxon approach provides a promising methodology for many machine learning problems, which motivated us to introduce these networks to the channel equalisation application. In this thesis we have used the WMLPNN and WGRBF networks to mitigate ISI, CCI and burst noise interference. It is observed that equalisers trained with the Wilcoxon learning algorithm offer improved convergence characteristics and bit error rate performance in comparison to gradient-based training for the MLP and RBF. Extensive simulation studies have been carried out to validate the proposed technique. The performance of the Wilcoxon networks is better than that of linear equalisers trained with the LMS and RLS algorithms, and of the RBF equaliser, in the case of burst noise and CCI mitigation.
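The classical RBF equaliser baseline described above (Gaussian centres at the channel states, output weights adapted by LMS) can be sketched minimally as follows. This is not the thesis' Wilcoxon method: the channel `h = [1.0, 0.5]`, the noise level and all hyperparameters are assumptions, and the centres are placed directly at the known noiseless channel states for determinism (K-means clustering of received vectors would be used when the channel is unknown, as the abstract notes).

```python
import math
import random
from itertools import product

random.seed(1)
h = [1.0, 0.5]                        # assumed 2-tap FIR channel (causes ISI)
sigma, noise_std, mu = 0.3, 0.1, 0.05

# Channel states: noiseless received pairs (r_k, r_{k-1}) for every
# combination of the three BPSK symbols (s_k, s_{k-1}, s_{k-2}) involved.
centres = [(h[0] * sk + h[1] * sk1, h[0] * sk1 + h[1] * sk2)
           for sk, sk1, sk2 in product((-1, 1), repeat=3)]
weights = [0.0] * len(centres)

def rbf_out(x):
    """Gaussian-basis network output and the basis activations."""
    phis = [math.exp(-((x[0] - c[0]) ** 2 + (x[1] - c[1]) ** 2) / (2 * sigma ** 2))
            for c in centres]
    return sum(w * p for w, p in zip(weights, phis)), phis

def run(n, train):
    """One pass over n symbols: LMS training or BER evaluation."""
    s = [random.choice((-1, 1)) for _ in range(n + 2)]
    errors = 0
    for k in range(2, n + 2):
        r = [h[0] * s[k] + h[1] * s[k - 1] + random.gauss(0, noise_std),
             h[0] * s[k - 1] + h[1] * s[k - 2] + random.gauss(0, noise_std)]
        y, phis = rbf_out(r)
        d = s[k - 1]                   # detect the delayed symbol
        if train:                      # LMS update of the output weights
            e = d - y
            for j, p in enumerate(phis):
                weights[j] += mu * e * p
        elif (1 if y >= 0 else -1) != d:
            errors += 1
    return errors / n

run(3000, train=True)
ber = run(1000, train=False)
print(ber)  # near zero at this noise level
```

With the centres fixed, only the linear output weights are adapted, which is what makes the RBF equaliser cheap to train compared with back-propagating through an MLP.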

    Adaptive Equalisation of Communication Channels Using ANN Techniques

    Channel equalisation is the process of compensating for the disruptive effects caused mainly by inter-symbol interference in a band-limited channel, and it plays a vital role in enabling higher data rates in digital communication. The development of new training algorithms and structures, and the selection of design parameters for equalisers, are active fields of research that exploit the benefits of different signal processing techniques. Designing efficient equalisers of low structural complexity is also an area of much interest in view of real-time implementation issues. However, it has been widely reported that optimal performance can only be realised using nonlinear equalisers. As artificial neural networks are inherently nonlinear processing elements and possess capabilities of universal approximation and pattern classification, they are well suited to developing high-performance adaptive equalisers. This proposed work has significantly contributed to the d..
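The linear adaptive equaliser that the neural approaches above are measured against is typically a short FIR filter whose taps are adapted by LMS toward a delayed copy of the transmitted symbol. The following self-contained sketch shows that baseline; the channel `h = [1.0, 0.4]`, the tap count and the step size are all illustrative assumptions.

```python
import random

random.seed(2)
h = [1.0, 0.4]            # assumed dispersive channel (introduces ISI)
taps, mu, noise_std = 3, 0.02, 0.05
w = [0.0] * taps

def transmit(n):
    """BPSK symbols through the FIR channel plus Gaussian noise."""
    s = [random.choice((-1, 1)) for _ in range(n)]
    r = [h[0] * s[k] + (h[1] * s[k - 1] if k else 0.0) + random.gauss(0, noise_std)
         for k in range(n)]
    return s, r

# Training: adapt the FIR taps so the filter output tracks the delayed
# transmitted symbol s[k-1] (decision delay of one).
s, r = transmit(4000)
for k in range(taps, len(r)):
    x = r[k - taps + 1:k + 1][::-1]        # newest received sample first
    y = sum(wi * xi for wi, xi in zip(w, x))
    e = s[k - 1] - y                        # error against the known symbol
    for i in range(taps):
        w[i] += mu * e * x[i]               # LMS tap update

# Evaluation: hard decisions on fresh data.
s, r = transmit(1000)
errors = sum(
    (1 if sum(wi * xi for wi, xi in
              zip(w, r[k - taps + 1:k + 1][::-1])) >= 0 else -1) != s[k - 1]
    for k in range(taps, len(r)))
ber = errors / (len(r) - taps)
print(ber)
```

On mildly dispersive channels like this one the linear filter opens the eye completely; the thesis' point is that on harder (nonlinear or heavily dispersive) channels this structure fails and nonlinear ANN equalisers are needed.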

    Image Automatic Categorisation using Selected Features Attained from Integrated Non-Subsampled Contourlet with Multiphase Level Sets

    A framework for the automatic detection and categorisation of Breast Cancer (BC) biopsy images using significant, interpretable features is considered in this work, with appropriate, efficient techniques engaged at each step of the framework. The steps are: 1. To emphasise the edge details of the tissue structure, the Non-Subsampled Contourlet (NSC) transform is applied. 2. For the demarcation of cells from the background, k-means, adaptive-size marker-controlled watershed, and two proposed integrated methodologies are discussed; the proposed Method-II, an integrated approach of NSC and Multiphase Level Sets, is preferred to the other segmentation practices as it shows better performance. 3. In the feature extraction phase, 13 shape-morphology, 33 textural (comprising 6 histogram, 22 Haralick's, 3 Tamura's and 2 grey-level run-length matrix) and 2 intensity features are extracted from the partitioned tissue images for 96 training images.
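The Haralick features counted in step 3 are statistics of a grey-level co-occurrence matrix (GLCM), which tallies how often pairs of grey levels occur at a fixed pixel offset. A minimal sketch of the GLCM and two classic Haralick features (contrast and energy) follows; the tiny two-level patches are made-up illustrations, not data from this work.

```python
def glcm(img, levels, dx=1, dy=0):
    """Grey-level co-occurrence matrix for offset (dx, dy), normalised
    so its entries sum to 1."""
    m = [[0.0] * levels for _ in range(levels)]
    count = 0
    for y in range(len(img)):
        for x in range(len(img[0])):
            ny, nx = y + dy, x + dx
            if 0 <= ny < len(img) and 0 <= nx < len(img[0]):
                m[img[y][x]][img[ny][nx]] += 1.0
                count += 1
    return [[v / count for v in row] for row in m]

def haralick(m):
    """Two classic Haralick features from a normalised GLCM."""
    n = len(m)
    contrast = sum(m[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))
    energy = sum(v * v for row in m for v in row)
    return contrast, energy

flat   = [[1, 1, 1], [1, 1, 1]]   # uniform texture: no grey-level change
stripy = [[0, 1, 0], [0, 1, 0]]   # alternating texture: constant change
c_flat, e_flat = haralick(glcm(flat, levels=2))
c_str, e_str = haralick(glcm(stripy, levels=2))
print(c_flat, c_str)  # 0.0 for the flat patch, 1.0 for the striped one
```

In a full pipeline these statistics would be computed per segmented tissue region (typically over several offsets and with more grey levels) and concatenated with the shape and intensity features into the classifier's input vector.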