
    A Review: Person Identification using Retinal Fundus Images

    This paper reviews biometric person identification using features extracted from retinal fundus images. Retina recognition is claimed to be the best person identification method among biometric recognition systems because the retina is practically impossible to forge; it is considered the most stable, reliable, and secure of all biometric modalities, as the retinal pattern is both unique and stable. The features used in the recognition process are either blood vessel features or non-blood-vessel features, although the vascular pattern is the feature most researchers use for retina-based person identification. A typical authentication pipeline consists of pre-processing, feature extraction, and feature matching. Bifurcation and crossover points are the most widely used blood vessel features; non-blood-vessel features include luminance, contrast, and corner points. The paper summarizes and compares different retina-based authentication systems. Researchers have tested their methods on publicly available databases such as DRIVE, STARE, VARIA, RIDB, ARIA, AFIO, DRIDB, and SiMES, and evaluated performance with quantitative measures such as accuracy, recognition rate, false rejection rate, false acceptance rate, and equal error rate. On the DRIVE database most methods achieve 100% recognition; on the remaining databases, recognition accuracy exceeds 90%.
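    The bifurcation and crossover points mentioned above are usually located on a skeletonized (one-pixel-wide) vessel map with the classic crossing-number test. A minimal sketch of that test follows; the tiny skeleton at the end is purely illustrative, not data from any of the surveyed papers.

    ```python
    def crossing_number(skel, r, c):
        """Count 0->1 transitions around pixel (r, c) in a binary skeleton.
        CN == 1: endpoint, CN == 3: bifurcation, CN == 4: crossover."""
        # 8-neighbourhood visited in circular order
        offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                (1, 1), (1, 0), (1, -1), (0, -1)]
        vals = [skel[r + dr][c + dc] for dr, dc in offs]
        return sum(1 for i in range(8)
                   if vals[i] == 0 and vals[(i + 1) % 8] == 1)

    def find_minutiae(skel):
        """Return (bifurcations, crossovers) as lists of (row, col)."""
        bifs, cross = [], []
        for r in range(1, len(skel) - 1):
            for c in range(1, len(skel[0]) - 1):
                if skel[r][c] == 1:
                    cn = crossing_number(skel, r, c)
                    if cn == 3:
                        bifs.append((r, c))
                    elif cn == 4:
                        cross.append((r, c))
        return bifs, cross

    # Toy skeleton: a vertical vessel that branches at row 2, column 2.
    skel = [
        [0, 0, 1, 0, 0],
        [0, 0, 1, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 0, 1, 0],
        [0, 0, 0, 0, 0],
    ]
    bifs, cross = find_minutiae(skel)
    ```

    The detected minutiae coordinates (here a single bifurcation) form the feature set that the matching stage then compares between the enrolled and the query retina.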

    Diagnosis of Retinitis Pigmentosa from Retinal Images

    Retinitis pigmentosa (RP) is a genetic disorder that causes nyctalopia (night blindness) and, as it progresses, complete loss of vision. Analysis of retinal images is therefore necessary to help ophthalmologists detect RP early. In this paper, fundus images and Optical Coherence Tomography (OCT) images are comprehensively analyzed to obtain the morphological features that characterize RP. Pigment deposits (PD), an important trait of RP, are investigated using two features: degree of darkness and entropy. The darkness and entropy of the PD are compared with those of other regions of the fundus image to detect the pigments in the retinal image. The performance of the proposed algorithm is evaluated on all 120 images of the RIPS dataset. The metrics sensitivity, specificity, accuracy, F-score, equal error rate, conformity coefficient, Jaccard's coefficient, dice coefficient, and universal quality index were calculated as 0.72, 0.96, 0.97, 0.62, 0.12, 0.09, 0.59, 0.45, and 0.62, respectively.
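    The two region features the abstract names, degree of darkness and entropy, can be sketched as below. This is a minimal illustration of the idea of comparing a candidate pigment-deposit patch against other fundus regions; the paper's exact darkness definition, entropy variant, and decision thresholds are not specified here, and the pixel patches are made up.

    ```python
    import math

    def region_features(region):
        """Return (mean darkness, Shannon entropy) for a grayscale
        region with intensities in 0-255. Darkness is 255 minus the
        mean intensity; entropy comes from the intensity histogram."""
        pixels = [p for row in region for p in row]
        darkness = 255 - sum(pixels) / len(pixels)
        hist = {}
        for p in pixels:
            hist[p] = hist.get(p, 0) + 1
        n = len(pixels)
        entropy = -sum((c / n) * math.log2(c / n) for c in hist.values())
        return darkness, entropy

    # A dark, fairly uniform patch (candidate pigment deposit) versus
    # a brighter, more varied background patch (illustrative values).
    deposit = [[20, 22], [21, 20]]
    background = [[120, 180], [150, 200]]
    d_dark, d_ent = region_features(deposit)
    b_dark, b_ent = region_features(background)
    ```

    A pigment deposit is then flagged when a region is markedly darker than the surrounding fundus regions, with entropy used as a complementary texture cue.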

    A Novel Approach for Meat Quality Assessment Using an Ensemble of Compact Convolutional Neural Networks

    The rising awareness of nutritional value has increased the popularity of meat-based diets, so industries and consumers are focusing more on the quality and freshness of this food to protect public health. Conventional meat quality assessment methods can be expensive and destructive, and they are subjective and reliant on the knowledge of specialists. Fully automated computer-aided diagnosis systems are required to eliminate this variability among experts, yet evaluating the quality of meat automatically is challenging. Deep convolutional neural networks have brought a tremendous improvement to meat quality assessment. This research uses an ensemble of shallow convolutional neural networks to assess the quality and freshness of meat. Two compact CNN architectures (ConvNet-18 and ConvNet-24) are developed, and the efficacy of the models is evaluated on two publicly available databases. Experimental findings reveal that ConvNet-18 outperforms other state-of-the-art models in classifying fresh and spoiled meat, with an overall accuracy of 99.4%, whereas ConvNet-24 performs better at categorizing meat by freshness, with an accuracy of 96.6%. Furthermore, the suggested models detect the quality and freshness of meat with less complexity than existing state-of-the-art techniques.
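    A CNN ensemble like the one described is typically combined at inference time by averaging the per-class softmax scores of the member networks and taking the arg-max. The sketch below shows only that combination step; the ConvNet-18/ConvNet-24 internals are omitted, and the probability vectors and class names are illustrative stand-ins, not the paper's outputs.

    ```python
    def ensemble_predict(prob_lists, classes):
        """Average each model's class-probability vector and return
        the class with the highest mean score plus the averaged vector."""
        n_models = len(prob_lists)
        avg = [sum(p[i] for p in prob_lists) / n_models
               for i in range(len(classes))]
        best = max(range(len(classes)), key=lambda i: avg[i])
        return classes[best], avg

    classes = ["fresh", "half-fresh", "spoiled"]  # hypothetical labels
    convnet18_probs = [0.70, 0.20, 0.10]          # hypothetical softmax output
    convnet24_probs = [0.55, 0.35, 0.10]          # hypothetical softmax output
    label, avg = ensemble_predict([convnet18_probs, convnet24_probs], classes)
    ```

    Score averaging is a common ensembling choice because it lets a confident member outvote an uncertain one without any extra training.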

    Detection of glaucoma from fundus image using pre-trained Densenet201 model

    In recent years, the performance of deep learning algorithms for image recognition has improved tremendously. The inherent feature-learning ability of a convolutional neural network makes it well suited to classifying glaucoma and normal fundus images. Transferring the weights from a pre-trained model results in faster and easier training than training the network from scratch. In this paper, a dense convolutional neural network (Densenet201) has been utilized to extract the relevant features for classification, training with 80% of the images and testing with the remaining 20%. The performance metrics obtained with various classifiers, such as softmax, support vector machine (SVM), k-nearest neighbor (KNN), and Naive Bayes (NB), have been compared. Experimental results show that the softmax classifier outperformed the other classifiers on the DRISHTI-GS1 database, with 96.48% accuracy, 98.88% sensitivity, 92.1% specificity, 95.82% precision, and 97.28% F1-score. An increase in classification accuracy of about 1% has been achieved with enhanced fundus images.
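    The five reported metrics all derive from a binary confusion matrix over the glaucoma/normal test split. A minimal sketch of those definitions follows; the counts passed in are illustrative, not the paper's actual test-set tallies.

    ```python
    def binary_metrics(tp, tn, fp, fn):
        """Standard binary-classification metrics from confusion counts.
        Positive class here would be 'glaucoma'."""
        accuracy = (tp + tn) / (tp + tn + fp + fn)
        sensitivity = tp / (tp + fn)      # recall on the glaucoma class
        specificity = tn / (tn + fp)      # recall on the normal class
        precision = tp / (tp + fp)
        f1 = 2 * precision * sensitivity / (precision + sensitivity)
        return accuracy, sensitivity, specificity, precision, f1

    # Hypothetical counts for a small test split.
    acc, sens, spec, prec, f1 = binary_metrics(tp=89, tn=35, fp=3, fn=1)
    ```

    Reporting sensitivity and specificity separately matters in screening settings like this one, since a high overall accuracy can hide a weak recall on the rarer (glaucoma) class.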

    Renyi entropy based Bi-histogram equalization for contrast enhancement of MRI brain images

    The quality of MRI brain images depends on the sensor, so a pre-processing technique is essential to achieve the best quality at a given sensor cost. This paper proposes a pre-processing algorithm to enhance low-contrast MRI brain images. The input image's histogram is divided into two sub-histograms at its median value to preserve the input image's mean brightness. After the Renyi entropy of each sub-histogram is calculated, histogram clipping is performed to regulate the enhancement rate, with the clipping limit selected automatically as the minimum of the mean and the median of the distribution function. Additionally, the proposed algorithm incorporates the Discrete Cosine Transform (DCT) to further improve the enhancement. Experimental results show that the proposed algorithm enhances the input image while maintaining its mean brightness.
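    Two of the steps described above can be sketched directly: the Renyi entropy of a (sub-)histogram and the clipping of histogram bins at an automatically chosen limit. This is a simplified stand-in under stated assumptions (clip limit taken as the minimum of the bin counts' mean and median; the paper's exact selection rule and the DCT step are omitted), and the toy histogram is illustrative.

    ```python
    import math

    def renyi_entropy(hist, alpha=2.0):
        """Renyi entropy of order alpha for a histogram of bin counts."""
        total = sum(hist)
        probs = [c / total for c in hist if c > 0]
        return math.log2(sum(p ** alpha for p in probs)) / (1 - alpha)

    def clip_histogram(hist):
        """Clip every bin at min(mean, median) of the bin counts,
        limiting how steep the equalization mapping can become."""
        s = sorted(hist)
        n = len(s)
        median = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
        mean = sum(hist) / n
        limit = min(mean, median)
        return [min(c, limit) for c in hist]

    # Toy 8-bin histogram with one dominant spike at bin 2.
    hist = [2, 3, 40, 3, 2, 1, 0, 1]
    h2 = renyi_entropy(hist, alpha=2.0)
    clipped = clip_histogram(hist)
    ```

    Clipping the spike before equalization is what regulates the enhancement rate: without it, the dominant bin would grab most of the output intensity range and over-enhance that region.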