8 research outputs found

    An automatic corneal subbasal nerve registration system using FFT and phase correlation techniques for an accurate DPN diagnosis

    Confocal microscopy is employed as a fast and non-invasive way to capture a sequence of images from different layers and membranes of the cornea. The captured images are used to extract useful clinical information for the early diagnosis of corneal diseases such as Diabetic Peripheral Neuropathy (DPN). In this paper, an automatic corneal subbasal nerve registration system is proposed. The main aim of the proposed system is to produce a new informative corneal image that contains structural and functional information. In addition, a colour-coded corneal image map is produced by overlaying a sequence of Cornea Confocal Microscopy (CCM) images that differ from each other in displacement, illumination, scaling, and rotation. An automatic image registration method is proposed based on combining the advantages of the Fast Fourier Transform (FFT) and phase correlation techniques. The proposed registration algorithm searches for the best common features between a number of sequenced CCM images in the frequency domain to produce the informative image map. In this generated image map, each colour represents the severity level of a specific clinical feature, giving ophthalmologists a clear and precise representation of the clinical features extracted from each nerve in the image map. Moreover, successful implementation of the proposed system and the availability of the required datasets open the door to other interesting ideas; for instance, it can be used to give ophthalmologists a summarized and objective description of a diabetic patient’s health status using a sequence of CCM images captured from different imaging devices and/or at different times.
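
    The core of the registration step is FFT-based phase correlation. Below is a minimal sketch of translation estimation between two frames using only NumPy; the function name and synthetic test data are illustrative, and the paper's full method also recovers scaling and rotation, which this sketch does not.

```python
# Minimal phase-correlation sketch: estimate the (row, col) displacement
# between two equal-sized grayscale CCM frames. Illustrative, not the
# authors' implementation.
import numpy as np

def phase_correlation_shift(fixed, moving):
    """Estimate the (row, col) displacement of `moving` relative to `fixed`."""
    F = np.fft.fft2(fixed)
    M = np.fft.fft2(moving)
    # Normalised cross-power spectrum; its inverse FFT peaks at the displacement.
    cross_power = np.conj(F) * M
    cross_power /= np.abs(cross_power) + 1e-12  # guard against division by zero
    correlation = np.fft.ifft2(cross_power).real
    peak = np.array(np.unravel_index(np.argmax(correlation), correlation.shape),
                    dtype=float)
    # Displacements beyond half the image size wrap around to negative offsets.
    dims = np.array(fixed.shape)
    wrap = peak > dims / 2
    peak[wrap] -= dims[wrap]
    return peak

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((256, 256))
    shifted = np.roll(img, shift=(12, -7), axis=(0, 1))  # known displacement
    print(phase_correlation_shift(img, shifted))         # ~ [12. -7.]
```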

    A Fast and Accurate Iris Localization Technique for Healthcare Security System

    In healthcare systems, a high security level is required to protect extremely sensitive patient records. The goal is to provide secure access to the right records at the right time with a high level of patient privacy. As the most accurate biometric modality, iris recognition can play a significant role in healthcare applications for accurate patient identification. This paper addresses the cornerstone of building a fast and robust iris recognition system for healthcare applications: iris localization. Iris localization is an essential step for efficient iris recognition systems. The presence of extraneous features such as eyelashes, eyelids, the pupil and reflection spots makes correct iris localization challenging. In this paper, an efficient and automatic method is presented for localizing the inner and outer iris boundaries. The inner pupil boundary is detected after eliminating specular reflections using a combination of thresholding and morphological operations. Then, the outer iris boundary is detected using a modified Circular Hough transform. An efficient preprocessing procedure is proposed to enhance the iris boundary by applying a 2D Gaussian filter and histogram equalization. In addition, the pupil’s parameters (e.g., radius and center coordinates) are employed to reduce the search time of the Hough transform by discarding unnecessary edge points within the iris region. Finally, a robust and fast eyelid detection algorithm is developed, which employs an anisotropic diffusion filter with the Radon transform to fit the upper and lower eyelid boundaries. The performance of the proposed method is tested on two databases: the CASIA Version 1.0 and SDUMLA-HMT iris databases. The experimental results demonstrate the efficiency of the proposed method. Moreover, a comparative study with other established methods is also carried out.
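
    A minimal sketch of the two-boundary localization idea follows: thresholding plus morphology to isolate the pupil, then a Circular Hough transform restricted to radii larger than the pupil's. OpenCV is assumed; the threshold value and Hough parameters are illustrative guesses rather than the paper's, and the eyelid-fitting stage (anisotropic diffusion plus Radon transform) is omitted.

```python
# Illustrative pupil + iris boundary localization with OpenCV; `gray` is an
# 8-bit grayscale eye image. Thresholds and Hough parameters are guesses.
import cv2
import numpy as np

def localize_iris(gray):
    # Smooth and equalize to suppress noise and improve boundary contrast.
    smoothed = cv2.GaussianBlur(gray, (5, 5), 0)
    equalized = cv2.equalizeHist(smoothed)
    # The pupil is the darkest region: binarise, then clean up morphologically
    # to remove eyelashes and small reflection artefacts.
    _, pupil_mask = cv2.threshold(equalized, 50, 255, cv2.THRESH_BINARY_INV)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    pupil_mask = cv2.morphologyEx(pupil_mask, cv2.MORPH_OPEN, kernel)
    pupil_mask = cv2.morphologyEx(pupil_mask, cv2.MORPH_CLOSE, kernel)
    # Take the largest connected component as the pupil and fit a circle to it.
    contours, _ = cv2.findContours(pupil_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    (px, py), pr = cv2.minEnclosingCircle(max(contours, key=cv2.contourArea))
    # Search for the outer boundary only at radii larger than the pupil's,
    # echoing the paper's idea of pruning the Hough search space.
    circles = cv2.HoughCircles(equalized, cv2.HOUGH_GRADIENT, dp=1,
                               minDist=gray.shape[0], param1=100, param2=30,
                               minRadius=int(pr * 1.5), maxRadius=int(pr * 4))
    outer = circles[0][0] if circles is not None else None  # (x, y, r) or None
    return (px, py, pr), outer
```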

    A Robust Face Recognition System Based on Curvelet and Fractal Dimension Transforms

    In this paper, a powerful face recognition system for authentication and identification tasks is presented, and a new facial feature extraction approach is proposed. The novel feature extraction method combines the characteristics of the Curvelet transform and the Fractal dimension transform. The proposed system consists of four stages. Firstly, a simple preprocessing algorithm based on a sigmoid function is applied to standardize the intensity dynamic range of the input image. Secondly, a face detection stage based on the Viola-Jones algorithm is used to detect the face region in the input image. Thirdly, the feature extraction stage is implemented using a combination of the Digital Curvelet via wrapping transform and a Fractal Dimension transform. Finally, the K-Nearest Neighbor (K-NN) and Correlation Coefficient (CC) classifiers are used for the recognition task. The performance of the proposed approach has been tested by carrying out a number of experiments on three well-known datasets with high diversity in facial expressions: the SDUMLA-HMT, Faces96 and UMIST datasets. All the experiments conducted indicate the robustness and effectiveness of the proposed approach for both authentication and identification tasks compared to other established approaches.
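
    Of the two transforms, the Fractal dimension is easy to sketch. Below is a box-counting estimate on a binary edge map, using only NumPy; the Curvelet-via-wrapping half requires a dedicated library and is omitted, and the box sizes and test data are illustrative.

```python
# Box-counting fractal dimension of a binary image; the box sizes and the
# random stand-in edge map are illustrative.
import numpy as np

def box_counting_dimension(binary, box_sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal (box-counting) dimension of a 2D boolean array."""
    counts = []
    for s in box_sizes:
        # Crop so the image tiles exactly into s x s boxes.
        h, w = (binary.shape[0] // s) * s, (binary.shape[1] // s) * s
        boxes = binary[:h, :w].reshape(h // s, s, w // s, s)
        # A box is "occupied" if it contains any foreground pixel.
        counts.append(boxes.any(axis=(1, 3)).sum())
    # The slope of log(count) against log(1/size) estimates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    edges = rng.random((128, 128)) > 0.7  # stand-in for a face edge map
    print(box_counting_dimension(edges))  # dense random noise gives ~2.0
```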

    A fully automatic nerve segmentation and morphometric parameter quantification system for early diagnosis of diabetic neuropathy in corneal images

    Diabetic Peripheral Neuropathy (DPN) is one of the most common complications of diabetes and can affect the cornea. An accurate analysis of the nerve structures can assist the early diagnosis of this disease. This paper proposes a robust, fast and fully automatic nerve segmentation and morphometric parameter quantification system for corneal confocal microscope images. The segmentation part consists of three main steps. First, a preprocessing step is applied to enhance the visibility of the nerves and remove noise using anisotropic diffusion filtering, specifically a Coherence filter followed by Gaussian filtering. Second, morphological operations are applied to remove unwanted objects in the input image, such as epithelial cells and small nerve segments. Finally, an edge detection step is applied to detect all the nerves in the input image. In this step, an efficient algorithm for connecting discontinuous nerves is proposed. In the morphometric parameter quantification part, a number of features are extracted, including nerve thickness, tortuosity and length, which may be used for the early diagnosis of diabetic polyneuropathy and when planning Laser-Assisted in situ Keratomileusis (LASIK) or Photorefractive Keratectomy (PRK). The performance of the proposed segmentation system is evaluated against manually traced ground-truth images on a database of 498 corneal sub-basal nerve images (238 normal and 260 abnormal). In addition, the robustness and efficiency of the proposed system in extracting morphometric features with clinical utility were evaluated on 919 images taken from healthy subjects and diabetic patients with and without neuropathy. We demonstrate rapid (13 seconds/image), robust and effective automated corneal nerve quantification. The proposed system will be deployed as a useful clinical tool to support the expertise of ophthalmologists and save clinician time in a busy clinical setting.
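
    A minimal sketch of the segmentation pipeline's overall shape, using scikit-image: smoothing, morphological cleanup, edge detection, then a crude gap-bridging pass. A plain Gaussian stands in for the paper's coherence (anisotropic diffusion) filter, the gap bridging is a stand-in for the proposed nerve-connection algorithm, and all parameters are illustrative.

```python
# Illustrative nerve segmentation with scikit-image; `gray` is a float image
# in [0, 1]. A Gaussian replaces the paper's coherence filter, and the final
# dilation/skeleton pass is a crude stand-in for its nerve-connection step.
from skimage import feature, filters, morphology

def segment_nerves(gray):
    """Return a rough one-pixel-wide nerve mask for a corneal confocal image."""
    # Step 1: denoise while keeping the elongated nerve structures visible.
    smoothed = filters.gaussian(gray, sigma=1.5)
    # Step 2: grayscale opening removes small bright objects such as
    # epithelial cells before edge detection.
    opened = morphology.opening(smoothed, morphology.disk(2))
    # Step 3: Canny edge detection picks up the nerve boundaries.
    edges = feature.canny(opened, sigma=2.0)
    # Step 4: bridge short gaps in discontinuous nerves, then thin back
    # to centrelines for morphometric measurement.
    connected = morphology.binary_dilation(edges, morphology.disk(3))
    return morphology.skeletonize(connected)
```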

    A multimodal deep learning framework using local feature representations for face recognition

    The most recent face recognition systems mainly depend on feature representations obtained using either local handcrafted descriptors, such as local binary patterns (LBP), or a deep learning approach, such as a deep belief network (DBN). However, the former usually suffers from the wide variations in face images, while the latter usually discards the local facial features, which are proven to be important for face recognition. In this paper, a novel framework based on merging the advantages of local handcrafted feature descriptors with the DBN is proposed to address the face recognition problem in unconstrained conditions. Firstly, a novel multimodal local feature extraction approach based on merging the advantages of the Curvelet transform with the Fractal dimension is proposed and termed the Curvelet–Fractal approach. The main motivation of this approach is that the Curvelet transform, a new anisotropic and multidirectional transform, can efficiently represent the main structure of the face (e.g., edges and curves), while the Fractal dimension is one of the most powerful texture descriptors for face images. Secondly, a novel framework is proposed, termed the multimodal deep face recognition (MDFR) framework, to add feature representations by training a DBN on top of the local feature representations instead of the pixel intensity representations. We demonstrate that representations acquired by the proposed MDFR framework are complementary to those acquired by the Curvelet–Fractal approach. Finally, the performance of the proposed approaches has been evaluated by conducting a number of extensive experiments on four large-scale face datasets: the SDUMLA-HMT, FERET, CAS-PEAL-R1, and LFW databases. The results obtained from the proposed approaches outperform other state-of-the-art approaches (e.g., LBP, DBN, WPCA), achieving new state-of-the-art results on all the employed datasets.
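
    The framework's central idea, training a deep model on local feature representations rather than pixel intensities, can be sketched with scikit-learn. Stacked BernoulliRBM layers stand in for the paper's DBN, and the feature matrix is assumed to hold Curvelet–Fractal descriptors (random placeholders here); everything below is illustrative.

```python
# Illustrative "deep model on top of local features" pipeline. Stacked RBMs
# stand in for the paper's DBN; X is a placeholder for Curvelet-Fractal
# feature vectors (one row per face image).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
X = rng.random((200, 256))         # placeholder local feature vectors
y = rng.integers(0, 10, size=200)  # placeholder identity labels

model = Pipeline([
    ("scale", MinMaxScaler()),  # RBMs expect inputs in [0, 1]
    ("rbm1", BernoulliRBM(n_components=128, learning_rate=0.05,
                          n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=64, learning_rate=0.05,
                          n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),  # supervised output layer
])
model.fit(X, y)           # layer-wise unsupervised fits, then the classifier
print(model.score(X, y))  # training accuracy on the placeholder data
```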

    A multi-biometric iris recognition system based on a deep learning approach

    Multimodal biometric systems have been widely applied in many real-world applications due to their ability to deal with a number of significant limitations of unimodal biometric systems, including sensitivity to noise, population coverage, intra-class variability, non-universality, and vulnerability to spoofing. In this paper, an efficient and real-time multimodal biometric system is proposed based on building deep learning representations for images of both the right and left irises of a person, and fusing the results obtained using a ranking-level fusion method. The trained deep learning system, called IrisConvNet, has an architecture based on a combination of a Convolutional Neural Network (CNN) and a Softmax classifier; it extracts discriminative features from the input image, which represents the localized iris region, without any domain knowledge, and then classifies it into one of N classes. In this work, a discriminative CNN training scheme is proposed based on a combination of the back-propagation algorithm and the mini-batch AdaGrad optimization method, for weight updating and learning rate adaptation, respectively. In addition, other training strategies (e.g., the dropout method and data augmentation) are also employed in order to evaluate different CNN architectures. The performance of the proposed system is tested on three public datasets collected under different conditions: the SDUMLA-HMT, CASIA-Iris-V3 Interval and IITD iris databases. The results obtained from the proposed system outperform other state-of-the-art approaches (e.g., Wavelet transform, Scattering transform, Local Binary Pattern and PCA), achieving a Rank-1 identification rate of 100% on all the employed databases and a recognition time of less than one second per person.
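
    A minimal PyTorch sketch of the IrisConvNet idea: a small CNN with dropout, a softmax (cross-entropy) output over N identities, and mini-batch AdaGrad updates driven by back-propagation. The layer sizes, the 64x64 input, and the training data are illustrative assumptions, not the published architecture.

```python
# Illustrative CNN + softmax classifier trained with mini-batch AdaGrad.
# Layer sizes, input resolution, and data are placeholders, not the paper's.
import torch
import torch.nn as nn

class IrisCNN(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5),  # dropout as one of the training strategies
            nn.Linear(32 * 16 * 16, num_classes),  # logits; softmax is in the loss
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = IrisCNN(num_classes=100)
optimizer = torch.optim.Adagrad(model.parameters(), lr=0.01)  # mini-batch AdaGrad
criterion = nn.CrossEntropyLoss()  # softmax + negative log-likelihood

# One illustrative mini-batch step on random stand-in data.
images = torch.randn(8, 1, 64, 64)       # batch of localized iris regions
labels = torch.randint(0, 100, (8,))     # identity labels in [0, N)
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()   # back-propagation computes the gradients
optimizer.step()  # AdaGrad adapts the per-parameter learning rates
print(loss.item())
```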

    CellsDeepNet: A Novel Deep Learning-Based Web Application for the Automated Morphometric Analysis of Corneal Endothelial Cells

    The quantification of corneal endothelial cell (CEC) morphology using manual and semi-automatic software enables an objective assessment of corneal endothelial pathology. However, the procedure is tedious, subjective, and not widely applied in clinical practice. We have developed the CellsDeepNet system to automatically segment and analyse CEC morphology. The CellsDeepNet system uses Contrast-Limited Adaptive Histogram Equalization (CLAHE) to improve the contrast of the CEC images and reduce the effects of non-uniform image illumination, the 2D Double-Density Dual-Tree Complex Wavelet Transform (2DDD-TCWT) to reduce noise, a Butterworth bandpass filter to enhance the CEC edges, and a moving average filter to adjust the brightness level. An improved version of U-Net was used to detect the boundaries of the CECs, regardless of CEC size. CEC morphology was measured as mean cell density (MCD, cells/mm²), mean cell area (MCA, μm²), mean cell perimeter (MCP, μm), polymegathism (coefficient of CEC size variation), and pleomorphism (percentage of hexagonal cells). The CellsDeepNet estimates correlated highly significantly with the manual estimations for MCD (r = 0.94), MCA (r = 0.99), MCP (r = 0.99), polymegathism (r = 0.92), and pleomorphism (r = 0.86), with p < 0.0001 for all the extracted clinical features. The Bland–Altman plots showed excellent agreement. The percentage difference between the manual and automated estimations was lower for the CellsDeepNet system than for the CEAS system and other state-of-the-art CEC segmentation systems on three large and challenging corneal endothelium image datasets captured using two different ophthalmic devices.
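
    The morphometric stage can be sketched directly: given a labelled cell segmentation and the pixel pitch in micrometres, the density, area, perimeter and polymegathism measures above follow from per-cell region properties. scikit-image is assumed; pleomorphism requires per-cell neighbour counting and is omitted, and the function name is illustrative.

```python
# Illustrative morphometric summary with scikit-image; `labels` is an integer
# image in which each corneal endothelial cell carries a unique label, and
# `um_per_px` is the pixel pitch in micrometres.
import numpy as np
from skimage import measure

def cec_morphometry(labels, um_per_px):
    props = measure.regionprops(labels)
    areas = np.array([p.area for p in props]) * um_per_px ** 2       # um^2 per cell
    perimeters = np.array([p.perimeter for p in props]) * um_per_px  # um per cell
    field_mm2 = labels.size * (um_per_px / 1000.0) ** 2              # field in mm^2
    return {
        "MCD_cells_per_mm2": len(props) / field_mm2,
        "MCA_um2": areas.mean(),
        "MCP_um": perimeters.mean(),
        # Coefficient of variation of cell area, i.e. polymegathism.
        "polymegathism": areas.std() / areas.mean(),
    }
```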

    ReID-DeePNet: A Hybrid Deep Learning System for Person Re-Identification

    Person re-identification has become an essential application within computer vision due to its ability to match the same person across non-overlapping cameras. However, it is a challenging task because of the broad camera views and the large number of pedestrians appearing in various poses. As a result, various supervised model-learning approaches have been utilized to locate and identify a person from a given input. Nevertheless, several of these approaches perform worse than expected when retrieving the right person in real time across multiple CCTV/camera views, because inaccurate segmentation of the person leads to incorrect classification. This paper proposes an efficient, real-time person re-identification system named ReID-DeePNet. It is based on fusing the matching scores generated by two different deep learning models, a convolutional neural network and a deep belief network, which extract discriminative feature representations from the pedestrian image. Initially, a segmentation procedure was developed based on merging the advantages of the Mask R-CNN and GrabCut algorithms to tackle the adverse effects caused by background clutter. Afterward, the two deep learning models extracted discriminative feature representations from the segmented pedestrian image, and their matching scores were fused to make the final decision. Several extensive experiments were conducted using three large-scale and challenging person re-identification datasets: Market-1501, CUHK03, and P-DESTRE. The ReID-DeePNet system achieved new state-of-the-art Rank-1 and mAP values on these three challenging ReID datasets.
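
    The score-level fusion step can be sketched in a few lines: min-max normalise the matching scores produced by the two models for one probe, combine them with a weighted sum, and rank the gallery identities. The equal weighting and the scores below are illustrative assumptions.

```python
# Illustrative score-level fusion of two models' matching scores for one
# probe image; the weighting scheme and example scores are placeholders.
import numpy as np

def fuse_scores(cnn_scores, dbn_scores, w_cnn=0.5):
    """Fuse two per-gallery matching-score vectors and rank the gallery."""
    def minmax(s):
        s = np.asarray(s, dtype=float)
        return (s - s.min()) / (s.max() - s.min() + 1e-12)
    fused = w_cnn * minmax(cnn_scores) + (1.0 - w_cnn) * minmax(dbn_scores)
    return np.argsort(fused)[::-1]  # gallery indices, best match first

# Example: scores for one probe against a five-identity gallery.
cnn = [0.2, 0.9, 0.4, 0.1, 0.6]
dbn = [0.3, 0.8, 0.5, 0.2, 0.4]
print(fuse_scores(cnn, dbn))  # identity 1 ranks first
```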