13 research outputs found

    A multi-biometric iris recognition system based on a deep learning approach

    Multimodal biometric systems have been widely applied in many real-world applications due to their ability to deal with a number of significant limitations of unimodal biometric systems, including sensitivity to noise, population coverage, intra-class variability, non-universality, and vulnerability to spoofing. In this paper, an efficient real-time multimodal biometric system is proposed, based on building deep learning representations for images of both the right and left irises of a person and fusing the results using a ranking-level fusion method. The proposed deep learning system, called IrisConvNet, combines a Convolutional Neural Network (CNN) with a Softmax classifier to extract discriminative features from the input image, which represents the localized iris region, without any domain knowledge, and then classify it into one of N classes. In this work, a discriminative CNN training scheme is proposed that combines the back-propagation algorithm with the mini-batch AdaGrad optimization method, for weight updating and learning-rate adaptation respectively. In addition, other training strategies (e.g., the dropout method and data augmentation) are employed to evaluate different CNN architectures. The performance of the proposed system is tested on three public datasets collected under different conditions: the SDUMLA-HMT, CASIA-Iris-V3 Interval and IITD iris databases. The proposed system outperforms other state-of-the-art approaches (e.g., Wavelet transform, Scattering transform, Local Binary Pattern and PCA), achieving a Rank-1 identification rate of 100% on all the employed databases and a recognition time of less than one second per person.
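
    The training scheme above pairs back-propagated gradients with per-parameter AdaGrad learning rates. As an illustrative sketch only (a single linear unit on made-up data, not the authors' IrisConvNet), a mini-batch AdaGrad update could look like:

```python
import numpy as np

def adagrad_update(w, grad, cache, lr=0.5, eps=1e-8):
    """One mini-batch AdaGrad step: accumulate squared gradients and
    scale the step per parameter, so frequently updated weights take
    progressively smaller steps."""
    cache += grad ** 2
    w -= lr * grad / (np.sqrt(cache) + eps)
    return w, cache

# Toy usage: fit a single weight to y = 2x from random mini-batches.
rng = np.random.default_rng(0)
w, cache = np.zeros(1), np.zeros(1)
for _ in range(200):
    x = rng.normal(size=32)                             # mini-batch of 32
    grad = np.array([np.mean(2 * (w[0] - 2) * x * x)])  # d(MSE)/dw
    w, cache = adagrad_update(w, grad, cache)
```

    The per-parameter cache is the point of AdaGrad: the effective learning rate shrinks only for coordinates that have already received large gradients.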

    Region growing segmentation approach for image indexing and retrieval

    No full text
    We use a region growing technique to segment images. Based on the segmented regions, we select the region size used to construct indexing keys. By applying region growing to the DCT image, we reduce the number of regions to the segmented regions only. Based on these regions, we then construct the indexing keys used to match images, which reduces the time needed to build them. The indexing keys are constructed by calculating the distances between regions. Recursive region growing is not a new technique, but its application to DCT images for building indexing keys is quite new and has not been presented by many other authors. © 2007 Taylor & Francis Group
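
    As a rough illustration of the segmentation step above, the following sketch grows 4-connected regions over a small grid of DC-like values; the grid, threshold and connectivity are assumptions for illustration, not taken from the paper:

```python
def grow_region(img, seed, visited, thresh=10):
    """Collect all 4-connected pixels whose value is within `thresh`
    of the seed pixel, using an explicit stack instead of recursion."""
    h, w = len(img), len(img[0])
    seed_val = img[seed[0]][seed[1]]
    stack, region = [seed], []
    while stack:
        r, c = stack.pop()
        if (r, c) in visited or not (0 <= r < h and 0 <= c < w):
            continue
        if abs(img[r][c] - seed_val) > thresh:
            continue
        visited.add((r, c))
        region.append((r, c))
        stack += [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
    return region

def segment(img, thresh=10):
    """Partition the image into grown regions; each region can then be
    summarized (e.g. by size and mean value) as an indexing key."""
    visited, regions = set(), []
    for r in range(len(img)):
        for c in range(len(img[0])):
            if (r, c) not in visited:
                regions.append(grow_region(img, (r, c), visited, thresh))
    return regions

# Usage: three homogeneous patches of DC-like values yield three regions.
dc = [[10, 12, 90, 92],
      [11, 13, 91, 93],
      [50, 52, 91, 90]]
regions = segment(dc)
```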

    An efficient image retrieval through DC feature extraction

    No full text
    We propose a new, simple method of DC feature extraction that speeds up image retrieval and decreases the storage it requires by working on partially decoded Joint Photographic Experts Group (JPEG) compressed images. Our feature extraction is carried out directly on the JPEG compressed images: of the DCT coefficients, we extract only the DC component. By using the DC feature, constructing indexing keys requires only 1/64 of the data needed by the conventional Euclidean distance method, whilst maintaining high precision and recall. We also compare our proposed method against a DCT coding category approach. Although this method is not revolutionary and does not yet fully solve the speed problem of image indexing, it significantly reduces the cost of image retrieval in real applications, primarily for huge image databases.
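
    To illustrate the claimed 1/64 saving, the sketch below approximates each block's DC term by its 8x8 block mean (the DC DCT coefficient equals the block mean up to a constant factor) and compares keys by Euclidean distance; the block size follows standard JPEG practice, but this is not the paper's exact pipeline:

```python
import numpy as np

def dc_key(img):
    """Indexing key: one DC-like value per 8x8 block (the block mean,
    proportional to the DC DCT coefficient), so retrieval never
    touches the 63 AC coefficients of each block."""
    h, w = img.shape
    return np.array([[img[r:r + 8, c:c + 8].mean()
                      for c in range(0, w, 8)]
                     for r in range(0, h, 8)])

def dc_distance(key_a, key_b):
    """Euclidean distance between two DC keys: 64x fewer terms than a
    full per-coefficient block comparison."""
    return float(np.linalg.norm(key_a - key_b))

# Usage: a 16x16 image yields a compact 2x2 key.
key = dc_key(np.arange(256.0).reshape(16, 16))
```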

    Extracting objects and events from MPEG videos for highlight-based indexing and retrieval

    No full text
    Automatic recognition of highlights from videos is a fundamental and challenging problem for content-based indexing and retrieval applications. In this paper, we propose techniques to solve this problem using knowledge-supported extraction of semantics, with compressed-domain processing employed for efficiency. Firstly, knowledge-based rules are utilized for shot detection on extracted DC-images, and statistical skin detection is applied for human object detection. Secondly, by filtering outliers in the motion vectors, improved detection of camera motions such as zooming, panning and tilting is achieved. High-level semantics of video highlights are then automatically extracted via low-level analysis, namely the detection of human objects and camera motion events, and finally these highlights are used for shot-level annotation, indexing and retrieval. Results on a large test video data set have demonstrated the accuracy and robustness of the proposed techniques.
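
    A toy version of the motion-vector analysis above might median-filter the vector field to suppress outliers and separate pan/tilt from zoom by checking whether vectors diverge from the frame centre; all thresholds here are invented for illustration and are not the paper's:

```python
import numpy as np

def classify_camera_motion(mv, pan_thresh=1.0):
    """Classify a block motion-vector field mv of shape (H, W, 2).
    The median suppresses outlier vectors; zoom is flagged when
    vectors diverge from the frame centre."""
    h, w = mv.shape[:2]
    dx, dy = np.median(mv[..., 0]), np.median(mv[..., 1])
    ys, xs = np.mgrid[0:h, 0:w]
    # Mean correlation of block position (w.r.t. centre) with motion:
    # positive for zoom-out-like divergence, negative for zoom-in.
    div = np.mean((xs - w / 2) * mv[..., 0] + (ys - h / 2) * mv[..., 1])
    if abs(div) > max(abs(dx), abs(dy)) * h:
        return "zoom"
    if abs(dx) > pan_thresh and abs(dx) >= abs(dy):
        return "pan"
    if abs(dy) > pan_thresh:
        return "tilt"
    return "static"

# Usage: a uniform rightward field reads as a pan, a radial one as a zoom.
pan = np.zeros((8, 8, 2)); pan[..., 0] = 3.0
zoom = np.dstack(np.meshgrid(np.arange(8) - 3.5, np.arange(8) - 3.5))
```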

    Fusion of intensity and inter-component chromatic difference for effective and robust colour edge detection

    No full text
    Edge detection, especially from colour images, plays a very important role in many applications for image analysis, segmentation and recognition. Most existing methods extract colour edges either by fusing the edges detected from each colour component, or by detecting them from the intensity image, where inter-component information is ignored. In this study, an improved colour edge detection method is proposed whose significant advantage is the use of inter-component difference information for effective colour edge detection. For any given colour image C, a grey D-image is defined as the accumulated differences between each pair of its colour components, and another grey R-image is then obtained by weighting the D-image and the grey intensity image G. The final edges are determined through fusion of the edges extracted from the R-image and the G-image. Quantitative evaluations under various levels of Gaussian noise are carried out for further comparison. Comprehensive results on different test images prove that this approach outperforms edges detected in traditional colour spaces such as RGB, YCbCr and HSV in terms of effectiveness and robustness.
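
    A minimal sketch of the D-image/R-image construction described above; the weighting factor and the gradient-magnitude edge operator are assumptions, since the abstract does not fix them:

```python
import numpy as np

def d_image(c):
    """Accumulated absolute differences between each pair of colour
    components of an H x W x 3 image (the grey D-image above)."""
    r, g, b = c[..., 0], c[..., 1], c[..., 2]
    return np.abs(r - g) + np.abs(g - b) + np.abs(r - b)

def r_image(c, alpha=0.5):
    """Weighted combination of the D-image and the grey intensity
    image G (alpha is an assumed weighting)."""
    return alpha * d_image(c) + (1 - alpha) * c.mean(axis=-1)

def gradient_edges(img):
    """Simple finite-difference gradient magnitude as the edge map."""
    gy, gx = np.gradient(img)
    return np.hypot(gx, gy)

def colour_edges(c, alpha=0.5):
    """Fuse edges from the R-image and the intensity image by taking
    the per-pixel maximum."""
    c = c.astype(float)
    return np.maximum(gradient_edges(r_image(c, alpha)),
                      gradient_edges(c.mean(axis=-1)))

# Usage: an isoluminant grey-to-colour boundary is invisible to the
# intensity edge map but visible to the fused colour edge map.
iso = np.zeros((8, 8, 3))
iso[:, :4] = 60.0
iso[:, 4:] = [90.0, 60.0, 30.0]
```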

    Knowledge-based segmentation and semantic contents extraction from MPEG videos

    No full text
    Automatic recognition of highlights from videos is a fundamental and challenging problem for content-based indexing and retrieval applications. In this paper, we propose techniques to solve this problem by using knowledge-supported extraction of semantic contents, with compressed-domain processing employed for efficiency. Firstly, video shots are detected using knowledge-supported rules. Then, human objects are detected via statistical skin detection. Meanwhile, camera motions such as zoom-in are identified. Finally, highlights of zooming in on human objects are extracted for further annotation, indexing and retrieval of the whole videos. Results on a large set of test videos have demonstrated the accuracy and robustness of the proposed techniques.
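
    The statistical skin detection step could be sketched, for illustration, with fixed Cb/Cr ranges often quoted in the skin-detection literature; the paper learns its own statistics, so these bounds are assumptions:

```python
import numpy as np

def skin_mask(ycbcr):
    """Pixel-wise skin mask for a YCbCr image of shape (H, W, 3),
    using fixed chrominance ranges (77 <= Cb <= 127, 133 <= Cr <= 173)
    commonly cited for skin tones; luminance Y is ignored."""
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)

# Usage: one skin-toned pixel and one clearly non-skin pixel.
mask = skin_mask(np.array([[[100, 110, 150], [100, 50, 50]]]))
```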

    Knowledge-supported segmentation and semantic contents extraction from MPEG videos for highlight-based annotation, indexing and retrieval

    No full text
    Automatic recognition of highlights from videos is a fundamental and challenging problem for content-based indexing and retrieval applications. In this paper, we propose techniques to solve this problem by using knowledge-supported extraction of semantic contents, with compressed-domain processing employed for efficiency. Firstly, video shots are detected using knowledge-supported rules. Then, human objects are detected via statistical skin detection. Meanwhile, camera motions such as zoom-in are identified. Finally, highlights of zooming in on human objects are extracted and used for annotation, indexing and retrieval of the whole videos. Results on a large set of test videos have demonstrated the accuracy and robustness of the proposed techniques.

    Face detection based on skin color in image by neural networks

    No full text
    Face detection is one of the challenging problems in image processing. A novel face detection system is presented in this paper. The approach relies on skin color, with features extracted using the two-dimensional Discrete Cosine Transform (DCT) and neural networks; faces are detected using skin color derived from the DCT coefficients of the Cb and Cr feature vectors. The system first locates skin color, the main facial cue for detection; each skin-colored face candidate is then examined by a neural network, which learns facial features in order to classify whether the original image includes a face or not. The processing stage is based on normalization and the DCT, and the final classification is based on a neural network approach. Experimental results on upright frontal color face images from the Internet show an excellent detection rate. © 2007 IEEE
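
    As a hedged sketch of the chroma DCT feature step described above, the following extracts low-frequency 2-D DCT coefficients from a Cb or Cr channel as a feature vector for a downstream neural-network classifier; the orthonormal DCT-II basis is standard, while the number of retained coefficients is an assumption:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] *= 1 / np.sqrt(2)
    return m * np.sqrt(2 / n)

def chroma_dct_features(channel, keep=4):
    """Feature vector: the top-left keep x keep low-frequency 2-D DCT
    coefficients of a square chroma (Cb or Cr) channel; `keep` is an
    assumed choice, not taken from the paper."""
    d = dct_matrix(channel.shape[0])
    coeffs = d @ channel @ d.T
    return coeffs[:keep, :keep].ravel()

# Usage: a constant 8x8 chroma patch has all its energy in the DC term.
feats = chroma_dct_features(np.full((8, 8), 5.0))
```

    Keeping only the low-frequency corner discards fine texture but preserves the coarse chroma layout that a small classifier can learn from.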