
    Development of Multirate Filter-Based Region Features for Iris Identification

    The emergence of biometric systems is seen as the next-generation technological solution for strengthening social and national security. The evolution of biometrics has shifted the paradigm of authentication from classical token- and knowledge-based systems to systems based on physiological and behavioral traits. R&D on iris biometrics over the last decade has established it as one of the most promising traits. Although iris biometrics takes high-resolution near-infrared (NIR) images as input, its authentication accuracy is very commendable. Its performance is often influenced by the presence of noise, database size, and feature representation. This thesis focuses on the use of multi-resolution analysis (MRA) in developing suitable features for non-ideal iris images. Our investigation starts with an iris feature extraction technique using the Cohen-Daubechies-Feauveau 9/7 (CDF 9/7) filter bank. In this work, a technique has been proposed to deal with issues such as segmentation failure and occlusion. The experimental studies demonstrate the superiority of the CDF 9/7 filter bank over frequency-based techniques. Since there is scope for improving the frequency selectivity of the CDF 9/7 filter bank, a tunable filter bank is proposed to extract region-based features from non-cooperative iris images. The proposed method is based on a half-band polynomial of 14th order. Since regularity and frequency selectivity are inversely related, the filter coefficients are derived without imposing the maximum number of zeros. Also, the half-band polynomial is expressed in the x-domain so that semidefinite programming can be applied, which results in optimized coefficients for the analysis/synthesis filters. The next contribution of this thesis is the development of another powerful MRA known as the triplet half-band filter bank (THFB). The advantage of the THFB is the flexibility in choosing the frequency response, which allows one to overcome the magnitude constraints. The proposed filter bank has improved frequency selectivity along with other desired properties, and it is then used for iris feature extraction. The last contribution of the thesis describes a wavelet cepstral feature derived from the CDF 9/7 filter bank to characterize iris texture. The wavelet cepstrum feature helps in reducing the dimensionality of the detail coefficients; hence, a compact feature representation is possible with improved accuracy over CDF 9/7. The efficacy of the suggested features is validated for iris recognition on three publicly available databases, namely CASIAv3, UBIRISv1, and IITD. The features are compared with other transform-domain features such as FFT and Gabor filters, and a comprehensive evaluation is carried out for all suggested features as well. It has been observed that the suggested features show superior performance with respect to accuracy. Among all suggested features, the THFB has shown the best performance.
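
    As a rough illustration of the MRA-based feature extraction described above, the sketch below decomposes a normalized iris strip with a CDF 9/7 wavelet and pools simple subband statistics. It assumes PyWavelets, whose 'bior4.4' wavelet corresponds to the CDF 9/7 filter pair; the helper name iris_wavelet_features and the chosen statistics are illustrative, not the thesis's exact feature set.

```python
# Minimal sketch of multi-resolution iris feature extraction with the CDF 9/7
# filter bank, assuming PyWavelets ('bior4.4' implements the CDF 9/7 filters)
# and a pre-segmented, unwrapped iris image passed as a 2-D NumPy array.
import numpy as np
import pywt

def iris_wavelet_features(iris_strip: np.ndarray, levels: int = 3) -> np.ndarray:
    """Decompose the normalized iris strip and pool per-subband statistics."""
    coeffs = pywt.wavedec2(iris_strip.astype(float), wavelet="bior4.4", level=levels)
    features = []
    # coeffs[0] is the approximation band; the rest are (cH, cV, cD) detail tuples.
    features.append(np.mean(np.abs(coeffs[0])))
    for cH, cV, cD in coeffs[1:]:
        for band in (cH, cV, cD):
            features.extend([np.mean(np.abs(band)), np.std(band)])
    return np.asarray(features)

# Example usage with a synthetic 64x512 "rubber sheet" iris strip.
strip = np.random.rand(64, 512)
print(iris_wavelet_features(strip).shape)
```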

    Pixel-level Image Fusion Algorithms for Multi-camera Imaging System

    This thesis work is motivated by the potential and promise of image fusion technologies in multi-sensor image fusion systems and applications. With specific focus on pixel-level image fusion, the processing stage that follows image registration, we developed a graphical user interface for multi-sensor image fusion software using Microsoft Visual Studio and the Microsoft Foundation Class library. In this thesis, we propose and present image fusion algorithms with low computational cost, based upon spatial mixture analysis. The segment weighted average image fusion combines several low-spatial-resolution data sources from different sensors to create a high-resolution, large fused image. This research includes a segment-based step built upon a stepwise divide-and-combine process. In the second stage of the process, linear interpolation optimization is used to sharpen the image resolution. Implementation of these image fusion algorithms is completed on the graphical user interface we developed. Fusion of images from multiple sensors is easily accommodated by the algorithm, and the results are demonstrated at multiple scales. Using quantitative measures such as mutual information, we obtain quantifiable experimental results. We also use the image morphing technique to generate fused image sequences, to simulate the results of image fusion. While deploying our pixel-level image fusion approaches, we observed several challenges with popular image fusion methods: although their high computational cost and complex processing steps provide accurate fused results, they also make these algorithms hard to deploy in systems and applications that require real-time feedback and high flexibility but have only limited computational capability.
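
    A minimal sketch of pixel-level weighted-average fusion is given below, assuming NumPy and SciPy. The local-variance weighting and the function name weighted_average_fusion are illustrative assumptions; the thesis's segment weighted average algorithm and its divide-and-combine step are not reproduced here.

```python
# Minimal sketch of pixel-level weighted-average fusion of two co-registered
# grayscale images. The weighting scheme (local variance as an activity measure)
# is an illustrative assumption, not the thesis's exact algorithm.
import numpy as np
from scipy.ndimage import uniform_filter

def weighted_average_fusion(img_a: np.ndarray, img_b: np.ndarray, win: int = 9) -> np.ndarray:
    """Fuse two registered grayscale images using local-variance weights."""
    a, b = img_a.astype(float), img_b.astype(float)
    # Local variance approximates how "informative" each pixel neighbourhood is.
    var_a = uniform_filter(a ** 2, win) - uniform_filter(a, win) ** 2
    var_b = uniform_filter(b ** 2, win) - uniform_filter(b, win) ** 2
    w_a = var_a / (var_a + var_b + 1e-12)
    return w_a * a + (1.0 - w_a) * b

# Example usage with two synthetic sensor images of the same scene.
rng = np.random.default_rng(0)
sensor1 = rng.random((256, 256))
sensor2 = rng.random((256, 256))
fused = weighted_average_fusion(sensor1, sensor2)
print(fused.shape, float(fused.min()), float(fused.max()))
```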

    Application of Stochastic Diffusion for Hiding High Fidelity Encrypted Images

    Cryptography coupled with information hiding has received increased attention in recent years and has become a major research theme because of the importance of protecting encrypted information in any Electronic Data Interchange system in a way that is both discrete and covert. One of the essential limitations of any cryptographic system is that the encrypted data provides an indication of its importance, which arouses suspicion and makes it vulnerable to attack. Information hiding, or Steganography, provides a potential solution to this issue by making the data imperceptible, the security of the hidden information being under threat only if its existence is detected through Steganalysis. This paper focuses on a study of methods for hiding encrypted information, specifically methods that encrypt data before embedding it in host data, where the 'data' is a full colour digital image. Such methods provide a greater level of data security, especially when the information is to be transmitted over the Internet, for example, since a potential attacker needs to first detect, then extract, and then decrypt the embedded data in order to recover the original information. After providing an extensive survey of the current methods available, we present a new method of encrypting and then hiding full colour images in three full colour host images without loss of fidelity following data extraction and decryption. The applications of this technique, which is based on an approach called 'Stochastic Diffusion', are wide-ranging and include covert image information interchange, digital image authentication, video authentication, copyright protection, and digital rights management of image data in general.
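
    The encrypt-then-embed workflow described above can be sketched roughly as below with NumPy. This is a generic illustration (key-seeded XOR scrambling followed by LSB embedding), not the paper's Stochastic Diffusion algorithm, and unlike the paper's method it is lossy for the hidden image.

```python
# Generic sketch of the encrypt-then-hide idea: the plaintext image is first
# scrambled with a key-seeded noise field and then embedded in the least
# significant bits of a host image. Illustrative only; the paper's Stochastic
# Diffusion method recovers the hidden image without loss of fidelity.
import numpy as np

def encrypt(image: np.ndarray, key: int) -> np.ndarray:
    """XOR the 8-bit image with a pseudorandom field derived from the key."""
    rng = np.random.default_rng(key)
    noise = rng.integers(0, 256, size=image.shape, dtype=np.uint8)
    return image ^ noise  # XOR is its own inverse, so decryption reuses encrypt()

def embed(host: np.ndarray, cipher: np.ndarray) -> np.ndarray:
    """Hide the top 2 bits of the cipher image in the 2 LSBs of the host."""
    payload = cipher >> 6                      # keep only the 2 most significant bits
    return (host & np.uint8(0xFC)) | payload   # overwrite the host's 2 LSBs

def extract(stego: np.ndarray) -> np.ndarray:
    """Recover a coarse approximation of the cipher image from the stego image."""
    return (stego & np.uint8(0x03)) << 6

# Example usage with synthetic 8-bit images.
rng = np.random.default_rng(1)
secret = rng.integers(0, 256, (128, 128), dtype=np.uint8)
host = rng.integers(0, 256, (128, 128), dtype=np.uint8)
stego = embed(host, encrypt(secret, key=42))
recovered = encrypt(extract(stego), key=42)    # decrypt the extracted payload
print(stego.shape, recovered.dtype)
```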

    Wavelet Theory

    The wavelet is a powerful mathematical tool that plays an important role in science and technology. This book looks at some of the most creative and popular applications of wavelets, including biomedical signal processing, image processing, communication signal processing, the Internet of Things (IoT), acoustical signal processing, financial market data analysis, energy and power management, and COVID-19 pandemic measurements and calculations. The editor's personal interest is the application of the wavelet transform to identify time-domain changes in signals and their corresponding frequency components, and to improve power amplifier behavior.
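
    As a small illustration of that use of the wavelet transform, the sketch below applies a continuous wavelet transform (assuming PyWavelets) to localise a frequency change in a test signal in both time and frequency; the signal, wavelet, and scale choices are arbitrary examples, not taken from the book.

```python
# Minimal sketch: use the continuous wavelet transform to track where a signal's
# dominant frequency changes over time. Assumes PyWavelets; all values are toy.
import numpy as np
import pywt

fs = 1000.0                                   # assumed sampling frequency in Hz
t = np.arange(0, 1, 1 / fs)
# A signal whose dominant frequency switches from 50 Hz to 120 Hz halfway through.
signal = np.where(t < 0.5, np.sin(2 * np.pi * 50 * t), np.sin(2 * np.pi * 120 * t))

scales = np.arange(1, 128)
coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)
# The scale with the largest magnitude at each instant tracks the frequency switch.
dominant_freq = freqs[np.abs(coeffs).argmax(axis=0)]
print(dominant_freq[100], dominant_freq[900])   # roughly 50 Hz early, 120 Hz later
```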

    Multi-resolution dental image registration based on genetic algorithm

    The Automated Dental Identification System (ADIS) is a post-mortem dental identification system. This thesis presents dental image registration, required for the preprocessing steps of the image comparison component of ADIS. We propose a multi-resolution dental image registration method based on genetic algorithms. The main objective of this research is to develop techniques for registering extracted subject regions of interest with corresponding reference regions of interest. We investigated and implemented registration using two multi-resolution techniques, namely image sub-sampling and wavelet decomposition. Multi-resolution techniques help reduce the amount of search data, since initial registration is carried out at lower resolutions and the results are updated as the resolution increases. We adopted edges as the image features to be aligned. Affine transformations were selected to transform the subject dental region of interest to achieve better alignment with the reference region of interest; these transformations are known to capture complex image distortions. The similarity between the subject and reference images is computed using the Oriented Hausdorff Similarity measure, which is robust to severe noise and image degradation. A genetic algorithm was adopted to search for the transformation parameters that give the maximum similarity score. Testing results show that the developed registration algorithm yielded reasonable accuracy for dental test cases that contained slight misalignments. The relative percentage errors between the known and estimated transformation parameters were less than 20% with a termination criterion of a ten-minute time limit. Further research is needed for dental cases that contain a high degree of misalignment, noise, and distortion.
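
    A rough sketch of the registration idea follows, assuming NumPy and SciPy. A plain random search stands in for the genetic algorithm, and SciPy's directed Hausdorff distance stands in for the Oriented Hausdorff Similarity measure, so this illustrates the affine parameter-search loop rather than the thesis's exact method.

```python
# Minimal sketch: search for affine parameters that best align subject edge
# points to reference edge points, scoring candidates with a Hausdorff-type
# measure. Random search is an illustrative stand-in for the genetic algorithm.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def apply_affine(points: np.ndarray, params: np.ndarray) -> np.ndarray:
    """Apply a 6-parameter affine transform [a, b, c, d, tx, ty] to Nx2 points."""
    a, b, c, d, tx, ty = params
    A = np.array([[a, b], [c, d]])
    return points @ A.T + np.array([tx, ty])

def register(subject: np.ndarray, reference: np.ndarray, iters: int = 2000, seed: int = 0):
    """Random-search surrogate for the GA over affine parameters."""
    rng = np.random.default_rng(seed)
    best_params, best_cost = None, np.inf
    for _ in range(iters):
        # Sample near the identity transform with small shear/scale and translation.
        params = np.array([1, 0, 0, 1, 0, 0]) + rng.normal(0, [0.05] * 4 + [5.0] * 2)
        cost = directed_hausdorff(apply_affine(subject, params), reference)[0]
        if cost < best_cost:
            best_params, best_cost = params, cost
    return best_params, best_cost

# Example usage: recover a small known shift between two synthetic edge-point sets.
rng = np.random.default_rng(1)
reference = rng.random((200, 2)) * 100
subject = reference - np.array([3.0, -2.0])          # subject shifted by (+3, -2)
params, cost = register(subject, reference)
print(np.round(params, 2), round(cost, 2))
```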

    Biometric iris image segmentation and feature extraction for iris recognition

    The continued threat to security in our interconnected world today calls for an urgent solution. Iris biometrics, like many other biometric systems, provides an alternative solution to this lingering problem. Although iris recognition has been extensively studied, it is nevertheless not a fully solved problem, which is the factor inhibiting its implementation in real-world situations today. There are three main problems facing existing iris recognition systems: 1) lack of robustness of the algorithms in handling non-ideal iris images, 2) slow speed of the algorithms, and 3) limited applicability to existing systems in real-world situations. In this thesis, six novel approaches were derived and implemented to address these current limitations of existing iris recognition systems. A novel fast and accurate segmentation approach based on the combination of graph-cut optimization and an active contour model is proposed to define the irregular boundaries of the iris in a hierarchical two-level approach. In the first level, the approximate boundary of the pupil/iris is estimated using a method based on the Hough transform for the pupil and an adapted starburst algorithm for the iris. Subsequently, in the second level, the final irregular boundary of the pupil/iris is refined and segmented using the graph-cut based active contour (GCBAC) model proposed in this work. The segmentation is performed in two stages, whereby the pupil is segmented before the iris. In order to detect and eliminate noise and reflection artefacts which might introduce errors into the algorithm, a preprocessing technique based on adaptive weighted edge detection and high-pass filtering is used to detect reflections in the high-intensity areas of the image, while exemplar-based image inpainting is used to eliminate the reflections. After segmentation of the iris boundaries, a post-processing operation based on a combination of a block classification method and a statistical prediction approach is used to detect any superimposed occluding eyelashes/eyeshadows. Normalization of the iris image is achieved through the rubber sheet model. In the second stage, an approach based on the construction of complex wavelet filters and rotation of the filters to the principal texture direction is used to extract important iris information, while a modified particle swarm optimization (PSO) is used to select the most prominent iris features for iris encoding. Classification of the iris code is performed using adaptive support vector machines (ASVM). Experimental results demonstrate that the proposed approach achieves an accuracy of 98.99% and is computationally about 2 times faster than the best existing approach. Funded by Ebonyi State University and the Education Task Fund, Nigeria.
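
    Of the steps above, the rubber sheet normalization is the most self-contained to illustrate. The sketch below (assuming NumPy, circular boundaries, and nearest-neighbour sampling) unwraps the iris annulus into a fixed-size strip; the thesis itself works with irregular, refined boundaries rather than the circles assumed here.

```python
# Minimal sketch of rubber sheet normalization: the iris annulus between the
# pupil and limbic boundaries is resampled onto a fixed-size rectangular strip.
# Circular boundaries and nearest-neighbour sampling are simplifying assumptions.
import numpy as np

def rubber_sheet(eye: np.ndarray, pupil, iris, radial: int = 64, angular: int = 512) -> np.ndarray:
    """Unwrap the annulus between circles pupil=(xp, yp, rp) and iris=(xi, yi, ri)."""
    xp, yp, rp = pupil
    xi, yi, ri = iris
    theta = np.linspace(0, 2 * np.pi, angular, endpoint=False)
    r = np.linspace(0, 1, radial)
    # For each (r, theta), interpolate linearly between the two boundary circles.
    x = (1 - r)[:, None] * (xp + rp * np.cos(theta)) + r[:, None] * (xi + ri * np.cos(theta))
    y = (1 - r)[:, None] * (yp + rp * np.sin(theta)) + r[:, None] * (yi + ri * np.sin(theta))
    rows = np.clip(np.round(y).astype(int), 0, eye.shape[0] - 1)
    cols = np.clip(np.round(x).astype(int), 0, eye.shape[1] - 1)
    return eye[rows, cols]

# Example usage on a synthetic eye image with assumed circle parameters.
eye = np.random.rand(280, 320)
strip = rubber_sheet(eye, pupil=(160, 140, 30), iris=(160, 140, 100))
print(strip.shape)   # (64, 512) normalized iris strip
```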

    Human-Centric Machine Vision

    Recently, algorithms for processing visual information have greatly evolved, providing efficient and effective solutions to cope with the variability and complexity of real-world environments. These achievements have led to the development of Machine Vision systems that go beyond typical industrial applications, where the environments are controlled and the tasks are very specific, towards innovative solutions that address the everyday needs of people. Human-Centric Machine Vision can help solve the problems raised by the needs of our society, e.g. security and safety, health care, medical imaging, and human-machine interfaces. In such applications it is necessary to handle changing, unpredictable, and complex situations, and to take account of the presence of humans.

    Application of variational mode decomposition in vibration analysis of machine components

    Monitoring and diagnosis of machinery in maintenance are often undertaken using vibration analysis. The machine vibration signal is invariably complex and diverse, and thus useful information and features are difficult to extract. Variational mode decomposition (VMD) is a recent signal processing method that is able to extract important features from machine vibration signals. The performance of the VMD method depends on the selection of its input parameters, especially the mode number and the balancing parameter (also known as the quadratic penalty term). However, the current VMD method still relies on manual selection of the input parameters, which is subject to the interpretation of experienced experts. Hence, machine diagnosis becomes time-consuming and prone to error. The aim of this research was to propose an automated method for selecting the VMD input parameters. The proposed method consists of a two-stage selection: the first stage selects the initial mode number, and the second stage selects the optimized mode number and balancing parameter. A new machine diagnosis approach was developed, named VMD Differential Evolution Algorithm (VMDEA)-Extreme Learning Machine (ELM). Vibration signal datasets were reconstructed using VMDEA, and multi-domain features consisting of time-domain, frequency-domain, and multi-scale fuzzy entropy features were extracted. It was demonstrated that the VMDEA method reduced the computational time by about 14% to 53% compared to the VMD-Genetic Algorithm (GA), VMD-Particle Swarm Optimization (PSO), and VMD-Differential Evolution (DE) approaches for bearing, shaft, and gear data. It also exhibited better convergence, with about two to nine fewer iterations than VMD-GA, VMD-PSO, and VMD-DE for bearing, shaft, and gear data. VMDEA-ELM achieved classification accuracy about 11% to 20% higher than Empirical Mode Decomposition (EMD)-ELM, Ensemble EMD (EEMD)-ELM, and Complementary EEMD (CEEMD)-ELM for bearing, shaft, and gear data. The bearing datasets from Case Western Reserve University were tested with the VMDEA-ELM model and compared with Support Vector Machine (SVM)-Dempster-Shafer (DS), EEMD Optimal Mode Multi-scale Fuzzy Entropy Fault Diagnosis (EOMSMFD), Wavelet Packet Transform (WPT)-Local Characteristic-scale Decomposition (LCD)-ELM, and Arctangent S-shaped PSO least squares support vector machine (ATSWPLM) models in terms of classification accuracy. The VMDEA-ELM model demonstrates better diagnosis accuracy, by small differences of 2% to 4%, compared to EOMSMFD and WPT-LCD-ELM, but 4% to 5% lower diagnosis accuracy compared to SVM-DS and ATSWPLM. The VMDEA-ELM diagnosis approach also provides classification about 6 to 40 times faster than the Back Propagation Neural Network (BPNN) and Support Vector Machine (SVM). This study provides an improved solution for determining optimized VMD parameters using VMDEA. It also demonstrates a more accurate and effective diagnostic approach for machine maintenance using VMDEA-ELM.
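
    A rough sketch of the automated parameter selection idea is shown below: SciPy's differential evolution searches over the mode number K and balancing parameter alpha, scoring each candidate by the mean envelope entropy of the resulting modes. The VMD call assumes the third-party vmdpy package and its VMD(f, alpha, tau, K, DC, init, tol) interface, and the objective and bounds are illustrative choices, not the VMDEA formulation from the thesis.

```python
# Illustrative sketch of selecting VMD parameters (K, alpha) with differential
# evolution. Assumes the third-party vmdpy package; the envelope-entropy
# objective and the parameter bounds are example choices only.
import numpy as np
from scipy.optimize import differential_evolution
from scipy.signal import hilbert
from vmdpy import VMD   # assumed API: u, u_hat, omega = VMD(f, alpha, tau, K, DC, init, tol)

def envelope_entropy(mode: np.ndarray) -> float:
    env = np.abs(hilbert(mode))
    p = env / (env.sum() + 1e-12)
    return float(-np.sum(p * np.log(p + 1e-12)))

def objective(params: np.ndarray, signal: np.ndarray) -> float:
    K = int(round(params[0]))           # mode number (rounded to an integer)
    alpha = float(params[1])            # balancing / quadratic penalty parameter
    u, _, _ = VMD(signal, alpha, 0.0, K, 0, 1, 1e-7)
    return float(np.mean([envelope_entropy(m) for m in u]))

# Synthetic vibration-like signal: two tones plus a weak impulsive component.
fs = 2000
t = np.arange(0, 1, 1 / fs)
signal = np.sin(2 * np.pi * 35 * t) + 0.5 * np.sin(2 * np.pi * 180 * t)
signal += 0.3 * np.sin(2 * np.pi * 600 * t) * (np.sin(2 * np.pi * 10 * t) > 0.95)

result = differential_evolution(objective, bounds=[(2, 8), (200, 5000)],
                                args=(signal,), maxiter=10, popsize=6, seed=0)
print("best K:", int(round(result.x[0])), "best alpha:", round(result.x[1], 1))
```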

    Performance comparison of intrusion detection systems and application of machine learning to Snort system

    This study investigates the performance of two open source intrusion detection systems (IDSs), namely Snort and Suricata, for accurately detecting malicious traffic on computer networks. Snort and Suricata were installed on two different but identical computers, and their performance was evaluated at a 10 Gbps network speed. It was noted that Suricata could process network traffic at a higher speed than Snort with a lower packet drop rate, but it consumed more computational resources. Snort had higher detection accuracy and was thus selected for further experiments. It was observed that Snort triggered a high rate of false positive alarms. To solve this problem, a Snort adaptive plug-in was developed. To select the best-performing algorithm for the Snort adaptive plug-in, an empirical study was carried out with different learning algorithms, and the Support Vector Machine (SVM) was selected. A hybrid version of SVM and fuzzy logic produced better detection accuracy, but the best result was achieved using an SVM optimised with the firefly algorithm, giving a false positive rate (FPR) of 8.6% and a false negative rate (FNR) of 2.2%, which is a good result. The novelty of this work lies in the performance comparison of two IDSs at 10 Gbps and the application of hybrid and optimised machine learning algorithms to Snort.
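
    A minimal sketch of the machine-learning stage is given below, assuming scikit-learn and synthetic per-alert features; the firefly-algorithm tuning of the SVM hyperparameters is omitted, and the feature names and values are hypothetical placeholders rather than the study's dataset.

```python
# Minimal sketch: train an SVM on numeric alert/flow features to separate true
# attacks from false positives. Data and feature names are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical per-alert features: packet rate, mean payload size, distinct ports, duration.
benign = rng.normal([50, 300, 3, 2.0], [15, 80, 1.5, 0.8], size=(500, 4))
attack = rng.normal([400, 90, 40, 0.3], [120, 40, 10, 0.2], size=(500, 4))
X = np.vstack([benign, attack])
y = np.array([0] * 500 + [1] * 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
model.fit(X_tr, y_tr)

# False positive / false negative rates on the held-out split.
pred = model.predict(X_te)
fpr = np.mean(pred[y_te == 0] == 1)
fnr = np.mean(pred[y_te == 1] == 0)
print(f"FPR={fpr:.3f}  FNR={fnr:.3f}")
```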

    A Survey on Biometrics and Cancelable Biometrics Systems

    Nowadays, biometric systems have replaced password- or token-based authentication systems in many fields to improve the level of security. However, biometric systems are also vulnerable to security threats. Unlike password-based systems, biometric templates cannot be replaced if lost or compromised. To deal with the issue of compromised biometric templates, template protection schemes have evolved to make it possible to replace the biometric template. Cancelable biometrics is one such template protection scheme, in which a biometric template is replaced when the stored template is stolen or lost. It is a feature-domain transformation in which a distorted version of a biometric template is generated and matched in the transformed domain. This paper presents a review of the state of the art and an analysis of different existing biometric-based authentication methods and cancelable biometric systems, with an elaborate focus on cancelable biometrics in order to show its advantages over standard biometric systems through generalized standards and guidelines drawn from the literature. We also propose a highly secure method for cancelable biometrics using a non-invertible function based on the Discrete Cosine Transform (DCT) and Huffman encoding. We tested and evaluated the proposed method with 50 users and achieved good results.
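
    A rough sketch of a DCT-based non-invertible (cancelable) transform is shown below, assuming NumPy and SciPy. The key-dependent coefficient selection and sign flipping are illustrative choices, and the Huffman encoding stage of the proposed method is omitted, so this is a generic example of the idea rather than the paper's scheme.

```python
# Illustrative cancelable-template sketch: the biometric feature vector is
# transformed with a DCT and a key-dependent subset of coefficients is kept
# (discarding the rest makes the mapping non-invertible). Huffman encoding is
# omitted; parameters are assumptions for illustration only.
import numpy as np
from scipy.fft import dct

def cancelable_template(features: np.ndarray, user_key: int, keep: int = 64) -> np.ndarray:
    """Derive a revocable template; issuing a new key yields a new template."""
    coeffs = dct(features.astype(float), norm="ortho")
    rng = np.random.default_rng(user_key)
    idx = rng.permutation(coeffs.size)[:keep]      # key-dependent coefficient selection
    sign = rng.choice([-1.0, 1.0], size=keep)      # key-dependent sign flips
    return coeffs[idx] * sign

# Example: the same raw features yield different templates under different keys,
# so a compromised template can be revoked by re-enrolling with a new key.
features = np.random.default_rng(7).random(256)
t_old = cancelable_template(features, user_key=1234)
t_new = cancelable_template(features, user_key=9999)
print(t_old.shape, np.allclose(t_old, t_new))      # (64,) False
```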