
    A Survey on Biometrics based Digital Image Watermarking Techniques and Applications

    Improvements in Internet technologies and the growing demand for online multimedia business have made digital copyright protection a major challenge for businesses involved in online content distribution through diverse models, including pay-per-view, subscription, and trading. Copyright protection and evidence of rightful ownership are major issues associated with the distribution of any digital images. Digital watermarking is a probable solution for digital content owners, offering security to the digital content. In recent years, digital watermarking has played a vital role in providing apposite solutions, and numerous studies have been carried out. In this paper, an extensive review of the prevailing literature on bio-watermarking is presented, together with a classification of the assortment of techniques used. In addition, a brief introduction to digital watermarking is given to acquaint the reader with the vital background on the subject.
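
    For readers new to the mechanics, the sketch below illustrates fragile least-significant-bit (LSB) embedding, one of the simplest baselines covered in watermarking surveys of this kind. It is a generic Python illustration rather than a technique from any particular surveyed paper, and all names in it are ours.

    import numpy as np

    def embed_lsb(cover: np.ndarray, bits: np.ndarray) -> np.ndarray:
        """Embed a flat 0/1 bit array into the least significant bit of each pixel."""
        flat = cover.flatten()                                # flatten() returns a copy
        flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # clear LSB, set watermark bit
        return flat.reshape(cover.shape)

    def extract_lsb(stego: np.ndarray, n_bits: int) -> np.ndarray:
        """Recover the first n_bits watermark bits from the stego image."""
        return stego.flatten()[:n_bits] & 1

    cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in grayscale image
    mark = np.random.randint(0, 2, 128, dtype=np.uint8)          # e.g. hashed biometric features
    stego = embed_lsb(cover, mark)
    assert np.array_equal(extract_lsb(stego, mark.size), mark)

    Because any pixel change alters the extracted bits, such fragile marks suit integrity checking; robust ownership marks are instead typically embedded in transform-domain coefficients.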

    Data wiping tool: ByteEditor Technique

    This Wiping Tool is an anti-forensic tool built to wipe data permanently from a laptop's storage. The tool ensures that wiped data cannot be recovered with any recovery tool. The objective of building this wiping tool is to maintain the confidentiality and integrity of data against unauthorized access. People tend to delete files in the normal way; however, such files risk being recovered, so the integrity and confidentiality of a deleted file cannot be protected. Wiping tools overwrite files with random strings so that the files are no longer readable, protecting their integrity and confidentiality. Nowadays, many wiping tools suffer from data breaches because they are unable to delete data permanently from the device, which undermines their main function and poses a threat to their users. Hence, a new wiping tool is developed to overcome this problem. The new tool, named Data Wiping tool, applies two wiping techniques: the first is Randomized Data, while the second is an enhanced wiping technique known as ByteEditor. ByteEditor is a combination of two different techniques, byte editing and byte deletion. The tool is built following an Object-Oriented methodology consisting of analysis, design, implementation, and testing. The tool is analyzed and compared with other wiping tools before its design starts. Once the design is done, the implementation phase takes place: the code is written in C# using Visual Studio 2010, and its functionality is tested to ensure the developed tool meets the objectives of the project. This tool is expected to contribute to the development of wiping tools and to solve problems found in other wiping tools.
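
    As a rough sketch of the random-overwrite idea described above (the paper's ByteEditor itself is written in C# and is not reproduced here), a minimal wipe in Python might look as follows. The function name and pass count are assumptions, and a plain in-place overwrite is not sufficient on SSDs or journaling filesystems, where copies of the data may survive elsewhere.

    import os
    import secrets

    def wipe_file(path: str, passes: int = 2) -> None:
        """Overwrite a file's contents with random bytes, then unlink it."""
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                f.write(secrets.token_bytes(size))  # replace contents with random data
                f.flush()
                os.fsync(f.fileno())                # force this pass to disk before the next
        os.remove(path)                             # finally delete the overwritten file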

    SECURING BIOMETRIC DATA


    Rapid intelligent watermarking system for high-resolution grayscale facial images

    Facial captures are widely used in many access control applications to authenticate individuals and grant access to protected information and locations. For instance, in passport or smart card applications, facial images must be secured during the enrollment process, prior to exchange and storage. Digital watermarking may be used to assure the integrity and authenticity of these facial images against unauthorized manipulation, through fragile and robust watermarking, respectively. Other biometric traits can also be embedded as invisible watermarks in these facial captures to improve individual verification. Evolutionary Computation (EC) techniques have been proposed in the Intelligent Watermarking (IW) literature to optimize watermark embedding parameters. The goal of such an optimization problem is to find the trade-off between the conflicting objectives of watermark quality and robustness. Securing streams of high-resolution biometric facial captures results in a large number of optimization problems with high-dimensional search spaces. For homogeneous image streams, the optimal solutions for one image block can be reused for other image blocks having the same texture features. The computational complexity of handling a stream of high-resolution facial captures can therefore be significantly reduced by recalling such solutions from an associative memory instead of re-optimizing each whole facial image. In this thesis, an associative memory is proposed that stores previously calculated solutions for different categories of texture, using the whole-image optimization results of a few training facial images. A multi-hypothesis approach is adopted: solutions for different clustering resolutions (numbers of block clusters based on texture features) are stored in the associative memory, and the optimal clustering resolution is then selected for each facial image during generalization, based on the watermarking metrics. This approach was verified using streams of facial captures from the PUT database (Kasinski et al., 2008) and compared against a baseline system representing traditional IW methods with full optimization for all stream images. The two systems were compared with respect to the quality of the solutions produced and the computational complexity measured in fitness evaluations. The proposed approach yielded a 95.5% decrease in computational burden with little impact on watermarking performance for a stream of 198 facial images. The proposed framework, Blockwise Multi-Resolution Clustering (BMRC), has been published in Machine Vision and Applications (Rabil et al., 2013a). With BMRC, the stream of high-dimensional optimization problems is replaced by a few training optimizations followed by recalls from an associative memory storing the training artifacts. Optimization problems with high-dimensional search spaces are challenging and complex, reaching up to 49k variables represented using 293k bits for high-resolution facial images. In this thesis, this large problem is decomposed into smaller problems representing image blocks, which resolves the convergence problems encountered when handling the larger problem. Local watermarking metrics are used in cooperative coevolution at the block level to reach the overall solution. The elitism mechanism is modified such that, for each position, the blocks with the highest local watermarking metrics are fetched across all candidate solutions and concatenated to form the elite candidate solutions.
This approach resolves the premature convergence of traditional EC methods, yielding a 17% improvement in watermarking fitness for facial images of resolution 2048×1536. The improved fitness is achieved within a few iterations, implying an optimization speedup. The proposed algorithm, the Blockwise Coevolutionary Genetic Algorithm (BCGA), has been published in Expert Systems with Applications (Rabil et al., 2013c). The concepts and frameworks presented in this thesis can be generalized to any stream of optimization problems with a large search space in which candidate solutions are composed of solutions to smaller-granularity subproblems that affect the overall solution. The challenge in applying this approach is finding the significant feature of the smaller granularity that drives the overall optimization problem. In this thesis, the texture features of the smaller-granularity blocks represented in the candidate solutions drive the watermarking fitness optimization of the whole image, and the local metrics of these subproblems indicate the fitness produced for the larger problem. Another application proposed in this thesis is to embed offline signature features as an invisible watermark in passport facial captures, to be used for individual verification during border crossing. The offline signature is captured from forms signed at the border and verified against the embedded features. Individual verification thus relies on one physical biometric trait, the facial capture, and one behavioral trait, the offline signature.
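
    The modified elitism can be pictured with a minimal sketch, assuming a population encoded as per-block parameter vectors: for each block position, the sub-solution with the best local watermarking metric is fetched across all candidates, and the winners are concatenated into an elite candidate. The encoding and the local_metric below are illustrative stand-ins, not the thesis implementation.

    import numpy as np

    def blockwise_elite(population: np.ndarray, local_metric) -> np.ndarray:
        """population: (n_candidates, n_blocks, block_dim) per-block embedding
        parameters; returns one elite candidate of shape (n_blocks, block_dim)."""
        n_candidates, n_blocks, _ = population.shape
        scores = np.array([[local_metric(population[c, b]) for b in range(n_blocks)]
                           for c in range(n_candidates)])  # per-block local fitness
        winners = scores.argmax(axis=0)                    # best candidate per block position
        return np.stack([population[winners[b], b] for b in range(n_blocks)])

    pop = np.random.rand(20, 16, 4)                    # 20 candidates, 16 blocks of 4 params
    elite = blockwise_elite(pop, local_metric=np.sum)  # toy metric standing in for quality/robustness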

    Multimodal biometrics scheme based on discretized eigen feature fusion for identical twins identification

    The subject of twins multimodal biometrics identification (TMBI) has consistently been an interesting and valuable area of study. Given its high reliability and acceptance, TMBI contributes greatly to the identification of twins from biometric traits. The variation among features produced by multimodal biometrics feature extraction determines the distinctive characteristics possessed by a twin. However, some of these features are inessential: they enlarge the search space and make generalization more difficult. The key challenge is therefore to single out the most salient features able to accurately recognize twins using multimodal biometrics. In twins identification, effective design of the methodology and of the fusion process is essential to success, since these processes manage and integrate vital information, including the highly distinctive biometric characteristics possessed by either twin. In the multimodal biometrics twins identification domain, extracting the best features from multiple traits of twins and fusing them remain unresolved problems. This research designs a new, more effective multimodal biometrics twins identification scheme by introducing Dis-Eigen feature-based fusion, which generates a unified representation of distinctive features from numerous modalities of twins. First, the Aspect United Moment Invariant (AUMI) was used as a global feature to extract the shape and style of the twins' handwriting and fingerprints. Then, the feature-based fusion was examined in terms of its generalization. Next, to achieve better classification accuracy, the Dis-Eigen feature-based fusion algorithm was used. Eight different classifiers were applied across four training and testing environment settings, and the most salient Dis-Eigen fused features were trained and tested to determine classification performance. The results show that twins identification improved as the intra-class similarity error decreased and the inter-class similarity error increased. With diverse classifiers, the identification rate exceeded 93%. Receiver Operating Characteristic (ROC) analysis shows that the proposed method considerably improves twins' handwriting-fingerprint identification, achieving a 90.25% identification rate at a False Acceptance Rate (FAR) of 0.01%, 93.15% at 0.5% FAR, and 98.69% at 1.00% FAR. The proposed solution offers a promising alternative for twins identification applications.
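
    To make the fusion step concrete, the sketch below shows a plain eigen-feature fusion in the spirit of the Dis-Eigen approach: modality feature vectors are concatenated and projected onto the leading eigenvectors of the training covariance, yielding one compact representation. The discretization stage of the actual scheme is omitted, and all names are assumptions.

    import numpy as np

    def eigen_fusion(handwriting: np.ndarray, fingerprint: np.ndarray, k: int = 16) -> np.ndarray:
        """Inputs: (n_samples, n_features) per modality; returns (n_samples, k)."""
        fused = np.hstack([handwriting, fingerprint])  # feature-level fusion by concatenation
        centered = fused - fused.mean(axis=0)
        cov = np.cov(centered, rowvar=False)
        _, eigvecs = np.linalg.eigh(cov)               # eigenvalues in ascending order
        top = eigvecs[:, -k:][:, ::-1]                 # k leading eigenvectors
        return centered @ top                          # unified eigen-feature representation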

    Writer Identification for Chinese Handwriting

    Chinese handwriting identification has become a hot research topic in pattern recognition and image processing. In this paper, we present an overview of relevant papers, from early related studies up to recent publications, on Chinese handwriting identification. The strengths, weaknesses, and accuracy of well-known approaches are reviewed, compared, summarized, and documented. This paper provides a broad spectrum of pattern recognition technology for assisting writer identification tasks, which are at the forefront of forensic and biometrics-based identification applications.

    Establishing the digital chain of evidence in biometric systems

    Traditionally, a chain of evidence or chain of custody refers to the chronological documentation, or paper trail, showing the seizure, custody, control, transfer, analysis, and disposition of evidence, physical or electronic. Whether in the criminal justice system, military applications, or natural disasters, ensuring the accuracy and integrity of such chains is of paramount importance. Intentional or unintentional alteration, tampering, or fabrication of digital evidence can lead to undesirable effects. We find that, despite the consequences at stake, historically no unique protocol or standardized procedure has existed for establishing such chains. Current practices rely on traditional paper trails and handwritten signatures as the foundation of chains of evidence. Copying, fabricating, or deleting electronic data is easier than ever, and establishing equivalent digital chains of evidence has become both necessary and desirable. We propose to treat a chain of digital evidence as a multi-component validation problem ensuring access control, confidentiality, integrity, and non-repudiation of origin. Our framework includes techniques from cryptography, keystroke analysis, digital watermarking, and hardware source identification. The work offers contributions to many of the fields used in forming the framework. Related to biometric watermarking, we provide a means of watermarking iris images without significantly impacting biometric performance. Specific to hardware fingerprinting, we establish the ability to verify the source of an image captured by biometric sensing devices such as fingerprint sensors and iris cameras. Related to keystroke dynamics, we establish that user stimulus familiarity is a driver of classification performance. Finally, example applications of the framework are demonstrated with data collected in crime scene investigations, people screening at ports of entry, naval maritime interdiction operations, and mass fatality incident disaster responses.
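
    As one concrete illustration of the integrity and non-repudiation goals (a generic sketch, not the dissertation's framework), custody records can be hash-chained so that any later alteration of a record invalidates every subsequent link; the record fields below are hypothetical.

    import hashlib
    import json
    import time

    def append_record(chain: list, actor: str, action: str, evidence_id: str) -> None:
        """Append a custody record that commits to the previous record's hash."""
        prev = chain[-1]["hash"] if chain else "0" * 64
        record = {"actor": actor, "action": action, "evidence": evidence_id,
                  "time": time.time(), "prev": prev}
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        chain.append(record)

    def verify_chain(chain: list) -> bool:
        """Recompute every hash; any tampered record breaks the chain."""
        prev = "0" * 64
        for rec in chain:
            body = {k: v for k, v in rec.items() if k != "hash"}
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if rec["prev"] != prev or digest != rec["hash"]:
                return False
            prev = rec["hash"]
        return True

    In a real deployment, each record would additionally be signed, and the layers described above (access control, watermarking, sensor fingerprinting, keystroke dynamics) would validate the evidence content itself, not just the log.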