602 research outputs found

    Process of Fingerprint Authentication using Cancelable Biohashed Template

    Template protection using cancelable biometrics prevents data loss and the hacking of stored templates by providing considerable privacy and security. Hashing and salting techniques are used to build resilient systems. The salted-password method protects passwords against several types of attack, namely brute-force, dictionary, and rainbow-table attacks. Salting means that random data is added to the input of the hash function so that the output is unique; salts act as speed bumps on an attacker's road to breaching user data. This research proposes a contemporary two-factor authenticator called biohashing. The biohashing procedure is implemented as a repeated inner product between a key drawn from a pseudo-random number generator and the fingerprint features, which form a network of minutiae. Cancelable template authentication applied at a fingerprint-based sales counter accelerates the payment process. The fingerhash is the code produced by applying biohashing to a fingerprint: a binary string obtained by choosing each bit from the sign of the projection relative to a preset threshold. Experiments are carried out on the benchmark FVC 2002 DB1 dataset. Authentication accuracy is found to be nearly 97%. Results compared with state-of-the-art approaches are promising.
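    The projection-and-sign step described in this abstract can be sketched as follows. This is a minimal illustration only: the fixed-length minutiae-derived feature vector, the seeded random generator standing in for the user token, the code length, and the zero threshold are all assumptions, not values taken from the paper.

    ```python
    import numpy as np

    def biohash(feature_vector, user_seed, code_length=128, threshold=0.0):
        """Project a fingerprint feature vector onto pseudo-random directions
        derived from a user-specific seed (the second factor) and binarize by
        comparing each projection against a preset threshold."""
        rng = np.random.default_rng(user_seed)
        # Pseudo-random projection matrix; orthonormalizing the directions is a
        # common choice (assumed here) to keep projections roughly independent.
        basis = rng.standard_normal((code_length, feature_vector.size))
        q, _ = np.linalg.qr(basis.T)
        projections = q.T @ feature_vector  # repeated inner products
        return (projections > threshold).astype(np.uint8)  # the "fingerhash"

    # Hypothetical usage: a 256-dimensional minutiae-derived vector.
    features = np.random.rand(256)
    fingerhash = biohash(features, user_seed=42)
    print(fingerhash[:16])
    ```

    Because the seed acts as a second factor, a stolen fingerhash can be revoked by simply issuing a new seed, which is the cancelable-template property the abstract refers to.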

    Searches for baryon number violation in neutrino experiments: a white paper

    Baryon number conservation is not guaranteed by any fundamental symmetry within the standard model, and therefore has been a subject of experimental and theoretical scrutiny for decades. So far, no evidence for baryon number violation has been observed. Large underground detectors have long been used for both neutrino detection and searches for baryon number violating processes. The next generation of large neutrino detectors will seek to improve upon the limits set by past and current experiments and will cover a range of lifetimes predicted by several Grand Unified Theories. In this White Paper, we summarize theoretical motivations and experimental aspects of searches for baryon number violation in neutrino experiments

    Cybersecurity: Past, Present and Future

    The digital transformation has created a new digital space known as cyberspace. This new cyberspace has improved the workings of businesses, organizations, governments, society as a whole, and the day-to-day life of individuals. With these improvements come new challenges, and one of the main challenges is security. The security of this new cyberspace is called cybersecurity. Cyberspace has given rise to new technologies and environments such as cloud computing, smart devices, the Internet of Things (IoT), and several others. To keep pace with these advancements in cyber technologies, there is a need to expand research and develop new cybersecurity methods and tools to secure these domains and environments. This book is an effort to introduce the reader to the field of cybersecurity, highlight current issues and challenges, and provide future directions to mitigate or resolve them. The main specializations of cybersecurity covered in this book are software security, hardware security, the evolution of malware, biometrics, cyber intelligence, and cyber forensics. We must learn from the past, evolve our present, and improve the future. Based on this objective, the book covers the past, present, and future of these main specializations of cybersecurity. The book also examines upcoming areas of research in cyber intelligence, such as hybrid augmented intelligence and explainable artificial intelligence (AI). Human and AI collaboration can significantly increase the performance of a cybersecurity system. Interpreting and explaining machine learning models, i.e., explainable AI, is an emerging field of study with a lot of potential to improve the role of AI in cybersecurity.
    Comment: Author's copy of the book published under ISBN: 978-620-4-74421-

    Intelligent interface agents for biometric applications

    This thesis investigates the benefits of applying the intelligent agent paradigm to biometric identity verification systems. Multimodal biometric systems, despite their additional complexity, hold the promise of providing a higher degree of accuracy and robustness. Multimodal biometric systems are examined in this work, leading to the design and implementation of a novel distributed multimodal identity verification system based on an intelligent agent framework. User interface design issues are also important in the domain of biometric systems and present an exceptional opportunity for employing adaptive interface agents. Through the use of such interface agents, system performance may be improved, leading to an increase in recognition rates over a non-adaptive system while producing a more robust and agreeable user experience. The investigation of such adaptive systems has been a focus of the work reported in this thesis. The research presented in this thesis is divided into two main parts. Firstly, the design, development, and testing of a novel distributed multimodal authentication system employing intelligent agents is presented. The second part details the design and implementation of an adaptive interface layer based on interface agent technology and demonstrates its integration with a commercial fingerprint recognition system. The performance of these systems is then evaluated using databases of biometric samples gathered during the research. The results obtained from the experimental evaluation of the multimodal system demonstrated a clear improvement in accuracy compared to a unimodal biometric approach. The adoption of the intelligent agent architecture at the interface level resulted in a system whose false reject rates were reduced compared to a system that did not employ an intelligent interface. The results obtained from both systems clearly demonstrate the benefits of combining an intelligent agent framework with a biometric system to provide a more robust and flexible application.
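    As a rough illustration of why combining modalities can improve accuracy, the sketch below fuses two match scores with a weighted sum and a single decision threshold. The abstract does not state the thesis's actual fusion rule; the modality names, weights, score ranges, and threshold here are placeholders for illustration only.

    ```python
    # Hypothetical score-level fusion of two biometric matchers.
    # Scores are assumed normalized to [0, 1]; weights and threshold are
    # illustrative values, not taken from the thesis.

    def fuse_scores(fingerprint_score: float, second_modality_score: float,
                    w_fp: float = 0.6, w_other: float = 0.4) -> float:
        """Weighted-sum fusion of two normalized match scores."""
        return w_fp * fingerprint_score + w_other * second_modality_score

    def verify(fingerprint_score: float, second_modality_score: float,
               threshold: float = 0.55) -> bool:
        """Accept the claimed identity if the fused score clears the threshold."""
        return fuse_scores(fingerprint_score, second_modality_score) >= threshold

    # A genuine attempt that is weak in one modality can still be accepted,
    # while an attempt strong in only one modality is rejected.
    print(verify(0.45, 0.80))  # True  (fused score 0.59)
    print(verify(0.85, 0.05))  # False (fused score 0.53)
    ```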

    Fusion of fingerprint presentation attacks detection and matching: a real approach from the LivDet perspective

    Liveness detection is explicitly required of current personal verification systems in many security applications. In fact, the design of any biometric verification system cannot ignore its vulnerability to spoofing or presentation attacks (PAs), which must be addressed by effective countermeasures from the beginning of the design process. However, despite significant improvements, especially through the adoption of deep learning approaches to fingerprint Presentation Attack Detectors (PADs), current research has said little about their effectiveness when embedded in fingerprint verification systems. We believe this gap is explained by the lack of instruments for investigating the problem, that is, for modelling the cause-effect relationships when two systems (spoof detection and matching) with non-zero error rates are integrated. To fill this gap in the literature, this PhD thesis presents a novel performance simulation model based on the probabilistic relationships between the Receiver Operating Characteristics (ROCs) of the two systems when they are implemented sequentially, which is the most straightforward, flexible, and widespread integration approach. We carry out simulations on the PAD algorithms' ROCs submitted to the LivDet 2017-2019 editions, the NIST Bozorth3 matcher, and the top-level VeriFinger 12.0 matcher. With the help of this simulator, the overall system performance can be predicted before actual implementation, thus simplifying the process of setting the best trade-off among error rates. In the second part of the thesis, we exploit this model to define a practical evaluation criterion for assessing whether operational points of the PAD exist that do not alter the expected or previous performance of the verification system alone. Experimental simulations coupled with the theoretical expectations confirm that this trade-off gives a complete view of the potential of sequential embedding, worthy of being extended to other integration approaches.
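    A minimal sketch of the sequential-integration idea described above: a probe must first pass the PAD and then the matcher, so the combined error rates follow from the two systems' operating points. The formulas, variable names, and independence assumption below are a generic reading of such a cascade, not the thesis's published model.

    ```python
    # Sequential PAD -> matcher cascade. A genuine probe is accepted only if
    # the PAD classifies it as live AND the matcher accepts it; a presentation
    # attack is wrongly accepted only if the PAD misses it AND the matcher
    # then matches the spoofed identity. Error rates are treated as
    # independent for this back-of-the-envelope estimate.

    def combined_rates(pad_bpcer, pad_apcer, matcher_frr, matcher_accepts_spoof):
        """Return (false reject rate for genuine users,
                   acceptance rate for presentation attacks)."""
        # Genuine user rejected if the PAD flags them as a spoof, or the PAD
        # passes them but the matcher rejects.
        genuine_reject = pad_bpcer + (1.0 - pad_bpcer) * matcher_frr
        # Attack accepted only if the PAD misses it and the matcher accepts.
        attack_accept = pad_apcer * matcher_accepts_spoof
        return genuine_reject, attack_accept

    # Illustrative operating points (not LivDet submissions).
    frr, aar = combined_rates(pad_bpcer=0.02, pad_apcer=0.05,
                              matcher_frr=0.01, matcher_accepts_spoof=0.30)
    print(f"genuine reject rate ~ {frr:.3%}, attack accept rate ~ {aar:.3%}")
    ```

    Sweeping the PAD operating point along its ROC and recomputing these two quantities is, in spirit, how a sequential simulator lets the trade-off be explored before implementation.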

    Detecting Double-Identity Fingerprint Attacks

    Double-identity biometrics, that is, the combination of two subjects' features into a single template, has been demonstrated to be a serious threat against existing biometric systems. In fact, well-synthesized samples can fool state-of-the-art biometric verification systems, leading them to falsely accept both contributing subjects. This work proposes one of the first techniques to counter existing double-identity fingerprint attacks. The proposed approach inspects the regions where the two aligned fingerprints overlap but minutiae cannot be consistently paired. If the quality of these regions is good enough to minimize the risk of falsely detected or missed minutiae, the alarm score is increased. Experimental results carried out on two fingerprint databases, with two different techniques for generating double-identity fingerprints, validate the effectiveness of the proposed approach.
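    The detection logic described above can be sketched as follows. The region representation, quality estimate, and score update below are generic placeholders assumed for illustration; the paper's actual alignment, feature extraction, and thresholds are not reproduced here.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Region:
        has_unpaired_minutiae: bool  # overlap region where no consistent pairing exists
        quality: float               # local image quality in [0, 1]

    def double_identity_score(overlap_regions, quality_threshold=0.7):
        """Sum the quality of high-quality overlap regions whose minutiae cannot
        be consistently paired; higher values mean a more suspicious
        (possibly double-identity) fingerprint."""
        alarm = 0.0
        for region in overlap_regions:
            if region.has_unpaired_minutiae and region.quality >= quality_threshold:
                # Good image quality makes falsely detected or missed minutiae
                # unlikely, so an unpairable region points to a merged identity.
                alarm += region.quality
        return alarm

    # Hypothetical usage on three overlap regions of an aligned pair.
    regions = [Region(True, 0.9), Region(True, 0.4), Region(False, 0.95)]
    print(double_identity_score(regions))  # 0.9
    ```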

    BioLeak: Exploiting Cache Timing to Recover Fingerprint Minutiae Coordinates


    Diagnostic reasoning approaches and success rates in bomb disposal

    As professions, medicine and bomb disposal have many similarities; one easily recognizable commonality is that practitioners in both disciplines rely on decision-making that is objective, dispassionate, and, to the largest extent possible, grounded in scientific theory. Using research methodologies honed over decades in the medical community, this study investigates diagnostic reasoning approaches and success rates in the bomb disposal community, viewed through the lens of improvised explosive device (IED) circuit analysis, which includes component identification, hazard assessment, and circuit type-by-function determination. The population for this study consisted of current and former military and civilian bomb technicians, and factors such as years of bomb disposal experience, length of initial training, and specialized IED training were analyzed to determine their effects on success rates. A convergent mixed-methods design with a pragmatic worldview was used, and the data gathered suggest that, overall, none of the variables assessed had any effect on a bomb technician's ability to successfully perform component identification, assessment of associated hazards, and determination of circuit type-by-function. Quantitatively, average success rates for study participants, broken out by independent variable, showed no statistically significant differences, except for participants who attended specific bomb disposal schools for their initial training, and only for circuit type-by-function determinations. Average success rates for study participants were 20% for component identification, 16% for associated hazards, and 51% for circuit type-by-function. Qualitatively, over 90% of participants used Type 1 decision-making (i.e., heuristics and pattern matching) as their diagnostic reasoning approach and focused on component identification and circuit configurations when determining the hazards associated with devices and the circuit type-by-function. Additionally, an analysis of component and hazard selections suggests that bomb technicians key in on specific components, and these selections drive their further analysis. Self-assessed confidence-level data also suggest that study participants significantly over-rated their ability to recognize components, assess hazards, and determine circuit type-by-function. The results of this study can be used by thought leaders and trainers in the bomb disposal community to push for fostering and improving diagnostic reasoning skills, problem-solving, and critical thinking, which in turn should lead to a reduction in operational errors during IED response operations.