
    Topological coding of single fingerprints

    The motivation for seeking topological descriptions of single fingerprints is provided by the elasticity of the human skin; successive impressions from the same finger will invariably have suffered a degree of relative distortion (translation, rotation and stretching). Topology-based systems should be free from the detrimental effects of plastic distortion. This thesis is divided into three parts: part I outlines the traditional use of fingerprints as a basis for personal identification and gives a detailed explanation of the arguments in favour of topological coding. Methods for the extraction of topology-based digital codes are suggested and the 'placing of lines' is introduced as an effective means of ordering topological information. In part II, specific systems are described for the extraction of simple topological codes from rolled impressions of the pattern types 'loops', 'whorls' and 'arches'. The generated codes take the form of vectors or simple digital arrays. The nature and frequency of changes that may occur in such codes is investigated, and fingerprint comparison algorithms based on these topological codes are developed. The objective of such algorithms is to derive a score from the degree of 'nearness' of the topological codes in such a manner that it intelligently reflects similarity or dissimilarity in the two prints under comparison. Part III examines the special problems relating to fragmentary 'scenes-of-crime' marks. It describes methods of coding fingerprint patterns by a variety of 'topological coordinate schemes', with fingerprint comparison being performed on the basis of localised topological information extracted from the recorded coordinate sets. Furthermore, a method for pictorial reconstruction of a complete fingerprint from its coordinate representation is demonstrated. Comparison of fingerprints on the basis of digital topological descriptions is shown to offer a substantial improvement in performance over existing (spatial) techniques.
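    The abstract does not spell out the coding or scoring algorithms; purely as a hedged illustration of scoring the 'nearness' of two topological code vectors, a minimal sketch follows, where the shift-tolerant matching is an assumption of this example rather than the thesis's method.

```python
# Hypothetical illustration only: score two topological code vectors by
# their best agreement over small alignment shifts, mimicking tolerance
# to the relative distortion of successive impressions.

def similarity(code_a, code_b, max_shift=2):
    """Return a score in [0, 1]: the fraction of agreeing entries
    under the best alignment within +/- max_shift positions."""
    best = 0.0
    for shift in range(-max_shift, max_shift + 1):
        matches = total = 0
        for i, a in enumerate(code_a):
            j = i + shift
            if 0 <= j < len(code_b):
                total += 1
                matches += (a == code_b[j])
        if total:
            best = max(best, matches / total)
    return best

print(similarity([1, 3, 0, 2, 2], [1, 3, 1, 2, 2]))  # 0.8
```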

    Coarse-grained modeling for molecular discovery: Applications to cardiolipin selectivity

    The development of novel materials is pivotal for addressing global challenges such as achieving sustainability, technological progress, and advancements in medical technology. Traditionally, developing or designing new molecules was a resource-intensive endeavor, often reliant on serendipity. Given the vast space of chemically feasible drug-like molecules, estimated at between 10^6 and 10^100 compounds, traditional in vitro techniques fall short. Consequently, in silico tools such as virtual screening and molecular modeling have gained increasing recognition. However, the computational cost and the limited precision of the molecular models used still limit computational molecular design. This thesis aimed to enhance the molecular design process by integrating multiscale modeling and free energy calculations. Employing a coarse-grained model allowed us to efficiently traverse a significant portion of chemical space and reduce the sampling time required by molecular dynamics simulations. The physics-informed nature of the applied Martini force field and its level of retained structural detail make the model a suitable starting point for the focused learning of molecular properties. We applied our proposed approach to a cardiolipin bilayer, posing a relevant and challenging problem and facilitating reasonable comparison to experimental measurements. We identified promising molecules with defined properties within the resolution limit of a coarse-grained representation. Furthermore, we were able to bridge the gap from in silico predictions to in vitro and in vivo experiments, supporting the validity of the theoretical concept. The findings underscore the potential of multiscale modeling and free-energy calculations in enhancing molecular discovery and design and offer a promising direction for future research.
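    The thesis's Martini-based workflow is not reproduced here; as a generic, hedged sketch of the kind of free-energy estimate the abstract refers to, the following applies the Zwanzig free-energy perturbation relation to synthetic energy differences (all numbers are illustrative assumptions).

```python
# Generic illustration (not the thesis's workflow): estimate a free-energy
# difference between states A and B with the Zwanzig relation,
#     dF = -kT * ln < exp(-(U_B - U_A) / kT) >_A,
# averaged over configurations sampled from state A.
import numpy as np

kT = 2.479  # kJ/mol at ~298 K

rng = np.random.default_rng(0)
dU = rng.normal(loc=1.0, scale=0.5, size=10_000)  # stand-in for U_B - U_A samples

dF = -kT * np.log(np.mean(np.exp(-dU / kT)))
print(f"estimated dF = {dF:.2f} kJ/mol")
```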

    Privacy-aware Biometric Blockchain based e-Passport System for Automatic Border Control

    In the mid-1990s, World Wide Web technology first stepped into our lives. Now, 30 years later, widespread internet access and established computing technology are bringing embodied real life into the Metaverse through digital twins. The internet is blurring not only the concept of physical distance but also the edge between the real and the virtual world. Another breakthrough in computing is the blockchain, which shifts the root of trust from a system administrator to the computational power of the system. Furthermore, its favourable properties, such as an immutable time-stamped transaction history and atomic smart contracts, have triggered the development of decentralized autonomous organizations (DAOs). Combining the two, this thesis presents a privacy-aware biometric blockchain based e-passport system for automatic border control (ABC), which aims to improve the efficiency of existing ABC systems. Specifically, by constructing a border control Metaverse DAO, the border control workload can be autonomously self-executed by atomic smart contracts as transactions and then immutably recorded on the blockchain. Moreover, to digitize border-crossing documentation, a biometric blockchain based e-passport system (BBCVID) is created to generate an immutable real-world identity digital twin in the border control Metaverse DAO through blockchain and biometric identity authentication. That is to say, by digitizing border-crossing documentation and automating both biometric identity authentication and border-crossing documentation verification, our proposal is able to significantly improve existing border control efficiency. System simulation and performance evaluation with Hyperledger Caliper show that the proposed system improves existing border control efficiency by 3.5 times on average, which is remarkable. Moreover, the dynamic digital twin constructed by BBCVID enables computing techniques such as machine learning and big data analysis to be applied to real-world entities, which has huge potential for constructing smarter ABC systems.
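    The internals of BBCVID are not detailed in the abstract; as a hedged sketch of the general pattern it describes, the following toy example binds a salted hash of a biometric template to a passport record and chains records by hash so that history is immutable (all field names are assumptions, and consensus, key management and smart contracts are out of scope).

```python
# Toy sketch, not the actual BBCVID design: each record commits to a
# biometric template via a salted hash and links to the previous record,
# so tampering with any entry breaks the hash chain.
import hashlib, json, os, time

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def make_record(passport_no: str, template: bytes, prev_hash: str) -> dict:
    salt = os.urandom(16)
    record = {
        "passport_no": passport_no,
        "biometric_commitment": sha256(salt + template),
        "salt": salt.hex(),          # in practice protected, not public
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    record["hash"] = sha256(json.dumps(record, sort_keys=True).encode())
    return record

genesis = make_record("P000000", b"", "0" * 64)
entry = make_record("P123456", b"fingerprint-template-bytes", genesis["hash"])
print(entry["hash"])
```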

    An enhanced fuzzy commitment scheme in biometric template protection

    Biometric template protection consists of two approaches: Feature Transformation (FT) and Biometric Cryptography (BC). This research focuses on the key-binding technique based on the Fuzzy Commitment Scheme (FCS) under the BC approach. In FCS, the helper data should not disclose any information about the biometric data. However, the literature shows a dependency issue in the helper data which jeopardizes security and privacy. This also increases the probability of privacy leakage, which leads to attacks such as brute-force and cross-matching attacks. Thus, the aim of this research is to reduce the dependency of the helper data that can cause privacy leakage. Three objectives were set: (1) to identify the factors that cause dependency on biometric features, (2) to enhance FCS by proposing an approach that reduces this dependency, and (3) to evaluate the proposed approach based on parameters such as security, privacy, and biometric performance. This research involved four phases. Phase one involved research review and analysis, followed by designing the conceptual model and developing the algorithm in phases two and three respectively. Phase four involved the evaluation of the proposed approach. The security and privacy analysis shows that, with the additional hash function, it is difficult for an adversary to perform a brute-force attack on information stored in the database. Furthermore, the proposed approach enhances unlinkability and prevents cross-matching attacks. The proposed approach achieved a high accuracy of 95.31% with an Equal Error Rate (EER) of 1.54%, performing slightly better, by 1.42%, than the existing approach. This research has contributed towards the key-binding technique of biometric fingerprint template protection based on FCS. In particular, this research was designed to create a secret binary feature that can be used in other state-of-the-art cryptographic systems by using an appropriate error-correcting approach that meets security standards.
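    The enhanced scheme itself is not given in the abstract; for orientation, here is a minimal sketch of a classic fuzzy commitment in the Juels-Wattenberg style, with a 3x repetition code standing in for a real error-correcting code (the thesis's additional hash construction is not reproduced).

```python
# Minimal classic fuzzy commitment sketch: bind a random key to a binary
# biometric template; a noisy but close query template recovers the key
# via error correction, and only a hash of the key is stored.
import hashlib, secrets

def ecc_encode(bits):   # repetition code: repeat each key bit 3 times
    return [b for b in bits for _ in range(3)]

def ecc_decode(bits):   # majority vote per 3-bit group
    return [int(sum(bits[i:i + 3]) >= 2) for i in range(0, len(bits), 3)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

def commit(template):
    key = [secrets.randbelow(2) for _ in range(len(template) // 3)]
    helper = xor(ecc_encode(key), template)           # public helper data
    tag = hashlib.sha256(bytes(key)).hexdigest()      # stored verifier
    return tag, helper

def verify(tag, helper, query):
    key = ecc_decode(xor(helper, query))              # corrects small errors
    return hashlib.sha256(bytes(key)).hexdigest() == tag

enrolled = [1, 0, 1, 1, 0, 1, 0, 0, 1]   # toy 9-bit template
query = enrolled.copy(); query[4] ^= 1   # same finger, one flipped bit
tag, helper = commit(enrolled)
print(verify(tag, helper, query))        # True: the error is corrected
```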

    Advances in Artificial Intelligence: Models, Optimization, and Machine Learning

    The present book contains all the articles accepted and published in the Special Issue "Advances in Artificial Intelligence: Models, Optimization, and Machine Learning" of the MDPI Mathematics journal, which covers a wide range of topics connected to the theory and applications of artificial intelligence and its subfields. These topics include, among others, deep learning and classic machine learning algorithms, neural modelling, architectures and learning algorithms, biologically inspired optimization algorithms, algorithms for autonomous driving, probabilistic models and Bayesian reasoning, intelligent agents and multiagent systems. We hope that the scientific results presented in this book will serve as valuable sources of documentation and inspiration for anyone willing to pursue research in artificial intelligence, machine learning and their widespread applications.

    Artificial Intelligence and Cognitive Computing

    Artificial intelligence (AI) is a subject garnering increasing attention in both academia and industry today. The understanding is that AI-enhanced methods and techniques create a variety of opportunities related to improving basic and advanced business functions, including production processes, logistics, financial management and others. As this collection demonstrates, AI-enhanced tools and methods tend to offer more precise results in the fields of engineering, financial accounting, tourism, air-pollution management and many more. The objective of this collection is to bring these topics together to offer the reader a useful primer on how AI-enhanced tools and applications can be of use in today's world. In the context of the frequently fearful, skeptical and emotion-laden debates on AI and its added value, this volume promotes a positive perspective on AI and its impact on society. AI is part of a broader ecosystem of sophisticated tools, techniques and technologies, and it is therefore not immune to developments in that ecosystem. It is thus imperative that inter- and multidisciplinary research on AI and its ecosystem be encouraged. This collection contributes to that.

    Privacy-Preserving Biometric Authentication

    Biometric-based authentication provides a highly accurate means of authentication without requiring the user to memorize or possess anything. However, there are three disadvantages to the use of biometrics in authentication: any compromise is permanent, as it is impossible to revoke biometrics; there are significant privacy concerns associated with the loss of biometric data; and humans possess only a limited number of biometrics, which limits how many services can use or reuse the same form of authentication. As such, enhancing biometric template security is of significant research interest. One such methodology is the cancellable biometric template, which applies an irreversible transformation to the features of the biometric sample and performs the matching in the transformed domain. Yet this is itself susceptible to specific classes of attacks, including hill-climb, pre-image, and attacks via record multiplicity. This work has several outcomes and contributions to the knowledge of privacy-preserving biometric authentication. The first of these is a taxonomy structuring the current state of the art and provisions for future research. The next is a multi-filter framework for developing a robust and secure cancellable biometric template, designed specifically for fingerprint biometrics. This framework comprises two modules, each of which is a separate cancellable fingerprint template with its own matching and measures; the matching is based on multiple thresholds. Importantly, these methods show strong resistance to the above-mentioned attacks. Another outcome is a method that achieves stable performance and can be embedded into a Zero-Knowledge-Proof protocol. In this novel method, a new strategy is proposed to improve the recognition error rates while preserving privacy in an untrusted environment. The results show promising performance when evaluated on current datasets.
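    The multi-filter framework itself is not specified in the abstract; as a hedged, generic illustration of what a cancellable transform does, the sketch below uses a token-seeded random projection (an assumption of this example, not the thesis's construction): matching happens in the projected domain, and reissuing the token revokes the template.

```python
# Generic cancellable-template illustration: a revocable, user-specific
# random projection maps biometric features into a transformed domain;
# a leaked template reveals only projected bits, and a new token seed
# yields a fresh, unlinkable template.
import numpy as np

def make_transform(token_seed: int, in_dim: int, out_dim: int):
    rng = np.random.default_rng(token_seed)
    proj = rng.standard_normal((out_dim, in_dim))
    return lambda features: np.sign(proj @ features)  # binarised projection

def match(t1, t2):
    return float(np.mean(t1 == t2))  # fraction of agreeing bits

transform = make_transform(token_seed=42, in_dim=64, out_dim=128)
rng = np.random.default_rng(0)
enrolled = rng.standard_normal(64)
probe = enrolled + 0.1 * rng.standard_normal(64)  # same finger, noisy capture
print(match(transform(enrolled), transform(probe)))  # close to 1.0
```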

    Analysing retinal images using (extended) persistent homology

    Get PDF
    Biometrics are data used for the automatic verification and identification of individuals, two important routines commonly performed to enhance the level of security within a system. Improvements to the analysis of biometrics are therefore crucial. Common examples of biometrics include fingerprints and facial features. In this thesis, we consider retinal fundus images, which are scans of a person's retinal blood vessels at the back of the eyeball. They have become a popular choice for these tasks due to their uniqueness and stability over time. Traditional methods mainly utilise specific biological features observed in the scans. These methods generally rely on highly accurate automated extraction of these traits, which is challenging to produce, especially when abnormalities appear in diseased individuals. We instead propose a novel approach to analysing retina biometrics which is more tolerant of errors from the feature extraction process. In particular, we compute the (extended) persistent homology of the blood vessel structure (viewed as a manifold with boundary embedded in R^2) in a retinal image with respect to some filtration, and produce a summary statistic called a persistence diagram. This then allows us to perform further statistical tests. We test our method on a publicly available database using different filtration choices to capture the vessels' shapes. Some of these choices achieve a high level of accuracy compared with tests done on the same database. Our method also takes significantly less time than other proposed methods. In the future, we can explore more filtrations and/or combine results obtained from different filtrations to see whether we can further increase accuracy.
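    As a hedged sketch of the pipeline the abstract describes, the following assumes the open-source GUDHI library and a toy image in place of a real vessel-segmented fundus scan: build a sublevel-set cubical filtration of the image and read off its persistence diagram; the thesis's particular filtrations are not reproduced.

```python
# Minimal persistent-homology pipeline (assumes the GUDHI library):
# treat the image as a height function, filter by sublevel sets, and
# summarise the topology as a persistence diagram.
import numpy as np
import gudhi

# Stand-in for a vessel probability map extracted from a retinal scan.
rng = np.random.default_rng(1)
image = rng.random((64, 64))

cubical = gudhi.CubicalComplex(top_dimensional_cells=image)
diagram = cubical.persistence()  # list of (dimension, (birth, death)) pairs

# Finite 1-dimensional features correspond to loops in the vessel structure.
loops = [(b, d) for dim, (b, d) in diagram if dim == 1 and d != float("inf")]
print(f"{len(loops)} one-dimensional features")
```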