    Iris Recognition System Using Support Vector Machines

    In recent years, with the increasing demand for security in our networked society, biometric systems for user verification have become more popular. Iris recognition is a relatively new technology for user verification. In this paper, the CASIA iris database is used for individual user verification with support vector machines (SVMs), based on analysis of the iris code as the extracted feature. This feature is used to recognize authentic users and to reject impostors, with SVMs performing the classification. The proposed method is evaluated in terms of False Rejection Rate (FRR) and False Acceptance Rate (FAR), and the experimental results show that the technique performs well.
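
The verification pipeline described above (iris-code features classified by an SVM, evaluated via FRR and FAR) can be sketched as follows. The synthetic binary iris codes, code length, and bit-flip noise rates are illustrative assumptions standing in for the CASIA data:

```python
# Minimal sketch: SVM verification on synthetic binary "iris codes".
# Real iris codes come from Gabor-filtered iris images; random bit
# patterns are used here purely for illustration.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def make_codes(template, n, flip_prob):
    """Generate iris codes by flipping template bits with given noise rate."""
    flips = rng.random((n, template.size)) < flip_prob
    return np.logical_xor(template, flips).astype(int)

template = rng.integers(0, 2, 256)          # enrolled user's iris code
genuine = make_codes(template, 50, 0.10)    # same eye, mild acquisition noise
impostor = rng.integers(0, 2, (50, 256))    # unrelated codes

X = np.vstack([genuine, impostor])
y = np.array([1] * 50 + [0] * 50)           # 1 = authentic, 0 = impostor
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)

# Evaluate on fresh samples: FRR = rejected genuines, FAR = accepted impostors.
g_test = make_codes(template, 100, 0.10)
i_test = rng.integers(0, 2, (100, 256))
frr = 1 - clf.predict(g_test).mean()
far = clf.predict(i_test).mean()
print(f"FRR={frr:.2f}  FAR={far:.2f}")
```

Because genuine codes differ from the template in roughly 10% of bits while impostor codes differ in about half, the two classes are well separated and both error rates come out near zero on this toy data.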

    A Comparison Analysis of BLE-Based Algorithms for Localization in Industrial Environments

    Proximity beacons are small, low-power devices capable of transmitting information at a limited distance via the Bluetooth Low Energy protocol. These beacons are typically used to broadcast small amounts of location-dependent data (e.g., advertisements) or to detect nearby objects. However, researchers have shown that beacons can also be used for indoor localization by converting the received signal strength indication (RSSI) into distance information. In this work, we study the effectiveness of proximity beacons for accurately locating objects within a manufacturing plant by performing extensive experiments in a real industrial environment. To this end, we compare localization algorithms based either on trilateration or on environment fingerprinting combined with a machine-learning-based regressor (k-nearest neighbors, support-vector machines, or multi-layer perceptron). Each algorithm is analyzed in two different types of industrial environments. For each environment, various configurations are explored, where a configuration is characterized by the number of beacons per square meter and the density of fingerprint points. The fingerprinting approach, however, relies on a preliminary site characterization and may therefore produce location errors in the presence of environment variations (e.g., movements of large objects). For this reason, the robustness of fingerprinting algorithms against such variations is also assessed. Our results show that fingerprint solutions outperform trilateration while also showing good resilience to environmental variations. Given the similar error obtained by all three fingerprint approaches, we conclude that k-NN is the preferable algorithm due to its simple deployment and low number of hyper-parameters.
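
The two building blocks compared above can be sketched briefly: the log-distance path-loss model that trilateration uses to turn RSSI into distance, and a fingerprint database regressed with k-NN. The beacon layout, the path-loss exponent n=2.0, the 1 m reference power, and the noise-free simulated RSSI are all illustrative assumptions:

```python
# Minimal sketch: RSSI-to-distance conversion plus k-NN fingerprinting.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

TX_POWER = -59.0   # assumed RSSI at 1 m (dBm), a calibration constant
PATH_LOSS_N = 2.0  # assumed free-space path-loss exponent

def rssi_to_distance(rssi):
    """Invert the log-distance model: rssi = TX_POWER - 10*n*log10(d)."""
    return 10 ** ((TX_POWER - rssi) / (10 * PATH_LOSS_N))

beacons = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # assumed layout

def rssi_at(point):
    """Simulated (noise-free) RSSI vector from each beacon at point (x, y)."""
    d = np.linalg.norm(beacons - point, axis=1)
    return TX_POWER - 10 * PATH_LOSS_N * np.log10(d)

# Fingerprinting: survey RSSI on a 1 m grid, then regress position with k-NN.
grid = np.array([[x, y] for x in range(1, 10) for y in range(1, 10)], float)
fingerprints = np.array([rssi_at(p) for p in grid])
knn = KNeighborsRegressor(n_neighbors=3).fit(fingerprints, grid)

query = np.array([4.2, 6.8])                      # true (unknown) position
estimate = knn.predict([rssi_at(query)])[0]       # position from RSSI alone
error = np.linalg.norm(estimate - query)
print(f"estimated {estimate}, error {error:.2f} m")
```

With a 1 m fingerprint grid and clean RSSI, the 3-NN estimate lands within about one grid cell of the true position; real deployments add RSSI noise, which is exactly what the robustness analysis in the paper addresses.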

    Source identification for mobile devices, based on wavelet transforms combined with sensor imperfections

    One of the most relevant applications of digital image forensics is to accurately identify the device used to take a given set of images, a problem called source identification. This paper studies recent developments in the field and proposes combining two techniques (sensor imperfections and wavelet transforms) to achieve better source identification of images generated with mobile devices. Our results show that sensor imperfections and wavelet transforms can jointly serve as good forensic features to help trace the source camera of images produced by mobile phones. Furthermore, the model proposed here can also determine both the brand and the model of the device with high precision.
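
The sensor-imperfection half of the approach can be illustrated in miniature: each camera sensor imprints a weak, fixed noise pattern on every image it takes, which can be recovered as a denoising residual and matched by correlation against a per-camera reference. A simple box-filter denoiser stands in here for the paper's wavelet transform, and the scenes are synthetic; both are assumptions for illustration only:

```python
# Minimal sketch: trace an image to its source camera via sensor noise.
import numpy as np

rng = np.random.default_rng(1)
SIZE = 64
x = np.linspace(0, 2 * np.pi, SIZE)

def residual(img):
    """Noise residual: image minus a 3x3 local-mean denoised version
    (border trimmed to avoid padding artifacts)."""
    pad = np.pad(img, 1, mode="edge")
    smooth = sum(pad[i:i + SIZE, j:j + SIZE]
                 for i in range(3) for j in range(3)) / 9.0
    return (img - smooth)[2:-2, 2:-2]

def corr(a, b):
    """Normalized cross-correlation of two residuals."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

pattern_a = rng.normal(0, 1, (SIZE, SIZE))  # camera A's fixed sensor noise
pattern_b = rng.normal(0, 1, (SIZE, SIZE))  # camera B's fixed sensor noise

def shoot(pattern):
    """One photo: a smooth random scene plus the sensor's noise imprint."""
    scene = 100 * np.outer(np.sin(rng.random() * x), np.cos(rng.random() * x))
    return scene + 128 + pattern

# Camera A's reference pattern: average residuals over several photos.
ref_a = np.mean([residual(shoot(pattern_a)) for _ in range(20)], axis=0)

r = residual(shoot(pattern_a))              # unknown image, really camera A
corr_same = corr(r, ref_a)
corr_other = corr(residual(shoot(pattern_b)), ref_a)
print(corr_same, corr_other)
```

Scene content averages out over the reference shots while the sensor pattern persists, so the residual of a camera-A image correlates strongly with camera A's reference and near zero with camera B's; thresholding this correlation is the identification decision.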

    Bi-Modal System Using SVM (Support Vector Machine) and MLP (Multilayer Perceptron) -Proposed

    In this era of high technological advancement, the need to identify individuals, especially in developing countries, has long been an attractive goal. The necessity of securing environments, devices, and resources in the face of an increasing rate of crime has led to the proposal of this research. The iris modality has become an interesting alternative approach to reliable visual recognition of persons due to its distinctive characteristics, and the fingerprint modality for its innumerable advantages. Therefore, a bi-modal biometric system using a qualitative SVM (Support Vector Machine) and an MLP (Multilayer Perceptron) for classification is proposed in this research. A performance analysis of these modalities will be carried out with each model. The designed models will be implemented using the Java programming language as the frontend and an Access database as the backend. Keywords: biometrics, bimodal system, iris modality, fingerprint modality, support vector machine, multilayer perceptron

    Fingerprint Liveness Detection using Minutiae-Independent Dense Sampling of Local Patches

    Fingerprint recognition and matching is a common form of user authentication. While a fingerprint is unique to each individual, authentication is vulnerable when an attacker can forge a copy of the fingerprint (a spoof). To combat spoofed fingerprints, spoof detection and liveness detection algorithms are being researched as countermeasures to this security vulnerability. This paper introduces a fingerprint anti-spoofing mechanism using machine learning.
    Comment: Submitted, peer-reviewed, accepted, and under publication with Springer Nature.
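
The minutiae-independent dense sampling named in the title can be sketched as follows: patches are taken on a regular grid (no minutiae detector needed), each patch is scored live/spoof, and the per-patch scores are averaged into an image-level decision. The patch size, stride, simple statistical features, logistic-regression scorer, and toy data are all illustrative assumptions, since the abstract does not specify the model:

```python
# Minimal sketch: dense grid patch sampling + per-patch liveness scoring.
import numpy as np
from sklearn.linear_model import LogisticRegression

PATCH, STRIDE = 16, 8  # dense overlapping grid, independent of minutiae

def patch_features(img):
    """Per-patch features (local std and mean) over the dense grid."""
    h, w = img.shape
    return np.array([[img[i:i + PATCH, j:j + PATCH].std(),
                      img[i:i + PATCH, j:j + PATCH].mean()]
                     for i in range(0, h - PATCH + 1, STRIDE)
                     for j in range(0, w - PATCH + 1, STRIDE)])

rng = np.random.default_rng(2)

# Toy data: "live" skin shows more local texture variance than smooth spoofs.
live = [rng.normal(0.5, 0.25, (64, 64)) for _ in range(10)]
spoof = [rng.normal(0.5, 0.05, (64, 64)) for _ in range(10)]

X = np.vstack([patch_features(im) for im in live + spoof])
per = len(patch_features(live[0]))          # patches per image (7x7 grid)
y = np.array([1] * (10 * per) + [0] * (10 * per))
scorer = LogisticRegression(max_iter=500).fit(X, y)

def is_live(img, threshold=0.5):
    """Image-level decision: mean patch liveness score vs. a threshold."""
    return scorer.predict_proba(patch_features(img))[:, 1].mean() > threshold
```

Averaging many patch scores makes the image-level decision robust to a few misclassified patches, which is the practical appeal of dense sampling over relying on sparse minutiae locations.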

    A composable approach to design of newer techniques for large-scale denial-of-service attack attribution

    Since its early days, the Internet has witnessed not only phenomenal growth, but also a large number of security attacks, and in recent years, denial-of-service (DoS) attacks have emerged as one of the top threats. Stateless, destination-oriented Internet routing, combined with the ability to harness a large number of compromised machines and the relative ease and low cost of launching such attacks, has made this a hard problem to address. Additionally, the myriad requirements of scalability, incremental deployment, adequate user privacy protections, and appropriate economic incentives have further complicated the design of DDoS defense mechanisms. While the many research proposals to date have focused differently on prevention, mitigation, or traceback of DDoS attacks, the lack of a comprehensive approach satisfying the different design criteria for successful attack attribution is indeed disturbing. Our first contribution here has been the design of a composable data model that has helped us represent the various dimensions of the attack attribution problem, particularly the performance attributes of accuracy, effectiveness, speed, and overhead, as orthogonal and mutually independent design considerations. We have then designed custom optimizations along each of these dimensions, and have further integrated them into a single composite model, to provide strong performance guarantees. Thus, the proposed model has given us a single framework that can not only address the individual shortcomings of the various known attack attribution techniques, but also provide a more holistic countermeasure against DDoS attacks. Our second contribution here has been a concrete implementation based on the proposed composable data model, adopting a graph-theoretic approach to identify and subsequently stitch together individual edge fragments in the Internet graph to reveal the true routing path of any network data packet.
The proposed approach has been analyzed through theoretical and experimental evaluation across multiple metrics, including scalability, incremental deployment, speed and efficiency of the distributed algorithm, and finally the total overhead associated with its deployment. We have thereby shown that it is realistically feasible to provide strong performance and scalability guarantees for Internet-wide attack attribution. Our third contribution here has further advanced the state of the art by directly identifying individual path fragments in the Internet graph, adopting a distributed divide-and-conquer approach that employs simple recurrence relations as individual building blocks. A detailed analysis of the proposed approach on real-life Internet topologies, with respect to network storage and traffic overhead, has provided a more realistic characterization. Thus, the proposed approach not only lends itself well to simplified operations at scale but can also provide robust network-wide performance and security guarantees for Internet-wide attack attribution. Our final contribution here has introduced the notion of anonymity into the overall attack attribution process to significantly broaden its scope. The highly invasive nature of widespread data gathering for network traceback continues to violate one of the key principles of Internet use today - the ability to stay anonymous and operate freely without retribution. In this regard, we have successfully reconciled these mutually divergent requirements to make attribution not only economically feasible and politically viable but also socially acceptable.
This work opens up several directions for future research: analysis of existing attack attribution techniques to identify further scope for improvement, incorporation of newer attributes into the design framework of the composable data model abstraction, and finally the design of newer attack attribution techniques that comprehensively integrate the various attack prevention, mitigation, and traceback techniques in an efficient manner.
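
The graph-theoretic stitching step of the second contribution can be illustrated in miniature: each router contributes one edge fragment (previous hop, current hop) for a packet, and the victim reassembles the full routing path from the unordered fragments. The router names and the loss-free, single-path setting are simplifying assumptions; the thesis addresses the large-scale, noisy variants of this problem:

```python
# Minimal sketch: rebuild a routing path from unordered edge fragments.
def stitch_path(fragments, victim):
    """Walk upstream-hop edges back from the victim to recover the path."""
    upstream = {dst: src for src, dst in fragments}  # hop -> previous hop
    path = [victim]
    while path[-1] in upstream:
        path.append(upstream[path[-1]])
    return list(reversed(path))  # attacker ... victim

# Edge fragments collected (in arbitrary order) along A -> R1 -> R2 -> V:
frags = [("R2", "V"), ("A", "R1"), ("R1", "R2")]
print(stitch_path(frags, "V"))  # -> ['A', 'R1', 'R2', 'V']
```

The real difficulty, which the composable model targets, is that fragments arrive mixed across many packets and attackers, so accuracy, speed, and overhead trade off in how fragments are filtered and joined.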

    Low-Quality Fingerprint Classification

    Fingerprint recognition systems mainly use minutiae point information. As shown in many previous research works, fingerprint images do not always have sufficient quality to be used by automatic fingerprint recognition systems. To tackle this challenge, this thesis focuses on very low-quality fingerprint images, which contain several well-known distortions such as dryness, wetness, physical damage, presence of dots, and blurriness. We develop an efficient, highly accurate deep neural network algorithm that recognizes such low-quality fingerprints. The experiments were conducted on a real low-quality fingerprint database, and the achieved results show the high performance and robustness of the introduced deep network technique. The VGG16-based deep network achieves its highest accuracy of 93% on the dry class and its lowest of 84% on the blurred class.

    A survey of fingerprint classification Part I: taxonomies on feature extraction methods and learning models

    Get PDF
    This paper reviews the fingerprint classification literature, looking at the problem from a dual perspective. We first deal with feature extraction methods, including the different models considered for singular point detection and for orientation map extraction. Then, we focus on the different learning models considered to build the classifiers used to label new fingerprints. Taxonomies and classifications for the feature extraction, singular point detection, orientation extraction, and learning methods are presented. A critical view of the existing literature has led us to present a discussion of the existing methods and their drawbacks, such as difficulty of reimplementation, lack of detail, or major differences in their evaluation procedures. On this account, an experimental analysis of the most relevant methods is carried out in the second part of this paper, and a new method based on their combination is presented.
    This work was supported by the Research Projects CAB(CDTI), TIN2011-28488, and TIN2013-40765-P.