
    Two-Factor Authentication or How to Potentially Counterfeit Experimental Results in Biometric Systems

    Two-factor authentication has been introduced to enhance security in authentication systems. Different factors have been introduced and are combined to control access. The increasing demand for high-security applications has led to a growing interest in biometrics, and as a result several two-factor authentication systems are designed to include biometric authentication. In this work the risk of result distortion during performance evaluations of two-factor authentication systems that include biometrics is pointed out. Existing approaches to two-factor biometric authentication systems are analyzed. Based on iris biometrics, a case study is presented that demonstrates the trap of artificially inflating recognition rates by introducing a second authentication factor into a biometric authentication system. Consequently, several requirements for performance evaluations of two-factor biometric authentication systems are stated.
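The evaluation trap the abstract warns about can be sketched numerically: if impostors are assumed never to possess the second factor, the reported false accept rate collapses to zero regardless of how weak the biometric comparator is. A minimal illustration (all score distributions, the threshold, and the PIN-guessing probability below are hypothetical, not values from the paper):

```python
import random

random.seed(0)

# Hypothetical comparison scores: genuine users score higher than impostors,
# but the distributions overlap (an imperfect biometric comparator).
genuine = [random.gauss(0.7, 0.1) for _ in range(1000)]
impostor = [random.gauss(0.5, 0.1) for _ in range(1000)]

def far_frr(threshold):
    """False accept / false reject rates at a given decision threshold."""
    far = sum(s >= threshold for s in impostor) / len(impostor)
    frr = sum(s < threshold for s in genuine) / len(genuine)
    return far, frr

t = 0.6
far_bio, frr_bio = far_frr(t)

# Adding a second factor (e.g., a PIN) and assuming impostors never know it
# drives FAR to zero by construction -- the "improvement" says nothing about
# the biometric comparator itself.
p_pin_guess = 0.0  # evaluation assumption, not a measured quantity
far_2fa = far_bio * p_pin_guess

print(f"biometric-only FAR={far_bio:.3f}, FRR={frr_bio:.3f}")
print(f"two-factor FAR={far_2fa:.3f}  <- inflated result")
```

The sketch makes the paper's point concrete: the two-factor FAR is an artifact of the assumption about the second factor, so reporting it as biometric recognition performance is misleading.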

    An Empirical Study of Authentication Methods to Secure E-learning System Activities Against Impersonation Fraud

    Studies have revealed that securing Information Systems (IS) from intentional misuse is a concern among organizations today. The use of Web-based systems has grown dramatically across industries including e-commerce, e-banking, e-government, and e-learning, to name a few. Web-based systems provide e-services through a number of diverse activities. The demand for e-learning systems in both academic and non-academic organizations has increased the need to improve security against impersonation fraud. Although a number of studies have focused on securing Web-based systems from IS misuse, research has recognized the importance of identifying suitable levels of authentication strength for various activities. In e-learning systems, it is evident that, due to the variation in authentication strength among controls, a ‘one size fits all’ solution is not suitable for securing diverse e-learning activities against impersonation fraud. The main goal of this study was to use the framework of the Task-Technology Fit (TTF) theory to conduct an exploratory research design to empirically investigate what levels of authentication strength users perceive to be most suitable for activities in e-learning systems against impersonation fraud. This study aimed to assess whether the ‘one size fits all’ approach mainly used nowadays is valid when it comes to securing e-learning activities from impersonation fraud. Following the development of an initial survey instrument (Phase 1), expert panel feedback was gathered for instrument validity using the Delphi methodology. The initial survey instrument was adjusted according to this feedback (Phase 2). The finalized Web-based survey was used to collect quantitative data for final analyses (Phase 3). This study reported on data collected from 1,070 e-learners enrolled at a university. Descriptive statistics were used to identify which e-learning activities users perceived, and which they believed their peers would perceive, to have a high potential for impersonation. The findings determined that there is a specific set of e-learning activities that have high potential for impersonation fraud and need a moderate to high level of authentication strength to reduce the threat. Principal Component Analysis was used to identify significant components of authentication strength suitable against the threats of impersonation for e-learning activities.
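Principal Component Analysis, as used in the final analysis phase, reduces correlated survey items to a few underlying dimensions. A sketch on synthetic Likert-scale data (the respondent count, activity count, and 80% variance cutoff are illustrative assumptions, not values from the study):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical Likert-scale responses (1-5): 200 respondents rating the
# authentication strength they consider suitable for 8 e-learning activities.
X = rng.integers(1, 6, size=(200, 8)).astype(float)

# PCA via eigendecomposition of the covariance matrix.
Xc = X - X.mean(axis=0)                 # center each activity's ratings
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)  # eigh returns ascending order
order = np.argsort(eigvals)[::-1]       # sort by explained variance, descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()
# Retain components until ~80% of the variance is explained.
k = int(np.searchsorted(np.cumsum(explained), 0.80)) + 1
scores = Xc @ eigvecs[:, :k]            # respondent scores on retained components
print(f"retained {k} components explaining {explained[:k].sum():.0%} of variance")
```

On real survey data the retained components would be interpreted as the "significant components of authentication strength" the abstract refers to; here the synthetic responses are uncorrelated, so many components are needed.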

    A dissimilarity representation approach to designing systems for signature verification and bio-cryptography

    Automation of legal and financial processes requires enforcing the authenticity, confidentiality, and integrity of the involved transactions. This Thesis focuses on developing offline signature verification (OLSV) systems for enforcing the authenticity of transactions. In addition, bio-cryptography systems are developed based on offline handwritten signature images for enforcing the confidentiality and integrity of transactions. Design of OLSV systems is challenging, as signatures are behavioral biometric traits that have intrinsic intra-personal variations and inter-personal similarities. Standard OLSV systems are designed in the feature representation (FR) space, where high-dimensional feature representations are needed to capture the invariance of the signature images. With the numerous users found in real-world applications, e.g., banking systems, decision boundaries in the high-dimensional FR spaces become complex. Accordingly, a large number of training samples is required to design complex classifiers, which is not practical in typical applications. The design of bio-cryptography systems based on offline signature images is even more challenging. In these systems, signature images lock the cryptographic keys, and a user retrieves his key by applying a query signature sample. For practical bio-cryptographic schemes, the locking feature vector should be concise. In addition, such schemes employ simple error-correction decoders, and therefore no complex classification rules can be employed. In this Thesis, the challenging problems of designing OLSV and bio-cryptography systems are addressed by employing the dissimilarity representation (DR) approach. Instead of designing classifiers in the feature space, the DR approach provides a classification space that is defined by some proximity measure.
This way, a multi-class classification problem with few samples per class is transformed into a more tractable two-class problem with a large number of training samples. Since many feature extraction techniques have already been proposed for OLSV applications, a DR approach based on FR is employed. In this case, proximity between two signatures is measured by applying a dissimilarity measure to their feature vectors. The main hypothesis of this Thesis is as follows: the FRs and dissimilarity measures should be properly designed so that signatures belonging to the same writer are close, while signatures of different writers are well separated in the resulting DR spaces. In that case, more cost-effective classifiers, and therefore simpler OLSV and bio-cryptography systems, can be designed. To this end, in Chapter 2, an approach for optimizing FR-based DR spaces is proposed such that concise representations are discriminant and simple classification thresholds are sufficient. High-dimensional feature representations are translated to an intermediate DR space, where pairwise feature distances are the space constituents. Then, a two-step boosting feature selection (BFS) algorithm is applied. The first step uses samples from a development database and aims to produce a universal space of reduced dimensionality. The resulting universal space is further reduced and tuned for specific users through a second BFS step using a user-specific training set. In the resulting space, feature variations are modeled and an adaptive dissimilarity measure is designed. This measure generates the final DR space, where discriminant prototypes are selected for enhanced representation. The OLSV and bio-cryptographic systems are formulated as simple threshold classifiers that operate in the designed DR space. Proof-of-concept simulations on the Brazilian signature database indicate the viability of the proposed approach. Concise DRs with few features and a single prototype are produced.
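The core DR transformation, turning a many-class problem with few samples per class into a single two-class problem with many samples, can be sketched as follows (the writer count, sample count, and Gaussian feature model are hypothetical; the thesis' actual feature extractions and dissimilarity measures are far more elaborate):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature vectors: 5 writers x 4 signatures x 16 features.
n_writers, n_sigs, n_feat = 5, 4, 16
writer_means = rng.normal(0, 1, (n_writers, n_feat))
feats = writer_means[:, None, :] + rng.normal(0, 0.3, (n_writers, n_sigs, n_feat))

# Dissimilarity representation: each training sample becomes a vector of
# per-feature absolute distances between a signature pair. Pairs from the
# same writer are labeled 0 (genuine), pairs from different writers 1.
pairs, labels = [], []
for w1 in range(n_writers):
    for s1 in range(n_sigs):
        for w2 in range(n_writers):
            for s2 in range(n_sigs):
                if (w1, s1) < (w2, s2):  # each unordered pair once
                    pairs.append(np.abs(feats[w1, s1] - feats[w2, s2]))
                    labels.append(0 if w1 == w2 else 1)
pairs, labels = np.array(pairs), np.array(labels)

# The 5-class problem with 4 samples per class becomes a two-class problem
# with far more training pairs than original samples.
print(f"{(labels == 0).sum()} within-writer pairs, "
      f"{(labels == 1).sum()} between-writer pairs")
```

Even this toy setup shows the sample-multiplication effect: 20 signatures yield 190 training pairs, which is what makes simple two-class thresholds in the DR space trainable from few per-writer samples.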
Employing a simple threshold classifier, the DRs have shown state-of-the-art accuracy of about 7% AER, comparable to complex systems in the literature. In Chapter 3, the OLSV problem is further studied. Although the aforementioned OLSV implementation has shown acceptable recognition accuracy, the resulting systems are not secure, as signature templates must be stored for verification. For enhanced security, we modified the previous implementation as follows. The first BFS step is implemented as before, producing a writer-independent (WI) system. This enables starting system operation even if users provide a single signature sample in the enrollment phase. However, the second BFS step is modified to run in a FR space instead of a DR space, so that no signature templates are used for verification. To this end, the universal space is translated back to a FR space of reduced dimensionality, so that designing a writer-dependent (WD) system from the few user-specific samples is tractable in the reduced space. Simulation results on two real-world offline signature databases confirm the feasibility of the proposed approach. The initial universal (WI) verification mode showed comparable performance to that of state-of-the-art OLSV systems. The final secure WD verification mode showed enhanced accuracy with decreased computational complexity. A single compact classifier produced a similar level of accuracy (AER of about 5.38% and 13.96% for the Brazilian and GPDS signature databases, respectively) as complex WI and WD systems in the literature. Finally, in Chapter 4, a key-binding bio-cryptographic scheme known as the fuzzy vault (FV) is implemented based on the offline signature images. The proposed DR-based two-step BFS technique is employed for selecting a compact and discriminant user-specific FR from a large number of feature extractions. This representation is used to generate the FV locking/unlocking points.
Representation variability modeled in the DR space is considered when matching the unlocking and locking points during FV decoding. Proof-of-concept simulations on the Brazilian signature database have shown FV recognition accuracy of 3% AER and system entropy of about 45 bits. For enhanced security, an adaptive chaff generation method is proposed, in which the modeled variability controls the chaff generation process. Similar recognition accuracy is reported, while enhanced entropy of about 69 bits is achieved.
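The fuzzy vault's key-binding idea can be sketched in a few lines: a secret key defines a polynomial, genuine points lie on it, and random chaff points hide them. Everything below (field size, key, point values, vault size) is a toy illustration, not the thesis' signature-based construction:

```python
import random

random.seed(1)

# Toy fuzzy vault over a small prime field.
P = 251                       # small prime modulus (illustrative only)
key = [17, 42, 7]             # secret key encoded as polynomial coefficients

def poly(x):
    """Evaluate the key polynomial k0 + k1*x + k2*x^2 mod P."""
    return (key[0] + key[1] * x + key[2] * x * x) % P

# Locking: genuine points (derived from the user's signature features in the
# real system) lie on the polynomial; chaff points are random and off-curve.
genuine_x = [3, 8, 15, 22, 31]
vault = [(x, poly(x)) for x in genuine_x]
while len(vault) < 40:
    x, y = random.randrange(P), random.randrange(P)
    if all(x != vx for vx, _ in vault) and y != poly(x):
        vault.append((x, y))
random.shuffle(vault)         # hide which points are genuine

# Unlocking: a query that recovers enough genuine x-values could interpolate
# the polynomial and recover the key (membership check shown for brevity).
query = [3, 8, 15]
matched = [(x, y) for x, y in vault if x in query and y == poly(x)]
print(f"vault size {len(vault)}, matched {len(matched)} genuine points")
```

The adaptive chaff generation the abstract describes would shape where the random chaff points land relative to the modeled signature variability, rather than drawing them uniformly as this toy does.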