
    Face verification system architecture using smart cards

    A smart card based face verification system is proposed in which the feature extraction and decision making is performed on the card. Such an architecture has many privacy and security benefits. As smart cards are limited computational platforms, the face verification algorithms have to be adapted to limit the facial image representations. This minimises the information needed to be sent to the card and lessens the computational load of the template matching. Studies performed on the BANCA and XM2VTS databases demonstrate that by limiting these representations the verification performance of the system is not degraded and that the proposed architecture is a viable one.
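    The abstract's central idea — a reduced feature representation matched against a template stored on the card — can be sketched as follows. The paper does not specify its distance measure or threshold, so the root-mean-square distance and the 0.5 cut-off below are illustrative assumptions only:

    ```python
    import math

    def verify_on_card(probe, template, threshold=0.5):
        """Compare a reduced probe feature vector against the template held
        on the card; accept when the distance falls below the threshold.

        NOTE: RMS distance and threshold=0.5 are assumptions for
        illustration, not the paper's actual matcher.
        """
        # A short feature vector keeps both the data sent to the card and
        # the on-card matching cost small, which is the paper's point.
        d = math.sqrt(sum((p - t) ** 2 for p, t in zip(probe, template))
                      / len(template))
        return d < threshold

    template = [0.2, 0.5, 0.1, 0.9]           # hypothetical stored template
    assert verify_on_card([0.21, 0.49, 0.1, 0.9], template)   # genuine try
    assert not verify_on_card([0.9, 0.9, 0.9, 0.2], template) # impostor try
    ```

    Because only the reduced vector crosses the card interface, the full facial image never leaves the terminal, which is the privacy benefit the abstract refers to.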

    MULTI-MODEL BIOMETRICS AUTHENTICATION FRAMEWORK

    Authentication is the process of confirming the truth of an attribute claimed by an entity. Biometric technology is widely used for authentication and is becoming a key component in a multitude of applications. This paper therefore proposes such a multimodal biometric authentication system. The proposed system establishes a real-time authentication framework using multi-model biometrics, in which an embedded system verifies signatures, fingerprints and key patterns to authenticate the user. This makes it one of the most reliable, fast and cost-effective tools for user authentication.
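    The framework combines three verifiers, though the abstract does not say how their decisions are fused. One plausible rule is a majority vote, sketched below; a stricter conjunctive AND (all three must pass) would be the obvious alternative:

    ```python
    def fuse_decisions(signature_ok, fingerprint_ok, key_pattern_ok):
        """Majority-vote decision fusion over the three modalities named in
        the abstract: accept when at least two verifiers agree.

        NOTE: the majority-vote rule is an assumption; the paper does not
        specify its fusion strategy.
        """
        return sum([signature_ok, fingerprint_ok, key_pattern_ok]) >= 2

    # One failed modality (e.g. a smudged fingerprint) need not reject
    # an otherwise genuine user under this rule.
    assert fuse_decisions(True, True, False)
    assert not fuse_decisions(True, False, False)
    ```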

    TLAD 2011 Proceedings: 9th international workshop on teaching, learning and assessment of databases (TLAD)

    This is the ninth in the series of highly successful international workshops on the Teaching, Learning and Assessment of Databases (TLAD 2011), which once again is held as a workshop of BNCOD 2011 - the 28th British National Conference on Databases. TLAD 2011 is held on the 11th July at Manchester University, just before BNCOD, and hopes to be just as successful as its predecessors. The teaching of databases is central to all Computing Science, Software Engineering, Information Systems and Information Technology courses, and this year, the workshop aims to continue the tradition of bringing together both database teachers and researchers, in order to share good learning, teaching and assessment practice and experience, and further the growing community amongst database academics. As well as attracting academics from the UK community, the workshop has also been successful in attracting academics from the wider international community, through serving on the programme committee, and attending and presenting papers. Due to the healthy number of high quality submissions this year, the workshop will present eight peer reviewed papers. Of these, six will be presented as full papers and two as short papers. These papers cover a number of themes, including: the teaching of data mining and data warehousing, databases and the cloud, and novel uses of technology in teaching and assessment. It is expected that these papers will stimulate discussion at the workshop itself and beyond. This year, the focus on providing a forum for discussion is enhanced through a panel discussion on assessment in database modules, with David Nelson (of the University of Sunderland), Al Monger (of Southampton Solent University) and Charles Boisvert (of Sheffield Hallam University) as the expert panel.

    Algorithms in future capital markets: A survey on AI, ML and associated algorithms in capital markets

    This paper reviews Artificial Intelligence (AI), Machine Learning (ML) and associated algorithms in future Capital Markets. New AI algorithms are constantly emerging, with each 'strain' mimicking a new form of human learning, reasoning, knowledge, and decision-making. The current main disrupting forms of learning include Deep Learning, Adversarial Learning, Transfer and Meta Learning. Although these modes of learning have been present in the AI/ML field for more than a decade, they are now more applicable due to the availability of data, computing power and infrastructure. These forms of learning have produced new models (e.g., Long Short-Term Memory, Generative Adversarial Networks) and enabled important applications (e.g., Natural Language Processing, Adversarial Examples, Deep Fakes, etc.). These new models and applications will drive changes in future Capital Markets, so it is important to understand their computational strengths and weaknesses. Since ML algorithms effectively self-program and evolve dynamically, financial institutions and regulators are becoming increasingly concerned with ensuring there remains a modicum of human control, focusing on Algorithmic Interpretability/Explainability, Robustness and Legality. For example, the concern is that, in the future, an ecology of trading algorithms across different institutions may 'conspire' and become unintentionally fraudulent (cf. LIBOR) or subject to subversion through compromised datasets (e.g. Microsoft Tay). New and unique forms of systemic risk can emerge, potentially arising from excessive algorithmic complexity. The contribution of this paper is to review AI, ML and associated algorithms, their computational strengths and weaknesses, and discuss their future impact on the Capital Markets.

    Resilient Infrastructure and Building Security


    Optimising multimodal fusion for biometric identification systems

    Biometric systems are automatic means of imitating the human brain’s ability to identify and verify other humans by their behavioural and physiological characteristics. A system which uses more than one biometric modality at the same time is known as a multimodal system. Multimodal biometric systems consolidate the evidence presented by multiple biometric sources and typically provide better recognition performance compared to systems based on a single biometric modality. This thesis addresses some issues related to the implementation of multimodal biometric identity verification systems. The thesis assesses the feasibility of using commercial off-the-shelf products to construct a deployable multimodal biometric system. It also identifies multimodal biometric fusion as a challenging optimisation problem when one considers the presence of several configurations and settings, in particular the verification thresholds adopted by each biometric device and the decision fusion algorithm implemented for a particular configuration. The thesis proposes a novel approach for the optimisation of multimodal biometric systems based on the use of genetic algorithms for solving some of the problems associated with the different settings. The proposed optimisation method also addresses some of the problems associated with score normalisation. In addition, the thesis presents an analysis of the performance of different fusion rules when characterising the system users as sheep, goats, lambs and wolves. The results presented indicate that the proposed optimisation method can be used to solve the problems associated with threshold settings. This clearly demonstrates a valuable potential strategy that can be used to set a priori thresholds of the different biometric devices before using them. The proposed optimisation architecture also addresses the problem of score normalisation, which supports an effective “plug-and-play” design philosophy for system implementation.
The results also indicate that the optimisation approach can be used to effectively determine the weight settings that are used in many applications for varying the relative importance of the different performance parameters.
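    The core idea — a genetic algorithm searching over per-device verification thresholds — can be sketched as below. The thesis's actual encoding, fusion rule and fitness function are not given in the abstract, so this sketch assumes an AND fusion rule and a fitness equal to the sum of false-reject and false-accept rates:

    ```python
    import random

    def total_error(thresholds, genuine_scores, impostor_scores):
        """Fitness to minimise: FRR + FAR under an assumed AND fusion rule,
        where a claim is accepted only if every modality's score clears
        its own threshold."""
        frr = sum(any(s < t for s, t in zip(scores, thresholds))
                  for scores in genuine_scores) / len(genuine_scores)
        far = sum(all(s >= t for s, t in zip(scores, thresholds))
                  for scores in impostor_scores) / len(impostor_scores)
        return frr + far

    def evolve_thresholds(genuine, impostor, n_modalities=2,
                          pop_size=30, generations=40, seed=0):
        """Minimal GA over per-modality thresholds in [0, 1]:
        truncation selection, uniform crossover, Gaussian mutation.
        All GA parameters here are illustrative choices."""
        rng = random.Random(seed)
        pop = [[rng.random() for _ in range(n_modalities)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=lambda ind: total_error(ind, genuine, impostor))
            parents = pop[:pop_size // 2]          # keep the fitter half
            children = []
            while len(children) < pop_size - len(parents):
                a, b = rng.sample(parents, 2)
                # Uniform crossover: each gene from either parent.
                child = [x if rng.random() < 0.5 else y
                         for x, y in zip(a, b)]
                # Gaussian mutation on one gene, clamped to [0, 1].
                i = rng.randrange(n_modalities)
                child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0, 0.1)))
                children.append(child)
            pop = parents + children
        return min(pop, key=lambda ind: total_error(ind, genuine, impostor))
    ```

    On synthetic two-modality scores with genuine claims clustered high and impostor claims clustered low, the evolved thresholds land between the two clusters and the combined error rate approaches zero, illustrating how thresholds could be set a priori before deployment.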